See where your engineering team ranks on AI maturity.
The first public leaderboard for AI context management. 25 teams scored across 8 research-backed dimensions. Find out if you're in the top 20% — or in the bottom half pretending you're not.
Top 10 Trend
Score evolution over the past 12 weeks
Leaderboard
25 teams ranked by composite score
The 8 Dimensions
Each scored 0–100 against research-backed criteria. Weights vary by team size; a scoring sketch follows the list.
CLAUDE.md, .cursorrules, AGENTS.md, copilot-instructions.md — existence, quality, freshness
Cross-session memory, hierarchical context (global → project → subdirectory), session management
ARCHITECTURE.md, ADRs, API specs (OpenAPI), README depth — docs AI can consume
Shared rules, org-level instructions, team prompts, shared skills/subagents, CLAUDE.local.md pattern
Token budget management, context ordering, compression, subagent delegation
Naming consistency, type safety, directory predictability, monorepo structure, co-located tests
MCP servers, hooks, IDE-specific configs, plugins, skill/subagent definitions
AI code quality tracking, acceptance rates, code churn metrics, AI vs. human code quality comparison
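How do eight 0–100 dimension scores become one composite? A minimal TypeScript sketch, assuming a plain weighted average; the dimension keys and weights below are illustrative placeholders, not the published rubric, which also varies weights by team size.

```typescript
// A weighted average of the eight dimension scores (each 0-100).
// Keys and weights are illustrative; the real rubric varies weights by team size.
type DimensionScores = Record<string, number>;

const WEIGHTS: DimensionScores = {
  contextFiles: 0.2,        // CLAUDE.md, .cursorrules, AGENTS.md, ...
  memory: 0.15,             // cross-session memory, hierarchical context
  docs: 0.15,               // ARCHITECTURE.md, ADRs, API specs
  teamSharing: 0.1,         // shared rules, org-level instructions
  contextEngineering: 0.1,  // token budgets, ordering, compression
  codebaseStructure: 0.1,   // naming, types, directory predictability
  tooling: 0.1,             // MCP servers, hooks, IDE configs
  measurement: 0.1,         // acceptance rates, churn, quality tracking
};

function compositeScore(scores: DimensionScores): number {
  let total = 0;
  for (const [dim, weight] of Object.entries(WEIGHTS)) {
    total += weight * (scores[dim] ?? 0); // an unscored dimension counts as 0
  }
  return Math.round(total); // stays on the 0-100 scale since weights sum to 1
}

// Example: strong on context files, weak on measurement.
console.log(
  compositeScore({
    contextFiles: 90, memory: 60, docs: 70, teamSharing: 40,
    contextEngineering: 55, codebaseStructure: 80, tooling: 65, measurement: 20,
  }),
); // 64
```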
How it works
Four ways to assess. All produce the same score against the same rubric version.
Scan a public repo for context signals (see the sketch after this list)
16 questions, no signup required
npx context-index in any repo
Drop CLAUDE.md / .cursorrules / AGENTS.md
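To make the repo scan concrete: a minimal TypeScript sketch of a presence-and-freshness check over the kinds of context files named above. The signal list and the output format are assumptions for illustration; the actual npx context-index scanner defines its own signals and scoring via the rubric.

```typescript
// Toy repo scan: report which context-signal files exist and how fresh they are.
// File list mirrors the dimensions above; not the real rubric.
import { existsSync, statSync } from "node:fs";
import { join } from "node:path";

const CONTEXT_SIGNALS = [
  "CLAUDE.md",
  ".cursorrules",
  "AGENTS.md",
  ".github/copilot-instructions.md",
  "ARCHITECTURE.md",
];

function scanRepo(root: string): void {
  for (const file of CONTEXT_SIGNALS) {
    const path = join(root, file);
    if (!existsSync(path)) {
      console.log(`missing  ${file}`);
      continue;
    }
    // Freshness matters as well as existence, so report days since last edit.
    const ageDays = Math.floor(
      (Date.now() - statSync(path).mtimeMs) / 86_400_000,
    );
    console.log(`found    ${file} (modified ${ageDays}d ago)`);
  }
}

scanRepo(process.argv[2] ?? "."); // scan the given directory, or the current one
```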