Context7 is Upstash’s up-to-date code-documentation MCP server — it fetches version-specific docs and code examples for 97,640 libraries straight from the source and places them directly in the LLM’s context. Two modes: CLI + Skills (ctx7 library / ctx7 docs via the ctx7 npm package) or MCP (hosted endpoint at https://mcp.context7.com/mcp with two tools: resolve-library-id + query-docs). One-command setup: npx ctx7 setup (OAuth + skill install + per-agent flags --cursor / --claude / --opencode). MIT (MCP server source); the supporting backend / parsing / crawling components are proprietary. 54,877 stars / 2,608 forks at fetch (2026-05-09); created 2025-03-26, last push the day of the fetch. PulseMCP ranks Context7 #2 this week, #3 overall in the entire MCP ecosystem (17.2m total visitors / 1.8m this week). This wiki’s Claude.ai session has Context7 loaded as an MCP integration — the article canonicalizes that dependency the same way the QMD article canonicalized the wiki’s retrieval substrate.
Key Takeaways
Solves training-data staleness. LLMs trained on year-old data hallucinate APIs that don’t exist or generate code against deprecated patterns. Context7 fetches current docs at query time so generated code matches the library version actually in use. The incantation: append use context7 to any prompt.
Two operating modes — pick by ergonomics.
CLI + Skills (no MCP required): a skill guides the agent to call ctx7 library <name> <query> (find library) → ctx7 docs <libraryId> <query> (fetch docs). Lower latency, no MCP-server setup, fewer moving parts.
MCP (hosted): registers Context7 as an MCP server, exposes resolve-library-id + query-docs natively. Works for any MCP client (Cursor, Claude Code, Claude Desktop, Cline, Windsurf, OpenCode, etc.).
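For hosted-MCP mode, a minimal client configuration might look like this — a sketch following the common MCP-client mcpServers convention; the exact header name is an assumption, so prefer npx ctx7 setup, which writes the correct config for you:

```json
{
  "mcpServers": {
    "context7": {
      "url": "https://mcp.context7.com/mcp",
      "headers": {
        "CONTEXT7_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```

The API-key header is optional on the free tier; without it you run at the lower anonymous rate limits.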
The CLI-vs-MCP choice mirrors the Printing Press thesis (CLI tier 1 / API tier 2 / MCP tier 3) — CLI is more token-efficient for the same task. Context7 ships both because the install path matters more than the dogma.
One-command setup, OAuth-authenticated. npx ctx7 setup walks you through: OAuth login → API key generation → mode selection (CLI+Skills vs MCP) → skill install → per-agent integration (--cursor / --claude / --opencode). Removal: npx ctx7 remove. No env-var fiddling, no manual JSON editing.
Library IDs mirror the GitHub org/repo shape. /vercel/next.js, /reactjs/react.dev, /supabase/supabase, /anthropics/claude-code, /tanstack/query, etc. Mention the ID in your prompt to skip library-matching: use library /supabase/supabase for API and docs. Versions match contextually: How do I set up Next.js 14 middleware? use context7.
97,640 libraries indexed. Includes major frameworks (Next.js, React, Vue, Svelte, Tailwind, Expo, Supabase, Vercel AI SDK, TanStack, Better Auth) and infra (Cloudflare, MongoDB, Stripe). Each has a trust score (1-10), benchmark score, snippet count, and update freshness — visible on the rankings page. Libraries at trust 10 on the landing page (2026-05-09): Next.js, React, Supabase, Tailwind CSS, Expo, Vercel AI SDK. Claude Code itself is indexed at /anthropics/claude-code (trust 8.8, 764 snippets, updated 12 hours prior).
Recent architecture change matters: server-side filtering, less context bloat. Per Upstash’s “Context7 Without Context Bloat” blog (Apr 2026 update): the old behavior took two calls (resolve ID → fetch docs); the new behavior pushes search/filter onto Context7’s server and returns only the chunks that actually answer the question. Less LLM-side iteration = lower per-query token cost. Same insight as TinyFish’s 87% reduction and Printing Press’s CLI-vs-MCP measurements: where you do the filtering matters more than which protocol you use.
Claude Code plugin shipped recently. The repo includes .mcp.json, .claude-plugin/plugin.json, a /context7:docs slash command, and three components: a docs-researcher agent (focused doc lookups), a documentation-lookup skill (auto-triggers for library questions), and a docs skill (single-file skill for retrieval). A Claude Code session with the Context7 plugin installed gets automatic doc-injection without writing rules manually. Cross-harness: it also ships native install paths for Cursor and OpenCode, plus 30+ manual MCP clients.
Free tier + paid plans. It works with no API key at low rate limits; a free API key from context7.com/dashboard raises them. Enterprise plans add SSO, private library indexing, etc. For an individual developer this is functionally free.
Open-source disclosure is honest. README explicitly states: “This repository hosts the MCP server’s source code. The supporting components — API backend, parsing engine, and crawling engine — are private and not part of this repository.” So the MCP server is MIT, but the heavy lifting (the actual doc-crawling and parsing infrastructure) is proprietary SaaS. Worth knowing if you’d want to self-host the whole pipeline (you can’t, fully — only the MCP-client-facing piece).
Trust-score system is the editorial layer. Not every library gets equal credibility — community-contributed projects can be flagged via the “Report” button. The 1-10 trust score appears on every library page. Context7’s editorial bar is what separates it from a generic doc scraper. Worth noting that Better Auth shows trust 7.6 on the landing page (lower than the 10 carried by the top frameworks) — the score isn’t reflexively maxed.
Composes with this wiki’s existing retrieval architecture. The wiki has QMD for the user’s own knowledge base (411 docs / 8,497 vectors in local SQLite) and now Context7 for external library docs (97k libraries, hosted). They’re complementary substrates: “what does the library say?” (Context7) + “what do I/my team already know about this?” (QMD). A Claude Code session with both attached can answer questions that need current upstream + historical local context without leaving the agent loop.
PulseMCP rank #2 this week / #3 overall is the load-bearing adoption signal. Among the thousands of MCP servers tracked, Context7 sits in the top 3 by visitor traffic: 17.2m total visitors, 1.8m this week. Released 2025-04-13 — about 13 months in, the dominant position holds. The “use context7” incantation has become a meme in the AI-coding subreddits and YouTube tutorials (Better Stack, Cole Medin, AICodeKing, and several others featured in the README’s Media section).
Specific anti-pattern noted by Context7’s own session instructions: “Do not use for: refactoring, writing scripts from scratch, debugging business logic, code review, or general programming concepts.” The tool is for library-knowledge questions — API syntax, configuration, version migration, library-specific debugging, setup instructions, CLI tool usage. Not for general engineering tasks that don’t need fresh upstream documentation. Following this discipline keeps Context7’s marginal value high; misusing it for generic code questions wastes token budget on doc fetches that don’t help.
Live in this session. This wiki’s Claude.ai session has Context7 attached as mcp__claude_ai_Context7__resolve-library-id and mcp__claude_ai_Context7__query-docs. The Context7 MCP instructions say “Prefer this over web search for library docs.” That’s now a session-default behavior — the article exists to make the dependency visible alongside QMD in the documentation-retrieval substrate.
Library question with a named library → Context7 (current docs, version-aware)
Question about your own wiki / notes → QMD (local, decomposed sub-queries)
Question about today’s news, releases, regulations → Tavily / WebFetch
All three can fire in the same session; they’re substrates, not competitors.
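The three-way routing heuristic above can be sketched as a tiny dispatcher. The `route` helper and its flags are purely illustrative — not part of any Context7, QMD, or Tavily API:

```python
def route(question: str, names_library: bool, about_own_notes: bool,
          about_current_events: bool) -> str:
    """Pick the primary retrieval substrate for a question.

    Mirrors the heuristic: named library -> Context7, own wiki/notes
    -> QMD, today's news -> web search. All three can still fire in
    one session; this just returns the primary."""
    if names_library:
        return "context7"     # current, version-aware library docs
    if about_own_notes:
        return "qmd"          # local, decomposed sub-queries
    if about_current_events:
        return "web_search"   # Tavily / WebFetch
    return "model"            # no retrieval needed

print(route("How do I set up Next.js 15 middleware?", True, False, False))
print(route("What does our wiki say about Hermes?", False, True, False))
```

The point of the sketch: the branches are substrates, not competitors — an agent answering a cross-source question simply takes more than one branch.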
Where this fits in the wiki
Documentation-retrieval substrate. Sits next to QMD as the external-doc end of the same problem QMD solves for personal knowledge. The wiki’s Claude Code workflow now uses both: QMD for “what does my vault say?” and Context7 for “what does the upstream library actually say right now?”
Already cross-referenced from Essential MCP Servers for 2026 — Context7 was on the radar there as a high-value MCP server; this article gives it the standalone treatment.
Pairs with SuperClaude which references Context7 in its workflow patterns.
Sister to last30days-skill (Matt Van Horn) and Printing Press in the external-knowledge ingestion at agent runtime cluster — different domains, same architectural shape (MCP server fronting a curated index).
Composes with Claude Managed Agents — long-running agents need fresh library docs; Context7 is the cheapest reliable source.
Calibrates the CLI-tier-1-API-tier-2-MCP-tier-3 thesis — Context7 ships both CLI+Skills and MCP modes; their own positioning of CLI+Skills as the “no MCP required” lighter-weight option validates the same argument from another vendor.
Implementation
Tool/Service: Context7 (upstash/context7) — hosted MCP server + npm CLI for library documentation retrieval.
Setup (one-command):
Auto-installer: npx ctx7 setup — OAuth login → free API key → mode selection (CLI+Skills vs MCP) → per-agent skill install. Use --cursor, --claude, or --opencode to target a specific agent.
Paid plans for enterprise / private library indexing.
Integration notes:
Two MCP tools (small surface): resolve-library-id (library name → Context7 ID) and query-docs (Context7 ID + question → relevant doc chunks).
Use Context7 ID directly (/vercel/next.js) to skip the resolve step — saves a round-trip.
Specify version inline: Next.js 14 middleware will trigger version-aware retrieval automatically.
The literal incantation use context7 in any prompt triggers the skill / MCP. Without it, behavior depends on whether you’ve configured a project-level rule.
Project rule pattern (Claude Code): add to CLAUDE.md:
Always use Context7 when I need library/API documentation, code generation, setup or configuration steps without me having to explicitly ask.
Don’t use for: refactoring, writing scripts from scratch, debugging business logic, code review, or general programming concepts (per Context7’s own MCP session-start instructions).
Trust score on landing page is the editorial signal — read it before relying on a low-trust library’s docs.
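The two-tool round trip can be illustrated with hypothetical payloads. The argument names here are a sketch inferred from the tool descriptions above, not a verified schema:

```jsonc
// Call 1 — resolve-library-id: library name → Context7 ID
{ "libraryName": "next.js" }

// Call 2 — query-docs: Context7 ID + question → relevant doc chunks
{ "libraryId": "/vercel/next.js", "query": "How do I set up middleware?" }
```

Mentioning /vercel/next.js directly in the prompt skips call 1 entirely — the round-trip saving noted above.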
Open Questions
Self-host story. MCP server is MIT but the API backend / parsing engine / crawling engine are proprietary. Is there any escape valve if Upstash sunsets the hosted endpoint? Per the README, no. The risk is small (Upstash is a real company with a real product line), but for any agency / enterprise deployment the dependency is on Upstash’s hosted infrastructure indefinitely.
Latency / token-budget characterization. Context7’s “without context bloat” rev returns only relevant chunks — but how big are typical chunks, how many round trips per question, and how does that compare to a single naive WebFetch of the official docs site? No published benchmark numbers in the README.
Trust score authoring process. 1-10 score is editorial — who scores libraries, on what cadence, with what criteria? README says “community-contributed… cannot guarantee accuracy.” How much weight should an agent put on a Trust 7.6 vs a Trust 10? Worth knowing before relying on it as a guardrail.
Library coverage gaps. 97,640 libraries indexed sounds like everything, but what about: enterprise SaaS API docs (Salesforce / SAP / Workday), payment SDKs beyond Stripe (Adyen / Braintree), MLOps tools (W&B / Modal / Banana), the genomic stack (Bioconductor)? The README doesn’t enumerate.
Privacy story for prompts. Every Context7 query forwards the user’s query field to Upstash. Is that logged / retained / used for training? Privacy policy should be checked before any client-data-touching workflow uses Context7.
Comparison with Cursor’s built-in @docs. Cursor ships its own doc-crawling layer. How does Context7’s freshness / coverage / trust compare? Useful experiment for any team standardizing across editors.
Custom library indexing. Repo references “Adding Libraries” — what’s the bar for getting a private repo indexed? Self-serve via context7.com/add-library, or gated through Upstash sales? Implication: for WEO Marketly’s internal libraries (OmniPresence / Pulse / Hermes), is Context7 a viable internal-doc index, or do you need their enterprise plan?
Try It
One-shot install + smoke test. npx ctx7 setup --claude, then in a Claude Code session ask: “How do I set up Next.js 15 middleware? use context7”. Watch the model fetch from /vercel/next.js and produce code matching the current API.
Library-ID shortcut. Pick a library you already know the ID for (e.g., /supabase/supabase). Skip the resolve step: “Implement email/password auth. use library /supabase/supabase for API and docs.” — saves a round-trip, faster response.
Version-pinning test. Ask the same question with and without a version anchor: “Next.js 13 router pattern” vs “Next.js 15 router pattern” — verify Context7 returns version-appropriate code.
Check the trust score before relying. Visit context7.com/<library-id> (e.g., context7.com/better-auth/better-auth at trust 7.6) and read the trust-score breakdown. Calibrate how much weight to put on docs from libraries below 9.
Pair with QMD for cross-source synthesis. Ask a question that requires both: “How does our wiki’s documented Hermes integration interact with the Vercel AI SDK’s tool-use API?” — agent should call QMD for the wiki context AND Context7 for current Vercel AI SDK behavior, then synthesize.
Project-rule installation. Add the canonical example rule to your project’s CLAUDE.md:
Always use Context7 when I need library/API documentation, code generation, setup or configuration steps without me having to explicitly ask.
Now you don’t need the use context7 suffix in every prompt.
For the Claude Code plugin route: install the Claude Code plugin from the Context7 repo (.claude-plugin/plugin.json) — it gets you the /context7:docs command and the documentation-lookup skill that auto-triggers on library questions.
Discipline check. Avoid Context7 for refactoring / business-logic debugging / general programming concepts. The marginal value comes from upstream-library-doc questions; using it for generic code questions burns token budget without payoff.
Related
Claude Managed Agents — long-running agents that need fresh library docs at arbitrary points in their lifecycle; Context7 is the cheapest reliable source
TinyFish — same architectural shape (server-side filtering, MCP front, lower client-side token cost) for web infrastructure rather than library docs
Shopping for Skills and Plugins — vetting framework; Context7 passes (named publisher = Upstash, MIT MCP server, active maintenance, real ecosystem traction)
Topics: Claude AI, Prompt engineering, Agents & agentic systems, AI marketing, AI industry research, AI video content