Source: Anthropic Blog Opus 4 7 Claude Code Best Practices 2026 04 16 (Anthropic blog, Apr 16 2026 — https://claude.com/blog/best-practices-for-using-claude-opus-4-7-with-claude-code); Epoch AI Substack snapshot (ai-research/watchlist-snapshots/epochai-substack-com-2026-05-10.md) (ECI benchmark data, May 2026)

Anthropic’s official guidance for getting the most out of Opus 4.7 — the current strongest generally-available model, and now the default on Max and Team Premium as of Week 16 — when driving it through Claude Code. Covers the xhigh default effort level, adaptive thinking (no fixed thinking budgets), the updated tokenizer, shifts in tool-calling and subagent spawning behavior, and how to structure prompts when migrating from Opus 4.6.

Key Takeaways

  • Default effort is now xhigh, a new level between high and max. Anthropic explicitly calls it “the best setting for most coding and agentic uses”: strong autonomy without runaway token usage. Keep it unless you have a reason to change.
  • No fixed thinking budget. Extended Thinking with a fixed budget is not supported in Opus 4.7. Thinking is adaptive — the model decides at each step whether to think and how much. This breaks any harness that set a budget. Migrate by switching to prompt guidance (“think carefully…” or “prioritize responding quickly…”).
  • Delegate, don’t pair-program. The blog is explicit: “treat the model as a capable engineer you’re delegating to rather than a pair programmer you’re guiding line by line.” Specify intent, constraints, acceptance criteria, and file locations upfront in the first turn. Batch questions. Reduce back-and-forth.
  • Auto mode exists (research preview, Claude Code Max users, Shift+Tab). Use it for trusted execution on long-running tasks when full context is provided upfront.
  • Updated tokenizer + more thinking in late turns = token budget drift. Two changes together mean harnesses tuned for 4.6 may consume materially different tokens on 4.7. Re-tune prompts and usage assumptions when migrating.
  • Fewer tool calls, more reasoning. Opus 4.7 “calls tools less often and reasons more.” If your workflow depends on tool-heavy exploration, explicitly tell the model to use tools.
  • Subagent spawning is more judicious. You now have to ask for parallel subagents, particularly “when fanning out across items or reading multiple files.” Don’t assume the model will spawn them by default.
  • Response length is task-calibrated. Shorter on simple lookups, longer on analysis. State length requirements explicitly if you need a specific shape of answer.
  • Reduced overthinking. Anthropic flags that 4.7 overthinks less than 4.6. If you see short, direct answers where 4.6 would have rambled, that’s expected.
  • ECI score: 156. Epoch AI’s independent Epoch Capabilities Index places Opus 4.7 at 156 — near the frontier but behind GPT-5.4, Gemini 3.1 Pro, and GPT-5.4 Pro. GPT-5.5 Pro leads at 159 (as of May 2026). Strong absolute performance; not the frontier leader.
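
The no-fixed-budget migration in the second bullet can be sketched as a request-shape change. This is a hypothetical sketch: the model IDs, the `thinking` parameter layout, and the SDK-style request dicts are assumptions for illustration, not confirmed 4.7 API details.

```python
# Sketch of the Opus 4.6 -> 4.7 thinking-budget migration.
# All identifiers below are illustrative assumptions, not confirmed API shapes.

def build_request_46(task: str) -> dict:
    # 4.6 era: a fixed extended-thinking budget set in the request itself
    return {
        "model": "claude-opus-4-6",
        "max_tokens": 4096,
        "thinking": {"type": "enabled", "budget_tokens": 8192},
        "messages": [{"role": "user", "content": task}],
    }

def build_request_47(task: str) -> dict:
    # 4.7: no thinking parameter; steer depth with prompt guidance instead
    guidance = ("Think carefully and step-by-step before responding; "
                "this problem is harder than it looks.")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": f"{guidance}\n\n{task}"}],
    }
```

The point of the sketch is the deletion: any harness that sets a budget field must drop it entirely and move depth control into the prompt text.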

Benchmarks

Independent benchmark via Epoch AI Substack (May 2026):

Model            ECI Score    Notes
GPT-5.5 Pro      159          New frontier as of May 2026
GPT-5.4 Pro      above 156    Ranked above Opus 4.7
Gemini 3.1 Pro   above 156    Ranked above Opus 4.7
GPT-5.4          above 156    Ranked above Opus 4.7
Opus 4.7         156          Near-frontier; at best 5th

ECI (Epoch Capabilities Index) is Epoch AI’s composite benchmark combining multiple evaluations. Source: Epoch AI Substack posts “Opus 4.7 scores near frontier on ECI” and “GPT-5.5 Pro achieves a new high score on the ECI” (May 8, 2026 brief).

The 3-point gap (156 vs 159) positions Opus 4.7 as a strong second-tier performer: it exceeds most production models but trails the latest OpenAI frontier release by a measurable margin. Use this as context when choosing between Opus 4.7 and API access to the GPT-5.4 family for agentic use cases.

Effort settings reference

Level    Anthropic’s recommendation
low      Cost/latency-sensitive work; outperforms Opus 4.6 at equivalent levels
medium   Cost/latency-sensitive work; outperforms Opus 4.6 at equivalent levels
high     Balances intelligence and cost for concurrent sessions. Default for API-key / Bedrock / Vertex / Foundry / Team / Enterprise as of Week 15 (v2.1.92+).
xhigh    Best setting for most coding and agentic uses: strong autonomy, bounded token usage. Default for Pro / Max subscriptions on Opus 4.7 (applied on first switch to 4.7).
max      Genuinely hard problems; diminishing returns; prone to overthinking

Default-effort tier split summary (as of Week 17):

  • Pro / Max on Opus 4.7: xhigh by default (set automatically on first switch to 4.7 in W16).
  • Pro / Max on Opus 4.6 / Sonnet 4.6: high by default (raised from medium in Week 17).
  • API-key, Bedrock, Vertex, Foundry, Team, Enterprise: high by default (raised from medium in Week 15). Override with /effort per-session or --effort per-invocation. Re-baseline cost dashboards when migrating plans — all tiers silently shifted upward across W15–W17.
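
The tier split above can be encoded as a lookup, useful for harnesses that log or sanity-check expected defaults before re-baselining cost dashboards. The plan/model keys are shorthand invented here; the effort values come from the note.

```python
# Illustrative encoding of the W15-W17 default-effort tier split.
# Keys are shorthand invented for this sketch; values are from the note.

DEFAULT_EFFORT = {
    ("pro", "opus-4.7"): "xhigh",        # set on first switch to 4.7 (W16)
    ("max", "opus-4.7"): "xhigh",
    ("pro", "opus-4.6"): "high",         # raised from medium in Week 17
    ("max", "sonnet-4.6"): "high",
    ("api-key", "opus-4.7"): "high",     # raised from medium in Week 15
    ("team", "opus-4.7"): "high",
    ("enterprise", "opus-4.7"): "high",
}

def default_effort(plan: str, model: str) -> str:
    """Expected default effort for a plan/model pair; high if unlisted."""
    return DEFAULT_EFFORT.get((plan, model), "high")

print(default_effort("max", "opus-4.7"))   # xhigh
print(default_effort("team", "opus-4.7"))  # high
```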

Opus 4.7 1M context window fix (Week 17, v2.1.117+): Opus 4.7 sessions now correctly compute context against the model’s native 1M token window. Before this fix, /context showed inflated percentages and sessions triggered autocompaction prematurely. If you saw early compaction after the W16 Opus 4.7 launch, upgrade to v2.1.117+ to resolve it.
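
The arithmetic behind the premature-compaction symptom: the same session looks nearly full against a too-small window but comfortable against the native 1M window. The 200k "wrong" window below is an assumption for illustration; the note does not say which window pre-v2.1.117 builds computed against.

```python
# How a wrong window size inflates /context percentages and triggers
# autocompaction early. The 200_000 figure is an illustrative assumption.

def context_pct(tokens_used: int, window: int) -> float:
    """Percentage of the context window consumed."""
    return 100 * tokens_used / window

used = 180_000
print(context_pct(used, 200_000))    # 90.0: near-threshold, compacts early
print(context_pct(used, 1_000_000))  # 18.0: ample headroom on the native window
```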

Prompt patterns

  • More thinking: “Think carefully and step-by-step before responding; this problem is harder than it looks.”
  • Less thinking: “Prioritize responding quickly rather than thinking deeply. When in doubt, respond directly.”
  • Subagent hint: “Use parallel subagents when reading multiple files or fanning out across items.”
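
A minimal sketch of a delegation-style first turn combining these patterns with the upfront intent / constraints / acceptance-criteria structure from the takeaways. The field names and layout are our own, not an Anthropic-specified schema.

```python
# Sketch: assemble a first-turn delegation prompt (intent, constraints,
# acceptance criteria, file locations upfront), with optional steering hints
# appended. Structure is illustrative, not an official prompt schema.

def delegation_prompt(intent, constraints, acceptance, files, hints=()):
    sections = [
        f"Intent: {intent}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Acceptance criteria:\n" + "\n".join(f"- {a}" for a in acceptance),
        "Relevant files:\n" + "\n".join(f"- {f}" for f in files),
    ]
    sections.extend(hints)  # e.g. the subagent or thinking hints listed above
    return "\n\n".join(sections)

print(delegation_prompt(
    intent="Fix the flaky retry logic in the upload client",
    constraints=["No new dependencies", "Keep the public API unchanged"],
    acceptance=["pytest tests/test_upload.py passes",
                "Retries use exponential backoff"],
    files=["src/upload/client.py", "tests/test_upload.py"],
    hints=["Use parallel subagents when reading multiple files."],
))
```

Batching everything into this one turn is the point: it reduces back-and-forth and lets the model run autonomously against explicit success criteria.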

Implementation

  • Tool/Service: Claude Code (CLI / web / desktop)
  • Setup: Update to the latest Claude Code release; accept default model (Opus 4.7); leave effort at xhigh; remove any explicit thinking-budget configuration from custom harnesses.
  • Cost: Usage-based per Claude Platform rates. xhigh intentionally bounded to avoid runaway token use. Observe token usage on first turns and adjust prompt length/scope accordingly.
  • Integration notes: Applies to Claude Code sessions, Routines, Managed Agents, and any workflow running Opus as the executor model. The Advisor Strategy is orthogonal: it affects which model answers which parts of a request, not effort or thinking settings.

Migration checklist (Opus 4.6 → 4.7)

  1. Remove any hardcoded thinking_budget parameters — not supported in 4.7.
  2. Verify default effort is xhigh; don’t silently downgrade to high for budget reasons.
  3. Run your existing prompts with no changes; observe token usage and response length.
  4. For tool-heavy workflows, audit whether tool invocations dropped materially. If yes, add explicit instructions to use tools.
  5. For multi-file reads or fan-out tasks, explicitly instruct the model to use parallel subagents.
  6. Re-audit token budgets in any long-running or routine-based jobs (new tokenizer + more late-turn thinking = different cost profile).
  7. Consider the advisor pattern (advisor_20260301) when running Sonnet/Haiku as the executor and wanting Opus 4.7 intelligence on-demand.

Open Questions

  • Thinking budget alternatives. With fixed thinking budgets removed, is there any way to impose a hard cap on thinking tokens, or is the only lever prompt guidance plus the effort-level dial?
  • Auto mode availability. Research preview only for Claude Code Max users on Shift+Tab. When does it reach Pro and Team tiers? What are its failure modes?
  • Tokenizer delta quantified. The blog notes “updated tokenizer” but doesn’t publish the delta (how many more/fewer tokens for typical inputs). Requires empirical measurement.
  • Interaction with advisor tool. Does advisor_20260301 with Opus 4.7 as advisor inherit xhigh defaults, or is advisor effort separate?

Try It

  1. Baseline re-prompt. Take one of your most-used Claude Code prompts (a refactor, a bug hunt, a feature scaffold). Run it on 4.7 with no changes and compare token usage + output quality against your last 4.6 run.
  2. Audit thinking-budget configurations. Grep your dotfiles, CLAUDE.md files, and any harness code for thinking_budget or extended-thinking config. Remove or replace with prompt-level guidance.
  3. Test auto mode on a long-running task you trust. Shift+Tab, provide complete context upfront, walk away. Measure whether the output is usable without intervention.
  4. Add subagent hints to any CLAUDE.md instructions for multi-file work. Opus 4.7 won’t spawn parallel subagents by default the way 4.6 sometimes did.
  5. Read the linked resources — especially the Opus 4.7 launch announcement and the official prompting best practices — for deeper model-behavior detail not covered in the blog post.
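
The thinking-budget audit in step 2 can be sketched in Python, here run against a scratch directory standing in for a real harness checkout (real targets would be your dotfiles, CLAUDE.md files, and harness code):

```python
# Sketch of the thinking_budget audit: recursively scan a tree for leftover
# fixed-budget configuration. The scratch directory is purely for demonstration.
import pathlib
import tempfile

def find_thinking_budget(root: str) -> list[str]:
    """Return 'path:lineno' hits for lines mentioning thinking_budget."""
    hits = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text()
        except (UnicodeDecodeError, OSError):
            continue  # skip binary/unreadable files
        for lineno, line in enumerate(text.splitlines(), 1):
            if "thinking_budget" in line:
                hits.append(f"{path}:{lineno}")
    return hits

# Demo against a throwaway tree with one offending config line
with tempfile.TemporaryDirectory() as root:
    (pathlib.Path(root) / "agent.py").write_text(
        'config = {"thinking_budget": 8192}\n'
    )
    print(find_thinking_budget(root))  # one hit ending in agent.py:1
```

Each hit is a spot to delete the budget parameter and, if depth still matters there, replace it with prompt-level guidance.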