Source: Claude Ultrareview Docs 2026 04 19

Product: Claude Code — /ultrareview slash command + claude ultrareview [target] CLI subcommand
Docs: https://code.claude.com/docs/en/ultrareview
Availability: Public research preview as of Week 17 (April 20–24, 2026). First shipped in Week 16 (v2.1.111) with adversarial critique pass and diffstat in launch dialog.

Pre-merge code review that runs a fleet of reviewer agents in a remote cloud sandbox and returns only verified findings. An adversarial critique pass validates each finding against the others before surfacing — reducing false positives compared to a single-pass review. The launch dialog shows a diffstat so you know exactly what’s going up before confirming. Terminal stays free while it runs in the background. Positioned as the deeper counterpart to local /review. Complements the Advisor Strategy and Managed Agents pattern — another case of Anthropic pushing high-cost reasoning off the local session into cloud-managed multi-agent orchestration.

Key Takeaways

  • Multi-agent fleet, not single-pass. /ultrareview spawns many reviewer agents in parallel against the same diff. A single-pass /review is linear; ultrareview is broader by construction.
  • Verified findings only. Every reported bug is independently reproduced inside the sandbox before it surfaces. Filters out style/nit noise; optimized for “real bugs I’d regret not catching.”
  • Remote sandbox, background execution. Runs on Claude Code on the web infrastructure. Terminal stays free; you can close the session and come back.
  • Three invocation modes. (1) /ultrareview — interactive, bundles the local working tree. (2) /ultrareview <PR-number> — interactive, clones directly from the hosted remote (required for large repos; originally github.com only, now also GitLab/Bitbucket/GHE). (3) claude ultrareview [target] — non-interactive CLI subcommand added in v2.1.120, for scripted/CI use without a running session.
  • Duration: 5–10 minutes per run. Long enough that Anthropic built task tracking (/tasks) around it: list running reviews, open detail views, or stop a run mid-flight (partial findings are discarded).
  • Opt-in only. Claude never launches /ultrareview on its own. Always user-initiated and gated behind a confirmation dialog showing scope, remaining free runs, and estimated cost.
  • Pricing: free trial, then extra usage. Pro and Max: 3 free runs, one-time per account (no refresh). Team/Enterprise: no free runs. After free runs, each review bills as extra usage — typically $5–$20 depending on diff size. Extra usage must be enabled on the account/org or launch is blocked.
  • Auth constraints. Requires Claude.ai account login (not API-key-only). Not available on Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry, or Zero Data Retention organizations.
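
The three invocation modes above can be sketched as a small wrapper. Only the command shapes themselves come from the docs; the `ultrareview_cmd` helper name and its mode labels are illustrative conveniences, not part of the product:

```shell
# Illustrative composer for the three documented invocation forms.
# The "local" and "pr" forms are slash commands typed inside an
# interactive session; "cli" is the non-interactive subcommand
# (v2.1.120+) usable from scripts and CI.
ultrareview_cmd() {
  case "$1" in
    local) printf '/ultrareview' ;;                 # bundle working tree
    pr)    printf '/ultrareview %s' "$2" ;;         # PR mode: clone from remote
    cli)   printf 'claude ultrareview %s' "$2" ;;   # scripted / CI use
    *)     return 1 ;;
  esac
}

# ultrareview_cmd pr 1234   →   /ultrareview 1234
```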

When to use it vs /review

| Dimension | /review | /ultrareview |
|---|---|---|
| Where it runs | local session | cloud sandbox |
| Depth | single-pass | multi-agent fleet with verification |
| Duration | seconds to minutes | ~5–10 min |
| Cost | counts toward normal usage | 3 free runs (Pro/Max), then $5–$20 as extra usage |
| Best for | quick feedback while iterating | pre-merge confidence on substantial changes |

Rule of thumb from the docs: /review is the iteration tool; /ultrareview is the pre-merge gate.

Implementation

  • Tool/Service: Claude Code slash command /ultrareview (research preview)
  • Setup:
    1. Upgrade Claude Code to v2.1.86 or later.
    2. If signed in with an API key only, run /login and authenticate with a Claude.ai account.
    3. From any git repo, run /ultrareview for a branch review or /ultrareview <PR-number> for PR mode.
    4. Confirm the scope/cost dialog.
    5. Keep working — findings appear as a session notification when the run completes.
  • Cost: 3 free runs per Pro/Max account (one-time). After that, ~$5–$20 per run billed as extra usage. Team/Enterprise has no free allotment.
  • Integration notes:
    • /tasks lists running/completed ultrareview sessions and lets you stop them mid-run.
    • /extra-usage checks or toggles the extra-usage setting required for paid runs.
    • PR mode requires a supported hosted remote (github.com; GitLab/Bitbucket/GHE support arrived later). If the repo is too large to bundle, Claude Code prompts you to switch to PR mode — push a draft PR first.
    • Not available on Bedrock / Vertex AI / Foundry deployments or Zero Data Retention orgs. These orgs should stay on local /review.
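
For scripted use, a minimal pre-flight sketch can rule out the obviously unsupported deployments, assuming the standard Claude Code provider toggles (`CLAUDE_CODE_USE_BEDROCK`, `CLAUDE_CODE_USE_VERTEX`); Foundry and Zero Data Retention status have no equivalent environment check here, so treat this as a partial guard only:

```shell
# Hedged pre-flight check: ultrareview is unavailable on Bedrock and
# Vertex AI deployments, which Claude Code selects via these env vars.
# (Foundry / Zero Data Retention status cannot be detected this way;
# those orgs should simply not wire ultrareview into scripts.)
ultrareview_supported() {
  [ -z "${CLAUDE_CODE_USE_BEDROCK:-}" ] &&
  [ -z "${CLAUDE_CODE_USE_VERTEX:-}" ]
}

# Usage sketch:
#   if ultrareview_supported; then claude ultrareview "$TARGET"; fi
```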

How it fits the Claude Code stack

  • vs Routines: Routines are scheduled / trigger-based cloud execution of a prompt. Ultrareview is one-shot, user-initiated, specialized for code review. Both live on the Claude Code on the web infrastructure but serve different workflow slots (recurring automation vs pre-merge gate).
  • vs Managed Agents: Managed Agents is the general hosted-agent primitive (sandboxed, checkpointed, OAuth-enabled). Ultrareview is a purpose-built product on top of that class of infrastructure — more opinionated, narrower, with its own pricing model.
  • vs Subagents / Agent Teams: Subagents run inside your local session with your own permissions. Ultrareview’s fleet runs remotely and never touches local tools. The multi-agent pattern is similar; the isolation model is not.
  • vs Cross-Topic Connections: Adds a fourth primitive to the Routines / Managed Agents / Dispatch triad — “cloud-hosted on-demand fleet” is structurally distinct from scheduled cloud tasks (Routines), long-lived cloud agents (Managed Agents), and phone→desktop handoff (Dispatch).
  • vs Cross-Topic Connections: Explicit per-run pricing ($5–$20) is an example of the pay-for-depth pattern: cheap single-pass review stays included; deep verified multi-agent review is metered extra-usage. Fits the same “buy intelligence when you need it” lever as the Advisor tool and Opus xhigh effort.

Open Questions

  • Ultraplan counterpart. The docs reference /ultraplan as “the planning counterpart to ultrareview for upfront design work.” See Ultraplan for the paired cloud planning feature.
  • Finding taxonomy. No public examples in the docs of the reported-finding format. What does a typical verified finding look like (severity, reproduction steps, suggested fix)?
  • Concurrency limits. No documented cap on parallel ultrareview runs per account. What happens if a Max user kicks off 10 ultrareviews simultaneously?
  • Sandbox scope. The sandbox runs tests (implied by “independently reproduced and verified”) — does it need the repo’s test harness to be runnable? What happens for monorepos that require env vars, secrets, or long build steps?
  • Private-package handling. Bundling the working tree uploads files; how are .env, private lockfile registries, or gitignored-but-required artifacts handled?
  • Extra Usage ceiling. $5–$20 per run × team-wide adoption implies meaningful spend. The docs describe no per-seat cap mechanism — does the extra-usage admin view break out ultrareview spend specifically?
  • Relationship to GitHub code review UX. PR-mode findings appear in the Claude Code session. Is there a path to inline-comment on the PR itself, or does it stay in-session?
  • First-run data before sunset. Research preview — pricing/availability “may change.” Worth a refresh pass after general availability.

Try It

  1. Upgrade + authenticate. Get Claude Code to v2.1.86+ and run /login to confirm Claude.ai auth. API-key-only sessions cannot launch ultrareview.
  2. Burn one free run on a real PR. Pick an open PR on a meaningful branch (not a 3-line fix). Run /ultrareview <PR>. Keep working while it runs. When findings arrive, note which would have actually blocked the merge vs which /review would have also caught.
  3. Side-by-side compare. Same diff: run /review locally, then /ultrareview remotely. Log the finding sets. Use the diff to calibrate when ultrareview’s premium is worth it for your team.
  4. Use /tasks as a review queue. For multi-change sessions, fire off ultrareviews for 2–3 branches and come back via /tasks when they’re done.
  5. Wire into the pre-merge checklist. For changes over a threshold (e.g., 500+ lines, critical path, security-sensitive), require /ultrareview as the last step before merge. Fits naturally with Routines if you want the trigger automated — though ultrareview itself is user-initiated only.
  6. Turn on /extra-usage before you need it. The first time a Pro/Max user burns their three free runs, a paid run is blocked until extra usage is enabled. Enable preemptively if ultrareview becomes part of the normal workflow.
  7. Skip entirely on Bedrock / Vertex / Foundry / ZDR. Unsupported environments — lean harder on local /review and the Advisor Strategy for deeper passes.
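
The threshold rule from step 5 can be sketched as a tiny merge-script guard. The 500-line default mirrors the example above; the function name, the `ULTRAREVIEW_THRESHOLD` variable, and the git pipeline in the comment are all illustrative:

```shell
# Decide whether a change is large enough to warrant the pre-merge
# gate. $1 is the total changed-line count, e.g. computed with:
#   git diff --shortstat main...HEAD | awk '{print $4 + $6}'
needs_ultrareview() {
  [ "${1:-0}" -ge "${ULTRAREVIEW_THRESHOLD:-500}" ]
}

# needs_ultrareview 740 && echo "run /ultrareview before merging"
```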