Source: Obra Superpowers 2026 04 26

Repo: https://github.com/obra/superpowers
Author: Jesse Vincent (obra) of Prime Radiant
Stars: 168,531 (at time of ingest, 2026-04-26)
License: MIT
Original release announcement: blog.fsck.com — Superpowers (Oct 9, 2025)

Superpowers is an opinionated end-to-end software-development methodology that ships as a plug-in skills bundle for coding agents. It is one of two Anthropic-vetted entries on the official Claude plugin marketplace and runs across Claude Code, OpenAI Codex CLI/App, Copilot CLI, and several other coding agents from the same source repo. The skills auto-trigger: the agent recognizes “you’re building something” and silently activates the right skill at the right moment, so the user gets the methodology for free without remembering command names.

Key Takeaways

  • Anthropic-blessed distribution. Superpowers ships through the official claude-plugins-official marketplace, so it installs with a single /plugin install superpowers@claude-plugins-official and auto-updates. That’s a sharper trust signal than community-only frameworks.
  • Cross-agent methodology, single repo. Same skills work in Claude Code, Codex CLI, Codex App, Cursor, OpenCode, Copilot CLI, and Gemini CLI — installation paths differ per agent but the underlying methodology is identical.
  • Skills auto-trigger. The agent checks “is there a Superpowers skill for what I’m about to do?” before any task. The user doesn’t /skill anything — the methodology is mandatory rather than menu-based.
  • The methodology is opinionated. Strict red/green TDD, YAGNI, DRY, brainstorming-before-code, design-in-chunks, plan-then-execute, subagent-per-task review. Not a configuration framework — a closed methodology you opt into wholesale.
  • Subagent-driven development is the execution engine. Each task in a plan gets its own fresh subagent with two-stage review (spec compliance, then code quality). Lets Claude run autonomously for “a couple hours at a time without deviating from the plan.”
  • 15 skills in 4 categories. Testing (1), Debugging (2), Collaboration (10), Meta (2). The Collaboration tier is the bulk — brainstorming, writing-plans, executing-plans, dispatching-parallel-agents, requesting/receiving-code-review, using-git-worktrees, finishing-a-development-branch, subagent-driven-development.
  • Closed contribution model on skills. The README explicitly says “we don’t generally accept contributions of new skills.” Updates must work across all supported coding agents. Stable surface, slow evolution.
  • 168k GitHub stars, 14.8k forks. One of the largest single-author Claude Code-adjacent projects. Active (last push the day of this ingest).
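
The subagent-per-task execution loop described above can be sketched roughly as follows. This is an illustrative reconstruction, not Superpowers’ actual internals: `dispatch_subagent` and the two review functions are hypothetical stand-ins for the real subagent calls, and the review logic here is a toy.

```python
# Hypothetical sketch of subagent-driven development: one fresh subagent
# per plan task, gated by a two-stage review (spec compliance, then code
# quality). All names and shapes here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Review:
    passed: bool
    issues: list[str] = field(default_factory=list)


def dispatch_subagent(task: str) -> str:
    """Stand-in for spawning a fresh subagent; returns its work product."""
    return f"implementation of: {task}"


def review_spec_compliance(work: str, task: str) -> Review:
    """Stage 1: does the work actually match what the plan's task asked for?"""
    return Review(passed=task in work)


def review_code_quality(work: str) -> Review:
    """Stage 2: only consulted after stage 1 passes."""
    return Review(passed=bool(work.strip()))


def run_plan(tasks: list[str]) -> list[str]:
    completed = []
    for task in tasks:
        work = dispatch_subagent(task)  # fresh subagent, no carried-over state
        spec = review_spec_compliance(work, task)
        if not spec.passed:
            raise RuntimeError(f"spec review failed: {spec.issues}")
        quality = review_code_quality(work)
        if not quality.passed:
            raise RuntimeError(f"quality review failed: {quality.issues}")
        completed.append(work)
    return completed
```

The point of the shape, per the README’s claims, is that each subagent starts clean and each task must clear both gates before the next one begins — that is what lets the run stay on-plan for hours.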

The Seven-Step Workflow

  1. brainstorming — Activates before writing code. Refines rough ideas through Socratic questions, explores alternatives, presents design in sections for human validation. Saves a design document.
  2. using-git-worktrees — Activates after design approval. Creates an isolated workspace on a new branch, runs project setup, verifies a clean test baseline.
  3. writing-plans — Activates with approved design. Breaks work into 2–5-minute tasks. Every task has exact file paths, complete code, verification steps. Plans are written for “an enthusiastic junior engineer with poor taste, no judgement, no project context, and an aversion to testing.”
  4. subagent-driven-development (or executing-plans) — Activates with plan. Dispatches a fresh subagent per task with two-stage review (spec compliance, then code quality), or runs in batches with human checkpoints.
  5. test-driven-development — Activates during implementation. Enforces RED-GREEN-REFACTOR. Deletes code written before tests.
  6. requesting-code-review — Activates between tasks. Reviews against plan, reports issues by severity. Critical issues block progress.
  7. finishing-a-development-branch — Activates when tasks complete. Verifies tests, presents merge/PR/keep/discard options, cleans up the worktree.

The unifying claim: each skill activates automatically when the agent recognizes the right context. The user doesn’t enter “plan mode” or call a slash command — they say what they want, and the skills walk the agent through the right sequence.
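
That "recognize the context, pick the skill" step can be caricatured as a matching problem. The skill names below are real, but the cue lists and the matching function are a toy stand-in — in practice the selection happens inside the model from each skill's description, not from keyword lookup.

```python
# Toy illustration of auto-triggering: before acting, check whether the
# upcoming task matches any skill's trigger cues. The cue lists here are
# invented for illustration; real selection is model-driven.
SKILL_TRIGGERS = {
    "brainstorming": ["build", "add", "feature", "idea"],
    "writing-plans": ["design approved", "plan"],
    "test-driven-development": ["implement", "write code"],
    "finishing-a-development-branch": ["tasks complete", "merge"],
}


def matching_skills(task: str) -> list[str]:
    """Return the skills whose trigger cues appear in the task text."""
    task = task.lower()
    return [name for name, cues in SKILL_TRIGGERS.items()
            if any(cue in task for cue in cues)]
```

A vague feature request like “I want to add commenting to the wiki articles” matches brainstorming first, which is exactly the sequencing the workflow above describes.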

How it fits the Claude Code stack

  • vs oh-my-claudecode: Both are large-scale community frameworks for Claude Code. OMC is orchestration-first (six modes, ~19 specialized agents, multi-model bridging, HUD statusline) and is invoked by mode/keyword. Superpowers is methodology-first (seven mandatory phases anchored in TDD) and triggers automatically. Different shapes — pick OMC when you want a multi-agent orchestrator, Superpowers when you want a TDD-strict, single-track development pipeline.
  • vs SuperClaude Framework: SuperClaude is an open configuration framework — 30 /sc:* slash commands, 20 agents, 7 behavioral modes, swap pieces in/out. Superpowers is closed methodology — fixed skills set, mandatory workflow, no à-la-carte. SuperClaude exposes its parts; Superpowers hides them.
  • vs Claude Code Skills Ecosystem: Superpowers is a skills bundle — its 14 entries are first-class Claude skills. The novelty is the methodology layer tying them together with auto-triggering instructions, not the skills primitive itself.
  • vs Plugins and Marketplaces: Superpowers is distributed as a plugin (officially packaged + auto-updating). The plugin layer is the delivery vehicle; the skills + methodology are the payload.
  • vs Agent Teams / Subagents: Subagents are Anthropic’s primitive for isolated workers. Superpowers’ subagent-driven-development uses that primitive as its execution model — fresh subagent per task, two-stage review. It’s a pattern on top of subagents, not a different mechanism.
  • vs Ultraplan / Routines: Ultraplan is cloud plan-mode RP; Routines are durable cloud automation. Superpowers is local CLI methodology. Plan → execute lives in your terminal, not on the web.

Installation

Claude Code (official marketplace)

/plugin install superpowers@claude-plugins-official

Auto-updates by default per Anthropic marketplace policy. This is the canonical install path for Claude Code users.

Claude Code (Superpowers’ own marketplace)

/plugin marketplace add obra/superpowers-marketplace
/plugin install superpowers@superpowers-marketplace

Use this if you want Jesse’s adjacent plugins (the marketplace bundles Superpowers + related work).

Other coding agents

  • OpenAI Codex CLI: /plugins → search “superpowers” → Install.
  • OpenAI Codex App: Plugins sidebar → click + next to Superpowers in the Coding section.
  • Cursor: in Agent chat, /add-plugin superpowers.
  • OpenCode: Fetch and follow instructions from https://raw.githubusercontent.com/obra/superpowers/refs/heads/main/.opencode/INSTALL.md.
  • GitHub Copilot CLI: copilot plugin marketplace add obra/superpowers-marketplace && copilot plugin install superpowers@superpowers-marketplace.
  • Gemini CLI: gemini extensions install https://github.com/obra/superpowers (update with gemini extensions update superpowers).

Philosophy (verbatim from the README)

  • Test-Driven Development — Write tests first, always.
  • Systematic over ad-hoc — Process over guessing.
  • Complexity reduction — Simplicity as primary goal.
  • Evidence over claims — Verify before declaring success.

The “evidence over claims” line is load-bearing: it’s why the workflow has explicit verification gates between phases, why TDD is non-negotiable, and why the agent is told to show its work in chunks rather than hand-wave it.

When to use it

  • You want a single methodology applied uniformly across multiple coding agents (Claude Code at home, Codex/Cursor at work).
  • You’re already TDD-disciplined and want the agent forced into the same discipline.
  • You don’t want to invent your own multi-skill workflow — Superpowers already made one and tested it across thousands of users.
  • You want the methodology to just happen without adding /plan or /think to every prompt.

When not to use it

  • You need à-la-carte commands you can swap in/out — pick SuperClaude Framework instead.
  • You want explicit team orchestration with specialized agents — pick oh-my-claudecode (or native Agent Teams for the Anthropic primitive).
  • You don’t want enforced TDD — Superpowers will delete code written before tests, by design. Disable or uninstall if that’s a non-starter.
  • You’re doing pure exploration / research / spike work — the seven-phase workflow assumes you’re shipping software with a spec.

Tradeoffs

  • Auto-triggering is opaque. The skill activates because the agent decided it should — diagnosing “why did Claude pause to brainstorm?” requires knowing Superpowers’ triggering rules. New users sometimes mistake the brainstorm phase for an LLM stall.
  • Closed contribution model. New skills are not generally accepted; updates must work across every supported agent. Stable but slow.
  • Methodology-on-rails. If you want to skip a phase (e.g., go straight from idea to code without a written plan), you’re fighting the framework. The auto-trigger model means there’s no clean “off switch” short of uninstalling.
  • TDD assumes a testable codebase. For codebases without a test suite or with poor scaffolding, Superpowers’ insistence on RED-GREEN slows the first task significantly while the suite is bootstrapped.

Try It

  1. Install via the official marketplace: /plugin install superpowers@claude-plugins-official.
  2. Start a new feature with a deliberately vague prompt (“I want to add commenting to the wiki articles”) — observe the brainstorming skill activate without being asked.
  3. Walk through the design-in-chunks phase. Approve or push back on each section before moving on. This is the highest-leverage interaction Superpowers offers.
  4. When the plan is generated, audit it against the project’s actual structure — Superpowers writes for “an enthusiastic junior with no project context,” which means it sometimes over-specifies obvious things and under-specifies project conventions.
  5. Let it run subagent-driven-development on the first task. Watch the two-stage review (spec compliance → code quality) and decide whether the gate is too strict, too loose, or just right for your codebase.
  6. Compare against running the same feature through plain Claude Code and through oh-my-claudecode’s team mode. Pick the framework whose shape matches your work shape.

Open Questions

  • How does the official-marketplace version differ from obra/superpowers-marketplace in update cadence? The README implies parity but doesn’t pin a version policy.
  • Does the auto-trigger model interact well with Anthropic’s Hooks (SessionStart, UserPromptSubmit)? Specifically: can a project-level UserPromptSubmit hook interrupt or steer Superpowers’ brainstorm phase?
  • Are there published metrics on Superpowers’ impact on cycle time, defect rate, or token usage? The README claims hours-of-autonomous-execution but doesn’t quantify outcome quality vs ad-hoc agent use.
  • Compatibility with Agent Teams when CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 is set — does Superpowers gracefully use a teammate, or does it always insist on a fresh subagent per task?