Source: Claude Code CLI Reference + Claude Cowork + ultraplan + ultrareview + Computer Use + Cowork Dispatch + Opus 4.7 Best Practices + Surfaces Decision Framework
Time: Spine — Read 12 min | Watch 25 min | Practice 0 min — total ~37 min (Operator-track stops here). Builder Track adds ~120 min.
Watch First
Pick one 5–10 minute walkthrough before you read the rest. The goal is to see a Claude Code session running before we describe what’s happening — terminal output reads differently when you’ve watched it scroll past once.
- Anthropic’s Claude Code overview — search for the official “Claude Code introduction” or “Claude Code overview” walkthrough on Anthropic’s YouTube or docs site. Pick a video that shows the terminal, a CLAUDE.md file, and at least one file edit happening live (~7 min).
- A community Claude Code session — any recent walkthrough showing someone use Claude Code on a real task. The exact creator matters less than seeing the loop: prompt → Claude reads files → Claude proposes changes → human approves or redirects → Claude edits files → done.
If you’re Operator-track and you only have time for one, watch the Anthropic overview. You’re not learning to install — you’re learning to recognize the surface so you can ask for the right thing in Section 6.
Why It Matters at WEO
WEO is a mixed-role agency. Most of the team uses claude.ai or Cowork for daily work, and that’s the right call — installing Claude Code is overkill for someone whose job is writing copy, planning campaigns, or running client calls. A handful of folks (Mel, Amber, marketing leads, dev-adjacent roles) sit closer to the building work and benefit from picking it up.
But here’s the catch: even if you’ll never type claude in a terminal, knowing what Claude Code does is what lets you ask for the right thing. When a Claude Code-equipped teammate offers to “build a slash command” or “run an /ultraplan,” you want to know whether that’s the right shape for your problem. Operator track learns the surface from the outside. Builder track installs and ships one slash command.
This module covers both. The spine is for everyone — what Claude Code is, when marketers should reach for it, what /ultraplan and /ultrareview do, where Cowork fits, and the four asks an Operator should be able to make. The Builder Track is opt-in.
Section 1 — What Claude Code Is in Plain English
Claude Code is terminal-based Claude that can read files, write files, run shell commands, install tools, navigate codebases, and execute multi-step engineering tasks autonomously. That’s the whole product description. Everything else is consequences of that.
The mental shift from claude.ai chat is the first thing to internalize:
- Claude.ai talks to you. You type a question, Claude types a reply, you copy text back to wherever it needs to live. Claude is a conversation partner with a clipboard.
- Claude Code does the thing. You describe a task. Claude reads your files, edits them, runs the commands, hands you back a finished result. Claude is a teammate with a keyboard.
That sounds like a small distinction. It isn’t. The implications cascade.
File system access is the killer feature
A claude.ai session can see whatever you paste in. A Claude Code session can see — and edit — anything in the working directory it’s started in. If you cd into a folder of blog drafts and start Claude Code there, every draft, every image, every config file is fair game. Claude can read, summarize, edit, rename, and restructure all of them.
This is what makes the multi-file work in Section 2 possible. Asking claude.ai to “audit every patient-facing page in our CMS for banned phrases” is impossible — you can’t paste 47 pages into a chat. Asking Claude Code to do the same thing in a folder of 47 markdown files is one prompt.
Slash commands, CLAUDE.md, and same models
Three more concepts to lock in before Section 2:
- Slash commands are markdown files saved under .claude/commands/ (project-scoped) or ~/.claude/commands/ (user-scoped). Typing /your-command-name loads that file as the system prompt for the turn. Same pattern as the v3 prompt artifact from Module 1 — except invoked with two characters and a Tab. An 80-line prompt becomes /voice-faq. Your library becomes a tool palette. You'll build one in Builder 5; a minimal sketch follows this list.
- CLAUDE.md is system prompt and persistent memory for the directory. Every Claude Code session reads a CLAUDE.md from the working directory if it exists, and that file becomes part of the session's system prompt. Brand voice rules, banned phrases, directory conventions, "always do X before Y" — that all goes in CLAUDE.md. Saved once, loaded by every session in that directory. You stop re-explaining context.
- Same models. Claude Code uses the same Opus 4.7, Sonnet 4.6, and Haiku 4.5 you know from claude.ai. Same intelligence, same limits. What changes is what Claude can do with that intelligence — chat-and-artifacts in claude.ai, vs chat-and-artifacts-and-files-and-shell-and-subagents in Claude Code. Picking the surface isn't a quality decision; it's a workflow decision.
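Here's roughly what that /voice-faq file could look like, using the same frontmatter fields the Builder 5 worked example uses. The contents are invented for illustration; Builder 5 walks through a real command end to end.

```
---
description: Draft a patient-facing FAQ answer in the Smile Springs voice
argument-hint: <question>
---
Answer the question given in the argument for a patient-facing FAQ page.
Keep it to 80-120 words, warm and plainspoken, no clinical jargon,
and avoid every phrase on the banned list in CLAUDE.md.
```

Typing /voice-faq How late are Saturday appointments open? then loads those instructions for the turn and answers in voice.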
Section 2 — When Marketers Should Ask For It
Most marketing work doesn’t need Claude Code. A weekly newsletter draft, a couple of headline variants, a campaign brief — all of that lives comfortably in claude.ai or Cowork, and reaching for the terminal would be cargo-culting.
The work that does benefit from Claude Code has three telltale shapes.
Recurring patterns
You’ve done a task five times this month. Every time, you opened a fresh chat, pasted the same context, ran the same prompt, copied the same kind of output to the same kind of place. The work is muscle-memory by now, but every run still costs you 15 minutes of setup.
That’s the signal. Anything you’ve done five times the same way is a candidate for a slash command. Once it’s a slash command, the 15 minutes of setup becomes 2 seconds of typing. You spend the rest of the time on the work that actually needs your judgment.
The Smile Springs flavor: every Sunday you do a content sweep across 12 dental practices’ competitor blogs. You open 12 tabs, scan each one, take notes, write up a summary. It’s a 90-minute job, every week. It should be a slash command. Once it is — /competitor-sweep — the same 90-minute job becomes “kick off the command, walk away, come back to read the summary.”
Multi-file work
Touch ten files at once instead of copy-paste-copy-paste. Audit a folder of 47 patient FAQ pages for banned phrases. Update the brand voice rules in 23 different blog posts that all referenced the old voice doc. Generate next month’s social calendar by writing 18 markdown files in one go.
In claude.ai, this work is functionally impossible — context windows can’t hold that many files, and pasting them in one at a time makes you the bottleneck. In Claude Code, this work is one prompt: “Read every file in clients/smile-springs/blog/ and flag any that use ‘leverage,’ ‘streamline,’ ‘world-class,’ or ‘state-of-the-art.’ Output a list of file paths and the offending lines.” Claude reads them all, returns the list, you fix what needs fixing.
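In terminal terms, that one prompt could look like the sketch below. The folder path matches the example above; the -p flag (non-interactive "print" mode) is how you'd run it as a one-shot instead of an interactive session, but confirm the exact flags against the CLI reference for your install.

```
cd clients/smile-springs/blog
claude -p "Read every markdown file in this directory and flag any that use \
'leverage', 'streamline', 'world-class', or 'state-of-the-art'. \
Output a list of file paths and the offending lines."
```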
Anything that touches dev infrastructure
Deploys, GitHub commits, build pipelines, package installs, repo restructures — these live in the terminal because the tools live in the terminal. Asking claude.ai to “deploy the staging branch” is meaningless; the chat session has nothing to deploy with. Asking Claude Code to do the same thing in a directory that has a deploy script is one command.
For most marketing-track folks this is the rarest of the three signals — most marketing work doesn’t touch deploys. But when it does (your team owns a static landing page on a custom domain, or a Notion-to-blog pipeline that needs a build step), Claude Code is the right answer.
The tell
The tell is a sentence you say out loud at your desk: “I keep doing this. Why isn’t there a button?”
Answer: build the button. That’s what a slash command is.
Three concrete recurring tasks that should be Claude Code work
Three real flavors so the abstraction grounds out:
- Monthly competitor-content sweep across 12 practices. A slash command (/monthly-competitor-sweep) takes a list of URLs, reads them all, runs a comparison rubric, writes a dated summary. The 4-hour job becomes a 5-minute kick-off.
- Pre-publish banned-phrase audit on every patient-facing page. A slash command (/banned-phrase-audit) takes a folder path, reads every markdown file, checks against the CLAUDE.md voice rules, writes a report with flag locations. The half-day job becomes a 90-second job.
- Fresh blog draft with brand voice plus three SEO inserts in one shot. A slash command (/new-blog-draft) takes a topic and target keyword, loads voice from CLAUDE.md, drafts 800 words with internal links woven in, generates a meta description, saves to drafts/. The whole sequence becomes one command.
You don’t have to write any of these yourself. You just have to recognize the shape so you can ask for them.
Section 3 — Cowork as the Operator-Track Alternative
Most marketing-track folks don’t need Claude Code. They need Cowork.
This section is the load-bearing one for Operator-track learners. Skim the rest of the module if you want the conceptual map, but read this section twice.
What Cowork is, in one paragraph
Cowork is the Claude Desktop app’s coworker mode. It’s file-and-folder native — you point it at a project folder on your machine, it reads everything in that folder, and you give it tasks the same way you’d give them to Claude Code. It supports Connectors (Drive, Gmail, Slack, etc.), Skills (the marketplace you toured in Module 2), and MCP servers (the integrations from Module 3). It ships Dispatch, the async-delegation primitive you covered in Module 5.
In other words: it does everything Claude Code does, except it lives in a desktop app with buttons and panels instead of a terminal with a blinking cursor.
The mental model
The core mental model — and this is the one to hold on to:
- Claude Code is the Builder surface. Terminal, dev tools, programmatic, slash commands written in markdown, hooks configured in JSON, the works. Right answer if you’re shipping code or building automations end to end.
- Cowork is the Operator surface for everything Claude Code would do. Same capabilities (reads files, edits files, runs work, supports Skills/Connectors/MCP, ships Dispatch). Different surface — desktop app, click-and-type, no terminal.
If you’re picking what to learn: Operator → Cowork. Builder → Code. Both surfaces share the same Skills, Connectors, and MCP ecosystem, so investments in any of those pay off across both. The skill you wrote in Module 2 works in Cowork and in Claude Code. The Connector you authorized in Module 3 works in Cowork and in Claude Code. The discipline you’ve built across the course transfers cleanly.
Side-by-side: same task, different surface
Four worked examples, two columns each. Same job, two ways to do it.
| Task | How an Operator does it in Cowork | How a Builder does it in Claude Code |
|---|---|---|
| Audit a folder of patient FAQs against voice rules | Open Cowork, point it at clients/smile-springs/faqs/, install the voice-check skill from Module 2, type “Run the voice-check skill against every file in this folder. Output a flagged-issues report.” | Run claude in clients/smile-springs/faqs/, invoke /voice-check, Claude reads every file in the directory, writes the report to output/voice-check-2026-04-27.md. |
| Generate next month’s social calendar across 6 platforms | Open Cowork, point it at the Smile Springs project folder, type “Draft May’s social calendar — one post per weekday, 6 platforms, mix of educational + Saturday-hours promotion + patient-story posts. Use the brand voice rules. Save each platform as its own file in social/may/.” | Run claude in the project, invoke /social-calendar may, slash command embeds the brand voice and the rubric, writes 22 files in one shot. |
| Pull together a competitor teardown from links + screenshots | Open Cowork, drag in 4 screenshots and 6 links, type “Build a 1-page strategic memo on Westgate Family Dental’s content strategy. Use these screenshots and links. Save to competitive/westgate-2026-04-27.md.” Cowork reads the images, fetches the URLs, writes the memo. | Run claude, paste the URLs and image paths, ask for the same memo. Same output, different door. |
| Run a banned-phrase check on a draft before publishing | Open Cowork, drag the draft in, type “Check this against the banned-phrase list in our brand voice rules. Output any flags.” | Run claude in the drafts folder, invoke /check-blog-post drafts/may-launch-post.md, get the report at output/blog-checks/may-launch-post-check.md. |
Same task, four times. Cowork’s column is the Operator path. Claude Code’s column is the Builder path. There’s no difference in what gets accomplished. There’s a difference in setup cost (Cowork is install-and-go; Claude Code requires terminal comfort and a slash-command library you’ve built up over time) and in composability (Claude Code’s slash commands and hooks let you build deeper automations once you’re past the basics).
Why this matters for the rest of the course
Everything you learned in Modules 1–5 — prompt artifacts, skills, Connectors, multi-agent patterns, automation primitives — all of it works in Cowork without ever touching a terminal. The course teaches the patterns once; both surfaces deliver them.
For Builder-track folks: the choice isn’t “Claude Code instead of Cowork.” Most use both — Cowork for one-shot tasks, Claude Code for recurring patterns and dev-infra-touching work.
Hands-on Cowork recipes (Operator track, optional)
For the Operator track, three external Notion AI Recipes ship with copy-paste prompts you can run in Cowork the same week you finish this module. All three live in a curated UK-authored “AI Recipe Vault” catalog and complement what this module covers conceptually:
- Getting Started with Claude Cowork — the 15-minute Cowork onboarding spine: install → folder permission → first file-org task → spreadsheet creation → Claude in Chrome → connectors / plugins / Projects. The most direct way to feel Cowork's autonomy and permission model. It extends the side-by-side table above with five reusable use-case prompts (Hiring Manager / Group Travel Agent / SEO Auditor / Customer Intelligence Builder / Content Strategist) and the "Claude can build its own plugin" Lead Magnet Launch Kit example.
- Cowork + Apify Scraping Recipe — six numbered Cowork prompts that chain into a prospect-research workflow: Apify connector setup → local-business scrape → vibe prospecting on individuals → LinkedIn job-description scrape → PR-ready AI-skills research report. Concrete worked example of stacking Skills + Connectors that this section described abstractly.
- The LinkedIn Engagement Machine — four-prompt Cowork content chain demonstrating same-thread context retention as a deliberate architectural choice. Pairs naturally with the Module 1 reusable-artifacts discipline.
Caveat: source pages are UK-authored Notion templates (British spelling, AI-Recipe template format), not Anthropic-published docs. Treat as starting frameworks; layer the Module 7 governance scrutiny before running any of them on real client data.
Section 4 — Cloud Power Features: /ultraplan and /ultrareview
Two cloud features matter to Operators because they're the two requests every marketer should know how to make. Both are user-triggered, billed per run, and run as multi-agent jobs in the cloud — they're the send-it-to-a-panel version of single-agent Claude.
/ultraplan — cloud plan-mode roleplay
[[claude-ai/ultraplan|/ultraplan]] is Anthropic’s cloud plan-mode multi-agent feature. You hand it a goal — a refactor, a campaign brief, a launch scope — and it spawns a panel of specialist agents in the cloud who each think through the plan from a different angle (architecture, risk, sequencing, dependencies, alternatives). They return a synthesized strategy.
The shape: instead of one Claude reasoning about your goal, you get five or seven parallel Claudes each focused on one dimension, outputs reconciled. The result is denser and harder to bullshit than a single chat session would produce.
When to reach for it:
- Scoping a refactor or restructure. “Plan the migration from our old patient-portal CMS to the new one.”
- Sizing a campaign. “Plan the Q3 Smile Springs back-to-school campaign across web, email, social, and paid.”
- Planning a launch. “Plan the new Saturday-hours service launch — rollout, messaging, patient comms, team training.”
Cost: pay-as-you-go on top of base Claude usage. Higher per run than single-agent Claude. Worth it for high-stakes scoping where getting the plan wrong costs weeks of rework.
/ultrareview — multi-agent review
[[claude-ai/ultrareview|/ultrareview]] is the multi-agent review feature you saw demoed in Module 4. Same shape as /ultraplan — multiple specialist agents in parallel, synthesized output — but focused on reviewing a thing rather than planning one. A branch, a PR, a draft, a campaign brief.
Reach for it before merging anything load-bearing — a website-restructure PR, a new patient-intake form deployment, a campaign brief going to a client tomorrow. /ultrareview returns multi-perspective critique covering quality, risk, voice, accessibility, edge cases — whatever the agents are scoped to look for. Quality gate before publishing, not after.
Both features can be run from the terminal (Builder-track) or remotely via the cloud — meaning a teammate can fire one off for you and ping you the result. From an Operator’s perspective, the practical difference is “ask the right person to run it.” See Section 6.
When to ask Jonathon for one
Two phrases every marketer on the team should know:
“Could you run /ultraplan on this brief?” — when you’re scoping something multi-week and the cost of a wrong plan is high.
“Run /ultrareview on the [campaign / PR / branch / draft] before we ship.” — when you want a quality gate that catches what you’d miss.
Both are legitimate asks. The team has the budget for them. They produce real value when used at the right moment. They produce expensive noise when used as cargo-cult quality theater. The skill is recognizing which moment is which — and Section 6 walks the three scenarios where they fit.
Section 5 — Computer Use: When Neither Cowork Nor Code Is Enough
Computer Use is the last-resort surface. Claude reads pixels from a real browser or desktop app via screenshots, decides what to do, and operates a virtual cursor and keyboard to click, type, scroll, fill forms — anything a human could do with a real screen.
It’s the right answer when, and only when:
- The tool you need to operate has no API.
- It has no Connector.
- It has no MCP server.
- And you’d otherwise be doing the work by hand.
The cost shape
Every screenshot is tokens. A 10-minute Computer Use task can chew through hundreds of thousands of input tokens because Claude has to see the screen on every action. Real money compared to other surfaces. A weekly Computer Use Routine is a budget line item; a daily one is a budget conversation.
This is why Computer Use is last resort — not because it’s bad, but because it’s expensive, and most of the time you’d reach for it, a cheaper primitive (a Connector, an MCP, an API call) would do the same job for a fraction of the cost. Try the cheaper paths first.
Where it lives
Both surfaces support it:
- Cowork (Operator path) wraps Computer Use cleanly. You Dispatch a Computer Use task from Cowork, it runs in a Cowork-managed sandbox or attached to your desktop, you get the result back via the inbox.
- Claude Code (Builder path) exposes Computer Use programmatically. You can compose it with hooks, slash commands, or as a step in a longer Claude Code workflow.
Operator-track folks don’t need to know the API. They need to know that if a Cowork Dispatch task involves clicking around in a vendor portal, that’s Computer Use under the hood, and that’s why those tasks cost more than other Dispatches.
Two concrete examples
- Pull this month's reviews from a dental review platform with no API. Some review platforms haven't shipped MCPs or Connectors, and their export buttons are buried or rate-limited. Computer Use can log in, navigate to the reviews tab, scroll through the last 30 days of reviews, copy them out, and save them as a CSV. Tedious by hand, a perfect fit for Computer Use, and no other primitive can reach it.
- Submit an insurance prior-auth in a portal that has no integration. A clinic uses an insurance portal that requires a human to log in, fill out a multi-page form, attach a PDF, click submit. Doing this 30 times a month is the kind of work Computer Use was built for. The catch: prior-auth is sensitive data, so this work also needs the governance scrutiny Module 7 covers — it's not just "can we automate it" but "should we, and with what controls."
When you reach for Computer Use, reach deliberately. It’s the heaviest of the surfaces.
Section 6 — The “What Would I Ask Jonathon For?” Framing
This is the capstone for Operator-track learners. The whole module’s payoff is here.
You’re a marketer at WEO Marketly. You don’t write code. You’ve been through this course. You know what Claude Code is, what Cowork does, what /ultraplan and /ultrareview look like, when Computer Use makes sense. The question now is: when something hits your desk this week, what do you ask for?
Three scenarios. Three asks. Operator-friendly framing.
Scenario 1 — Recurring task pain
“I keep doing the same 3-step content sweep every Sunday. I open 12 competitor blogs, take notes, write a Slack summary. It’s 90 minutes every week. There has to be a better way.”
The ask: “Can you build a Routine for this?”
Why a Routine: this is the Module 5 decision tree exactly — clock-shaped trigger (every Sunday), result lands somewhere you’ll read it later (Slack summary). Routines are the right primitive.
Could it also be a slash command? Yes — but if the work runs every Sunday and you want it to just happen without you kicking it off, a Routine is the right shape. A slash command is the right shape when you’re the one initiating the work each time. Routines win when the schedule is the trigger.
The follow-up ask, after the Routine ships: “Can we extend it to push the summary to GoHighLevel as a CRM note on the Smile Springs account record?” That’s a hook on the Routine’s completion — Builder-track work, but you don’t have to know the implementation to know the ask is reasonable.
Scenario 2 — Architecture decision
“We’re scoping a website restructure for a Smile Springs competitor — a full nav redesign, IA changes, the works. I’m not sure where to start. The brief is half-formed.”
The ask: “Could you run /ultraplan on this brief?”
Why /ultraplan: half-formed briefs are exactly when you want a panel of specialist agents thinking through dependencies, risks, sequencing, and alternatives in parallel. A single Claude in chat would give you one perspective — competent, but linear. /ultraplan returns the same kind of plan a Tuesday morning planning meeting would, with all the angles surfaced at once.
Cost is real — /ultraplan runs aren’t free. So the bar for asking is “this scoping decision matters and getting it wrong costs us weeks.” For a website restructure with multiple stakeholder dependencies, that bar is met. For “should we use serif or sans-serif on the homepage,” it isn’t.
Scenario 3 — Quality gate before publishing
“Before this Saturday-hours campaign goes live next Monday, can someone catch the things I’d miss? The voice, the messaging consistency, the patient-comms angle, the things our compliance team would flag.”
The ask: “Run /ultrareview on the campaign-prep branch before we ship.”
Why /ultrareview: this is a quality gate at the moment when catching a problem is cheap and shipping with a problem is expensive. /ultrareview returns multi-perspective critique covering whatever the agents are scoped to look for — voice consistency, messaging risk, accessibility, compliance flags. It’s the panel review before the team would normally do panel review.
The corollary: don’t run /ultrareview after merge. A panel review on something already shipped is just a list of things you can’t unship. The whole point of the primitive is the quality-gate-before timing.
The punchline
Knowing the surface lets you ask precisely. The four asks an Operator should be able to make:
- “Build a slash command for this.” — when you’ve identified a recurring 5+ time per month task that lives in claude.ai or Cowork chat today.
- “Set up a Routine for this.” — when the trigger is a clock and the result lands somewhere you’ll read it later.
- “Run /ultraplan on this brief.” — when you’re scoping something multi-week with high cost-of-getting-it-wrong.
- “Run /ultrareview on the [thing] before we ship.” — when you want a quality gate that catches what you’d miss.
Four asks. Worth memorizing. They cover most of what marketing-team Operators will ever need to request from a Claude Code-equipped teammate.
If you’re an Operator-track learner: this is your module-end. You’ve now got the surface map and the four asks. Go run the Try It [Operator] below — pick a real situation in your work this week, write a one-sentence ask for each of the four scenarios, submit it to your AI lead. That’s the kept artifact.
The rest of this module — the Builder Track, starting below the divider — is for team members ready to install Claude Code and ship a slash command of their own.
Builder Track
If you’re stopping here, the spine ended at Section 6. Everything below is for team members ready to install Claude Code, build a slash command, and ship it to a personal skills repo. About 2 hours, ~120 minutes of practice, ending with a real artifact.
Builder 1 — Install + Auth (~15 min)
Install once per machine, auth once per user. Anthropic publishes the canonical install command at claude.ai/docs/claude-code; the pattern is typically a single shell line that drops the claude binary on your PATH. Verify with claude --version.
Windows builders usually go via WSL2 (install Ubuntu, then run the macOS/Linux install inside the Ubuntu shell). PowerShell-native works for most cases but has rough edges.
Auth flow is one command:
```
claude auth login
```
A browser opens, you sign in with your Anthropic account, approve the pairing, close the browser. Authed.
Sanity check — cd into any folder, run claude, and prompt "What files are in this folder?" Claude reads the directory and lists the files. Press Ctrl+C twice to exit cleanly. You now have a working Claude Code session.
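In shell terms, that first-run check might look like this (the folder path is just an example):

```
cd ~/clients/smile-springs       # any folder on your machine
claude --version                 # confirm the binary is on your PATH
claude                           # start a session in that folder
# at the prompt, type: What files are in this folder?
# press Ctrl+C twice to exit
```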
Builder 2 — The Mental Model Shift (~15 min)
Three concepts to internalize. They’re the difference between using Claude Code as “a smarter chat” (wrong) and as a tool palette (right).
Terminal as conversation, with side effects. Every turn is prompt → reply, like chat. The difference: each turn can include file reads, file edits, and shell commands alongside the text. Claude proposes edits; you approve or reject before they land. The approve step is your safety valve — use it.
CLAUDE.md as system prompt for a directory. Whenever you find yourself typing “remember, this is the Smile Springs project, the voice is X, the banned phrases are Y” for the third session in a row — that paragraph belongs in CLAUDE.md. A starter file for a Smile Springs project:
```
# Smile Springs Family Dental — Project Context
## Brand voice
Warm, plainspoken, trustworthy — never clinical.
Audience: families with kids and adults 35-55, Columbus OH.
Differentiators: Saturday appointments, no-wait booking.
## Banned phrases
streamline, leverage, world-class, game-changer,
state-of-the-art, revolutionize, dazzle,
"in today's fast-paced world", "are you tired of"
## Conventions
- Drafts live in `drafts/`
- Output reports to `output/`
- Always run /check-blog-post before committing a draft
```
Three short sections. Every Claude Code session in the directory now knows the rules.
Slash commands as reusable workflows. Save a markdown file under .claude/commands/ and its name becomes a slash command. Project-scoped commands live in .claude/commands/ inside the project; user-scoped commands live in ~/.claude/commands/ and are available everywhere. Shipping your own slash commands is the move that turns you from a claude.ai user into a tool builder. Builder 5 is the worked example.
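A sketch of where the two scopes live on disk, with file names for illustration:

```
your-project/
  CLAUDE.md                      # read by every session started in this directory
  .claude/
    commands/
      check-blog-post.md         # project-scoped -> /check-blog-post in this project only

~/.claude/
  commands/
    voice-faq.md                 # user-scoped -> available in every project
```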
Builder 3 — First Skill, First Plugin Install (~20 min)
Claude Code has a plugin marketplace system. From inside a session, /plugin marketplace add <repo> registers that repo as a plugin source; from there you browse and install the plugins it offers. Plugins can include skills, MCP server configs, slash commands, or hooks — the fastest way to pick up community-built tooling.
Pick a generic, marketing-adjacent skill from the public marketplace. Run the six-question vetting framework from Module 2 first (publisher trust, last-updated, dependencies, data access, license, fit). If all six clear, install it. Verify with /plugin list. Test it on a small task. Decide whether to keep it.
This is the quick path to a personal Claude Code setup — most builders end up with 5–10 community plugins they trust, plus their own custom skills and commands.
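Inside a session, that flow looks roughly like the sketch below. The repo name is a placeholder, and whether /plugin on its own opens an interactive picker can vary by version, so treat this as a sketch rather than exact syntax:

```
/plugin marketplace add some-org/claude-plugins    # register a marketplace (placeholder repo)
/plugin                                            # browse and install from what it offers
/plugin list                                       # confirm what's installed
```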
Builder 4 — Hands-On with Cloud Features (~30 min)
You read about /ultraplan and /ultrareview in the spine. Now run each one against something real.
Run /ultraplan against a real planning task. Pick something real — a refactor you’ve been putting off, a campaign you’re sizing, a feature spec, a process change. /ultraplan is expensive; the exercise has more value when the output is something you’ll actually use. Run /ultraplan <your-prompt> from a Claude Code session, wait for the panel, read the output. Identify which workflow pattern from Module 4 it implements (most /ultraplan runs are parallel-with-synthesis). Decide what to act on — /ultraplan is a panel; you’re still the one who has to ship.
Run /ultrareview against a real branch or PR. Pick something low-stakes for your first run — your slash-command repo from Builder 6 is a fine target, or a personal project, or a sandbox PR. Run /ultrareview <branch>, read the report, note the categories the agents flagged, decide what to act on.
Internalize: most output is recommendations, not mandates. A /ultrareview flag isn’t an instruction — it’s a panel saying “we noticed this; here’s why; you decide.” Treat it like a Tuesday code review from senior teammates. Some flags you fix. Some flags are right but the tradeoffs make current behavior correct. Some flags are wrong. Judgment required.
Builder 5 — Worked Example: Smile Springs Blog Post Checker Slash Command (~30 min, the centerpiece)
This is the kept-artifact exercise. Build a real slash command, end-to-end.
The command: /check-blog-post. It takes a markdown file path as an argument, runs the voice-check skill from Module 2 against the file, and writes results to output/blog-checks/<filename>-check.md with pass/fail and specific flags.
The slash-command file
Create the directory if it doesn’t exist:
```
mkdir -p .claude/commands
```
Then create .claude/commands/check-blog-post.md:
```
---
description: Run the Smile Springs voice-check on a blog post and write a results report
argument-hint: <path-to-blog-post.md>
---
Read the markdown file at the path provided in the argument.
Run the Smile Springs voice-check skill against it.
Specifically, check the file for:
1. Banned phrases (per CLAUDE.md): streamline, leverage,
world-class, game-changer, state-of-the-art, revolutionize,
dazzle, "in today's fast-paced world", "are you tired of",
"more than ever"
2. Voice deviation from Smile Springs warm/plainspoken/
trustworthy register (per CLAUDE.md)
3. Clinical jargon (gingivitis, prophylaxis, periodontitis)
used outside of clinical-question context
4. Generic AI patterns: rhetorical-question openers, "At Smile
Springs Family Dental, we...", brochure-speak
Write your results to:
output/blog-checks/<basename-of-input-file>-check.md
The output file should follow this structure:
# Voice Check Report — <input-filename>
**Status:** PASS | FAIL
**Date:** <today>
## Banned phrase flags
- [Flag entry, line number, suggested fix]
- (none if clean)
## Voice deviation flags
- [Specific lines that read off-voice + why]
## Clinical-jargon flags
- [If any]
## Generic-pattern flags
- [If any]
## Summary
[1-3 sentence summary — is this draft ready to ship, or
does it need a revision pass?]
If the file PASSES (no flags in any category), still write
the report file with "Status: PASS" and the (none) entries.
```
That's the whole command file. Save it.
Running it
From a Claude Code session in the project root:
```
/check-blog-post drafts/may-launch-post.md
```
Claude loads the slash command as the system prompt for the turn, reads the file at the path, runs through the checks, and writes the report to output/blog-checks/may-launch-post-check.md.
Reading the output
Open the report. For each flag — fix the ones that are right, ignore the ones that are wrong (but note them; patterns of false positives signal the slash command needs tightening), document the judgment-call ones.
The output file is your kept artifact. Reproducible (same input, same output), reviewable (hand it to a teammate), improvable (each false positive sharpens the next version). You now own a tool. Run it on every Smile Springs draft from now on.
Builder 6 — Try It [Builder] (~30 min)
The graduation exercise.
- Ship the slash command from Builder 5. Save it, run it on at least three Smile Springs drafts, tighten the prompt based on false positives or missed flags.
- Commit it to a fresh personal skills repo. Pick a generic name (<your-name>-claude-skills or <your-name>-claude-toolkit). Your repo, not WEO's. A shell sketch of this step follows the list.
- Run /ultrareview on the PR before merging. Read the report, decide what to act on, then merge. The discipline: /ultrareview runs before you ship, not after.
- Add a README documenting install steps for the next builder. What the repo is, how to install (clone, copy commands into .claude/commands/, restart Claude Code), what each command does, project-specific assumptions. The next builder should be able to install and run in 5 minutes.
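A minimal shell sketch of the repo setup, assuming git and a placeholder repo name; adjust the paths to wherever your project and commands actually live:

```
mkdir my-claude-toolkit && cd my-claude-toolkit    # placeholder repo name
git init
mkdir commands
cp ~/clients/smile-springs/.claude/commands/check-blog-post.md commands/
git add . && git commit -m "Add check-blog-post slash command"
# open a PR (or at least review the diff), run /ultrareview on it, then merge and push
```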
You now have a personal Claude Code toolkit. From here, every recurring task worth automating becomes a new entry in the repo. The library compounds.
Common Pitfalls
The five mistakes that bite real intermediate users.
1. Treating Claude Code as a smarter chat. Symptom: you open Claude Code and type the same prompts you’d type in claude.ai — no file reads, no edits, no slash commands. You’re missing the whole point. Fix: every session, ask “what work could only happen here?” If the answer is “nothing” — close the terminal and open claude.ai. Reserve Claude Code for file-system, multi-file, slash-command, or shell-integration work.
2. Skipping CLAUDE.md and re-explaining context every session. Symptom: you start a fresh session and type “the brand voice is warm and plainspoken, the banned phrases are…” The discipline: the third time you re-explain context, that context becomes a CLAUDE.md line. Treat the file as project institutional memory.
3. Building one-off prompts instead of slash commands when the work is recurring. You’ve done a content sweep five times this month, fresh chat each time. That’s a slash command waiting to be born. Fix: every Friday, audit the week. Anything you did 3+ times the same way is a slash-command candidate. By month three you have a tool palette of 8–12 commands.
4. Running /ultrareview after merge instead of before. A review on already-shipped work is a list of regrets. Build /ultrareview before merge into your team’s PR template or campaign-prep workflow. Make it the default, not an exception.
5. Forgetting Cowork exists and reaching for Claude Code when Cowork would solve it in 1/10th the setup time. Symptom: 45 minutes setting up a Claude Code workflow that a Cowork user would have completed by dragging a folder in and typing one prompt. Fix: every time you’re about to spin up Claude Code, ask “is this multi-session, reusable-command, or dev-infra work? Or is it a one-shot Cowork would solve faster?” Operator surface for Operator work, even if you’re a Builder.
Key Takeaways
- Claude Code is terminal-based Claude that can read files, write files, run shell commands, install tools, and execute multi-step engineering tasks autonomously. The mental shift from claude.ai is “Claude.ai talks to you; Claude Code does the thing.”
- File system access is the killer feature. Slash commands are reusable workflows. CLAUDE.md is system prompt and persistent memory for a directory. Same models as claude.ai (Opus 4.7, Sonnet 4.6, Haiku 4.5) — different surface.
- Marketers should reach for Claude Code when work has one of three shapes: recurring patterns (5+ runs/month → slash command), multi-file work (10+ files at once), or anything touching dev infrastructure (deploys, GitHub, builds).
- Cowork is the Operator-track alternative to Claude Code. File-and-folder native, runs on the desktop, supports Connectors / Skills / MCP, ships Dispatch. Mental model: Claude Code is the Builder surface; Cowork is the Operator surface for everything Claude Code would do. Operator → Cowork; Builder → Code.
- /ultraplan is multi-agent cloud plan-mode — use for scoping refactors, sizing campaigns, planning launches with high cost-of-getting-it-wrong.
- /ultrareview is multi-agent cloud review — use as a quality gate before merging or shipping anything load-bearing.
- Computer Use is last resort. Reserved for tools with no API, no Connector, no MCP. Every screenshot is tokens; reach for it deliberately.
- The four asks every Operator should be able to make: “Build a slash command for this.” / “Set up a Routine for this.” / “Run /ultraplan on this brief.” / “Run /ultrareview on the [thing] before we ship.”
- Builder Track ships an actual artifact: install Claude Code, internalize the mental model, install a community plugin, run /ultraplan and /ultrareview against real work, build the /check-blog-post slash command, commit it to a personal skills repo, document the install for the next builder.
- Common pitfalls: treating Claude Code as a smarter chat, skipping CLAUDE.md, building one-off prompts when slash commands are warranted, running /ultrareview after merge, and reaching for Claude Code when Cowork would solve it faster.
Related
- Course index
- Module 1 — Prompts as Reusable Artifacts
- Module 2 — Skills at Depth
- Module 3 — Connecting Claude to Your Tools
- Module 4 — Multi-Agent Patterns
- Module 5 — Automation Primitives
- Claude Code CLI Reference
- Claude Cowork — the Operator surface
- Cowork Dispatch — autonomous async work
- ultraplan — cloud plan-mode multi-agent
- ultrareview — cloud multi-agent review
- Computer Use — last-resort pixel automation
- Opus 4.7 Best Practices
- Surfaces Decision Framework
- Claude Automation Primitives
- Claude Onboarding (intro course)
Try It
Three exercises, track-tagged. The first is the Operator capstone. The second is the Builder graduation. The third is for both tracks — the cross-track diagnostic.
Try It [Operator]
The four-asks practice. Time-budget: 20 minutes.
- Pick four real situations from this week's work — one recurring task (slash command candidate), one clock-shaped automation (Routine candidate), one multi-week scoping decision (/ultraplan candidate), one load-bearing thing about to ship (/ultrareview candidate).
- For each, write a one-sentence ask in the format from Section 6.
- Submit the four asks to your AI lead. The practice is the writing of the asks, not the execution.
The kept artifact: four sentences, four shapes, four real situations. You now know how to ask precisely.
Try It [Builder]
Ship the slash command from Builder 5. Run /ultrareview on the PR before merging your slash-command repo. Commit to a fresh personal skills repo. Document the install steps for the next builder. Time-budget: 60 minutes (after Builders 1–5 are complete).
- Save .claude/commands/check-blog-post.md from Builder 5.
- Run it on at least three Smile Springs drafts (or your own client equivalents). Tighten the prompt based on false positives or missed flags.
- Create a personal skills repo. Pick a generic name (<your-name>-claude-skills or <your-name>-claude-toolkit — your repo, not WEO's).
- Commit check-blog-post.md to the repo.
- Run /ultrareview on the commit or PR before merging.
- Read the report. Act on what's right. Note what's wrong as a learning input.
- Add a README documenting install steps, command list, and project-specific assumptions.
- Push.
The kept artifact: a personal skills repo with at least one shipped slash command, a clean /ultrareview pre-merge run, and docs that the next builder on the team can follow.
Try It [Both]
The cross-track diagnostic. Time-budget: 15 minutes.
- Pick one task you did manually 3+ times this month. Specific.
- Walk the decision: one-shot or recurring? (Recurring leans slash command.) Touches dev infra? (Yes leans Claude Code.) Needs to compose with hooks or other primitives? (Yes leans Claude Code.) High setup-friction or high execution-friction? (Setup-friction favors Cowork; execution-friction favors slash commands.)
- Write the decision in 3–5 sentences. Save it somewhere durable.
- Bring it to your next 1:1. If “Cowork task,” ship it in Cowork this week. If “Claude Code slash command” and you’re Builder-track, add it to your skills repo. If Operator-track, that’s the right ask for your AI lead.
The kept artifact: a documented decision about one real task. Picking the right surface before doing the work is the most transferable skill in this module.
Done with the spine? Operators stop here. Builders, finish Builder 6 if you haven’t already.
Move on to Module 7 — Governance v2 + Keep Learning when you’re ready.