The follow-on to Claude Onboarding. Where the intro course gets you from “never used Claude” to “competent claude.ai user with a small prompt library,” this course turns implicit production knowledge into teachable patterns and builds the bridge from claude.ai power-user to Claude Code builder for team members who are ready. Mixed-role, mixed-depth — every exercise is tagged for Operator track (claude.ai-only, most marketing/support staff) or Builder track (Claude Code-curious — Mel, Amber, marketing leads, dev-adjacent roles). Many exercises run on both tracks.
Why structured this way. The intro course anchored on Anthropic’s free Claude 101. This course can’t — there’s no equivalent intermediate course from Anthropic. Each module still anchors on official Anthropic references where they exist (Agent Skills, MCP, prompt caching, multi-agent patterns, Claude Code CLI, automation primitives, governance), then layers in (1) a dental-marketing overlay so the patterns feel concrete, (2) a recurring Smile Springs Family Dental worked example you can run yourself, and (3) a track-tagged Try It that produces a kept artifact. The course is about Claude — features, patterns, and how the team should use them; the dental scenarios are the teaching frame, not the subject.
Recommended outcome per team member:
- Operator track: a portfolio of 5–10 reusable prompt artifacts in your library, one community skill installed and vetted, one connector intake form submitted (sandbox or real), and one Routine scheduled.
- Builder track: all of the above, plus one Claude Code slash command shipped, one MCP server wired, one cloud `/ultrareview` run, and a personal skills repo seeded with at least one custom skill.
Last refreshed April 27, 2026
First publish. Anchored on Anthropic’s prompting best practices reference, Agent Skills overview, MCP/Connectors documentation, and the four prereq articles shipped 2026-04-27 (Surfaces decision framework, Prompt caching for agencies, Troubleshooting Claude, Shopping for skills & plugins).
Prerequisites
- You’ve completed the intro course (or its equivalent — Anthropic’s Claude 101 plus enough hands-on time to be comfortable with RCTF, Memory, Projects, and Artifacts).
- You have a personal RCTF prompt library with at least 3–5 entries you actually use.
- You have a Smile Springs (or comparable client) Project in claude.ai with custom instructions you’ve iterated on.
- For Builder track: you’re willing to spend 2 hours installing Claude Code, walking through its mental model, and shipping one slash command.
If any of those aren’t true yet, finish the intro course first. The intermediate material assumes the intro is muscle memory.
Course Map
Preface
Read the What Changed (5-min preface) section below before starting Module 1 — it’s the bridge from intro-course thinking to intermediate-course thinking.
Spine (all tracks)
- Module 1 — Prompts as Reusable Artifacts (Read 12 min / Watch 20 min / Practice 45 min) — Promote prompts from chat messages to durable artifacts. 8 techniques (constraint stacking, XML structuring, multi-shot examples, chain-of-thought, structured output, validation-with-retry, interview pattern, self-correction) with the v1→v2→v3 Smile Springs FAQ generator as the worked example.
- Module 2 — Skills at Depth: Shop, Vet, Build (Read 10 min / Watch 20 min / Practice 60 min) — Ecosystem tour (Anthropic vs known orgs vs individuals vs anonymous), the 6-question vetting framework, and lightweight skill authoring. Builds the Smile Springs voice-check skill as a paste-and-use artifact.
- Module 3 — Connecting Claude to Your Tools (Read 12 min / Watch 20 min / Practice 45 min) — MCP vs Connectors vs Skills vs Cowork: when each is the right shape. The five dimensions for choosing what to install (data scope, auth model, where data lives, cost, fit) plus the Free/Pro vs Team vs Enterprise plan-tier picture. Smile Springs Drive + Gmail worked example with the disclosure pattern.
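To make the Module 2 deliverable concrete, here is a minimal sketch of what the voice-check skill file could look like. This is illustrative only: it assumes the single-file markdown format the exercises name (`~/skills/voice-check-smile-springs.md`), borrows the `name`/`description` frontmatter convention from Anthropic’s Agent Skills format, and the brand rules themselves are invented placeholders, not the course’s actual checklist:

```markdown
---
name: voice-check-smile-springs
description: Check marketing copy against the Smile Springs Family Dental brand voice before anything ships.
---

# Smile Springs voice check

When asked to voice-check a draft:

1. Flag every clinical or outcome claim (e.g. "painless", "guaranteed") for human review.
2. Check tone: warm, family-oriented, plain English; no dental jargon without a one-line gloss.
3. Verify the practice name appears exactly as "Smile Springs Family Dental" and a booking call-to-action is present.
4. Return PASS or FAIL plus a numbered fix list, quoting the offending sentence for each item.
```

The point of Module 2 is exactly this shape: the rules live in a file you can version, share, and hand to a colleague, not in a chat message you retype.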
Production Patterns (all tracks)
- Module 4 — Multi-Agent Patterns (Read 15 min / Watch 30 min / Practice 60 min) — When one Claude isn’t enough. Decision tree (single Claude vs subagents vs Agent Teams vs Managed Agents), the three workflow patterns (sequential / parallel / evaluator-optimizer), a live `/ultrareview` walkthrough showing multi-agent in action, and model tiering economics (Opus / Sonnet / Haiku). Worked example: build a 3-subagent Smile Springs blog assistant from scratch — research (Sonnet) → write (Opus) → voice-check (Haiku) — with a side-by-side cost comparison vs single-shot Opus.
- Module 5 — Automation Primitives (Read 15 min / Watch 25 min / Practice 45 min) — Routines vs Scheduled Tasks vs Channels vs Hooks vs Dispatch vs Computer Use. The full primitive map with composition patterns, failure handling, and the weekly Smile Springs competitive-intel sweep as the worked example (Operator: Routines web UI; Builder-basic: hook-triggered push to CRM; Builder-advanced: Cowork Dispatch 30-min teardown).
Bridge & Graduation
- Module 6 — Claude Code: What It Is, When to Ask For It (Spine: Read 12 min / Watch 25 min / Practice 0 — Operators stop here. Builder Track adds ~2 hrs) — What Claude Code is in plain English, when marketers should ask for it, Cowork as the Operator-track alternative (file-and-folder native — most marketing-track folks reach for Cowork, not Code), `/ultraplan` and `/ultrareview` cloud features, Computer Use as the “neither Cowork nor Code is enough” surface, and the “what would I ask Jonathon for?” framing — the four asks every Operator should know. Builder Track ships a `/check-blog-post` slash command end-to-end.
- Module 7 — Governance v2 + Keep Learning (Read 15 min / Watch 15 min / Practice 15 min) — Three real-world dilemmas fully worked (PHI-adjacent connector → patient outreach; community skill with banned phrasings → fork vs skip; auto-publishing Routine that ships a clinical claim). Intermediate edge cases (refusals, context exhaustion, tool-use silent failures, skill conflicts) with the governance angle each one needs. Forward path: CCA-F certification, Anthropic Academy menu, four creator categories, contributing to the WEO wiki. Closing track-specific commitments.
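For Builder-track readers wondering what “ships a slash command” means in practice: Claude Code custom slash commands are plain markdown files under `.claude/commands/`, where the filename becomes the command name and `$ARGUMENTS` expands to whatever you type after the command. A hypothetical sketch of `/check-blog-post` (the checklist is illustrative, not the Module 6 spec):

```markdown
<!-- .claude/commands/check-blog-post.md, invoked as: /check-blog-post drafts/new-post.md -->
Review the blog post draft at $ARGUMENTS before it ships:

1. Run the Smile Springs voice check and list any violations.
2. Flag clinical or outcome claims that need human sign-off.
3. Confirm the title, meta description, and at least one internal link are present.
4. End with PASS or FAIL and a numbered fix list.
```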
Exercises
Hands-on practice. Each exercise produces a kept artifact — no completion-checkbox-only modules.
- Module 1 → personal v3-style reusable prompt artifact (saved to your library)
- Module 2 → Smile Springs voice-check skill (`~/skills/voice-check-smile-springs.md`) or one community skill vetted and decided on
- Module 3 → one tool you use weekly classified into the right surface (MCP / Connector / Skill / Cowork) with a written rationale
- Module 4 → Smile Springs blog assistant spec (3 subagents producing one Smile Springs blog post end-to-end — research → write → voice-check)
- Module 5 → weekly Smile Springs competitive-intel Routine (scheduled and run once)
- Module 6 — Builder Track → Claude Code `/check-blog-post` slash command shipped to your personal skills repo + one `/ultrareview` run
- Module 7 → one real connector / skill / Routine walked through the three-dilemma decision flow with a written note filed with your AI lead
What Changed (5-min preface)
The intro course taught RCTF as a chat pattern. You wrote a prompt, ran it once, iterated in chat, and either copied the output into a deliverable or moved on. The prompt itself was disposable. That’s fine when the work is one-off — but most WEO work isn’t one-off. You write the same kind of prompt every week for the same kind of client. The intermediate move is treating those prompts as artifacts: durable, reusable, version-controlled, and teachable to a colleague.
Three shifts mark the transition from intro to intermediate:
1. From chat to artifact. A prompt you’d run twice belongs in your library; a prompt you’d run weekly belongs in a Project as custom instructions; a prompt you’d run across projects belongs in a skill. Module 1 and Module 2 are the practical version of this ladder.
2. From single Claude to a system of Claudes. A real production pipeline runs multiple specialized agents in sequence — model tiering matters (Opus for orchestration and creative work, Sonnet for analysis and editing, Haiku for high-volume derivative tasks), and the architecture is what makes it work, not any single agent. Module 4 is where the team gets shared vocabulary for that. Once you’ve toured one real WEO multi-agent system end-to-end, “subagent,” “agent team,” and “managed agent” stop being abstract.
3. From “Claude responds when I ask” to “Claude runs while I sleep.” Routines, Scheduled Tasks, Channels, Hooks, and Dispatch are the primitives that turn Claude from an interactive tool into an autonomous workforce member. Module 5 is the field guide. Most marketing roles will use Routines and Channels; only Builder-track folks will write Hooks. That’s fine — knowing what’s possible lets you ask for it.
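To make shift 2 concrete, here is a minimal, runnable sketch of the sequential pattern with model tiering, in the shape of the Module 4 blog assistant. Everything here is an assumption for illustration: `call_model` is a hypothetical stub standing in for a real Anthropic API call, and the tier labels are shorthand, not real model IDs.

```python
# Sequential multi-agent pattern: research → write → voice-check,
# each stage pinned to a model tier. call_model() is a stub that a real
# pipeline would replace with an Anthropic Messages API call.

PIPELINE = [
    ("research",    "sonnet-tier", "Gather facts and competitor angles for: {input}"),
    ("write",       "opus-tier",   "Draft a blog post from these notes:\n{input}"),
    ("voice-check", "haiku-tier",  "Check this draft against brand voice:\n{input}"),
]

def call_model(model: str, prompt: str) -> str:
    # Stub response so the sketch runs offline; shows which tier handled which stage.
    return f"[{model} output for: {prompt[:40]}...]"

def run_pipeline(topic: str) -> dict:
    """Run each stage in sequence, feeding each stage's output into the next prompt."""
    artifact, trace = topic, {}
    for stage, model, template in PIPELINE:
        artifact = call_model(model, template.format(input=artifact))
        trace[stage] = artifact
    return trace

trace = run_pipeline("Invisalign for teens in Springfield")
```

The architecture point from shift 2 lives in the `PIPELINE` table: swapping a stage’s tier (or adding a parallel stage) is a one-line change, which is what makes the system teachable and tunable in a way a single mega-prompt is not.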
If you’re an Operator-track learner: Modules 1, 2, 3, 4, 5, 7 are the core path. Module 6 is a 30-minute spine read so you can ask for Claude Code work intelligently — you don’t need to install or use it.
If you’re a Builder-track learner: same path, plus the Module 6 Builder Track (2 extra hours) and the optional code-tour exercises in Modules 4 and 5. By the end of the course you should have shipped one slash command and run one `/ultrareview` on real WEO code.
How to Use This Course
- Two to three weeks of light effort. The intro course was completable in a week. This one’s twice as long because the practice budgets are bigger and the artifacts compound.
- Do the Try Its. Reading the modules without producing the artifacts is the #1 reason intermediate material doesn’t stick. Each artifact you keep is leverage you’ll reuse for years.
- Smile Springs is the continuing setting. What you build is constant across modules; how you build it differs by track. Same skill, two delivery modes — claude.ai for Operators, Claude Code for Builders. Where this matters, the article calls it out explicitly.
- The 4 prereq articles. Modules 1, 2, and 3 reference four reference articles published 2026-04-27 specifically for this course: Surfaces decision framework, Prompt caching for agencies, Troubleshooting Claude, and Shopping for skills & plugins. They’re the load-bearing references — read them when the modules link to them.
Tracks At a Glance
| | Operator track | Builder track |
|---|---|---|
| Audience | Most marketing / support / web staff | Mel, Amber, marketing leads, dev-adjacent roles |
| Surface | claude.ai (web + Desktop) | claude.ai + Claude Code |
| Module 6 | 30-min spine read only | Spine + 2-hour Builder add-on |
| Optional code tours | — | Mods 4 & 5 |
| Final artifact | A library of reusable prompt artifacts + one skill + one Routine | All of the above + a slash command + a `/ultrareview` run + a Cowork Dispatch task |
You don’t pick a track once and lock in. The default is Operator. Builder is opt-in per module.
Slide Decks
Coming as modules ship — pattern mirrors the intro course decks.
Related Topics in This Wiki
- Claude Onboarding (intro course) — the prerequisite course.
- Claude AI reference — deep-dive articles on Claude Code, API, skills, MCP, plugins, hooks, scheduled tasks, channels.
- Prompt Engineering — Anthropic’s prompting best practices and applied prompt libraries.
- Agents & Agentic Systems — multi-agent patterns and frameworks.
- Cross-Topic Connections — synthesis articles including Prompt caching for agencies and Claude Automation Primitives.
- WEO AI Governance — the council, policies, and the connector intake form referenced in Module 3.