Source: Claude Automation Primitives — decision tree + Routines + Scheduled Tasks + Channels + Hooks + Cowork Dispatch + Computer Use + Claude Cowork

Time: Read 15 min | Watch 25 min | Practice 45 min — total ~85 min

Watch First

Two short videos before you read the rest. Both anchor concepts you’ll see referenced throughout the module.

  • Cowork Dispatch — Send Claude Off to Work — search for the most recent walkthrough; the surface UI changes faster than tutorials get re-recorded. Look for a video that shows the mobile-trigger → desktop-execution flow end to end (~10 min).
  • Routines on Claude Code — Cron in the Cloud — pick a walkthrough of setting up a recurring Routine via the Claude Code web UI. Focus on the bits where they show the schedule, the trigger, and the run history (~10 min).

If you only have time for one, watch the Routines walkthrough — it’s the surface most of you will use first.

Why It Matters at WEO

Modules 1 through 4 of this course taught you to write better prompts, package them as skills, connect Claude to your tools, and reason out loud. All of those modes share one thing: Claude responds when you ask. You type, Claude answers. You’re the trigger.

This module changes the shape. Automation primitives are the surfaces where Claude does work without you typing first. A Routine sweeps competitor sites every Sunday night while you sleep. A Channel triages an incoming patient question on the practice’s chat before you’ve read it. A Hook fires the voice-check on a blog draft the moment you save it. A Dispatch task does a 30-minute teardown while you’re in a meeting and pings you when it’s done.

The mental shift: Claude stops being something you call and starts being something that runs. The agency math changes too. A staffer who can write a sharp prompt is valuable. A staffer who can describe a job precisely enough that Claude runs it on a schedule without supervision is force-multiplied — that’s one human effectively running ten parallel weekly workflows.

The catch: every primitive has a different shape, a different cost, and a different failure mode. Picking the wrong one is expensive — not in dollars, but in architectural rework. A “this should run every Monday” task built as a Dispatch ties up your desktop every Monday morning. The same task built as a Routine runs in the cloud and you read the result over coffee. Same outcome, very different ergonomics.

This module’s job is to teach you the six shapes, when to reach for each, how to compose them, what breaks, and how to ship one yourself by the end of the practice block.

Section 1 — The Six Primitives

Six different surfaces where Claude can do work without a typed prompt from you. Each one solves a different shape of problem.

Routines — cloud-hosted recurring agentic work

A Routine is a recurring task running on Anthropic’s cloud infrastructure. You describe the job in natural language, set a schedule (cron — hourly, nightly, weekly, custom), and Claude runs it on time. Each run is a fresh agentic session with access to whatever Connectors you’ve authorized for it.

  • What it is. Cron-shaped automation. “Every Sunday at 11pm Eastern, do this thing.” Each run is stateless — no memory carries between runs, but Claude can read and write state via Connectors (Drive files, Slack messages, GitHub issues).
  • Where it runs. Anthropic cloud. Your laptop can be closed. The run still happens.
  • Trigger modes. Cron schedule, programmatic API call (each Routine has an endpoint and bearer token), or GitHub webhook (one session per PR, with the PR feed driving the session).
  • What it costs. Usage caps per plan — Pro ~5/day, Max ~15/day, Team/Enterprise ~25/day. Beyond that, paid extra usage. The cost shape is predictable and bounded.
  • Audience. This is the flagship Operator-track primitive. Setup is no-code: claude.ai web UI on a Team or Pro plan exposes a Routines pane.
  • Concrete dental example. Every Sunday at 11pm Eastern, sweep the websites of your three nearest Columbus dental competitors for new content (blog posts, service pages, promotions). Summarize what’s new vs last week. Post the summary to your team’s Slack competitive-intel channel. By Monday at 8am, your team has fresh competitive context with zero manual work.

When you reach for Routines: the work has a clock-shaped trigger (“every X”) and the result lands somewhere you’ll read it later (a Slack message, a Drive doc, an email, a CRM note).

Scheduled Tasks — /loop + cron + Monitor for in-session timers

Scheduled Tasks live inside a Claude Code session. They’re the in-session cousin of Routines. You’re sitting in a session, you have a long-running task you want to check on at intervals (a build, a deploy, a Routine you started, a remote process), and you want Claude to keep checking until something happens — without you babysitting.

  • What it is. A polling timer. /loop 5m <command> re-runs that command every 5 minutes. Or you let Claude self-pace, choosing intervals based on what it’s watching.
  • Where it runs. Inside your active Claude Code session. The session has to stay alive (or use the cloud-hosted background-task variant).
  • Trigger modes. Time-based intervals from inside the session. The Monitor tool streams events from a background process so each emitted line wakes Claude up.
  • What it costs. Token usage on each loop iteration. Cheap when intervals are sane (5–30 min). Expensive when you set 60-second intervals on a long task — every wake-up burns context cache.
  • Audience. Builder track only — you need a Claude Code session to use it.
  • Concrete dental example. You kick off a long batch SEO audit (300 pages, takes 25 minutes). Instead of staring at the terminal, you say “poll the audit job every 4 minutes; ping me with a summary when it finishes.” Claude wakes itself every 4 minutes, checks status, sleeps. When the audit completes, Claude summarizes the findings and stops the loop.

When you reach for Scheduled Tasks: you’re already in a session, the work doesn’t need to outlive the session, and you want Claude watching something rather than you watching it.

The difference from Routines: Routines run in the cloud on a calendar schedule, no session needed. Scheduled Tasks live inside a session and stop when the session ends.
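Conceptually, /loop is a capped polling timer. A sketch of the shape (a mental model only, not Claude Code's implementation; check and sleep are injectable stand-ins so the loop is testable):

```python
import time

def poll_until_done(check, interval_s=240, max_checks=20, sleep=time.sleep):
    """Re-run `check` every interval until it reports done, or give up.

    `check` stands in for the status probe (e.g. querying the audit job);
    `sleep` is injectable so the loop can be exercised without waiting.
    """
    for _ in range(max_checks):
        if check() == "done":
            return "finished"   # summarize the result and stop the loop
        sleep(interval_s)       # go quiet until the next wake-up
    return "gave up"            # always cap the loop -- never poll forever
```

The cap matters: a loop with no max_checks plus a 60-second interval is exactly the runaway token burn the cost bullet above warns about.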

Channels — push events from chat platforms into a Claude session

Channels invert the usual flow. Normally you type a message to Claude. With Channels, an event from Telegram, Discord, iMessage, or fakechat (a development surface for testing) gets pushed into a Claude session — Claude reads it as if you’d typed it, processes it, and can reply through the same channel.

  • What it is. Inbound triggering. The chat platform is the surface; Claude is the brain behind it.
  • Where it runs. A Claude session is bound to a channel. Inbound messages route to that session.
  • Trigger modes. Any message arriving on the configured channel, or specific filters (mentions, keywords, allowlisted senders).
  • What it costs. Token usage per message. Watch out for chat-heavy days — every inbound message that triggers Claude is a token spend.
  • Audience. Mostly Builder track for setup; Operator-friendly once a Builder has wired the channel. Triage workflows for marketing teams sit here.
  • Concrete dental example. A patient texts the practice’s Telegram contact line at 9pm: “Is the Saturday cleaning slot still available?” Claude reads the message, checks the practice’s booking link, replies with the answer (“Yes, 11am Saturday is open — book here”) and tags the message in the team’s CRM. The team reads the tag the next morning. The patient got a reply in 30 seconds; the team didn’t lose evening hours.

When you reach for Channels: an external chat surface is the source of work, and you want Claude triaging or responding before a human picks it up.

Hooks — deterministic event triggers from Claude Code lifecycle events

Hooks are small scripts that fire at specific points in Claude Code’s lifecycle: before a tool is used (PreToolUse), after a tool is used (PostToolUse), when a file changes (FileChanged), when the user submits a prompt (UserPromptSubmit), and others. They’re configured in settings.json and run deterministically — every time the event fires, the hook runs.

  • What it is. Event-driven automation tied to Claude Code’s internal lifecycle. Different from Channels (which is external chat events) and different from Routines (which is calendar events).
  • Where it runs. Locally on the machine running Claude Code. Hooks are shell commands or scripts.
  • Trigger modes. Lifecycle events from Claude Code itself.
  • What it costs. Whatever the hook script costs. Hooks themselves don’t burn tokens — they’re deterministic shell. But hooks that spawn another Claude session (a subagent for voice-check) do.
  • Audience. Builder track only. Configuring hooks requires touching settings.json.
  • Concrete dental example. Every time someone on the team saves a blog draft into the drafts/ folder, a PostToolUse hook fires the voice-check skill on the new draft. The skill reads the draft, runs it against the Smile Springs banned-phrase list and voice rules, and writes a critique file next to the draft. The author reads the critique before they hit publish. No one has to remember to run the check — it just runs.

When you reach for Hooks: the trigger is something happening inside Claude Code itself (a tool call, a file save, a session start), and you want a deterministic response every time, not a probabilistic LLM judgment call.
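The voice-check example would be wired with a hook entry in settings.json. A sketch only: the matcher syntax and the ${tool.file_path} variable are illustrative assumptions, mirroring the PostRoutineRun sketch in Section 4. Check your Claude Code version's hook schema before copying:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write(drafts/**)",
        "command": "scripts/run-voice-check.sh",
        "args": ["${tool.file_path}"]
      }
    ]
  }
}
```

The shape is the point: a lifecycle event, a filter, and a deterministic command. The LLM judgment lives in the script the command launches, not in the hook itself.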

Dispatch (Cowork) — send Claude off to work autonomously, come back later

Dispatch is the autonomous-mode flagship of Claude Cowork. You give Claude a task, give it a workspace (a folder, a set of files, a project), and tell it to go work. Claude takes the task to its own desktop session, works on it for as long as it needs (minutes to hours), and pings you back through the Cowork inbox when it’s done. You review the artifact, accept it, redirect if needed.

  • What it is. Async delegation. The defining mode is “kick off from your phone, come back to the result on your desktop later.”
  • Where it runs. Either on your own desktop (if you’re using Cowork’s desktop-attached mode and the task needs your local apps) or on a remote workspace Cowork provisions. Most current Dispatch tasks run on Cowork’s managed compute, freeing your machine.
  • Trigger modes. Manual kick-off from the Cowork app or mobile. Some teams compose Dispatch with Routines — a scheduled Routine that initiates a Dispatch when conditions are right.
  • What it costs. Bundled into the Cowork plan (20 Dispatches/month on Pro, 200 on Max). No per-session metering — but desktop-attached Dispatch occupies your screen during the run.
  • Audience. Operator track’s flagship autonomous primitive. Designed for non-developers who want async multi-step work without learning Claude Code.
  • Concrete dental example. Sunday afternoon, you’re heading into a meeting. You open Cowork on your phone and Dispatch a task: “Do a 30-minute deep teardown of the Westgate Family Dental site. Walk every page. Build a 500-word strategic memo covering: voice and tone vs Smile Springs, three biggest content gaps we could exploit, two things they do better than us. Save the memo to /clients/smile-springs/competitive/2026-04-26-westgate-teardown.md.” You go to your meeting. Cowork pings you 35 minutes later — memo’s ready in the inbox. You read it on the way home.

When you reach for Dispatch: the work is multi-step, takes meaningful time (15+ minutes), can run unattended, and you want a finished artifact rather than a chat exchange.

The difference from Routines: Routines are scheduled and recurring; Dispatch is one-shot and triggered by a human kick-off. The difference from Scheduled Tasks: Scheduled Tasks watch something and report; Dispatch produces a deliverable.

Computer Use — Claude operates a real browser or desktop app via virtual cursor

Computer Use is the last-resort primitive. Claude reads pixels from a screen, decides what to do, and operates a virtual cursor and keyboard to click, type, scroll, fill forms — anything a human could do with a real browser or desktop app. It’s powerful and slow and expensive, in that order.

  • What it is. Pixel-and-cursor automation for tools that have no API, no Connector, and no MCP server.
  • Where it runs. Either on a Cowork-managed sandbox (cleanest model) or on your local desktop via the Computer Use API and a controlled environment. Cowork wraps the messy parts.
  • Trigger modes. Human kick-off, sometimes inside a Dispatch task. Routines that need Computer Use compose with Cowork.
  • What it costs. Every screenshot is tokens. A 10-minute Computer Use task can chew through hundreds of thousands of input tokens because Claude has to see the screen on every action. Real money compared to other primitives.
  • Audience. Builder territory mostly, but Operator-track folks who use Cowork’s UI can fire off simple Computer Use tasks via Dispatch without ever touching the API.
  • Concrete dental example. A vendor portal — an old practice-management tool you use to pull monthly patient counts — has no API, no MCP, no export button. Once a month, you Dispatch Claude to log into the portal, navigate to the report tab, set the date range to “last 30 days,” screenshot the result, and save the count to a CSV. Tedious by hand; slow and token-heavy for Computer Use; uniquely fit for it because nothing else can reach that tool.

When you reach for Computer Use: every other primitive has been ruled out. No API, no Connector, no MCP. The tool is web-or-desktop-only and you have to use it. Computer Use is the answer of last resort because it’s the most expensive and the slowest.

The decision tree, as questions

When you’re choosing a primitive, walk these in order:

“Does the work happen on a schedule?”
  → Yes, recurring on a calendar (every Sunday, every Monday morning, every hour) → Routine (cloud).
  → Yes, but only for the duration of an active session (poll this build until done) → Scheduled Tasks (in-session).

“Does the work get triggered by an external event?”
  → Event from a chat platform (Telegram, Discord, iMessage) → Channels.
  → Event from Claude Code’s own lifecycle (file saved, tool used, prompt submitted) → Hooks.

“Does the work need to run unattended for a while and then return a deliverable?”
  → Dispatch (Cowork).

“Does the work require interacting with a web or desktop tool that has no API, no Connector, and no MCP?”
  → Computer Use — and only as a last resort.

Most workflows you’ll automate this year are Routines. The other primitives matter when the shape of the trigger doesn’t fit a calendar.

For the deeper architectural breakdown — where each primitive runs, who triggers it, what permissions it inherits — see the canonical decision tree article.

Section 2 — Composition Patterns

Primitives compose. The interesting workflows aren’t single-primitive; they’re combinations. Three common patterns:

Pattern 1 — Channel-triggered Routine

The trigger is inbound from a chat platform; the response is a Routine that does real work and posts back.

Picture: a patient texts the practice’s Telegram contact line: “Do you have any availability this Saturday for a kid’s first cleaning?” The message arrives as a Channel event. Claude reads it, classifies it as a booking inquiry, and rather than answering blindly, it triggers a Routine that:

  1. Reads the practice’s calendar (Drive Connector → calendar API or shared calendar Doc)
  2. Finds available Saturday slots
  3. Checks the patient-management system for any flags on this number (existing patient? new lead?)
  4. Drafts a personalized reply
  5. Posts the reply back through the same Channel

The Channel is the trigger; the Routine is the worker. The composition lets you handle inbound chat at machine speed without burning a human’s evening.

In a diagram, it reads: Telegram message → Channel → triage Claude → trigger Routine → fetch/draft → reply on Channel. Two primitives (Channel and Routine), plus the Connectors the Routine uses, and one happy patient with an answer in 30 seconds.

Pattern 2 — Hook-triggered subagent

The trigger is a Claude Code lifecycle event; the response spawns a focused subagent that does a narrow job.

Picture: someone on your blog team saves a draft into wiki/drafts/. A PostToolUse hook fires (because Claude just used the Write tool to save the file). The hook reads the file path, decides “this is a blog draft,” and spawns a subagent with a specific task — “Run the Smile Springs voice-check skill on this draft. Output a critique file next to the draft. Don’t edit the draft itself.” The subagent runs, produces the critique, and exits. The author opens the critique before they ship.

In a diagram: file save → PostToolUse hook → spawn voice-check subagent → critique written → subagent exits. The hook is the deterministic part — it always fires on save. The subagent is the LLM judgment part — it reads the draft and reasons about voice. You get reliability where you need it (always run the check) and intelligence where you need it (the actual critique).

This pattern is the workhorse of any Builder-track workflow that needs a quality gate. The trigger is deterministic; the work is intelligent.

Pattern 3 — Dispatch-then-merge

The trigger is a human kick-off; the work runs unattended; the human reviews and merges the artifact.

Picture: Sunday afternoon. You Dispatch Claude on the deep Westgate teardown described in the Dispatch primitive section. While Claude works, you go to a meeting. 35 minutes later, your phone pings with a Cowork inbox notification: teardown complete, 502 words, saved to /clients/smile-springs/competitive/2026-04-26-westgate-teardown.md. You read the memo on your phone. Three of the five claims are sharp. One is wrong (Claude misread their pricing page). One needs deeper context. You either accept the artifact, redirect Claude with notes, or escalate the wrong-claim back into a Dispatch with a tighter prompt.

In a diagram: human kick-off → Dispatch → autonomous work → ping → human review → accept/redirect/redo. The pattern’s value is in the asynchrony — you got back 30 minutes of attention you’d otherwise have spent staring at Claude work.

The merge step is what separates Dispatch-then-merge from Dispatch alone. “Send Claude off” is half the value; “come back, review, decide what to do with the artifact” is the other half. Plan for the review step. If you Dispatch and never review, you’re producing artifacts that nobody acts on — that’s not automation, that’s clutter.

The mental shape: Dispatch is a coworker you can clone. Treat the artifact like a real teammate’s first draft — give it the same review rigor.

Section 3 — Failure Handling

Automation primitives will fail. The question isn’t whether; it’s how you find out and what you do.

Retry strategies for Routines

Some Routines are safe to retry — re-running them produces the same outcome (a sweep, a summary, a report). Others aren’t — re-running them double-posts to Slack, double-emails a client, or duplicates rows in a CRM. The discipline:

  • Make Routines idempotent where you can. A Routine that posts a Slack message should check whether today’s message already exists before posting. A Routine that writes a Drive file should write to a date-stamped path so re-runs overwrite the same file rather than creating new ones.
  • Cap retries. Don’t let a failing Routine retry forever. Three retries with exponential backoff is the standard. Beyond that, the Routine should give up loudly (post an “I tried and failed” message to a Slack alert channel) rather than silently keep trying.
  • Log every retry. When you debug a misbehaving Routine three weeks later, you want a log of every attempt, not just the final state.
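All three disciplines fit in a few lines. A sketch, where post_summary (raises on failure) and alert (the loud give-up) are stand-ins for whatever your Routine actually does:

```python
import time

def run_with_retries(post_summary, alert, retries=3, base_delay_s=60,
                     sleep=time.sleep):
    """Try the Routine's action with capped exponential backoff; fail loudly."""
    for attempt in range(retries):
        try:
            return post_summary()
        except Exception as exc:
            print(f"attempt {attempt + 1} failed: {exc}")  # log every retry
            sleep(base_delay_s * 2 ** attempt)             # 60s, 120s, 240s
    alert("I tried and failed: all retries exhausted")     # give up loudly
    return None
```

Note that idempotency is what makes this safe: post_summary must be the check-first variant, or each retry risks a duplicate post.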

Monitoring — where you see Routine failures

Routines have a run history in the claude.ai web UI. Each run shows status (success, partial, failed), elapsed time, and a summary of what Claude did. For Cowork-based work, the Cowork inbox is the primary surface — failed Dispatches land there with an error context.

The discipline: check the run history weekly. A Routine that’s been silently failing for three weeks because a Connector’s auth expired is a Routine you’ve effectively stopped getting value from — and the team learns to distrust automation when it goes dark without warning. Five minutes a week reading the run history catches most of these.

For Builder track: pipe Routine outputs to a Slack alert channel that you actually read. Failure summaries land there too. If your alert channel is silent for two weeks, either everything’s perfect (rare) or the Routine isn’t actually running (more common — go check).

Graceful degradation when an external API is down

External APIs go down. Drive has incidents. GoHighLevel rate-limits. Tavily occasionally returns timeouts. A Routine that depends on three external services, each itself ~99.7% available, has roughly a 99.7% × 99.7% × 99.7% ≈ 99.1% success rate just from infrastructure — meaning ~3 failures every 365 daily runs from third-party hiccups alone.

The discipline:

  • Build the Routine to tolerate partial results. If the Routine fetches three competitor sites and one fails, summarize the two that worked plus a note that one site couldn’t be reached. Don’t refuse to post anything because one source failed.
  • Distinguish hard failures from soft ones. Auth expired on a critical Connector → hard fail, post the alert, don’t try to substitute. One competitor site returned a 503 → soft fail, note the gap, continue.
  • Tell humans what was missing. A Slack post that says “Weekly competitive sweep complete. 2 of 3 sites swept successfully; Practice C’s site returned a 503 and was skipped this week — will retry next Sunday” is more trustworthy than a sweep that silently shipped a partial answer as if it were complete.
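In code, partial-result tolerance is a fetch loop that records gaps instead of aborting. A sketch; fetch_site is a stand-in that raises on a 503 or timeout:

```python
def sweep(sites, fetch_site):
    """Fetch every site; tolerate soft failures and report the gaps."""
    results, gaps = {}, []
    for site in sites:
        try:
            results[site] = fetch_site(site)
        except Exception as exc:          # soft fail: note it, keep going
            gaps.append(f"{site}: {exc}")
    note = ""
    if gaps:
        note = (f"{len(results)} of {len(sites)} sites swept successfully; "
                f"skipped: {'; '.join(gaps)} -- will retry next run")
    return results, note                  # always post *something*
```

A hard failure (expired auth on a critical Connector) should still raise out of this loop and hit the alert channel; only per-source hiccups belong in the gaps list.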

Rate-limit awareness

Every external service has limits. Track these per-Routine:

  • Tavily — free tier is 1000 requests/month. A weekly Routine that fires 30 Tavily searches per run uses 30 × 4 = 120 per month — fine. A daily Routine doing 30 searches per run uses 30 × 30 = 900 per month — close to the cap; watch it.
  • Drive — caps reads per second per user; bulk operations (read 50 files) can stall if not chunked.
  • Slack — message-posting caps per channel per minute.
  • GoHighLevel API — caps per second; bulk pulls need pagination.

The discipline: every Routine’s prompt should specify the budget for external calls. “Use Tavily for at most 5 searches across all three sites this run” is a guardrail that prevents a runaway Routine from chewing your free tier in one run.
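The budget line in the prompt can also be enforced mechanically, so a confused run cannot blow past it. A sketch of a hard-cap wrapper; the wrapped search callable stands in for a Tavily MCP call:

```python
class SearchBudget:
    """Hard per-run cap on external calls."""

    def __init__(self, search, max_calls=6):
        self.search = search        # the real search callable (stand-in here)
        self.max_calls = max_calls  # the budget stated in the Routine prompt
        self.used = 0

    def __call__(self, query):
        if self.used >= self.max_calls:
            raise RuntimeError(f"search budget exhausted ({self.max_calls})")
        self.used += 1
        return self.search(query)
```

The prompt-level budget and the code-level cap are complementary: the prompt shapes Claude's plan, the wrapper guarantees the ceiling.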

For more detail on what breaks and how to recover, Module 3 Section 3 covers the five most common MCP/Connector failure modes (auth expired, scope denied, rate-limited, schema mismatch, tool selection ambiguity).

Section 4 — Worked Example: Weekly Smile Springs Competitive-Intel Sweep

This is the centerpiece. Build a real Routine end-to-end. Three variants — Operator, Builder basic, Builder advanced. Pick the one that matches your track.

The brief

Smile Springs Family Dental wants to know what their three nearest Columbus competitors are doing. Specifically: new content (blog posts, service-page updates), themes the competitors are pushing, promotions or discounts being run, claims being made. Mel reads this every Monday morning over coffee and uses it to shape the week’s content priorities.

Today, this happens manually. Someone on the team spends ~90 minutes every Monday morning visiting three competitor sites, scanning blog feeds, comparing to last week’s mental model, and writing up a summary for Mel. It’s tedious, it slips when the staffer is sick, and the quality varies based on who’s doing it.

A Routine fixes all four problems. Same task, runs Sunday at 11pm, summary lands in Slack by 6am Monday, Mel reads it before standup, and the 90 minutes come back to the team.

The schedule

Sunday 11:00 PM Eastern. Cron expression: 0 23 * * 0, evaluated in the America/New_York timezone (Eastern floats between UTC-5 and UTC-4 across daylight saving, so schedule by timezone rather than a fixed UTC offset). Why Sunday late: it gives the Routine a quiet network window, it gives the team fresh data Monday morning, and 11pm is late enough to catch any Sunday-evening content updates.

The sources

Three placeholder competitors — call them Practice A (Westgate Family Dental), Practice B (Riverside Pediatric), Practice C (Capitol Smiles). All three have public websites with blog feeds. Real competitor identification is something you’d swap in based on actual market analysis. The Routine’s structure works regardless of which three you pick.

The Routine prompt

The actual prompt body, ready to paste into the Routines pane:

<role>
You are a senior dental marketing analyst tracking three local
competitors of Smile Springs Family Dental in Columbus, Ohio.
You produce a weekly competitive-intel summary for Mel, the
marketing director.
</role>

<context>
Smile Springs Family Dental — Columbus, Ohio.
Audience: families with kids and adults 35-55.
Voice: warm, plainspoken, trustworthy, not clinical.
Differentiators: Saturday appointments, no-wait booking.

Three competitors to track this week:
- Practice A: westgate-family-dental.example.com
- Practice B: riverside-pediatric-dental.example.com
- Practice C: capitol-smiles.example.com
</context>

<task>
Sweep the last 7 days of new content on each competitor site.
Produce a Slack-ready competitive-intel summary for Mel that
covers, per practice:
1. New content (blog posts, service pages) published in the
   last 7 days — list titles + URLs.
2. Content themes detected (what topics they're pushing).
3. Any new claims, promotions, or differentiators added to
   the homepage or service pages.
4. Compared to last week's snapshot (saved at
   /clients/smile-springs/competitive/snapshots/last-week.json),
   what's new or changed?

Then a "Top 3 takeaways for Mel" closing section — the three
things Mel should actually act on this week.
</task>

<rules>
- Format as a Slack message (markdown, scannable, under 600 words total)
- Per-practice: short header, bullet points, no walls of text
- Active voice
- No clinical jargon unless quoting a competitor's claim
- If a site returned an error, note that explicitly — don't
  hide the gap
- Total Tavily search budget for this run: 6 searches max
- Save the new snapshot to
  /clients/smile-springs/competitive/snapshots/2026-04-26.json
</rules>

<workflow>
Step 1: For each competitor, use the Tavily MCP to find new
content from the last 7 days. Cap at 2 searches per practice.
Step 2: Read the existing snapshot from
/clients/smile-springs/competitive/snapshots/last-week.json
to compare. If it doesn't exist, treat this as the baseline run.
Step 3: For each practice, identify what changed vs last week.
Step 4: Draft the Slack summary following <rules>.
Step 5: Validate — total length under 600 words? Each practice
covered? Top 3 takeaways present? If any check fails, revise.
Step 6: Save the new snapshot for next week's comparison.
Step 7: Post the summary to Slack channel #smile-springs-intel.
</workflow>

<output>
The Slack message body, posted to #smile-springs-intel.
</output>
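The week-over-week comparison in Steps 2 and 3 reduces to a diff of two snapshot files. A sketch, assuming each snapshot maps a practice name to a list of content URLs; the real snapshot schema is whatever the Routine chooses to write:

```python
def whats_new(last_week, this_week):
    """Per practice, the URLs that appear this week but not last week."""
    new = {}
    for practice, urls in this_week.items():
        seen = set(last_week.get(practice, []))  # baseline run: empty set
        added = [u for u in urls if u not in seen]
        if added:
            new[practice] = added
    return new
```

Note the baseline behavior: on the first run there is no last-week.json, every practice compares against an empty set, and everything reads as "new" — which is exactly what the prompt's "treat this as the baseline run" instruction expects.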

What the output looks like

A realistic mock of what lands in Slack at 11:30 PM Sunday:

*Weekly Competitive-Intel — Smile Springs (week of 2026-04-26)*

*Practice A — Westgate Family Dental*
- 2 new blog posts: "What to do when your child loses a tooth at school"
  and "5 myths about dental insurance"
- Theme this week: parents-of-kids angle, leaning into school-life stories.
  Different from their usual clinical-procedure focus.
- New claim on homepage: "Same-day appointments for emergencies."
- vs last week: tone has shifted — more parent-targeted, less clinical.

*Practice B — Riverside Pediatric Dental*
- 1 new service page: pediatric sealants.
- Promotion banner added: "$99 first cleaning for new kid patients
  through May."
- vs last week: aggressive pediatric pricing push. They didn't have
  a promotion last week.

*Practice C — Capitol Smiles*
- No new content detected.
- Site returned a 503 on second sweep — partial data only.
- vs last week: no detectable change.

*Top 3 takeaways for Mel*
1. Westgate is moving toward parent-targeted content — Smile Springs
   is already there, but tighten the differentiation in this week's
   blog post.
2. Riverside's $99 first-cleaning promo is the aggressive move of
   the week — consider whether to respond with a counter-offer or
   double down on no-wait booking + Saturday hours as the value prop.
3. Capitol Smiles partial-data; will retry next week. No action.

_Sweep complete. 6/6 Tavily searches used. New snapshot saved._

That’s an artifact Mel can read in 90 seconds and act on by 9am Monday.

How this differs from running it ad-hoc each week

The same task done by hand each Monday morning produces similar output, but with these costs:

  • 90 minutes of staff time every week — and the time is on the wrong day (Monday morning, when fresh-context work is most valuable).
  • Voice drift — different staff produce different summaries; some are tight, some ramble.
  • No memory — last week’s notes live in someone’s head; comparing this week to last week happens informally, if at all.
  • Stops when the staffer is out — the week the runner is on PTO, no intel ships.

The Routine version solves all four. Same output, runs on a quiet Sunday night, every week, with the same voice and the same structure. The snapshot file gives the Routine memory across runs — week 4’s summary can reference week 1’s themes, which is how you start to see real movement vs noise.

The compounding value is in the snapshots. Six months of weekly sweeps = 26 snapshots. That is institutional memory you couldn’t build any other way.

Variant patterns

Operator path — claude.ai / Cowork

The Routine runs in the cloud. You set it up once via the Routines pane in claude.ai (or Cowork’s equivalent). Sunday night it runs. Monday morning the summary is in Slack. You read it.

If it fails, the run history shows you what broke. If a Connector’s auth expired, you click reconnect and re-run. If a competitor’s site is down, the Routine handles that gracefully and notes the gap.

That’s it. No code, no terminal, no .mcp.json. The Operator path’s whole point is that automation should not require you to be a developer.

Builder path (basic) — Claude Code with a downstream hook

Same Routine, but ship it via Claude Code so you can compose it with other tools.

The build:

  1. Define the Routine in the Routines web UI (same prompt as above).
  2. In Claude Code, configure a PostRoutineRun hook in your project’s settings.json that fires after the Routine completes successfully.
  3. The hook runs a small script that takes the Routine’s Slack-summary output and pushes it to GoHighLevel as a CRM note on the Smile Springs account record.

The result: the team account-manager sees the same weekly summary land in their CRM workspace alongside other client notes. Nobody has to copy-paste from Slack into the CRM. The hook keeps the two surfaces in sync.

The hook config (sketch):

{
  "hooks": {
    "PostRoutineRun": [
      {
        "matcher": "smile-springs-competitive-intel",
        "command": "scripts/push-to-ghl.sh",
        "args": ["${routine.output}", "smile-springs-account-id"]
      }
    ]
  }
}

Document this in your project’s CLAUDE.md so a future teammate (or future-you in three months) knows what the hook does and why.

Builder path (advanced) — Cowork Dispatch for a deep one-practice teardown

A different shape: instead of a sweep across three practices, send Claude off on a 30-minute deep teardown of one practice. The output isn’t a 600-word weekly summary; it’s a 500-word strategic memo for one specific competitor.

The Dispatch prompt:

Spend up to 30 minutes doing a deep teardown of Westgate
Family Dental's website (westgate-family-dental.example.com).
Walk every page in their main nav. For each page, capture:
- The promise the page is making
- The voice/tone vs Smile Springs (warm/plainspoken/trustworthy)
- Any claims that are sharper or weaker than Smile Springs' equivalent
- The CTA and how it compares to Smile Springs' booking flow

Then produce a 500-word strategic memo for Mel covering:
1. The three biggest content gaps Smile Springs could exploit
2. Two things Westgate does materially better than us — and what
   we'd need to change to close the gap
3. The single tactical move Smile Springs should make this month
   in response to what Westgate is doing

Save the memo to
/clients/smile-springs/competitive/2026-04-26-westgate-teardown.md.

You kick this off Sunday afternoon while you’re heading into a meeting. Thirty minutes later, the memo is in your Cowork inbox. You read it on your phone. Most of the points are sharp; one is wrong (Claude misread their pricing page). You either accept the memo with a redirect note for next week, or you Dispatch a follow-up: “Re-read the Westgate pricing page carefully — I think you misread the family-cleaning bundle. Update the memo with the correction.”

The depth/breadth tradeoff: the weekly Routine gives you breadth (3 practices, light touch). The Dispatch gives you depth (1 practice, deep memo). They compose — run the Routine weekly for breadth; Dispatch a deep teardown once a quarter on whichever competitor your weekly summaries flagged as moving the most.

Common Pitfalls

The five mistakes you’ll see most often.

Routines that aren’t idempotent

A Routine that posts to Slack on each run, without checking whether today’s post already exists, double-posts on retry. Same problem with email Routines, CRM-note Routines, GitHub-issue Routines. The fix: every Routine that creates or sends should check first. Read the destination, see if today’s artifact is there, write only if it isn’t.

This bites hardest when a Connector hiccup causes a half-completed run to retry — Slack got the message, but the next step failed, so the whole Routine retries from scratch and Slack gets a duplicate. Idempotency is the discipline that makes retries safe.
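The check-first discipline can be sketched as a small guard. This assumes the run writes a dated marker file as its final step — the paths and naming are illustrative, not any Routine API:

```shell
#!/usr/bin/env sh
# Idempotency guard sketch. Assumes the run writes a dated marker file as
# its final step; path and naming are illustrative.
set -eu

run_once_per_day() {
  dir="$1"
  today=$(date +%F)
  marker="${dir}/${today}-sweep.done"
  if [ -e "$marker" ]; then
    echo "skip"        # today's artifact already exists: retry is a no-op
  else
    echo "run"         # fetch, compare, post to Slack here...
    touch "$marker"    # ...then write the marker LAST, so a crash
                       # mid-run leaves the job safely re-runnable
  fi
}

workdir=$(mktemp -d)
run_once_per_day "$workdir"   # first run prints "run"
run_once_per_day "$workdir"   # same-day retry prints "skip"
```

A marker only guards against duplicate full runs; for the Slack-duplicate case above, the strictest fix reads the destination channel itself and checks for today’s post before posting.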

Hooks that fire too often

A PostToolUse hook that runs heavy validation on every tool use turns a 30-second Claude Code session into a 5-minute one. Two patterns that avoid this:

  • Debounce. Only fire the hook if the relevant condition has settled — e.g., “fire after the last file save in a 10-second window” rather than “fire on every save.”
  • Pre-commit instead of pre-save. A voice-check that fires on every blog draft save is overkill; a voice-check that fires when the author tries to commit the draft to the repo is exactly the right friction.

The decision rule: the heavier the hook, the rarer the trigger. A 30-second voice-check belongs on commit, not on save.
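The settle-before-firing idea can be sketched as a leading-edge throttle — fire once, then suppress for the window. It’s a simpler stand-in for a true trailing debounce; the stamp path and the 10-second window are assumptions:

```shell
#!/usr/bin/env sh
# Leading-edge throttle sketch: fire once, then suppress for the window.
# A simpler stand-in for a true trailing debounce; stamp path and window
# length are assumptions.
set -eu

throttled() {
  stamp="$1"; window="$2"
  now=$(date +%s)
  if [ -e "$stamp" ] && [ $((now - $(cat "$stamp"))) -lt "$window" ]; then
    echo "skip"          # fired recently: let the saves settle
    return 0
  fi
  echo "$now" > "$stamp"
  echo "fire"            # run the heavy validation here
}

stamp_file=$(mktemp -u)        # path only; no file created yet
throttled "$stamp_file" 10     # first save prints "fire"
throttled "$stamp_file" 10     # immediate second save prints "skip"
```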

Dispatch tasks with vague success criteria

A Dispatch task with the prompt “do a competitive teardown of Westgate” is too vague. Claude doesn’t know when to return — it could work for 15 minutes or 90 minutes, and you don’t know what shape the deliverable should take. Symptom: Dispatches that come back with rambling 2000-word memos when you wanted 500, or Dispatches that time out on Cowork’s max-duration cap because Claude kept finding more to do.

The fix: every Dispatch prompt needs a hard deliverable. Word count, file path, format, structure. “Produce a 500-word memo, saved to [path], covering exactly these three sections.” Now Claude knows when it’s done.

Computer Use as a first reach instead of last resort

A teammate has a tool integration question and someone says “just use Computer Use, it can read anything.” Six weeks later the Routine is burning $40/run because every screenshot is tokens. The diagnosis: Computer Use was the first answer, not the last.

The decision rule (re-stated from Section 1): try API → Connector → MCP, and only then Computer Use. If a Connector exists, use it. If an MCP exists, use it. Computer Use is for the genuine “no other option” tools — old portals, specific-vendor desktop apps, anything where you’d otherwise be doing screenshots manually.

Channels without rate limits

A Channel that triggers a Claude session on every inbound message can cost you serious tokens on a chat-heavy day. A patient sends 12 messages in a row asking follow-up questions — that’s 12 Claude sessions, each one re-reading context, each one burning tokens.

The fix: rate-limit the Channel handler. “Process at most one inbound message every 30 seconds per sender; queue and batch-respond if more than 3 arrive in that window.” Or: “Only fire the Routine if the message contains a booking-related keyword.” Either gates the Claude work to the messages that actually need it.
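The keyword-gate variant can be sketched as a tiny filter in front of the Channel handler. The keyword list here is an assumption — tune it to your practice’s actual booking language — and it composes with the per-sender throttle above:

```shell
#!/usr/bin/env sh
# Keyword-gate sketch: only wake a Claude session for booking-related
# messages. The keyword list is an assumption; tune it to your practice.
set -eu

booking_related() {
  case "$1" in
    *[Bb]ook*|*appointment*|*schedul*|*cancel*) return 0 ;;
    *) return 1 ;;
  esac
}

handle_message() {
  if booking_related "$1"; then
    echo "trigger"       # worth a Claude session
  else
    echo "batch"         # hold for the next digest or a human pass
  fi
}

handle_message "any openings to book next week?"   # prints "trigger"
handle_message "thanks so much!"                   # prints "batch"
```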

Key Takeaways

  • Six primitives, six different shapes: Routines (cloud cron), Scheduled Tasks (in-session timers), Channels (chat events in), Hooks (Claude Code lifecycle events), Dispatch (autonomous async delegation), Computer Use (last-resort pixel automation).
  • The decision tree walks four questions: schedule? external event? unattended deliverable? no-API tool? Most workflows you’ll automate are Routines.
  • Composition beats single primitives. Channel-triggered Routines, Hook-triggered subagents, and Dispatch-then-merge are the three patterns you’ll use most.
  • Failure handling matters more than first-run success. Build idempotent Routines, monitor the run history weekly, tolerate partial results, and budget your external API calls.
  • The Smile Springs weekly competitive-intel sweep is a representative shape: scheduled trigger, multi-source fetch, comparison against last week’s state, formatted output to a chat surface, snapshot saved for next week. Most marketing-team automation looks like this.
  • Operator track ships Routines via the web UI; Builder track composes them with Hooks and Dispatch for downstream effects. Both produce the same kept artifact.
  • Computer Use is last resort. Try API, Connector, MCP first. Every screenshot is tokens.
  • Dispatch needs a hard deliverable in the prompt — word count, file path, format. Otherwise Claude doesn’t know when to return.

Try It

Three exercises by track. The fourth is a thinking-only assignment for both tracks.

[Operator] Ship the weekly Smile Springs sweep via Routines (~30 min)

  1. Open the Routines pane in claude.ai or Cowork.
  2. Create a new Routine. Schedule: weekly, Sunday 11:00 PM Eastern.
  3. Paste the prompt body from Section 4 into the Routine. Adjust the three competitor URLs to real practices in your account’s market (or keep the placeholders for the practice run).
  4. Connect the Slack Connector to the Routine. Specify the channel (your team’s competitive-intel channel; create one if needed).
  5. Run the Routine once manually using the “Run now” button. Watch the run complete.
  6. Verify the Slack post landed and the snapshot file was created in Drive.
  7. If anything failed, open the run history, read the error, fix the cause (most likely an unauthorized Connector or a misnamed channel), and re-run.

You should now have a recurring weekly sweep that requires zero manual work going forward.

[Builder, basic] Ship it via Claude Code + a GHL hook (~45 min)

  1. Define the same Routine in the web UI as above.
  2. In your Claude Code project, add a PostRoutineRun hook to settings.json (sketch shown in Section 4).
  3. Write a small shell script (scripts/push-to-ghl.sh) that takes the Routine output, hits the GoHighLevel API to create a CRM note on the Smile Springs account record, and exits cleanly.
  4. Test the hook by triggering the Routine manually. Verify the Slack post lands AND the GHL note appears.
  5. Document the hook config in your project’s CLAUDE.md so future teammates can find it. Cover: what it does, what API token it uses, what scope, who maintains it.

You should now have the same weekly sweep, plus a downstream system kept in sync via a deterministic hook.

[Builder, advanced] Run a Cowork Dispatch deep teardown (~45 min)

  1. Pick one competitor practice from your real market. Real one — this exercise produces an artifact you can actually use.
  2. Open Cowork on your phone or desktop. Compose a Dispatch task using the prompt from Section 4 (the deep-teardown variant). Tighten the prompt to your specific competitor.
  3. Set a 30-minute time-box mentally. Note when you started.
  4. Hit Dispatch. Walk away. Don’t watch.
  5. Come back 30 minutes later. If Cowork has pinged, read the memo on your phone. If it hasn’t, check the inbox — Claude may still be working, or may have hit an issue.
  6. Review the memo. Is the analysis sharp? Is anything wrong? Is anything genuinely useful?
  7. Decide: accept the memo as-is, redirect Claude with a follow-up Dispatch, or escalate the wrong claim back into a tighter Dispatch.
  8. Document what you learned about the prompt — what worked, what didn’t. Tighter Dispatch prompts are the result of running Dispatch enough times to know what shape Claude needs.

You should now have a kept artifact (the memo) plus a clearer mental model of what Dispatch can and can’t do well.

[Both] Identify one task to automate — but don’t ship it yet (~15 min)

  1. Pick one weekly task you currently do by hand. Real one. Something that makes you sigh on Monday morning.
  2. Walk the decision tree from Section 1. Which primitive is the right shape?
  3. Sketch the Routine prompt (or Channel filter, or Hook config) you’d write. Specifics — what data sources, what output, what destination.
  4. Document the decision in a Drive doc or your team’s notes. Don’t ship it yet — the goal is to practice the decision-making, not to spawn another half-finished automation.
  5. Bring the doc to your next 1:1 or team meeting. If the prompt is sharp and the primitive fits, that’s a candidate for the next sprint.

The discipline this exercise builds: automation starts with the right shape, not with the prompt. Picking the primitive is the first move; writing the prompt comes after.

Done? Move on to Module 6 — Building Your First Skill from Scratch.