Sources:
- Course transcript: raw/Build_Sell_Claude_Code_Operating_Systems_2+_Hour_Course_cleaned.md
- Repo README: ai-research/nateherkai-ais-os-readme-2026-05-01.md
Creator: Nate Herk (AI Automation Society — 350,000-member free community + paid Skool community)
Course URL: https://www.youtube.com/watch?v=bCljOfCH8Ms
Duration: 2+ hours
Platform: YouTube
Repo: https://github.com/nateherkai/AIS-OS — public, MIT, 75 stars / 28 forks, ~28KB, created 2026-05-01 (same week as course release)
Repo description: “AI Operating System starter kit for Claude Code — three-skill kit (/onboard, /audit, /level-up) + 3Ms framework. Companion to the AIOS masterclass.”
Trademarks: “The Three Ms of AI™” and “The Four Cs of an AIOS™” — both ™ © 2026 Nate Herk; ship in the repo with attribution; “use freely; don’t repackage as your own.”
The most comprehensive third-party Claude Code “AI operating system” framework documented to date. Nate Herk’s 2+ hour course walks through building a personal AIOS in Claude Code — the same system that powers his AI automation YouTube channel and 350k-member community. Filed in claude-ai as the central article for the AIOS pattern; functions as a hub linking to nearly every Claude Code surface, plus the Karpathy LLM-wiki pattern this vault implements. Two governing frameworks: The Three Ms (Mindset / Method / Machine) for the human side; The Four Cs (Context / Connections / Capabilities / Cadence) for the technical build. Distinct from the Builder-track Brandon Storey lead-magnet course (single-workflow tutorial) and the Operator-track Rick Mulready use-cases tour (Cowork-only) — Nate’s course is the architecture-first systems view, not a use-case demo.
Key Takeaways
- The Four Cs of an AIOS™ are the central framework. Build in order: (1) Context — what AI knows about you, your team, your business, your voice; (2) Connections — what data it can reach (MCPs, APIs, CLIs, browser); (3) Capabilities — what it can produce (skills); (4) Cadence — when it acts on its own (routines, scheduled tasks, `/loop`). Dependency graph per the README: Context is non-skippable; Connections + Capabilities can build in parallel; Cadence is last — don’t automate workflows that don’t work manually. Run the bundled `/audit` skill any time to score yourself across the four pillars.
- The fastest test: open a fresh Claude session, ask a question about your business. Does it answer like a teammate / executive assistant, or like a stranger who met you five seconds ago? That gap is the Context-and-Connections deficit, scored.
- Tool-agnostic by design. Nate moved his AIOS from n8n → Claude Code; ported it to Codex in ~2 minutes as a portability test; explicitly flags Antigravity as a future target. The framework is durable, the tool is replaceable. The rule: never lock in to a tool that’s younger than 6 months.
- Skills (not workflows) are the capability primitive. A skill = a folder at `.claude/skills/<name>/` with a `SKILL.md` containing YAML frontmatter (`name`, `description` required) + a step-by-step SOP. Reference files live alongside or in shared `references/` / `scripts/` folders. Once defined, “write me a LinkedIn post” triggers all five steps the skill encodes.
- The bundled AIS-OS repo ships exactly three skills, intentionally lean: `/onboard` (one-time setup wizard with a 7-question interview generating the Day-1 file set), `/audit` (recurring “is the AIOS built right?” form check — Four-Cs gap report, read-only), `/level-up` (recurring “what business leverage am I missing?” function check — Three Ms interview producing one shipped artifact per run). Per the README: “Skills here are ideation prompts and thinking tools, not heavy automations. You hack on top of the structure.” Repo is MIT-licensed and public on GitHub at nateherkai/AIS-OS — no Skool signup required for the repo itself, though the companion masterclass video lives in his free Skool community.
- Map your business across the seven Tier-One Buckets before opening anything: Revenue / Customer / Calendar / Comms / Tasks / Meetings / Knowledge. Sketch on paper, in Excalidraw, in Miro — any medium works. The AIS-OS Claude Code template is trained on these seven domains and the system maps your tools to them as you go.
- API endpoints beat MCP servers for token efficiency. Nate’s strong preference: instead of installing the ClickUp/Notion/Slack MCP (which exposes every endpoint and eats tokens just by being loaded), have Claude Code research the public API docs, write a markdown reference file, and use 2-3 endpoints directly via `.env`-stored API keys. Skill files can hardcode the endpoints they actually use.
- Use a separate AI account, not your personal credentials. Nate runs an “Up AI” ClickUp account with its own permissions. Reasons: tighter scope, clean blast-radius isolation (“AI deleted production database” headlines), per-agent spend attribution.
- Memory + local markdown over RAG (at this scale). The bonus section explicitly endorses Karpathy’s LLM-wiki pattern — drop raw documents into `raw/`, let Claude Code organize them into a `wiki/` folder, navigate in Obsidian. No vector DB, no embeddings, no chunker. Cited result from one X user: a 95% drop in tokens used to query a 383-file + 100-meeting-transcript corpus.
- Productivity drops before it climbs (the change curve). Expect a ~20% productivity dip during the workflow transition. Break-even around day 3, well ahead by day 4-5. By two weeks in, taking the AIOS away would feel painful. The dip is where most people quit.
- Cloud routines have specific gotchas worth knowing in advance. `.env` doesn’t exist in the cloud clone (use environment variables), default network access is “trusted only” (change to “full” for non-Anthropic-vetted domains like ClickUp), browser automation with cookies fails (no local cookie state), and each run is stateless (the cloud GitHub clone is destroyed after the run). Nate documented these from migrating his own routines.
- Standard editorial caveat — the course is itself a lead magnet. A free 2+ hour course bundled with a free Skool community membership and a paid community pitch: the same lead-magnet-funnel pattern documented in Brandon Storey’s course. The technical content is genuine and substantive; Skool conversion is the secondary outcome.
The Litmus Test (the design principle every layer rolls up to)
From the README — the single sentence the entire kit is designed to deliver:
“While you’re not at your desk, your AIS-OS observes one real-world event and produces an output that’s faster and more accurate than what you’d produce yourself.”
If a layer, skill, or template doesn’t contribute to that test, it doesn’t ship.
The Two Governing Frameworks
The kit teaches two complementary frameworks. Three Ms first, Four Cs second. Per the README: “Without the brain rewire, the architecture is just a folder structure.”
The Three Ms of AI™ — operator brain (how you think)
The human-side framework. Each M has its own sub-framework. Full breakdown lives in references/3ms-framework.md in the repo; the /level-up skill walks you through all three weekly.
Mindset — three habits:
- Default Shift. Before any task, ask: “How could AI do this — or at least 30% of it?” Rarely 100%; 50-75% is still a major win. Worked example from Nate: 300+ YouTube description tracking link updates. Old default: click each one for an hour. New default: brainstorm with Claude Code → API approach → never touch it manually.
- Function Breakdown. Your job is a tree of tasks. “Automate a YouTube video” is too vague to act on; ideation → scripting → slide creation → packaging → descriptions → comment replies are each automatable separately, and the slide-deck chunk drops cleanly into a meeting-prep workflow.
- Curiosity Rule. Treat AI as a mentor, not a vending machine. The “dark code” failure mode — using AI to write code you don’t understand — extends to skills, prompts, and SOPs.
Mindset’s central question: “To what extent can AI be leveraged here?”
Method — five-step decision pattern (per the README):
- Find Constraint — what’s actually slowing the business right now?
- EAD — Eliminate, Automate, or Delegate? (Eliminate first; not all work needs to exist.)
- Map Process — chart the steps before automating any.
- Pick Autonomy Level — full agent, hybrid, deterministic workflow, or just-AI-assist?
- Tie to KPI — automation without a metric is theater.
Machine — five rules (per the README):
- Lego Principle — build small reusable parts, not monoliths.
- Validation Chain — every step verifies the previous step’s output.
- Bike Method — start with the simplest moving version; iterate, don’t pre-design perfection.
- Intern Rule — if a smart intern could do it but explaining would take longer than doing, that’s a skill candidate.
- Kill Switch — every autonomous workflow needs a clean way to stop and roll back.
Machine’s central principle: “Boring is beautiful. Workflows beat agents.”
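The Kill Switch rule above is easy to make concrete. A minimal, hypothetical sketch: an autonomous loop checks a flag file between units of work, so stopping it cleanly is just creating one file (the file name and the loop body are invented for illustration).

```python
from pathlib import Path

KILL_SWITCH = Path("STOP")  # hypothetical: touch this file to halt the workflow


def run_cycles(work: list[str]) -> list[str]:
    """Process items one at a time, checking the kill switch between each."""
    done = []
    for item in work:
        if KILL_SWITCH.exists():  # clean stop between units of work, never mid-step
            break
        done.append(item.upper())  # placeholder for the real step
    return done
```

Because the check sits between units of work rather than inside them, a stop never leaves a half-finished step behind — which is the rollback half of the rule.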
Two non-obvious cross-cutting takeaways:
- Never binary. The question is never “will AI do this?” — it’s “to what extent can I leverage AI here?” Some answer between 0% and 100%. Most tasks have a useful percentage; you have to find it.
- Mindset isn’t motivation. It’s the lens that finds that percentage. Without it, you don’t see the leverage.
The Four Cs of an AIOS™ — architecture (what you build)
The technical-build framework. Each layer ships with its own “is this layer in place” test (per the README):
| # | Layer | One-liner | “This layer is in place” test |
|---|---|---|---|
| 1 | Context | Knows your business | Fresh Claude session answers “what does this business do and who works here?” without browsing |
| 2 | Connections | Reaches your stuff | “What’s on my calendar tomorrow and what tasks are due?” → live data, no paste |
| 3 | Capabilities | Knows how to do the work | A short phrase triggers a multi-step workflow that produces an artifact |
| 4 | Cadence | Runs without being asked | Laptop closed. A brief lands in the inbox. A teammate messages it and gets a real answer |
Brand line (per the repo): Context. Connections. Capabilities. Cadence.
Dependency graph: Context is non-skippable. Connections + Capabilities can build in parallel. Cadence is last — don’t automate workflows that don’t work manually. The /audit skill (bundled in the AIS-OS template) scores you across all four with gap recommendations.
The Seven Tier-One Buckets
Before opening Claude Code at all, map your business across these seven domains. The AIS-OS template is trained on this taxonomy.
| Bucket | Typical tools | What it captures |
|---|---|---|
| Revenue | Skool / Stripe / QuickBooks | P&L, runway, growth |
| Customer | Skool / YouTube / community platforms | Where customer interactions live |
| Calendar | Google Workspace (Calendly + ClickUp calendar sync) | Time |
| Comms | Google Workspace email / ClickUp / Slack | Internal + vendor |
| Tasks | ClickUp / Notion | Internal projects + vendor projects |
| Meetings | Fireflies | Transcripts |
| Knowledge | YouTube transcripts / Workspace / local files | Long-term context |
You don’t need every bucket on day one. Start with the most important core tools per bucket. As of 2026, every business tool has some AI integration path — MCP, REST API, or worst-case browser automation via Computer Use.
The AIS-OS Repository
Public on GitHub at nateherkai/AIS-OS (MIT, 75 stars / 28 forks at time of ingest, created 2026-05-01 alongside the masterclass). Clone with standard git clone — no Skool signup required for the code itself.
Folder structure (verified against the README):
AIS-OS/
├── README.md
├── CLAUDE.md ← Your operating manual (filled by /onboard)
├── EXPANSIONS.md ← What to add as you grow
├── LICENSE ← MIT
├── .gitignore
├── aios-intake.md ← Source-of-truth for /onboard. Edit + re-run any time.
├── connections.md ← Registry of every system your AIOS can reach
├── context/ ← About you, your business (filled by /onboard)
├── references/
│ └── 3ms-framework.md ← The operator brain (full Three Ms breakdown)
├── decisions/
│ └── log.md ← Append-only record of what was decided and why
├── archives/ ← Old stuff. Don't delete. Move here.
└── .claude/
└── skills/
├── onboard/SKILL.md ← Setup wizard (one-time, Day 1)
├── audit/SKILL.md ← Form check (recurring, Day 7+, weekly)
└── level-up/SKILL.md ← Function check (recurring, Day 14+, weekly)
CLAUDE.md is the master prompt — has a “Your Skills” heading (so Claude knows which skills to invoke when), a “Where things live” section explaining each folder, and paths it should read. Nate edits his CLAUDE.md ~2× per day. Nothing about the AIOS is set in stone.
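As a rough illustration, a CLAUDE.md skeleton with those sections might look like this — only the two heading names come from the description above; every entry is invented from the repo's folder descriptions, not copied from Nate's file:

```markdown
# CLAUDE.md — operating manual

## Your Skills
- /onboard — one-time setup wizard; run on Day 1
- /audit — weekly Four-Cs form check (read-only)
- /level-up — weekly Three Ms interview; ship one artifact per run

## Where things live
- context/ — who I am, what the business does, how I sound
- connections.md — every system you can reach, and how to reach it
- decisions/log.md — append-only record; append, never rewrite
- archives/ — old material; move here, don't delete
```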
EXPANSIONS.md documents what to add as the AIOS grows — projects/, templates/, scripts/, .claude/agents/, sub-OS folders.
The README’s Quick Start
The repo’s recommended ramp (per the README):
- Clone the repo to a working folder.
- Open in Claude Code, run `/onboard`. 7 questions, ~15 minutes. Voice samples must be pasted, not described. The Day-1 file set drops at the end.
- Use it for a week. Bring real questions. Make real decisions. Log via `/decision` or just append to `decisions/log.md`.
- Day 7: run `/audit`. Read the Four-Cs gap report. Pick one gap to close.
- Day 14: run `/level-up`. The Three Ms interview surfaces one automation worth building. Build it.
- Week 3+: weekly `/level-up` ritual. One shipped artifact per week.
Day 1 — Run the /onboard Skill
After cloning, just talk:
“I just cloned this repo. I want to set up my AI operating system. My name is [you]. Help me get onboarded.”
The skill reads its instructions, runs the 7-question interview, and fills in `aios-intake.md` as you answer. Question categories:
- Who are you, what do you sell, who do you sell to? A multi-paragraph answer is required — don’t shortchange this. Voice dictation is the unlock.
- Paste 1-2 things you’ve written recently, verbatim, no edits. This calibrates voice. If you have multiple voices (LinkedIn vs. team email vs. client email), label each batch.
- Your two to three biggest priorities for the next 90 days. Real Q3 priorities, real sprint, real milestones. (Questions 4-7 follow the same pattern.)
When done: “Day 1 done. Your AIOS knows who you are, what you sell, what matters this quarter, and how you sound. Today you can ask ‘what should I focus on this week?’ Tomorrow pick one tool from connections and wire it up. On day 7 run /audit to see your Four Cs score.”
The first answers are vague — that’s the floor. They evolve every time you make a decision, launch an offer, or pivot. Best practice for priorities: don’t write them by hand; point the AIOS at the place they actually live (“read everything in my ClickUp Q2 workspace — those are our priorities”). This is exactly why connections matter — instead of asking you, the AIOS just goes and looks.
Day 2+ — Connections
After /clear, start wiring up tools. The most important non-obvious move: don’t give the AIOS your personal credentials. Make a separate account (“Up AI” in Nate’s setup) per-tool. Tighter permissions, clean blast-radius isolation, per-agent spend attribution.
API endpoints over MCP servers
The default suggestion when wiring up a tool will be “install the MCP server.” Nate’s strong preference is API endpoints written into a markdown reference file and called via .env-stored keys. Reasons:
- MCP servers expose every endpoint, not just the few you actually use
- Multiple loaded MCP servers eat tokens just by being available
- A markdown reference file is cheap to read and easy to extend
The standard prompt:
“I want to use [tool]’s API instead of the MCP — more token-efficient. Research [tool]’s docs, set up a reference markdown file in this project listing all endpoints, then create a `.env` file with placeholders so I can paste my API key in.”
Claude researches, writes the reference, and creates the `.env` with placeholders. You paste keys into `.env` (gitignored by default — never paste keys into the chat).
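A minimal sketch of what the resulting skill-side code tends to look like, assuming a hypothetical ClickUp endpoint and env-var name (neither is from the course): the key is read from the environment, populated from the gitignored `.env`, and the skill hardcodes the one endpoint it actually uses.

```python
import os
import urllib.request

# Hypothetical endpoint hardcoded in the skill, instead of loading a full MCP
CLICKUP_TASKS_ENDPOINT = "https://api.clickup.com/api/v2/list/{list_id}/task"


def build_request(list_id: str) -> urllib.request.Request:
    """Build an authenticated request for the one endpoint this skill uses."""
    # Key comes from the environment (loaded from .env) — never pasted into chat
    api_key = os.environ["CLICKUP_API_KEY"]
    url = CLICKUP_TASKS_ENDPOINT.format(list_id=list_id)
    return urllib.request.Request(url, headers={"Authorization": api_key})
```

The point of the pattern: the skill file names only these 2-3 endpoints, so Claude never re-reads the full API reference on each run.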
Permission modes — the autonomy spectrum
| Mode | Behavior | When to use |
|---|---|---|
| Plan mode | Brainstorm only, no execution | Designing skills, exploring approaches |
| Edit / auto-edit | Prompts on every web fetch / write | Cautious daily use |
| Auto | Lets safe stuff through, pauses on deletes / pushes / risky ops | Trusted daily use |
| Bypass permissions | Full autonomy, no prompts | Operator who’s accepted the risk |
Nate runs bypass permissions; “I’ve never had a problem.” Enable under Settings → search “claude” → “allow dangerously skip permissions.” First-time users should default to ask-permissions until intuition builds.
Skill-file optimization
If a skill (e.g., a team check-in) calls the same 2-3 endpoints every time, put those endpoints directly in the skill file so Claude doesn’t re-read the entire 600-line API reference every run. The same applies to documentation lookups — scrape once, store as reference.md, point the skill there.
The mantra: processing markdown locally is cheaper than HTTP requests + token-heavy crawls.
Capabilities — Building Skills (the long section)
Skills are the capability primitive. Anything Nate does on a cadence — pulling YouTube analytics, team check-ins, slide decks, image generation, LinkedIn posts — has a skill.
What a skill is, technically
A folder, typically .claude/skills/<name>/, with at minimum a SKILL.md:
- YAML frontmatter between `---` lines: `name`, `description` (required). Optional: `disable-model-invocation`, `allowed-tools`, `argument-hint`, `model`, `context`, `hooks`, `agent`.
- Step-by-step instructions: the SOP Claude follows after picking the skill.
Two valid layouts for supporting files:
- Self-contained: `.claude/skills/<name>/{SKILL.md, scripts/, references/}`
- Shared: `.claude/skills/<name>/SKILL.md` references `references/` and `scripts/` at the project root via path
Either works as long as SKILL.md points to the right paths. Nate’s idea-mining skill keeps reference files (channel data markdown, raw YouTube JSON, competitor list, Python analysis script) outside the skill folder and points at them via path.
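For concreteness, a minimal `SKILL.md` might look like the following. The skill name, steps, and file paths are illustrative inventions; only the frontmatter field names come from the layout described above:

```markdown
---
name: linkedin-post
description: Draft a LinkedIn post in my voice from a topic or link.
---

# LinkedIn post skill

1. Read references/voice-guide.md for tone and formatting rules.
2. Ask for the topic or source link if it wasn't provided.
3. Draft the post: hook, 3-5 short paragraphs, one call to action.
4. Check the draft against the voice-guide rules before showing it.
5. Save the approved draft to outputs/ with a dated filename.
```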
Progressive context loading — why skills stay cheap
Three levels:
- Initial search — Claude reads only the YAML frontmatter (`name`, `description`) across all skills. ~100 tokens per skill.
- Skill body — once a match is found, the full `SKILL.md` loads. 1-2k tokens.
- Supporting files — only loaded if the request actually needs them.
Anthropic’s docs recommend keeping SKILL.md under 500 lines. Move detail to separate reference files. (Same advice in Agent Skills Overview.)
How Claude knows when to use a skill
Two triggers:
- Explicit slash command — `/skool-post`
- Natural language — “help me write a Skool post about X” → Claude searches skills, finds the YAML-description match, and invokes it
Whenever Claude gets a request, it (a) reads CLAUDE.md, (b) scans skills for a YAML-description match. Match found → invoke. No match → general knowledge fallback.
The six-step skill-building framework
- Name + trigger. What’s it called? What natural language fires it?
- Goal. One sentence. What’s the output?
- Step-by-step process. If you did this manually, what exactly do you do, in what order? What decisions do you make?
- Reference files. What context does it need? Style guides? Logos? Project state?
- Rules. What could go wrong? Build guardrails for those failure modes.
- Self-improvement loop. Iterate after each run.
The feedback cycle — your first skill won’t be perfect
Nate’s pattern:
- Walk Claude through the process manually
- Say “this is a daily thing — turn it into a skill, ask whatever you need to know to capture it”
- Run the skill the next time. Watch it work. Notice friction.
- Tell Claude what to fix. It edits the skill.
- Repeat. By run 10-30, the skill is sharp.
Symptom-to-fix table
| Symptom | Fix |
|---|---|
| Wrong steps or order | Edit SKILL.md instructions |
| Missing tone / style / context | Add reference files, point to them |
| Same mistake repeating | Add a rule |
| Struggles with a tool / repeats searches | Create a reference doc |
| Works but could be better | Brute force — run, nitpick, repeat |
| Skill not triggering | Tighten YAML description |
| Skill triggering too often | `disable-model-invocation: true` (slash-command only) |
skill-builder — a skill that builds skills
Nate built a skill that interviews you and produces a skill. It’s free in his Skool community. Drop in .claude/skills/skill-builder/, then say “let’s build a new skill” — interview-driven generation.
Demo flow (from the course): infographic-builder skill. Skill-builder asks: problem to solve? what kind of content? trigger style? walk-me-through-process? conversational vs. fire-and-forget? API integration? where do brand assets live? where does output go? brand guidelines? Builds the skill, the logo overlay logic, the API reference markdown, registers in CLAUDE.md, logs the decision. Run 1 has logo placement off; iterate to run 2-6 to get to “excellent and reproducible.”
Where skills come from (sources)
Don’t build all of them yourself:
- Anthropic’s official skill library — see skills (124k stars, 17 examples + Agent Skills spec)
- Community open-source — see agents marketplace, Corey Haines’ marketing-skills bundle, Charlie Hills’ social-media-skills
- Marketplaces — share / sell / download. Vet anything before installing per the 6-question framework.
Skills are markdown — they work across Claude Code, Cursor, Antigravity, Codex — not Claude-only.
Project-level vs. global skills
.claude/skills/<name>/ is project-only. For skills you want everywhere, install at ~/.claude/skills/<name>/ (global). Every Claude Code project picks them up. Nate runs frontend-design globally so it’s available wherever he’s building UI. Use globals for things that should apply universally — voice, company context, common workflows.
The skill → routine pipeline
The unifying pattern: notice a recurring need → build it as a skill → wrap that skill in a scheduled routine. The routine’s prompt becomes one line: “Run the X skill.” Cadence is built on capabilities that exist standalone too.
Google Workspace CLI — the big unlock
Nate explicitly calls out the GWS CLI (open-source Google Workspace CLI) as one of the highest-leverage moves in his entire setup. Background: Google search inside Workspace is bad, his team had spun up Docs/Sheets everywhere, and the GWS CLI gives Claude Code a single tool to reach the entire Workspace.
What the GWS CLI does:
- Search / list / upload / download / move / copy / share anything in Drive
- Anything in Gmail / Calendar / Docs / Sheets / Slides
- 100+ multi-step workflow recipes built in: “create doc from template,” “read sheet → create report doc,” “find free time → schedule meeting,” “create events from sheets,” “create presentations,” “label-and-archive emails”
Why it’s powerful:
- One interface instead of a stack of MCP configs
- JSON-first responses — agents work with it cleanly
- Auto-discovery — when Google adds a method, GWS picks it up
- Built-in skills for triage / prep / generation
- Free + open source — (“not an officially supported Google product”; “expect breaking changes as we march toward v1”)
Caveat: it’s open-source beta, not enterprise-backed. Some users on X say it’s overpowered; others find the repeated re-auth prompts finicky.
Microsoft 365 users — there’s a 365 equivalent.
Installation (manual / Option B path)
- Google Cloud Console → New Project (e.g., “Claude Code GWS”). Select it.
- APIs & Services → OAuth consent screen → Get Started. Internal app type if it’s just you. Add contact info.
- APIs & Services → Credentials → Create OAuth Client ID. App type: Desktop. Name: GWS.
- Download the JSON. Save to `~/.config/gws/` as e.g. `client_secret`.
- Tell Claude: “I finished option B, the credentials are called `client_secret`. Run `gws auth login`.”
- Browser opens → choose account → confirm scopes → Allow.
- Enable each API in your Cloud project: Drive / Gmail / Calendar / Docs / Sheets / Slides.
Test: “Find my Google Doc from April 2025.”
Slides + visual validation
Pure programmatic Slides generation produces structurally-OK / visually-off output (the agent admitted: “I cannot see the slides, I just know how to build them programmatically”). Add Chrome DevTools to the skill: it opens the page, screenshots, looks, fixes spacing, repeats. After iterations: logo top-right, on-brand colors, generated images aligned with brand. Not yet replacing Gamma, but getting close.
Cadence — the Fourth C
Cadence = things keep running while your laptop is closed.
Push the AIOS to a private GitHub repo
Everything you’ve built is folders + files. Local-only = single-laptop only. Push to a private repo:
- Clone on a second laptop, pick up anywhere
- Plug into other AI harnesses (Codex, OpenClaw, Hermes Agent)
- Connect ClickUp / Telegram with Claude Code on a VPS or Mac mini → 24/7 access
Routines vs. desktop scheduled tasks vs. /loop — the comparison table
The three cadence primitives are easy to confuse. They aren’t interchangeable:
| Feature | Cloud Routines | Desktop Scheduled Tasks | /loop |
|---|---|---|---|
| Where it runs | Anthropic cloud | Your machine | Your machine |
| Machine on? | No | Yes | Yes |
| Session open? | No | No | Yes |
| Survives restarts? | Yes | Yes | No |
| Local file access? | No | Yes | Yes |
| Permission prompts? | Fully autonomous | Configurable | Configurable |
| Min interval | 1 hour | 1 minute | 1 minute |
See Claude Code Routines (cloud), Scheduled Tasks (desktop + /loop).
Cloud routines — plan limits
Per the course (April 2026):
| Plan | Cloud routines/day |
|---|---|
| Pro | ~5 |
| Max ($200/mo) | 15 |
| Team / Enterprise | 25 |
Once you hit the cap, only orgs with metered overage can run more.
Cloud routine settings
- Name + prompt (the actual instruction injected into a real Claude Code session)
- Model (any model)
- Repository (the GitHub repo to clone for the run)
- Cloud environment (env vars + network-access level)
- Cadence (hourly / daily / weekdays — minimum interval is 1 hour)
- Connectors (Slack / Gmail / etc.) or just env-var API keys
- Permissions
Gotchas Nate documented from migrating his routines to cloud
- `.env` doesn’t exist in the cloud clone. `.env` is gitignored, so the cloud clone has no API keys. Fix: put keys in environment variables in the routine’s cloud-environment settings.
- Network access defaults to “trusted.” Trusted = Anthropic-vetted domains only. ClickUp wasn’t on the list; Nate’s routine failed. Fix: change to “full.” Risk: malicious-content-driven exfiltration in theory; practical risk on private repos with controlled inputs is low.
- Sometimes Claude looks for `.env` even when you said env vars exist. Fix: add to the prompt: “API key X is available as an environment variable. Use it directly. Don’t look for `.env`.”
- Browser automation (Playwright + cookies) fails. No local cookie state in the cloud. Skills that scrape via cookie-dependent browser automation can’t migrate directly — they need stateless auth (API key, header auth).
- Each run is stateless. The cloud GitHub clone gets destroyed after every run. Exception: if the routine writes to your codebase (PR or branch), that persists.
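The first and third gotchas suggest an "environment variable first, `.env` fallback" pattern, so the same code runs locally (keys in the gitignored `.env`) and in a cloud routine (keys only in env vars). A hedged sketch with a hypothetical variable name:

```python
import os
from pathlib import Path


def get_api_key(name: str, env_file: Path = Path(".env")) -> str:
    """Prefer a real environment variable; fall back to parsing .env locally."""
    if name in os.environ:  # cloud routine: set in the routine's cloud-environment settings
        return os.environ[name]
    if env_file.exists():  # local run: .env exists but is gitignored
        for line in env_file.read_text().splitlines():
            key, _, value = line.partition("=")
            if key.strip() == name:
                return value.strip()
    raise KeyError(f"{name} not set in environment or {env_file}")
```

With this in place, the prompt instruction "use the environment variable directly" and the local workflow both hit the same code path.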
/loop — the per-session shorter-cadence option
Three tools under the hood: `cron-create`, `cron-list`, `cron-delete`. Per-session, 3-day expiry, no catch-up, no persistence beyond 3 days. Closing the terminal kills active loops.
Examples:
“Remind me at 10:23 to check on my project.” → one-shot reminder
“Every 10 minutes, check my ClickUp for new developments.” → recurring loop in this session
Disable via env var if Claude is creating loops too eagerly.
Loop vs. scheduled tasks decision
- Loop: “help me right now / for the next few days.” 3-day max, in-session.
- Scheduled tasks: “help me every day / week / month forever.” Persistent, with catchup.
- Cloud routines: “run while everything is closed.” Stateless, 1-hour minimum interval.
Bonus — Karpathy’s LLM Wiki Pattern (this vault!)
Worth its own callout because Nate explicitly endorses Karpathy’s LLM-wiki pattern — the exact pattern this Karpathy LLM Wiki implements.
The structure (matches what’s running here):
my-wiki/
├── raw/ ← drop raw docs here
└── wiki/
├── index.md ← LLM-maintained catalog
├── log.md ← LLM-maintained operation history
├── analysis/
├── concepts/
├── entities/
└── sources/
Why it matters per Nate’s endorsement:
- No vector DB. No embeddings. No RAG pipeline. Just markdown files in folders.
- Token efficiency — one X user converted 383 scattered files + 100+ meeting transcripts into a wiki and saw a 95% drop in tokens used to query it.
- Compounds like interest — every ingest enriches the cross-link graph.
- Karpathy himself runs ~100 articles, ~500k words. This vault now runs 190+ articles, demonstrating the pattern holds at scale.
Nate runs two vaults: a YouTube transcripts vault (36 videos auto-organized into nodes connected by tools / techniques / concepts) and “Herk Brain” (his personal second brain — businesses, employees, Q2 initiatives, personal notes).
Setup in 5 minutes, per Nate:
- Install Obsidian (free).
- Create vault.
- Open vault in VS Code → run Claude Code.
- Paste Karpathy’s gist into Claude Code with: “You are now my LLM wiki agent. Implement this exact idea file as my complete second brain.”
- Claude scaffolds the structure.
This is the pattern this very wiki uses. Worth lifting Nate’s specific configuration tips:
- `hot.md` cache in his personal Herk Brain vault — a ~500-word “what did Nate just talk about / give me.” Saves the agent from reading the full wiki for short context. (This vault uses the same `hot.md` mechanism.)
- Cross-vault querying — in his Herk-2 personal-assistant project’s `CLAUDE.md`: “There’s a wiki path at `~/Herkbrain/wiki/`. When you need information about me or my business that you don’t already have, go there. Read the hot cache first, then the index, then the relevant subindex, then search.” Token usage dropped when he switched his executive assistant from internal `context/` files to wiki-pointing.
- Linting — Karpathy mentions running an LLM lint over the wiki periodically: find inconsistent data, fill missing data via web search, surface interesting connection candidates, find article candidates. (This vault implements lint as a documented operation in `karpathy-obsidian-vault-main-2/CLAUDE.md`.)
- Does this kill RAG? Not at enterprise scale. At hundreds of pages with good indexes, the wiki wins on every dimension (cost, infra, understanding). At millions of documents you still want a real RAG pipeline.
Bonus — Cowork Live Artifacts (lightweight dashboards)
Cowork has a Live Artifacts surface for quick dashboards. Nate runs three:
- QuickBooks dashboard — revenue / expenses / net profit / cash on hand / runway / financial trends with AI commentary
- Weekly commitments dashboard — ClickUp tasks, completion %, what’s at risk, who to follow up with
- Fireflies dashboard — meeting transcripts
Setup: Cowork → New Artifact → describe what you want + which MCPs/connectors to use → 5-minute build.
For a unified business dashboard with refresh logic, build it in Claude Code (artifacts → real app → routines for refresh, or trigger.dev / Modal for scheduling). More moving parts.
Mindset note from Nate — start with artifacts as PoC, decide if you’ll actually use it before investing in the custom build. He uses on-demand “pull data from these five sources for the past month, generate a report” more than he uses dashboards.
Daily / Weekly Loop
Daily: every morning, “help me plan my day.” If it does well — pulls priorities, messages, calendar — keep going. If not, note what context was missing, patch it tomorrow. End of day: what skills did you use? What did you have to correct? What did you copy-paste? Tomorrow, fix the gaps.
Weekly: every Friday, run /audit. See where you sit on the Four Cs. Note skills used daily — if something gets used daily, automate it.
When NOT to Use an Agentic Skill
Sometimes the answer is a deterministic workflow, not a skill. Nate’s exact words: “Boring is beautiful. Workflows beat AI agents 9/10 in real businesses. Most automations we built for clients barely used AI; full autonomy was almost never needed once we decomposed by task.”
If you need a deterministic Python script: ask Claude Code to build it, deploy to trigger.dev or Modal for 24/7 runtime. Natural language all the way down.
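The kind of "boring" deterministic script this means has no agent loop at all: a plain function you could schedule on trigger.dev or Modal. A hypothetical sketch; the task shape and report format are illustrative, not from the course:

```python
from datetime import date

def weekly_report(tasks: list, today: date) -> str:
    """Deterministic status report: no AI, same input -> same output.
    Each task is a dict with 'name', 'status', and 'due' (a date)."""
    done = [t for t in tasks if t["status"] == "done"]
    open_tasks = [t for t in tasks if t["status"] != "done"]
    overdue = [t for t in open_tasks if t["due"] < today]
    lines = [
        f"Weekly report for {today.isoformat()}",
        f"Completed: {len(done)} / {len(tasks)}",
        f"Overdue: {len(overdue)}",
    ]
    lines += [f"- OVERDUE: {t['name']}" for t in overdue]
    return "\n".join(lines)
```

Claude Code writes and deploys the script in natural language; the script itself never calls a model, which is the "workflows beat agents" point.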
Three Success Indicators (per the repo README)
Subjective, not metric. The README explicitly calls these “lived experiences that show up in your week,” not KPIs.
1. Team-reaches-out:
“A teammate messages you with a question. You realize your AIOS would answer it better, faster, and with exact sources — even if you were awake and free. So you ask your AIOS too. That’s the moment you stop being a bottleneck for your own knowledge.”
2. Context-switching reduction:
“You stop opening new tabs. You stop launching the desktop app. When something new lands, your first move is to ask the AIOS, not to open six things. The default surface for thought work shifts. Silent. Compounding.”
3. Knowledge-leaves-your-head:
“You stop trying to remember business facts. You don’t rehearse what you decided last quarter or what your customer said in that meeting. You trust the retrieval. The AIOS holds the truth, you hold the questions.”
Two of three within a month → the AIOS took.
Per the README: “Personal foundation → company AI-readiness. Once these indicators show up for one person, the same data architecture powers everything else. … A company where every operator runs a personal AIOS is a company that’s actually AI-ready.”
Try It
For a WEO Marketly operator wanting to start in 30 minutes (without buying anything):
- Adopt the Three Ms first. Especially the default shift — “how could AI do this, or at least 30% of it?” — for one full day on every task. The mindset is a precondition; without it the rest doesn’t earn its time.
- Sketch the seven Tier-One Buckets for your role on paper. What lives in Revenue, Customer, Calendar, Comms, Tasks, Meetings, Knowledge for you? This is 10 minutes that pays back forever.
- Pick one bucket, wire one connection. Don’t try all seven. Pick the highest-leverage one (most likely Tasks or Comms for an agency role). Let Claude Code research the API, write the markdown reference, set up .env with placeholders. Use a separate AI account for credentials.
- Build one skill. Pick something you do 3+ times a week and walk Claude through it manually once, then say “turn this into a skill, ask whatever you need.” Run it next time. Notice friction. Tell Claude what to fix. By run 10-30, it’ll be sharp.
- Set up the Karpathy wiki pattern for your role. This vault is the live example — clone Karpathy’s gist, run the setup, drop your own raw docs in. The 95% token-savings claim from Nate’s source is reproducible at the 100-page scale.
- Reach for cadence last. Routines, scheduled tasks, and /loop only earn their setup time once Context, Connections, and Capabilities exist. Cadence wraps mature capabilities.
- Apply WEO governance to every connector decision. Especially: separate AI accounts (Nate’s “Up AI” pattern), permission-mode discipline (Ask-permissions for client-data tools), audit logging.
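The “.env with placeholders” step is worth making concrete: Claude Code writes the file with dummy values, you fill in real keys from the separate AI-only account, and nothing runs until every placeholder is replaced. A minimal stdlib-only sketch; the variable names are illustrative, not from the course:

```python
ENV_TEMPLATE = """\
# Fill from the dedicated AI service account, never a personal login
CLICKUP_API_TOKEN=replace-me
SLACK_BOT_TOKEN=replace-me
"""

def load_env(text: str) -> dict:
    """Minimal .env parser: KEY=VALUE lines, blank lines and
    '#' comments ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def unconfigured(env: dict) -> list:
    """Keys still holding the placeholder value; refuse to run
    a connection while this list is non-empty."""
    return [k for k, v in env.items() if v == "replace-me"]
```

In practice a library like python-dotenv does the parsing; the guard-against-placeholders check is the part worth keeping either way.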
If you want a pre-built starting point, Nate’s free Skool community ships the AIS-OS template repo + the skill-builder skill + the bundled /onboard / /audit / /level-up skills. Apply the 6-question vetting framework — it’s free, but vet the bundled scripts before running.
Implementation
Tool/Service: Claude Code in VS Code (free Claude Code extension), paid Claude account ($20/mo minimum), AIS-OS template repo (free in Nate’s Skool community).
Setup notes:
- Course recommends VS Code over Claude Desktop, but the Desktop app’s Dispatch + Computer Use features earn their place for non-Code workflows (see Rick Mulready’s operator-track tour from earlier this session).
- Voice dictation is genuinely a productivity multiplier per the course — Nate’s preferred tool is Glo (he’s on the Glo team — disclosed in the source). WhisperFlow works too. Voice-first prompting is the unlock.
- The Karpathy LLM-wiki pattern runs natively on Obsidian — install free at obsidian.md.
Cost: Max plan ($100-200/mo) for cloud-routine quota beyond the ~5/day Pro allowance. Optional GWS CLI is free + open source. Optional Glo voice tool — pricing not stated in course.
Integration notes:
- The course is a 2+ hour free YouTube video bundled with a free Skool community membership and a paid community pitch. Standard lead-magnet-funnel pattern (see Brandon Storey’s course for the documented playbook). Technical content is genuine; community conversion is the secondary outcome.
- Auto-caption normalizations applied to the cleaned source (already corrected in raw/): “Cloud Code” → Claude Code, “Naden” → n8n, “Codeex” → Codex, “school” → Skool, “Excal” → Excalidraw, “mirrorboard” → Miro, “Calendarly” → Calendly, “key.ai” → kling.ai, “GLO/Glyo/Glido” → Glo, “ChromeDev tools” → Chrome DevTools, “Carpathy” → Karpathy, “OOTH” → OAuth, “playright” → Playwright, “Hertz” → Herk, “ChatBT” → ChatGPT, “anti-gravity” → Antigravity, “AISOS/AIS OS” → AIS-OS.
- For WEO Marketly use, the strongest fits look like: (a) the Karpathy wiki pattern is already in production here — use Nate’s hot.md cross-vault-query pattern to extend it; (b) the Four Cs framework slots cleanly into the Intermediate Course as a reference framework; (c) the API-over-MCP token-efficiency pattern is worth surfacing in token optimization; (d) the cloud-routines gotchas list is essential for any WEO routine that migrates from local to cloud.
Convergent Patterns Across This Session’s Three Ingests
This article is the third of three Claude Code system-architecture videos ingested 2026-05-01. Worth surfacing the shared patterns:
| Pattern | Simon Scrapes (connected-skills-agentic-os) | Brandon Storey (lead-magnet course) | Nate Herk (this article) |
|---|---|---|---|
| Shared brand context as foundation | brand-context/voice.md etc | ”Brand guidelines on disk” Claude Code finds by name | context/about-me.md + context/about-business.md + context/voice.md |
| Self-improving skills via feedback | learnings.mmd folded into SKILL.md end-of-day | Iterate by talking back to Claude Code (4-6 turns) | The 10-30 run feedback cycle to “sharp” |
| Self-maintenance | Heartbeat at session start + wrap-up at session end | Not addressed | CLAUDE.md edited 2× per day; /audit for periodic scoring |
| Skill chaining | Foundation → strategy → execution → ops tiers | Phase-based pipeline (ideation → production → triple lead magnet → email → landing → distribution) | Skill → routine pipeline; “/level-up” identifies the next skill to add |
| OpenClaw / Hermes lineage | Explicit | Not mentioned | Mentioned as portability targets |
Three independent practitioners, same answers on 4 of 5 dimensions. The convergence is itself evidence the pattern is correct.
Related
- The 2026 Claude Code AIOS Pattern — convergent-evidence synthesis pairing this article with Simon Scrapes’ connected-skills agentic-OS and Brandon Storey’s lead-magnet course; the convergent-patterns table from the body of this article extracted into a first-class connection article
- Karpathy Techniques for Claude Code — the LLM-wiki pattern Nate explicitly endorses (and this vault implements)
- Karpathy Pattern (topic index) — community implementations of the same wiki pattern
- Agent Skills Overview — official Anthropic doc for skill mechanics
- Skill Design Patterns — five patterns for individual skill structure
- Complete Guide to Building Skills — Anthropic’s official guide
- skills repo — canonical skills source
- agents — 184-agent marketplace
- Corey Haines marketing-skills bundle — same shared-context pattern in production
- Charlie Hills social-media-skills — same voice-builder pattern
- Claude Code Routines — cloud routines
- Claude Code Scheduled Tasks — desktop + /loop
- Claude Code Hooks — lifecycle event triggers
- Claude Cowork — Cowork Artifacts dashboards live here
- Computer Use — Chrome DevTools-style visual validation
- oh-my-claudecode (OMC) — orchestration-first sibling third-party framework
- Superpowers (Jesse Vincent) — closed-methodology sibling third-party framework
- SuperClaude — configurable-surface sibling third-party framework
- Simon Scrapes connected-skills agentic OS — earlier in this session, OpenClaw-lineage take on the same problem
- OpenClaw Use-Case Cookbook — the OpenClaw runtime alternative Nate names
- Hermes Agent — alternative AI-OS harness Nate references
- Brandon Storey lead-magnet course — earlier in this session, single-pipeline application of the same patterns
- Rick Mulready 7 use cases — earlier in this session, Cowork Operator-track companion
- Claude Code Token Optimization — context for the API-over-MCP token-efficiency move
- WEO Intermediate Claude Course — natural slot for the Four Cs / Three Ms frameworks as reference material
Open Questions
- The AIS-OS repo is distributed via a free-but-gated Skool community signup. What’s actually inside vs. what’s documented in the video? Worth a survey of the skills (/audit, /level-up, /onboard, skill-builder) and a vetting pass per the 6-question framework before recommending wholesale adoption to WEO operators.
- The Glo voice tool — Nate is on the Glo team (disclosed in source). What’s the actual privacy model? “Faster, more private, more agentic than Whisper Flow” is the speaker’s claim, not a verified spec. WEO governance check needed before recommending for client-data dictation.
- The “API endpoints over MCP” preference is presented as universally better for token efficiency. Counterpoint: MCP servers also do auth handshakes, retries, and schema validation that you’d otherwise rebuild per-skill. At what scale does the per-skill maintenance cost cross over the MCP token cost? No data presented in the course.
- Cloud routines plan limits (~5 / 15 / 25 per day per Pro / Max / Team) were stated in the source as of April 2026. Verify against current Anthropic docs — these tiers shift frequently.
- The Karpathy wiki cross-vault-query pattern (point a project at another vault’s wiki/ for context) is the most leveraged tip in the entire course for this vault specifically — worth experimenting with for WEO’s BAW project, OmniPresence, etc.
- Nate names “Codex,” “Antigravity,” “OpenClaw” as portability targets but doesn’t show a real cross-tool migration. Codex was tested (“two minutes to adjust”); Antigravity is hypothetical; OpenClaw is implied. Worth verifying the AIS-OS template’s actual portability before banking on it.
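On the API-over-MCP crossover question above, the shape of the comparison is simple even without data: per-call token savings vs. a per-skill maintenance overhead. A back-of-envelope sketch; every number below is a placeholder, not a measurement from the course:

```python
def breakeven_calls(mcp_tokens_per_call: int,
                    api_tokens_per_call: int,
                    token_cost: float,
                    skill_maintenance_cost: float) -> float:
    """Number of calls after which the token savings of a hand-rolled
    API skill repay its extra maintenance cost vs. using an MCP server.
    token_cost is dollars per token; maintenance cost is dollars per
    skill per period (auth, retries, schema drift you now own)."""
    saved_per_call = (mcp_tokens_per_call - api_tokens_per_call) * token_cost
    return skill_maintenance_cost / saved_per_call
```

For example, if an MCP tool schema burns ~2,000 tokens per call, a lean API skill burns ~200, and you charge yourself $5 of maintenance per skill per month, the skill pays off somewhere near a thousand calls a month; below that volume the MCP’s bundled auth and retries may be the better deal.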