Source: wiki synthesis: Simon Scrapes’ Connected Skills as Agentic OS, Brandon Storey’s Lead Magnet Creation course, Nate Herk’s AIS-OS masterclass, Karpathy’s LLM-wiki pattern, Agent Skills Overview, Skill Design Patterns + (May 2026 update) raw/Run_Your_ENTIRE_Business_on_Claude_Cowork_Agentic_OS.md — Jay (Robo Nuggets) Cowork-native AIOS framework

Three independent practitioners shipped Claude Code “AI operating system” frameworks within a six-week window in spring 2026. They wrote for different audiences (solopreneurs, freelance copywriters, SMB operators). They wired up different tool stacks (vanilla Claude Code; Claude Code + ChatGPT + Beehiiv + Kajabi + ManyChat; Claude Code + Google Workspace CLI). They came from different places — one from agentic-systems content, one from copywriting coaching, one from running a 350,000-member AI automation community. None of them appear to have coordinated. Yet on the architectural questions that matter, they converged on the same answers. This article documents the convergence as evidence-of-correctness for what’s becoming the canonical 2026 Claude Code AIOS pattern, and flags the dimensions where they diverge as open design questions.

The three sources

| Source | Date | Audience | Tool stack | Distinctive framing |
| --- | --- | --- | --- | --- |
| Simon Scrapes — Connected Skills as Agentic OS | 2026-03-15 (YouTube, 13:24) | Solopreneurs / agency operators | Vanilla Claude Code | OpenClaw memory model ported to Claude Code |
| Brandon Storey — Lead Magnet Creation course | 2026-04-15 (YouTube, 1:55:54) | Freelance copywriters / coaches | Claude Code + ChatGPT + Beehiiv + Kajabi + Zapier + Vizard.ai + ManyChat | “AI copywriter as data synthesizer” |
| Nate Herk — AIS-OS masterclass | 2026-05-01 (YouTube + public MIT repo) | SMB operators / AI consultants | Claude Code + GWS CLI | The Three Ms™ + Four Cs™ frameworks |

The Five Convergent Design Dimensions

Each row below represents an independent design choice. Each cell is the practitioner’s specific implementation. Convergence on the principle, not the syntax.

| Dimension | Simon Scrapes | Brandon Storey | Nate Herk |
| --- | --- | --- | --- |
| 1. Shared brand context as foundation | brand-context/voice.md, icp.md, positioning.md, samples/, assets.md — bootstrapped via 3 foundation interview skills | “Brand guidelines on disk” — Claude Code locates by name and uses for every PDF + graphic | context/about-me.md, context/about-business.md, context/voice.md, references/ — populated by /onboard 7-question interview |
| 2. Self-improving skills via feedback | learnings.mmd — feedback bucketed by skill, folded into SKILL.md at end-of-day by the wrap-up skill | Iterate by talking back (“make it more centered”), 4-6 conversational turns to acceptable | The 10-30 run feedback cycle — watch the skill work, note friction, tell Claude what to fix; “by run 30 the skill is sharp” |
| 3. Skill as the capability primitive | A folder at .claude/skills/<name>/ with SKILL.md; categorized by tier | Each pipeline phase wraps a Claude Code skill (the downloaded design skill is the named example) | Repo ships .claude/skills/<name>/SKILL.md; explicitly: “skills are ideation prompts and thinking tools, not heavy automations” |
| 4. Skill chaining / dependency direction | Foundation → Strategy → Execution → Ops tiers; lower tiers feed upper tiers | Six-phase pipeline (ideation → production → triple format → email → landing → distribution); each phase outputs the next phase’s inputs | Skill → routine pipeline; /level-up identifies the next skill worth building; ops tier wraps mature capabilities |
| 5. The system improves itself, not just outputs | Heartbeat at session start scans skills/ and reconciles against CLAUDE.md; wrap-up commits learnings | (Implicit — talking back is the only feedback channel) | CLAUDE.md edited 2× per day; /audit Four-Cs scoring runs weekly; /level-up identifies the next leverage; the skill-builder skill builds skills |
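Dimension 5's heartbeat idea is easy to prototype. A minimal sketch, assuming the conventional .claude/skills/ layout (the function name and project layout are illustrative, not Simon's actual implementation), that flags skill folders CLAUDE.md never mentions:

```python
from pathlib import Path

def heartbeat(project: Path) -> list[str]:
    """Session-start reconciliation: report skill folders on disk that
    the CLAUDE.md operating manual does not mention yet."""
    claude_md_path = project / "CLAUDE.md"
    claude_md = claude_md_path.read_text() if claude_md_path.exists() else ""
    skills_dir = project / ".claude" / "skills"
    missing = []
    if skills_dir.exists():
        # Each subfolder of .claude/skills/ is one skill.
        for skill_dir in sorted(p for p in skills_dir.iterdir() if p.is_dir()):
            if skill_dir.name not in claude_md:
                missing.append(skill_dir.name)
    return missing
```

A wrap-up counterpart would append the day's learnings and commit; this sketch covers only the scan-and-reconcile half.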

Three of three converge on dimensions 1, 2, 3, and 4. Dimension 5 (system self-improvement) is shared by Simon and Nate but not Brandon — Brandon’s framework is single-pipeline-deep, not platform-wide.

This is the central finding. Three teachers, three audiences, three tool stacks, all deciding independently that the answer is shared brand context + skill primitives + iterative skill improvement + tier-based chaining. When the same answer surfaces from independent paths, the answer is probably correct.
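The convergent layout is small enough to scaffold in a few lines. A hedged sketch of it (file names follow Simon's and Nate's conventions from the table above; the weekly-report skill and its frontmatter fields are invented for illustration):

```python
from pathlib import Path

root = Path("aios-demo")

# Shared brand context: one paragraph per file is enough to start.
for name in ["voice.md", "icp.md", "positioning.md"]:
    f = root / "brand-context" / name
    f.parent.mkdir(parents=True, exist_ok=True)
    f.write_text(f"# {name.removesuffix('.md').title()}\n\n(one paragraph to start)\n")

# One skill folder: SKILL.md is the whole capability primitive.
skill = root / ".claude" / "skills" / "weekly-report" / "SKILL.md"
skill.parent.mkdir(parents=True, exist_ok=True)
skill.write_text(
    "---\n"
    "name: weekly-report\n"
    "description: Draft the weekly client report in the house voice.\n"
    "---\n\n"
    "Read brand-context/voice.md and brand-context/icp.md before writing.\n"
)
```

Everything above is plain markdown on disk, which is exactly why the pattern ports across harnesses.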

Where They Diverge (the design questions still open)

The convergent pattern is robust on dimensions 1-4 above, and partially on dimension 5. The divergences are the open design questions for anyone building on the pattern in mid-2026:

A. Runtime portability

| Source | Stance |
| --- | --- |
| Simon Scrapes | Explicitly cites the OpenClaw memory model as inspiration. Implements on vanilla Claude Code, but the design is portable. |
| Brandon Storey | Single-runtime (Claude Code + auxiliary SaaS). No portability conversation. |
| Nate Herk | Tool-agnostic by stated design. Tested portability to Codex (~2 minutes); names Antigravity, OpenClaw, Hermes Agent as future targets. |

Open question: Is portability earned at design time (writing skills as plain markdown that any harness can read) or at migration time (rewriting on the new harness)? Nate claims design-time; Simon’s port from OpenClaw memory model is migration-time evidence. No clean test yet.

B. Multi-tool vs Claude-Code-centric

| Source | Stance |
| --- | --- |
| Simon Scrapes | Stay in Claude Code. Skills do everything. |
| Brandon Storey | Multi-tool by necessity (the lead-magnet pipeline crosses Beehiiv / Kajabi / ManyChat / ChatGPT). Claude Code is the production surface, not the operations surface. |
| Nate Herk | Claude Code is the primary surface. GWS CLI is the one named exception — and it’s still a CLI Claude Code calls. Cowork Live Artifacts for lightweight dashboards only. |

Open question: Where’s the right boundary between the AIOS surface (Claude Code) and the systems-of-record (CRM, email, calendar, content tools)? Brandon is the most honest about this — the AIOS doesn’t replace the SaaS stack, it orchestrates across it.

C. Self-maintenance as a first-class layer

| Source | Stance |
| --- | --- |
| Simon Scrapes | Heartbeat at session start, wrap-up at session end — an explicit named layer of the architecture. |
| Brandon Storey | Not addressed. The pipeline is one-shot per lead magnet. |
| Nate Herk | The Four Cs put Cadence as a peer of Capabilities; /audit and /level-up are recurring system-improvement skills. |

Open question: Is self-maintenance worth the engineering tax for a single-operator AIOS, or only when the AIOS scales to a team? Simon and Nate built it in early; Brandon never reached for it. The sample size is small and the workload mix differs.

D. Trademark / IP posture

| Source | Stance |
| --- | --- |
| Simon Scrapes | “Agentic OS” is the speaker’s branded version of the pattern; offers paid community access. No trademarks in the public material. |
| Brandon Storey | Six Figure Copy Academy is paid; the lead-magnet course itself is the funnel-top free artifact. No trademarks on the framework. |
| Nate Herk | “The Three Ms of AI™” and “The Four Cs of an AIOS™” are both ™ © 2026 Nate Herk. Repo is MIT, but the framework names are protected. “Use freely; don’t repackage as your own.” |

Open question: Is the trademark strategy load-bearing for adoption (a clear, unique name spreads better than a generic pattern) or load-shedding (it adds friction for community contribution)? Three weeks of evidence is too short to say.

What This Means for the Karpathy LLM Wiki Pattern

Karpathy’s LLM-wiki pattern — the architecture this wiki implements — is itself convergent with the three practitioners’ frameworks at the implementation level:

  • Karpathy’s raw/ + wiki/ + CLAUDE.md pattern mirrors Simon’s brand-context/ + skills/ + CLAUDE.md, Brandon’s brand-guidelines-on-disk + skill-driven production, and Nate’s context/ + references/ + CLAUDE.md.
  • Karpathy’s index files (_master-index.md, _index.md) play the same role as Nate’s CLAUDE.md “Where things live” section.
  • Karpathy’s append-only log.md mirrors Nate’s decisions/log.md.
  • Karpathy’s hot.md session-continuity cache is exactly what Nate’s AIS-OS course cites as the cross-vault-query token-saver: “When you need information about me or my business that you don’t already have, go there. Read the hot cache first, then the index, then the relevant subindex, then search.”
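The layered read order Nate describes can be expressed as a tiny resolver. A sketch under assumptions (the function name is invented; the file names mirror this wiki's hot.md / _master-index.md / _index.md layout; nothing here is Nate's actual code):

```python
from pathlib import Path

def reading_order(vault: Path, topic_hint: str) -> list[Path]:
    """Cheapest-first context resolution: the hot cache, then the
    master index, then any topic subindex whose folder name matches
    the hint. Full-text search is the fallback, not the first move."""
    order = []
    for candidate in [vault / "hot.md", vault / "_master-index.md"]:
        if candidate.exists():
            order.append(candidate)
    # Topic subindexes live at <topic>/_index.md under the vault root.
    for sub in sorted(vault.rglob("_index.md")):
        if topic_hint.lower() in sub.parent.name.lower():
            order.append(sub)
    return order
```

The point of the ordering is token economy: most queries terminate at the hot cache or master index and never touch an article body.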

Nate explicitly endorses the Karpathy pattern as a bonus section in his course — the pattern this vault has been quietly running for 8 months turns out to be the natural state of mature Claude Code knowledge management. The convergence isn’t just at the AIOS-architecture layer; it extends down to the file-and-folder layer.

This validates the bet that this Karpathy LLM Wiki is making — and suggests the natural next step is to operationalize Nate’s cross-vault-query pattern: point a project’s CLAUDE.md at this wiki/ directory, watch token usage drop, watch context-quality climb.

What This Means for WEO Marketly

The convergent pattern maps cleanly onto WEO surfaces:

| Convergent dimension | Existing WEO surface | Gap |
| --- | --- | --- |
| Shared brand context | OmniPresence voice profiles; client brand-context files (per-account) | No single shared brand-context/ per client across surfaces; each project rebuilds it |
| Self-improving skills via feedback | OmniPresence script feedback loops; Clawdbot output review | Feedback isn’t folded back into the skill file; it lives in chat history |
| Skill as capability primitive | Limited use; most automation lives as Python or n8n flows | A skills-first refactor would unlock the rest of the pattern |
| Skill chaining | BAW pipeline (Blog-Agent-Worker) approximates this | The chain happens in code, not as composable skills |
| Self-maintenance | Hermes learning loop, Clawdbot weekly refresh | Not a Claude-Code-native pattern yet |

Concrete next step for the WEO AI Council: pilot the AIS-OS-style three-skill kit (/onboard, /audit, /level-up) against one WEO surface — most likely OmniPresence, since brand-context per dental client is already structured. If the pilot produces measurable gains on Nate's success indicators (the team reaches the AIOS instead of you; context-switching drops; knowledge leaves your head), generalize across BAW / Clawdbot / GHL.

Try It (for any operator on this wiki)

Three concrete moves anyone can make this week to apply the convergent pattern, ordered by leverage:

  1. Pick the shared-context folder. Create brand-context/ (or whatever name you prefer) at your project root. Add voice.md, icp.md, positioning.md — each one paragraph if that’s all you have. Have one existing skill open them at the top of its instructions. This is the foundation; everything else compounds on it.
  2. Make one skill self-improving. Pick a skill you run weekly. Next time you correct its output, don’t fix it in chat — append the correction to a learnings/<skill-name>.md file. End of week, ask Claude Code to fold the learnings into the SKILL.md. You’ve just built Simon’s / Nate’s feedback loop.
  3. Test the cross-vault-query pattern with this wiki. In any Claude Code project’s CLAUDE.md, add: “There’s a knowledge base at ~/Auto1111/Claude/karpathy/karpathy-obsidian-vault-main-2/wiki/. When you need information about WEO, Claude Code, marketing, AI, etc., that you don’t already have, go there. Read _master-index.md first, then the relevant topic _index.md, then the specific article. Don’t read the wiki unless you actually need it.” Compare token usage before/after. (This is Nate’s documented pattern; the vault is structured to support it.)
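Step 2's append-don't-fix loop is one function. A minimal sketch, assuming the learnings/<skill-name>.md layout from step 2 (the function name is invented):

```python
from datetime import date
from pathlib import Path

def log_learning(skill_name: str, note: str, root: Path = Path(".")) -> Path:
    """Append a dated correction to learnings/<skill>.md so the
    end-of-week fold into SKILL.md has everything in one place."""
    f = root / "learnings" / f"{skill_name}.md"
    f.parent.mkdir(parents=True, exist_ok=True)
    with f.open("a", encoding="utf-8") as fh:
        fh.write(f"- {date.today().isoformat()}: {note}\n")
    return f
```

The fold itself stays a Claude Code task: at week's end, point it at the learnings file and the SKILL.md and ask for the merge.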

The convergent answer doesn’t require buying anything. The framework lives in plain markdown.

Update — May 2026: a fourth convergent voice (Cowork-native)

The “fourth convergent voice” question raised below has its first answer: Jay (Robo Nuggets) shipped Run Your ENTIRE Business on Claude Cowork — Agentic OS (YouTube, May 6 2026, source raw/Run_Your_ENTIRE_Business_on_Claude_Cowork_Agentic_OS.md). Jay’s framework converges on the same five dimensions but layers on a Cowork-specific seven-layer architecture:

| Layer | Description |
| --- | --- |
| 1. System instructions | CLAUDE.md as “operating manual” — with Cowork-only emphasis, since IDE-style workflows and a repo-level claude.md don’t apply |
| 2. Security | Sandbox VM + permissions — Cowork-native (the sandbox is the runtime substrate) |
| 3. Program | Skills, sub-agents, plugins (with an explicit warning to vet plugins outside Anthropic’s marketplace) |
| 4. Context (most important) | Knowledge + state + memory, explicitly differentiated. Knowledge = “what the system needs from you” (markdown files). State = “what is happening during runtime” (databases). Memory = “long-term learning” (RAG, etc., though Jay reports rarely needing it) |
| 5. MCP | External-system access — required for Cowork because there is no native IDE/CLI access |
| 6. Scheduling | Smart schedules so the agent runs autonomously without manual kicks |
| 7. Mobile | Dispatch (Cowork-native) for daily-brief-style mobile reach. Telegram integration TBD |

Audit-by-pod approach. Jay’s /onboard-style intake breaks the business into acquisition / delivery / support / operations — a four-pod audit that produces the brand-context layer pod-by-pod. This is operationally very similar to Brandon Storey’s six-phase pipeline mapping (ideation → production → triple format → email → landing → distribution) but framed for SMB business operations rather than copywriter pipelines.

Adds to the convergent table:

| Dimension | Jay (Cowork-native AIOS) |
| --- | --- |
| 1. Shared brand context as foundation | Pod-by-pod (acquisition/delivery/support/operations) audit feeds the context layer; explicit knowledge-vs-state-vs-memory decomposition |
| 2. Self-improving skills via feedback | Implicit (continuous correction during build); not a named layer |
| 3. Skill as the capability primitive | Yes — .claude/skills/<name>/ skills are the program layer; explicitly recommends marketplace-vetted plugins for distribution |
| 4. Skill chaining / dependency direction | Pod-led (acquisition skills feed delivery skills feed operations skills); MCP-mediated where external systems are involved |
| 5. The system improves itself, not just outputs | Not strongly addressed in the build phase covered |

Four voices, with four-of-four convergence on dimensions 1-4. Dimension 5 still splits Simon + Nate from Brandon + Jay. The Cowork-vs-Claude-Code-vs-Hermes runtime question is now a richer matrix:

  • Claude Code: Simon, Nate, Brandon (production)
  • Cowork: Jay (Cowork-native)
  • Cross-runtime: Nate (designed for portability), Brandon (multi-tool by necessity)

Implication for the open question on multi-tool vs Claude-Code-centric. Jay is the first convergent voice to position Cowork as the AIOS surface itself (not a Claude Code companion). The seven-layer model treats Cowork as the OS and the SaaS stack as MCP-reachable peripherals. This sharpens Brandon’s “AIOS doesn’t replace the SaaS stack, it orchestrates across it” thesis — the surface where orchestration happens is now a deliberate choice (Claude Code vs Cowork vs Hermes), not a default.

Open Questions

  • Is there a fourth convergent voice on the way? The pattern feels incomplete with three sources. First answer: Jay (Robo Nuggets) — May 6 2026, Cowork-native AIOS framework. Worth watching for Charlie Hills (already a cited skill-builder with voice-builder matching the shared-context pattern) or Corey Haines (Marketing Skills Bundle uses the same product-marketing-context foundation pattern) to publish a system-level framework. Both are halfway there at the skill-bundle level.
  • What’s the right name for the convergent pattern? Simon calls it “Agentic OS.” Brandon doesn’t name his framework. Nate trademarks “Four Cs of an AIOS™.” This article calls it “the 2026 Claude Code AIOS pattern” — generic, descriptive, free. If a single name catches on in the community, it’ll likely be Nate’s “AIOS” given the trademarked attention.
  • Is the convergence evidence of design correctness or evidence of shared influences? All three creators publish on YouTube; the AI/automation YouTube ecosystem is small. They could be downstream of the same blog post or Twitter thread. Worth watching for the original source — if there is one.
  • Does the pattern hold at team scale? All three frameworks are single-operator. The “team reaches the AIOS instead of you” success indicator implies multi-user, but none of the sources show a multi-user implementation. WEO is a natural test bed for whether the pattern survives multi-operator usage.
  • Where does the Karpathy LLM-wiki pattern sit relative to the AIOS pattern? This article treats them as compatible — Karpathy’s pattern is the knowledge layer; the AIOS pattern is the skill+context+cadence layer. But there might be a deeper integration (skills that read from the wiki, wiki articles that index skill outputs) worth designing. Open R&D.