Source: raw/Master_97_of_Codex_in_One_Hour.pdf — 12-page AIS+ Resource Guide (AI Automation Society, by Nate Herk, May 2026). Companion to Nate Herk’s Master 97% of Codex in One Hour YouTube video.
Nate Herk’s hour-long zero-to-shipping walkthrough of OpenAI Codex (the super app) — interface basics → projects → plan mode → external tool plugins → skills → slash commands / @ tagging / side chats / multi-tasking → personalities + pets → GitHub + Vercel pipeline → automations (scheduled workflows) → browser-use QA → mindset shifts. Sister to Nate’s Claude Code AIOS masterclass and Hermes Agent 1-Hour Course — same operator, different harness, explicit cross-tool composition framing throughout.
Key Takeaways
- Codex = ChatGPT super app with full local access. It can read/write Excel files, navigate local files, control mouse and keyboard, automate browsers, build skills, websites, apps, and video games, and run automations on a schedule. Codex vs ChatGPT (web): ChatGPT offers chat with connectors for limited functionality; Codex does everything ChatGPT does plus far more. Codex vs Claude Code: different harnesses, similar fundamentals — Claude Code uses Opus/Sonnet/Haiku, Codex uses ChatGPT models; both work out of the same local directory if you have proper instructions. You can mix and match.
- Strengths observed in practice (Nate's framing): Claude Code is better for exploratory thinking, brainstorming, and creative planning; Codex is better at pragmatic execution of long plans, troubleshooting, and finding issues Claude sometimes misses. Compose them on the same project directory.
- Required: a ChatGPT plan. The free tier is limited; the $20/month plan is recommended to start; Pro is worth it once you hit limits.
- Interface, models, settings. Same layout as ChatGPT — projects/chats on the left, conversation in the middle. Inside any chat: model + reasoning toggle.
- Models: GPT-5.5, GPT-5.4, others.
- Reasoning levels: Low / Medium / High / Extra High.
- Speed: Standard or Fast (Fast burns through usage quickly and is rarely needed).
- Recommended usage: Low = quick lookups; Medium = default for planning / brainstorming / most builds; High = big builds, complex skills, critical work; Extra High = last resort for stubborn bugs only. Each level consumes session usage at a different rate — picking too high a level for a simple task can cause Codex to overengineer.
- Rate limits: Rate Limits Remaining panel in Settings shows usage for current 5-hour session and weekly window. Bottom bar in any chat shows context window fill. Codex auto-compacts (similar to Claude Code with Opus). Sessions in Codex tend to last longer than Claude Code because GPT-5.5 is highly token-efficient.
- Project setup. A project = a local folder Codex works inside. Everything Codex creates lives in real files/folders on your machine — you can navigate, edit, or share that folder with any other agent harness.
- Create: New Chat → add new project → file explorer → choose/create folder (e.g., Desktop/Codex YouTube/YouTube Analytics Demo) → open → select.
- `agents.md` = Codex's onboarding doc, same role as Claude Code's `claude.md`. Every new chat reads this first to get oriented on the project goal, who the user is, and how to help. Include: who you are and what you do; the goal of this specific project; product direction and key constraints; important learnings (failures, edge cases) so they're not repeated.
- Pro tip: treat every failure as golden knowledge. When something breaks, ask Codex to log the lesson somewhere persistent in the project so the next chat doesn't repeat the mistake. Watch for `agents.md` bloat — eventually move learnings into dedicated reference files.
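A minimal `agents.md` skeleton covering those four items might look like this (every detail below is illustrative, not from the PDF):

```markdown
# agents.md — Codex onboarding doc (illustrative skeleton)

## Who I am
Solo YouTube creator; comfortable reading code, not writing it.

## Project goal
A dashboard that pulls YouTube comment data and surfaces insights.

## Direction and constraints
- Keep everything in this folder; secrets go in .env.local, never in code.
- Prefer small, verifiable steps; ask before destructive actions.

## Learnings (do not repeat)
- Writes to an open Excel file fail silently; close the file before refreshes.
```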
- Three permission modes (Settings > General).
- Default — pauses to ask before every meaningful action. Best when first getting started.
- Auto-review — asks for approval on sensitive actions like network access. Reasonable middle ground.
- Full Access — executes everything autonomously without asking. Use after you trust your skills and prompting flow.
- The orange warning on Full Access exists because horror stories of agents deleting databases or sending mass emails are real. Most come from context rot, vague instructions, or weak planning — not the tool itself. Build trust gradually.
- Plan Mode + the build loop. Plan Mode is a toggle that prevents Codex from executing anything — it only brainstorms and proposes a plan. Always start here when building something new.
- Switch on Plan Mode.
- Describe the goal in natural language (Glideo voice-to-text speeds this up).
- Codex asks clarifying questions, then proposes a plan.
- Iterate on the plan: edit, refine, change scope.
- Submit the plan, Codex executes.
- Codex runs verification loops automatically — visual passes, browser checks, screenshots.
- Review and iterate further.
- Mindset shift: when you don’t know if something is possible, don’t Google it or book a consulting call — ask Codex to research and explain it.
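An illustrative first message for Plan Mode (the wording is invented; the goal mirrors the guide's YouTube-analytics example):

```
Plan only, no execution yet.
Goal: a local dashboard that pulls my YouTube comments and surfaces insights.
Constraints: keep all data in this project folder; secrets live in .env.local;
verify pages in the browser before calling anything done.
Ask me clarifying questions first, then propose a step-by-step plan I can edit.
```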
- External tool integrations = "plugins" (overlapping with MCP servers, skills, and connectors). Categories:
- Dev/AI: Hugging Face, Vercel, GitHub, Game Studio.
- Design: Remotion, Figma, HyperFrames, Canva.
- Productivity/Lifestyle: Google Drive, Slack, SharePoint, Teams.
- Most plugins connect by simply signing in (like Slack or Gmail).
- When no plugin exists (e.g., YouTube Data API): ask Codex to research and set up the connection itself. Example: ask how to access the data and choose API key vs OAuth → Codex generates a step-by-step plan with exact Google Cloud Console steps → you create the project + enable the API + create credentials → copy the API key → Codex creates `.env.local` in the project → paste the key → ask Codex to test the connection.
- Why `.env.local` matters: the dot-env naming convention marks it as a local secrets file, and Codex / Claude Code / standard `.gitignore` setups exclude it from public commits. Never put keys in random files called something like "secrets.txt."
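The pattern can be sketched in a few shell lines (the key name and value are placeholders; the real key comes from Google Cloud Console):

```shell
# Create the secrets file (placeholder key)
cat > .env.local <<'EOF'
YOUTUBE_API_KEY=replace-with-your-key
EOF

# Belt and suspenders: make sure git ignores it explicitly
grep -qxF '.env.local' .gitignore 2>/dev/null || echo '.env.local' >> .gitignore

# Load it the way a local dev server would before testing the connection
set -a; . ./.env.local; set +a
echo "key loaded: ${YOUTUBE_API_KEY:0:7}..."
```

The explicit `.gitignore` entry is the part that actually keeps the key out of commits; the leading dot alone only hides the file in directory listings.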
- Skills = reusable recipes. A skill is a markdown file of instructions that tells Codex how to do something better the next time.
- Easy way to build: brainstorm + execute a workflow with Codex normally → once the output is good, say “Turn that into a skill so every time I ask for [X], you do this exact flow.” → Codex reverse-engineers the steps into a skill file.
- Global vs Local: Global skills live in the `.codex` folder and apply across all projects. Local skills live inside a specific project. Ask Codex to convert: "Move this skill to global" or "Keep this skill local."
- Calling a skill: slash command (`/youtube-comment-insights`) or natural language ("grab my YouTube comments and give me insights").
- Built-in examples: ImageGen, OpenAI Docs, GitHub review follow-up, documents, browseruse, epubview, PDF, Skill Creator, and more.
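A plausible sketch of such a skill file, assuming the markdown-with-YAML-frontmatter format the guide describes elsewhere; the field names and steps are illustrative, not from the PDF:

```markdown
---
name: youtube-comment-insights
description: Pull recent YouTube comments and summarize sentiment and themes.
---

# YouTube Comment Insights

1. Load the API key from .env.local (YOUTUBE_API_KEY).
2. Fetch comments for the channel's latest videos via the YouTube Data API.
3. Group comments by theme, flag recurring complaints, score sentiment.
4. Write the report into the project and show a short summary in chat.
```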
- Slash commands, @ tagging, side chats, multi-tasking.
- Slash commands (`/`): `/autoreview`, `/codereview`, `/feedback`, `/mcp`, `/memories`, `/model`, `/reasoning`, `/personality` (friendly vs pragmatic), `/pets`, `/plan-mode`, plus all installed skills.
- @ Tagging: press `@` to tag a specific plugin or file in your project. Tagging the exact file path is far more efficient than asking Codex to search for it.
- Side Chats: click Open Side Chat at the top while a main task is running. It opens a parallel conversation with full project context — quick side questions without interrupting the main agent's work.
- Multi-tasking: each project can run multiple chats in parallel. Codex color-codes the sidebar (blue or yellow dots) when a session needs your attention.
- Personalities and Pets.
- `/personality`: Friendly or Pragmatic (concise, task-focused, direct — recommended default for most builds).
- Pets (Settings > Appearance): a small character at the bottom of your screen showing what Codex is working on. Options: Codex, Dewey, Sadie, BSOD, Stacky, Fireball.
- From localhost to live: GitHub + Vercel pipeline. When Codex builds a dashboard or web app, it serves on a localhost URL only you can access. To put it on the public web:
- Codex = where you build and edit; GitHub = cloud version control; Vercel = hosts the app on a public URL.
- GitHub + Vercel are tightly integrated — when Codex pushes to GitHub, Vercel auto-deploys. You only have to manage Codex.
- Setup: free GitHub account → ask Codex to sync to a new repo (browser-based GitHub CLI auth pop-up) → free Vercel account → Add New Project → connect to GitHub → import the repo → Deploy → ~30 seconds later you have a live `.vercel.app` URL (custom domains via DNS swap).
- Iteration flow: make changes locally in Codex → test on localhost → if you like it, ask Codex to push → Vercel auto-deploys → if you don't like it, the production site is never touched.
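Under the hood this is ordinary git plumbing. A rough offline sketch of what Codex's "push" step amounts to (the repo name and remote URL are placeholders; the network steps are commented out):

```shell
# Stand up a local repo the way Codex's GitHub sync would
mkdir -p demo-dashboard && cd demo-dashboard
git init -q
echo "<h1>Dashboard</h1>" > index.html
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -qm "initial dashboard"

# The sync step Codex drives via the GitHub CLI auth pop-up (placeholder URL):
# git remote add origin https://github.com/you/demo-dashboard.git
# git push -u origin main    # Vercel auto-deploys on every push to the linked repo

git log --oneline
```

Because Vercel watches the linked repo, the only command that ever "deploys" is the push itself; everything before it stays local and reversible.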
- Automations: scheduled workflows. The Automations tab schedules any chat or skill to run on a recurring basis (e.g., "Every Sunday at 5:00 PM, run the YouTube comment insights skill, refresh the data, and push the update to GitHub").
- Setup: in a project chat, describe the recurring workflow → Codex creates the automation with schedule, prompt, target project → review the prompt in the Automations tab → Run Now to test.
- Critical settings to check: model defaults to GPT-5.2 (change to 5.5 or your preferred model, otherwise runs are slow and weak); set the right reasoning level (medium or high for most data refreshes); confirm the right project is selected.
- Important limitation: automations are essentially a local cron job. If you close Codex or shut down your machine, automations stop. For 24/7 automations, you need cloud routines — see Claude Code Routines.
- When automations get stuck: if a run takes far longer than the manual task did, stop and ask Codex what went wrong. Common causes: an open file blocking a write; wrong model selected; missing context from the original session. After fixing, ask Codex to update the automation or skill so the same failure doesn’t repeat.
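The "local cron job" analogy maps directly onto crontab's five-field schedule; the PDF's Sunday example would look roughly like this (the command is a placeholder, since Codex schedules runs internally rather than exposing a documented CLI):

```
# min  hour  day-of-month  month  day-of-week(0=Sun)   command
  0    17    *             *      0                    <run the youtube-comment-insights skill>
```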
- Browser use and QA (`/browseruse`). Codex controls an in-app browser with a visible mouse cursor. Use cases:
- QA stress testing — Codex clicks around your dashboard, tries to break it, and reports bugs (Nate's testing: found 6 issues on a freshly built dashboard).
- Automation without an API — log into a service, download reports, change settings.
- Authenticated browsing — browser remembers cookies, can navigate past login walls.
- Build QA into your skills — bake browser-use QA passes directly into skills so Codex stress-tests visually before returning work. Compounds over time.
- Performance note: from Nate’s testing, Codex’s browser use feels noticeably smoother and more intelligent than other browser automation tools, including Playwright via CLI.
- Mindset shifts and best practices.
- Watch out for "dark code" — when vibe coding, you write code without knowing what it does. You don't need to understand every line, but you should fundamentally understand what each script is doing and why.
- Stop early, steer, save tokens — if a run is going down the wrong path, stop and ask a clarifying question. A 20-second human intervention saves minutes (or hours) of wasted token spend.
- Specificity beats searching — giving Codex an exact file path is more efficient than asking it to “look around and find” something.
- Mix and match tools — everything in your project is just files and folders. Run Claude Code in the terminal inside your Codex project, brainstorm with Claude, then tag that file in Codex and ask Codex to execute. Use the right tool for each task, not the same tool for everything.
- Migrating projects between harnesses — to move a Claude Code project to Codex, just ask: "Help me figure out what files to create to make this compatible with Codex." Usually it's just renaming `claude.md` to `agents.md` and a few minor tweaks. Takes about 30 seconds.
- The Codex Build Loop (PDF's quick reference table — 9 steps):
  1. Set up — create the project, write `agents.md`.
  2. Plan — switch to Plan Mode and brainstorm.
  3. Connect — hook up data sources (Plugins / `.env.local`).
  4. Build — execute the plan with the right model (Medium reasoning, Standard speed).
  5. Verify — run browser-use QA passes (`/browseruse`).
  6. Skill — convert the workflow into a reusable recipe.
  7. Deploy — push to GitHub, auto-deploy to Vercel.
  8. Automate — schedule recurring runs (Automations tab) with the correct model — don't leave it at the GPT-5.2 default.
  9. Iterate — refine skills and automations every run.
Where it fits in the wiki
- Sister course to Nate’s Claude Code AIOS masterclass and Hermes Agent 1-Hour Course — same operator, three harnesses (Claude Code = desk, Codex = local-but-different-model alternative, Hermes = on-the-go automation). All three explicitly recommend mix-and-match composition on the same project directory.
- Validates "scaffolding moves into the model" from the OpenAI side — Codex's auto-compaction, native browser use, and `agents.md` onboarding doc are the OpenAI parallels to Anthropic's `/context` + computer use + `CLAUDE.md`. Same convergent direction.
- Concrete cross-tool composition pattern. "Run Claude Code in the terminal inside your Codex project, brainstorm with Claude, then tag that file in Codex and ask Codex to execute" — a transferable workflow operators should adopt regardless of vendor preference.
- `.env.local` discipline is portable — the same rule applies across all three Nate Herk articles.
- Automations vs Routines — Codex's local-cron automations require an open machine; Claude Code Routines run remotely on Anthropic infra. The PDF explicitly cites Routines as the 24/7 alternative when you outgrow Codex automations.
Implementation
- Tool/Service: OpenAI Codex (the super app — separate from the legacy Codex API).
- Required: ChatGPT subscription. Free tier limited; $20/month plan to start; Pro after you hit limits.
- Project layout: local folder + `agents.md` + `.env.local` for secrets + global `.codex/` for cross-project skills.
- Default settings: Pragmatic personality, Medium reasoning, Standard speed, Auto-review permissions.
- Glideo for voice-to-text — same speech-to-text Nate uses with Hermes (he’s an official member of the Glideo team).
- Cost: ChatGPT subscription + token usage. Sessions tend to last longer than Claude Code (GPT-5.5 token-efficient).
- Integration notes: Skills are markdown files Codex can author/edit. Automations are local cron — for 24/7 use, use Claude Code Routines instead. Browser use via `/browseruse` is the QA layer. GitHub + Vercel = the publish pipeline.
Open Questions
- What “97%” means. The title claims 97% mastery in one hour — editorial claim or backed by a coverage metric? PDF doesn’t quantify the missing 3%.
- Cross-harness skill compatibility. Claude Code uses `~/.claude/skills/`, Codex uses `.codex/`. Are skill files (`SKILL.md` + YAML frontmatter) literally portable, or do they carry harness-specific extensions?
- Codex's MCP server support. The PDF mentions `/mcp` but doesn't show how to register an MCP server. Is the surface comparable to Claude Code's, or distinct?
- Cloud routines roadmap for OpenAI. The PDF acknowledges Claude Code Routines as the 24/7 path. Has OpenAI announced a cloud-routines equivalent for Codex?
- Cost comparison. Nate claims Codex sessions last longer — has anyone measured operator workflow cost on Codex vs Claude Code with numbers?
- Permission boundary granularity. Three modes (Default / Auto-review / Full Access) are coarser than Claude Code’s per-tool allow/deny + classifier. Is there per-tool granularity not surfaced in the PDF?
Try It
- Subscribe to ChatGPT $20 plan if not already.
- Open Codex, create a project for one workflow. Write `agents.md` first — who you are, project goal, key constraints, prior failures.
- Toggle Plan Mode and describe the goal. Iterate on the plan before submitting. Don't skip planning.
- Set personality to Pragmatic and reasoning to Medium as your default. Bump only when the task warrants.
- Build one workflow end-to-end — connect a plugin or `.env.local` API key, execute, verify with `/browseruse`, deploy via GitHub + Vercel.
- Convert the workflow into a skill with "Turn that into a skill." Move it global if it'll apply to other projects.
- Schedule one Automation — but verify the model setting (the GPT-5.2 default makes runs slow and weak; switch to GPT-5.5).
- Try the cross-tool composition pattern. Run Claude Code in your Codex project’s terminal. Brainstorm in Claude → tag the file in Codex → execute.
Related
- Nate Herk Claude Code Operating Systems course — Claude-Code-side AIOS counterpart
- Hermes Agent 1-Hour Course (Nate Herk) — Hermes-side on-the-go automation counterpart
- Claude Code Routines — 24/7 cloud counterpart to Codex’s local automations
- The Expanding Toolkit (Lucas, Code with Claude 2026) — “scaffolding moves into the model” thesis from the Anthropic side
- Karpathy techniques for Claude Code
- Computer Use — Claude Code's browser/desktop counterpart to Codex's `/browseruse`
- Agent Skills Overview — Anthropic's skill spec; Codex's skill format is markdown-with-YAML, very close to portable
- OpenSpec — spec-first counterpart to Plan Mode
- 2026 Claude Code AIOS Pattern