Source: Nate Herk | AI Automation — “Higgsfield Just Turned Claude Into a Creative Agency” (35:27, published 2026-05-05). Auto-captions normalized for “Higgsfield” (caption: “Higsfield” / “Hicksfield”), “Claude Code” (caption: “Cloud Code”), “OAuth” (caption: “OOTH”), “ideate” (caption: “ID8”), CLAUDE.md (caption: “claude.mmd”), and “Google Workspace CLI” (caption: “GWS CLI” / “GW CLI” / “GWCI”). Same author as Voice Agents with Claude Code + ElevenLabs and Nate Herk’s AIOS masterclass.
A 35-minute end-to-end Higgsfield + Claude tutorial that adds three new dimensions to the wiki’s existing trio of Higgsfield walkthroughs (Mike Futia DTC, Robo Nuggets brand launch, 50-ad Instagram campaign): (1) the explicit CLI-over-MCP architectural call for agentic work on token-cost grounds, (2) skill reverse-engineering — turning a single winning prompt into a reusable hypermotion-video skill that compounds, and (3) routine-based scaling — Sunday-plan + Monday-generate cron-style routines that grow the asset bank from 50 → 100 → 200 ads weekly while the operator sleeps. Demoed on a fictional headphone brand “Murmur” + a sleep-supplement bottle, with a Google Sheet (created via the Google Workspace CLI) acting as the asset database that every routine reads and writes.
Key Takeaways
- Two-surface flow. Tutorial starts in Claude Desktop with the Higgsfield MCP for the first set of one-shot generations (the “build me a headphone brand from scratch” demo), then switches to Claude Code with the Higgsfield CLI the moment automation enters the picture. The MCP is the discovery surface; the CLI is the production surface. This is the first wiki Higgsfield tutorial to articulate the boundary explicitly.
- CLI-over-MCP for agentic work — explicit token-cost argument. Author’s words: “The MCP has all those tools. So from a token perspective, it’s actually more expensive to use an MCP. The CLI is just better for agents. It’s going to be faster, it’s going to be more efficient.” Same backend, same credits, same models — the CLI ships fewer tool definitions into context per session, so it scales better when the agent runs in long-horizon sessions or scheduled routines. Aligns with the Railway Remote MCP “context is expensive on both sides” design discipline.
- Three install commands for the CLI + skills. The author’s bootstrap prompt, pasted into Claude Code: install the Higgsfield CLI → run the OAuth login → install the Higgsfield agent skills bundle. Three copy-paste commands from the Higgsfield MCP and CLI page. After the OAuth flow opens a browser tab (Connect → sign in with Higgsfield account), Claude Code reports back: CLI working, account connected, agent skills installed. Operator gotcha: when the user already has the skills installed, the install step says “this already exists” — that’s expected, not an error.
- “Skills are recipes for an AI agent.” The tutorial’s mental model: a skill is a recipe. Without a recipe, agent outputs are inconsistent (the slot-machine problem). With a recipe, the agent generates the same shape every time, and every run gets feedback — “I don’t like X, Y, and Z, but I love A, B, and C — update the skill so next time is better.” This is the auto-improvement loop already documented in Simon Scrapes’ nine-component AIOS (skills with progressive disclosure + learning loop), here applied specifically to creative-asset generation.
- Skill reverse-engineering — the favorite-output workflow. Pick the single best generation from a session. Copy its prompt. Paste it into a new chat. Tell Claude: “This is my favorite output we’ve gotten from Higgsfield Marketing Studio. Turn this into a skill that lives locally inside this project at .claude/skills so that anytime I ask for a hypermotion-style video, you will utilize this and they’re always consistent.” Claude generates .claude/skills/hypermotion-video/SKILL.md with name + description + invoke conditions + hard rules + the prompt template. One winning generation → one reusable skill — the closest thing the tutorial offers to a compounding asset. (A hedged scaffold sketch follows this list.)
- Skill registration requires a Claude restart. Operator gotcha (~minute 32 in source): after the skill file is written, Claude Code initially tries to invoke higgsfield-generate (the default skill from the agent-skills bundle) instead of the new hypermotion-video skill. Solution: close out of the Claude app, reopen it, then invoke. Skills written mid-session don’t auto-register — the skill index is loaded at app start.
- Google Workspace CLI as the asset database. Author calls the GWS CLI “another CLI just like this Higgsfield CLI” — agents can quickly look at Google Sheets, Google Docs, Gmail, Calendar, and Drive without separate per-tool MCP servers or API calls. Tutorial uses it to: (1) create a master generation log (Sheet with tabs for product, style, image-or-video, model, prompts), (2) add a status column on the fly when prompted, (3) write back completed status + result URL + job ID after each generation finishes. The Sheet is the agent’s memory across routines — pure side-effects-as-state pattern: the agent reads the queue, generates, writes back status.
- Two-routine scaling pattern (Sunday plan + Monday generate). Author’s natural-language routine spec, paraphrased: “Every Sunday, look at this Google Sheet and pull data from where we’re posting (Instagram or wherever). Analyze what’s working, what’s not, then ideate and add 50 new generations to the sheet. Then every Monday morning, pick 30 videos with a blank status, create the prompts, generate them, mark them complete.” The two routines split planning from generation so the planning routine has clean Sunday-night-after-the-week-of-data context and the generation routine has clear “blank status” picking criteria. Scales by raising the per-week count: 50 → 100 → 200 with no extra operator effort. Connects to Scheduled Tasks.
- The advertising-masterclass research doc as project knowledge. Before any generation, the author kicked off a deep-research run: “I need you to do deep research on the best strategies for advertising in 2026 — organic ads on TikTok, Meta, X — what captures attention, what converts, how it differs per platform. Build me a full markdown file advertising-masterclass.md to live in this project.” The resulting doc is 617 lines, last-updated May 2026, and contains a cheat sheet + per-platform sections + attention-capture mechanics. Every downstream agent (planning routine, generation routine, single-shot generations) reads this doc when it needs ideation help. The pattern is the Eliot Prince Client Intelligence Brief applied to a creative subject area — borrowed expertise as a project knowledge file.
- Higgsfield Marketing Studio’s Hypermotion variant. Marketing Studio is a Higgsfield product where you drop in a product or product link, optionally a custom avatar, and it returns format-specific outputs (Hypermotion, unboxing, UGC). Hypermotion = fast cuts, zooms, premium-launch-video feel. The author’s reverse-engineered hypermotion-video skill targets this format specifically — .claude/skills/hypermotion-video/SKILL.md defines the recipe, agent skills handle the API call. First wiki article to explicitly name and document the Hypermotion preset.
- Iterative debugging when generation fails — read the prompt, fix it, retry. During the launch-video demo, a 16×9 Hypermotion variant was rejected for “sensitive content” and the credits were refunded. Author asked: “Why did that get denied? Show me the prompt. Figure out why that happened.” Claude read its own prompt, identified flagged words, removed them, and retried successfully. The skill then captures these flagged-word lessons so future runs avoid them. Pairs with the troubleshooting reference (refusal recovery via prompt diagnosis).
- Reference-image consistency is the most common failure mode. When asked to generate ads “for this sleep-supplement bottle,” Claude initially produced random blue bottles that didn’t match the actual product. Fix: drag the actual product photo into the chat, prompt: “When you create advertisements for the sleep-supplement product, it has to appear as shown in this reference image every single time. Same color, same text. Don’t change anything. Regenerate the five examples.” After this prompt update, all five outputs matched. Operator rule: the reference image needs explicit “must match” wording — Higgsfield models will drift toward generic representations otherwise.
- Tornado interruption is part of the source. Author paused mid-tutorial because a real tornado came through during recording, which is preserved in the transcript. No technical content lost, just minor pacing.
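The skill reverse-engineering bullet above describes the file Claude generates without printing it. As a reference point only, here is a minimal hand-scaffolding sketch of such a skill file; the frontmatter fields, section names, rule wording, and template placeholder are assumptions, since the source never shows the actual skill text.

```python
from pathlib import Path
from textwrap import dedent

# Hypothetical scaffold for the reverse-engineered skill file described in the
# source (name + description + invoke conditions + hard rules + prompt template).
# All wording below is placeholder, not the author's actual skill content.
skill_dir = Path(".claude/skills/hypermotion-video")
skill_dir.mkdir(parents=True, exist_ok=True)

skill_md = dedent("""\
    ---
    name: hypermotion-video
    description: Generate Higgsfield Marketing Studio Hypermotion launch videos
      with a consistent, proven prompt structure.
    ---

    ## When to invoke
    Any request for a "hypermotion-style" launch video for a product.

    ## Hard rules
    - Always attach the product reference image; the product must match it exactly.
    - Avoid previously flagged words (append lessons from rejected generations here).

    ## Prompt template
    <paste the winning prompt here, with the product name parameterized>
    """)

(skill_dir / "SKILL.md").write_text(skill_md)
# Restart Claude Code afterwards: the skill index is loaded at app start,
# so files written mid-session don't auto-register.
```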
The CLI-over-MCP architectural call
This is the most quotable single line from the tutorial and the most reusable across the wiki’s other Higgsfield articles. Direct quote:
“Ultimately functionally they can do pretty much the same things, but the MCP has all those tools. So from a token perspective, it’s actually more expensive to use an MCP. The CLI is just better for agents. It’s going to be faster, it’s going to be more efficient.”
What this means in practice:
| Surface | Use when |
|---|---|
| Higgsfield MCP (covered in Higgsfield MCP) | Conversational discovery in Claude Desktop / web — one-shot generations, multi-model side-by-side, character training, asset browsing. Token cost per turn doesn’t matter much. |
| Higgsfield CLI (this tutorial) | Agentic / scheduled / batched / long-horizon work in Claude Code or any code-runner agent. Tool surface is leaner, every turn ships fewer tool schemas, scales to nightly routines without burning context on tool-definition overhead. |
The economics here aren’t unique to Higgsfield — they generalize to every vendor that ships both an MCP and a CLI. The wiki has been accumulating evidence for this same direction:
- Railway agent CLI + delegation tool — single railway-agent tool absorbs multi-step work; “context is expensive on both sides”
- Meta Ads CLI — vendor-shipped CLI explicitly designed for AI-agent use
- Higgsfield Python SDK — same backend, code-callable, similar token-discipline argument
This tutorial is the first to put it in plain operator language for a creative-agency use case.
Implementation
Tool/Service: Higgsfield CLI + Higgsfield Marketing Studio + Claude Code + (optional) Google Workspace CLI.
Setup (Claude Desktop side, for the discovery phase):
Claude Desktop → Settings → Connectors → Add custom connector
Name: Higgsfield
URL: https://mcp.higgsfield.ai/mcp
→ Connect → OAuth → sign in with Higgsfield account
→ optionally configure permissions (e.g., always-allow image-generation)
Setup (Claude Code side, for the agentic / routine phase):
1. In your project folder (e.g., higgsfield-studio/), open Claude Code.
2. Paste a 4-step bootstrap prompt:
"This project uses Higgsfield. Install the Higgsfield CLI, run the
OAuth login, install the agent skills, and confirm everything works.
Commands:
<paste 3 commands from higgsfield.ai/mcp page>"
3. Browser opens for OAuth — sign in with same Higgsfield account.
4. Claude Code confirms: CLI working, account connected, agent skills installed.
Cost:
- Higgsfield subscription required (auth via existing account, not pay-as-you-go credits).
- Marketing Studio Hypermotion ~10–20 credits per video; Nano Banana 2 + Soul 2 mix similar to the 50-ad campaign tutorial (~949 credits for 50).
- Author did not publish a per-skill cost estimate.
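- Back-of-envelope budget (derived from the figures above, not stated in the source): at 10–20 credits per Hypermotion video, a 30-video Monday batch lands around 300–600 credits, and the 50-ad benchmark (~949 credits) works out to roughly 19 credits per asset, so the 100- and 200-per-week scaling targets are mostly a credit-budget question.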
Integration notes:
- Mount the project folder before starting so Claude Code can write .claude/skills/, data/assets/, and the advertising-masterclass doc to disk.
- Pre-build the research doc (advertising-masterclass.md) before any generation so every downstream agent has a knowledge anchor — same pattern as the Eliot Prince Client Intelligence Brief.
- Use the Google Workspace CLI to create the master tracking sheet and let the generation routine update it (an example prompt follows this list). Avoids stitching MCP servers for a basic queue/status table.
- Restart Claude Code after writing a new skill file — skill index loads at app start, mid-session writes don’t auto-register.
- Always verify reference-image fidelity before scaling generation — passing the actual product photo + “must appear exactly as shown” wording prevents drift to generic representations.
- Save your two routines first, then scale. Don’t try to write Sunday-plan + Monday-generate routines after you have hundreds of generations to sort through.
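The tracking sheet can be stood up with a single natural-language request. The sheet name and exact column labels below are illustrative, assembled from the fields the tutorial describes (product, style, image-or-video, model, prompt, status, result URL, job ID), not a verbatim quote:

"Using the Google Workspace CLI, create a Google Sheet called 'Asset Generation Log'
with columns: product, style, image or video, model, prompt, status, result URL, job ID.
Every routine in this project reads from and writes back to this sheet."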
The two-routine scaling pattern
SUNDAY 23:00 (planning routine)
→ read sheet, read posting-platform data, read advertising-masterclass.md
→ analyze what's working / not working this week
→ append 50 new generation specs to the sheet (status: blank)
MONDAY 06:00 (generation routine)
→ pull rows where status = blank
→ take first 30 (configurable cap)
→ generate prompts via the relevant skills (hypermotion-video, etc.)
→ call Higgsfield CLI, attach reference images
→ on completion: write status = complete + result URL + job ID
→ operator wakes up to 30 finished assets
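For readers who want the Monday routine in more concrete terms than the schedule above, a minimal Python sketch of the read-queue → generate → write-back loop follows. It models the Google Sheet as a local CSV and uses a placeholder generate_asset() helper; the column names and the helper are assumptions, and in the tutorial the whole loop is expressed to Claude Code as a natural-language routine rather than a script.

```python
import csv
from pathlib import Path

# Local CSV stand-in for the Google Sheet; assumes it already has columns such as
# product, style, model, prompt, status, result_url, job_id (names are illustrative).
SHEET = Path("data/asset_log.csv")
BATCH_CAP = 30  # configurable per-run cap from the Monday routine spec


def generate_asset(row: dict) -> dict:
    """Placeholder for the skill-driven Higgsfield CLI call (hypothetical helper).
    In the tutorial, the hypermotion-video skill builds the prompt and the CLI
    submits the job; reference images are attached at this step."""
    return {"url": "https://example.com/result", "job_id": "job-0000"}


def monday_generation_routine() -> None:
    rows = list(csv.DictReader(SHEET.open()))
    if not rows:
        return

    # Pick up to BATCH_CAP rows that the Sunday planning routine left with a blank status.
    queue = [r for r in rows if not r.get("status")][:BATCH_CAP]

    for row in queue:
        result = generate_asset(row)
        # Write-back is the agent's memory: status + result URL + job ID, per the tutorial.
        row["status"] = "complete"
        row["result_url"] = result["url"]
        row["job_id"] = result["job_id"]

    with SHEET.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```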
To scale:
- Raise the planning routine to 100 / week, 200 / week.
- Add a Thursday routine for mid-week refresh of high-performers.
- Once outputs are trusted, pipe results into Meta Ads CLI for upload + scheduling — the operator becomes purely strategic, agents handle execution.
Try It
- Run the Claude Desktop discovery flow first (one-shot “build me a brand and 3 ads”) to confirm the Higgsfield MCP connector is live and credits flow. The author’s “build me a headphone brand from scratch” demo is the cheapest smoke test.
- Build the advertising-masterclass.md research doc on day 1 — the deep-research prompt template is in the source. Reusable across product categories with one paragraph swap.
- Reverse-engineer your first skill from your favorite generation. The recipe in the source: copy the winning prompt, paste into a new chat, prompt: “Turn this into a skill that lives locally inside this project at .claude/skills so that anytime I ask for a [style-name], you will utilize this.” Restart Claude after the skill is written.
- Stand up the Google Workspace CLI before the first routine — the asset-tracking sheet needs to exist before any agent writes status to it.
- Time-box the routine setup: the Sunday + Monday pair only needs a one-time prompt: “Set up two routines for me. Sunday 11pm: …. Monday 6am: ….”
- For WEO Marketly placement: this article supersedes the 50-ad campaign tutorial as the operator-track Higgsfield + agentic-routines hands-on in any future intermediate-course module on creative scaling. Less ad-vertical-specific, more architecture-transferable.
Related
- Higgsfield MCP — companion conversational surface (this tutorial uses it for discovery)
- Higgsfield + Claude Code Ad-Agency Workflow (Mike Futia) — sister DTC tutorial, also CLI-track but no routines
- Higgsfield MCP — Robo Nuggets Brand-Launch Tutorial — sister brand-launch tutorial
- Higgsfield MCP — 50-Ad Instagram Campaign at Scale — sister scale tutorial via Claude Desktop
- Higgsfield (Overview) — REST API + async-queue basics
- Higgsfield SDK (Python) — code-callable surface, complementary to the CLI for Python-based pipelines
- Voice Agents with Claude Code + ElevenLabs (Nate Herk) — sibling Nate Herk tutorial, same direct-vendor-CLI-over-MCP architecture for ElevenLabs
- Nate Herk’s AIOS masterclass — flagship author context, AIOS course
- Railway Remote MCP + railway agent CLI — same context-economy argument applied to infrastructure tooling
- Meta Ads CLI — natural extension for upload + schedule after generation
- Routines — primitive used for the Sunday + Monday pair
- Cowork Projects “AI Consultant” Recipe — knowledge-doc pattern (Client Intelligence Brief / advertising-masterclass)
- Simon Scrapes’ nine-component AIOS — the skills-with-learning-loop component this tutorial implements
- AI Video & Content Production — topic index
Open Questions
- Per-call cost via the CLI vs the MCP. Author asserts MCP is “more expensive from a token perspective” but doesn’t publish per-call numbers for either. Worth measuring on a fixed prompt across both surfaces to verify the magnitude.
- Does the CLI bundle ship updates automatically? The agent-skills install step says “this already exists” if the bundle is already there — unclear whether higgsfield-cli skills update is a thing or whether re-running the install command refreshes it.
- Skill reverse-engineering loop is single-shot in the tutorial. Author says “every run gets better because you give feedback,” but the demo doesn’t show the second iteration. Worth a follow-up to verify whether the auto-improvement loop converges or hits diminishing returns.
- Routine inputs: how does Sunday-planning agent know what’s working on Instagram? Author hand-waves “pull data from Instagram or wherever we’re posting.” Real implementation needs an actual data source — Meta Ads Library MCP, Apify scraper, or HeyGen Studio Automation’s pattern of mounting a project-folder copy of the data. Not specified in this tutorial.
- Skill registration restart requirement. Tutorial confirms a restart is needed after writing a new skill, but doesn’t address whether Claude Code’s /skill-reload command (if it exists) bypasses the restart. Worth checking against Skill Design Patterns.
- Glido voice tool transition. Author mentions joining the Glido team and switching from Whisper. Glido isn’t yet in the wiki — open whether it’s worth a standalone entry or a Note in the voice-agents article.
- Author’s .claude/skills/hypermotion-video/SKILL.md content. Source shows the structure (name + description + invoke conditions + hard rules + prompt template) but doesn’t print the full text. A subsequent companion article from Nate Herk releasing the skill file would let the wiki ingest it directly.