Source: ai-research/mikefutia-scale-ai-foundation-skills-2026-05-11.md — README + verbatim repo tree from mikefutia/scale-ai-foundation-skills (3 stars / 1 fork, no LICENSE, pushed 2026-05-07). Live tutorial: raw/The_7_Claude_Skills_Every_DTC_Brand_Agency_Needs.md — Mike Futia / SCALE AI YouTube tutorial DyuJX6X7KVs, 2026-05.
Three free Claude Code skills that produce the foundation files every downstream SCALE AI skill reads before writing — brand/brand-dna.md, brand/brand-voice.md, brand/icp-cards.md. The free repo is the lead generator; the same author’s premium 4-skill stack (hook writer, brief generator, ad script writer, static-ad variation engine) is gated behind the SCALE AI Skool community (550+ DTC brands, agencies, performance marketers). The architectural pattern — SKILL.md plus a references/ subfolder of long-form prompts and templates, no executable scripts — is the same shape as the skills examples and is itself a reusable agency template.
## Key Takeaways
- Three skills, sequential, file-passing. Run them in order: `brand-dna-builder` → `brand-voice-profiler` → `icp-deep-dive`. Each later skill reads the file the prior one wrote. The shared output directory is `./brand/` inside the project folder. Downstream premium skills also read these three files first.
- Output schema is the load-bearing primitive. Every downstream SCALE AI skill (the 4 premium ones, plus any custom skill you build on top) expects exactly `brand/brand-dna.md`, `brand/brand-voice.md`, and `brand/icp-cards.md` at those exact paths. This is the contract: the skill bundle works because of the schema, not because of any clever LLM tricks. The pattern is reusable for any agency or in-house team building skill libraries.
- `brand-dna-builder`: the onboarding skill. Scrapes the brand's homepage and about page via Firecrawl MCP, extracts colors/fonts/value-prop/social-proof/voice-signals, then runs three batches of interview questions (customer / voice / positioning) to fill the gaps the website can't answer. Writes `brand/brand-dna.md`, a comprehensive brand bible with identity, visual identity, messaging, target customer, positioning, competitors, language-to-use and language-to-avoid.
- `brand-voice-profiler`: the voice rulebook. Reads `brand/brand-dna.md` first, then scrapes 2-3 long-form copy samples from product pages / about page / blog. Extracts actual sentence patterns, surfaces sample rewrites for calibration ("rate these on a 1-5 scale, which feels most on-brand"), then writes `brand/brand-voice.md` with voice anchors, sentence structures, do/don't word lists, rhythm and pacing rules, and tone modifiers.
- `icp-deep-dive`: autonomous customer research. Reads `brand/brand-dna.md`, optionally ingests a CSV of first-party customer reviews, then runs `firecrawl_agent` against Reddit / Amazon / Trustpilot / category-specific forums to pull real voice-of-customer language. Confirms findings in a short interview (typically picks 3 personas), then writes `brand/icp-cards.md` with persona identity, lifestyle, values, specific pains, buying triggers, objections, and hesitations.
- Firecrawl MCP is the load-bearing dependency. Two of the three skills (`brand-dna-builder`, `icp-deep-dive`) call `firecrawl_agent`. Without it, `brand-dna-builder` still works but loses the website-scrape step (interview-only), and `icp-deep-dive` falls back to interview-only mode, losing the autonomous web research that is the headline value.
- Pure-prompt skills, no scripts. The repo tree contains zero executable code: every skill is a `SKILL.md` plus a `references/` directory of long-form templates and interview questions that the skill loads on invocation. Same shape as the skills examples and Superpowers. The implication: this is a reusable agency template. Any agency or in-house team can fork it, swap the brand-system field set for their own discipline (legal-firm DNA, restaurant DNA, B2B SaaS DNA), and inherit the file-passing contract.
- Distribution model is freemium-toward-community. The README and the demo video both lead with the same Skool community CTA: three skills MIT-style free via GitHub (technically no LICENSE; see Caveats), four premium skills gated behind community membership. This is the post-Karpathy "open-source the foundation, sell the orchestration" play, the same shape as the Printing Press (Matt Van Horn's CLI factory) freemium model and the Everything Claude Code community-bundle pattern.
- Cowork is the demo surface. Mike runs skills 1-6 inside Claude Cowork in the demo (the Static Ad Variation Engine runs in Claude Code because it needs OpenAI's GPT Image 2 model). Cowork's project-folder primitive is what makes the file-passing contract ergonomic: each skill writes to `./brand/`, the next skill reads from `./brand/`, and the operator sees the full file tree.
- Repo size: 3 stars / 1 fork, no LICENSE. This is a new repo (pushed 2026-05-07). The README marketing matches the more-established `mikefutia/claude-vision` skill (48 stars at this fetch), so the adoption trajectory is plausible. With no LICENSE file the repo is technically all-rights-reserved by default; operators using it for client work should reach out to Mike or wait for an explicit license before commercializing derivatives.
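The sequential file-passing contract described above can be simulated in a few lines of shell. This is an illustrative sketch only: the directory layout and file paths come from the README, but the file contents here are placeholders, not the skills' real output.

```shell
#!/bin/sh
# Simulate the three-skill file contract inside one project folder.
# Placeholder contents; the real files are written by the skills themselves.
set -e
mkdir -p brand

# Skill 1 (brand-dna-builder) writes the brand bible first.
echo "# Brand DNA (placeholder)" > brand/brand-dna.md

# Skills 2 and 3 both read brand-dna.md before writing their own files.
test -f brand/brand-dna.md
echo "# Brand Voice (placeholder)" > brand/brand-voice.md
echo "# ICP Cards (placeholder)" > brand/icp-cards.md

# Every downstream skill expects exactly these three paths:
ls brand
```

A downstream skill (premium or custom) can treat `test -f brand/brand-dna.md` as the gate for whether onboarding has already run in this project folder.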
## The 7-skill stack (3 free in this repo + 4 premium in SCALE AI community)
The demo video runs the full 7-skill end-to-end on a fictional DTC hoodie brand named Comfort. The pipeline:
| # | Skill | Repo | Reads | Writes | Backend |
|---|---|---|---|---|---|
| 1 | brand-dna-builder | free (this repo) | website (via Firecrawl) | brand/brand-dna.md | Firecrawl MCP + interview |
| 2 | brand-voice-profiler | free (this repo) | brand-dna.md + website copy | brand/brand-voice.md | Firecrawl MCP + interview |
| 3 | icp-deep-dive | free (this repo) | brand-dna.md + reviews CSV | brand/icp-cards.md | Firecrawl deep-research + interview |
| 4 | Meta Ad Hook Writer | SCALE AI Premium | all 3 foundation files + campaign focus | hook file (20 hooks, top 5 ranked) | Claude only |
| 5 | Creative Brief Generator | SCALE AI Premium | 3 foundation files + (optional) hook file + interview | full creative brief on Motion.app template | Claude only |
| 6 | Ad Script Writer | SCALE AI Premium | 3 foundation files + briefs + customer reviews CSV | UGC or produced ad scripts (2 modes) | Claude only |
| 7 | Static Ad Variation Engine | SCALE AI Premium | 3 foundation files + reference winning ad image + ad copy | 3 image variations + 3 copy variants | OpenAI GPT Image 2 |
Skills 1-6 run inside Cowork in the demo; skill 7 needs Claude Code (CLI handoff to OpenAI). The 4 premium skills are NOT in this GitHub repo — only the 3 free foundation skills are.
## Architectural primitives worth lifting
The pattern this bundle codifies is more reusable than the specific brand vertical:
- Foundation files as a schema contract. Define a small set of file paths every downstream skill expects. Make those files thin enough that one round of interview-plus-scrape can fill them, but rich enough that downstream skills don’t need to re-research. (Here: 3 files at fixed paths.)
- Sequential file-passing, not in-memory state. Each skill writes to disk; the next reads from disk. No shared memory, no agent-to-agent message bus. This is the same primitive that makes skills composable and what Cowork’s project-folder primitive operationalizes.
- `references/` subfolder pattern. `SKILL.md` is short and behavioral ("when to invoke me"); the long-form templates and interview questions live in `references/`, loaded on demand during invocation. Mirrors Anthropic's progressive disclosure: a small skill description, a larger references body, an unlimited tool surface.
- Trigger phrases inline in SKILL.md. Each skill's `SKILL.md` description includes natural-language trigger phrases (e.g. "build brand DNA", "onboard a new client", "set up [brand name]"). The auto-invocation surface is the `SKILL.md` description; the trigger phrases are how the operator's verb maps to the skill name.
- Interview-plus-scrape pattern. Scrape what the public web shows; interview-fill the rest. Don’t over-engineer the scrape, don’t over-interview where the website would tell you. Pattern resembles the Cowork Projects “AI Consultant” recipe’s Client Intelligence Brief.
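The SKILL.md-plus-references/ shape can be sketched as a minimal skeleton. The layout below is hypothetical, inferred from the repo-tree description; the actual frontmatter fields and reference file names in Mike's repo may differ.

```shell
#!/bin/sh
# Hypothetical skeleton of one pure-prompt skill: a short behavioral SKILL.md,
# with the long-form material under references/, loaded on demand at invocation.
set -e
mkdir -p brand-dna-builder/references

cat > brand-dna-builder/SKILL.md <<'EOF'
---
name: brand-dna-builder
description: Build a brand DNA file. Trigger on "build brand DNA",
  "onboard a new client", "set up [brand name]".
---
Scrape the homepage, run the interview batches in references/,
then write brand/brand-dna.md.
EOF

# File names under references/ are illustrative stand-ins:
touch brand-dna-builder/references/interview-questions.md
touch brand-dna-builder/references/brand-dna-template.md

find brand-dna-builder -type f | sort
```

The split keeps the always-loaded surface (the description) tiny while the heavy prompt material is only pulled in when the skill actually fires.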
## Implementation
Tool/Service: mikefutia/scale-ai-foundation-skills — three Claude Code skills (no LICENSE, no executable scripts, pure-prompt). Author: Mike Futia / SCALE AI Skool community.
Setup:
```shell
git clone https://github.com/mikefutia/scale-ai-foundation-skills.git
mkdir -p ~/.claude/skills
cp -R scale-ai-foundation-skills/brand-dna-builder ~/.claude/skills/
cp -R scale-ai-foundation-skills/brand-voice-profiler ~/.claude/skills/
cp -R scale-ai-foundation-skills/icp-deep-dive ~/.claude/skills/

# Verify
ls ~/.claude/skills/  # should show brand-dna-builder/, brand-voice-profiler/, icp-deep-dive/

# Or project-local instead:
mkdir -p ./.claude/skills/
cp -R scale-ai-foundation-skills/brand-dna-builder ./.claude/skills/
# (repeat for the other two)
```

Firecrawl MCP setup (required for full functionality):

```shell
# Install the Firecrawl MCP server per Firecrawl's official docs,
# then register it with Claude Code:
claude mcp add firecrawl --transport <appropriate transport>
```

Without Firecrawl, the skills fall back to interview-only mode: usable, but you lose the autonomous website scraping for `brand-dna-builder` and the autonomous deep-research for `icp-deep-dive`.
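As a concrete sketch of that registration step: Firecrawl publishes an official MCP server as the `firecrawl-mcp` npm package, configured via a `FIRECRAWL_API_KEY` environment variable. The package name, flag syntax, and key format below are assumptions to verify against the current Firecrawl and Claude Code docs before use.

```shell
# Hypothetical stdio registration (the API key shown is a placeholder):
claude mcp add firecrawl -e FIRECRAWL_API_KEY=fc-YOUR-KEY -- npx -y firecrawl-mcp

# Confirm the server is registered:
claude mcp list
```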
Cost:
- Skills themselves: Free (3 stars, no LICENSE file but README implies free use; Mike’s distribution intent is clearly free-tier-lead-to-community).
- Firecrawl MCP: Firecrawl’s own pricing applies — free tier exists; paid tiers for higher volume.
- Claude tokens: Each skill consumes ~5-15k input tokens for the scrape + interview + write, plus output tokens for the final document (`brand-dna.md` typically 2-4k output tokens; `brand-voice.md` similar; `icp-cards.md` 4-8k depending on persona count). Full foundation pass for one brand: roughly 30-60k tokens total.
Integration notes:
- Sequential, not parallel. Skill 2 and skill 3 both read `brand-dna.md`, so skill 1 MUST complete first. Skill 2 doesn't strictly block skill 3, but the demo runs them in order and the README mandates the order. Don't try to parallelize.
- Trigger phrases (verbatim from README and demo):
  - `brand-dna-builder`: "set up brand context", "build brand DNA", "onboard a new client", "let's set up [brand name]"
  - `brand-voice-profiler`: "build a brand voice profile", "define voice rules", "calibrate brand tone"
  - `icp-deep-dive`: "define my ideal customer", "build customer personas", "create ICP cards"
- Working directory matters. All three skills write to `./brand/` relative to the current working directory. Run them from inside the project folder you want the brand files to live in. Mike's Cowork demo uses one folder per brand client.
- First-party reviews CSV improves `icp-deep-dive` substantially. The demo uses 78 first-party Comfort reviews; Claude calls this out as "voice of customer data is rich." If you have any review data (Shopify reviews, Amazon reviews, Trustpilot exports), drop the CSV into the project folder before invoking.
- No LICENSE. The repo has no LICENSE file. GitHub's default is all-rights-reserved unless the author grants permission. The README marketing implies free use ("Drop them into your skills directory and Claude will pick them up automatically"), but if you're using this commercially or building a derivative skill bundle, ask Mike directly or wait for an explicit license. The companion repo `mikefutia/claude-vision` declares MIT in its README; Mike just hasn't done the same here yet.
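Because the order is mandatory, a cheap preflight guard before invoking skill 2 or 3 avoids half-built foundation runs. This helper is illustrative, not part of the repo:

```shell
#!/bin/sh
# Illustrative preflight: a downstream skill should refuse to run until the
# foundation file it reads exists under ./brand/ in the current project folder.
check_foundation() {
  for f in "$@"; do
    [ -f "brand/$f" ] || { echo "missing brand/$f -- run the earlier skill first"; return 1; }
  done
  echo "ok: $*"
}

mkdir -p brand
check_foundation brand-dna.md   # fails: brand-dna-builder hasn't run yet
touch brand/brand-dna.md        # simulate skill 1 finishing
check_foundation brand-dna.md   # now passes
```

The same check generalizes: the premium hook writer would call it with all three foundation file names before writing anything.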
## Try It
- Onboard a new agency client in one Cowork session. Spin up a fresh Cowork project folder, install the three skills, and run them in order on the new client's URL. Output: `brand/brand-dna.md` + `brand/brand-voice.md` + `brand/icp-cards.md` in one session. Compare against your current onboarding workflow's time-to-deliverable.
- Pair with claude-vision for ad teardowns. After the foundation files exist, drop a competitor's UGC ad into the project folder and run `/video-analyzer --prompt "Grade this ad against my brand DNA and voice. Recommend hook variants."` The video analyzer reads the brand files automatically via Cowork's context. Same author, designed to compose.
- Build your own premium-tier-equivalent skill. Fork the schema (3 foundation files at fixed paths) and write your own hook writer / brief generator / ad script writer that reads from `./brand/*.md`. The premium SCALE AI skills aren't open-source, but the pattern is: the contract is the file paths, not the prompts.
- Adapt for non-DTC verticals. The schema (`brand-dna.md` / `brand-voice.md` / `icp-cards.md`) generalizes to law firms (firm-dna / advisor-voice / matter-icp), restaurants (concept-dna / menu-voice / diner-icp), and B2B SaaS (product-dna / docs-voice / buyer-icp). Fork the repo, swap the brand-extraction questions for vertical-specific ones, and keep the file-passing contract.
- For WEO Marketly client onboarding. Run on a sample dental practice URL, Smile Springs Family Dental (the canonical fictional client used in the WEO onboarding course), and compare against the existing WEO onboarding intake. Likely overlaps with the Brand DNA Builder workflow already in production at the agency; this version automates the customer-research deep-dive step.
- Connect to the WEO Marketly brand-dna pipeline. The existing `static-ad-generator/brands/weo-marketly/brand-dna.md` was hand-built in February 2026; running brand-dna-builder against `weomarketly.com` would produce a comparison artifact that calibrates how close the agentic version gets to the human-curated baseline. That calibration data feeds the brand-DNA tooling pipeline for the WEO ad generation flow.
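The calibration diff in the last item is a one-liner once both artifacts sit in the project folder. A sketch with placeholder contents (the real comparison would be the hand-built WEO file versus the freshly generated one):

```shell
#!/bin/sh
# Illustrative calibration diff between a hand-curated brand DNA file and a
# generated one. Contents are placeholders, not real brand data.
mkdir -p hand generated
printf 'Voice: warm, direct\n'  > hand/brand-dna.md
printf 'Voice: warm, playful\n' > generated/brand-dna.md

# diff exits nonzero when files differ, so don't run this step under set -e.
diff -u hand/brand-dna.md generated/brand-dna.md || true
```

Eyeballing the unified diff (or counting changed hunks) gives a rough score for how much of the hand-curated document the agentic version recovers.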
## Related
- claude-vision — Mike Futia’s video-analyzer skill — sibling skill from the same author; same MIT marketing pattern; designed to compose with the brand foundation files
- skills — canonical pure-prompt-skill pattern; same shape (SKILL.md + references/)
- Superpowers — alternative skill framework with similar references/ progressive disclosure
- Claude Cowork — the project-folder primitive that operationalizes the file-passing contract
- Cowork Projects “AI Consultant” recipe (Eliot Prince) — the 4-knowledge-file pattern that anticipates this 3-file pattern
- Printing Press (Matt Van Horn) — same freemium-toward-community distribution model
- Everything Claude Code (Affaan Mustafa) — broader skill-bundle community ecosystem
- Firecrawl scraping recipe — the autonomous-scraping primitive used here
- AI Marketing topic index — where DTC/agency workflows belong
- The 5 AI Automations Businesses Actually Pay For — same author audience; complementary pricing argument
## Open Questions
- No LICENSE on the repo. The MIT-implicit-via-README pattern works for `mikefutia/claude-vision` but is missing here. Worth a 30-second PR to Mike adding a LICENSE; it clears the path for agencies to use this commercially without ambiguity.
- Adoption signal. 3 stars / 1 fork at this fetch is low; the YT tutorial is the actual distribution vector, not GitHub. Worth re-checking the star count in 30 days to see whether the YT-to-GitHub conversion is meaningful or whether Mike's audience consumes the video without ever installing.
- No measured comparison vs hand-curated brand DNA. Open whether the autonomous version closes 80%/95%/99% of the gap to a human-curated brand-DNA document. The WEO Marketly hand-curated version (`static-ad-generator/brands/weo-marketly/brand-dna.md`) is a natural calibration point: run the skill against `weomarketly.com` and diff.
- Firecrawl-free fallback quality. The README says that without Firecrawl the skills fall back to interview-only mode. Open how much value the autonomous research adds vs interview-only; possibly Firecrawl is decisive for icp-deep-dive but marginal for brand-dna-builder, where the homepage scrape is mostly metadata.
- The 4 premium skills aren't open-source. Open whether community members can republish their outputs (Mike's premium-skill markdown files) without violating community ToS. Operators who want to reproduce the workflow without joining the community would need to write the 4 missing skills from scratch; the file-schema contract is the only public part.
- No examples of the actual output files. The README and demo video reference `brand-dna.md` / `brand-voice.md` / `icp-cards.md` but don't ship example outputs. The `references/*-template.md` files in the repo are the closest the public surface gets to a schema spec; worth fetching and articulating in a follow-up.
- Cross-contamination with the existing WEO `static-ad-generator` brand-dna workflow. The WEO Marketly project already has a hand-built `brand-dna.md`. Open whether running Mike's skill would replace, augment, or contaminate the existing artifact. Memory note: ad generation is paused awaiting brand assets, so this isn't blocking, but it's a coordination question for whenever the WEO ad pipeline re-activates.