Source: wiki synthesis: Agent Skills Overview, Building Agents with Skills, Building Skills Guide, claudeskills.info Directory, Marketing Skills Bundle, Social Media Skills (Charlie Hills), Shopping for Skills and Plugins, skills repo
Time: Read 10 min | Watch 20 min | Practice 60 min — total ~90 min
Read First
The intro course’s Module 6 — Skills, Tools, Connectors showed you the directory and how to install one skill. This module assumes you’ve done that and goes three layers deeper.
Anthropic’s reference is the same as before — Anthropic Academy “Working with Skills” section plus the Engineering blog: Equipping agents for the real world with Agent Skills. For conceptual primers that pair well with this module, watch Peter Yang’s Claude Skills, Clearly Explained in 15 Minutes and Simon Scrapes’ The Claude Code Skills Trap (Most People Fall For This) — the latter is the anti-pattern guide that prevents skill hoarding.
The WEO overlay below is where this module actually earns its keep: shopping the ecosystem with a real lens, vetting before installing, and authoring a Smile Springs voice-check skill you can paste-and-use today.
Why It Matters at WEO
Skills are how Claude’s output stops looking like Claude’s output and starts looking like WEO’s output. Five things compound when a team uses skills well:
- Voice consistency across people. A copywriter, a paid-media specialist, and an account manager all writing for Smile Springs should produce work that reads like it came from the same agency. A voice-check skill enforces that without anyone having to memorize the brand guide.
- Cross-team output quality floor. When everyone has access to the same vetted skill set, the worst draft anyone produces still passes a baseline. The director stops rewriting deliverables that should never have shipped in that shape.
- Reusable across clients. Most skills you build for one practice (voice check, intake-form copy, post-op email sequence) port to the next practice with a 10-minute parameter swap. By month three, the team has a skill library, not a folder of one-offs.
- Governance posture. A skill is auditable text. A prompt typed into chat by an account manager is not. When the AI Council asks “what does Claude do for our clients?” the answer is “these 12 skills, last reviewed on these dates, here are the diffs since last review.” That’s defensible. Free-form prompting is not.
- Compounding leverage. Every skill someone builds becomes a building block for the next person. The marketer who would otherwise have spent 40 minutes prompting Claude into the right voice now runs a 30-second skill invocation. Multiply by 12 staff and 100 client projects a month — that’s the actual ROI.
The catch: a bad skill is worse than no skill. It runs without you watching, on every draft, and silently corrupts work in ways that are harder to catch than a one-off prompt error. So the order matters: shop carefully, vet before installing, build only what you actually need.
Section 1 — Shop
The skill ecosystem has three layers, ranked by the trust posture WEO should default to. Knowing which layer a skill comes from is the first vetting move.
Layer 1 — Anthropic-published (highest trust)
The starting point. Anthropic ships skills two ways:
The anthropics/skills repo. GitHub’s anthropics/skills is the canonical reference. Apache 2.0 for most skills (docx/pdf/pptx/xlsx are source-available, which means readable but redistribution-restricted). 124k stars at the time of writing. These are written by the same team building Claude — they’re the closest thing to “official” guidance on what a well-formed skill looks like. Read anthropic-skills-repo for the full breakdown of what’s in there.
When to use them: file conversion, document handling, generic creative tasks. Less useful for marketing-specific work — the bundled creative skills are intentionally generic.
The Anthropic plugin marketplace. Skills published through the official marketplace go through Anthropic review. The claudeskills.info aggregator flags these as “Official” alongside the cross-vendor Official tier (GitHub, Vercel, OpenAI, Microsoft, WordPress). See claudeskills-info-directory.
Default posture for WEO: anything Anthropic-published is pre-approved for client work. Install freely. Still read the SKILL.md before invoking — not because Anthropic might be malicious, but because you need to know what it does to use it correctly.
Layer 2 — Known organizations (vetted, maintained)
Third-party skill collections from organizations with reputations to protect. The community has produced a handful of these that meet WEO’s bar:
Marketing Skills Bundle by Corey Haines. 36+ marketing-specific skills curated for Claude Code, Claude.ai, Codex, Cursor, and Windsurf. 21.8k stars. Covers copywriting, ad creative, SEO audits, competitor intelligence, email sequences, landing-page CRO. The single most relevant external collection for WEO daily work. Full breakdown in marketing-skills-bundle.
Social Media Skills by Charlie Hills. A 16-skill voice-first system focused on social media — LinkedIn voice analysis, post drafting, thread composition, response style matching. Charlie’s framing is voice-led: the skills capture and replicate a specific creator’s voice signature, then apply it to new posts. Particularly relevant for WEO because dental marketing leans heavily on practitioner voice — when Dr. Park writes a LinkedIn post, it has to sound like Dr. Park, not like an AI that read his last 50 posts. See social-media-skills-charlie-hills.
wshobson/agents — the agent marketplace. A curated marketplace of Claude Code agents and skills covering engineering, content, and ops use cases. The recent (2026-04-26) ingest pulled four of these into the wiki cluster. Closer to engineering-adjacent than marketing-pure.
SuperClaude and oh-my-claudecode. Frameworks and skill collections aimed at Claude Code power users. These are organized as plugin packs — installing one gets you 20–40 skills with a single command. Higher convenience, lower per-skill review density. Treat as a single decision: “do I trust this maintainer enough to install everything they ship?”
Default posture for WEO: Layer 2 skills get the 6-question vetting framework before install on any workstation that touches client data. Bundle installs (SuperClaude, oh-my-claudecode) require AI Council review because you’re approving 20+ skills with one click.
Layer 3 — Individual creators (varies)
Personal repos, single-developer skills, GitHub Gists. Quality varies widely. Some individual creators ship better skills than entire organizations — Charlie Hills started as an individual creator. Others publish-and-abandon.
The signal isn’t “individual” itself — it’s maintenance, transparency, and reputation. Look for:
- Active commit history past the initial publish
- Other repos by the same author that look maintained
- Issues and PRs being responded to
- A README that explains what the skill does and what it doesn’t
- A LICENSE file (no license = no rights to use)
When all five signal positive, an individual creator skill can be a good fit — particularly for niche use cases that organizational skills don’t cover. When any of the five signal negative, treat it as Layer 4.
Layer 4 — Anonymous publishers (default no)
Single-username repos, no commit history, no issues, no other public work. Could be a genuine first-time contributor. Could also be a placeholder account staging a supply-chain attack. You can’t tell from the outside, and the skill itself is plaintext that runs in your environment with whatever access you give Claude.
Default posture for WEO: never install. If a Layer 4 skill is the only thing that does what you need, that’s a signal to author your own — see Section 3.
Where to find each layer (filtering by use case)
For marketing-pure work: start with Marketing Skills Bundle (Layer 2). 80% of WEO use cases are covered by 6–8 skills from that bundle.
For social-led practitioners: Charlie Hills’ Social Media Skills bundle (Layer 2) is the right starting point. Pair with one or two skills from Marketing Skills Bundle for content-creation tasks that aren’t social-specific.
For governance, planning, document handling: Anthropic’s anthropics/skills repo (Layer 1) covers the document side; SuperClaude’s planning frameworks cover the structured-thinking side (Layer 2, bundle install).
For dental-specific work: no public skill exists. This is where authoring your own (Section 3) pays off — the Smile Springs voice-check skill below is the seed pattern.
For one-off niche tasks: consider whether a 30-line custom SKILL.md in .claude/skills/ would do the job better than installing a 600-line community skill that does 5x what you need. The best skill is sometimes the one you author in 10 minutes for a single repeating task.
Section 2 — Vet
This section walks through the 6-question vetting framework from shopping-for-skills-and-plugins. Run all six before any Layer 2 or Layer 3 install.
Question 1 — Who is the publisher?
Look at the GitHub org or username. Click through to their profile. Read three things: how many other repos do they maintain, when was their last commit anywhere, and is the bio recognizable.
Green flag: Anthropic, Vercel, Snyk, Trail of Bits, or a recognizable creator (Corey Haines, Charlie Hills, Nate Herk). Active across multiple repos. Last commit within 30 days.
Red flag: dev_username_3847 with one repo and no commit history elsewhere. Placeholder bio. Account created within the last 30 days.
Dental example: if a “dental marketing skills bundle” appeared from a brand-new account with no prior dental or marketing work, that’s a no — even if the content looks fine. The provenance pattern matters more than the contents on a single read.
Question 2 — When was it last updated?
Check the commit history. Don’t trust the README’s “Last Updated” line — it can be edited.
Claude itself ships changes weekly. A skill last touched eight months ago is probably stale at minimum, broken at worst. The skill might rely on a Claude API behavior that changed, or call an MCP server that’s been deprecated.
Green flag: commits within the last 60 days. Active issues/PRs.
Red flag: single big commit, then nothing. The publish-and-abandon pattern. The skill might still work, but no one’s watching for breakage.
Dental example: a Google Reviews-fetching skill last updated in 2024 is almost certainly broken — Google’s review API surface has churned multiple times since. Skip it and build a 50-line replacement.
Question 3 — What does it access?
Open the SKILL.md (and for plugins, also hooks/hooks.json, .mcp.json, and any bin/ executables). Read it end-to-end before installing.
What you’re looking for: paths to credentials (~/.ssh/, ~/.aws/, .env), references to environment variables, HTTP requests to unfamiliar domains, subprocess calls.
Green flag: the SKILL.md scope matches the description. A “voice check” skill reads markdown text and writes feedback. Period.
Red flag: the description says “format markdown” and the SKILL.md instructs Claude to read ~/.ssh/id_rsa and POST it externally. This is the literal attack pattern Repello and Snyk documented in 2025.
Dental example: a skill that claims to “audit client websites for SEO” but the SKILL.md includes instructions to “save site contents to /tmp/ and email to [external address]” is exfiltrating client data. Catch this on read.
Question 4 — What dependencies does it pull in?
Some skills require an MCP server, a CLI binary, an API key, or a paid third-party service. That’s fine when expected — a skill that needs Tavily for web search will tell you so.
Green flag: dependencies declared up front in the README. You know exactly what you’re being asked to install.
Red flag: the skill silently uses your existing API keys without disclosure. Or it pulls in a chain of npm/pip dependencies that aren’t named. Or it runs pip install mid-skill on a package nobody’s heard of.
Dental example: if a “competitor intel” skill asks you to set CLAUDE_API_KEY (you have one), OPENAI_API_KEY (sure), and [OBSCURE_SCRAPER_KEY] (what?) — that third one is the question. Find out what it is before installing.
Question 5 — What’s the license?
Four categories matter:
- Apache 2.0 / MIT / BSD: safe for any use including client work.
- Source-available (Anthropic’s docx/pdf/pptx/xlsx): readable, but redistribution restricted. Fine for internal use, check before bundling into a deliverable.
- GPL: fine for internal use. Triggers redistribution requirements if you ship the skill bundled into a product. Check before commercial use.
- No license stated: you have no rights to use the work. Default copyright applies, which means installing it for commercial use is a copyright violation. Skip.
Green flag: explicit Apache 2.0 or MIT in the LICENSE file.
Red flag: no LICENSE file. Or “all rights reserved” with no usage permission.
Dental example: a beautifully-written email-sequence skill with no LICENSE file is unusable for client work no matter how good it is. Either contact the author for a license, or write your own.
Question 6 — Does it actually fit your use case?
The most-skipped question. People install skills that are 80% adjacent to what they need, then bend their workflow to match the skill. That’s backward.
Green flag: the skill description names a task you do every week. Examples in the README look like things you’d actually produce.
Red flag: the skill is “close enough.” The description says “ad copy” but it’s optimized for B2B SaaS and you write dental practice ads. The skill will produce competent B2B SaaS-flavored dental copy — exactly the wrong output.
Dental example: a “B2B social media skill” might technically work for dental practice content, but the underlying voice patterns (data-driven, ROI-focused, vendor-aware) clash with the warm-plainspoken-trustworthy posture WEO’s clients need. Skip it. Build your own — see Section 3.
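Questions 1, 2, and 5 have mechanical components (recency, license, publisher history) that can be scripted, leaving your attention for the judgment questions. Below is a sketch under stated assumptions: the metadata dict shape is hypothetical, and in practice you might populate it from `gh repo view --json` or the GitHub REST API before applying the same thresholds.

```python
from datetime import datetime, timezone

# Sketch only: the metadata dict shape below is an assumption. Fill it
# from `gh repo view --json` or the GitHub REST API, then apply the
# same green/red-flag thresholds described in the questions above.

PERMISSIVE = {"Apache-2.0", "MIT", "BSD-2-Clause", "BSD-3-Clause"}

def vet_repo(meta: dict, today: datetime) -> list[str]:
    """Return red flags from the mechanical checks (questions 1, 2, 5).
    An empty list means only the judgment questions remain."""
    flags = []
    days_stale = (today - meta["last_pushed"]).days
    if days_stale > 60:
        flags.append(f"stale: last commit {days_stale} days ago")
    if meta.get("license") is None:
        flags.append("no license: no rights to use the work")
    elif meta["license"] not in PERMISSIVE:
        flags.append(f"restricted license ({meta['license']}): review before client work")
    if meta.get("owner_repo_count", 0) <= 1:
        flags.append("publisher has no other public work")
    if (today - meta["owner_created"]).days < 30:
        flags.append("publisher account created within the last 30 days")
    return flags

now = datetime(2026, 5, 1, tzinfo=timezone.utc)
suspect = {
    "last_pushed": datetime(2025, 6, 1, tzinfo=timezone.utc),
    "license": None,
    "owner_repo_count": 1,
    "owner_created": datetime(2026, 4, 20, tzinfo=timezone.utc),
}
print(vet_repo(suspect, now))  # four red flags: treat as Layer 4, default no
```

The thresholds (60 days, 30 days) mirror the green and red flags above; tune them per the AI Council's posture rather than treating them as canon.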
The Snyk ToxicSkills audit (why this matters)
In 2025 Snyk audited a sample of community-published skills and plugins. The findings:
- 36% had prompt injection vulnerabilities — instructions in the skill that an attacker could trigger to override Claude’s intent.
- 1,467 malicious payloads identified across the broader ecosystem.
- 91% of confirmed malicious skills combined prompt injection with traditional malware — the prompt injection was the entry point; the malware did the actual exfiltration.
The takeaway isn’t “the ecosystem is poisoned.” Most skills are fine. But “most” is not “all,” and a single bad skill installed on a workstation that touches PHI-classified client data is a HIPAA incident.
WEO AI Council posture
For client work, the rule is binary:
- Anthropic-published skills: pre-approved. Install freely.
- Everything else: intake form to AI Council before install on any workstation that touches client data. Same approval workflow as new MCP connectors. The threat model is identical.
The intake form takes 10 minutes. The AI Council reviews within 2 business days. For genuinely time-sensitive tasks, an emergency review can run within 4 hours.
For personal experimentation on a non-client account (your own claude.ai chat with no connectors enabled to client data): no approval required. Experiment freely, surface what’s worth proposing for the official list.
See WEO AI Governance for the current approved-skills list.
Section 3 — Build
The fastest path to a skill that fits WEO’s actual work is to author one. A lightweight markdown skill takes 30–60 minutes to draft, 15 minutes to test, and pays for itself the first time three different team members use it on the same client.
The minimum-viable skill anatomy
A skill is a single markdown file with YAML frontmatter and a structured body. Everything else (hooks, MCP configs, executables) is plugin territory — separate concept, separate trust profile.
---
name: skill-name-lowercase-hyphenated
description: One sentence describing what this skill does and when Claude should use it.
---
# Skill body
## When to use this skill
Tell Claude when to invoke. The clearer the trigger, the more reliably
the skill activates.
## What to do
Step-by-step instructions for Claude. Plain English. Bullets, not prose.
## Examples
Concrete before/after pairs. The single highest-leverage section — Claude
matches example shape better than it follows abstract instructions.
## Output format
What Claude's response should look like. Specify structure, length, voice.

That’s the whole pattern. Read building-skills-guide for the deeper version.
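The two frontmatter fields are the routing surface: Claude reads the description to decide when to invoke the skill, so it is worth linting them before a file enters the team library. Below is a minimal sketch, assuming flat `key: value` frontmatter with no nested YAML; the 20-character description floor is an arbitrary assumption, not a documented rule.

```python
import re

# Minimal pre-commit-style lint for a skill file. Assumptions: flat
# key: value frontmatter (no nested YAML), and a hypothetical minimum
# description length so invocation routing has something to match on.

NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def check_skill(text: str) -> list[str]:
    problems = []
    parts = text.split("---")
    if len(parts) < 3 or parts[0].strip():
        return ["missing frontmatter block (--- ... ---)"]
    fields = {}
    for line in parts[1].strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    if not NAME_RE.match(fields.get("name", "")):
        problems.append("name must be lowercase-hyphenated")
    if len(fields.get("description", "")) < 20:
        problems.append("description too short to route invocation")
    return problems

good = (
    "---\n"
    "name: voice-check-smile-springs\n"
    "description: Reviews dental marketing copy against the brand voice guide.\n"
    "---\n"
    "# Body\n"
)
print(check_skill(good))  # → []
```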
Worked example: Smile Springs voice-check skill
The artifact for this module. Drop the file below into ~/.claude/skills/voice-check-smile-springs.md (Claude Code) or paste the body into a custom Project’s instructions (claude.ai). Then invoke it on any draft.
---
name: voice-check-smile-springs
description: Reviews dental marketing copy drafted for Smile Springs Family Dental against the brand voice guide. Flags banned AI phrases, voice deviations, and clinical-tone creep. Use this before sending any draft to the client.
---
# Smile Springs Family Dental — Voice Check
## When to use this skill
Run this skill on any copy drafted for Smile Springs before it ships
internally for review. Includes: homepage hero, service-page copy, blog
intros and outros, email subject lines and body, social media post
captions, ad headlines and primary text, reception-script revisions.
Do NOT run on patient testimonials (preserve verbatim) or on factual
clinical content where exact wording is regulated.
## Brand voice baseline
Smile Springs Family Dental serves families with kids and adults 35–55
in Columbus, Ohio. Voice is:
- Warm, plainspoken, trustworthy
- NOT clinical, NOT corporate, NOT salesy
- Differentiator language: Saturday appointments, no-wait booking
- Patient-first framing — the practice exists to make patients' lives
easier, not to advertise its own credentials
## What to do
Read the draft. Run three checks in order. For each issue found, output
one line with: line number, what was found, why it fails, suggested
rewrite.
### Check 1 — Banned AI phrases
Flag any of these (case-insensitive, partial-match OK):
- "streamline" / "streamlined"
- "leverage" / "leveraging"
- "world-class"
- "state-of-the-art"
- "game-changer" / "game-changing"
- "revolutionize" / "revolutionary"
- "in today's fast-paced world"
- "dazzle" / "dazzling"
- "smile of your dreams"
- "transform your smile"
- "cutting-edge"
- "best-in-class"
- "premier" (when used as a marketing adjective for the practice)
- "boasts" / "boasting"
- "elevate" (when used as a marketing verb)
- "robust"
- "synergy" / "synergistic"
- "seamless" / "seamlessly"
- "delve" / "delves"
- "tapestry"
- "in conclusion"
- "it's worth noting"
### Check 2 — Voice deviation
Flag sentences that read corporate, salesy, or hype-heavy. Specific
patterns to catch:
- Opening with "At Smile Springs Family Dental, we..." (corporate
cliche — rewrite patient-first)
- Three or more adjectives stacked before a noun ("our compassionate,
experienced, and dedicated team" — pick one)
- Any rhetorical question used as a hook ("Looking for a dentist who
cares?" — never)
- Exclamation points (max one per page, and only on legitimate
excitement, never on marketing claims)
- Sentences over 25 words (split or simplify)
- Use of "your smile" as a metonym for the patient (overused — vary)
### Check 3 — Clinical-tone creep
Flag clinical or technical language that doesn't fit the warm
plainspoken brand. Specific patterns:
- Procedure names without plain-English explanation ("composite
restoration" → "tooth-colored filling")
- Dental jargon when a patient word exists ("occlusion" → "bite,"
"prophylaxis" → "cleaning," "extraction" → "tooth removal")
- Insurance/billing language in patient-facing copy ("benefits
coordination," "claim adjudication" — say "we'll handle the
paperwork")
- Liability hedging that creates distance ("Results may vary depending
on individual circumstances" — usually deletable)
## Output format
For each issue found, one line:
`Line [N]: [WHAT WAS FOUND] — [REASON IT FAILS] — Suggested: "[REWRITE]"`
After all issues, a summary block:
- Total issues: [N]
- Severity: [low / medium / high based on count and check type]
- Voice fit (subjective 1-10): [score with one-sentence justification]
- Recommended action: [ship / light revision / structural rewrite]
If the draft passes all three checks cleanly, output:
`PASS — voice on-brand, ready for internal review.`
## Examples
### Example 1 — Banned phrase + clinical creep
INPUT:
At Smile Springs Family Dental, our world-class team leverages cutting-edge composite restoration techniques to transform your smile.
OUTPUT:
Line 1: “At Smile Springs Family Dental, our…” — corporate cliche opener, not patient-first — Suggested: “Tooth-colored fillings, done in one visit, by a team that knows what families in Columbus need.”
Line 1: “world-class” — banned AI phrase — Suggested: remove
Line 1: “leverages” — banned AI phrase — Suggested: “uses”
Line 1: “cutting-edge” — banned AI phrase — Suggested: remove
Line 1: “composite restoration” — clinical jargon — Suggested: “tooth-colored filling”
Line 1: “transform your smile” — banned phrase — Suggested: “protect your tooth”
- Total issues: 6
- Severity: high
- Voice fit: 2/10 — reads as generic agency boilerplate, four banned phrases in a single sentence
- Recommended action: structural rewrite
### Example 2 — Mostly clean, one voice deviation
INPUT:
Saturday appointments mean you don’t miss work. Walk in, sit down, and we’ll get you out the door — no waiting room shuffle. Looking for a dentist who actually values your time? We’d love to meet you.
OUTPUT:
Line 3: “Looking for a dentist who actually values your time?” — rhetorical question hook — Suggested: “If a dentist who values your time matters to you, we’d love to meet you.”
- Total issues: 1
- Severity: low
- Voice fit: 8/10 — warm, plainspoken, differentiator-led; one rhetorical question to fix
- Recommended action: light revision
### Example 3 — Pass
INPUT:
Saturday hours, no wait, and a team that remembers your kids’ names. That’s what we built Smile Springs to be.
OUTPUT:
PASS — voice on-brand, ready for internal review.
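One design note on the output format: because each issue is a single structured line, results can be aggregated mechanically across drafts, for example to see which banned phrases keep reappearing across a client's work. Here is a hypothetical helper, assuming the exact separator format the skill specifies (straight quotes in this sample; the skill may emit curly quotes, which this sketch does not normalize).

```python
import re

# Hypothetical helper for the skill's one-line issue format:
#   Line N: found — reason — Suggested: "rewrite"
# Useful for tallying recurring issues across many checked drafts.

ISSUE_RE = re.compile(r'^Line (\d+): (.+?) — (.+?) — Suggested: "?(.+?)"?$')

def parse_issues(output: str) -> list[dict]:
    issues = []
    for raw in output.splitlines():
        m = ISSUE_RE.match(raw.strip())
        if m:
            issues.append({
                "line": int(m.group(1)),
                "found": m.group(2),
                "reason": m.group(3),
                "suggested": m.group(4),
            })
    return issues

sample = 'Line 1: "world-class" — banned AI phrase — Suggested: remove'
print(parse_issues(sample))
```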
How to test the skill
Test on three drafts before relying on it. The order:
Draft 1 — known bad. Paste in a deliberately broken paragraph (corporate opener + 3 banned phrases + a clinical term). Confirm the skill catches all of them and proposes sane rewrites.
Draft 2 — known good. Paste in copy you’ve already shipped that the team loved. Confirm the skill returns PASS or only flags 1–2 minor items. If it flags many issues on a draft you know was good, the skill is overcalibrated — soften thresholds.
Draft 3 — real working draft. Pull a current Smile Springs draft from your work-in-progress folder. Run the skill. Read the output critically. Did it catch real issues? Did it false-flag anything? Adjust the banned-phrase list and voice-deviation patterns based on what you learn.
After three test cycles, the skill is calibrated. Keep using it. Update the banned-phrase list whenever you catch a new bad phrase in production output — that’s how the skill compounds over a year.
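Check 1 and the simpler Check 2 patterns are mechanical, so they can also run as a deterministic pre-check on a draft before you invoke Claude, and as a regression test when the phrase list grows. A sketch with an abbreviated phrase list; keeping it in sync with the SKILL.md is a manual step in this sketch.

```python
# Deterministic companion to Checks 1 and 2. Abbreviated banned list
# (the full list lives in the SKILL.md); the per-line word count is a
# crude line-level proxy for the 25-word sentence rule.

BANNED = [
    "streamline", "leverage", "world-class", "state-of-the-art",
    "cutting-edge", "transform your smile", "seamless", "delve",
]

def pre_check(draft: str) -> list[str]:
    findings = []
    for n, line in enumerate(draft.splitlines(), start=1):
        low = line.lower()
        for phrase in BANNED:
            if phrase in low:  # partial match on purpose ("leverages")
                findings.append(f"Line {n}: banned phrase '{phrase}'")
        if len(line.split()) > 25:
            findings.append(f"Line {n}: sentence over 25 words")
        if line.count("!") > 1:
            findings.append(f"Line {n}: multiple exclamation points")
    return findings

draft = "Our world-class team leverages new tools.\nSaturday hours, no wait."
print(pre_check(draft))
```

This does not replace the skill (voice fit and clinical-tone creep need judgment); it just makes phrase-list updates cheap to verify.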
Adapting to other clients
The Smile Springs skill is the seed. To port to another client:
- Copy the file. Rename to voice-check-{client-shortname}.md.
- Replace the Brand voice baseline section with the new client’s voice guide.
- Audit the Banned phrases list — most carry over (corporate AI cruft is universal); some are client-specific.
- Rewrite the Voice deviation patterns to match the new brand. Different clients tolerate different sentence rhythms.
- Rewrite the Clinical-tone creep check for the new vertical (a B2B SaaS client doesn’t have clinical creep — they have feature-list creep).
- Rewrite both example pairs with the new client’s actual copy.
A WEO client roster of 30 practices means 30 voice-check skills, each ~200 lines, all derived from this template. That’s the library by month three.
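The mechanical steps in that porting checklist can be scaffolded with a short script. This is a sketch under assumptions: the template filename and flat skills directory follow this module's convention, the client shortname is hypothetical, and the TODO markers are just one way to flag the sections that still need a human rewrite.

```python
from pathlib import Path

# Sketch of the porting checklist's mechanical steps. It automates only
# the renames; the voice baseline, deviation patterns, and examples
# still need a human rewrite, so it marks those sections with TODOs.

def port_skill(template: Path, client: str, out_dir: Path) -> Path:
    text = template.read_text()
    text = text.replace("smile-springs", client)
    text = text.replace("Smile Springs Family Dental", f"TODO({client}): client name")
    for section in ("## Brand voice baseline", "## Examples"):
        text = text.replace(section, f"{section}\n<!-- TODO: rewrite for {client} -->")
    out = out_dir / f"voice-check-{client}.md"
    out.write_text(text)
    return out
```

Usage might look like `port_skill(Path("~/.claude/skills/voice-check-smile-springs.md").expanduser(), "oak-dental", ...)`, where "oak-dental" is a made-up client shortname.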
When Skills Don’t Fit (Use MCP or Connectors Instead)
A skill is the right tool when the work is text-in, text-out and the value is in the prompt-shaping. A skill is the wrong tool when:
- You need live data. A skill can’t pull current Google Reviews, fetch a client’s HubSpot pipeline, or read patient appointment data. That’s MCP territory — see skills-vs-mcp-vs-plugins.
- You need to write to external systems. A skill can produce text. To send that text as an email, post to social, or update a CRM record, you need a connector or an MCP server.
- The action requires authentication. Skills run in Claude’s context with whatever access you’ve already given Claude. They can’t authenticate to a new system mid-skill.
Module 3 covers MCP and connectors — when to reach for each, and how the WEO AI Council reviews them. Skills + MCP together is the actual ceiling. Skills alone is the floor that takes you 80% of the way.
Key Takeaways
- The skill ecosystem has four trust layers: Anthropic-published (default install), known orgs (vet first), individual creators (assess maintenance signals), anonymous (default no).
- The 6-question vetting framework: publisher, recency, access scope, dependencies, license, fit. Run all six before installing on a client workstation.
- Snyk’s 2025 audit found 36% of community skills had prompt injection vulnerabilities. The risk is real. Vetting is not optional for client work.
- WEO posture: Anthropic skills pre-approved; everything else routes through AI Council intake before client-data workstations install.
- Authoring your own lightweight markdown skill takes 30–60 minutes and produces something better-fit than 90% of community skills for client-specific work.
- The Smile Springs voice-check skill above is the seed pattern — copy it, adapt per client, build the WEO skill library.
- A skill is the wrong tool when you need live data, write access, or authentication mid-action. That’s MCP/connectors territory — see Module 3.
Related
- Previous: Module 1 — Prompts as Reusable Artifacts
- Next: Module 3 — Connecting to Your Tools
- Agent Skills Overview
- Building Agents with Skills
- How to Author a Skill
- Shopping for Skills and Plugins (6-question framework)
- Marketing Skills Bundle (Corey Haines)
- Social Media Skills (Charlie Hills)
- skills repo
- claudeskills.info Directory
- Skills vs MCP vs Plugins
- WEO AI Governance
Try It
Three exercises, tagged by track. The Operator and Builder exercises are independent — pick the one that matches your role. The Both exercise applies to everyone.
[Operator] Install one skill from Marketing Skills Bundle (~30 min)
- Browse the Marketing Skills Bundle repo. Pick one skill whose description matches a task you do for Smile Springs (or a real WEO client where you have permission to experiment).
- Run it through the 6-question framework. Even though Marketing Skills Bundle is a Layer 2 trust source, the questions still apply — you’re vetting the specific skill, not the publisher.
- Install it (claude.ai: paste into a Project’s instructions; Claude Code: drop into ~/.claude/skills/).
- Run it on a real brief. Save the output.
- Document in your week-2 notes: what changed in your output? Faster? Better-fit? Worse? Would you keep it installed, or uninstall?
The honest “would I keep it” answer matters more than the time saved on this one run. A skill earns its keep across 20+ uses, not 1.
[Builder] Author the Smile Springs voice-check skill from scratch (~45 min)
- Copy the skill source from Section 3 above. Save to ~/.claude/skills/voice-check-smile-springs.md (Claude Code) or paste into a new Project on claude.ai.
- Test on three drafts in order: known-bad (deliberately broken), known-good (something the team loved), real working draft. Document what the skill caught and missed on each.
- Adjust the banned-phrase list, voice-deviation patterns, and clinical-creep checks based on what you learned. Add at least 3 phrases or patterns to one of the lists from your real-draft test.
- Save the v2 file. Run it on the same three drafts. Confirm the v2 catches what v1 missed.
Optional Builder bonus: port the skill to a second WEO client. The template adapts in ~15 minutes; you’ll feel how the voice baseline drives everything.
[Both] Run the 6-question framework on one wild skill (~15 min)
- Find one skill outside Marketing Skills Bundle and Anthropic’s official repo — anywhere in claudeskills.info, a YouTube creator’s repo, or a thread you’ve seen.
- Run all six vetting questions out loud. Write down each answer in a paragraph.
- Decide: green-light (install), yellow (more review needed), red (skip). Note the reason for each.
- If green-light and the skill genuinely fits a WEO use case: file an AI Council intake request. Use this exercise as the documentation.
The point isn’t whether you install. The point is the muscle memory — running the six questions before any non-Anthropic install becomes default behavior by week 4 of this course.
Done all three? You’ve shopped, vetted, and built. Module 3 — MCP and Connectors — is the next layer up: when text-in/text-out isn’t enough and Claude needs to actually reach out into the world.