Techniques for getting reliable, high-quality output from Claude and other LLMs — system prompts, few-shot examples, chain-of-thought, structured output, validation loops, and prompt caching. Focus is practical: what to write, when to write it, and how to verify it worked.
This topic was seeded 2026-04-12 by synthesizing prompt material scattered across claude-ai/ (primarily CCA-F Technical Reference). New prompt-focused sources should land here directly.
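Of the techniques listed above, prompt caching is the easiest to sketch concretely. A minimal request-body sketch, assuming the Anthropic Messages API's `cache_control` mechanism (the model name and system-prompt contents are illustrative, not from any source here):

```python
# Sketch: the long, stable system prompt is marked with cache_control so
# repeat calls can reuse the cached prefix; only the short user turn varies.
STABLE_SYSTEM_PROMPT = "You are a code reviewer. [imagine 1000+ tokens of style rules here]"

def build_cached_request(user_message: str) -> dict:
    """Build a Messages API payload that caches the stable system prompt."""
    return {
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        # system as a list of content blocks, so one block can carry cache_control
        "system": [
            {
                "type": "text",
                "text": STABLE_SYSTEM_PROMPT,
                # Marks the end of the cacheable prefix.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_cached_request("Review this diff: ...")
```

The payload is built but not sent; wiring it to a real client (and verifying cache hits via the usage fields in the response) is left to the caching article itself.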
Articles
- Claude Prompting Best Practices (Official Anthropic Docs) — Anthropic’s authoritative prompting reference for Opus 4.7 / 4.6 / Sonnet 4.6 / Haiku 4.5. Opus 4.7-specific calibrations (literalism, effort tuning, subagent spawning, design defaults, code-review recall), general principles, XML structuring, output formatting, tool use, adaptive-thinking migration, agentic-systems prompting, migration guidance.
- Prompt Engineering Essentials — Consolidated reference covering few-shot prompting, explicit criteria, prompt chaining, the interview pattern, validation and retry-with-feedback, and self-correction. Synthesized from CCA-F technical reference.
- Winston MIT Presentation Prompts (God of Prompt) — Six-prompt Claude workflow applying Patrick Winston’s MIT “How to Speak” framework. Each prompt uses the same `<role>/<task>/<steps>/<rules>/<output>` XML skeleton — a clean reference implementation of Anthropic’s structured-prompt conventions.
- Godin Personal Brand Prompts (God of Prompt) — Six-prompt Claude workflow for personal brand engineering. Uses the same five-element mnemonic as the Winston thread (Symbol / Slogan / Surprise / Salient idea / Story) plus a combinator prompt. Demonstrates the mini-prompts + combinator chain pattern.
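The five-tag skeleton those two threads share is easy to template. A minimal sketch (the helper name and example contents are my own, not from the source threads):

```python
def build_prompt(role: str, task: str, steps: list[str], rules: list[str], output: str) -> str:
    """Assemble the <role>/<task>/<steps>/<rules>/<output> XML skeleton."""
    steps_block = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    rules_block = "\n".join(f"- {r}" for r in rules)
    return (
        f"<role>\n{role}\n</role>\n\n"
        f"<task>\n{task}\n</task>\n\n"
        f"<steps>\n{steps_block}\n</steps>\n\n"
        f"<rules>\n{rules_block}\n</rules>\n\n"
        f"<output>\n{output}\n</output>"
    )

prompt = build_prompt(
    role="You are a speaking coach trained on Winston's 'How to Speak'.",
    task="Critique the talk outline below.",
    steps=["Identify the promise", "Check for a verbal punctuation cue"],
    rules=["Quote the outline when critiquing", "No generic praise"],
    output="A numbered critique, one item per step.",
)
```

Templating the skeleton is what makes the mini-prompts + combinator chain cheap to maintain: six prompts share one structure, so edits to the skeleton propagate everywhere.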
- LinkedIn Thought Leadership Prompts (God of Prompt) — Five-prompt Claude workflow for LinkedIn authority: narrative arc, signature idea, 6-week content plan, proof stack, comments-as-leads. Uses markdown headers (`#ROLE:` / `#TASK:` / etc.) instead of XML tags — useful comparison for when to pick which syntax.
- LinkedIn Funnel — 7-Prompt Claude Workflow — End-to-end LinkedIn lead-gen funnel: profile optimization (WAR framework) → ICP pain-point research (Reddit-driven) → weekly content system (2/3/2 funnel split) → sales-call mining → repurposing → outbound DMs → lead-magnet post. Conversational-prose syntax, interview pattern in 6 of 7 prompts. Broader funnel coverage than the Edmondson thread; strong ban-lists on AI-slop phrasing.
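For comparison, the markdown-header syntax renders the same kind of sections flat, with no closing tags to balance. A sketch (section contents illustrative, not from the thread):

```python
# Markdown-header prompt syntax: flat and human-readable. A reasonable rule
# of thumb is headers for short flat prompts, XML tags when sections are
# long or nested enough that the model needs unambiguous boundaries.
SECTIONS = {
    "ROLE": "You are a LinkedIn ghostwriter for B2B founders.",
    "TASK": "Draft a narrative-arc post from the notes below.",
    "RULES": "No hashtags. No 'I'm excited to announce'.",
    "OUTPUT": "One post, under 200 words, hook in line one.",
}

prompt = "\n\n".join(f"#{name}:\n{body}" for name, body in SECTIONS.items())
```

Because there is no nesting, the whole prompt is one dict and one join; that simplicity is the argument for this syntax when the sections stay short.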
- Troubleshooting Claude — Common Failure Modes and Recovery Moves — Six failure modes intermediate users hit (refusals, context exhaustion, tool-use failures, hallucination, sycophancy, drift) with what-it-looks-like / why-it-happens / recovery / prevention for each. Decision rule for when to recover in-thread vs restart fresh. Anchored on Anthropic’s reduce-hallucinations + sycophancy research + MCP debugging docs.
- OpenAI GPT-5 Prompting Guide — Cross-Vendor Reference — Practical takeaways from OpenAI’s official GPT-5 prompting guide that port directly to Anthropic models. Eagerness control (less / more / explicit budgets / escape hatches), tool preambles, the universal warning that contradictory prompts hurt reasoning models more than they hurt non-reasoning models, Cursor’s prompt-tuning case study (verbosity split, don’t-defer-to-the-user rule, lighter thoroughness language for newer models, structured XML specs), self-rubric prompting, codebase-rules blocks, and the metaprompting pattern. Also flags the OpenAI-specific bits to skip (`apply_patch`, the `minimal` reasoning tier, Responses API). Authors: Anoop Kotha + Julian Lee + Eric Zakariasson (Cursor) + Erin Kavanaugh.
- SEO Prompt Compilation — 17-section, ~120-prompt Google Doc (no byline) covering Deep Target SEO + keyword research, content creation, on-page + technical SEO, local SEO, email marketing, landing pages, market research/strategy, competitor analysis, SEO reporting, advanced/unconventional SEO, case studies, business strategy, micro-SaaS, and lead magnets. Quality is mixed (one-line tricks alongside 110-line frameworks). Standouts: 3-Pillar Authority Accelerator (deep competitor research → 12-month, 30+ article roadmap with spreadsheet-ready table), Forensic Psychology Analysis (4-layer Customer DNA framework with Tripwires + Vernacular Goldmine), SEO Topical Map clusterer (1,000-keyword input → priority-scored cluster table), Authority Miner (NotebookLM-specific “borrow credibility” workflow), Hardback Book Mockup + New Age App Lead Magnets. Mostly platform-agnostic; five entries assume NotebookLM / customGPT / Manus. Bracket-template syntax — modern Anthropic practice would re-wrap the high-value ones in XML structure tags before saving as Skills.
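The suggested re-wrap of bracket templates into XML structure tags can be done mechanically. One possible transform (the function name, regex, and example template are assumptions for illustration, not taken from the compilation):

```python
import re

def fill_bracket_template(template: str, values: dict[str, str]) -> str:
    """Replace [PLACEHOLDER] slots, wrapping each filled value in an XML tag
    so downstream prompts can reference the section by name."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        tag = key.lower().replace(" ", "_")
        return f"<{tag}>{values[key]}</{tag}>"
    return re.sub(r"\[([A-Z ]+)\]", sub, template)

template = "Build a topical map for [NICHE] targeting [AUDIENCE]."
filled = fill_bracket_template(template, {"NICHE": "home solar", "AUDIENCE": "US homeowners"})
# filled == "Build a topical map for <niche>home solar</niche> targeting <audience>US homeowners</audience>."
```

Tagging the filled values (rather than splicing them in bare) is what makes the result XML-structured in the Anthropic sense: later instructions can say "using only the `<niche>` above" without ambiguity.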
- R.I.T(E) Prompt Framework — Eliot Prince’s 4-Part Structure — 4-part prompt structure (Role / Input / Task / Example) plus a Run/Review step. Cross-vendor (any AI). Operator-friendly mnemonic over methodology — covers the same primitives as Anthropic’s Claude Prompting Best Practices but compressed to four letters. 80/20 rule: Role + Task = 80% of the value. Pairs with Lyra (the auto-generator Skill from Five Claude Skills). Includes the RITE Method Prompt Generator GPT. British-spelling fingerprint of the AI Recipe Vault.
Related topics
- Claude AI — Claude Code, API, skills, MCP, agents. Prompt techniques intersect with every article there.
- Agents & Agentic Systems — Agent loops, tool-use prompting, multi-agent coordination.
- AI Video & Content Production — Voice profiles and banned-pattern lists are applied prompt engineering for creative work.
Open questions
The prompt-engineering research agenda is maintained in the vault (_research-agenda.md) as an internal working document — not published.