Source: ai-research/eliot-prince-rite-prompt-framework-2026-05-02.md (Notion AI Recipe at notion.so/jonathonc/R-I-T-PROMPT-FRAMEWORK-f0025679607682938ee781ef06148696, published January 9 2026, fetched via Notion MCP 2026-05-02; from Eliot Prince’s AI Recipe Vault)
A 4-part prompt structure (Role → Input → Task → Example) plus a fifth Run/Review step. Cross-vendor (any AI). The reusable artifacts are (a) the structural skeleton itself, (b) the 80/20 rule (Role + Task = 80% of the value), and (c) the RITE Method Prompt Generator GPT, which asks the operator for the four components automatically. Pairs natively with the Lyra Prompt Writer skill, which auto-generates RITE-shaped prompts inside Claude rather than as a ChatGPT custom GPT. Most useful adjacent to Anthropic’s prompting best practices: RITE provides a memorable mnemonic for the same structural primitives Anthropic teaches under different names.
Key Takeaways
- R.I.T.E. is the full sequence, not just the acronym. The framework name “R.I.T” covers only the first three steps; the “(E)” Example step is the differentiator most operators skip. The full structure is Role (R) + Input (I) + Task (T) + Example (E) + Run/Review. The “TE” pairing matters: a precise Task without an Example produces structurally correct but stylistically wrong output. The Example teaches the model your quality bar, not just your requirements.
- 80/20 rule: Role + Task is the load-bearing pair. Eliot is explicit: “If you’re rushing, nail Role + Task. Input and Example make it perfect, but Role + Task gets you 80% there.” Practical test: a one-line Role (“Act as a senior copywriter with 15 years SaaS email experience”) plus a structured Task block (sections / length / tone) outperforms most ad-hoc prompts even without the Input/Example layers (a minimal builder sketch follows this list).
- Role statements work better with experience years and specific traits. Generic (“Act as a marketer”) underperforms specific (“Act as a customer research specialist with 10 years of experience in B2B SaaS. You’re analytical, methodical, and skilled at uncovering pain points, behavioral patterns, and buying triggers”). The pattern: profession + years + domain + traits + capabilities. Repeatable across roles — Eliot ships 3 example role statements (Customer research / Content writing / Strategy) that follow the exact same template.
- Task block has 6 dimensions to specify. Goal / Structure / Length / Tone / Format / Constraints. The structural template (sections list with required content per section) is the part operators most often under-specify. Eliot’s full ICP example shows 5 sections, each with bullet-level requirements (e.g., “Buying Triggers (3-5 scenarios) — what makes them start searching”). Reduces the “guessing” surface area to near-zero.
- Examples should be paired (Good + Bad) with explanation. The Pain Point example in the recipe shows good (“Hitting capacity ceiling — Can’t take on more clients without hiring, but margins are too thin to afford senior SEO talent. Stuck between turning down revenue and burning out the existing team”) vs bad (“They’re busy”), then explicitly enumerates why the good version works (specific / layered / emotional). The contrast plus the meta-explanation does more than either alone — same primitive as few-shot prompting (covered in prompt engineering essentials) but with an explicit quality rubric appended.
- Refinement is faster than restart. When the output is wrong, don’t rewrite the prompt. Instead, refine in-thread: “This is too generic. Go deeper on the pain points section.” / “Match the tone in my example more closely — shorter sentences, more conversational.” / “Add more specificity to the buying triggers.” This works because the RITE prompt already contains the structural intent — refinement only needs to nudge specific dimensions, not re-establish context.
- The Prompt Generator GPT is the cheat code. eliotprince.com/resources/prompt-generator-gpt — a custom GPT that asks the operator for the four components and auto-generates the RITE prompt. Removes the “I forgot the framework” failure mode. Lyra (the Skill from 5 Claude Skills) is the Claude-native equivalent.
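These takeaways reduce to a small builder. A minimal sketch, assuming only the block order from the skeleton shown in the next section; the function name and signature are illustrative, the Role string is the one quoted in the 80/20 bullet, and the Task string is invented:

```python
# Minimal RITE builder: Role + Task are mandatory (the 80/20 core); Input
# and Example are optional refinement layers. Names are illustrative.
def build_rite_prompt(role: str, task: str,
                      input_context: str | None = None,
                      example: str | None = None) -> str:
    blocks = [f"ROLE:\n{role}"]
    if input_context:
        blocks.append(f"INPUT:\n{input_context}")
    blocks.append(f"TASK:\n{task}")
    if example:
        blocks.append(f"EXAMPLE:\n{example}")
    return "\n\n".join(blocks)

# 80/20 usage, per the "if you're rushing" advice: Role + Task only.
prompt = build_rite_prompt(
    role="Act as a senior copywriter with 15 years SaaS email experience.",
    task=("Write a 3-email onboarding sequence. Each email: subject line, "
          "~120 words, conversational tone, one CTA."),
)
```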
How It Works (Compact)
| Step | Letter | Action | Time | What it achieves |
|---|---|---|---|---|
| 1 | R | Define the Role | ~1 min | Persona, expertise, perspective |
| 2 | I | Provide Input (Context) | ~2 min | Business / audience / constraints |
| 3 | T | Write the Task | ~2 min | Zero-ambiguity instructions |
| 4 | E | Add an Example | ~2 min | Quality benchmark |
| 5 | – | Run & Review | ~3 min | Validate, refine in-thread |
Final prompt skeleton:
```text
ROLE:
[Profession + years + domain + traits + capabilities]

INPUT:
[Business situation, audience, what you have/don't have, what you need]

TASK:
[Numbered sections with length + content requirements + tone + format]

EXAMPLE:
[Good example + Bad example + meta-explanation of why the good one works]
```
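For concreteness, a condensed instance of the skeleton assembled from the recipe fragments quoted above. Only the Role statement, the Buying Triggers line, and the good/bad pain point pair come from the recipe; the INPUT block and the remaining Task dimensions are invented for illustration:

```text
ROLE:
Act as a customer research specialist with 10 years of experience in B2B SaaS. You're analytical, methodical, and skilled at uncovering pain points, behavioral patterns, and buying triggers.

INPUT:
We sell an SEO reporting tool to small agencies (5-20 staff). I have support tickets and sales-call notes but no formal ICP document. I need an ICP I can hand to the content team.

TASK:
Write an ICP document with these sections:
1. Pain Points (4-6 bullets): specific, layered, emotional
2. Buying Triggers (3-5 scenarios) — what makes them start searching
Length: ~600 words. Tone: plain and direct. Format: markdown headings.

EXAMPLE:
Good pain point: "Hitting capacity ceiling — Can't take on more clients without hiring, but margins are too thin to afford senior SEO talent. Stuck between turning down revenue and burning out the existing team."
Bad pain point: "They're busy."
The good version works because it is specific, layered, and emotional; match its depth.
```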
Patterns Worth Lifting
- Mnemonic over methodology. The framework’s whole value is operator memorability. Compare to Anthropic’s prompting best practices: same structural primitives (role-setting, context-loading, task-specification, exemplars, iteration), but Anthropic teaches each as a chapter while RITE compresses to four letters. The trade-off is depth for pickup speed: RITE is faster to learn but less faithful to the deeper technical patterns. Use RITE as the operator layer and reach for the deeper patterns when iterating.
- Quality bar via good-vs-bad pairs. Showing both extremes, plus a meta-explanation of why the good one works, is reusable beyond prompting. Same primitive as onboarding course exercises (good vs bad email examples), ad-creative briefs (winning vs losing variants), and video script reviews (Mel’s feedback rules). Build a “good + bad + why” rubric for any deliverable type and operators converge on quality faster than with a single exemplar.
- Voice mode for prompt input. Recipe surfaces this as a productivity multiplier — “Dictate your RITE prompt 3x faster than typing.” The 80/20 RITE skeleton is short enough that voice transcription works without heavy editing, especially for the Role and Input blocks. Pairs with the Cowork AI Consultant recipe’s voice-first knowledge-file authoring.
- Reusable role statements as a personal asset. “Build a library of RITE prompts for common tasks (customer research, content creation, strategy work)”: treat role statements + task structures as named, versioned, reusable artifacts. Same pattern as Claude Skills (named reusable workflows) but lighter-weight; a Notion page or a Google Doc table works. Drop into Claude project knowledge for cross-thread availability (see the library sketch after this list).
- Pair RITE with Lyra. Lyra is the Claude Skill from the 5 Skills recipe that auto-generates RITE-shaped prompts. RITE is the manual structure; Lyra is the auto-generator. Operators benefit most from learning RITE first (so the output of Lyra is legible and refinable) and adopting Lyra second (so the activation energy stays low).
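A minimal sketch of that library as code, reusing the build_rite_prompt helper sketched under Key Takeaways; the dictionary name and the second entry are illustrative, while the first entry is the role statement quoted in the takeaways:

```python
# Role-statement library as a named, versioned artifact. Each entry follows
# the profession + years + domain + traits + capabilities template; only the
# first entry is quoted from the recipe, the second is illustrative.
ROLES = {
    "customer-research": (
        "Act as a customer research specialist with 10 years of experience "
        "in B2B SaaS. You're analytical, methodical, and skilled at "
        "uncovering pain points, behavioral patterns, and buying triggers."
    ),
    "content-writing": (
        "Act as a content writer with 8 years of experience in SEO-driven "
        "B2B SaaS blogs. You're concise, structured, and skilled at turning "
        "research into scannable, conversion-aware articles."
    ),
}

# Reuses build_rite_prompt from the sketch under Key Takeaways.
prompt = build_rite_prompt(
    role=ROLES["customer-research"],
    task="Draft the Buying Triggers section (3-5 scenarios).",
)
```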
Caveats
- Effectiveness claims are operator-reported. “Top 0.0001% of AI users” framing is hype, not measured. RITE is structurally sound but the empirical performance vs simpler prompts is not benchmarked in the recipe. Treat as one operator’s reusable scaffold, not a peer-reviewed framework.
- Single-shot bias. RITE optimizes for the first-shot prompt being good. Multi-turn / iterative workflows where the AI proposes options, the operator picks, and the AI refines — that’s a different shape. RITE still works in that mode, but the “Add Example” step gets diminishing returns once you’re in the iteration phase.
- Long prompts have a cost. A full 4-part RITE prompt with detailed Input + Example can hit 1000+ tokens before the model responds. For high-frequency tasks (daily / hourly), the cost-per-call matters; consider prompt caching, or strip back to the 80/20 core (Role + Task) once you’ve calibrated the model’s defaults (a caching sketch follows this list).
- British spelling throughout — same UK-authored fingerprint as the rest of the AI Recipe Vault. Adapt examples for US-English client work (e.g., “personalize” not “personalise”).
- Prompt Generator GPT is a third-party hosted tool at eliotprince.com — subject to change without notice. Lyra (Claude-native) is the more durable substitute if you want a vendor-managed surface.
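On the caching caveat above: a hedged sketch of Anthropic’s prompt caching applied to RITE, putting the stable blocks (Role, Input, Example) behind a cache breakpoint so repeated calls only pay full input price for the short, varying Task. The model id and block contents are placeholders, not from the recipe:

```python
import anthropic

# Stable RITE blocks go in the cached system prompt; only the Task varies.
role_block = "ROLE:\nAct as a customer research specialist with 10 years..."
input_block = "INPUT:\n[stable business context]"
example_block = "EXAMPLE:\n[good + bad + why]"
task_block = "TASK:\nDraft the Buying Triggers section (3-5 scenarios)."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; check current model ids
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": "\n\n".join([role_block, input_block, example_block]),
        "cache_control": {"type": "ephemeral"},  # cache breakpoint
    }],
    messages=[{"role": "user", "content": task_block}],
)
print(response.content[0].text)
```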
Try It
- Run RITE on a real WEO Marketly task this week. Pick a low-stakes one, e.g., drafting a client status update or summarizing a Slack thread. Time the prompt construction (full RITE, all 4 parts) against your usual shape. Measure output quality on a 1-5 scale and the number of refinement turns needed (a minimal logging sketch follows this list). Most operators see fewer refinements with RITE even on the first attempt.
- Build a library of role statements for the 5-10 expert personas WEO Marketly hires Claude as most often (SEO auditor / web reviewer / ad copywriter / GBP optimizer / content strategist / proposal writer). One paragraph each. Drop in a project knowledge file or a Notion table.
- Build a Good+Bad+Why rubric for one deliverable type. Pick a high-volume one — blog post / GBP description / ad copy / cold email. Two paragraph examples (one good / one bad) + 3-line meta-explanation. Reusable in the Example slot for every RITE prompt of that type.
- Compare to Lyra. Build the Lyra Skill in Claude. Run the same task through manual RITE vs Lyra-generated RITE. Compare prompt structure and output quality. Decide whether Lyra’s auto-generation matches your manual standard or whether your library of role statements + rubrics outperforms.
- Cross-reference Anthropic’s prompting best practices. RITE is a memorable surface; Anthropic’s docs are the deeper layer. After 2-3 weeks of RITE use, read Anthropic’s docs and notice which sections you naturally already practice.
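For the measurement in the first Try It item, a tiny stdlib logging harness; the file name, columns, and sample rows are illustrative:

```python
import csv
import datetime
import pathlib

# One row per task run: compare full-RITE against your usual prompt shape.
LOG = pathlib.Path("rite_bakeoff.csv")

def log_run(task: str, framework: str,
            quality_1_to_5: int, refinement_turns: int) -> None:
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "task", "framework",
                             "quality", "refinement_turns"])
        writer.writerow([datetime.date.today().isoformat(), task,
                         framework, quality_1_to_5, refinement_turns])

log_run("client status update", "RITE-full", 4, 1)
log_run("client status update", "ad-hoc", 3, 3)
```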
Related
- AI Recipe Vault — Eliot Prince’s Catalog — parent vault
- 5 Claude Skills To Build Right Now — Lyra (Skill 3) auto-generates RITE-shaped prompts
- Cowork “AI Consultant” Recipe — uses RITE-style prompts inside the 4-knowledge-file architecture
- Anthropic’s Prompting Best Practices — deeper structural layer
- Prompt Engineering Essentials — covers few-shot prompting + explicit criteria, the underlying primitives behind RITE’s Example and Task steps
- MEGA PROMPT CHEST — applied prompt library; some entries follow RITE structure implicitly
- OpenAI GPT-5 Prompting Guide — cross-vendor reference; tool-preamble + self-rubric primitives complement RITE
- Troubleshooting Claude — recovery when RITE outputs miss the bar
Open Questions
- Empirical benchmarks. No A/B data on RITE vs unstructured prompts vs other frameworks (CoT / Tree of Thoughts / Anthropic’s structured prompts). Worth a 30-task internal bake-off if RITE becomes load-bearing for WEO Marketly operators.
- Token cost of full-RITE prompts. Full-RITE with detailed Example can hit 1000+ input tokens. For high-frequency operations (e.g., GBP description generation at scale) the cost matters. Pair with prompt caching or RITE-Lite (Role + Task only) for high-volume cases.
- RITE for multi-turn workflows. RITE optimizes for first-shot quality. For iterative workflows (proposal drafts → client feedback → revisions), the framework needs adaptation — possibly a “Memory” or “Prior State” addition between Input and Task. Worth experimenting with.
- Author follow-on. Eliot is actively publishing recipes; whether RITE evolves (RITE-Plus? RITER with Refine?) is worth monitoring. Watchlist candidate.