Source: ai-research (web research, 2026-04-11) Repo: github.com/avidevelops/claude-architect-exam-prep Stars: 39 (at time of ingest)
A breakdown of exam-style practice questions from the community prep repository, organized by CCA-F domain. This is NOT a copy of the questions — it summarizes the patterns, principles, and tradeoff categories each question tests. Use this to identify which architectural concepts you need to study in more depth.
About the Source Repo
- 15+ exam-style scenario questions with detailed explanations
- Each question includes: scenario context, multiple-choice options, correct answer with reasoning, why each alternative is weaker, and a key takeaway principle
- NOT official Anthropic material, but aligns closely with published certification domain descriptions
- Active community: 39 stars, 14 forks
- Format mirrors the actual exam: production scenario context followed by “which approach is best” questions
Domain 1: Agentic Architecture (27%)
Patterns tested
- Batch API cost optimization — when processing large volumes of non-urgent tasks, the Batch API offers 50% cost savings. Questions test whether you recognize when batch is appropriate (non-real-time) vs. when real-time API is required (user-facing, latency-sensitive)
- SLA calculations for agent chains — when agents call tools in sequence, overall SLA is the product of individual reliabilities. A 5-step chain with 99% reliability per step ≈ 95% overall. Questions test whether you can calculate this and design for it (worked through in the sketch after this list)
- Fallback loop architecture — when an agent step fails, the correct response depends on the failure type. Transient failures get retries; systematic failures need fallback tools; ambiguous failures need human escalation (a routing sketch appears at the end of this domain)
- Workflow decomposition decisions — when to split a monolithic agent into subagents vs. keeping it unified. Splitting adds coordination overhead but improves reliability and debuggability
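A quick worked version of the SLA arithmetic (the five-step, 99%-per-step numbers come from the bullet above; the helper itself is an illustrative sketch):

```python
def chain_reliability(step_reliabilities: list[float]) -> float:
    """Overall success rate of a sequential agent chain:
    every step must succeed, so per-step reliabilities multiply."""
    overall = 1.0
    for r in step_reliabilities:
        overall *= r
    return overall

# Five steps at 99% each -> ~95.1% overall, i.e. roughly 1 run in 20 fails.
print(chain_reliability([0.99] * 5))  # ~0.951

# Inverting the formula: to hit 99% overall across 5 steps,
# each step needs 0.99 ** (1/5), roughly 99.8% reliability.
```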
Key principle: cost-aware architecture
The exam tests whether you can make architectural decisions that balance cost, latency, and reliability. The cheapest approach is not always best; the fastest approach is not always best. The right answer depends on the specific constraints given in the scenario.
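The fallback-loop bullet above maps failure types to responses; here is a minimal sketch of that routing. The exception classes and handler signatures are hypothetical, invented for illustration:

```python
import time

# Hypothetical failure categories for illustration.
class TransientError(Exception): ...    # timeout, rate limit
class SystematicError(Exception): ...   # tool down or misconfigured
class AmbiguousError(Exception): ...    # cause cannot be determined

def run_step_with_fallback(step, fallback, escalate, max_retries=3):
    """Route each failure type to the matching response: retry transient
    failures, use a fallback tool for systematic ones, escalate
    ambiguous ones to a human."""
    for attempt in range(max_retries):
        try:
            return step()
        except TransientError:
            time.sleep(2 ** attempt)  # backoff, then retry the same step
        except SystematicError:
            return fallback()         # retrying will not help; switch tools
        except AmbiguousError:
            return escalate()         # hand off to a human reviewer
    return fallback()                 # retries exhausted: treat as systematic
```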
Domain 2: Tool Design & MCP Integration (18%)
Patterns tested
- Schema design for ambiguous inputs — when a tool parameter could be interpreted multiple ways (e.g., “status” could mean account status, order status, or payment status), the correct solution is enum constraints or separate parameters, not prompt instructions
- Tool interface design — lookup/action tool pairs vs. monolithic tools. Questions present a scenario where a single tool takes a natural language name and performs an action, then ask how to make it more reliable. The answer is always: split into lookup (returns ID) + action (takes ID), as sketched after this list
- Pagination patterns — when a tool returns data, it should return a page with metadata (count, cursor, has_more), not a raw dump. Questions test this by presenting scenarios with large result sets
- Machine-readable identifiers — questions present tool schemas using natural language parameters (`customer_name: "Acme"`) and test whether you recognize the fragility. The fix: `customer_id: "cust_123"`
- Error handling in tool chains — when tool B depends on tool A, what happens when A returns an error? Questions test structured error propagation vs. silent failure vs. retry logic
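A sketch of the lookup/action split and the pagination pattern from the list above. Tool names, fields, and the `cust_` ID format are invented for illustration; the definitions loosely follow the JSON-schema style of tool specs:

```python
# Fragile monolithic design: one tool takes a natural-language name and
# acts on it, so a typo or an ambiguous name can hit the wrong record.
# Reliable split: a lookup tool returns stable machine IDs, and the
# action tool accepts only those IDs.

lookup_customer = {
    "name": "lookup_customer",
    "description": "Find customers matching a name; returns machine IDs.",
    "input_schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "cursor": {"type": "string", "description": "Opaque page cursor"},
        },
        "required": ["name"],
    },
}
# Paginated response shape: a page plus metadata, never a raw dump.
# {"results": [{"customer_id": "cust_123", "name": "Acme Corp"}],
#  "count": 1, "cursor": null, "has_more": false}

update_customer_status = {
    "name": "update_customer_status",
    "description": "Change a customer's account status, by machine ID only.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "status": {"type": "string"},
        },
        "required": ["customer_id", "status"],
    },
}
```

The split also gives the agent a natural checkpoint: if the lookup returns zero or multiple matches, it can ask for clarification instead of guessing.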
Key principle: semantic validation through schema
JSON schema is not just for formatting — it is a semantic validation layer. Enums constrain the model to valid options. Required fields prevent incomplete inputs. Pattern matching catches malformed IDs. The schema is the first line of defense against hallucinated tool inputs.
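A compact illustration of that principle, with each schema feature mapped to the failure class it prevents (the field names and ID pattern are hypothetical):

```python
# Each schema feature closes off one class of hallucinated input.
order_status_input = {
    "type": "object",
    "properties": {
        # Enum: the model must pick a valid option; invented statuses
        # are rejected before any tool code runs.
        "status": {
            "type": "string",
            "enum": ["pending", "shipped", "delivered", "cancelled"],
        },
        # Pattern: malformed or fabricated IDs fail validation.
        "order_id": {"type": "string", "pattern": "^ord_[a-z0-9]{8}$"},
    },
    # Required: incomplete calls are rejected rather than half-applied.
    "required": ["order_id", "status"],
    "additionalProperties": False,
}
```

Note this also resolves the ambiguous-"status" bullet above: if account, order, and payment status are all in play, each gets its own parameter with its own enum.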
Domain 3: Prompt Engineering & Structured Output (20%)
Patterns tested
- Structured output for reliable parsing — when the downstream system needs to parse the model’s output, structured output schemas eliminate parsing failures. Questions test when free-form text vs. structured output is appropriate
- Self-correction validation — have the model output both a result and a verification of that result. If they conflict, trigger review. This is called “dual-field validation” in some prep materials (sketched after this list)
- Prompt refinement vs. system scaling — a prompt that works 80% of the time: do you fix the prompt or add retry logic? The exam consistently favors fixing the root cause (the prompt) over adding compensating mechanisms (retries)
- Schema constraints to prevent hallucinations — when the model must select from a known set of options, encoding those options as an enum in the output schema is deterministic. Listing them in the prompt text is probabilistic. The exam prefers deterministic
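A sketch of the dual-field pattern from the list above (“dual-field validation” is the prep materials’ term, not an official one); the schema and field names are invented for illustration:

```python
# The model fills in both the answer and an independent self-check;
# a mismatch routes the output to review instead of downstream use.
classification_schema = {
    "type": "object",
    "properties": {
        "category": {
            "type": "string",
            "enum": ["billing", "technical", "account"],  # deterministic option set
        },
        "verification": {
            "type": "object",
            "properties": {
                # The model restates its supporting evidence...
                "evidence": {"type": "string"},
                # ...and independently re-derives the category.
                "category_check": {
                    "type": "string",
                    "enum": ["billing", "technical", "account"],
                },
            },
            "required": ["evidence", "category_check"],
        },
    },
    "required": ["category", "verification"],
}

def needs_review(output: dict) -> bool:
    """Flag outputs whose answer and self-check disagree."""
    return output["category"] != output["verification"]["category_check"]
```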
Key principle: optimize before you scale
This principle appears in multiple questions across domains. The correct first step is almost always to improve the prompt, schema, or tool design — not to add more infrastructure, retries, or fallback logic. Scale amplifies problems; optimization eliminates them.
Domain 4: Claude Code Configuration (20%)
Patterns tested
- Project configuration — CLAUDE.md as the canonical source of project conventions. Questions test whether configuration belongs in CLAUDE.md, in code, or in prompts
- Permission scoping — giving agents minimal necessary permissions. Questions present scenarios where an agent has broad permissions and ask how to reduce risk
- Session management — how to handle context across sessions, compaction, and what happens when context is lost
Key principle: make the right thing easy
Configuration should encode team decisions so individual agents follow them automatically. If a convention has to be re-explained in every prompt, it belongs in configuration instead.
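A hypothetical CLAUDE.md fragment showing the idea; the conventions themselves are invented for illustration, but each one encodes a team decision the agent then follows without per-prompt reminders:

```markdown
# Project conventions

## Commands
- Run tests with `make test`, never by invoking pytest directly
- Regenerate API types with `make codegen` after editing schemas/

## Style
- All new public functions require type hints

## Boundaries
- Never edit files under migrations/ by hand; use `make new-migration`
```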
Domain 5: Context & Reliability (15%)
Patterns tested
- Session management with state changes — an agent was running, paused, and now resumes. The external system changed while the agent was paused. Questions test whether you design for state validation on resume or blind continuation
- Context window management — given a large amount of information, what goes in the system prompt, what goes in the user message, and what gets summarized or excluded. Questions test understanding of the “lost in the middle” effect
- Data provenance — in a multi-step agent chain, can you trace a claim back to the tool call that produced it? Questions test architectural patterns for maintaining provenance (structured logs, tagged outputs); see the sketch after this list
- Security in MCP contexts — tool permissions, credential management, and what an agent should and should not have access to
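One way to implement the provenance pattern from the list above: tag every tool result with a call ID so any downstream claim can be traced back to the call that produced it. The record shape here is hypothetical:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ToolResult:
    """A tool output tagged with enough metadata to trace any
    downstream claim back to the call that produced it."""
    tool_name: str
    arguments: dict
    output: dict
    call_id: str = field(default_factory=lambda: uuid.uuid4().hex)

# Later steps carry call_id forward in their own inputs, and the final
# answer can cite it: "balance=42.17 (get_account_balance, call 3f9a...)".
```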
Key principle: trust but verify on resume
Never assume the world is the same as when you left it. Session resumption requires re-validating external state before continuing. This is tested through scenarios where blind continuation leads to acting on stale data.
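A minimal sketch of validate-on-resume; the function names and checkpoint shape are hypothetical:

```python
def resume_agent(checkpoint, fetch_current_state, replan):
    """Re-validate external state before continuing a paused run.
    Blind continuation would act on the world as it was at pause time."""
    current = fetch_current_state()  # re-read the external system
    if current != checkpoint["observed_state"]:
        # The world changed while the agent was paused: re-plan from
        # current state instead of replaying stale assumptions.
        return replan(current)
    return checkpoint["next_step"]   # state unchanged: safe to continue
```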
Cross-Domain Patterns
Several principles appear in questions across multiple domains:
| Principle | Domains where tested |
|---|---|
| Deterministic over probabilistic | All 5 |
| Optimize before scaling | 1, 3, 4 |
| Machine IDs over natural language | 2, 4 |
| Paginate all results | 1, 2 |
| Structured error responses | 1, 2, 5 |
| Scope tools dynamically | 1, 2, 4 |
| Validate on resume | 1, 5 |
How to Use This for Study
- Identify your weak domains — if the patterns in a domain feel unfamiliar, that domain needs more study time
- Practice the tradeoff format — the exam does not ask “what is X.” It asks “given constraints A, B, and C, which approach is best.” For each pattern above, think about when it would be the wrong choice
- Build mental models for each principle — when you see “deterministic over probabilistic” in a question, you should immediately know: schemas, enums, backend validation > prompt instructions, retries
- Work the actual practice questions — this summary tells you what to expect, but the repo has full scenario contexts with detailed explanations of why each answer is correct and why alternatives are weaker
Key Takeaways
- The exam heavily favors deterministic/structural solutions — if a question offers both a schema-based and a prompt-based approach, the schema wins
- Cost-aware architecture is a recurring theme: know when Batch API vs. real-time is appropriate, and how to calculate chain reliability
- Tool design questions almost always resolve to: use machine IDs, split lookup from action, paginate results, validate with schema
- “Optimize before scaling” is perhaps the single most-tested principle — it appears in at least 3 domains
- Session management questions test whether you validate external state on resume — never assume continuity
- The community repo is a strong supplement to official Anthropic Academy courses, but the actual exam may test patterns not covered here
Try It
- Clone the repo — `git clone https://github.com/avidevelops/claude-architect-exam-prep.git` — and work through every question
- Score yourself by domain — track which domains you get right and which you miss, then focus study on your weakest areas
- For each wrong answer, understand why — the detailed explanations are the most valuable part; do not just check correct/incorrect
- Create your own scenarios — take a real system you have built and write exam-style questions about its architecture. This forces you to think about tradeoffs from the examiner’s perspective
- Revisit after studying — work through the questions once before studying, once after. Track improvement
Related
- CCA-F Practice Exam (60 Questions) — 60 additional practice questions from a separate study guide
- CCA-F Technical Reference — deep technical content tested by these questions
- Claude Certified Architect — Foundations (CCA-F)
- CCA-F Study Guide
- Claude Code Subagents
- Essential MCP Servers for 2026
- Skill Design Patterns
- Skills vs MCP vs Plugins
- Claude Agent Hierarchy
Open Questions
- How frequently is the community repo updated — does it track exam changes?
- Are there other community practice question sets beyond this repo?
- How representative are these questions of actual exam difficulty — are real questions harder, easier, or comparable?
- Does Anthropic plan to release official practice questions or a sample exam?