Source: ai-research/eliot-prince-cowork-projects-ai-consultant-2026-05-02.md (Notion AI Recipe at notion.so/jonathonc/Claude-Cowork-Projects-b3125679607682e4aad801e97d4dc355, recipe dated March 23 2026, fetched via Notion MCP 2026-05-02; from Eliot Prince’s AI Recipe Vault)

A 6-step recipe for turning a Claude Chat Project into a Cowork-enabled “AI Consultant” that researches new clients, produces deliverables in your voice, and runs autonomously inside Cowork. The unlock is Cowork’s Import from project feature — your Chat Project’s knowledge files and custom instructions flow directly into a Cowork Project, eliminating the old re-explaining-context overhead. The reusable architectural pattern is the 4-Knowledge-File spec: Business DNA + Client Intelligence Brief + Service Playbook + Consulting Framework. The most reusable artifact is the Client Intelligence Brief prompt — an industry-agnostic 6-domain due-diligence research template that produces a 10-section output report.

Key Takeaways

  • The unlock is Import from project. Cowork now has a “Projects” section in its sidebar with a + button → “Import from project” → search and select your Claude Chat Project → choose local folder → done. Knowledge files and custom instructions transfer; Cowork can additionally create files, organize folders, and execute multi-step tasks. Before this, operators had to copy-paste context into every Cowork session. This is the connective-tissue feature that makes Chat Projects load-bearing for Cowork operators.
  • The 4-Knowledge-File architecture is the spec. File 1 = Business DNA (your business, voice, values, frameworks, credentials — “if you hired a new consultant, what would the onboarding pack contain?”). File 2 = Client Intelligence Brief (a reusable research prompt — paste in client name + website, get a 10-section briefing). File 3 = Service Playbook (services, pricing, timelines, deliverables, common objections). File 4 = Consulting Framework (your actual delivery methodology, structured by phase: Discovery & Audit / Analysis & Strategy / Recommendations & Reporting / Implementation & Support — “the actual process, not what you wish you did”).
  • Client Intelligence Brief prompt is the reusable artifact. Quoted verbatim in the source file. 6-domain research scope (Business Profile / Market Positioning / Digital Presence / Financial Indicators / Growth Trajectory / Industry Context); produces a 10-section output report (Executive Summary → Source References). Industry-agnostic — swap the CLIENT INFORMATION block at the top, run unchanged. Eliot’s claim: turns 4 hours of manual research into 10 minutes. Worth running on a real WEO Marketly prospect this week to calibrate. A skeleton of the prompt’s shape appears after this list.
  • “Fundamentals before automation” is the load-bearing principle. Build the 4 knowledge files → write Custom Instructions → test in Claude Chat → THEN import into Cowork. Skipping the Chat-test step is how operators end up with “a fast system that produces rubbish.” Echoes the discipline in build and the AIOS pattern — get the prompt working in a single conversation before automating it. Most operator failures are upstream of the automation, not in it.
  • Custom Instructions should cover 5 dimensions. Role definition (senior consultant / strategist / research analyst — be explicit), Behaviour rules (tone, level of detail, ask-vs-assume), Output standards (formatting, structure, quality bar), Knowledge file routing (“when researching a new client, use the Client Intelligence Brief prompt”; “when writing proposals, reference Service Playbook + Consulting Framework”), and Guardrails (what not to do, what to check first). The routing layer is the often-missed piece — without it, Claude has to guess which file applies to which task. A minimal sketch follows this list.
  • “Fix the instructions, not the chat” is the operator discipline. When Claude gets something wrong, don’t just correct it in the live conversation — update the Custom Instructions so it doesn’t happen again. Treats the project as a system being refined, not a chat being tolerated. Same fix-upstream principle as updating the Consulting Framework when client engagements reveal “I should do that differently next time.”
  • Voice notes lower the activation energy. If writing your Business DNA or Consulting Framework “feels like pulling teeth,” record yourself for 10 minutes → transcribe → use Claude to structure. The recipe surfaces this as a recurring trick — same approach as the Cowork Getting Started recipe’s memory-loading-via-voice-recording step. Voice-first → tidy-later eliminates the “blank page” friction that kills knowledge-file projects.
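
For reference, a skeleton of the Client Intelligence Brief prompt’s shape. The verbatim prompt lives in the source file; the 6 domain names and the two named output sections below are from the source, everything else is an illustrative placeholder:

[ Client Intelligence Brief (skeleton) ]
    CLIENT INFORMATION
      - Client name: [name]
      - Website: [url]
    RESEARCH SCOPE (6 domains)
      1. Business Profile
      2. Market Positioning
      3. Digital Presence
      4. Financial Indicators   (Companies House for UK clients; local registries elsewhere)
      5. Growth Trajectory
      6. Industry Context
    OUTPUT (10-section report)
      1. Executive Summary
      2-9. As defined in the verbatim prompt (not reproduced here)
      10. Source References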
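
And a minimal Custom Instructions sketch covering the 5 dimensions. Every line is an illustrative placeholder, not Eliot’s wording; the two routing lines paraphrase the examples quoted above:

[ Custom Instructions (sketch) ]
    Role:       Senior [discipline] consultant for [business name].
    Behaviour:  Match the tone in Business DNA; ask before assuming scope.
    Output:     Client-ready structure; lead with an executive summary.
    Routing:    New-client research → run the Client Intelligence Brief prompt.
                Proposals → reference Service Playbook + Consulting Framework.
    Guardrails: No pricing outside Service Playbook ranges; flag low-confidence
                research findings instead of asserting them.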

Architecture Diagram (mental model)

[ Claude Chat Project ]
    Knowledge Files:
      - Business DNA
      - Client Intelligence Brief (reusable prompt)
      - Service Playbook
      - Consulting Framework
    Custom Instructions:
      - Role / Behaviour / Output / Routing / Guardrails

         ↓  Import from project

[ Cowork Project (same brain + execution) ]
    Same knowledge + instructions
    + Local folder for file output
    + Multi-step task execution
    + Cowork Dispatch (mobile assignment)
    + Scheduled tasks (recurring work)

Patterns Worth Lifting

  • Knowledge file as a reusable prompt. File 2 (Client Intelligence Brief) is not data — it’s an executable prompt stored in the project knowledge. Naming this as a file rather than a Custom Instruction lets Claude reference it explicitly (“use the Client Intelligence Brief prompt”) and lets the operator iterate on the prompt independently of behaviour rules. Generalizes: any reusable workflow becomes a knowledge file (e.g., “Proposal Template”, “Strategy Doc Template”, “Kickoff Agenda”).
  • Methodology-as-code. File 4 (Consulting Framework) is a formal description of how YOU deliver work. Most agencies have this in tribal knowledge or scattered Notion pages. Externalizing it into a single document Claude can reference is the actual leverage of the recipe — the AI consultant becomes useful in proportion to how well-documented your real methodology is. Prerequisite, not output. See the File 4 skeleton after this list.
  • One project per business unit. Eliot’s “what’s next” suggests separate projects per area (content, sales, ops). The architecture scales — same import-from-project flow per Cowork project. For agency operators, plausible split = one project per service line (SEO / Web / Ads / Strategy) with shared Business DNA + per-line Service Playbook + Consulting Framework. A layout sketch follows this list.
  • Living documents. Both the Consulting Framework and Custom Instructions are framed as documents you update after every engagement. The system gets smarter via deliberate post-mortem updates, not via in-chat corrections. Same pattern as Superpowers’ codified discipline of writing the methodology down so the next agent inherits the lesson.
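
A sketch of what File 4’s skeleton might look like, built from the four phases the recipe names; the bullet content is placeholder, not Eliot’s actual framework:

[ Consulting Framework (skeleton) ]
    Phase 1: Discovery & Audit
      - inputs, tools, what “done” looks like for this phase
    Phase 2: Analysis & Strategy
      - how findings turn into recommendations
    Phase 3: Recommendations & Reporting
      - deliverable formats, review cadence
    Phase 4: Implementation & Support
      - handover, check-ins, escalation triggers
    Maintenance: update after every engagement; the “I should do that
    differently next time” notes land here, not in chat corrections.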
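
And the plausible per-service-line split, sketched as a project layout; the shared-vs-per-line file division is an assumption extrapolated from the pattern above:

[ Per-service-line layout (sketch) ]
    Shared across projects:
      - Business DNA
      - Client Intelligence Brief (identical copy per project; iterate it in one place)
    Cowork Project: SEO       →  SEO Service Playbook + SEO Consulting Framework
    Cowork Project: Web       →  Web Service Playbook + Web Consulting Framework
    Cowork Project: Ads       →  Ads Service Playbook + Ads Consulting Framework
    Cowork Project: Strategy  →  Strategy Service Playbook + Strategy Consulting Framework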

Caveats

  • Source is one operator’s recipe, not Anthropic-published guidance. Test on real client work before depending on it. The recipe’s “10 minutes vs 4 hours” claim for client research is plausible for the breadth covered, but output quality tracks the quality of the underlying sources (e.g., Companies House covers the UK only; substitute SEC filings / business registries for non-UK clients).
  • Cowork desktop dependency. “Import from project” is a Claude Desktop / Cowork feature, not available in claude.ai web for the Cowork half. Web-only operators get most of the benefit (Knowledge Files + Custom Instructions still work in Chat) but lose file creation and multi-step execution. Disclose this to operators evaluating the recipe.
  • British spelling and UK-business context. Recipe references “Companies House” and “limited company” filings. UK-authored fingerprint, same as the rest of the AI Recipe Vault. Adapt the Client Intelligence Brief Domain 4 (Financial Indicators) for non-UK clients.
  • Privacy on Knowledge Files. Business DNA + Service Playbook may contain confidential pricing, client lists, or proprietary methodology. Anthropic’s Claude data policy (no training on Pro/Max conversations by default) covers this — but if your jurisdiction or vendor agreements require stricter handling, treat the project knowledge as you would any internal SaaS upload.
  • No A/B effectiveness data. Eliot’s claims (10-min research, “produces YOUR work, not generic AI work”) are operator-reported, not measured. Worth tracking actual delivery time savings on the next 3-5 client engagements before generalizing.

Try It (WEO Marketly fit)

  1. Build the 4 knowledge files for one WEO Marketly service line first. Pick the highest-volume one — likely SEO or Local SEO. Business DNA (5 pages tops) + Service Playbook (existing pricing + delivery doc) + Consulting Framework (the actual SOP for an SEO engagement) + Client Intelligence Brief (the source prompt, skeleton above; swap Companies House for US business registries / Yelp / GMB). Should take 1-2 sittings, not weeks.
  2. Test in Claude Chat first. Run “Research [prospect] and produce a Client Intelligence Brief” against a real prospect. Compare to whatever the WEO Marketly prospect-research process produces today. Measure: time to brief, brief quality on a 1-5 scale vs current process, what surprised you (scorecard template after this list).
  3. Import to Cowork only after the Chat test passes the WEO Marketly quality bar. Then run a real prospect end-to-end: research brief → proposal draft → 90-day implementation plan. Compare against current Pulse / Sandler / SEO-engagement workflow.
  4. Lift the Client Intelligence Brief prompt into the WEO Marketly playbook. Even if the full 4-file project doesn’t ship, the brief prompt alone is reusable. Drop it in seo-content or ai-marketing as a standalone artifact.
  5. Decide on the projects-per-service-line architecture. If WEO ships this, the open question is: one mega-project covering all services, or one project per service line? Per-service is more maintainable (smaller knowledge files, sharper Custom Instructions) but means more setup time. Test with one first; expand if the pattern works.
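
A minimal scorecard for step 2’s measurement; the columns are an assumption, the 1-5 scale and the 3-5-prospect sample size come from the notes above:

    Prospect | Time to brief (AI) | Time (current) | Quality 1-5 (AI) | Quality 1-5 (current) | Surprises / gaps
    ---------|--------------------|----------------|------------------|-----------------------|------------------
    1        |                    |                |                  |                       |
    2        |                    |                |                  |                       |
    3        |                    |                |                  |                       |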

Open Questions

  • Effectiveness vs internal WEO Marketly process. Need a live test on real prospects to know whether the 10-minute claim holds and whether the brief quality matches what current SEO / strategy workflows produce. Worth 3-5 prospects of measurement before generalizing.
  • Service-line vs all-services project architecture. Open question for WEO. Likely service-line wins on maintainability; service-line loses on initial setup time. Test with one service first.
  • Cowork Dispatch integration. The recipe’s “What’s Next” step mentions Cowork Dispatch (mobile task assignment) as a downstream automation layer. Worth a separate evaluation — adjacent to the Cowork Getting Started recipe but not yet evaluated alongside the AI Consultant flow.
  • Scheduled tasks for recurring client work. Mentioned but not detailed. Plausible follow-on: monthly client report generation, quarterly review prep, weekly competitive-update digests. Cross-references to scheduled tasks worth pursuing once the base project is shipped.