Source: Building with Claude Managed Agents and Asana AI teammates (Asana / Ara, Code with Claude 2026, May 7 2026 customer talk, YouTube BrpB-h1e--k)
Ara from Asana walks through how Asana built AI Teammates (GA March 2026) on top of Claude Managed Agents. The frame: most enterprise AI agent usage today is single-player — an individual chats with an agent, gets an outcome, hands off to a human. Asana’s bet is multiplayer — agents as actual actors in the work system with their own RBAC, sharing controls, and shared enterprise memory, working alongside multiple humans through end-to-end multi-step workflows. Asana provides the work-graph + UI + auth + enterprise-context layer; Claude Managed Agents provides the multi-step action engine with verification loops + grader. Demo: a marketer uses an AI Teammate to produce a campaign brief and an HTML landing page mockup, then hands it to a teammate who iterates via comments — the agent remembers feedback (e.g., “primary color is now blue”) across sessions and across users. 21+ pre-built AI Teammates ship today across PMO / marketing / IT / HR / R&D.
Key Takeaways
- Asana’s vision: agents as multiplayer actors, not single-player tools. Most enterprises today let an individual interact with an agent → get output → pass to humans. Knowledge doesn’t compound. No shared enterprise memory, no true multiplayer / multi-human-in-the-loop interactions. Asana’s bet is that the agent is “an actor in the system” — with RBAC controls, sharing controls, multi-human nudges and feedback, and end-to-end jobs to be done.
- AI Teammates went GA March 2026. 21+ pre-built teammates designed against Asana’s ICPs (PMO, marketing, IT, HR, R&D). Examples: launch planning, spec writing, goal management, resource management, capacity planning, competitive intelligence research, and a “product thought buddy” (Ara’s example — his team uses it to get feedback on keynote drafts, surfacing trade-offs from the actual Asana road map).
- The Asana work-graph is the enterprise-context substrate. 17 years of building this: missions and visions tracked by goals, goals in portfolios, portfolios delivered by projects, projects with tasks, tasks with approvals + workflows + history. Both human UI (kanban / list / timeline) and AI-agent context graph come from the same model — humans get a UI, agents get a context API. Approvals, historical decisions, back-and-forth on a campaign brief or project plan that finally led to its approval — all preserved as agent context.
- What Asana brings vs what Claude Managed Agents brings. Clean separation:
- Asana side: human-interface coordination across multiple people, work-graph context, enterprise security guardrails (RBAC across agent actors), shared memory across teammates, integrations (Google Drive, Office 365, more landing).
- Claude Managed Agents side: multi-step action engine, built-in verification loop, built-in grader (Asana passes in the desired outcome; Claude’s grader iterates the output until it passes), reduced prototyping cost vs hand-rolling agent loops, multiple parallel agents working independently.
- Concrete demo workflow (paraphrased from the talk’s video):
- Marketer creates a Kanban task in Asana: “create a campaign brief and prototype a landing page.”
- Picks a pre-built AI Teammate from the gallery; teammate auto-pulls relevant work-graph objects (previous campaign projects, portfolios) into its memory.
- Teammate produces (a) a campaign-brief document and (b) an HTML landing-page mockup. Both delivered as Asana-tracked artifacts.
- Marketer comments on the agent’s work: “this is great but make the primary color blue.” The agent commits the feedback to memory — so a different marketer using the same teammate later won’t repeat the green-default mistake.
- Marketer hands off to a reviewer — multiplayer continues. Multiple humans nudge the same agent; the agent persists feedback across the whole team.
- Why Managed Agents specifically (vs Messages API). Asana’s prior implementation used the Messages API and had to hand-build the agent loop, file management, code execution, verification logic. Switching to Managed Agents: faster prototyping, built-in verification loop, built-in grader for outcome quality, native parallel-agent execution. “We focus on what is unique to Asana — human interface, context, security. Quality of the output is what we leverage Managed Agents for.”
- Two things customers asked Ara about during Q&A.
- Skill / capability maintenance over time. Ara’s answer: Asana is a “shrink-wrap product for knowledge workers.” Asana decides which skills bake into the GA AI Teammate and adds more based on ICP working-backwards research. Long-term they may open it up so customers design their own skills, but the primary go-to-market motion is Asana-curated AI Teammates with controlled lifecycle, quality, and release cadence, not customer-customizable.
- Third-party integration patterns. Asana integrates third-party tools at two levels: directly with Asana’s own AI Teammate agent loop, and at the MCP level with Managed Agents. Both layers receive the same context to keep behavior consistent whether or not the Managed Agents sub-agent path is invoked.
- Internal dogfooding results. Asana internal use of AI Teammates over months has produced agents like “product thought buddy” — has all the trade-off context, road map, why-we-decided-X reasoning. When marketing asks for product team feedback on a keynote draft, they assign the task to that teammate. Output: a plan + feedback “highly optimized for the way in which we work, accurate from the perspective that it’s using all of the real-time context.” Done in multiplayer mode: everyone on the product team sees the response, can react, can give nudges that the agent remembers across runs.
- Enterprise memory + agent actor model is the load-bearing claim. “When agents are real actors in the system with sharing and RBAC controls just like onboarding a new human teammate, they can work with multiple people, get nudges and interactions, and complete end-to-end jobs to be done.” This is the same shape Mahes describes in Memory and Dreaming for Self-Learning Agents — Asana’s AI Teammate is one of the named Memory-customer applications, just framed from the customer’s product surface rather than the platform team’s API surface.
- Anecdote: a former Asana engineer built a competitive-intelligence AI Teammate before leaving; that engineer is now at Anthropic. The teammate is still in day-to-day use at Asana, with all of the historical context for RFP responses retained, and it keeps improving as more people on the team use it. Multiplayer-memory continuity outlasts individual employment — a non-trivial implication for organizational knowledge management.
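The verification-loop + grader pattern described above — pass in a desired outcome, let a grader iterate the output until it passes — is the part Asana previously hand-rolled on the Messages API and now delegates to Managed Agents. A minimal sketch of that loop, where `generate_draft`, `grade`, and `Outcome` are illustrative stubs (not real Anthropic SDK calls or Asana internals):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Desired-outcome spec the caller passes in; the grader iterates until it passes."""
    description: str
    required_phrases: list

def generate_draft(task: str, feedback=None) -> str:
    # Stand-in for a model call (e.g. a Messages API request).
    draft = f"Campaign brief for {task}."
    if feedback:
        draft += f" Revised to address: {feedback}."
    return draft

def grade(draft: str, outcome: Outcome):
    # Stand-in for an outcome grader: check the draft against the spec,
    # returning pass/fail plus feedback for the next iteration.
    missing = [p for p in outcome.required_phrases if p not in draft]
    if missing:
        return False, f"missing required elements: {missing}"
    return True, "pass"

def run_agent(task: str, outcome: Outcome, max_iters: int = 3) -> str:
    # The hand-rolled loop Managed Agents replaces: generate, grade,
    # feed the grader's feedback back in, repeat until the outcome passes.
    feedback = None
    for _ in range(max_iters):
        draft = generate_draft(task, feedback)
        passed, feedback = grade(draft, outcome)
        if passed:
            return draft
    return draft  # best effort after max_iters
```

The design point Ara makes is that this entire loop (plus file management and code execution around it) is undifferentiated work for an application builder, which is why the built-in version was the load-bearing migration reason.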
Where it fits in the wiki
- Customer-side counterpart to Mahes’ Memory + Dreaming talk. Mahes describes the platform primitives; Asana’s Ara describes what a real customer built on top — pre-built AI Teammates, multiplayer feedback loop, persistent feedback memory across users, integrated into the existing work-graph product. Same conference, same conference-track logic.
- Concrete reference for customer-built Claude Managed Agents deployments. This is the most detailed Managed Agents customer talk to date. Specifics: the built-in verification loop + built-in grader were the load-bearing reasons to migrate from the Messages API. Useful for builders deciding whether the Managed Agents abstraction is worth the migration cost.
- Reference architecture for “agentic enterprise” stories. The Gartner CMO Quarterly and conference keynote both reference “agents as actors in enterprise systems” abstractly. Asana’s talk is the concrete embodiment — RBAC scopes per-agent, work-graph context auto-pulled, multiplayer feedback loop, persistent learnings across users.
- Composes with AI Marketing use cases. The demo (campaign brief + HTML landing page from a marketing task) is on the marketing-workflow surface that the wiki’s ai-marketing topic covers. Asana AI Teammates are a Cowork-adjacent surface for that work.
- Adjacent to Claude Cowork but distinct. Cowork is a horizontal claude.ai surface for any workflow. Asana AI Teammates are vertically integrated into Asana’s work-graph product — different placement, similar agentic-actor pattern.
Implementation
- Tool/Service: Asana AI Teammates (GA March 2026), built on Claude Managed Agents API.
- Setup (customer side):
- Enable AI Teammates in your Asana workspace.
- Pick from the 21+ pre-built teammates matching your ICP (PMO / marketing / IT / HR / R&D).
- Customize per use case — Asana auto-pulls relevant work-graph objects (relevant projects, portfolios, tasks) into the teammate’s memory.
- Start delegating work via Asana tasks: assign the task to the teammate. Teammate produces output as Asana artifacts (documents, HTML, slides, comments).
- Iterate via comments on the artifact. Feedback persists in shared memory across team members.
- Setup (builder side, if you’re building something Asana-shaped on Managed Agents):
- Use Managed Agents’ built-in verification loop and grader instead of hand-rolling your own.
- Pass in the desired outcome; let the grader iterate the output until it passes.
- Run multiple agents in parallel for tasks that need it.
- Integrate third-party tools at both levels (your own agent loop and the MCP level on Managed Agents) so context stays consistent.
- Cost: Asana AI Teammates is bundled in Asana enterprise tiers (specifics not given on stage). Builder-side cost is normal Managed Agents pay-as-you-go.
- Integration notes:
- Per-agent RBAC: critical. The teammate respects the same access controls as a human teammate.
- Asana-wide context: teammate sees what its assigner can see (subject to RBAC).
- Agent memory persists across users. Feedback from User A reaches User B’s session of the same teammate later.
- Skills / capabilities are Asana-curated, not customer-customizable as of GA.
- Integrations: Google Drive + Office 365 today, more landing in coming weeks. Slides, comments, and full HTML output supported.
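The integration notes above (per-agent RBAC, context scoped by what the actor can see, feedback memory shared across users) can be condensed into a toy data model. This is a hypothetical sketch of the agent-as-actor shape, not Asana’s actual implementation; all class and field names are invented:

```python
class Teammate:
    """An AI Teammate modeled as an actor: its own RBAC scope, shared memory."""

    def __init__(self, name: str, readable_projects):
        self.name = name
        # Per-agent RBAC scope, granted like onboarding a human teammate.
        self.readable_projects = set(readable_projects)
        # Memory is shared across every user of this teammate.
        self.memory = []

    def can_read(self, project: str) -> bool:
        return project in self.readable_projects

    def remember(self, user: str, feedback: str) -> None:
        # Feedback persists in shared memory, tagged with who gave it,
        # so it survives across sessions and across users.
        self.memory.append((user, feedback))

    def context_for(self, user: str, requested_projects):
        # The teammate assembles context from only what RBAC allows,
        # plus all accumulated feedback from every prior user.
        visible = [p for p in requested_projects if self.can_read(p)]
        return {"projects": visible, "feedback": [f for _, f in self.memory]}
```

The key property the demo relies on: feedback written by one user (“primary color is now blue”) shows up in a different user’s later session of the same teammate, while RBAC still filters what work-graph objects either session can pull.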
Open Questions
- What’s in Asana’s grader prompt vs the Managed Agents grader? Asana describes “passing in the outcome we want” and Managed Agents iterating. The boundary between Asana’s outcome spec and Managed Agents’ grader logic isn’t fully exposed in the talk.
- Multi-tenant memory isolation. When two different Asana customers use the same pre-built teammate (e.g., “competitive intelligence researcher”), does memory leak across tenants? The architecture implies per-customer memory stores, but worth verifying.
- Skill catalog evolution. Ara hinted at customer-customizable skills “over time.” What’s the rough roadmap — Q3 2026? 2027? Enterprise-tier-only? Worth tracking.
- Failure-mode disclosure. Six months of GA dogfooding implies failure cases discovered. What does it look like when an AI Teammate fails — silent bad outputs, refused work, infinite loops, what? Not addressed.
- Agent-as-actor security model details. RBAC parity with humans is the headline, but does the agent get its own audit log, its own session history per actor, its own credentials? “Just like onboarding a human teammate” suggests yes; the operational details would matter for enterprise procurement.
- Pricing for AI Teammate seats. “Generally available” implies a SKU — bundled in existing Asana tiers, separate add-on, per-seat, usage-based?
Try It
- Watch the talk (YouTube BrpB-h1e--k) for the live demo. The “marketer assigns campaign brief task → teammate produces brief + HTML landing page → reviewer comments to change primary color → memory persists” loop is the centerpiece.
- If you’re an Asana customer: open the AI Teammate gallery in your workspace. Pick one pre-built teammate aligned with your team’s ICP and assign it a real task you’d otherwise do manually. Compare output quality with what you’d produce.
- If you’re a builder: the migration story (Messages API → Managed Agents API) is the actionable signal. The two reasons Ara cited for migrating were the built-in verification loop and the built-in grader for outcome quality — read the Managed Agents cookbook coverage and assess whether your hand-rolled loop is doing the same work less efficiently.
- Compare against Claude Cowork for the same use case. Cowork is a horizontal alternative; Asana AI Teammate is the vertical product. For the same campaign-brief-and-landing-page task, which surface fits the customer’s existing workflow better?
- For multi-tenant agent products: study Asana’s design choices specifically — pre-built skills (Asana-curated, not customer-built), pre-pulled context (work-graph objects auto-included), multiplayer feedback loop. The “shrink-wrap quality control” pattern (Asana decides which skills ship; customers customize behavior, not skills) is a non-obvious go-to-market choice worth weighing.
Related
- Code with Claude 2026 — Opening Keynote — umbrella talk
- Memory and Dreaming for Self-Learning Agents (Mahes) — platform-side companion
- The Expanding Toolkit (Lucas) — primitives below this talk’s orchestration layer
- The Thinking Lever (Matt Bleifer) — token-economics companion
- Claude Managed Agents — the API surface this is built on
- Managed Agents cookbook (multiagent + outcomes)
- Claude Cowork getting started — horizontal alternative
- Cowork “AI Consultant” recipe
- AI Marketing — campaign-brief + landing-page use case lives here
- AI Industry Research — Gartner agentic-enterprise framing