Source: raw/This AI agent knows which ads actually print money.pdf Author: Matt (Matthew Berman / The Mattberman newsletter), ForwardFuture lineage Repo: github.com/TheMattberman/outcome-kit Published: April 2026

An open-source multi-agent analysis pipeline that connects Meta Ads + GA4 + a real business-outcome source (Calendly, HubSpot, or a CSV/JSON) and tells you which message angles are driving actual revenue — not vanity clicks. The premise: “scale what wins” is broken when “winning” = CTR and ROAS from Meta’s own attribution. Outcome Kit is opinionated, MIT-licensed, and runs on any agent runtime — including Claude Code.

Key Takeaways

  • “Attribution” is the wrong metric. Matt argues it’s become a dashboard-theater word. Real signal is outcomes — bookings, signups, purchases — tied back to the creative that caused them.
  • Angles are the unit of analysis, not creatives. An angle is {angle, audience, creative_family, page}. Multiple creatives share an angle; angles are what you scale.
  • Four Outcome Truths classify every angle:
    • Best winners — strong clicks, strong outcomes (scale it)
    • Fake winners — strong clicks, weak or no outcomes (kill it)
    • Lurking winners — low clicks, outcome-positive (test a better hook)
    • Ungraded winners — not enough data / not yet mapped
  • Three-agent pipeline: data mapper → diagnostician → brief writer. Each agent has one job; the three coordinate via cron.
  • 12 skills total ship with Git-LFS; install as Claude Code skills, OpenClaw/Hermes skills, or both.
  • Cost delta is the pitch. Agency stack (Triple Whale + Northbeam + Hyros + a data analyst): $8K+/mo → $0/mo (you pay for LLM inference only).
  • Launch cadence: v1 covers Meta Ads; the report cadence is weekly; confidence “scales with data” (100 conversions = medium, 200 = high).

The Core Insight — “Scale What Wins” Is Broken

Two failure modes:

| Failure | What happens |
| --- | --- |
| CTR-led optimization | Meta’s optimizer pushes clicks, which aren’t outcomes. You scale a “winner” that converts at 0.3% and burn budget on a fake. |
| Last-touch attribution | Hyros/Northbeam/Triple Whale give you vendor-branded dashboards. They claim the outcome; Meta claims the outcome; GA4 claims the outcome. None agree. Analyst theater follows. |

Matt’s reframe: stop trying to resolve attribution. Map angles → outcomes directly from the business-of-record system (Calendly for a booking, HubSpot for a deal, Shopify for an order).
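The mapping is just a join on the angle key, with no attribution model in between. A minimal sketch of that idea — field names and row shapes are illustrative, not taken from the repo:

```javascript
// Join ad-side rows to outcome-side rows (bookings, deals, orders) on the
// shared angle key. No last-touch logic, no vendor attribution — the
// business-of-record system is the only outcome source.
// NOTE: `adRows` / `outcomeRows` shapes are hypothetical for illustration.
function joinAngles(adRows, outcomeRows) {
  const counts = new Map();
  for (const o of outcomeRows) {
    counts.set(o.angle, (counts.get(o.angle) ?? 0) + 1);
  }
  // Attach the outcome count for each angle; unmapped angles get 0.
  return adRows.map((a) => ({ ...a, outcomes: counts.get(a.angle) ?? 0 }));
}
```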

The Four Outcome Truths (Angle Classification)

|  | Strong clicks | Weak clicks |
| --- | --- | --- |
| Strong outcomes | Best winner — scale it | Lurking winner — rebuild the hook |
| Weak outcomes | Fake winner — kill it | Ungraded / insufficient data |
  • Scale best winners. The creative and angle are both working.
  • Kill fake winners. The platform is optimizing for clicks because clicks are what the pixel can see. You’re paying for a vanity signal.
  • Keep lurking winners in rotation but rewrite the hook — the angle works, the creative doesn’t.
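The 2×2 above reduces to a small classifier. This is an illustrative reconstruction, not code from the repo: the median baselines and the 100-conversion floor for “ungraded” are assumptions layered on the article’s thresholds.

```javascript
// Classify an angle against the Four Outcome Truths.
// ASSUMPTIONS: "strong" means at-or-above the account median for that
// metric, and anything under 100 conversions is ungraded (the article's
// medium-confidence floor, reused here as a sample-size gate).
const MIN_CONVERSIONS = 100;

function classifyAngle({ ctr, outcomeRate, conversions }, medianCtr, medianOutcomeRate) {
  if (conversions < MIN_CONVERSIONS) return "ungraded";
  const strongClicks = ctr >= medianCtr;
  const strongOutcomes = outcomeRate >= medianOutcomeRate;
  if (strongClicks && strongOutcomes) return "best winner"; // scale it
  if (strongClicks) return "fake winner"; // clicks without outcomes — kill it
  if (strongOutcomes) return "lurking winner"; // angle works — rebuild the hook
  return "ungraded"; // weak on both axes: no actionable signal
}
```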

The Angle Schema

{
  "angle": "time-savings",
  "audience": "heads-of-growth",
  "creative_family": "founder-direct",
  "page": "/time"
}

Auto-sourced from calendar invite notes in Calendly / HubSpot by matching naming patterns to angles. Each angle-audience-page tuple becomes the analysis grain.
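The article doesn’t publish the naming-pattern format, so here is a hedged sketch of the matching step, assuming a simple key=value tag embedded in the invite notes:

```javascript
// Parse an angle tuple out of a Calendly/HubSpot invite note.
// ASSUMPTION: the note carries a tag like
//   "angle=time-savings;audience=heads-of-growth;creative_family=founder-direct;page=/time"
// — this format is hypothetical; the real naming patterns live in config.json.
function parseAngleTag(note) {
  const m = note.match(
    /angle=([\w-]+);audience=([\w-]+);creative_family=([\w-]+);page=(\S+)/
  );
  if (!m) return null; // unmapped invites stay ungraded
  const [, angle, audience, creative_family, page] = m;
  return { angle, audience, creative_family, page };
}
```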

The Three Agents

| Agent | Job |
| --- | --- |
| Data mapper | Pulls raw data from Meta Ads, GA4, and the outcome source. Reconciles angle metadata. Writes to a normalized table. |
| Diagnostician | Scores each angle against the Four Outcome Truths. Flags confidence (needs ≥100 conversions for medium, ≥200 for high). |
| Brief writer | Generates a plain-language brief: what’s winning, what to kill, what to rebuild. Delivered to Telegram / Slack / email via cron. |

One cron. Three agents. Report lands where you want it.
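The handoff can be sketched as three functions behind one entry point that the cron job invokes. Agent internals are stubbed placeholders here, not the repo’s implementation — the real agents ship as skills:

```javascript
// One cron, three agents: mapper → diagnostician → brief writer.
// All three bodies below are stubs standing in for the actual skills.
async function dataMapper() {
  // Stub: pull Meta Ads / GA4 / outcome rows, reconcile angle metadata.
  return [{ angle: "time-savings", clicks: "strong", outcomes: "strong" }];
}
async function diagnostician(rows) {
  // Stub: score each angle against the Four Outcome Truths.
  return rows.map((r) =>
    r.clicks === "strong" && r.outcomes === "strong"
      ? { ...r, verdict: "best winner" }
      : { ...r, verdict: "ungraded" }
  );
}
async function briefWriter(verdicts) {
  // Stub: plain-language brief, ready for Telegram / Slack / email delivery.
  return verdicts.map((v) => `${v.angle}: ${v.verdict}`).join("\n");
}
async function runPipeline() {
  return briefWriter(await diagnostician(await dataMapper()));
}
```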

The 12 Skills Shipped

Full set not enumerated in the source; the article lists them as shipping via Git-LFS and installable as Claude Code skills, OpenClaw skills, or Hermes skills. “You can also talk to it naturally: ‘Find fake winners in my Meta funnel.’ ‘Which angle is driving booked calls?’ ‘Give me the daily outcome brief.’”^[ambiguous]

Who Outcome Kit Is For

| Good fit | Bad fit |
| --- | --- |
| Spending money on paid traffic but unsure which message angle actually produces buyers | You already have a 5-person data team + pristine multi-touch attribution + a Snowflake warehouse |
| Agency managing multi-channel accounts that needs outcome-level truth, not platform-level spin | You want one-click magic — you still need to define your angles, configure data sources, and read the brief |
| SaaS founder with demo bookings or signups as your true metric, tired of sorting by CPL | |
| Messy tracking, disconnected tools, needing to make decisions anyway | |
| Commerce brand who knows CTR and ROAS tell different stories per creative | |

V1 Limitations (Matt’s Own List)

  • V1 is Meta-first. Google Ads support is coming but isn’t native yet. If Meta is your primary paid channel, you’re good. If Google-only, wait.
  • You need at least one outcome source. Calendly, HubSpot, or a CSV/JSON of your bookings/purchases. Without ≥1 outcome source, the agent can’t find fake winners.
  • Angle tagging is manual to start. The system doesn’t auto-discover message strategy — you tell it what an “angle” is. Takes ~10-15 min of upfront thinking.
  • Confidence scales with data. Week 1 with 30 conversions = low confidence. Week 4 with 200 conversions = high confidence. The agent reports confidence; it doesn’t hide it.
  • No automatic budget changes in V1. Matt’s own line: “You approve. Same philosophy as my Meta Ads AI. Start with visibility. Graduate to autonomy when you trust it.”
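The confidence ladder the article states (≥100 conversions = medium, ≥200 = high) reduces to a threshold function; treating everything below 100 as “low” follows its week-1 example:

```javascript
// Map a conversion count to the reported confidence band.
// Thresholds are the article's own: 100 → medium, 200 → high.
function confidence(conversions) {
  if (conversions >= 200) return "high";
  if (conversions >= 100) return "medium";
  return "low"; // e.g. week 1 with 30 conversions
}
```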

Cost Comparison (From the Article)

| Old agency way | Agent way |
| --- | --- |
| Triple Whale: $380/mo | Meta API: free |
| Northbeam: $500/mo | GA4 API: free |
| Hyros: $500/mo | Calendly API: free |
| Data analyst: $6K/mo | Outcome Kit: free |
| Total: $8K+/mo | Total: $0/mo (MIT, pay for LLM inference only) |
| Still says “it depends” | Says “cut this, scale that, fix this page” |

Try It

From the article — a 6-step quickstart:

# 1. Clone and configure
git clone https://github.com/TheMattberman/outcome-kit
cd outcome-kit
cp .env.example .env
cp config.example.json config.json
 
# 2. Define angles (edit config.json) — the 10-minute part
# 3. Sanity check
npm run doctor
 
# 4. Sample pipeline
npm run run:sample
 
# 5. Run for real
npm run run
 
# 6. Set up a cron
# Outputs land in Telegram / Slack / email

Implementation

Tool/Service: Outcome Kit — github.com/TheMattberman/outcome-kit, MIT license. Setup: Meta Ads access token + ad account ID + GA4 property ID (service account JSON) + one outcome source (Calendly API, HubSpot private API token, or CSV/JSON) + the Outcome Kit repo. Cost: Free (runtime), plus LLM inference cost via Claude Code / OpenClaw / Hermes. Integration notes:

  • Runs identically on Claude Code or similar agent runtimes — cookbook/skill model is portable.
  • Good candidate for integration with Claude Cowork for Marketing — feed Outcome Kit’s “kill/scale” brief into Cowork’s ad-creative variants.

Open Questions

  • Full 12-skill list not published in article — would need to read the repo README.^[ambiguous]
  • How does Outcome Kit handle cross-device / cross-session attribution when an ad click → outcome spans multiple sessions? Article doesn’t specify.
  • Claim that agency stacks like Triple Whale + Hyros + analyst = “$8K+/mo still dashboard theater” is not independently validated and reads as rhetorical framing.^[ambiguous]
  • Google Ads roadmap timing not disclosed. “Coming but isn’t native yet” as of April 2026.
  • MIT license + “free” framing doesn’t account for API rate limits at scale — large accounts (100+ ad sets) may hit Meta Marketing API throttles; article doesn’t mention.^[inferred]