Source: raw/similarweb-ads-in-ai-insights-2026-04.pdf — Similarweb 19-slide research deck “Ads in AI: Insights From Real User Behaviour” (UK spelling) marked “Business Proprietary & Confidential,” based on Similarweb Ad Intelligence panel data Mar 30 – Apr 13, 2026.
Publisher: Similarweb (Ad Intelligence product)
Data window: March 30 – April 13, 2026
Methodology: Similarweb panel data + Ad Intelligence
Format: 19-slide deck (PowerPoint export, dark theme)
The first cross-platform performance study of paid AI advertising — covers ChatGPT, Google AI Mode, and AI Overviews side by side. The headline finding is that conversational ads behave fundamentally differently from search ads: ChatGPT reads the full conversation rather than matching keywords, ads fire later in the session and the user sticks around for 4+ more turns afterwards, top advertisers skew B2B SaaS rather than retail, and 46% of users who opened with zero commercial intent developed buying signals by the time an ad appeared. The deck pairs naturally with Similarweb’s organic-citation study (same data team, organic surface) and provides the empirical backbone for the OpenAI Ads launch article.
Key Takeaways
- 460M people now discover products inside AI assistants every month. ChatGPT growing at 39% YoY — “the fastest-growing discovery surface since mobile is already monetized.”
- Three platforms, three different games. Treating them as one channel will misallocate budget. Brand counts: ChatGPT ~1K active brands ($50K minimum), Google AI Mode “thousands” (Google Ads infra), AI Overview 100K+ brands (auto from Shopping feed).
- ChatGPT reads the conversation, not the query. 83% of ChatGPT ad-triggering queries would never have triggered a Google Shopping ad. Only 2% of headlines say “buy now.” This is brand advertising wrapped in conversation, not direct response.
- One brand, one turn, no competition. ChatGPT shows a single sponsored result with no side rail, no competing cards, no ten blue links. Format doesn’t exist on Google or Bing.
- Ads fire mid-conversation. ChatGPT: 44% on Turn 1, 56% later in the conversation. Google AI Mode: 98.5% on Turn 1 (single-query model).
- ChatGPT conversations are 6× deeper than Google AI Mode. Median ChatGPT conversation = 6 turns vs 1 turn on AI Mode. Average = 17 turns vs 2.6. After an ad fires, ChatGPT users stay for 4 more turns vs 1 on AI Mode. Post-ad continuation rate: 73% vs 41%.
- ChatGPT ad penetration = 1.5% of conversations (2.0% weekday peak, 0.8% weekend low — clear Mon-Thu peak / weekend dip / Monday recovery cycle). Google AI Mode penetration = 0.09% (15× lower) but 2.7 ads/conversation vs 1.7 (60% denser when it does serve).
- CTR distribution on ChatGPT: 0.68% overall, 0.50% median brand, 1.00% top quartile, 1.57% top decile, 5.4% peak brand. Benchmarks: Search 3-5%, Display 0.35%, Podcast 0.5-1%. ChatGPT performs between display and podcast — sub-search but premium-quality engagement.
- Pricing: $60 assumed CPM (Meta $33-65, LinkedIn $10-15), $12 implied CPC (vs Search $15-25). ChatGPT prices in line with podcast advertising, multiples above social.
- ChatGPT’s top advertisers skew B2B SaaS, not retail. HubSpot, Fiverr, Preply, Cursor lead ChatGPT — fundamentally different from Google AI Mode (eBay, Walmart, Best Buy, Amazon) and AI Overview (eBay, Walmart, Temu, Etsy). The conversational format favors brands selling to people doing research/comparison/planning, not transactional shoppers.
- Conversations generate intent. 46% of users opened with zero commercial intent but developed buying signals by ad time. 69% of opening intent persists but new intent layers on top. AI conversations don’t replace intent — they create it.
- Worked example. “What are the best golf irons for mid-handicappers?” (Turn 1, research) → “Compare Callaway Paradym vs TaylorMade P790 price and reviews” (Turn 10, intent shifted, still research) → DICK’S Sporting Goods “Golf Club Deals” ad fires. Ten turns later, ready to purchase without ever typing a commercial query.
- Pharma, Auto, Insurance verticals are still absent on ChatGPT — wide-open category opportunity for early movers.
- “Three platforms, three completely different games.” Think of ChatGPT as Podcast Advertising, Google AI Mode as Google Shopping 2.0, AI Overview as the Default. Different mental model required for each.
The Three-Platform Landscape
| Platform | Brands active | Format | Floor |
|---|---|---|---|
| ChatGPT | ~1,000 | Sponsored links in chat | $50K minimum |
| Google AI Mode | “Thousands” | Shopping + text in AI answers | Google Ads infrastructure (no separate floor) |
| AI Overview | 100,000+ | Product cards from Shopping | Auto from Shopping (no separate opt-in) |
“Three platforms are competing for AI ad dollars right now. Each one targets differently, fires at different moments in the conversation. Treat them as one channel and you’ll misallocate budget.” — Slide 3
Three different models
| Lever | ChatGPT | AI Mode | AI Overview |
|---|---|---|---|
| Targeting | Full conversation context | Query keyword match | Search query |
| When ads fire | 44% Turn 1 / 56% later | 98.5% Turn 1 | Single query |
| Top brands | HubSpot, Fiverr, Preply, Cursor | eBay, Walmart, Best Buy, Amazon | eBay, Walmart, Temu, Etsy |
| Ad moment | Exclusive — 1 brand per conversation turn | Shared — multiple ads per response | Shared — carousel of product cards |
| Creative | Text + Image | Text + Image / PLA | Text + Image / PLA |
| Mental model | Podcast Advertising | Google Shopping 2.0 | The Default |
ChatGPT Ad Performance (panel-measured)
Click-Through Rate Distribution
| Metric | Value | Notes |
|---|---|---|
| Overall CTR | 0.68% | All ChatGPT ads |
| Median (brand-level) | 0.50% | Half of brands above, half below |
| p75 (top quartile) | 1.00% | Top quartile brands |
| p90 (top decile) | 1.57% | Best-performing brands |
| Peak brand CTR | 5.4% | Highest single brand |
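The distribution metrics above can be reproduced from per-brand aggregates. A minimal sketch, using invented (clicks, impressions) values since the underlying Similarweb panel data is not public:

```python
from statistics import median, quantiles

# Hypothetical per-brand (clicks, impressions) aggregates -- invented
# purely to show how the deck's distribution metrics are defined.
brands = {
    "brand_a": (30, 10_000),
    "brand_b": (50, 10_000),
    "brand_c": (100, 10_000),
    "brand_d": (157, 10_000),
    "brand_e": (540, 10_000),
}

ctrs = sorted(clicks / imps for clicks, imps in brands.values())

# Overall CTR pools clicks over pooled impressions (impression-weighted),
# which is why it can sit above the unweighted brand-level median.
overall = (sum(c for c, _ in brands.values())
           / sum(i for _, i in brands.values()))

brand_median = median(ctrs)     # the deck's "median (brand-level)"
cuts = quantiles(ctrs, n=20)    # 5%, 10%, ..., 95% cut points
p75, p90 = cuts[14], cuts[17]   # top-quartile / top-decile thresholds
```

The impression-weighted pooling explains how the overall 0.68% can exceed the 0.50% brand-level median: big-spending brands with above-median CTR pull the pooled number up.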
Benchmarks (Similarweb-stated):
- Search avg: 3-5%
- Display avg: 0.35%
- Podcast: 0.5-1%
ChatGPT slots between display and podcast on raw CTR — but the format is brand advertising, not direct response, so CTR alone underweights the value. Engagement quality (turn depth + post-ad continuation) is the missing dimension.
Pricing (Similarweb-derived from panel + reported rates)
| Metric | ChatGPT | Industry comparison |
|---|---|---|
| Assumed CPM | $60 | Meta $33-65 |
| Implied CPC | $12 | Search $15-25 |
ChatGPT's $12 implied CPC lands just below the $15-25 search range but multiples above typical social CPCs — pricing has converged to the podcast-advertising tier, which matches the “Podcast Advertising” mental model on Slide 4.
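The deck does not show how the $12 "implied CPC" is derived, but the standard conversion is CPC = CPM / (1000 × CTR). A minimal sketch, assuming the deck's $60 CPM and 0.50% median brand CTR:

```python
def implied_cpc(cpm_usd: float, ctr: float) -> float:
    """CPC implied by a CPM rate and a click-through rate.

    CPM prices 1,000 impressions and CTR is clicks per impression,
    so cost per click = cpm / (1000 * ctr).
    """
    return cpm_usd / (1000 * ctr)

# Deck figures: $60 assumed CPM at the 0.50% median brand CTR.
print(implied_cpc(60, 0.0050))  # 12.0
```

Plugging the 0.50% median brand CTR into the $60 assumed CPM reproduces the deck's $12 exactly, which is plausibly how the figure was derived (the deck does not confirm this).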
Engagement vs AI Mode
| Metric | ChatGPT | Google AI Mode |
|---|---|---|
| Post-ad continuation | 73% | 41% |
| Median conversation length | 6 turns | 1 turn |
| Avg conversation length | 17 turns | 2.6 turns |
| Post-ad turns | 4 more | 1 more |
ChatGPT conversations are 6× deeper. After an ad serves, the user stays for 4 more turns of brand exposure on average — that’s 4× the post-ad surface area of Google AI Mode.
Ad Penetration
| Surface | Avg | Weekday peak | Weekend low |
|---|---|---|---|
| ChatGPT | 1.5% | 2.0% | 0.8% |
| Google AI Mode | 0.09% | 0.12% | 0.07% |
Clear weekly cycle on ChatGPT: Mon-Thu peak, weekend dip, Monday recovery — consistent with B2B-skewed top advertisers (HubSpot, Fiverr, Preply, Cursor) tracking the work week.
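The weekday/weekend split above is a simple day-of-week group-by. A sketch with invented daily penetration rates (only the 2.0% peak and 0.8% low are actual deck figures):

```python
from datetime import date, timedelta

# Invented daily ChatGPT ad-penetration rates for one week of the
# panel window (2026-04-06 is a Monday); only the 2.0% weekday peak
# and 0.8% weekend low are real deck figures.
daily = {date(2026, 4, 6) + timedelta(days=i): rate
         for i, rate in enumerate([0.020, 0.019, 0.018, 0.017,
                                   0.013, 0.008, 0.009])}

weekday = [r for d, r in daily.items() if d.weekday() < 5]   # Mon-Fri
weekend = [r for d, r in daily.items() if d.weekday() >= 5]  # Sat-Sun

avg_weekday = sum(weekday) / len(weekday)
avg_weekend = sum(weekend) / len(weekend)
```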
Conversational Intent — A New Targeting Model
Similarweb introduces a conversational intent classification because keyword-matching breaks for multi-turn conversations. The model reads the full arc of a conversation to determine what the user is actually trying to accomplish at the moment an ad fires — a behavioral signal that keyword models cannot access.
8 intent categories
| Code | Category |
|---|---|
| INF | Informational |
| COM | Commercial |
| TSK | Task-Completion |
| NAV | Navigational |
| TRX | Transactional |
| CON | Conversational |
| ENT | Entertainment |
| PER | Personal |
Three moments per impression
| Moment | What’s captured |
|---|---|
| Opening | What the user came in wanting |
| Ad moment | What the conversation had become by the time the ad fired |
| Post-ad | Where the conversation went after |
“The gap between Opening and Ad Moment is where the commercial opportunity lives.” — Slide 13
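The three-moment framing maps naturally onto a per-impression record. The sketch below is hypothetical: the deck does not publish which of its 8 codes count as "buying signals," so treating COM and TRX as the commercial codes is my assumption.

```python
from dataclasses import dataclass

# Assumed "buying signal" codes; the deck does not publish its mapping.
COMMERCIAL = {"COM", "TRX"}

@dataclass
class Impression:
    opening: str    # intent code when the conversation opened
    ad_moment: str  # intent code when the ad fired
    post_ad: str    # intent code after the ad

def generated_intent(imp: Impression) -> bool:
    """Opened with zero commercial intent, but showed buying
    signals by the time the ad appeared."""
    return imp.opening not in COMMERCIAL and imp.ad_moment in COMMERCIAL

# Invented panel rows, mirroring the golf-irons arc (INF -> COM).
panel = [
    Impression("INF", "COM", "TRX"),
    Impression("TSK", "TSK", "TSK"),
    Impression("INF", "TRX", "CON"),
    Impression("COM", "COM", "COM"),  # opened commercial: not "generated"
]
share = sum(generated_intent(i) for i in panel) / len(panel)
```

Computing this share over the real panel is presumably what produces the deck's 46% intent-generation headline.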
Worked example: golf-irons buyer
| Turn | Conversation | Intent state |
|---|---|---|
| Turn 1 | “What are the best golf irons for mid-handicappers?” | Research. No buying signal. |
| Turn 10 | “Compare Callaway Paradym vs TaylorMade P790 price and reviews” | Still research, but intent has shifted. |
| The ad | DICK’S Sporting Goods “Golf Club Deals” | Brand placed at the moment intent crystallized — without the user ever typing a commercial query. |
Intent generation — the headline number
- 46% of users who opened with zero commercial intent developed buying signals by the time an ad appeared.
- 69% of opening intent persists — but new intent layers on top.
“AI conversations don’t replace intent. They generate it.” — Slide 15
Three illustrative conversations (Similarweb panel)
| Brand / theme | Opening | Ad moment | Intent shift |
|---|---|---|---|
| HubSpot — “Scale Outreach” (Turn 52/70) | “Where are distilled acid oils exported from USA?” | “US buyers of sunflower acid oil, best strategy to reach out” | Market researcher → active seller |
| Indeed — “Career Advice” (Turn 30/42) | “Help me prepare interview for Admin Assistant” | “Write follow up email” | Interview prep → active job pursuit |
| Envato — “Unlimited Graphics” (Turn 66/66) | “Having margin issues with the interior of the book” | “Come up with 7 keywords and phrases that sell” | Layout fix → marketing optimization |
The pattern: opening intent is functional/exploratory; ad-moment intent has migrated toward purchase, hire, sell. The ad moment is the right targeting unit, not the opening query.
The Measurement Gap
“You can’t optimize what you can’t see.”
| What you can see | What’s still invisible |
|---|---|
| Google Ads reports “Top Ads” impressions | AI Mode + AI Overview aren’t broken out — you don’t know which came from AI |
| ChatGPT Ads Manager shows your campaign data | No competitor visibility, no share-of-voice, no category benchmarks |
| Shopping feed performance is aggregated | No way to isolate AI Overview placement performance from standard search |
Every platform shows you your own data; none show you the competitive landscape. This is Similarweb’s selling point — multi-source panel-based methodology fills the cross-platform gap.
Strategic Recommendations (Slide 17)
| Platform | Action |
|---|---|
| ChatGPT | Run a pilot campaign. Apply via OpenAI. $50K minimum. “Think brand voice in a conversation, not banner copy. This is the new format.” |
| Google AI | Audit your Google Ads account today. Your Shopping and text campaigns are likely already serving in AI Mode and AI Overview — you just can’t see it in reporting yet. |
| Similarweb | Track AI ad performance across all 3 platforms via Ad Intelligence (conversational AI ads insights starting Q2). |
“Early Movers Win Categories” — The Closing Pitch
“The brands that move now, that test, measure, and build conversational creative muscle before the window narrows, will set the category patterns that late movers inherit. ChatGPT sits at 1.5% ad penetration today. Pharma, Auto, Insurance are still absent. Your vertical may be wide open. Every new channel starts with low competition and high attention, and this one is moving faster than most. The measurement gap is real, but it’s the same gap that creates the opportunity. Be among the first to close it and shape the patterns every brand in your category will be following in two years. The window is open. It won’t stay open.” — Slide 18
Try It
- Audit your Google Ads “Top Ads” report. Your existing Shopping and text campaigns are probably already serving in AI Mode + AI Overview surfaces, but the placements aren’t broken out. Pull “Top Ads” impression data and compare period-over-period for surprise lift.
- If you have $50K to spend in Q2: apply for the ChatGPT pilot via OpenAI Ads Manager. Optimize creative for conversational voice, not banner copy. Best-performing brands target moments inside conversations, not single queries.
- Map your category to the three platforms. ChatGPT skews B2B/SaaS/services; AI Mode and AI Overview skew retail/commerce. If you’re in retail, AI Overview is likely 80% of your ad-AI exposure today (auto from Shopping). If you’re in B2B SaaS, ChatGPT is the open frontier.
- Build conversational-creative muscle now. Standard search creative (“buy now,” urgency, discount stack) is the wrong format. The DICK’S example is instructive: ad fires after 10 turns of research; the right creative is helpful and brand-led, not transactional.
- Pick one wide-open vertical and test: Pharma, Auto, Insurance are absent on ChatGPT per Similarweb. Any agency client in those verticals has a window to set the category pattern.
- Track three intent moments per impression, not just one. Opening + Ad moment + Post-ad. The shift between Opening and Ad moment is where targeting earns its keep.
- Don’t wait for measurement parity. The reporting gap will close, but by the time it does, the category patterns will already be set. The “measurement gap is the same gap that creates the opportunity.”
Notable Cross-Source Reconciliation
The Similarweb deck and the public OpenAI Help Center say slightly different things about ChatGPT ad access:
| Source | Access model | Floor |
|---|---|---|
| ads.openai.com Ads Manager Beta (Help Center, May 2026) | Self-serve signup, $60 default max CPM | None disclosed |
| Similarweb Ads in AI deck (Mar-Apr 2026 panel) | “Apply via OpenAI” | $50K minimum |
These are not necessarily contradictory:
- The Similarweb data window (Mar 30 – Apr 13) predates the broader Ads Manager Beta self-serve rollout (Help Center pages updated 8-22 hours before this article’s fetch on 2026-05-06).
- “Apply via OpenAI” likely reflects the invitation-only/managed access tier that ran during the panel window. ads.openai.com is the broader self-serve launch that came after.
- The $50K minimum may persist as a managed-campaigns tier alongside self-serve, or it may have been the early-access threshold. The OpenAI Help Center does not currently disclose any minimum.
For brand teams: assume self-serve via ads.openai.com is the default entry point today. Invitation/managed-access at $50K may still exist for larger campaigns; check directly with OpenAI sales.
Implementation
Tool/Service: Similarweb Ad Intelligence (Conversational AI Ads insights — landing in production “starting this quarter” per Slide 17)
Setup: Similarweb account → Ad Intelligence module → Conversational AI Ads insights (Q2 2026 GA per deck timeline)
Cost: Not disclosed in deck; Similarweb is enterprise-priced ($1k+/month minimum range)
Integration notes:
- Multi-source panel methodology fills the per-platform reporting gap (Google + OpenAI + Shopping all show you your own data, none show competitive)
- The 8-intent classification + three-moment impression framing are the proprietary contribution; usable mental model even without paying for the tool
- Pairs with the Meta Ads CLI + OpenAI Ads practitioner stack on the buy side
- Sister to Similarweb’s organic-citation study — same panel infrastructure, different surface
Open Questions
- Geography of the panel. The deck does not disclose the panel's geographic mix. ChatGPT ads serve only in US/CA/AU/NZ per OpenAI's own Help Center — does Similarweb's 460M-users-per-month figure respect that geo gating, or does it count global ChatGPT discovery (versus the gated, ad-eligible subset)?
- Verification of 460M/month figure. “460 million people now discover products inside AI assistants every month” — combines ChatGPT + AI Mode + AI Overview impressions. Methodology for de-duplication across surfaces not disclosed.
- CTR vs Search benchmark sourcing. The “Search avg 3-5%” benchmark on Slide 9 is high relative to standard public Google data (typical Search CTR 2-3% for paid). Similarweb may be using top-of-funnel branded search benchmarks; not annotated.
- “Implied CPC” derivation. $12 CPC is “implied” — derivation not shown. Likely assumed CPM ÷ (1000 × estimated CTR): $60 ÷ (1000 × 0.50%) = $12, which lines up exactly with the median brand CTR, but the exact math is not in the deck.
- HubSpot/Fiverr/Preply/Cursor as top brands is striking (B2B SaaS dominance). Are these top-by-impressions, top-by-spend, or top-by-share-of-conversation? Methodology not stated.
- The 46% intent-generation stat — across what conversation length, intent buckets, brand categories? Slide 15 calls these “illustrative examples drawn from anonymized panel data” but doesn’t break out the headline number.
- Pharma/Auto/Insurance absence is a category-opportunity claim — but is it absence due to brand-safety policy filtering or absence due to advertisers not yet present? Different implications for new-entrant strategy.
- The $50K minimum — is this still in effect alongside self-serve? Similarweb data is from before the broader Ads Manager Beta launch.
Related
- OpenAI Ads in ChatGPT — the canonical product article on the paid surface this deck measures; pairs as ad-buyer’s reference + Similarweb panel as performance reference.
- Similarweb — Most Cited Domains by LLMs (organic study) — sister Similarweb research on the organic side; together they’re the two-surface AI search snapshot Q1-Q2 2026.
- FLUQs — the “context hints” exercise on the OpenAI Ads side and the conversational-intent classification here both validate the FLUQs framing of buyer-decision-moment targeting.
- Meta Ads CLI — sibling paid-ads-tooling shape; resources-PAUSED-by-default safety affordance is the closest precedent for AI-agent-driven OpenAI Ads campaign management.
- Outcome Kit ads agent — outcome-first attribution methodology that should generalize to AI ad surfaces once measurement parity arrives.
- Reddit + AI-citation playbook — operator playbook for the Reddit side of the dual-channel thesis (organic citation + paid placement now joined by panel-measured performance data).
- AI Marketing topic