FLUQs — Friction-Inducing Latent Unasked Questions
Source: raw/Friction-Inducing Latent Unasked Questions - FLUQs.pdf, raw/FLUQS_ ANSWER THE HIDDEN QUESTIONS OR VANISH IN AI SEARCH.docx (Garrett French, Citation Labs, 2025-06-27), raw/FLUQ's are the key to visibility in AI search engines.docx (Line Sand Kristoffersen)
Author: Garrett French (original coinage, Citation Labs). Pronounced: “flux”.
FLUQs are the unspoken, high-friction decision-blockers a buyer has but never types into a search bar. Traditional SEO cannot find them — they have zero keyword volume. But synthesis-layer AI (ChatGPT, Gemini, Perplexity, AI Mode) cites content that resolves them, because those resolutions are net-new facts the models can reuse. FLUQs are the primary unit of content strategy in a post-ranking, reuse-first search economy.
Key Takeaways
- FLUQ = Friction-Inducing Latent Unasked Question. The unconscious, unspoken concern that stalls a buyer without ever appearing in search data.
- FAQ ≠ FLUQ. FAQs answer what buyers do ask (cost, returns, delivery). FLUQs answer what they never ask but that still kills the conversion.
- The new ranking signal is reuse, not position. ChatGPT, Gemini, and Perplexity are the new operating environments; content that can’t survive LLM compression is invisible.
- FLUQs are invisible to keyword tools. Ahrefs, Google Keyword Planner, and search-volume analysis cannot surface them. There is no data on questions that are never asked.^[inferred]
- Resolving a FLUQ creates a “net-new fact.” Facts that didn’t previously exist in the synthesis layer get cited; restated consensus does not.
- EchoBlocks are the preferred output format. Concise, traceable, causally structured fragments (causal triplets, FAQ entries, checklists) that survive compression into a Gemini answer box.
- Three publishing surfaces: controlled (your own site), collaborative (guest posts, co-branded reports), emergent (LLM answer surfaces themselves via MCP / retrieval).
- Google’s own guidance validates the pattern. At Google I/O 2025, Danny Sullivan told SEOs to “make non-commoditized content, give us new data, ground AI Mode in fact” — with no attribution guarantee.
FAQ vs FLUQ
| | FAQ | FLUQ |
|---|---|---|
| Visibility | Explicit, searched, measurable | Unspoken, zero search volume |
| Origin | Known buyer questions | Hidden decision-blockers |
| Example | “How much does it cost?” | “What if the sofa doesn’t fit through the door?” |
| Example (B2B) | “How do I return it?” | “How do I renegotiate domestic labor before grad school?” |
| Tool to find | Keyword research, support tickets | Customer interviews, review mining, AI hallucination audits |
| Outcome when answered | Reduces support load | Unlocks conversion, creates citable net-new fact |
The Four FLUQ-Identification Questions
Citation Labs’ framework for surfacing FLUQs:
1. What’s not being asked by your ICP that directly impacts their success?
2. Whose voice or stake is missing across reviews, forums, and existing content?
3. Which prompts trigger the model to hallucinate or flatten nuance?
4. What’s missing in the AI-cited resources that show up for your ICP’s bottom-funnel queries?
Question 4 doubles as a link-building target list — pull ChatGPT citations for your category and treat them as the publishers you need to get cited on.
Where FLUQs Hide
Source signal locations for FLUQ mining:
- Customer service logs and support tickets
- Reddit threads and community forums
- On-site reviews
- Existing FAQ pages (read the gaps between the questions)
- One-on-one customer interviews (Citation Labs used 500 surveys + 24 interviews for their education client)
- AI hallucination audits — what does ChatGPT/Gemini overgeneralize or fabricate when answering your ICP’s prompts? Those gaps are FLUQs.
- Sales and implementation team conversations
FRFYs — FLUQ Resolution Foresight Yield
Once a FLUQ is spotted, it must be tested and turned into a fact, not left as a hypothesis. FRFYs quantify the payoff:
FRFY Equation (Citation Labs):
FRFY = (Ψr × (Es + V + Tp)) / (At + C + Fd)
| Symbol | Meaning |
|---|---|
| Ψr | Resolution felt — how clearly the fragment resolves latent tension |
| Es | Emotional salience — does it feel trustworthy and safe? |
| V | Return visibility — is the payoff clear and immediate? |
| Tp | Trust persistence — will the user retain confidence over time? |
| At | Anticipatory tension — how much symbolic load precedes the action? |
| C | Cognitive cost — attention, memory, and decoding effort |
| Fd | Feedback delay — how long before the user feels it worked? |
The equation is illustrative rather than strictly operational — the point is that high-yield FLUQ resolutions combine felt resolution and emotional/visibility payoff while minimizing friction, cost, and delay.^[inferred]
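To make the ratio's behavior concrete, here is a minimal Python sketch. The 0–5 scoring scale and the coefficient values are illustrative assumptions; the sources do not define units or calibration for any of the terms.

```python
# Minimal sketch of the FRFY ratio with assumed 0-5 scores (not calibrated).
def frfy(psi_r, es, v, tp, at, c, fd):
    """FRFY = (Psi_r * (Es + V + Tp)) / (At + C + Fd)."""
    return (psi_r * (es + v + tp)) / (at + c + fd)

# Hypothetical scoring of one FLUQ resolution: strong felt resolution,
# solid trust/visibility payoff, low friction and fast feedback -> high yield.
high_yield = frfy(psi_r=5, es=4, v=4, tp=4, at=1, c=2, fd=1)   # 15.0
# Same fragment buried in a slow, costly, high-tension experience -> low yield.
low_yield = frfy(psi_r=2, es=2, v=1, tp=2, at=4, c=5, fd=5)    # ~0.71
print(round(high_yield, 2), round(low_yield, 2))
```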
EchoBlocks — Structuring Facts to Survive LLM Compression
EchoBlocks are concise, causally structured content fragments designed to be yanked into an AI answer box and still make sense. Three properties:
- Concise — short enough to survive compression.
- Causally structured — carries subject/predicate/object logic the model can reuse.
- Traceable — attributable back to the source when re-emitted.
Preferred format: the causal triplet.
Example from Citation Labs’ education client:
- Subject: Mid-career students
- Predicate: Often disengage
- Object: Without pre-enrollment stakeholder negotiation
Wrap triplets in a familiar container — FAQ entry, checklist item, or short guide section — so existing CMS templates and schema markup can carry them.
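A hedged example of “wrap triplets in a familiar container”: the education-client triplet above expressed as a schema.org FAQPage entry, emitted as inline JSON-LD from Python. The question wording is an illustrative assumption; the sources do not prescribe an exact markup shape.

```python
# Minimal sketch: one causal-triplet EchoBlock wrapped as a schema.org
# FAQPage entry, emitted as inline JSON-LD. Question/answer wording is an
# illustrative assumption, not prescribed by the sources.
import json

triplet = {
    "subject": "Mid-career students",
    "predicate": "often disengage",
    "object": "without pre-enrollment stakeholder negotiation",
}

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Why do mid-career students disengage from graduate programs?",
        "acceptedAnswer": {
            "@type": "Answer",
            # The answer text carries the subject/predicate/object logic intact.
            "text": f"{triplet['subject']} {triplet['predicate']} "
                    f"{triplet['object']}.",
        },
    }],
}

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_jsonld, indent=2))
```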
The Three Publishing Surfaces
| Surface | What it is | What it requires |
|---|---|---|
| Controlled | Your glossary, help docs, product pages, blog | Triplets, checklists, causal chains, markup (inline JSON-LD, 200-token chunks), prompt-targeted retrieval fields |
| Collaborative | Guest posts, co-branded reports, Reddit/LinkedIn if ICP is present, community engagement | Mid-funnel influence; still EchoBlock-structured; plants LLM memory |
| Emergent | ChatGPT / Gemini / Perplexity / AI Overviews themselves, your MCP, other people’s MCP resources | Compression-resilient structure, clean intent logic, survives synthesis without context |
Emergent is the hardest. You’re inside someone else’s operating environment; your fragment has to be callable by their planner. Citation Labs is building XOFU (LLM visibility GPT) to measure reuse on this surface.
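The controlled-surface row above calls for “200-token chunks.” A minimal sketch of that split, assuming tiktoken for token counting and paragraph boundaries as the break points; the sources name only the 200-token target, not a tokenizer or splitting rule.

```python
# Minimal sketch: split page copy into ~200-token chunks on paragraph
# boundaries. tiktoken and the cl100k_base encoding are assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def chunk_copy(text: str, max_tokens: int = 200) -> list[str]:
    """Group paragraphs into chunks of roughly max_tokens tokens each."""
    chunks, current, current_len = [], [], 0
    for para in text.split("\n\n"):
        n = len(enc.encode(para))
        if current and current_len + n > max_tokens:
            chunks.append("\n\n".join(current))
            current, current_len = [], 0
        # Note: a single paragraph longer than max_tokens is kept whole here.
        current.append(para)
        current_len += n
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```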
Why the Old Playbook Is Dead
At Google I/O 2025, a closed-door session with Danny Sullivan and a search engineer gave SEOs this advice:
“Make non-commoditized content. Give us new data. Ground AI Mode in fact.”
No mention of attribution, no traffic guarantees, no way to know if your insights are being used. The shift: synthesis is the new front page. Content that can’t survive the synthesis layer is invisible.
Second-Source Corroboration (Kristoffersen)
Line Sand Kristoffersen’s article — a derivative written for a broader marketer audience — reinforces the same core claim from a conversion-rate angle rather than a link-building angle:
- FLUQs drive conversion behavior even though they have no search data.
- “It’s not about what you show — it’s about what you don’t say.”
- E-commerce examples: “Does the attachment fit my old vacuum?” / “Will the sofa fit through the door?” / “How do I cancel a subscription without calling?”
- Method: put yourself in the customer’s shoes and ask them directly; the data is in the silence.
Two independent writers arriving at the same framework from SEO and CRO angles suggests this is a real pattern, not a single vendor’s positioning.^[inferred]
Try It
A 5-step test from Citation Labs you can run today:
1. Find a high-traffic page — pick one that already draws attention.
2. Scan for friction-inducing fact gaps — mine reviews, forum threads, service logs, sales conversations.
3. Locate one unasked but high-stakes question — focus on what your ICP doesn’t know they need to ask.
4. Format the answer as a causal triplet, FAQ entry, or checklist — not a paragraph.
5. Publish and monitor which fragments get picked up by RAG pipelines, AI Overviews, or agentic workflows (a minimal monitoring sketch follows this list).
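Step 5 is the only one that needs tooling. A minimal monitoring sketch, assuming the OpenAI Python client and a plain substring check for your domain or fact; it approximates reuse tracking rather than reproducing XOFU’s methodology, and the prompts, domain, and marker strings are placeholders.

```python
# Minimal sketch for step 5: ask a model your ICP's bottom-funnel prompts and
# check whether your fragment or domain surfaces in the answer. Assumes the
# OpenAI Python client (openai >= 1.0); prompts and markers are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Which online graduate programs work best for mid-career students?",
]
FRAGMENT_MARKERS = ["example.edu", "pre-enrollment stakeholder negotiation"]

for prompt in PROMPTS:
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    hits = [m for m in FRAGMENT_MARKERS if m.lower() in answer.lower()]
    print(f"{prompt[:50]!r}: reused={bool(hits)} markers={hits}")
```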
FLUQ-finder prompts (ready to use)
Prompt 1 — Known material:
“Given this [FAQ / page], and my ICP is [describe your ICP], what are the latent practitioner-relevant questions they are unlikely to know to ask — but that critically determine their ability to succeed with our solution? Can you group them by role, phase of use, or symbolic misunderstanding?”
Prompt 2 — Ambient signal:
“My ICP is [describe your ICP]. Based on this customer review set / forum thread, what FLUQs are likely present? What misunderstandings, fears, or misaligned expectations are they carrying into their attempt to succeed — that our product must account for, even if never voiced?”
Optional add-on:
“Flag any FLUQs likely to generate symbolic drift, role misfires, or narrative friction if not resolved early.”
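A minimal sketch of running Prompt 1 programmatically so FLUQ candidates land in a reviewable list. The OpenAI client and gpt-4o model name are assumptions, and the ICP and page text are placeholders; the prompt wording itself is quoted from the sources above.

```python
# Minimal sketch: fill Prompt 1 with your ICP and an existing FAQ/page, then
# collect candidate FLUQs for human review. Model choice is an assumption.
from openai import OpenAI

client = OpenAI()

PROMPT_1 = (
    "Given this {material}, and my ICP is {icp}, what are the latent "
    "practitioner-relevant questions they are unlikely to know to ask — but "
    "that critically determine their ability to succeed with our solution? "
    "Can you group them by role, phase of use, or symbolic misunderstanding?"
)

def find_fluqs(material: str, icp: str) -> str:
    """Return the model's grouped FLUQ candidates for one page or FAQ."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": PROMPT_1.format(material=material, icp=icp)}],
    )
    return response.choices[0].message.content

# Example with a hypothetical ICP and a local FAQ file:
# print(find_fluqs(open("faq.txt").read(), "mid-career grad-school applicants"))
```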
Implementation
Tool/Service: XOFU (Citation Labs’ LLM visibility GPT) — xofu.com, xofu.com/bofu for URL analysis.
Setup: Drop a URL into XOFU’s analyzer to measure LLM reuse. (Full feature set not documented in sources.)^[ambiguous]
Cost: Not stated in sources.^[ambiguous]
Integration notes:
- Combine with GSC Autonomous SEO Engine (internal) — use FLUQ-finder prompts to generate enhancement targets, not just query-matched updates.
- EchoBlock structure maps cleanly onto Figma MCP-style inline schema markup and MCP retrieval field patterns.
- For dental clients specifically, customer service logs + review mining of GoHighLevel-captured conversations are the highest-yield FLUQ sources.
Open Questions
- No disclosed measurement methodology for FRFY coefficients — currently a framing device rather than a calibrated model.
- Citation reuse tracking tools (XOFU) are early and not independently validated.
- Unclear how FLUQ content performs in zero-click AI Overviews vs traditional organic — Google provided no attribution guarantee.
- Does FLUQ mining scale beyond high-touch B2B / education verticals, or does e-commerce need a different extraction approach?^[inferred]
- Relationship between EchoBlocks and existing schema.org / JSON-LD standards is asserted but not fully specified.
Related
- GSC Autonomous SEO Engine (internal)
- SEOmator Audit Skill (internal)
- Clawdbot Competitive Intelligence (internal)
- Blog-Agent-Worker / Pulse (internal)
- SEO Patterns Learned (internal)
- Ecosystem Architecture (internal)
- SEO Content Marketing Pipeline (internal connection)