AI Search GEO Strategy

Reverse-Engineering How AI Search Thinks About Your Query

Nicolas Gorrono

A page ranking #15 gets cited in Google’s AI Overview. A page ranking #1 for the same keyword gets ignored entirely. This is happening across thousands of queries, and the explanation has nothing to do with domain authority, backlinks, or on-page optimization in the traditional sense.

The answer is something called query fan-out: the invisible process by which AI search engines decompose your single query into 10 or more parallel sub-queries before generating an answer. Understanding this mechanism is quickly becoming the most important concept in AI visibility and GEO (Generative Engine Optimization).

TL;DR

  • When you search in Google AI Mode, your query gets decomposed into 10+ sub-queries behind the scenes. This is called “fan-out.”
  • 95% of these sub-queries have zero search volume in any keyword research tool, meaning they are invisible to traditional SEO.
  • 68% of pages cited in AI Overviews do not rank in the traditional organic top 10.
  • Pages that cover multiple fan-out sub-topics are 161% more likely to get cited than pages targeting only the head term.
  • The practical fix: structure your content as answer capsules mapped to the sub-questions AI systems will ask about your topic.

What happens when you search in Google AI Mode?

When someone types “best SEO tool for small business” into Google AI Mode, the system does not search for that exact phrase. Instead, Google’s custom version of Gemini analyzes the query and breaks it into a series of more specific sub-queries, each targeting a different facet of the original question.

Here is what the fan-out for that query might look like:

“best SEO tool for small business”


Each sub-query gets routed independently to different sources using a combination of keyword matching and semantic matching. The AI retrieves specific passages from pages (not the full page), filters them for quality and relevance, and then synthesizes everything into one coherent answer.
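The retrieve-then-synthesize loop described above can be sketched in a few lines. This is a toy illustration, not Google's implementation: `decompose` stands in for the LLM's internal query expansion, the corpus and passage IDs are made up, and the lexical-overlap scorer is a crude stand-in for the keyword-plus-semantic matching the article describes.

```python
import re

# Toy sketch of a fan-out pipeline: decompose a query into sub-queries,
# retrieve the best-matching passage for each one independently, and
# return per-sub-query citations. All names here are illustrative.

def decompose(query: str) -> list[str]:
    # A real system asks an LLM to generate these; here they are hardcoded.
    return [
        f"{query} pricing comparison",
        f"{query} Google Search Console integration",
        f"easiest {query} for non-technical users",
    ]

# A tiny passage index: passage id -> passage text.
CORPUS = {
    "page-a#pricing": "Entry-level SEO tool pricing plans range from $29 to $129 per month.",
    "page-b#gsc": "Most major SEO platforms integrate with Google Search Console.",
    "page-c#features": "Feature checklists rarely mention pricing or integrations.",
    "page-d#ease": "Guided workflows make SEO tools easiest for non-technical users.",
}

def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(sub_query: str, k: int = 1) -> list[str]:
    # Crude lexical scoring: count word overlap between sub-query and passage.
    q = tokens(sub_query)
    ranked = sorted(CORPUS, key=lambda pid: len(q & tokens(CORPUS[pid])), reverse=True)
    return ranked[:k]

def answer(query: str) -> dict[str, list[str]]:
    # Each sub-query is resolved independently; the cited passage can come
    # from a page that would never rank #1 for the head term.
    return {sq: retrieve(sq) for sq in decompose(query)}

for sub_query, cited in answer("SEO tool for small business").items():
    print(f"{cited[0]:16s} <- {sub_query}")
```

Note that three different pages win citations here, even though only one of them could hold the #1 organic position for the head term. That is the mechanism behind the ranking paradox this article opened with.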

Research from Seer Interactive found that Google’s Gemini 3 generates an average of 10.7 sub-queries per prompt, a 78% increase over the previous Gemini 2.5 model (which averaged 6.01). Google’s Deep Search mode can issue hundreds of sub-queries for a single question.

[Figure: query fan-out visualization showing how a single search query splits into multiple parallel sub-queries in AI search engines]

Why do 95% of fan-out sub-queries have zero search volume?

This is the part that challenges everything most SEOs take for granted about keyword research.

Fan-out sub-queries are synthetic. No human types “SEO software integrations Google Search Console” into a search bar. These are hyper-specific phrases that the AI generates internally, optimized for retrieval accuracy rather than for matching human search behavior.

Mike King of iPullRank, who has become the leading researcher on this topic, found that 95% of fan-out sub-queries have zero monthly search volume in any keyword tool. They are completely invisible to traditional keyword research workflows.

This creates a fundamental blind spot. You can build a comprehensive keyword strategy using Semrush, Ahrefs, or any tool on the market, and still miss the actual queries that determine whether your content gets cited in AI search. The queries that matter most are the ones no human ever types.

To make matters worse, these sub-queries are not stable. A study by Surfer SEO analyzing 173,902 URLs across 10,000 keywords found that only 27% of fan-out sub-queries remain consistent across repeated searches. The other 73% shift each time the query is run.

Why does this break traditional SEO rankings?

Here is the finding that should change how you think about search: 67.82% of pages cited in AI Overviews do not rank in the organic top 10 for the head term.

Traditional rank tracking measures your position for a specific keyword. But the AI does not pick the #1 result. It picks the passages that best answer each individual sub-query. A page ranking #15 overall but containing a dense, specific 150-word section about “SEO tool pricing for teams under 10 people” can get cited for that particular sub-query. Meanwhile, the page ranking #1 that covers features at a surface level and never mentions pricing specifics gets nothing.

Mike King’s data confirms this: a top 10 organic ranking gives you only a 19-25% chance of appearing in AI search citations. Position alone is not the deciding factor.

The Surfer SEO study found that pages ranking for fan-out queries are 161% more likely to be cited in AI Overviews compared to pages ranking only for the head term. And pages ranking for fan-out queries but not the head term are 49% more likely to earn citations than pages ranking exclusively for the main keyword.

The implication is clear: if your competitor analysis strategy only tracks head terms, you are measuring the wrong thing.

Do all AI search platforms use fan-out?

Yes. Fan-out is not a Google-specific quirk. Every major AI search platform uses some form of query decomposition, though they implement it differently.

Google AI Mode runs parallel burst execution. It fires all sub-queries simultaneously, retrieves from the web index and Knowledge Graph in parallel, and synthesizes the results. Standard queries generate 8-12 sub-queries. Deep Search can issue hundreds.

ChatGPT generates 4-20 sub-queries depending on complexity. It tends to add commercial and temporal modifiers to its sub-queries (“best,” “top rated,” “2026”). Notably, 28.3% of pages ChatGPT cites have no organic ranking at all.

Perplexity takes a different approach: a sequential planning model. It first creates an explicit plan for how to answer the question, then generates search queries for each step. Earlier results inform later steps. It processes over 200 million queries daily and is the most transparent about showing its sources.
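The architectural difference between the platforms above comes down to execution order. A minimal sketch of the two patterns, with a toy `search` coroutine standing in for a real retrieval call (the function names and refinement format are invented for illustration):

```python
import asyncio

async def search(sub_query: str) -> str:
    # Stand-in for a retrieval call; a real system queries a web index here.
    await asyncio.sleep(0)
    return f"passages for: {sub_query}"

async def parallel_burst(sub_queries: list[str]) -> list[str]:
    # Google AI Mode style: fire every sub-query at once, collect all results.
    return list(await asyncio.gather(*(search(q) for q in sub_queries)))

async def sequential_plan(steps: list[str]) -> list[str]:
    # Perplexity style: run steps in order; each later query can be refined
    # using what the previous step retrieved.
    results: list[str] = []
    for step in steps:
        refined = f"{step} (refined by: {results[-1]})" if results else step
        results.append(await search(refined))
    return results

queries = ["SEO tool pricing", "GSC integrations", "ease of use"]
print(asyncio.run(parallel_burst(queries)))
print(asyncio.run(sequential_plan(queries)))
```

The parallel burst trades depth for latency: every sub-query sees only the original question. The sequential planner is slower but lets earlier retrievals reshape later queries, which is why Perplexity's answers can drill into a topic step by step.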

The takeaway: if you optimize for how fan-out works, you are optimizing for AI visibility across every platform, not just Google.

How do you optimize content for fan-out?

This is where the concept of answer capsules comes in, and it maps directly to how fan-out works.

If the AI is going to decompose “best SEO tool for small business” into 10+ sub-queries, and then retrieve passages to answer each one, your content strategy becomes straightforward: make sure your page contains a passage that directly answers each likely sub-query.

The technique: use H2 or H3 headings phrased as questions that match likely fan-out sub-queries, then answer each one immediately in a tight, fact-dense passage of 134-167 words. Research from Ekamoira found that this passage length is optimal for AI extraction.

[Figure: answer capsule content structure, where each section addresses a different fan-out sub-query for AI search optimization]

Here is what that looks like in practice. Instead of writing a generic “best SEO tools” listicle, structure the page like this:

How much do SEO tools cost for small businesses?

Entry-level plans from major SEO platforms range from roughly $29 to $140 per month. Semrush starts at $139.95/month, Ahrefs at $129/month, and Moz at $49/month. Several tools offer free tiers with limited functionality. The right price point depends on your team size, the number of domains you track, and whether you need features like rank tracking, site audits, or content optimization.

Which SEO tools integrate with Google Search Console?

Most major platforms support GSC integration, but the depth varies. Some tools only pull basic impressions and clicks data. Others sync your full query and page performance data and use it for recommendations. Look for tools that combine GSC data with third-party metrics for a complete picture rather than treating them as separate dashboards.

What is the easiest SEO tool for non-technical users?

Ease of use depends on what you are trying to do. For keyword research, tools with guided workflows and difficulty scores help beginners make decisions without deep SEO knowledge. For site audits, look for tools that prioritize issues by impact rather than dumping hundreds of technical warnings. The best approach is starting with a tool that matches your current skill level and grows with you.

Each of these sections is an independently extractable passage. When the AI fans out the original query, your page has a matching answer for multiple sub-queries. Ekamoira’s research found that content covering 5+ subtopics gets a 2.1x citation boost compared to single-topic content.
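The 134-167 word target can be enforced mechanically during editing. A small sketch that audits a markdown draft, assuming capsules use `##` question headings (the draft text and function name are illustrative):

```python
import re

# Audit a markdown draft: measure the passage under each "##" question
# heading against the 134-167 word capsule target cited in the article.
CAPSULE_RANGE = (134, 167)

def audit_capsules(markdown: str) -> dict[str, int]:
    """Map each question heading to the word count of the passage below it."""
    sections = re.split(r"^##\s+", markdown, flags=re.M)[1:]
    counts = {}
    for section in sections:
        heading, _, body = section.partition("\n")
        counts[heading.strip()] = len(re.findall(r"\w+", body))
    return counts

draft = """\
## How much do SEO tools cost for small businesses?
Entry-level plans range from twenty-nine to one hundred forty dollars monthly.

## Which SEO tools integrate with Google Search Console?
Most major platforms support it, but integration depth varies widely.
"""

for heading, words in audit_capsules(draft).items():
    lo, hi = CAPSULE_RANGE
    verdict = "ok" if lo <= words <= hi else ("too short" if words < lo else "too long")
    print(f"{words:3d} words ({verdict}): {heading}")
```

Both capsules in this draft would flag as too short, which is the point of the check: each question needs a full, fact-dense passage behind it, not a one-line answer.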

One critical nuance from the Surfer SEO study: the sweet spot is covering 26-50% of a topic’s fan-out surface. Trying to cover 100% actually performs worse, likely because breadth without depth gets penalized during the synthesis stage.

What should you stop doing?

Three changes based on the data:

Stop chasing individual fan-out sub-queries. Surfer SEO explicitly warns against trying to extract and target individual fan-out queries. They are unstable (73% shift between searches), and manually targeting them produces “chaotic lists of near-duplicates and half-phrases that don’t map cleanly to real content.”

Stop treating rank tracking as your primary AI visibility metric. A #1 ranking gives you a 19-25% chance of being cited. That is useful information, but it is not the whole picture. You need to measure whether your content is actually appearing in AI-generated answers, which requires tools built specifically for AI visibility tracking.

Stop writing content that only addresses the head term. If your page about “best SEO tools” is a 500-word listicle with tool names and one-sentence descriptions, you are not providing the passage-level depth that fan-out retrieval rewards. Build topical depth through answer capsules that cover pricing, features, integrations, use cases, and comparisons.

The shift is simple to describe, harder to execute: you are no longer optimizing for a query. You are optimizing for the conversation the AI is having about your topic behind the scenes.


FAQ

How does fan-out interact with E-E-A-T signals?

Fan-out makes authority signals more important, not less. When an AI system retrieves passages from dozens of sources for each sub-query, it needs a way to decide which passages to include in the final answer. E-E-A-T signals act as a tiebreaker: when two passages contain similar information, the one from the more authoritative source is more likely to survive the filtering stage. First-hand experience and demonstrated expertise become critical differentiators when the AI has many candidates to choose from.
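One way to picture that tiebreaker role is a two-key sort: relevance decides first, and authority only matters between passages that are equally relevant. A minimal sketch with made-up scores (nothing here reflects a real ranking formula):

```python
# Illustrative only: relevance decides first; an authority score (a proxy
# for E-E-A-T signals) only breaks ties between similar passages.
candidates = [
    {"source": "forum-post", "relevance": 0.91, "authority": 0.30},
    {"source": "vendor-docs", "relevance": 0.91, "authority": 0.80},
    {"source": "random-blog", "relevance": 0.74, "authority": 0.95},
]

def pick_passage(candidates: list[dict]) -> dict:
    # Tuple comparison: authority is consulted only when relevance ties.
    return max(candidates, key=lambda c: (c["relevance"], c["authority"]))

print(pick_passage(candidates)["source"])  # -> vendor-docs
```

Note that the highest-authority source loses here because its passage is less relevant: authority never rescues a weak passage, it only settles contests between strong ones.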

Can you see what fan-out queries Google generates for your keyword?

Google does not expose fan-out sub-queries directly. However, tools like Qforia from iPullRank simulate the fan-out process using the same Gemini model that powers AI Mode, giving you an approximation of what sub-queries the system would generate. Other tools like Profound and FinSEO offer dedicated fan-out tracking. These are approximations rather than exact replicas, but they give you a useful starting point for content planning.

Does fan-out apply to local search queries?

Yes. Local queries fan out into location-specific sub-queries that include elements like business reviews, proximity, specific services offered, hours of operation, and pricing. A search for “best dentist in Austin” might decompose into sub-queries about insurance acceptance, emergency availability, specific procedures, patient reviews, and wait times. This makes local SEO even more dependent on comprehensive, structured content about your services, location, and customer experience.

How many sources does the AI typically cite in a single answer?

The number varies by platform and query complexity. Google AI Mode typically cites 3-8 unique sources per response. ChatGPT tends toward fewer citations (2-5), while Perplexity is the most citation-heavy, often referencing 5-10+ sources. The key insight is that being cited is not binary: your domain can appear multiple times across different parts of the answer if different passages from your content answer different sub-queries.

Is it worth creating separate pages for each fan-out sub-topic, or should everything live on one page?

Both approaches can work, but the data suggests a hub-and-spoke model is most effective. Have a comprehensive pillar page that covers the main topic with answer capsules for each sub-topic (targeting the head term and providing breadth), supported by dedicated deep-dive pages for the most important sub-topics (providing depth). Sites with 80%+ topical coverage retain 85.4% AI visibility even as individual sub-queries shift between searches, which suggests that comprehensive coverage at the site level is what matters most.

Put these insights into action

Join the AI Ranking community to get unlimited DataWise access, included with your membership. 7-day risk-free trial.

Join AI Ranking