How to Build an ASEO Strategy in 2026: Beyond AEO, Beyond GEO: The Framework That Measures What Actually Happens
AEO strategy guides tell you to write answer-first content, build topic clusters, add FAQ schema, and update regularly. That's fine advice. It's also incomplete in a specific way: it tells you what to publish but not whether any of it is actually being cited, or why the parts that aren't are failing. ASEO (AI Search Engine Optimisation) is the discipline that closes that gap. This guide covers the full framework.
ASEO vs AEO vs GEO: the distinction you need to make in 2026
These three terms get used interchangeably by people who haven't mapped out what each one actually covers. They don't cover the same thing. Getting clear on the distinctions changes what you build and in what order.
AEO (Answer Engine Optimisation) is a content practice. Write clear definitions. Use FAQ schema. Structure content with headings. Answer the question in the first sentence. Put your best information early in the page. AEO is where most guides stop, and it's a reasonable starting point. The problem is it has no measurement layer. You can do everything an AEO checklist asks and still not know whether you're being cited, which paragraphs are getting retrieved, or whether what AI platforms say about you is accurate.
GEO (Generative Engine Optimisation), a term from the Princeton/IIT paper accepted at KDD 2024, is an infrastructure practice. Build machine-readable layers: AI sitemaps, structured endpoints, JSON-LD at entity level, llms.txt files. GEO makes your site parseable by AI crawlers. It's a necessary foundation but it doesn't determine selection; it determines eligibility. A site can have perfect GEO infrastructure and still have every paragraph failing at retrieval.
ASEO (AI Search Engine Optimisation) sits above both. It's a measurement discipline: Share of Voice across five platforms, block-level Citation Probability Score® (CPS®) per paragraph, funnel-stage mapping, hallucination detection, competitive win rate, and GA4 revenue attribution. ASEO tells you whether the AEO content you published is being cited, which blocks are failing and why, and what the commercial value of your AI visibility actually is.
AEO tells you what to publish. GEO tells you how to structure the site so AI can read it. ASEO tells you whether any of it worked and which sentence to fix next. You need all three layers, in that order.
Phase 1: establish your baseline before you build anything
Most people start an AI strategy by producing content. That's backwards. Before you write a word, you need to know where you currently stand across each platform and what queries are driving or missing your brand. Without a baseline, you can't tell whether anything you do is working.
Run 75-100 prompts across ChatGPT, Perplexity, Gemini, Claude, and Microsoft Copilot, covering your category's Awareness, Consideration, and Decision-stage query patterns. Record every response: whether your brand appears, at what position, with what sentiment, and whether a URL from your domain was cited. That last point matters enormously. A mention without a citation means you're being recalled from training data, not actively selected at retrieval stage. A citation means a specific page earned retrieval on that query. They're different outcomes with different interventions.
Your baseline gives you three numbers: brand mention rate, citation rate, and hallucination rate. Each requires a separate response strategy. Most AEO guides only track the first one.
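To make the baseline repeatable month over month, it helps to record every prompt run in a structured log. A minimal sketch in Python; the field names are illustrative (not from any particular tool), and assume a hand-scored audit:

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    platform: str        # "chatgpt", "perplexity", "gemini", "claude", "copilot"
    query: str
    funnel_stage: str    # "awareness", "consideration", "decision"
    position: int | None # 1 = first brand mentioned; None = brand absent
    mentioned: bool      # brand named anywhere in the response
    cited: bool          # a URL from your domain appeared as a source
    hallucinated: bool   # response contained a verifiably false brand claim

def baseline(results: list[PromptResult]) -> dict[str, float]:
    """The three Phase 1 numbers every later phase is compared against."""
    n = len(results)
    return {
        "mention_rate": sum(r.mentioned for r in results) / n,
        "citation_rate": sum(r.cited for r in results) / n,
        "hallucination_rate": sum(r.hallucinated for r in results) / n,
    }
```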
Phase 2: understand how each platform actually cites: they don't all work the same way
This is where most AI strategy guides fail badly. They treat "AI platforms" as a monolith and give advice that applies to all of them simultaneously. It doesn't. The five major platforms have distinctly different retrieval architectures, and they respond to different signals.
| Platform | Retrieval mechanism | Top citation signals | Strategy implication |
|---|---|---|---|
| ChatGPT | Bing retrieval (Search enabled) plus training knowledge base | Bing domain authority, FAQ schema, recently updated pages, structured headings | Bing indexing is the primary lever. Pages that rank well on Bing and use FAQ schema get cited most reliably. |
| Perplexity | Real-time web retrieval, no fixed index | Date markers, "as of [year]" language, named sources, high fact density | Most freshness-sensitive platform on the market. Undated content loses on time-sensitive queries even when the information is accurate. |
| Gemini | Google Search index plus Knowledge Graph | E-E-A-T signals, Knowledge Graph entity connections, existing Google organic positions | Strong Google organic performance feeds Gemini citation rate directly. Entity Schema markup at org and product level matters more here than anywhere else. |
| Claude | Large training knowledge base, limited real-time retrieval | Self-contained blocks, high fact density, declarative opening sentences | CPS® Content Structure and Fact Density pillars have outsized impact. Claude responds strongly to blocks that answer immediately with verifiable specifics. |
| Copilot | Bing retrieval with commercial intent weighting | Commercial intent queries, pricing/comparison content, FAQ schema | Decision-stage content performs disproportionately well. Comparison pages and pricing pages with structured markup earn Copilot citations on bottom-funnel queries. |
The practical implication: an article that ranks well on Gemini may barely register on Perplexity if it has no date markers. A page that earns Copilot citations on commercial queries might not appear in Claude's responses if its paragraphs are context-dependent. You need per-platform visibility data, not an aggregate share-of-voice score.
The single most common ASEO mistake: publishing content optimised for one platform's signals and then wondering why it doesn't perform across all five. Platform-specific content strategy isn't optional: it's the difference between 8% citation rate and 34% citation rate on the same query cluster.
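Reusing the `PromptResult` log sketched in Phase 1, the per-platform view is a simple disaggregation; the point is never to look at the blended number alone:

```python
from collections import defaultdict

def citation_rate_by_platform(results: list[PromptResult]) -> dict[str, float]:
    """Per-platform citation rate. A blended score hides exactly the gaps
    the table above describes (e.g. strong on Gemini, absent on Perplexity)."""
    buckets: defaultdict[str, list[PromptResult]] = defaultdict(list)
    for r in results:
        buckets[r.platform].append(r)
    return {p: sum(r.cited for r in rs) / len(rs) for p, rs in buckets.items()}
```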
Phase 3: map your funnel-stage SOV before writing any content
AI citation behaviour changes radically across the funnel. A brand that appears in 60% of Awareness-stage queries ("what is [category]") might appear in 12% of Decision-stage queries ("best [category] for [use case]"). Those aren't the same problem and they don't have the same fix.
| Funnel stage | Query pattern | What AI cites | ASEO intervention |
|---|---|---|---|
| Awareness | "What is [category]," "How does [concept] work" | Definition-first content, Wikipedia-style entity descriptions, explainer articles | Entity markup, declarative definitions, Knowledge Graph presence, self-contained explainer blocks |
| Consideration | "[Brand A] vs [Brand B]," "Best [category] for [use case]" | Comparison tables, structured feature lists, third-party review sites | Comparison pages with structured data, G2/Capterra presence, citation strategy targeting review content |
| Decision | "[Brand] pricing," "How to get started with [Brand]" | Pricing pages, onboarding guides, FAQ content on commercial terms | Hallucination audit on pricing/features, FAQ schema on commercial pages, Copilot-specific optimisation |
Funnel-stage mapping reveals a specific type of failure that generic share-of-voice data hides: a brand with strong Awareness citations but weak Decision citations is generating AI-driven brand awareness that doesn't convert. The buyer hears about the brand through AI but gets no AI help at the moment they're ready to buy. That's a different problem from low visibility overall, and it points to completely different content priorities.
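The per-stage split comes out of the same Phase 1 log; a sketch:

```python
def mention_rate_by_stage(results: list[PromptResult]) -> dict[str, float]:
    """Mention rate per funnel stage. A 60% Awareness / 12% Decision split
    shows up here long before it shows up in revenue."""
    stages: dict[str, list[PromptResult]] = {}
    for r in results:
        stages.setdefault(r.funnel_stage, []).append(r)
    return {s: sum(r.mentioned for r in rs) / len(rs) for s, rs in stages.items()}
```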
Phase 4: the block-level audit that AEO guides miss entirely
Here's the gap that makes most AEO strategy guides insufficient as practical tools. They tell you to write answer-first content. They don't tell you how to verify whether any specific paragraph on your site would actually be retrieved by a RAG system answering a relevant query.
AI retrieval doesn't evaluate pages. It evaluates chunks: typically 134-167 word segments that get embedded and scored individually. A page with ten paragraphs can have three that score in Grade A territory and five that score below Grade D. The three good ones get cited. The five poor ones don't. A page-level "readiness score" or E-E-A-T checklist shows you the page as a whole. It doesn't show you which five paragraphs are dragging your citation rate down.
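To see a page the way retrieval sees it, segment it the way a RAG pipeline would. Real pipelines differ in their splitting rules; this is a deliberately naive paragraph-packing sketch around the 167-word upper bound the article uses:

```python
def chunk_page(text: str, max_words: int = 167) -> list[str]:
    """Greedily pack paragraphs into chunks of at most max_words words,
    splitting on blank lines so chunks stay paragraph-aligned. A single
    paragraph longer than max_words stays whole."""
    chunks: list[str] = []
    current: list[str] = []
    for para in text.split("\n\n"):
        words = para.split()
        if current and len(current) + len(words) > max_words:
            chunks.append(" ".join(current))
            current = []
        current.extend(words)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Score each returned chunk individually; the page-level view disappears, which is the point.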
This is what the CPS® framework measures. Five pillars, independently scored, per block:

- Answer Structure: does the block open with a declarative answer to its target query?
- Content Structure: does the block sit within the 134-167 word optimal chunk range, under a clear heading?
- Fact Density: does the block carry enough verifiable specifics, measured at 100-word intervals?
- Self-Containment: does the block make sense read entirely in isolation?
- Freshness: does the block carry explicit date markers rather than relying on the page's publication date?

The grade system runs from A to F per block. Grade A and B blocks are citation-ready, and Grade B across all five pillars is the publication threshold used in Phase 7; blocks below Grade D effectively never appear in AI responses, whatever the rest of the page looks like.
Run a CPS® audit on every key page before you commission a single new piece of content. You'll almost always find existing pages with Grade A and Grade F blocks sitting paragraphs apart. Rewriting the Grade F blocks costs a fraction of producing new content and typically produces faster citation improvements.
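CPS® itself is a proprietary scorer, so nothing below reproduces it. As a rough self-audit, though, each pillar can be approximated with a heuristic check. A sketch, where every threshold is an assumption taken from this article rather than the real model:

```python
import re

def score_block(block: str, query_terms: list[str]) -> dict:
    """Five heuristic pillar checks per block, pass/fail, graded A-F."""
    words = block.split()
    first_sentence = block.split(".")[0].lower()
    checks = {
        # opens with a declarative answer containing the query's key terms
        "answer_structure": any(t.lower() in first_sentence for t in query_terms),
        # sits inside the 134-167 word optimal chunk range
        "content_structure": 134 <= len(words) <= 167,
        # at least three number-like verifiable specifics
        "fact_density": len(re.findall(r"\d[\d,.%]*", block)) >= 3,
        # doesn't lean on the previous paragraph for its subject
        "self_containment": not block.lower().startswith(("this ", "these ", "it ", "they ")),
        # explicit date marker or "as of" language
        "freshness": bool(re.search(r"\b20\d{2}\b|\bas of\b", block.lower())),
    }
    grade = {5: "A", 4: "B", 3: "C", 2: "D"}.get(sum(checks.values()), "F")
    return {"pillars": checks, "grade": grade}
```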
Phase 5: hallucination risk: the strategic consideration AEO ignores
This one doesn't appear in most AEO strategy guides. It should be near the top.
Hallucination risk is the probability that an AI platform generates factually incorrect claims about your brand during a high-intent buyer query. Incorrect pricing. A service you don't offer. A headquarters location you stopped using two years ago. An award you never won. A comparison that misrepresents how your product works. The AI states these as facts with no source URL the buyer can verify.
This matters strategically for two reasons. First, hallucinations are most common on Decision-stage queries, exactly when a buyer is closest to converting. Someone asking "what does [Brand] charge for X?" is ready to buy. If ChatGPT gives them a wrong number, the conversion doesn't happen and you never find out why. Second, a high mention rate plus active hallucinations is actively worse than low visibility. You're generating AI-driven brand awareness that misleads the buyers it reaches.
Hallucination risk is highest for brands with: inconsistent pricing or feature information across web sources, entity markup that hasn't been updated after a rebrand or product change, thin first-party content that forces AI to rely on third-party interpretations, and recent company news (acquisitions, funding rounds, leadership changes) that hasn't been structured into the knowledge graph.
The intervention is structured data: Schema.org entity markup at the Organisation, Product, and Service level, combined with consistent first-party content that gives AI platforms a clear, authoritative source for the facts you need them to get right. The AI Brand Accuracy Audit cross-checks every AI-generated claim against verified brand facts and flags any that are incorrect. Run it before you scale any AI visibility programme: hallucinations at scale are harder to correct than hallucinations caught early.
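As a shape reference only (every value below is a placeholder, and note Schema.org spells the type "Organization" even where this article uses the British spelling), an entity block that pins down the facts hallucinations most often get wrong, pricing included, looks roughly like this:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ExampleCo",
  "url": "https://www.example.com",
  "sameAs": ["https://www.linkedin.com/company/exampleco"],
  "makesOffer": {
    "@type": "Offer",
    "itemOffered": { "@type": "Service", "name": "Example Service" },
    "price": "499.00",
    "priceCurrency": "GBP"
  }
}
```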
Phase 6: the Zero-Gap Topic Matrix: content planning that starts with absence
Standard AEO content planning starts with keyword or topic research: find questions your audience asks, build topic clusters, create content. That approach finds content gaps relative to your existing site. It doesn't find the queries where AI platforms are already answering and your brand is simply absent from the answer.
The Zero-Gap Topic Matrix inverts the process. Run the 75-100 prompt audit from Phase 1. Flag every query where an AI platform generated a substantive answer about your category and your brand didn't appear. Those are the queries where a competitor is being cited instead of you, and where a single well-structured piece of CPS®-scored content could insert your brand into an existing AI conversation.
This is a meaningfully different content brief from a standard keyword gap. A keyword gap tells you to create a page that might rank on Google and might eventually be cited by AI. A Zero-Gap Matrix entry tells you there's a specific query where Perplexity is already citing a competitor's content, and you need a specific block that opens with a declarative answer to that query, contains at least three verifiable data points, and scores Grade B or above before it's published.
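Mechanically, the matrix is a filter over the Phase 1 log. A sketch, assuming you also flagged per prompt whether the platform gave a substantive category answer (recorded here as a set of queries):

```python
def zero_gap_queries(results: list[PromptResult], substantive: set[str]) -> list[str]:
    """Queries where an AI answered the category question but the brand was
    neither mentioned nor cited: each one is a content brief."""
    return sorted({r.query for r in results
                   if r.query in substantive and not (r.mentioned or r.cited)})
```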
Phase 7: CPS®-scored content production
Generic AI writing tools generate content and assume it'll get cited. That assumption isn't tested. Content that scores below Grade D on Answer Structure or Fact Density won't appear in AI responses regardless of how well-structured the article looks as a whole.
CPS®-scored content production works differently. Every block is written to the 134-167 word optimal chunk size. Every block opens with the declarative pattern for its target query. Fact density is measured at 100-word intervals. Self-containment is checked by reading each block in isolation, not in the context of the surrounding article. Freshness signals are added explicitly rather than hoping the publication date is enough.
Only blocks that clear Grade B across all five pillars get staged for publication. This doesn't mean longer articles; it means every paragraph earns its place by passing a citability test before it goes live. The CPS® Block Scorer lets you test any existing or drafted paragraph in 30 seconds.
Phase 8: machine-readable infrastructure
This is where GEO work belongs in the sequence: after you know what you're publishing and before you try to scale it. The infrastructure layer makes your content accessible to AI crawlers. Without it, even Grade A content might not be discovered. With it, Grade F content still won't be cited; the infrastructure determines eligibility, not selection.
The essentials: an llms.txt file that tells AI agents what your site contains and what's most important, JSON-LD Schema.org markup at Organisation, WebPage, Article, Product, and Service levels, a valid XML sitemap with accurate lastmod dates, and confirmed AI crawler access (GPTBot, ClaudeBot, PerplexityBot, Bingbot). The AI Crawler Access Audit checks all fifteen major AI bots, because many sites inadvertently block AI crawlers through overly restrictive robots.txt rules.
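robots.txt is the first gate to check, and the Python standard library can do it. A minimal sketch for four of the bots, using their published user-agent strings; note that a permissive robots.txt still won't reveal CDN- or firewall-level blocks:

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Bingbot"]

def crawler_access(site_url: str) -> dict[str, bool]:
    """True if robots.txt permits the bot to fetch the homepage."""
    rp = RobotFileParser()
    rp.set_url(site_url.rstrip("/") + "/robots.txt")
    rp.read()
    return {bot: rp.can_fetch(bot, site_url) for bot in AI_BOTS}

print(crawler_access("https://www.example.com"))
```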
Phase 9: the measurement loop that makes the strategy sustainable
ASEO without measurement is content production with extra steps. The monthly measurement loop is what turns a one-time audit into a compound-growth programme.
Run 75-100 prompts per platform, per month. Track mention rate, citation rate, position (first, second, third mention), sentiment, and hallucination instances. Compare against your Phase 1 baseline. Calculate your competitive win rate: on queries where both your brand and a named competitor appear, how often does your brand appear first? Map changes to the specific content updates you made in the previous month.
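Competitive win rate falls straight out of the position data recorded in Phase 1. A sketch, assuming you logged the competitor's mention position alongside your own:

```python
def competitive_win_rate(head_to_heads: list[tuple[int, int]]) -> float:
    """head_to_heads holds (brand_position, competitor_position) pairs for
    queries where both appeared; a win is being mentioned first."""
    return sum(brand < rival for brand, rival in head_to_heads) / len(head_to_heads)
```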
The measurement loop produces something most AI strategy guides don't give you: evidence of causality. Not "our AI visibility went up this month" but "publishing three CPS®-scored blocks targeting Decision-stage queries on Copilot increased our citation rate on commercial intent queries from 9% to 23% over six weeks." That's the number you take to a budget conversation.
GA4 attribution completes the loop. Set up AI referral tracking by platform: ChatGPT, Perplexity, Claude, and Copilot all appear as referral sources in GA4. Connect AI-referred sessions to goal completions. Calculate the conversion rate and average order value for AI-referred traffic against your other channels. For most brands that have done this properly, AI-referred sessions convert at 2-4x the rate of organic search, because the buyer arrives having already received a recommendation from an AI that they trust.
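If you export GA4 referral data, tagging sessions per platform is a hostname match. The hostnames below are assumptions based on what these platforms currently send as referrers; verify them against your own GA4 referral report, since they change:

```python
AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "copilot.microsoft.com": "Copilot",
}

def classify_referrer(hostname: str) -> str | None:
    """Map a GA4 referral hostname to an AI platform, or None if not AI."""
    for fragment, platform in AI_REFERRERS.items():
        if hostname == fragment or hostname.endswith("." + fragment):
            return platform
    return None
```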
The difference a scoring layer makes
An AEO strategy without a measurement layer is a set of content publishing decisions made on instinct. You write answer-first content and hope AI platforms cite it. You build topic clusters and hope the authority signals accumulate. You add FAQ schema and hope the structured data helps.
ASEO replaces hope with measurement at every step. You know which platforms are citing you before you write. You know which paragraphs are failing and on which pillar. You know whether your content is being selected at retrieval or recalled from training data. You know whether what AI platforms say about you is accurate. You know which funnel stage is leaking visibility to competitors. And you have a revenue number that connects all of it to the commercial outcome that justifies the investment.
This is the framework that makes Sagashi's guide look like the primer before the real thing. Not because that guide is wrong (it covers the AEO basics correctly), but because it stops exactly where the measurement begins. A readiness checklist tells you whether you're prepared to get cited. CPS® scoring tells you whether you are being cited, which blocks, and what to fix next.
Start with a free ASEO audit
28 modules covering all 9 phases of this framework. Share of Voice baseline, CPS® block scoring, hallucination detection, funnel-stage SOV, and GA4 revenue attribution. Free audit, results in 48 hours.
Get Your Free Audit →

Or test any paragraph free with the CPS® Block Scorer.