Citation Probability Score®: The Five-Pillar Framework for AI Citation Readiness
How Cited By AI® built an auditable, open-source metric for the AI search era — and why transparency is the only way a metric becomes a standard.
Domain Authority took five years to become the default metric in traditional SEO. Moz didn't win that race by lobbying or advertising. They won it by showing their working. When an agency could explain to a client exactly why a score of 42 meant something different from a score of 67 — and trace every point back to a measurable input — the metric became a shared language. That's the moment a proprietary tool becomes an industry standard.
Citation Probability Score® (CPS®) is doing the same thing for AI search optimisation (ASEO). As of March 2026, the full CPS® scoring framework is publicly available at github.com/citedbyai/cps-framework. Every pillar, every weighting, every scoring rule — open for scrutiny, challenge, and adoption. This article explains what CPS® measures, how it's calculated, and why an open framework is the right foundation for a metric that intends to define a category.
Why AI search needs its own metric
Search is no longer a single system. As of Q1 2026, AI-driven platforms — ChatGPT, Perplexity, Claude, Gemini, and Microsoft Copilot — handle an estimated 25–40% of all search queries globally. These systems don't return ten blue links. They synthesise an answer and cite the sources that informed it. The brands that appear in those citations exist in AI search. The brands that don't are invisible, regardless of how well they rank on Google.
Traditional SEO metrics don't measure this. Domain Authority predicts Google ranking probability. It says nothing about whether your content will be retrieved by a RAG (Retrieval-Augmented Generation) system, extracted from its surrounding context, and cited as a source in a synthesised answer. That's a fundamentally different mechanism, and it requires a fundamentally different metric. CPS® was built to be that metric.
What CPS® actually measures
Citation Probability Score® (CPS®) is a proprietary, auditable 0–100 metric that measures how likely any given piece of web content is to be retrieved and cited by an AI retrieval system. It is computed at block level — not page level — because AI systems don't cite pages. They cite passages.
The unit of citation in a RAG system is a chunk of approximately 134–167 words that has been embedded, indexed, and retrieved as a discrete semantic unit. A CPS® of 80 or above means content is highly citable — AI platforms retrieve and cite it consistently. A score below 35 means the content is effectively invisible to AI retrieval, regardless of how well the page ranks in traditional search.
The score is calculated across five weighted pillars, each derived from research into RAG system behaviour and validated against observed citation outcomes across ChatGPT, Perplexity, Gemini, Claude, and Copilot. Think of CPS® as Domain Authority for the AI era: a single, trackable number that marketing teams can act on and boards can understand.
The five pillars: Content Structure · Fact Density · Answer Structure · Self-Containment · Freshness Signals.
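The exact pillar weightings are published in the GitHub repo; as a sketch of the arithmetic only, a composite score under assumed weights looks like this. The weight values below are illustrative assumptions (Content Structure highest, matching the 30-point maximum mentioned later; Freshness Signals lowest), not the published framework numbers.

```python
# Illustrative composite CPS calculation. The real weights live in the
# open framework at github.com/citedbyai/cps-framework; the values below
# are assumptions for illustration, not the published weightings.
PILLAR_WEIGHTS = {
    "content_structure": 30,  # the article cites a 30-point maximum here
    "fact_density": 25,       # assumed
    "answer_structure": 20,   # assumed
    "self_containment": 15,   # assumed
    "freshness_signals": 10,  # lowest-weighted pillar, per the article
}

def composite_cps(pillar_fractions: dict) -> int:
    """Combine per-pillar fractions (0.0-1.0) into a 0-100 score."""
    return round(sum(weight * pillar_fractions.get(pillar, 0.0)
                     for pillar, weight in PILLAR_WEIGHTS.items()))
```

A block scoring perfectly on every pillar reaches 100; a block with no pillar signals at all scores 0.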
The five scoring pillars
Content Structure
Content Structure is the highest-weight pillar because it reflects the most fundamental requirement of AI retrieval: content must exist in the right format to be extracted. RAG systems chunk documents before embedding them for vector search. The optimal chunk size — where a passage contains one complete, answerable idea with enough supporting context to rank highly in embedding space — is 134–167 words. Shorter blocks lack sufficient context. Longer blocks dilute the semantic signal and score lower in retrieval ranking.
Content Structure also measures opening sentence pattern. AI retrieval systems are designed to answer questions. A block that opens with a direct declarative statement — "ASEO is the practice of optimising web content so that AI systems retrieve and cite it in generated answers" — is extracted and cited at significantly higher rates than a block that opens with scene-setting or brand narrative. As of Q1 2026, most brand website content scores below 40% of the 30-point maximum on this pillar alone.
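The length component of this pillar can be sketched as a word-count check. Treating the 134–167-word band as a hard pass/fail window is a simplification of the published rules, used here only to make the check concrete.

```python
# Word-count check against the optimal chunk band described above.
# The 134-167 band comes from the article; hard cutoffs are a simplification.
OPTIMAL_CHUNK = (134, 167)

def chunk_length_status(block: str) -> str:
    """Classify a content block by word count relative to the optimal band."""
    n = len(block.split())
    lo, hi = OPTIMAL_CHUNK
    if n < lo:
        return f"too short ({n} words): likely lacks supporting context"
    if n > hi:
        return f"too long ({n} words): dilutes the semantic signal"
    return f"optimal ({n} words)"
```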
Fact Density
Fact Density measures the concentration of verifiable signals within a content block: statistics, percentages, named entities, years, and proper nouns per 100 words. AI retrieval models weight fact-rich passages 2–3× higher than descriptive or generic text when selecting sources for citation. The target threshold in the CPS® framework is three or more verifiable signals per 100 words.
A passage that states "AI-referred traffic converts at 14.2%, compared to 2.8% for traditional organic search, based on Q1 2026 data" is a high-confidence source. A passage that states "AI search is growing rapidly and brands need to adapt" provides no verifiable signal. Both may rank on Google. Only the first gets cited by ChatGPT. Fact Density is the pillar most directly within a content team's control — adding one named statistic per paragraph moves the score immediately. It's also where the gap between high-CPS® and low-CPS® content is most visible.
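A crude version of the signal count can be sketched with regular expressions. The patterns below — numbers, percentages, years, and mid-sentence capitalised tokens as a proper-noun proxy — are illustrative assumptions, not the framework's published extraction rules.

```python
import re

def fact_density(block: str) -> float:
    """Verifiable signals per 100 words. Heuristic sketch, not the CPS scorer."""
    words = block.split()
    if not words:
        return 0.0
    # Numbers, percentages, and years all count as verifiable signals.
    numeric = re.findall(r"\d[\d,.]*%?", block)
    # Mid-sentence capitalised tokens serve as a rough proper-noun proxy.
    proper = [w for i, w in enumerate(words)
              if i > 0 and w[0].isupper()
              and not words[i - 1].endswith((".", "!", "?"))]
    return (len(numeric) + len(proper)) * 100 / len(words)
```

Under this heuristic, the fact-rich example above clears the three-signals-per-100-words threshold comfortably, while the generic sentence scores zero.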
Answer Structure
Answer Structure detects whether each content block opens with the declarative pattern that AI retrieval systems are optimised to surface: "[Topic] is/provides/enables [specific outcome]." This pattern signals to a retrieval model that the block self-answers the implied query — that it doesn't require surrounding context to be understood or used.
Content that opens with "Welcome to our guide on..." or "In this section, we'll explore..." scores near zero on Answer Structure. These openings are written for human sequential reading. AI retrieval systems don't read sequentially — they extract blocks in isolation and must determine whether a block is a complete, standalone answer. As of Q1 2026, fewer than 20% of brand website pages pass the Answer Structure test on their primary content blocks. Rewriting block openings to follow the declarative pattern is the single fastest CPS® improvement available to most content teams.
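The declarative-opening test can be sketched as a pattern match. Both the verb list and the boilerplate-opening list below are assumptions for illustration; the published framework may recognise a wider inventory.

```python
import re

# Declarative-opening check: "[Topic] is/provides/enables [outcome]".
# The verb list and boilerplate-opening list are illustrative assumptions.
DECLARATIVE = re.compile(
    r"^[A-Z][\w®-]*(?:\s+[\w®-]+){0,4}\s+(?:is|are|provides?|enables?|measures?)\b")
BOILERPLATE = ("welcome to", "in this section", "in this article")

def passes_answer_structure(block: str) -> bool:
    """True if the block opens with a declarative, self-answering pattern."""
    first = block.strip()
    if first.lower().startswith(BOILERPLATE):
        return False
    return bool(DECLARATIVE.match(first))
```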
Self-Containment
Self-Containment measures whether a content block makes complete sense when read in isolation — without access to the paragraph before it, the section heading above it, or the image beside it. AI systems extract content from its context. A block containing "as mentioned above," "see the table below," or "they have been shown to" — without identifying who "they" refers to — is context-dependent and will be deprioritised in citation selection.
A block scoring full marks on Self-Containment is what the Cited By AI® framework calls a Citable Chunk: a discrete, self-contained unit of knowledge that an AI system can extract, reproduce, and cite with full confidence. Most FAQ sections and definition pages score well here. Most long-form blog posts and service pages score poorly. The fix is surgical: identify the dependent references and resolve them within the block rather than relying on surrounding content.
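One surgical way to find those dependent references is a simple pattern scan. The phrase inventory below mirrors the examples in this section and is an assumption, not the published scoring rules.

```python
import re

# Patterns that signal context dependence, per the examples above.
# The phrase inventory is illustrative, not the published scoring rules.
DEPENDENT_PATTERNS = {
    "positional reference":
        r"\bas mentioned above\b|\bsee the (?:table|figure|chart) below\b",
    "unresolved back-reference": r"\bthe former\b|\bthe latter\b",
}
PRONOUN_OPENING = re.compile(r"^(?:it|they|this|these)\b", re.IGNORECASE)

def self_containment_flags(block: str) -> list:
    """Return the context-dependence problems found in a block."""
    text = block.strip()
    flags = [label for label, pattern in DEPENDENT_PATTERNS.items()
             if re.search(pattern, text, re.IGNORECASE)]
    if PRONOUN_OPENING.match(text):
        flags.append("pronoun opening")
    return flags
```

A block returning an empty list is a candidate Citable Chunk on this dimension; each flag points at a reference to resolve inside the block itself.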
Freshness Signals
Freshness Signals measures the presence of temporal markers — date references, "as of [year]" language, "updated," "latest," and recency indicators — within a content block. This pillar carries the lowest weighting but is disproportionately important for citation by Perplexity and Bing-powered AI systems, both of which weight recency heavily in retrieval ranking.
Content with no date signal is treated as potentially stale and deprioritised, regardless of whether the information is accurate; the penalty matters most for brands publishing time-sensitive information about pricing, regulations, or market data. Freshness Signals is the easiest pillar to improve: adding a single "As of [year]" marker per content block moves a page from a failing score to a passing one on this dimension.
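A minimal marker scan makes the check concrete. The marker list mirrors the examples in this section and is an assumption, not the framework's full inventory.

```python
import re

# Temporal markers named above: "as of" phrasing, update language, years.
FRESHNESS_MARKERS = re.compile(
    r"\bas of\b|\bupdated\b|\blatest\b|\b(?:19|20)\d{2}\b", re.IGNORECASE)

def has_freshness_signal(block: str) -> bool:
    """True if a block carries at least one recency marker."""
    return bool(FRESHNESS_MARKERS.search(block))
```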
A worked example
The following example uses a real audit block from an automotive service page — a 162-word paragraph describing an electric vehicle servicing offer.
Rewriting the opening sentence to a declarative pattern and adding one "As of 2026" marker moves the score to 82. Grade A — Highly Citable. That rewrite takes approximately four minutes.
Grade tiers — what each score means commercially
| Grade | Score | Commercial meaning |
|---|---|---|
| A | 80–100 | Highly Citable. AI platforms retrieve and cite consistently. |
| B | 65–79 | Regularly Cited. Minor structural gaps in one or two pillars. |
| C | 50–64 | Occasionally Cited. Appears in some AI answers, not others. |
| D | 35–49 | Rarely Cited. Significant rewriting required. |
| F | 0–34 | Not Cited. Effectively invisible to AI retrieval. |
The commercially significant gap is between Grade C and Grade A. A brand sitting at 58 — Grade C — appears in AI answers occasionally but loses the majority of relevant citations to competitors with higher-scoring content. As of Q1 2026, the median CPS® score across pages audited by the Cited By AI® agent is 54 — Grade C. Most brand website content is inconsistently cited at best. The fix is structural, not creative, and the CPS® framework identifies exactly which pillar to address first.
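The tier mapping in the table above is a straight threshold lookup, which can be sketched as:

```python
# Grade bands from the tier table: A 80-100, B 65-79, C 50-64, D 35-49, F 0-34.
GRADE_BANDS = [(80, "A"), (65, "B"), (50, "C"), (35, "D"), (0, "F")]

def cps_grade(score: int) -> str:
    """Map a 0-100 CPS score to its letter grade."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return next(grade for floor, grade in GRADE_BANDS if score >= floor)
```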
Why we open-sourced the framework
A metric only becomes a standard when it's auditable. Domain Authority became the default currency of SEO because agencies and clients could interrogate it — understand the inputs, challenge the outputs, and build strategies around a shared definition. A black-box score doesn't do that: it creates dependency without trust.
The market is full of dashboards and short on outcomes. An open framework is the first step toward a market that can measure AI citation readiness, fix it, and hold itself accountable. That's the market Cited By AI® is building toward.
The full CPS® scoring framework is open source at github.com/citedbyai/cps-framework under CC BY-NC 4.0. Free to share and adapt for non-commercial use with attribution to Cited By AI®.
Get your CPS® score
Free instant check at citedbyai.info. Full audits from £49.
Get Your Free Audit →