Cited By AI® vs LightSite AI: Consultancy vs. Platform
These are not competing products. Cited By AI® is a consultancy that diagnoses why you're not cited and verifies the fix. LightSite AI is a platform that deploys the infrastructure that makes your site readable to AI systems. Both matter. They operate at different layers of the AI search stack, with different outputs, different buyers, and almost no overlap in what they actually deliver.
If you're evaluating both, the question isn't which one is better. It's which problem you're trying to solve first, and whether that problem calls for a tool or an expert.
What each product actually is
LightSite describes itself as "not a content optimization tool or GEO dashboard," and that framing is accurate: it's a closed-loop infrastructure and execution layer. Their product deploys a machine-readable layer on your domain so AI crawlers can reliably parse your site, then monitors and acts on what those crawlers find. It's an always-on platform you connect and leave running.
Cited By AI® is not always-on. It's expert-led. Each audit is a structured diagnostic, run by people, producing a 35-section report that tells you exactly which paragraphs on your site are failing at AI retrieval, which AI platforms are generating false information about your brand, what your competitive win rate looks like against named rivals, and how much revenue your current AI citation level is actually generating through GA4. You get answers, not alerts.
The layer difference
AI citation works in two distinct layers. The first is eligibility: can AI crawlers access your site, parse it reliably, and understand what your business does? LightSite owns this layer. Their infrastructure work (AI sitemaps, structured endpoints, JSON-LD at entity level) solves the problem of AI systems that can't extract clean facts because the site was built for humans. That's a real and common problem, and their no-code setup genuinely addresses it.
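To make the eligibility layer concrete, here is a minimal example of entity-level JSON-LD using schema.org's Organization type. It illustrates the kind of markup described above, not LightSite's actual output; the company name and URLs are placeholders.

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Ltd",
  "url": "https://example.com",
  "description": "What the business does, stated as a stable, verifiable fact.",
  "sameAs": ["https://www.linkedin.com/company/example"]
}
```

Markup like this gives a crawler clean, extractable facts about the entity instead of forcing it to infer them from page copy.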
The second layer is selection: once AI crawlers can read your site, which specific paragraphs do they select when answering a relevant query? This is where most AI search advice stops working. Machine-readability doesn't determine selection. LightSite itself says in its own Profound comparison: "you can publish AI-optimized pages, but the model still has to guess who you are, what you do, which page is canonical, and which facts are stable." That guessing problem is infrastructure. But even with perfect infrastructure, if your paragraphs don't open with declarative answers, don't contain enough verifiable facts per 100 words, or depend on surrounding context to make sense, they won't be cited. That's the selection problem. It's what CPS® measures.
LightSite solves the eligibility layer: can AI crawlers read you? CBA solves the selection layer: once they can read you, which paragraphs do they choose? You need both. They don't substitute for each other.
Head-to-head: what each covers
| Capability | Cited By AI® | LightSite AI |
|---|---|---|
| Machine-readable infrastructure | ✓ llms.txt, schema, robots.txt, crawler access audit | ✓ Core product: AI sitemaps, endpoints, JSON-LD, agent manifests |
| AI visibility monitoring | ✓ 375 API calls per audit across 5 platforms | ✓ Continuous: up to 2,700 prompts/month |
| Share of Voice measurement | ✓ Per platform, per run, with trend tracking | Partial: brand mention and position tracking |
| Funnel-stage SOV | ✓ Awareness / Consideration / Decision separately | ✗ Not in scope |
| Block-level citability scoring | ✓ CPS® 0-100 per paragraph across 5 pillars | ✗ Not in scope |
| Hallucination detection | ✓ AI Accuracy Audit: flags wrong brand facts in every response | ✗ Monitors mentions, not factual accuracy |
| Competitive win rate | ✓ Head-to-head vs named competitors | Partial: competitor gap analysis by query cluster |
| GA4 revenue attribution | ✓ AI-referred sessions to conversions, full revenue loop | ✗ Not in scope |
| CPS®-scored content generation | ✓ AEO Content Writer, verified before delivery | Partial: agentic content creation, not CPS®-verified |
| Real AI bot behaviour data | Partial: via AI Crawler Access Audit (15 bots checked) | ✓ Proprietary real-time crawl, query, and extraction data |
| Human expert delivery | ✓ Consultancy model: expert-led, verified output | ✗ Platform model: self-serve with agentic execution |
| 35-section diagnostic report | ✓ Full Word document, actionable per section | ✗ Dashboard and agent outputs, not a structured report |
| Pricing model | Audit-based: free audit, paid engagement | Subscription: $129/yr Starter, $299/yr Pro |
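The table's first row references llms.txt. For readers unfamiliar with it, here is a minimal sketch following the llms.txt proposal: a markdown file at the site root with the site name, a one-line summary, and curated links. The content below is a hypothetical placeholder, not any vendor's template.

```markdown
# Example Ltd

> One-sentence summary of what the business does, stated plainly for LLMs.

## Key pages

- [Pricing](https://example.com/pricing): current service tiers and rates
- [About](https://example.com/about): company facts, locations, leadership
```

The point of the file is the same as the rest of the eligibility layer: hand AI systems stable, canonical facts rather than making them guess.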
Where LightSite is genuinely strong
LightSite's infrastructure layer is real and valuable. The proprietary AI bot behaviour data (tracking which LLM crawlers visit, which pages they access, and how often they extract content) is something CBA's audit doesn't replicate. Their dataset of 6.5 million LLM bot interactions (per Yahoo Finance coverage) gives them genuine insight into how AI systems actually crawl sites in the wild, not just how they should according to spec.
For a business that's never structured its site for AI discovery, LightSite's no-code setup delivers tangible results fast. The structured endpoint layer, AI sitemap, and JSON-LD for key entities go live without a development sprint. For agencies managing multiple clients at scale, the subscription model at $299/yr per domain is cost-effective for always-on monitoring and agentic execution.
Their agentic approach also suits teams that want automation over analysis. The agents handle content creation, backlink outreach, and competitor monitoring without requiring a human to interpret findings and commission work. That's the right model for a team that doesn't have ASEO expertise in-house and wants the platform to act on their behalf.
Where CBA does things LightSite doesn't
Block-level CPS® scoring doesn't exist in LightSite's product. They monitor whether you appear in AI answers. They don't score whether the specific paragraph that gets retrieved was the best paragraph available, whether it opened with a declarative answer, or whether it contained enough verifiable facts per 100 words to be reliably selected over a competitor's paragraph on the same topic. That distinction matters enormously for clients who already appear in some AI answers but want to increase how often and in what contexts they appear.
Hallucination detection is the other capability with no equivalent in LightSite's platform. When ChatGPT states that your company charges £499 for a service that actually costs £299, or describes a location you don't operate from, LightSite's monitoring tracks that you appeared. CBA's AI Accuracy Audit flags that what was said is factually wrong. Those are different findings with different implications: one tells you you're visible, the other tells you your visibility is actively damaging your brand.
GA4 revenue attribution closes the loop that LightSite's platform leaves open. Knowing your AI-referred sessions convert at 14.2% versus 2.8% for organic search turns an ASEO investment from a marketing line item into a commercial argument. That's the number that gets a retainer renewed. LightSite's platform doesn't produce it.
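A toy sketch of the segment comparison behind that kind of claim: compare conversion rate for AI-referred sessions against organic search. The segment labels and session counts below are invented for illustration (chosen to reproduce the example rates above); they are not GA4 API fields or real client data.

```python
def conversion_rate(sessions: int, conversions: int) -> float:
    """Percentage of sessions that converted, rounded to one decimal."""
    return round(100 * conversions / sessions, 1)

# Hypothetical exported GA4 segment totals, labelled by referral source.
segments = {
    "ai_referred": {"sessions": 1_200, "conversions": 170},
    "organic_search": {"sessions": 48_000, "conversions": 1_344},
}

for name, s in segments.items():
    print(name, conversion_rate(s["sessions"], s["conversions"]), "%")
```

Once the two rates sit side by side with revenue attached, the ASEO line item reads as a commercial result rather than a visibility metric.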
The honest summary: if your site can't be read by AI systems, fix that first. LightSite is fast and effective at that layer. Once it can be read, the question changes: why aren't those readable paragraphs being cited? That's CBA's question, and it requires a different methodology to answer it.
Who should choose which
Choose LightSite if you need machine-readable infrastructure deployed quickly without developer involvement, you want always-on monitoring across multiple domains at a predictable subscription cost, or you need agentic execution of content and outreach tasks without hiring an ASEO specialist. The $129-$299/yr price point is appropriate for this scope.
Choose Cited By AI® if your infrastructure is in place but you still don't appear where you should, you need to know exactly which paragraphs are failing at AI retrieval and why, you suspect AI platforms are generating incorrect information about your brand, or you need to demonstrate the commercial value of your AI visibility investment to a CFO or board. The free audit is the starting point: 27 modules, results in 48 hours.
Use both if you're building an AI search programme from scratch. LightSite handles the technical foundation; CBA handles the diagnostic and content verification layer above it. They don't conflict. Most serious AI search investments will eventually need both layers covered, and the two products don't compete for the same budget line.
Start with the free ASEO audit
27 modules. 5 platforms. Block-level CPS® scoring per page. Hallucination detection. GA4 revenue attribution. Free audit, results in 48 hours. No commitment required.
Get Your Free Audit →