
CPS® vs Profound vs LatticeOcean: What Grok Says About the Three ASEO Approaches

Published: 11 April 2026 | Author: Cited By AI® | Reading time: 8 min
Version 1.0 | Last verified: 11 April 2026 | Source: citedbyai.info AI Visibility Intelligence
Full disclosure: We built CPS®. We asked Grok to run this comparison on 10 April 2026, briefed it on the three tools, and reported what came back, including the parts where CPS® doesn't lead. The comparison covers official sites, public documentation, and independent references. It isn't a sales page.

Most comparisons in this market are written by vendors. This one was produced by a model. On 10 April 2026, we asked Grok to evaluate CPS® against two other tools in the AI search visibility space. What came back was more useful than we expected, not because CPS® came out on top across the board, but because the comparison revealed something the market hasn't articulated clearly yet.

These three tools aren't competing for the same job.

What each tool is actually built to do

Grok's framing was precise. CPS® is a predictive diagnostic scoring tool: it estimates the probability that a specific page or content block gets retrieved and cited by AI engines. Profound's Monitoring is observational real-time tracking: it shows what's actually being cited right now, at scale. LatticeOcean's Blueprints is prescriptive structural modelling: it tells you exactly what a page needs to look like before you write it.

CPS® — Cited By AI® (citedbyai.info): predictive diagnostic scoring. "Will this content get cited, and which paragraph needs fixing?"

Profound (tryprofound.com): observational real-time monitoring. "What's actually being cited across AI search right now?"

LatticeOcean (latticeocean.com): prescriptive structural modelling. "Is this query even structurally winnable, and if so, what does the page need?"

Three different questions. Three different tools. The mistake most teams make is treating them as substitutes when they're better understood as a sequence.

Where each tool sits in the workflow

CPS® operates pre- and post-publication: audit existing content, rewrite to score, then monitor. You use it when you have a page and want to know whether it'll get cited, or when you need to fix content that isn't getting traction.

Profound operates post-publication only: continuous monitoring and alerts across the AI platforms your brand appears on. It answers "what's happening across AI search right now?" at a scale that manual checking can't match.

LatticeOcean operates before content creation: a feasibility audit before you write. It classifies whether a query is Vendor Displaceable, Aggregator Dominant, or Structurally Unstable, then outputs the structural requirements for the page if it's worth building.

Step 1 — LatticeOcean: Is this query worth building a page for?
Step 2 — CPS®: Write, score, and optimise to Grade B minimum.
Step 3 — Profound: Monitor what AI actually cites after publication.

Grok's workflow note: many teams use all three in sequence. LatticeOcean first to set structural constraints. CPS® to write and optimise to score. Profound to monitor results after publication.

Granularity: where the tools diverge most sharply

This is the dimension where the commercial differences are most significant. CPS® evaluates at page level and block level, analysing individual 134-167 word chunks and producing specific rewrite recommendations. Grok noted that GitHub's awesome-ai-tools list references it explicitly as "block-level AI citation auditing."

Profound evaluates at page and domain level: citations, Share of Voice, sentiment trends, with no confirmed block or chunk-level breakdown. LatticeOcean evaluates at query-cluster level, outputting whole-page structural constraints rather than paragraph-level analysis.

Grok's framing — 10 April 2026

"If you need to know which paragraph is underperforming and what to change about it specifically, that's CPS®. If you need to know what's happening across your brand's AI presence at scale, that's Profound. If you need to know whether a page is worth building at all, that's LatticeOcean."

Scoring and output

CPS® produces a 0-100 score per page with a letter grade (A+ through F) and specific rewrite recommendations for anything scoring below Grade B. The free paragraph scorer at citedbyai.info/cps-scorer gives an instant block-level score with no signup required.
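To make the scoring model concrete, here is a minimal sketch of how a 0-100 score could map to the letter grades described above. The actual CPS® grade boundaries are not published; the cut-offs below (including the Grade B threshold at 80) are illustrative assumptions only.

```python
# Illustrative only: maps a 0-100 citation-probability score to a letter
# grade (A+ through F). The real CPS® grade boundaries are not public;
# these cut-offs are assumptions for the sake of the example.
GRADE_BANDS = [
    (97, "A+"), (90, "A"), (80, "B"),
    (70, "C"), (60, "D"), (0, "F"),
]

def grade(score: float) -> str:
    """Return the letter grade for a 0-100 score."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    for cutoff, letter in GRADE_BANDS:
        if score >= cutoff:
            return letter
    return "F"

def needs_rewrite(score: float) -> bool:
    """CPS® recommends rewrites for anything below Grade B."""
    return score < 80  # assumed Grade B cut-off
```

Under these assumed bands, a page scoring 85 lands at Grade B and needs no rewrite, while a page at 79 triggers rewrite recommendations.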

Profound doesn't produce a probability-style score. Instead: citation counts, Share of Voice, sentiment trends, competitor benchmarking, daily dashboards, and automated workflow integrations. It's built for teams who need to report AI visibility to stakeholders on a recurring basis.

LatticeOcean produces no numerical score either. Its output is a three-way feasibility classification plus exact blueprints: word-count ranges, H2 counts, section density, required tables and lists, vendor coverage depth, and CMS-ready draft templates. Precise. Narrow. Pre-creation only.

Platform coverage

CPS® covers 5 platforms in full audits: ChatGPT, Perplexity, Gemini, Claude, and Microsoft Copilot. A full audit run fires 375 API calls across those platforms, with each response analysed for citation probability. Profound covers 10+, including Grok, Meta AI, and others, making it the only confirmed option for brands where those platforms matter. LatticeOcean focuses on Perplexity, Gemini, and ChatGPT based on live citation structures.
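The 375-call figure is easier to picture per platform. Assuming the calls are split evenly across the five engines (the split is not confirmed publicly), the arithmetic works out as follows:

```python
# Back-of-envelope: a full CPS® audit fires 375 API calls across 5
# platforms. Assuming an even split (an assumption, not a documented
# fact), that is 75 prompts per platform.
PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Claude", "Microsoft Copilot"]
TOTAL_CALLS = 375

calls_per_platform = TOTAL_CALLS // len(PLATFORMS)
print(calls_per_platform)  # 75
```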

For most mid-market brands, 5 platforms is sufficient coverage. For enterprise brands where Copilot or Grok visibility is commercially material, Profound's breadth is genuinely hard to match.

Pricing

| Tier | CPS® / Cited By AI® | Profound | LatticeOcean |
| --- | --- | --- | --- |
| Free entry point | ✓ CPS® paragraph scorer, no signup | No free tier | Founder Diagnostic (one query, email required) |
| Starter / one-off | One-off audit from £299 | Starter approx. $99/month | $499 per buyer-intent cluster audit |
| Growth / monthly | Monthly retainer from £750 (single market) | Growth approx. $399/month | No ongoing tier confirmed |
| Enterprise | Up to £5,000/month | Custom (Sequoia-backed, $58M raised) | Custom on request |
| Block-level scoring | ✓ Included | Not confirmed | Not in scope |
| Rewrite output | ✓ Per block, per audit | Not included | Blueprint templates (pre-creation only) |
| Platform count | 5 platforms | 10+ platforms | 3 platforms (Perplexity, Gemini, ChatGPT) |

What Grok said about the limitations, including ours

Grok didn't pull its punches on any of the three. CPS® is predictive, not a guarantee. It's lighter on real-time enterprise dashboards than Profound. Both of those are accurate, and worth saying plainly.

Profound has no built-in rewrite engine or pre-creation blueprints, and the costs climb steeply at meaningful scale. Enterprise pricing is custom and, from what's publicly reported, significant.

LatticeOcean is extremely narrow: B2B SaaS queries only. At $499 per query cluster with no ongoing probability scoring, it's expensive per unit and doesn't cover the full ASEO lifecycle.

Where each tool wins

CPS® wins when...
  • You need block-level diagnosis on existing content
  • You want specific paragraph rewrites, not general direction
  • You need GA4 revenue attribution alongside citation data
  • Budget doesn't support $399+/month for monitoring
  • You want a free instant score before committing
Profound wins when...
  • You're an enterprise brand needing 10+ platform coverage
  • Daily monitoring and automated alerts are non-negotiable
  • Copilot or Grok visibility is commercially material to you
  • You need stakeholder dashboards, not just a report
LatticeOcean wins when...
  • You're a B2B SaaS team planning a major buyer-intent page
  • You want to know if a query cluster is worth building before investing
  • Structural constraints (word count, H2s, tables) matter before writing

The closing point that matters

Grok's conclusion: no public head-to-head benchmarks exist yet between these tools, but they're complementary rather than competitive. CPS® diagnoses content quality. Profound tracks real-world outcomes. LatticeOcean prescribes structural requirements upfront.

We'd add one thing. The market is full of dashboards and short on outcomes. Knowing where you stand, what structure you need, and whether your content will actually get cited are three different problems. The teams who'll own AI search twelve months from now aren't shopping for one tool that claims to do all of it. They're treating each as a different job, and staffing accordingly.

LatticeOcean to blueprint. CPS® to write and optimise to score. Profound to monitor results. That's the complete stack. Each tool earns its place by doing one thing well.

Try CPS® block scoring free, no signup

Paste any paragraph. Get a 0-100 Citation Probability Score with a five-pillar breakdown. Instant results.

Score Your First Paragraph →

Want the full CPS® audit, not just one paragraph?

Free audit. 27 modules. 5 platforms. Block-level scoring across your entire site. Results in 48 hours.

Get Your Free Audit →