⭐ AEO measurement layer · per-engine citation tracking
Google Search Console shows real impressions on 10-vendor query strings — and zero clicks. That's the AI-agent fingerprint. Agents scrape and cite; they don't click. So Google CTR is the wrong conversion metric for those queries. Citation rate inside Anthropic, OpenAI, Perplexity, and Gemini is the right metric.
This page is the operator-readable measurement layer for that signal. Per query, per engine: cited URL · cited brand · not cited. Append-only log. Honest accounting, not vanity.
→ Aggregate stats
→ Per-query × per-engine matrix
Latest result per query, per engine. Updated when you re-run the checker.
python3 tools/citation_check.py --engine anthropic --limit 3 to populate. Engines without API keys in your env are skipped.

Per the saved doctrine project_ai_agent_shaped_queries_signal_doctrine.md: SideGuy's GSC shows persona-prompt query patterns ("as a CISO at enterprise_1000_plus, forced-rank these vendors…") with high impressions and ~0% CTR. That signature is AI agents (ChatGPT, Claude, Perplexity, Gemini) executing structured Google searches on behalf of human buyers. Agents scrape and cite; they don't click. So traditional CTR is a broken metric for that traffic.
The real conversion event is whether the agent cites sideguysolutions.com or names "SideGuy" in its answer to the buyer. That's what the buyer reads. That's what triggers the inbound text. Per Rodrigo Stockebrand AEO research: pages cited by AI engines convert 47% better because the citation arrives wrapped in the buyer's own framing.
This page closes the measurement loop. The signal layer (GSC) shows demand exists. The retrieval layer (/system/retrieval-monitor.html) shows the page is readable. This layer shows whether engines actually cite it.
Same prompt, every engine, same persona. Append-only log. Honest counting.
For each query in data/citation-queries.json, the checker sends the exact prompt template "I'm a {persona}. {query} — what should I evaluate? Cite specific sources where possible." to every configured engine. Default persona: "founder of a Series A startup evaluating compliance vendors."
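The prompt assembly above can be sketched in a few lines. The template string and default persona are exactly as stated on this page; the function name and signature are illustrative assumptions, not the checker's actual internals.

```python
# Sketch of the prompt construction described above.
# TEMPLATE and DEFAULT_PERSONA are verbatim from this page; build_prompt is a
# hypothetical helper, not necessarily what tools/citation_check.py exposes.
TEMPLATE = ("I'm a {persona}. {query} — what should I evaluate? "
            "Cite specific sources where possible.")
DEFAULT_PERSONA = "founder of a Series A startup evaluating compliance vendors"

def build_prompt(query: str, persona: str = DEFAULT_PERSONA) -> str:
    """Fill the exact template sent to every configured engine."""
    return TEMPLATE.format(persona=persona, query=query)
```

Same prompt for every engine means any difference in citation rate is the engine, not the wording.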
The response is scanned for two signals: (1) the literal URL token sideguysolutions.com and (2) the literal brand token SideGuy. Both are recorded with a 280-character excerpt around the first hit, plus the first 500 chars of the response for forensic review. Token usage is recorded to keep cost discipline visible.
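A minimal sketch of that two-signal scan. The tokens, the 280-character excerpt window, and the 500-character forensic head are from this page; the function name and return shape are assumptions made for illustration.

```python
# Hedged sketch of the response scan described above. Both token checks are
# literal, case-sensitive substring matches, so lowercase "sideguy" in a URL
# does not count as a brand-name hit.
EXCERPT_RADIUS = 140  # 280-character window centred on the first hit

def scan_response(text: str) -> dict:
    """Record URL-token and brand-token hits, each with a 280-char excerpt."""
    result = {"forensic_head": text[:500]}  # first 500 chars for review
    for label, token in (("url", "sideguysolutions.com"), ("brand", "SideGuy")):
        idx = text.find(token)
        hit = idx != -1
        result[label] = {
            "cited": hit,
            "excerpt": text[max(0, idx - EXCERPT_RADIUS): idx + EXCERPT_RADIUS]
                       if hit else None,
        }
    return result
```

Storing the excerpt alongside the boolean keeps the log auditable: you can see the framing the engine wrapped the citation in, not just that it happened.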
Run it: python3 tools/citation_check.py --engine anthropic --limit 3 for a cheap sanity check, then scale up by removing --limit. Calls are throttled to one second apart. Engines without API keys present in env are skipped (warned, not failed).
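The skip-not-fail behaviour can be sketched as below. The mapping of engine names to environment-variable names is an assumption; the real names live in the checker itself.

```python
# Hedged sketch of the "skipped (warned, not failed)" behaviour described
# above: engines whose API key is absent from the environment are warned
# about on stderr and skipped. ENGINE_KEYS is an assumed mapping.
import os
import sys

ENGINE_KEYS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

def configured_engines(engines):
    """Yield engines whose key is set; warn (don't fail) on the rest."""
    for name in engines:
        if os.environ.get(ENGINE_KEYS[name]):
            yield name
        else:
            print(f"warn: no {ENGINE_KEYS[name]} in env, skipping {name}",
                  file=sys.stderr)
```

Warning instead of failing means a partial key set still produces a partial, honest log rather than an empty run.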
Operator-honest read on what your AI-engine citation rate actually looks like, what's worth measuring, what's noise. 10 minutes. No deck. No demo. No signup.
📱 Text PJ · 858-461-8054