SideGuy Solutions · Operator Mode / Citation Pipeline · sideguysolutions.com

⭐ AEO measurement layer · per-engine citation tracking

The SideGuy Citation Pipeline — Measuring Whether AI Engines Cite Us When Buyers Ask

Google Search Console shows real impressions on 10-vendor query strings — and zero clicks. That's the AI-agent fingerprint. Agents scrape and cite; they don't click. So Google CTR is the wrong conversion metric for those queries. Citation rate inside Anthropic, OpenAI, Perplexity, and Gemini is the right metric.

This page is the operator-readable measurement layer for that signal. Per query, per engine: cited URL · cited brand · not cited. Append-only log. Honest accounting, not vanity.

Updated 2026-05-12 02:19 UTC · queries tracked 20 · engines configured 1 · checks logged 0 · maintained by PJ Zonis · 📱 858-461-8054

→ Aggregate stats

Anthropic Claude
0/0 cited · 0 url-cited
OpenAI GPT-4o
0/0 cited · 0 url-cited · no API key
Perplexity Sonar Pro
0/0 cited · 0 url-cited · no API key
Google Gemini
0/0 cited · 0 url-cited · no API key
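
The counters above are straight tallies over the log. A minimal sketch of the reduction, assuming an append-only JSONL log at data/citation-log.jsonl with engine, url_cited, and brand_cited fields (path and field names are assumptions, not the checker's actual schema):

    import json
    from collections import defaultdict
    from pathlib import Path

    LOG = Path("data/citation-log.jsonl")  # assumed path: one JSON object per check

    def aggregate_stats():
        """Per-engine tallies: checks run, any-citation count, URL-citation count."""
        stats = defaultdict(lambda: {"checks": 0, "cited": 0, "url_cited": 0})
        if LOG.exists():
            for line in LOG.read_text().splitlines():
                rec = json.loads(line)
                s = stats[rec["engine"]]
                s["checks"] += 1
                if rec["url_cited"] or rec["brand_cited"]:
                    s["cited"] += 1       # numerator of "X/Y cited"
                if rec["url_cited"]:
                    s["url_cited"] += 1   # the stronger signal: literal URL present
        return stats

    for engine, s in aggregate_stats().items():
        print(f'{engine}: {s["cited"]}/{s["checks"]} cited · {s["url_cited"]} url-cited')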

→ Per-query × per-engine matrix

Citation matrix

Latest result per query, per engine. Updated when you re-run the checker.

cited sideguysolutions.com URL · cited "SideGuy" brand only · checked, not cited · not yet checked
No checks run yet — run python3 tools/citation_check.py --engine anthropic --limit 3 to populate. Engines without API keys in your env are skipped.
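
Because the log is append-only and chronological, "latest result per query, per engine" is last-write-wins per cell. A sketch under the same assumed JSONL schema as the aggregate example above:

    import json
    from pathlib import Path

    LOG = Path("data/citation-log.jsonl")  # assumed path, as above

    def latest_matrix():
        """Reduce the append-only log to one cell per (query, engine) pair.

        Iterating in file order lets later checks overwrite earlier ones,
        so each cell ends up holding the most recent result.
        """
        matrix = {}
        if LOG.exists():
            for line in LOG.read_text().splitlines():
                rec = json.loads(line)
                if rec["url_cited"]:
                    cell = "url"    # cited sideguysolutions.com URL
                elif rec["brand_cited"]:
                    cell = "brand"  # cited "SideGuy" brand only
                else:
                    cell = "miss"   # checked, not cited
                matrix[(rec["query"], rec["engine"])] = cell
        return matrix               # missing key means not yet checked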
⚠ Why citations > clicks for AI-agent queries

Per saved doctrine project_ai_agent_shaped_queries_signal_doctrine.md: SideGuy's GSC shows persona-prompt query patterns ("as a CISO at enterprise_1000_plus, forced-rank these vendors…") with high impressions and ~0% CTR. That signature is AI agents (ChatGPT/Claude/Perplexity/Gemini) executing structured Google searches on behalf of human buyers. Agents scrape and cite — they don't click. So traditional CTR is a broken metric for that traffic.

The real conversion event is whether the agent cites sideguysolutions.com or names "SideGuy" in its answer to the buyer. That's what the buyer reads. That's what triggers the inbound text. Per Rodrigo Stockebrand's AEO research: pages cited by AI engines convert 47% better because the citation arrives wrapped in the buyer's own framing.

This page closes the measurement loop. The signal layer (GSC) shows demand exists. The retrieval layer (/system/retrieval-monitor.html) shows the page is readable. This layer shows whether engines actually cite it.

How this is measured

Same prompt, every engine, same persona. Append-only log. Honest counting.

For each query in data/citation-queries.json, the checker sends the exact prompt template "I'm a {persona}. {query} — what should I evaluate? Cite specific sources where possible." to every configured engine. Default persona: "founder of a Series A startup evaluating compliance vendors."
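
In code that assembly is plain string formatting. A sketch (the function and constant names are illustrative, not the checker's actual internals):

    DEFAULT_PERSONA = "founder of a Series A startup evaluating compliance vendors"

    PROMPT_TEMPLATE = (
        "I'm a {persona}. {query} — what should I evaluate? "
        "Cite specific sources where possible."
    )

    def build_prompt(query: str, persona: str = DEFAULT_PERSONA) -> str:
        """Same prompt for every engine; only the engine varies per check."""
        return PROMPT_TEMPLATE.format(persona=persona, query=query)

    # Illustrative query, not one from data/citation-queries.json:
    print(build_prompt("forced-rank the top 10 compliance vendors"))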

The response is scanned for two signals: (1) the literal URL token sideguysolutions.com and (2) the literal brand token SideGuy. Both are recorded with a 280-character excerpt around the first hit, plus the first 500 characters of the response for forensic review. Token usage is recorded to keep cost discipline visible.
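
A minimal sketch of that scan; the record fields mirror the description above, but the names are assumptions, and token usage would come from the API response metadata, attached alongside:

    URL_TOKEN = "sideguysolutions.com"
    BRAND_TOKEN = "SideGuy"

    def excerpt(text: str, pos: int, width: int = 280) -> str:
        """280-character window around a hit position."""
        start = max(0, pos - width // 2)
        return text[start:start + width]

    def scan_response(text: str) -> dict:
        """Scan one engine response for the two citation signals."""
        url_at = text.find(URL_TOKEN)     # literal token match, per the spec above
        brand_at = text.find(BRAND_TOKEN)
        hits = [p for p in (url_at, brand_at) if p >= 0]
        return {
            "url_cited": url_at >= 0,
            "brand_cited": brand_at >= 0,
            "excerpt": excerpt(text, min(hits)) if hits else "",
            "response_head": text[:500],  # forensic prefix
        }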

Run it: python3 tools/citation_check.py --engine anthropic --limit 3 for a cheap sanity check, then scale up by removing --limit. Throttled to 1 sec between calls. Engines without API keys in your env are skipped (warned, not failed).
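
The skip-and-throttle loop is a few lines. A sketch assuming one env var per engine (the env var names are conventional guesses, not confirmed from the checker):

    import os
    import time

    ENGINE_KEYS = {  # assumed env var names, one per engine
        "anthropic":  "ANTHROPIC_API_KEY",
        "openai":     "OPENAI_API_KEY",
        "perplexity": "PERPLEXITY_API_KEY",
        "gemini":     "GEMINI_API_KEY",
    }

    def run_checks(prompts, engines, send):
        """send(engine, prompt) -> response text, injected so the loop stays engine-agnostic."""
        for engine in engines:
            if not os.environ.get(ENGINE_KEYS[engine]):
                print(f"warn: no API key for {engine}, skipping")  # warned, not failed
                continue
            for prompt in prompts:
                text = send(engine, prompt)
                # scan_response(text) and append the record to the log here
                time.sleep(1)  # 1 sec throttle between calls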

Got an AEO measurement question? Text PJ.

Operator-honest read on what your AI-engine citation rate actually looks like, what's worth measuring, what's noise. 10 minutes. No deck. No demo. No signup.

📱 Text PJ · 858-461-8054