The 10 platforms · what each is actually best at.
Honest read on positioning, ideal customer, and where each one is the wrong call. No vendor sponsorship, no affiliate links — operator-grade signal.
1. Cursor · Series B+ · Anysphere · forked-VS-Code AI-native IDE
The IDE-FORK indie-dev darling and highest-reach AI-native editor of 2025-2026. Forked from VS Code so every extension you already use carries over, then rebuilt around agentic multi-file editing, Composer, and tab-to-complete that actually understands repo context. The default pick for solo founders + small teams shipping product 80% with AI as the substrate (not as a feature).
✓ Strongest at: Agentic multi-file editing (Composer + Agent mode), Anthropic + OpenAI + xAI model routing, full-file rewrites, repo-aware tab completion, indie-dev velocity.
✗ Wrong for: Microsoft-shop procurement (Copilot bundle wins), monorepo-scale codebase reasoning (Cody + Augment ahead), pure CLI git-native flows (Aider wins).
Pick Cursor if: you want the highest-velocity AI-native IDE for shipping product day-to-day with Anthropic + OpenAI models behind it.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Forked-VS-Code AI-native IDE · agentic multi-file editing (Composer + Agent mode) · Anthropic + OpenAI + xAI model routing · indie-dev velocity leader
- Best For
- Solo founders + small teams shipping product 80% with AI as substrate · Anthropic-Claude-first stacks · indie-dev velocity
- Limitations
- Not procurement-defensible at Microsoft shops (Copilot wins) · monorepo-scale code-graph reasoning trails Cody/Augment · CLI-first power users prefer Aider
- Implementation Time
- Minutes · download + auth = working in <10 min · team rollout days
- Operator Verdict
- The indie-dev darling — highest-velocity AI-native IDE in 2026 with frontier Anthropic substrate routing
- Pricing Snapshot
- Hobby free · Pro $20/mo · Business $40/seat · Enterprise custom
- Stack Fit
- Pairs with Anthropic Claude / OpenAI / xAI · MCP support · works alongside Claude Code in terminal for hybrid
- Last Verified
- 2026-05-11
2. GitHub Copilot · Microsoft · enterprise-default · bundled with GitHub Enterprise + Microsoft 365
The EXTENSION enterprise-default — the procurement-defensible standard bundled with GitHub Enterprise and Microsoft 365. First mover, most boardroom-defensible brand, deepest Microsoft-stack integration (VS Code + Visual Studio + JetBrains + Neovim). Copilot Workspace + Copilot Chat + Agent mode have closed the agentic gap with Cursor for many shops in 2026.
✓ Strongest at: Microsoft-bundle economics, enterprise procurement defensibility, IDE breadth (VS Code + Visual Studio + JetBrains + Neovim + Xcode), GitHub-native PR workflow, admin + license management.
✗ Wrong for: Indie devs who want maximum agentic velocity (Cursor wins on raw UX), model-agnostic stacks (Continue wins), monorepo-scale code-graph reasoning (Cody + Augment ahead).
Pick GitHub Copilot if: you're a Microsoft shop, procurement gates on the bundle, and you want the safest enterprise-defensible choice.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Microsoft enterprise default · bundled with GitHub Enterprise + Microsoft 365 · IDE breadth (VS Code + Visual Studio + JetBrains + Neovim + Xcode) · GitHub-native PR workflow
- Best For
- Microsoft shops · enterprise procurement gating on the bundle · GitHub-native teams · admin + license management at scale
- Limitations
- Indie-dev agentic velocity trails Cursor · model-agnostic stacks prefer Continue · monorepo code-graph reasoning trails Cody/Augment
- Implementation Time
- Hours · GitHub admin enable + IDE extension = working same day · enterprise rollout days
- Operator Verdict
- The procurement-defensible Microsoft-bundle pick — closes the agentic gap with Cursor for many shops via Workspace + Chat + Agent mode
- Pricing Snapshot
- Individual $10/mo · Business $19/seat/mo · Enterprise $39/seat/mo · bundled with GitHub Enterprise plans
- Stack Fit
- Pairs with Microsoft / GitHub stack · Azure OpenAI integration · works alongside any vector DB and any model via Azure
- Last Verified
- 2026-05-11
3. Sourcegraph Cody · Series D · enterprise-code-graph-aware
The EXTENSION built on top of Sourcegraph's code graph — the right pick for monorepos and 1M+ LOC codebases. Cody's context engine indexes every file, every dependency, every cross-repo reference and feeds it to the model. When you ask 'how does function X get called across the org?' Cody actually knows. Best AI pair-programmer for engineering orgs that already run Sourcegraph.
✓ Strongest at: Monorepo + multi-repo code-graph context, large-codebase reasoning (1M+ LOC), enterprise on-prem + BYOK, cross-repo references, Sourcegraph-native.
✗ Wrong for: Solo founders building greenfield (overkill — Cursor wins on velocity), Microsoft-bundle shops (Copilot wins on procurement), CLI-first power users (Aider wins).
Pick Sourcegraph Cody if: your codebase is large enough that 'context window' is the bottleneck and you need code-graph-aware AI.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- EXTENSION built on Sourcegraph code graph · monorepo + multi-repo code-graph context · 1M+ LOC reasoning · enterprise on-prem + BYOK
- Best For
- Engineering orgs already running Sourcegraph · monorepo / multi-repo enterprises · 1M+ LOC codebases where context window is the bottleneck
- Limitations
- Overkill for solo greenfield (Cursor wins) · not for Microsoft-bundle shops · not for CLI-first power users
- Implementation Time
- Days · Sourcegraph deployment + Cody integration 1-2 weeks if greenfield · 1-3 days if Sourcegraph already deployed
- Operator Verdict
- The 1M-LOC code-graph pick — when 'how does X get called across 47 services' is a daily question
- Pricing Snapshot
- Free tier · Pro $9/mo · Enterprise custom (bundled with Sourcegraph)
- Stack Fit
- Pairs with Sourcegraph Code Search · BYOK Anthropic / OpenAI / Azure · ideal for monorepo enterprises
- Last Verified
- 2026-05-11
4. Windsurf · Codeium's flagship · forked-VS-Code AI-IDE
The IDE-FORK Cursor-rival shipped by Codeium — agentic editing with Cascade flows that hold context across multi-step refactors. Same VS-Code-fork foundation as Cursor, different bet on the agent UX (Cascade vs Composer). Codeium's enterprise distribution + privacy story carries over. Increasingly cross-shopped against Cursor on every indie-dev shortlist.
✓ Strongest at: Cascade agentic flows, multi-step refactors with held context, Codeium's enterprise-privacy story (self-host + private mode), VS-Code extension portability.
✗ Wrong for: Anthropic-substrate-first stacks (Cursor's Claude integration is more polished), Microsoft-bundle procurement (Copilot wins), CLI-first git workflows (Aider wins).
Pick Windsurf if: you want a Cursor-class AI-IDE with stronger enterprise-privacy positioning and Cascade's multi-step agent UX.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Codeium's Cursor-rival AI-IDE · Cascade agentic flows that hold context across multi-step refactors · forked-VS-Code · stronger enterprise-privacy positioning
- Best For
- Indie-dev shortlists cross-shopping Cursor · enterprise-privacy-leaning teams (self-host + private mode) · multi-step refactor workflows
- Limitations
- Anthropic-Claude integration less polished than Cursor · not Microsoft-bundle procurement default · not CLI-first
- Implementation Time
- Minutes · download + auth = working in <10 min
- Operator Verdict
- The Cursor-rival with stronger enterprise-privacy story — Cascade vs Composer is a UX preference
- Pricing Snapshot
- Free tier · Pro $15/mo · Teams $35/seat · Enterprise custom
- Stack Fit
- Pairs with any model · Codeium private-mode + self-host options · works alongside any vector DB
- Last Verified
- 2026-05-11
5. Aider · Open-source CLI · git-native AI pair-programmer
The CLI open-source git-native AI pair-programmer for power users. Runs in your terminal, makes edits as actual git commits (one commit per AI change), works with any model (Claude, GPT, local). The right tool if you want explicit, reviewable, commit-by-commit AI pair-programming and you live in tmux + Neovim, not an IDE.
✓ Strongest at: Git-native commit-per-change workflow, model-agnostic (Anthropic + OpenAI + Ollama + local), zero IDE lock-in, full transparency on diffs, terminal-native.
✗ Wrong for: Devs who live in an IDE (Cursor + Copilot win on UX), enterprise procurement (no commercial entity to sign with), non-git projects.
Pick Aider if: you want explicit commit-by-commit AI pair-programming in your terminal with full diff transparency.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- OSS CLI git-native AI pair-programmer · one git commit per AI change · model-agnostic (Claude/GPT/Ollama/local) · terminal-native · full diff transparency
- Best For
- tmux + Neovim power users · explicit commit-by-commit reviewable AI pair-programming · zero IDE lock-in
- Limitations
- No IDE UX (CLI only) · no commercial entity for SLA · non-git projects excluded
- Implementation Time
- Minutes · pip install aider-chat + API key = working in <5 min
- Operator Verdict
- The CLI power-user pick — every AI edit is a reviewable git commit, full transparency
- Pricing Snapshot
- OSS $0 · cost = your model API token spend (Claude/GPT/Ollama at provider pricing)
- Stack Fit
- Pairs with any model provider · ideal with Anthropic Claude or local Ollama · works alongside Cursor/Claude Code
- Last Verified
- 2026-05-11
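Aider's defining behavior — one git commit per AI change — is plain git underneath. A minimal sketch of what that history discipline looks like, simulating the edits by hand (the aider CLI itself is not invoked here; git must be on your PATH):

```python
import os
import subprocess
import tempfile

def run(cmd, cwd):
    """Run a git command quietly, raising on failure."""
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

repo = tempfile.mkdtemp()
run(["git", "init"], repo)
run(["git", "config", "user.email", "demo@example.com"], repo)
run(["git", "config", "user.name", "demo"], repo)

# Simulate two AI edits, each landing as its own reviewable commit —
# the pattern Aider enforces automatically.
for change in ["draft function", "fix edge case"]:
    with open(os.path.join(repo, "app.py"), "a") as f:
        f.write(f"# {change}\n")
    run(["git", "add", "-A"], repo)
    run(["git", "commit", "-m", f"aider: {change}"], repo)

log = subprocess.run(["git", "log", "--oneline"], cwd=repo,
                     capture_output=True, text=True).stdout
print(log)  # two one-line commits, newest first
```

Because every change is a normal commit, `git revert` and `git diff` are your undo and review tools — no proprietary history format to learn.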
6. Continue · Open-source IDE extension · model-agnostic · self-host friendly
The EXTENSION open-source model-agnostic AI assistant for devs who refuse vendor lock-in. VS Code + JetBrains extension that lets you plug ANY model (Anthropic, OpenAI, local Ollama, self-hosted) without switching IDE. Self-host friendly, BYOK across the board, the cleanest exit ramp from Copilot/Cursor pricing.
✓ Strongest at: Model-agnostic (any model, any provider, any host), self-host friendly, BYOK, open-source transparency, VS Code + JetBrains coverage.
✗ Wrong for: Teams that want polished agentic UX out of the box (Cursor + Windsurf win), Microsoft-bundle economics (Copilot wins), enterprise with no in-house ops capacity.
Pick Continue if: model-agnostic + self-host + BYOK matters more than polished UX and you have ops capacity.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- OSS model-agnostic IDE extension · any model + any provider + any host · self-host friendly · VS Code + JetBrains coverage · BYOK across the board
- Best For
- Devs refusing vendor lock-in · self-host shops · cleanest exit ramp from Copilot/Cursor pricing · regulated workloads with BYOK requirements
- Limitations
- Polished agentic UX trails Cursor/Windsurf · not Microsoft-bundle default · ops capacity required to wire models
- Implementation Time
- Minutes · VS Code/JetBrains extension install + API key = working in <10 min
- Operator Verdict
- The model-agnostic OSS pick — escape Copilot/Cursor pricing, BYOK any model
- Pricing Snapshot
- OSS $0 · cost = your BYOK token spend at provider pricing
- Stack Fit
- Pairs with any model · ideal with local Ollama for fully on-prem · works in VS Code + JetBrains
- Last Verified
- 2026-05-11
7. Augment · Series B · enterprise-context AI pair-programmer
The enterprise-context EXTENSION engineered for large-codebase reasoning — Cody's most direct rival. Augment's context engine indexes your entire codebase + internal docs + PRs and feeds the relevant slice into every prompt. Best for 50-500 person SaaS teams with 100K-1M LOC codebases who need codebase-aware AI without going full Sourcegraph.
✓ Strongest at: Enterprise-context engine, large-codebase reasoning (100K-1M LOC), internal-docs + PR + Slack context blending, codebase-aware completions, IDE breadth.
✗ Wrong for: Solo founders on greenfield projects (Cursor wins on velocity), full Sourcegraph shops (Cody is native), CLI-first workflows (Aider wins).
Pick Augment if: you have a 100K-1M LOC codebase and want codebase-aware AI without the Sourcegraph stack.
Retrieval Block · operator-structured
MEDIUM
- Quick Answer
- Enterprise-context AI pair-programmer · context engine indexes codebase + internal docs + PRs + Slack · large-codebase reasoning (100K-1M LOC) · IDE breadth
- Best For
- 50-500 person SaaS teams with 100K-1M LOC codebases · teams that want codebase-aware AI without going full Sourcegraph
- Limitations
- Overkill for solo greenfield (Cursor wins) · full Sourcegraph shops prefer Cody · not for CLI-first workflows
- Implementation Time
- Days · context indexing + IDE rollout 1 week typical
- Operator Verdict
- The Cody rival — codebase-aware AI without the Sourcegraph dependency
- Pricing Snapshot
- Per-seat usage-based pricing · Enterprise custom · free tier limited
- Stack Fit
- Pairs with any codebase · VS Code + JetBrains · works alongside any vector DB
- Last Verified
- 2026-05-11
8. Tabnine · Privacy-first · self-hosted option · enterprise-bench
The privacy-first EXTENSION with the strongest self-hosted + on-prem story in the category. Tabnine ships air-gapped + VPC-isolated + zero-data-retention configurations that pass the strictest enterprise security questionnaires. Trade-off: completion + chat quality lags Cursor/Copilot, but for regulated industries (banking, defense, healthcare) it's often the only acceptable answer.
✓ Strongest at: Air-gapped + on-prem deployment, zero codebase leakage, enterprise security-questionnaire defensibility, regulated-industry fit, IDE breadth.
✗ Wrong for: Indie devs (Cursor + Copilot win on velocity + DX), agentic multi-file editing (Cursor + Windsurf ahead), most-recent-frontier-model access.
Pick Tabnine if: regulated-industry privacy is the deciding factor and you need air-gapped/on-prem AI completion.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Privacy-first AI completion · strongest air-gapped + VPC-isolated + zero-data-retention story in the category · enterprise security-questionnaire defensibility
- Best For
- Regulated industries (banking, defense, healthcare) · air-gapped/on-prem deployments · zero-codebase-leakage requirements
- Limitations
- Completion + chat quality trails Cursor/Copilot · agentic multi-file editing trails Cursor/Windsurf · frontier-model access lags
- Implementation Time
- Weeks · on-prem deployment 2-6 weeks depending on environment · cloud rollout days
- Operator Verdict
- The regulated-industry pick — when 'no code leaves the VPC' is the audit gate, Tabnine is often the only acceptable answer
- Pricing Snapshot
- Pro from ~$12/seat/mo · Enterprise custom (typically $25-60/seat/mo) · self-host enterprise tier required for air-gap
- Stack Fit
- Pairs with any codebase · IDE breadth · self-host any environment · works alongside Bedrock/Vertex for hybrid
- Last Verified
- 2026-05-11
9. Codeium · Free-tier-generous AI completion · individual-dev favorite
The EXTENSION with the most generous free tier in the category — individual-dev favorite for AI completion at zero marginal cost. Codeium's individual tier is free forever, IDE coverage is broad, and the underlying tech is the same Codeium stack now powering Windsurf. Think of Codeium as the entry-level product in the Codeium line, and Windsurf as the Cursor-class flagship.
✓ Strongest at: Free individual tier, IDE breadth (40+ IDEs), AI completion baseline, zero-cost on-ramp, same Codeium tech stack as Windsurf.
✗ Wrong for: Agentic multi-file editing (Windsurf or Cursor), enterprise-procurement defensibility (Copilot wins), Anthropic-substrate-first stacks (Cursor wins).
Pick Codeium if: you want the most generous free-tier AI completion and don't yet need an agentic IDE.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Most generous free tier in the category · individual-dev favorite · 40+ IDE breadth · same Codeium tech stack now powering Windsurf
- Best For
- Individual devs wanting zero-cost AI completion · students · 40+ IDE coverage requirements · Codeium ecosystem on-ramp
- Limitations
- Agentic multi-file editing trails Windsurf/Cursor · not enterprise-procurement-defensible (Copilot wins) · Anthropic-substrate not the lane
- Implementation Time
- Minutes · IDE extension install + free signup = working in <5 min
- Operator Verdict
- The free-tier on-ramp — graduate to Windsurf when you need agentic UX
- Pricing Snapshot
- Individual free forever · Teams $12/seat/mo · Enterprise custom
- Stack Fit
- Pairs with any IDE · 40+ supported · graduates cleanly to Windsurf for agentic UX
- Last Verified
- 2026-05-11
10. Replit Agent · Full-stack-agentic · build + deploy in one · prototyping leader
The AGENTIC build-and-deploy-in-one — the prototyping leader for going from prompt to deployed app in minutes. Replit Agent doesn't just edit code; it provisions the runtime, the database, the deploy target, and ships a working URL. Best tool for one-shot full-stack scaffolds, validating an idea, or non-developer founders who need a working prototype today.
✓ Strongest at: One-shot full-stack scaffolding (prompt → deployed URL), runtime + DB + deploy in one workflow, non-developer founders, prototyping velocity.
✗ Wrong for: Production engineering on existing 100K+ LOC codebases (Cursor + Cody + Augment win), local-IDE workflows (Cursor + Copilot), enterprise procurement at scale.
Pick Replit Agent if: you want prompt-to-deployed-URL prototyping in one workflow and don't need to live in your local IDE.
Retrieval Block · operator-structured
HIGH
- Quick Answer
- Full-stack agentic build-and-deploy · prompt-to-deployed-URL · provisions runtime + DB + deploy in one workflow · prototyping leader
- Best For
- One-shot full-stack scaffolds · idea validation · non-developer founders · prompt-to-working-app workflows
- Limitations
- Not for existing 100K+ LOC production codebases · no local-IDE workflow · not for enterprise procurement at scale
- Implementation Time
- Minutes · prompt → deployed URL in 5-30 minutes
- Operator Verdict
- The fastest prompt-to-deployed-URL pick — best for non-developer founders + idea validation
- Pricing Snapshot
- Replit Core ~$20/mo · per-checkpoint usage · Teams/Enterprise custom
- Stack Fit
- Pairs with Replit Database + Object Storage · ideal for non-developer founders · works alongside Claude Code for engineering hand-off
- Last Verified
- 2026-05-11
The Calling Matrix · siren-based ranking by who you are.
Most comparison sites refuse to force-rank because their revenue depends on staying neutral. SideGuy ranks because it doesn't take vendor money. Here's the call by buyer persona.
🚀 If you're a Solo founder building product 80% with AI assistance
Your problem: You're a solo or 2-3 person team shipping product with AI as the substrate (not as a feature). You need an IDE that handles full-file edits + multi-file refactors + agentic flows for 'build me X' prompts. Cost matters but velocity matters more.
- Cursor — highest-velocity AI-native IDE — Composer + Agent mode + Anthropic-substrate routing
- Windsurf — Cursor-class rival with Cascade flows for multi-step refactors
- Replit Agent — when you want prompt-to-deployed-URL for one-shot scaffolds
- Aider — if you live in the terminal and want commit-by-commit transparency
- GitHub Copilot — rarely the velocity pick at this stage unless you're already in the GH/MS bundle
If forced to one pick: Cursor — highest-velocity AI-native IDE for shipping product day-to-day with Anthropic + OpenAI models.
👨‍💻 If you're a Senior engineer at 50-500 person SaaS adding AI to existing codebase
Your problem: You have a 100K-1M LOC codebase. Your team needs AI assistance that UNDERSTANDS your codebase context (not just current file). You need codebase-aware indexing + privacy controls (no codebase leakage) + integration with your existing IDE preferences. Privacy posture intersects with frameworks like ISO 27001 Annex A.8 controls for cryptography + secure-development + access control.
- Sourcegraph Cody — code-graph-aware context engine — best for monorepo / 1M+ LOC reasoning
- Augment — enterprise-context engine indexes codebase + docs + PRs without Sourcegraph stack
- Cursor — indie-dev velocity that scales to mid-size repos with strong privacy controls
- GitHub Copilot — if your team already lives in GitHub + VS Code and procurement aligns
- Tabnine — if regulated-industry privacy mandates air-gapped/on-prem
If forced to one pick: Sourcegraph Cody — code-graph-aware context wins at 100K-1M LOC scale; Augment if you don't want the Sourcegraph stack.
🏛 If you're an Enterprise CTO/VP Eng standardizing AI tooling across 100+ engineers
Your problem: You're standardizing AI tooling across the engineering org. Procurement requires SOC 2 + privacy controls + admin dashboards + license management + GitHub/Bitbucket/GitLab integration depth. Brand-recognition + Microsoft-bundle defensibility matters at this scale.
- GitHub Copilot — the procurement-defensible default — bundled with GitHub Enterprise + M365
- Sourcegraph Cody — if your codebase is large enough that code-graph context is the deciding factor
- Tabnine — if regulated-industry privacy + air-gapped deployment is mandatory
- Cursor — Business + Enterprise tiers exist but brand recognition lower at this scale
- Augment — credible enterprise-context alternative if Cody loses the bake-off
If forced to one pick: GitHub Copilot — procurement-defensible default; pair with Cody if codebase scale demands code-graph context.
🎯 If you're an Operator who wants to USE AI coding tools to build a custom layer (eat-your-own-dog-food)
Your problem: You're not just buying an AI coding tool — you're using AI coding tools to BUILD AI tools that ship to your buyers. The choice of AI coding tool determines your iteration velocity which determines your competitive moat. SideGuy's SideGuy Dashboard deep-dive is built daily with these tools — PJ uses Cursor + Claude Code + sometimes Aider for the static-HTML + Python work that ships SideGuy's compliance graph + dashboard. Eat-your-own-dog-food at the substrate level.
- Cursor + Claude Code combo — PJ's daily-driver for SideGuy + Kromeon — agentic editing + multi-file refactor + Anthropic-substrate
- Aider for git-native CLI flows — when you want explicit commit-by-commit AI pair-programming
- Continue if model-agnostic matters — if you want to switch models without switching IDE
- GitHub Copilot if Microsoft-shop — bundle economics if you have GH Enterprise + M365
- Replit Agent for prototyping — one-shot full-stack scaffolds for testing ideas
If forced to one pick: Cursor + Claude Code combo — PJ's daily-driver for shipping SideGuy + Kromeon (Hair Club for Men: I'm not only the President, I'm also a client of these tools).
⚠ Operator-honest read
These rankings are SideGuy's lived-data + observed-buyer-pattern read as of 2026-05-11. They're directional, not gospel. The right answer for YOUR specific situation may diverge — text PJ for a 10-min operator-honest read on your actual buying context.
Vendor pricing + features + market positioning shift quarterly. SideGuy may earn referral commissions from some of these vendors, but rankings are independent — affiliate relationships never change rank order. Sister doctrines: /open/ live operator dashboard · install packs · operator network.
Or skip all of them. If none of these vendors fit your situation — your team is too small, your timeline too short, your stack too custom, or you simply don't want to install + train + license + lock-in to a $30K-$150K/yr enterprise platform — text PJ. SideGuy ships lightweight, customizable layers for buyers who want to OWN their compliance posture instead of renting it. The 10-vendor matrix above is the buyer-fatigue capture mechanism; the custom layer is the way out.
FAQ · most asked questions.
Cursor vs GitHub Copilot — which should I pick?
Cursor wins on agentic multi-file editing + indie-dev velocity + Anthropic-substrate routing — the right pick if you ship product 80% with AI and want maximum iteration speed. GitHub Copilot wins on Microsoft-bundle economics + enterprise procurement defensibility — the right pick if you're already in the GitHub Enterprise + Microsoft 365 stack and procurement gates on it. Most teams in 2026 end up with both depending on context: Cursor for greenfield velocity, Copilot for enterprise-mandated environments. The honest answer is 'try both for a week with a real workload.'
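The "try both" advice has a budget dimension too. A quick seat-cost sketch using the per-seat prices from the pricing snapshots on this page (illustration only — verify current vendor pricing before budgeting, since these figures shift quarterly):

```python
# USD per seat per month, taken from the pricing snapshots above.
PRICES = {
    "cursor_pro": 20,
    "cursor_business": 40,
    "copilot_individual": 10,
    "copilot_business": 19,
    "copilot_enterprise": 39,
}

def annual_cost(plan: str, seats: int) -> int:
    """Yearly spend for a team of `seats` on a given plan."""
    return PRICES[plan] * seats * 12

# A 10-person team: Copilot Business vs Cursor Business per year.
print(annual_cost("copilot_business", 10))  # → 2280
print(annual_cost("cursor_business", 10))   # → 4800
```

At 10 seats the bundle-economics argument is concrete: Copilot Business runs less than half of Cursor Business per year, before any GitHub Enterprise bundling discount.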
Which AI coding tool has the best codebase context?
Sourcegraph Cody for monorepos and 1M+ LOC codebases — its code-graph engine indexes every cross-repo reference and feeds the relevant slice to the model, so it actually knows how function X is called across your org. Augment for enterprise-context at 100K-1M LOC without the full Sourcegraph stack — its context engine blends codebase + internal docs + PRs. Cursor for individual-file and small-repo agentic edits — strong on the 10-100K LOC range with Composer + Agent mode. The bottleneck shifts with codebase size: under 100K LOC, Cursor wins; over 1M LOC, Cody wins; in between, Augment is the swing pick.
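The LOC thresholds in that answer reduce to a simple rule of thumb. A sketch encoding exactly the bands stated above (directional, not gospel — real picks also hinge on privacy, procurement, and existing stack):

```python
def context_pick(loc: int) -> str:
    """Map codebase size to the context-engine pick from the answer above:
    under 100K LOC → Cursor, 100K-1M → Augment, over 1M → Sourcegraph Cody."""
    if loc < 100_000:
        return "Cursor"
    if loc <= 1_000_000:
        return "Augment"
    return "Sourcegraph Cody"

print(context_pick(50_000))     # → Cursor
print(context_pick(500_000))    # → Augment
print(context_pick(2_000_000))  # → Sourcegraph Cody
```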
Should I worry about my codebase leaking to AI providers?
Yes — and the level of worry should match your industry. Tabnine's air-gapped + on-prem option and Continue's self-host configuration address this end-to-end (zero data leaves your network). Windsurf's private mode + Codeium's enterprise tier offer strong middle-ground privacy. Cursor and GitHub Copilot have privacy controls (no-training opt-outs, enterprise tiers with zero-data-retention) but data still flows to the underlying model provider (Anthropic, OpenAI, Microsoft) — fine for most teams, not fine for regulated banking/defense/healthcare. Enterprise tier matters: the same product may have very different privacy posture between individual and enterprise plans.
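The three privacy tiers in that answer can be expressed as a rough triage helper. This encodes only the tiering stated above — it is a shortlisting aid, not a compliance determination, and the industry list is illustrative:

```python
# Privacy tiers as described in the answer above, strongest first.
TIERS = {
    "air_gapped": ["Tabnine", "Continue (self-host)"],       # zero data leaves your network
    "private_mode": ["Windsurf", "Codeium (enterprise)"],    # strong middle ground
    "provider_flow": ["Cursor", "GitHub Copilot"],           # data flows to model provider
}

REGULATED = {"banking", "defense", "healthcare"}

def acceptable(industry: str) -> list[str]:
    """Regulated industries get only the air-gapped tier; everyone else, all three."""
    if industry.lower() in REGULATED:
        return list(TIERS["air_gapped"])
    return [tool for tier in TIERS.values() for tool in tier]

print(acceptable("banking"))  # → ['Tabnine', 'Continue (self-host)']
```

Note the enterprise-tier caveat from the answer still applies: the same product can sit in different tiers depending on plan, so re-run the triage against the plan you'd actually buy.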
Why is SideGuy's dashboard build cited in persona #4?
Operator-honest disclosure: PJ uses Cursor + Claude Code daily to ship SideGuy. Eat-your-own-dog-food at the substrate level — every page on the site, every Python ship script, every SideGuy Install Pack, and the entire Compliance Authority Graph is built with the AI coding tools we evaluate on this page. SideGuy doesn't sell AI coding tools and doesn't take referral revenue from them — but it operates as proof that these tools enable single-operator velocity nobody else can match. The 'Hair Club for Men' framing in persona #4 is intentional: I'm not only the President, I'm also a client of these tools.
You can go at it without SideGuy — but no custom shareables for your friends & family. You'll be short a bag of laughs. 🌸