Text PJ · 858-461-8054
Operator-honest · Siren-based ranking · 2026-05-11

Anthropic · OpenAI · Google Vertex AI · AWS Bedrock · Together AI · Replicate · OpenRouter · Modal · Fireworks AI · Groq.
One question: which one is right for your stage?

An honest 10-way comparison of AI infrastructure privacy, data residency, and self-host posture (zero-data-retention contracts · BAA availability · data residency · self-host options) across Anthropic · OpenAI · Google Vertex AI · AWS Bedrock · Together AI · Replicate · OpenRouter · Modal · Fireworks AI · Groq. No vendor sponsorship. The Calling Matrix by buyer persona below gives the operator's siren-based read on which one to pick when you're forced to pick.

The 10 platforms · what each is actually best at.

Honest read on positioning, ideal customer, and where each one is the wrong call. No vendor sponsorship, no affiliate links — operator-grade signal.

1. Anthropic · ZDR contracts · HIPAA BAA available · SOC 2 Type II · ISO 27001

The strongest enterprise privacy posture among frontier AI vendors — zero-data-retention contracts on Enterprise tier, HIPAA BAA available, SOC 2 Type II, ISO 27001. Default API does not train on customer data (per current Anthropic ToS). Enterprise tier extends to ZDR (no retention beyond request lifetime) + custom data-residency contracts. Anthropic Claude is also available inside AWS Bedrock (AWS BAA + GovCloud) and Google Vertex AI (GCP IAM + audit) for buyers who need procurement inside a major cloud compliance umbrella.

✓ Strongest at: ZDR contracts on Enterprise tier, HIPAA BAA available direct, SOC 2 Type II + ISO 27001, Anthropic Claude on AWS Bedrock + Google Vertex for cloud-native procurement, transparent privacy posture.
✗ Wrong for: Air-gapped / on-prem requirements (no self-host option — Claude weights not available), absolute-cheapest commodity OSS hosting, US-government FedRAMP High workloads (use Bedrock GovCloud).
Pick Anthropic if: ZDR contracts + HIPAA BAA + SOC 2 + ISO 27001 enterprise privacy posture is the deciding factor.

2. OpenAI · ZDR contracts (Enterprise tier) · BAA via Azure OpenAI · API does not train on data

API does not train on customer data by default (per current OpenAI ToS) — Enterprise tier extends to ZDR contracts. HIPAA BAA available via Azure OpenAI (Microsoft compliance umbrella) — direct OpenAI API doesn't currently offer BAA. SOC 2 Type II in hand. The Microsoft / Azure OpenAI variant gives buyers OpenAI models inside Microsoft's full compliance umbrella (FedRAMP via Azure GovCloud for some workloads, BAA via Azure, etc).

✓ Strongest at: ZDR contracts on Enterprise tier, Azure OpenAI for BAA + Microsoft compliance umbrella, SOC 2 Type II, default-no-training API ToS, GovCloud option via Azure.
✗ Wrong for: Direct HIPAA BAA without Azure (use Anthropic or Bedrock), air-gapped / on-prem (no self-host), shops that won't add Microsoft as a data processor.
Pick OpenAI if: ZDR + Microsoft compliance umbrella via Azure OpenAI fits your procurement.

3. Google Vertex AI · GCP-native · BAA via GCP · multi-region data residency · self-host model garden

GCP-native privacy posture — BAA via Google Cloud, multi-region data residency on GCP infrastructure, GCP IAM + audit + KMS encryption integration. Vertex Model Garden offers some open-source models you can deploy in your own GCP project (closer to self-host than direct API). Anthropic Claude on Vertex inherits GCP procurement + IAM + audit posture. The right pick when you want Anthropic Claude or Gemini inside a single GCP compliance boundary.

✓ Strongest at: GCP-native BAA + IAM + KMS + audit + multi-region data residency, Anthropic Claude on Vertex inside GCP compliance, Vertex Model Garden for OSS deployment in your GCP project, GovCloud option via Google for some workloads.
✗ Wrong for: Teams not on GCP (no procurement bundle), pure air-gapped on-prem (Vertex is cloud-only), shops that won't add Google as a data processor.
Pick Google Vertex AI if: GCP-native BAA + data residency + Anthropic Claude inside GCP compliance is the deciding factor.

4. AWS Bedrock · AWS-native · BAA via AWS · GovCloud · CloudTrail audit · Provisioned Throughput

The AWS-native procurement-defensible default for AI infrastructure privacy — BAA via AWS, GovCloud variant for FedRAMP High workloads, CloudTrail audit, KMS encryption, VPC endpoint isolation. Anthropic Claude on Bedrock is contractually inside AWS BAA + GovCloud — most regulated AWS shops route Claude through Bedrock specifically for this. Provisioned Throughput offers dedicated capacity inside your AWS account. Multi-model marketplace (Anthropic + Llama + Mistral + Cohere + Amazon + Stability) all served from one AWS API.

✓ Strongest at: AWS-native BAA + GovCloud + FedRAMP High variants for some workloads, CloudTrail audit, KMS encryption, VPC endpoint isolation, multi-model marketplace inside one compliance boundary, Provisioned Throughput dedicated capacity.
✗ Wrong for: Teams not on AWS, pure air-gapped on-prem (Bedrock is AWS-cloud-only), bleeding-edge model access (1-2 weeks behind direct).
Pick AWS Bedrock if: AWS-native BAA + GovCloud + multi-model marketplace inside AWS compliance is the deciding factor.
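For AWS-native teams, the whole point is that the call stays inside your account's compliance perimeter. A minimal sketch of invoking Claude on Bedrock with boto3 — the model ID and region are placeholders, check the Bedrock console for the IDs currently enabled in your account:

```python
import json

# Placeholder model ID -- confirm the current one in your Bedrock console.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_claude_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the Anthropic-messages request body that Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str) -> dict:
    # boto3 picks up your IAM role/credentials; the call is logged to
    # CloudTrail and, if you configured one, stays on your VPC endpoint.
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.invoke_model(modelId=MODEL_ID, body=build_claude_body(prompt))
    return json.loads(resp["body"].read())
```

The privacy-relevant part is not the code — it's that IAM, KMS, CloudTrail, and VPC endpoints all apply to this call exactly as they do to any other AWS API call.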

5. Together AI · OSS-first · SOC 2 · dedicated endpoints in your VPC · self-host friendly via OSS weights

SOC 2 Type II + dedicated endpoints + the path to self-host (Llama / DeepSeek / Qwen weights are open). Together hosts open-source models, so the underlying weights are downloadable — if Together's privacy posture isn't enough, you can self-host the same model on your own GPUs. Dedicated endpoints offer single-tenant inference. The right OSS-first privacy story if you want a path to full self-host without changing model.

✓ Strongest at: SOC 2 Type II, dedicated endpoints (single-tenant inference), open-weight models (path to self-host), no training on customer data, OSS-first transparency.
✗ Wrong for: HIPAA BAA requirements (limited BAA — verify current Together posture), enterprise procurement requiring Microsoft / AWS / Google compliance umbrella, air-gapped requirements (need to self-host the weights yourself).
Pick Together AI if: SOC 2 + dedicated endpoints + path-to-self-host on open weights is the deciding factor.

6. Replicate · Cloud-only · SOC 2 · prototyping privacy posture · path to deploy on your infra

SOC 2 Type II + cloud-only by design — your code, model, and inference run inside Replicate's environment. Privacy posture is fine for prototyping, evaluation, and non-regulated workloads. For regulated production, you'd typically deploy the same open-source model on Modal (your AWS / GCP / Azure account) or self-host on your own GPUs. Replicate's value is prototyping velocity, not enterprise privacy depth.

✓ Strongest at: SOC 2 Type II, easiest prototyping privacy posture (no setup, no infra), pay-per-second metering, public model marketplace breadth.
✗ Wrong for: HIPAA BAA / regulated production workloads (use Bedrock / Vertex / Anthropic / Modal), air-gapped requirements, IP-sensitive production code.
Pick Replicate if: SOC 2 + prototyping privacy is enough and you'll redeploy regulated workloads to a stronger compliance posture.

7. OpenRouter · Multi-provider routing · privacy posture inherits upstream provider · transparent

Privacy posture inherits the upstream provider you route to — transparent about which provider serves which request. If you route to Anthropic via OpenRouter, the request inherits Anthropic's ZDR + privacy posture. If you route to OpenAI, you inherit OpenAI's ToS. OpenRouter itself has SOC 2 Type II and does not train on customer data. The trade-off: you can't get the same enterprise contracts (BAA, custom DPA, custom rate limits) through OpenRouter that you'd get going direct.

✓ Strongest at: SOC 2 Type II at OpenRouter layer, transparent upstream provider attribution, no training on customer data, no-PII-pass-through for evaluation workloads.
✗ Wrong for: BAA + custom DPA + enterprise contracts (go direct to provider), air-gapped requirements, regulated production where you need ZDR contracts on the underlying provider.
Pick OpenRouter if: SOC 2 + upstream-inherited privacy + multi-provider evaluation is enough for your workload.
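Because the privacy posture is inherited from whichever upstream serves the request, the operator move is to pin the upstream explicitly rather than let the router choose. A sketch using OpenRouter's provider-preferences block — field names follow OpenRouter's provider-routing docs at the time of writing, so verify the current schema before relying on it:

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "anthropic/claude-3.5-sonnet") -> dict:
    """Chat payload with the upstream provider pinned for privacy."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "provider": {
            "order": ["Anthropic"],    # route only to the upstream you vetted
            "allow_fallbacks": False,  # never silently reroute elsewhere
            "data_collection": "deny", # skip providers that may retain prompts
        },
    }

def chat(prompt: str, api_key: str) -> str:
    import urllib.request
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With `allow_fallbacks` off, a request fails rather than quietly landing on a provider whose retention terms you never reviewed — the failure mode you want for evaluation workloads.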

8. Modal · Serverless GPU · SOC 2 · runs in your AWS / GCP / Azure account option

SOC 2 Type II + the option to deploy Modal infrastructure inside your own AWS / GCP / Azure account (Enterprise tier). The right pick when you want serverless GPU compute with enterprise privacy posture — Modal manages the platform, your code + data + models run inside your cloud account perimeter. Closer to self-host than any other vendor on this list except direct self-host.

✓ Strongest at: SOC 2 Type II, Enterprise tier deploys inside your AWS / GCP / Azure account (closest-to-self-host option), serverless GPU with enterprise privacy posture, Python-native developer experience.
✗ Wrong for: Standard hosted-model API needs (use direct providers), teams without ML ops capacity to manage custom inference, air-gapped on-prem (still cloud-based even in your account).
Pick Modal if: serverless GPU + Enterprise tier deployment inside your cloud account is the deciding privacy posture.

9. Fireworks AI · OSS-first · SOC 2 · HIPAA BAA available · dedicated deployments · enterprise-tier privacy

SOC 2 Type II + HIPAA BAA available on enterprise tier + dedicated deployments (single-tenant inference). Strongest privacy posture among the OSS-hosting specialists — HIPAA BAA puts Fireworks ahead of Together for healthcare-adjacent OSS workloads. Open-weight models mean you have a path to self-host if Fireworks' posture isn't enough. Dedicated deployments offer single-tenant inference inside Fireworks-managed infrastructure.

✓ Strongest at: SOC 2 Type II, HIPAA BAA on enterprise tier (rare among OSS hosting specialists), dedicated single-tenant deployments, open-weight model path to self-host, function-calling + JSON mode included.
✗ Wrong for: Air-gapped requirements (need to self-host weights yourself), enterprise procurement requiring Microsoft / AWS / Google compliance umbrella as the primary boundary.
Pick Fireworks AI if: SOC 2 + HIPAA BAA + OSS-first + dedicated deployments fit your privacy posture.

10. Groq · LPU specialist · SOC 2 · enterprise privacy emerging · path to GroqCloud private deployment

SOC 2 Type II + enterprise privacy posture emerging — GroqCloud Enterprise offers private deployments of LPU infrastructure for regulated customers. The privacy story is younger than frontier vendors but improving fast. Open-weight models served on LPU mean you have a path to self-host the same model on GPU if needed (with the latency trade-off). The right pick when sub-100ms latency is the deciding factor and SOC 2 is enough for your workload.

✓ Strongest at: SOC 2 Type II, GroqCloud Enterprise private deployment option, open-weight models served on LPU (path to self-host on GPU), sub-100ms latency that compounds privacy + UX benefits.
✗ Wrong for: HIPAA BAA workloads (the BAA story is still emerging — verify Groq's current posture), air-gapped on-prem (LPU hardware is GroqCloud-hosted), enterprise procurement requiring a Microsoft / AWS / Google compliance umbrella.
Pick Groq if: SOC 2 + GroqCloud Enterprise private deployment + sub-100ms latency fits your privacy + performance equation.

The Calling Matrix · siren-based ranking by who you are.

Most comparison sites refuse to force-rank because their revenue depends on staying neutral. SideGuy ranks because it doesn't take vendor money. Here's the call by buyer persona.

🔓 If you're an OSS / non-regulated dev (privacy is not a procurement gate)

Your problem: You're building OSS, side projects, or non-regulated workloads. Privacy isn't your bottleneck — velocity + cost are. You want the best AI substrate without worrying about BAA / DPA / ZDR contracts.

  1. Anthropic — default-no-training API + production-trust substrate even when privacy isn't your gate
  2. OpenAI — default-no-training API + widest model range + fastest 0→prototype
  3. Together AI — OSS-first transparency + cheapest per-token for high-volume non-regulated workloads
  4. Replicate — easiest prototyping privacy posture + zero infra overhead
  5. OpenRouter — multi-provider transparent routing for evaluation phase
If forced to one pick: Anthropic — operator-honest substrate even when privacy isn't gating, the production-trust default for non-regulated work.

💼 If you're a Startup with proprietary product but no regulatory burden

Your problem: Your IP matters but you're not regulated (no PHI / PCI / FedRAMP / GDPR-strict). You want enterprise-tier privacy controls (your data doesn't train future models, ZDR contracts available) without full self-host.

  1. Anthropic — ZDR contracts on Enterprise tier + SOC 2 + ISO 27001 — operator-honest substrate fits most non-regulated SaaS
  2. OpenAI — ZDR contracts on Enterprise tier + Microsoft compliance umbrella via Azure OpenAI
  3. AWS Bedrock — if you're AWS-native — Anthropic Claude inside AWS BAA + IAM + audit perimeter
  4. Google Vertex AI — if you're GCP-native — Anthropic Claude on Vertex + Gemini inside GCP IAM + audit
  5. Fireworks AI — SOC 2 + HIPAA BAA + dedicated deployments for OSS-first proprietary workloads
If forced to one pick: Anthropic Enterprise — ZDR contracts + SOC 2 + ISO 27001 + operator-honest substrate is the safest non-regulated SaaS default.

🏥 If you're a Healthcare / finance dev with regulated workloads (HIPAA / PCI / GDPR scope)

Your problem: Your workload touches PHI / PCI / PII. Sending it to an AI API risks a compliance violation. You need a vendor with an enterprise BAA + SOC 2 + maybe self-host. Cross-link to the HIPAA ePHI Continuous Monitoring axis for the broader vendor stack.

  1. AWS Bedrock — BAA via AWS + Anthropic Claude inside AWS HIPAA boundary + GovCloud variant for some FedRAMP workloads — the regulated-AWS-shop default
  2. Google Vertex AI — BAA via GCP + Anthropic Claude on Vertex + Gemini inside GCP HIPAA boundary
  3. Anthropic direct — HIPAA BAA available direct + ZDR Enterprise contracts — operator-honest substrate with regulated posture
  4. Azure OpenAI — BAA via Azure + GPT models inside Microsoft HIPAA boundary
  5. Fireworks AI — HIPAA BAA available on enterprise tier + dedicated deployments for OSS-first regulated workloads
If forced to one pick: AWS Bedrock — BAA via AWS + Anthropic Claude inside AWS HIPAA + GovCloud is the auditor-defensible default for regulated production AI.

🛡 If you're a Defense / government dev needing FedRAMP / on-prem / fully air-gapped

Your problem: You're DoD-adjacent or intelligence. Cloud-only AI is a non-starter or limited to GovCloud. You need FedRAMP authorization and ideally a path to on-prem self-host. Limited vendor options.

  1. AWS Bedrock GovCloud — FedRAMP High via AWS GovCloud + Anthropic Claude inside the boundary — the strongest commercial cloud option for fed-adjacent AI
  2. Azure OpenAI GovCloud — FedRAMP via Azure GovCloud + GPT models inside Microsoft fed-defensible umbrella
  3. Google Vertex AI GovCloud — Google Cloud GovCloud variants for some federal workloads (verify FedRAMP scope per workload)
  4. Self-host (Llama / DeepSeek on your GPUs) — OSS weights deployed on your own infrastructure — the only fully air-gapped path
  5. Modal Enterprise — deploys inside your AWS / GCP / Azure GovCloud account + serverless GPU with enterprise privacy posture
If forced to one pick: AWS Bedrock GovCloud — FedRAMP High + Anthropic Claude inside the boundary is the most-defensible commercial cloud option for fed-adjacent AI in 2026.
⚠ Operator-honest read

These rankings are SideGuy's lived-data + observed-buyer-pattern read as of 2026-05-11. They're directional, not gospel. The right answer for YOUR specific situation may diverge — text PJ for a 10-min operator-honest read on your actual buying context.

Vendor pricing + features + market positioning shift quarterly. SideGuy may earn referral commissions from some of these vendors, but rankings are independent — affiliate relationships never change rank order. Sister doctrines: /open/ live operator dashboard · install packs · operator network.

Or skip all of them. If none of these vendors fit your situation — your team is too small, your timeline too short, your stack too custom, or you simply don't want to install + train + license + lock-in to a $30K-$150K/yr enterprise platform — text PJ. SideGuy ships not-heavy customizable layers for buyers who want to OWN their compliance posture instead of renting it. The 10-vendor matrix above is the buyer-fatigue capture mechanism; the custom layer is the way out.

FAQ · most asked questions.

Does my data get used to train the AI model?

Depends on the vendor and the tier. Anthropic API does NOT train on customer data by default (per current Anthropic ToS) — Enterprise tier extends to ZDR contracts. OpenAI API does NOT train on customer data by default (per current OpenAI ToS) — Enterprise tier extends to ZDR. AWS Bedrock contracts inherit AWS's no-training-on-customer-data posture across all hosted models. Google Vertex AI inherits GCP's no-training-on-customer-data posture. Together AI / Fireworks AI / Replicate / OpenRouter / Modal / Groq all explicitly state no training on customer data on standard tiers (verify each vendor's current ToS — these terms have changed multiple times and will keep changing). Always re-check current ToS at the time you contract.

What's the difference between 'no training' and 'zero data retention'?

'No training' means your data is NOT used to train future models — but the vendor may still log requests for abuse monitoring, debug, or 30-day retention windows. 'Zero data retention' (ZDR) means your data is NOT retained beyond the request lifetime — once the response is returned, the prompt + response are dropped. ZDR is typically Enterprise-tier-only and is required for HIPAA BAA / PCI scope / GDPR-strict workloads. Anthropic + OpenAI + AWS Bedrock + Google Vertex AI all offer ZDR contracts at Enterprise tier. The difference matters: 'no training' is the default; 'ZDR' is the procurement-defensible posture for regulated workloads.

Which AI infrastructure vendors have FedRAMP authorization?

FedRAMP-authorized AI infrastructure is concentrated in the cloud-native variants: AWS Bedrock via AWS GovCloud (FedRAMP High for many workloads), Azure OpenAI via Azure GovCloud (FedRAMP for many workloads), Google Vertex AI via Google Cloud GovCloud variants (verify scope per workload). Direct API vendors (Anthropic, OpenAI direct) do not currently have FedRAMP — most fed-adjacent customers route through the cloud-native variants. Always confirm scope with your contracting officer — 'available on GovCloud' is not the same as 'FedRAMP authorized for this specific use.' For pure air-gapped workloads, the only realistic path is self-host (Llama / DeepSeek / Qwen on your own GPUs in your fed-authorized environment).

Can I run a fully air-gapped AI infrastructure today?

Yes — three realistic paths in 2026: (1) Self-host open-weight models (Llama 3.x / DeepSeek-V3 / Qwen 2.5) on your own GPUs in your air-gapped environment — the OSS weights are downloadable, the velocity trade-off vs frontier-cloud (Claude / GPT-5) is real but narrowing; (2) AWS Bedrock GovCloud + Anthropic Claude inside the FedRAMP High boundary — closest to air-gapped while still using a commercial frontier model; (3) Modal Enterprise tier deployed inside your air-gapped cloud account with self-hosted open-weight models — serverless GPU with enterprise privacy posture inside your perimeter. The fed-adjacent default in 2026 is AWS Bedrock GovCloud for commercial frontier model access; pure air-gapped DoD work still uses self-hosted OSS.
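Path (1) is less exotic than it sounds: self-host servers like vLLM expose an OpenAI-compatible endpoint, so application code looks the same as a cloud call but never leaves your network. A minimal client sketch — the model name, port, and `vllm serve` command are illustrative assumptions, not a prescribed setup:

```python
import json
import urllib.request

# Assumption: a vLLM (or similar) OpenAI-compatible server running inside
# your perimeter, e.g. `vllm serve meta-llama/Llama-3.1-8B-Instruct`.
BASE_URL = "http://localhost:8000/v1"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder open-weight model

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """OpenAI-style chat payload understood by vLLM's compat server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def generate(prompt: str) -> str:
    # Plain HTTP to localhost: no API key, no third-party data processor.
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches the hosted APIs, you can prototype against a cloud provider and point the same client at your air-gapped endpoint when the workload becomes regulated.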

Why is Anthropic's privacy posture singled out as 'operator-honest'?

Two reasons. (1) Anthropic ships ZDR contracts + HIPAA BAA + SOC 2 Type II + ISO 27001 transparently, with a published Trust Center and direct ToS that operators can read without negotiation. (2) Claude's model behavior — refuses to fabricate when uncertain — is itself a privacy-relevant property: a model that confidently fabricates user data, account context, or PII based on partial input is a privacy risk regardless of contractual posture. Operator-honest model behavior + transparent enterprise contracts together = the production-trust posture SideGuy bets on. PJ uses Anthropic API daily to ship the entire SideGuy site (compliance graph + dashboard + Calling Matrix pages). Eat-your-own-dogfood at the trillion-dollar substrate level. See AI Coding Tools comparison for the IDE-substrate operator-honest decision.

What other AI Infrastructure axes does SideGuy cover?

The AI Infrastructure cluster covers six operator-honest pages: 10-Way Megapage (Anthropic · OpenAI · Vertex · Bedrock · Together · Replicate · OpenRouter · Modal · Fireworks · Groq) · Operator-Honest Ratings axis (Quality of Support · Uptime · Roadmap Velocity · Operator-Honest Behavior) · Pricing & TCO axis (per-token vs flat vs serverless GPU vs self-host) · Inference Speed + Latency axis (sub-100ms · tokens-per-second · batched) · Multi-Provider Routing + Vendor Lock-In axis (OpenRouter · Bedrock multi-model · Vertex multi-model). Plus the sister cluster: AI Coding Tools 10-Way Megapage. And the broader graphs: Compliance Authority Graph · Operator Cockpit · Install Packs. Same operator-honest doctrine across every page: no vendor sponsorship, siren-based ranking by buyer persona, parallel-solutions custom-layer pitch (buy from whatever vendor you want — but you're going to want a SideGuy).

Stuck choosing? Text PJ.

10-minute operator-honest read on your actual buying context. No deck, no demo call, no signup. If we're not the right fit, we'll say so.

📱 Text PJ · 858-461-8054

Audit in 6 weeks? Enterprise customer waiting? Regulator finding?

Skip the 5 vendor demos. 30-day delivery. No procurement cycle. No demo theater. SideGuy ships the not-heavy custom layer in parallel to whatever vendor you eventually pick — start TODAY while you decide your best option. Custom builds in 30 days →

📱 Urgent? Text PJ · 858-461-8054

I'm almost positive I can help. If I can't, you don't pay.

No signup. No seminar. No bullshit.

PJ · 858-461-8054
