Data study · April 14, 2026 · 9 min read

We queried ChatGPT 340 times for 'best CRM'. Here's what won.

Three weeks, 340 runs, four engines, one buyer query. The leaderboard, the surprises, and what they say about how AI search picks winners.

From April 1 to April 21, 2026, we queried four AI engines — ChatGPT (GPT-5), Claude (Sonnet 4.6), Perplexity, and Google AI Overviews — with 17 variants of "best CRM" buyer prompts, once per day over 20 query days. Total runs: 340 per engine, 1,360 across the panel.
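To make the run matrix concrete, here is a minimal sketch of how those counts multiply out. The prompt strings and engine labels are placeholders, not our actual prompt set:

```python
from itertools import product

ENGINES = ["chatgpt", "claude", "perplexity", "google_aio"]
# Placeholder prompt variants — the real study used 17 distinct buyer phrasings.
PROMPT_VARIANTS = [f"best CRM (variant {i})" for i in range(17)]
RUNS_PER_VARIANT = 20  # one run per variant per query day

# Every (engine, variant, repetition) cell in the study grid.
schedule = list(product(ENGINES, PROMPT_VARIANTS, range(RUNS_PER_VARIANT)))

per_engine = len(schedule) // len(ENGINES)
print(per_engine, len(schedule))  # 340 per engine, 1360 across the panel
```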

Here is what won, what surprised us, and what the data says about how AI search picks recommendations in 2026.

The leaderboard (mention rate, %)

Across all 1,360 runs, the share of runs in which each brand was mentioned at least once in the answer text:

  1. HubSpot — 86%
  2. Salesforce — 79%
  3. Pipedrive — 64%
  4. Attio — 47%
  5. Close — 38%
  6. Zoho — 34%
  7. Monday Sales CRM — 22%
  8. Capsule — 14%
  9. Folk — 11%
  10. Insightly — 6%
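The metric behind the leaderboard is simple: a run counts as a mention if the brand name appears at least once in the answer text. A sketch of that computation — the `answers` list here is a toy sample, not study data:

```python
import re

def mention_rate(answers: list[str], brand: str) -> float:
    """Share of answers naming the brand at least once (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for text in answers if pattern.search(text))
    return hits / len(answers)

# Toy sample for illustration only:
answers = [
    "For most SMBs, HubSpot is the default pick.",
    "Salesforce and HubSpot dominate enterprise shortlists.",
    "Pipedrive is a lighter alternative.",
    "Consider Attio if you want a modern data model.",
]
print(mention_rate(answers, "HubSpot"))  # 0.5 on this toy sample
```

The word-boundary match matters: without it, a brand like "Close" would false-positive on ordinary words such as "closely".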

Three surprises

1. Claude is the kingmaker for newcomers

Attio (47% overall) was named in 71% of Claude runs but only 38% of ChatGPT runs and 24% of Perplexity runs. Claude appears to weight recency and "design-led" framing more heavily; the launch noise from Attio's last 18 months penetrated Claude's training updates fastest. If you're a recent entrant, Claude is the engine where you can show up first.

2. Google AI Overviews is brand-conservative

Google AIO leans heavily on Gartner Magic Quadrant and G2 Top-N lists. The top 5 in our panel matched G2's top 5 with 90% overlap. If you want to show up here, your G2 ranking is the lever. There is no shortcut.

3. ChatGPT shows persona drift

The same prompt, run under three different system personas (founder, enterprise IT, SMB owner), returned three different leaderboards on ChatGPT. Pipedrive jumped from #4 (founder persona) to #2 (SMB persona), and HubSpot dropped from #1 (founder) to #3 (enterprise IT — Salesforce dominates there). This is a measurement gotcha: if your monitoring tool doesn't run multiple personas, you're seeing one slice of reality.
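One way to quantify persona drift is to run the panel once per persona and compute each brand's rank spread — the gap between its best and worst position. A sketch with illustrative ranks (not our study data), shaped to match the shifts described above:

```python
# Hypothetical per-persona leaderboards, ordered best-first (illustrative only).
leaderboards = {
    "founder":       ["HubSpot", "Salesforce", "Close", "Pipedrive"],
    "smb_owner":     ["HubSpot", "Pipedrive", "Salesforce", "Close"],
    "enterprise_it": ["Salesforce", "Zoho", "HubSpot", "Pipedrive"],
}

def rank_spread(brand: str) -> int:
    """Gap between a brand's best and worst rank across personas (ranks are 1-indexed)."""
    ranks = [lb.index(brand) + 1 for lb in leaderboards.values() if brand in lb]
    return max(ranks) - min(ranks)

print(rank_spread("Pipedrive"))  # 2: ranges from #2 (SMB) to #4 (founder, enterprise)
```

A large spread is the signal: it means a single-persona monitoring setup is under- or over-reporting that brand, depending on which persona it happens to use.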

Comparison: how engines choose

Pattern across the four engines:

  • ChatGPT — broad memory, persona-sensitive, slow to add new entrants.
  • Claude — recency-friendly, narrative-aware, friendly to thoughtful brand positioning.
  • Perplexity — heavily retrieval-led; weights review aggregators and recent blog posts.
  • Google AI Overviews — conservative, leans on Gartner/G2/Forrester. Slowest to update, hardest to penetrate.

What this means for your brand

If your brand is not in the top 10 of your category in AI search, the play is not "do better SEO". It's:

  1. Get on G2 and earn 20+ verified reviews. (Unlocks Google AIO and Perplexity.)
  2. Publish 2-3 narrative-led posts that frame your category. (Unlocks Claude.)
  3. Run a press cycle to seed memory. (Unlocks ChatGPT, slowly.)
  4. Wait. Then measure. Then iterate.

Want to run this kind of study on your own category? A Pilot subscription does it monthly.

