Why ChatGPT doesn't recommend your SaaS (and how to fix it)
Five concrete reasons your brand is invisible in AI answers — and the practical content, structured-data, and authority moves that change it.
You ask ChatGPT for the best [your category]. It names three competitors. It does not name you. You check Claude, Perplexity, Google AI Overviews. Same story. Your SEO is fine. Your G2 reviews are fine. So what's wrong?
After running a thousand citation audits, here are the five most common reasons — ranked by how often they turn out to be the actual culprit — and what to do about each.
1. You aren't on the third-party authority sites LLMs read
LLMs synthesize answers from training data plus, increasingly, live retrieval. The retrieval signals over-index on a small set of trusted publications and aggregators: G2, Capterra, Product Hunt, Reddit, Hacker News, large Substack newsletters, and major trade publications.
If your brand appears on your own site and nowhere else, you don't exist to the model. Fix: get listed on G2/Capterra (and earn 20+ reviews to surface in their listings), seed mentions in 2-3 niche newsletters, post a launch on Hacker News even if it doesn't trend.
2. You don't have a clear category claim
LLMs need to slot you into a category to recommend you. If your homepage says "the work OS for modern teams", a buyer prompt for "best CRM" will not find you — even if you do CRM work. Fix: in your H1 and meta description, name the category your buyers actually search for. The model is not creative on your behalf.
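Concretely, that means the category string appears verbatim in the title, meta description, and H1. A minimal sketch — the company name, category, and copy below are placeholders, not a prescription:

```html
<head>
  <title>Acme — CRM for B2B SaaS teams</title>
  <meta name="description"
        content="Acme is a CRM built for B2B SaaS teams: pipeline, renewals, and expansion tracking in one place.">
</head>
<body>
  <h1>The CRM for B2B SaaS teams</h1>
</body>
```

The point is the literal match: if buyers prompt for "best CRM," the word "CRM" has to be on the page, not implied by it.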
3. Your structured data is missing or thin
JSON-LD with SoftwareApplication, Organization, and Offer schema gives both Google AIO and (increasingly) other engines a clean machine-readable description. Without it, you depend on the model parsing prose. Fix: implement Schema.org on your homepage, pricing page, and any feature pages. Include a price range — "Custom pricing" is a citation killer.
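A minimal JSON-LD sketch of the three types named above, placed in the page head. All names, URLs, and prices here are illustrative — swap in your own, and keep the price a real number:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Acme CRM",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "offers": {
    "@type": "Offer",
    "price": "49",
    "priceCurrency": "USD"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Acme, Inc.",
    "url": "https://example.com"
  }
}
</script>
```

Validate it with a structured-data testing tool before shipping; one malformed brace and the whole block is ignored.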
4. Your founders/team aren't visible
This one surprises people. LLMs heavily reward "real humans behind the brand" signals: founder LinkedIn presence, podcast appearances, conference talks, technical Twitter engagement. If your About page is anonymous, the model treats your brand as low-trust. Fix: founder bios, LinkedIn linked from About, at least one podcast or interview a quarter.
5. You're invisible in the comparison surface
"X vs Y" content is what feeds the comparison answers ("is X better than Y?"). If only your competitor has written that page, the model will lean on your competitor's framing. Fix: write your own comparison pages — fair, factual, with a clear position. Don't trash the competitor; just define the trade-off in your terms.
The 60-day plan
- Days 1–14. Audit. Run 50 prompts daily for two weeks across ChatGPT, Claude, Perplexity. Record where you appear, where you don't, who wins your absent prompts.
- Days 15–30. Authority moves. Get listed on the 2-3 missing aggregators. Seed 5 niche-newsletter mentions. Post on Hacker News.
- Days 31–45. On-page. Rewrite H1/meta to name your category. Add Schema.org. Publish 3 comparison pages.
- Days 46–60. Re-audit. Check what changed. Iterate on the prompts where you're still absent.
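The audit and re-audit steps boil down to one number: the share of answers per engine that mention your brand. A minimal sketch, assuming you have already collected each engine's raw answers into a list of records — the record shape and brand names here are illustrative:

```python
import re
from collections import defaultdict

def mention_rate(records, brand):
    """Share of answers mentioning `brand` (whole-word, case-insensitive),
    broken down by engine. `records` is a list of dicts like
    {"engine": "chatgpt", "prompt": "...", "answer": "..."}."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["engine"]] += 1
        if pattern.search(r["answer"]):
            hits[r["engine"]] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}

# Illustrative records — in practice, one per prompt per engine per day.
records = [
    {"engine": "chatgpt", "prompt": "best crm", "answer": "Top picks: Acme, HubSpot."},
    {"engine": "chatgpt", "prompt": "crm for saas", "answer": "HubSpot and Pipedrive lead."},
    {"engine": "claude", "prompt": "best crm", "answer": "Acme is a solid choice."},
]
print(mention_rate(records, "Acme"))  # {'chatgpt': 0.5, 'claude': 1.0}
```

Run the same tally on day 14 and day 60; the delta per engine is your progress, and the prompts with a zero rate are your iteration list.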
This works. Not all citation gaps close in 60 days — but most categories show measurable lift in weeks 4–6. The compounding starts there.
If you want a tool that runs the audit and tracks the progress automatically, that's BrandMirror. See what a real report looks like.