The Study
We ran 24,000 prompts across 60 SaaS categories through four major AI platforms: ChatGPT (GPT-4o with web search), Google Gemini (with Google Search grounding), Perplexity (with built-in web search), and Google AI Overview.
Each prompt mirrored a real buyer research query: things like "what's the best CRM for a 50-person team?" or "top cybersecurity platforms for mid-market companies." We tracked which products were mentioned, their position in the recommendation list, the sentiment of each mention, and which external sources were cited.
Here's what we found.
Finding 1: The Average SaaS Product Is Invisible
Across all categories, the average SaaS product appears in only 12% of relevant AI queries.
That means for every 100 times a buyer asks AI about your category, you're mentioned 12 times. Your top competitor? Likely mentioned 40-50 times.
The distribution is heavily skewed. In most categories, the top three products capture over 60% of all AI recommendations. Everyone else fights over scraps.
Finding 2: There's a Massive Gap Between #1 and #5
The category leader typically appears in 45% or more of relevant queries. The fifth-most-mentioned product appears in about 15%.
That 30-point gap is enormous. It means the category leader is recommended three times as often as the fifth-place product, and both of those are doing far better than the average.
If you're not in the top five for your category, you're statistically invisible to AI-assisted buyers.
Finding 3: FAQ Schema Is the Highest-Impact Technical Signal
Products with FAQ schema markup on their website are roughly three times more likely to be cited by AI models than products without it.
This makes sense when you consider how AI models work. When a user asks "does [product] support SSO?" and your FAQ page has that exact question answered with proper schema markup, the AI can extract your answer with high confidence and attribute it to you.
FAQ schema is also relatively easy to implement. It's a few lines of JSON-LD in a script tag on your page. Yet fewer than 20% of the SaaS products we analyzed had it.
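To make that concrete, here's what a minimal FAQ schema block looks like. The sketch below assembles the JSON-LD in Python purely for illustration; the product name, question, and answer are hypothetical, and on a real page the output would be embedded in a script tag of type application/ld+json.

```python
import json

# Minimal FAQPage JSON-LD (schema.org vocabulary). The product name,
# question, and answer below are illustrative placeholders.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does Acme CRM support SSO?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Acme CRM supports SAML 2.0 and OIDC "
                        "single sign-on on all business plans.",
            },
        }
    ],
}

# On a real page this would sit inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(faq_schema, indent=2))
```

One question-and-answer pair per mainEntity entry; add more entries for each question your FAQ page answers.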
Finding 4: G2 Is the Most-Cited Review Source
When AI models cite sources to support their product recommendations, G2 appears more frequently than any other review platform. Capterra is second, followed by TrustRadius.
The practical implication: if your G2 profile is empty or has fewer than ten reviews, you're missing one of the strongest signals AI models use to validate product recommendations.
We also found that the content of reviews matters. Products with reviews that mention specific features and use cases are more likely to be recommended for those specific queries. A review that says "great project management tool for remote teams" feeds directly into an AI model's ability to recommend that product when someone asks about project management for remote teams.
Finding 5: Perplexity Has the Highest Citation Rate
Of the four AI platforms we tested, Perplexity includes source citations most consistently. Nearly every Perplexity response includes numbered references to external sources.
This is significant for two reasons:
- If your content is good enough to be cited, Perplexity will link to it, driving direct referral traffic.
- Perplexity's citation behavior gives us the clearest window into which sources AI trusts.
Finding 6: Google AI Overview Has the Highest Bar
Google AI Overview, the AI-generated summary shown at the top of Google Search results, is the hardest platform to appear in.
Products that appear in Google AI Overview tend to have:
- Strong structured data
- High domain authority
- Presence on multiple review platforms
- Recently updated content
Finding 7: Comparison Content Punches Above Its Weight
Products that have "[Product A] vs [Product B]" comparison pages on their website are disproportionately represented in AI responses to comparison queries.
When a user asks "Notion vs Asana," AI models search the web for comparison content. If you have a well-structured comparison page, you become the source. If you don't, a third-party review site or competitor's comparison page fills that role, and the framing may not be in your favor.
We found that the most effective comparison pages are:
- Factual and balanced (not overtly biased toward your product)
- Structured with clear feature tables
- Recently updated (AI models prefer fresh content)
- Marked up with appropriate schema
Finding 8: AI Models Disagree More Than You'd Think
The same prompt can produce different product recommendations across different AI platforms. On average, only about 60% of products recommended by one platform were also recommended by another.
ChatGPT and Perplexity had the highest agreement (likely because both rely heavily on web search). Gemini and Google AI Overview had moderate agreement with each other (shared Google infrastructure) but diverged from ChatGPT on about 35% of recommendations.
This means monitoring only one AI platform gives you an incomplete picture. A product might be completely invisible on ChatGPT but well-represented on Perplexity, or vice versa.
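As a rough illustration of how that overlap can be measured, here's a simple directional agreement metric. The formula and the brand lists below are our own assumptions for the sketch; the study doesn't publish its exact agreement calculation.

```python
def agreement(platform_a: set, platform_b: set) -> float:
    """Share of platform A's recommendations that also appear on
    platform B. An assumed metric for illustration only; the study's
    actual agreement formula is not published."""
    if not platform_a:
        return 0.0
    return len(platform_a & platform_b) / len(platform_a)

# Hypothetical recommendation sets for one prompt.
chatgpt = {"Asana", "Notion", "Monday", "ClickUp", "Trello"}
perplexity = {"Asana", "Notion", "ClickUp", "Wrike", "Basecamp"}

print(agreement(chatgpt, perplexity))  # 3 of 5 overlap -> 0.6
```

A 0.6 result here matches the roughly 60% cross-platform overlap the study reports on average.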
Finding 9: Sentiment Varies Significantly by Model
When AI models do mention a product, the tone and framing can vary dramatically.
We found cases where ChatGPT described a product positively ("a powerful and well-established platform") while Gemini was neutral or even negative ("a legacy solution that some teams still use"). This sentiment difference directly affects whether a buyer follows through.
Tracking AI sentiment, not just whether you're mentioned, is critical. A negative mention can be worse than no mention at all.
Finding 10: The Gap Represents Real Revenue
Using industry conversion rate benchmarks, we estimated the revenue impact of AI invisibility for one example B2B SaaS company:
- Category: Managed IT Services
- Monthly AI queries for the category: ~25,000
- Top competitor discovery rate: 34%
- Our subject's discovery rate: 0%
- Estimated annual revenue at risk: $72,000
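The arithmetic behind an estimate like this can be sketched as a simple funnel. Only the query volume and discovery-rate gap below come from the example above; the click-through, lead, close-rate, and ACV figures are hypothetical assumptions chosen to land near the $72,000 estimate, not benchmarks from the study.

```python
# Back-of-envelope revenue-at-risk model. All funnel parameters
# (click-through, lead rate, close rate, ACV) are illustrative
# assumptions, not figures from the study.
monthly_queries = 25_000      # category-level AI queries per month
discovery_gap = 0.34          # top competitor's 34% minus our 0%

annual_discoveries = monthly_queries * 12 * discovery_gap  # ~102,000
visits = annual_discoveries * 0.01   # assume 1% click through
leads = visits * 0.10                # assume 10% of visits become leads
deals = leads * 0.10                 # assume 10% close rate
revenue_at_risk = deals * 7_200      # assume $7,200 ACV

print(round(revenue_at_risk))  # in the ballpark of the $72,000 estimate
```

Swap in your own category's query volume and funnel numbers to get a figure for your product.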
What Separates the Winners
Products that consistently rank in the top three for their category share a set of common characteristics:
- Comprehensive structured data: Organization, Product, and FAQ schema on their website.
- Active review presence: 50+ reviews across G2, Capterra, and at least one other platform.
- Comparison content: Dedicated pages comparing themselves to key competitors.
- FAQ depth: Extensive FAQ pages with schema that cover both product-specific and category-level questions.
- Fresh content: Regularly updated blog, documentation, or resource pages that AI models pick up through web search.
- Multi-platform consistency: They appear consistently across ChatGPT, Gemini, Perplexity, and Google AI Overview, not just one.
Methodology
- Prompts: 24,000 across 60 SaaS categories, sourced from common buyer research queries
- Platforms: ChatGPT (GPT-4o with web search), Google Gemini (with Google Search grounding), Perplexity (Sonar with web search), Google AI Overview
- Data collection: Real browser sessions and API calls with web search enabled, not static training data queries
- Analysis: Brand mention extraction, position tracking, sentiment analysis, citation source mapping
- Tool: Foxish AI visibility platform
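As a rough sketch of what the brand-mention-extraction step could look like, here's a simplified version that scans an AI response for a known list of brand names and records the order in which they appear. The matching logic and the brand list are illustrative assumptions, not the study's actual pipeline.

```python
import re

def extract_mentions(response: str, brands: list) -> list:
    """Return (brand, rank) pairs ordered by first appearance.

    A simplified stand-in for a mention-extraction step; assumes a
    fixed list of known brand names and case-insensitive matching.
    """
    found = []
    for brand in brands:
        match = re.search(re.escape(brand), response, flags=re.IGNORECASE)
        if match:
            found.append((brand, match.start()))
    found.sort(key=lambda pair: pair[1])  # order by position in the text
    return [(brand, rank + 1) for rank, (brand, _) in enumerate(found)]

answer = ("For remote teams, Asana and Notion are strong picks; "
          "ClickUp is also worth a look.")
print(extract_mentions(answer, ["Notion", "ClickUp", "Asana", "Trello"]))
# [('Asana', 1), ('Notion', 2), ('ClickUp', 3)]
```

A production pipeline would also need alias handling (e.g. "monday.com" vs "Monday") and disambiguation for brand names that are common words, which this sketch skips.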
Want to see where your product stands? Foxish tracks your AI visibility across ChatGPT, Claude, Gemini, Perplexity, and Google AI Overview and shows you exactly where to improve. Start your free trial at foxish.ai.