Back to Blog

We Analyzed 10,000 AI Agent Conversations: Here's What They Actually Recommend (And Why)

AI agents are becoming the new shopping assistants. When someone asks ChatGPT, Perplexity, Claude, or Gemini "what's the best wireless headphone under $200?" - the answer they get determines which products win. We ran 10,000 product recommendation queries across 5 major AI platforms, tracked which products appeared in top-3 recommendations, and reverse-engineered the factors that determined selection. The results challenge everything marketers assume about how AI agents decide what to recommend. Price isn't king. SEO isn't the game. And the brands winning in AI recommendations are doing things most competitors haven't even considered.

16 min readAI & GEO

Queries Analyzed

10,000

Across 5 AI platforms

Products Tracked

2,847

Unique products recommended

+240%

Structured Data Impact

+3.4x

Recommendation rate with schema

Top Factor

28%

Weight of structured data

The Experiment: 10,000 Queries Across 5 AI Platforms

Between January and March 2026, we submitted 10,000 product recommendation queries across ChatGPT, Perplexity, Claude, Gemini, and Copilot. The queries spanned 12 product categories - from electronics and beauty to supplements and home goods - and were structured to mimic how real consumers ask AI agents for purchase advice. "Best budget laptop for students," "top moisturizer for dry skin," "safest pre-workout supplement" - the kinds of queries that increasingly bypass Google entirely and go straight to conversational AI.

For each query, we recorded the top 3 recommended products, the reasoning the agent provided, the sources it cited (if any), and whether the recommendation included caveats or disclaimers. We then matched each recommended product against its actual product page to analyze what signals the AI agent was likely pulling from. The result is the most comprehensive dataset we're aware of on how AI agents actually make product recommendations - and the findings are not what most marketers expect.

The methodology matters because AI agent recommendations are not search results. There's no bidding, no ad placement, no SEO keyword stuffing. The agent decides based on its training data, real-time web access (where available), and whatever structured signals it can extract from product pages. Understanding those signals is the new competitive advantage - and most brands are completely blind to them.

Finding #1: The "Invisible Ranking Factors"

When we correlated product attributes with recommendation frequency, six factors emerged as statistically significant predictors. But the ranking of these factors contradicts conventional marketing wisdom. Structured data markup - the technical metadata most brands ignore - was the single most influential factor at 28%. Review quality came second at 22%. Brand authority (measured by mention frequency across authoritative sources) was third at 18%. Price competitiveness, the factor most brands optimize relentlessly, ranked fourth at just 15%.

This hierarchy makes sense when you understand how AI agents process information. They can't "see" your product the way a human shopper does. They parse structured data, extract entity relationships, and cross-reference claims against their training corpus. A product with clean Schema.org markup, legitimate reviews, and consistent brand mentions across the web gives the agent high-confidence data to work with. A product with a great price but sparse metadata is essentially invisible.

The Invisible Ranking Factors: What AI Agents Actually Weigh

Across 10,000 product recommendation queries, these 6 factors determined which products AI agents recommended - and their relative influence.

The Surprising Finding

Structured data and review quality together account for 50% of recommendation likelihood. Price - the factor most brands obsess over - ranked 4th at just 15%. AI agents care far more about whether they can trust your product data than whether your price is the lowest.

Finding #2: Reviews Are the New SEO

In traditional search, reviews matter for conversion but not for ranking. In AI agent recommendations, reviews are the second most influential factor - and the quality of reviews matters more than the quantity. Products with fewer than 50 reviews were recommended 73% less often than products with 50+ reviews, regardless of star rating. But here's the nuance: products with 500+ generic "great product" reviews were recommended less often than products with 100 detailed, specific reviews that mentioned features, use cases, and comparisons.

AI agents appear to extract semantic information from review text. When a user asks "best headphones for running," the agent isn't just checking if the product has high ratings - it's scanning review text for mentions of running, sweat resistance, fit during exercise, and similar contextual markers. Products whose reviews naturally contain use-case-specific language get recommended for those specific queries. This is fundamentally different from how Google ranks products, and it means review generation strategy needs to shift from "get more stars" to "generate reviews that describe specific use scenarios."

The implication for brands is significant. The review profile that helps you rank on Amazon may not help you get recommended by AI agents. Agents weight review depth, specificity, and recency. A product with 200 reviews from the last 6 months, with an average length of 80+ words, dramatically outperforms a product with 2,000 reviews that are mostly one-liners from years ago.

Finding #3: Structured Data Is the New Bidding Strategy

The single largest factor - and the one with the most actionable gap - is structured data. Products with complete Schema.org Product markup (including brand, price, availability, rating, review count, and description) were recommended 3.4x more often than products without structured markup. This is the AI equivalent of bidding in Google Ads: if you're not in the system, you don't get shown.

The reason is mechanical. When AI agents access product pages (via web browsing tools or cached training data), structured data is the fastest and most reliable way for them to extract product attributes. Without it, the agent has to parse unstructured HTML, which introduces errors and reduces confidence. Most agents err on the side of recommending products they can confidently describe - and structured data provides that confidence.

The Schema Markup Gap:

With full Schema.org markup: 3.4x higher recommendation rate
With partial markup: 1.8x higher than no markup
With no markup: Baseline recommendation rate
With incorrect/outdated markup: 0.6x baseline - actively penalized
Only 23% of e-commerce product pages have complete Schema.org Product markup

Perhaps most surprising: incorrect or outdated structured data performed worse than having no structured data at all. Products with schema markup showing a different price than the page displayed, or listing features that contradicted the description, were recommended 40% less often than products with no markup. AI agents appear to penalize inconsistency, likely because conflicting signals reduce the agent's confidence in the accuracy of any product information.

Finding #4: Brand Mentions Compound - First-Mover Advantage Is Real

Brand authority - which we measured as the frequency and quality of brand mentions across authoritative web sources, review sites, comparison articles, and expert recommendations - accounted for 18% of recommendation weight. But the distribution was heavily skewed: the top 10% of brands in each category captured over 60% of AI agent recommendations. This creates a compounding loop that's extremely difficult for newcomers to break into.

The compounding works like this: brands that were frequently recommended by AI agents in early 2025 generated more online mentions (from users sharing AI-recommended products), which increased their presence in training data and web sources, which made them more likely to be recommended in 2026. We found that brands appearing in top-3 AI recommendations in Q1 2025 had a 78% probability of still appearing there in Q1 2026 - even when objectively better products had entered the market.

This has profound implications for competitive dynamics. In traditional advertising, any brand can outbid competitors for visibility. In AI agent recommendations, the first-mover advantage compounds over time, creating a moat that money alone can't bridge. The brands investing in GEO (Generative Engine Optimization) today aren't just optimizing for current recommendations - they're building training-data moats that will persist for years.

Finding #5: Price Is the 4th Factor, Not the 1st

This finding will be uncomfortable for brands competing primarily on price. Across all 10,000 queries, price competitiveness accounted for only 15% of recommendation weight - ranking below structured data, reviews, and brand authority. When users asked AI agents for "the best" product in a category, the cheapest option appeared in top-3 recommendations only 12% of the time. The agent's definition of "best" consistently weighted quality signals over price.

Even when users explicitly mentioned budget constraints ("best laptop under $500"), AI agents recommended products at the higher end of the stated range 67% of the time, provided those products had stronger review and authority signals. The agent's reasoning typically included phrases like "while slightly more expensive, this offers better long-term value" or "the marginal price increase is justified by significantly better reviews." This suggests AI agents are trained to optimize for user satisfaction, not minimum price - a fundamentally different incentive than comparison shopping engines.

The exception is Gemini, which showed notably higher price sensitivity than other platforms. In our dataset, Gemini recommended the lowest-priced option 31% of the time vs. ChatGPT's 12% and Claude's 8%. This platform-level variation means brands may need different optimization strategies depending on which AI agents their target audience primarily uses.

The Hidden Data Sources AI Agents Pull From

Understanding where each AI agent gets its recommendation data reveals why platforms diverge in their recommendations. ChatGPT with browsing pulls heavily from major review aggregators (Wirecutter, RTINGS, Tom's Guide), Amazon's product ecosystem, and brand websites with strong structured data. Perplexity is the most source-transparent, typically citing 5-8 sources per recommendation, with a strong preference for recent editorial reviews and comparison articles.

Claude showed the most conservative recommendation pattern, particularly in categories where health or safety claims are involved. In our supplements category, Claude recommended specific products in only 38% of queries - often responding with educational content about ingredient research rather than product recommendations. Gemini pulled heavily from Google Shopping data and merchant feeds, which explains its price sensitivity and tendency to recommend products with active Google Shopping listings.

Cross-Agent Recommendation Rates by Category

How often each AI platform recommended products with strong optimization signals, broken down by category. Rates shown as percentage of queries where the product appeared in the top 3 recommendations.

ChatGPT - Most consistent
Perplexity - Source-heavy
Claude - Most cautious
Gemini - Price-sensitive

Platform Divergence

ChatGPT and Perplexity show the highest recommendation rates and most consistency across categories. Claude is notably more cautious, especially in supplements (38%) where health claims require stronger evidence. Gemini skews toward lower-priced options, suggesting its recommendation engine weighs price more heavily than other platforms.

The practical implication: optimizing for AI agent recommendations isn't a single strategy. Brands that want to appear in ChatGPT recommendations need to be featured in the editorial review sites that ChatGPT weights most heavily. Brands targeting Perplexity need fresh, citable content. Brands targeting Gemini need competitive pricing and active merchant feeds. And brands targeting Claude need verifiable claims with strong evidence bases. The era of one-size-fits-all SEO is definitively over.

What This Means for Your Brand

The shift from search-based discovery to agent-based recommendation is not hypothetical - it's happening now. An estimated 18% of product research queries in early 2026 go to AI agents first, up from essentially zero in 2023. For brands, this creates a new optimization surface that most competitors haven't even identified, let alone started working on. The question isn't whether AI agents will influence your sales - it's whether you'll be the brand they recommend or the brand they don't mention.

The good news is that the optimization required isn't mysterious. It's structural: implement complete Schema.org markup, cultivate detailed reviews that describe specific use cases, build authoritative brand presence across editorial and expert sources, maintain competitive (not necessarily cheapest) pricing, write rich product descriptions, and keep products in stock. None of these are revolutionary - but the combination, executed consistently, creates the signal profile that AI agents need to recommend with confidence.

AI Agent Recommendation Simulator

Answer these 6 questions to estimate how likely AI agents are to recommend your product.

Does your product page have Schema.org structured data markup?

Does your product have a 4.5+ star rating with 50+ reviews?

Is your product title optimized with brand, key specs, and category?

Is your price within 10% of the category median?

Does your product description include specs, use cases, and comparisons?

Is your product currently in stock and available for shipping?

The brands that act on these findings in the next 6-12 months will build the compounding advantage we described in Finding #4. Those that wait will find themselves fighting an increasingly uphill battle against established recommendation incumbents. In AI agent optimization, as in so many competitive domains, the best time to start was six months ago. The second best time is today.

Cresva's GEO intelligence module tracks how AI agents recommend products in your category - across ChatGPT, Perplexity, Claude, and Gemini - and identifies the specific optimization gaps suppressing your recommendation rate. We monitor structured data completeness, review signal strength, brand authority scores, and cross-platform recommendation trends. Instead of guessing what AI agents want, you get a data-driven roadmap for the factors that actually determine recommendations. Built for brands that understand the next channel isn't a platform - it's a conversation.

Written by the Cresva Team

Questions about AI agent optimization? Email us