Company · January 4, 2026

The End of Traditional SEO

Daniel Wang

Founder · UC Berkeley MIDS

For two decades, search engine optimization has been the dominant discipline for online visibility. But the infrastructure of discovery is shifting beneath our feet. AI assistants are becoming the primary interface between users and the internet — and the rules that governed traditional search don't apply to how Large Language Models decide which brands to recommend.

The shift is already measurable

AI search engines now influence billions of queries per month, and they don't use the same ranking signals as Google.

When someone asks ChatGPT “What's the best CRM for a 50-person team?” or tells Perplexity “Compare project management tools under $20/month,” the response isn't a list of ten blue links. It's a synthesized recommendation — and the factors that determine which brands appear are fundamentally different from traditional search rankings.

Research from the University of Toronto (Chen et al., 2025) found that 69–82% of AI search engines' citations are earned media, compared to 40–45% for Google. The implication is stark: in AI search, third-party mentions of your brand matter far more than your own website's optimization.

An Ahrefs study of 75,000 brands confirmed this. The single strongest predictor of whether an LLM recommends your brand is how often you're mentioned across the web — with a correlation coefficient of 0.664. Backlink profiles, the backbone of traditional SEO, showed a correlation of just 0.218. Domain authority showed near-zero direct correlation.
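As a minimal illustration of what a correlation analysis like this involves (the numbers below are made up for demonstration, not Ahrefs' data), Pearson's r between web-mention counts and recommendation rates can be computed directly:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: web mentions per brand vs. how often an LLM recommends it.
mentions = [120, 450, 80, 900, 300]
rec_rate = [0.05, 0.22, 0.03, 0.41, 0.18]
r = pearson_r(mentions, rec_rate)
```

A coefficient near 0.664, as in the Ahrefs study, would indicate a strong positive relationship; the near-zero figures for domain authority mean knowing a brand's authority score tells you almost nothing about whether an LLM recommends it.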

Why traditional SEO metrics fail in AI search

Large Language Models don't rank pages — they synthesize information, evaluate semantic authority, and make probabilistic decisions about which brands to recommend.

The foundational Generative Engine Optimization paper by Aggarwal et al. (KDD 2024, Princeton/Georgia Tech/IIT Delhi) tested nine optimization methods across 10,000 queries. Their finding was unambiguous: keyword stuffing performed 10% worse than baseline. The core tactic of traditional SEO actively harms AI visibility.

What worked instead? Adding statistics to content delivered a 30–40% visibility improvement. Including source citations showed similar gains. A companion study from UC Berkeley (Wan et al., ACL 2024) revealed why: LLMs largely ignore stylistic authority signals that humans find persuasive — scientific references, neutral tone, appeals to authority. Instead, they rely on textual relevance and factual density.

SearchAtlas analyzed 21,767 domains and found correlations between domain authority metrics and LLM visibility of r = –0.12 for ChatGPT, –0.18 for Perplexity, and –0.09 for Gemini. The relationship isn't just weak — it's slightly negative. Your PageRank doesn't matter if the model doesn't understand why you're relevant.

Each AI engine has different citation DNA

Only 11% of domains are cited by both ChatGPT and Perplexity — a generic optimization strategy cannot cover the landscape.

Research from CMU (Wu et al., 2025) introduced AutoGEO, an automated optimization framework that achieved a 35.99% average improvement in AI visibility. Their key finding: engine-specific optimization rules consistently outperform generic strategies. What works for ChatGPT doesn't necessarily work for Perplexity.

Top citation sources and key behavior by engine:

  • ChatGPT · Top sources: Wikipedia, Forbes, G2, TechRadar · Matches Bing top-10 results 87% of the time; 60.5% of cited pages published within 2 years
  • Google AI Overviews · Top sources: Reddit (21%), YouTube (18.8%), Quora · 93.67% of citations from top-10 organic results
  • Perplexity · Top sources: Reddit (46.7%), YouTube, Gartner · Real-time web search for every query; 50% of citations from current-year content
  • Claude · Top sources: traditional databases, directories · Strongest bias toward established companies; no native web search
  • Gemini · Top sources: authoritative lists, Google reviews · Local business reviews dominate at 38% for local searches

This fragmentation means brands need platform-specific visibility strategies. A brand that dominates ChatGPT recommendations may be invisible on Perplexity, and vice versa. Understanding each engine's citation behavior is a prerequisite for optimization.

Introducing Generative Optimization

Generative Engine Optimization (GEO) is the discipline of understanding and influencing how AI models perceive and recommend your brand — built on peer-reviewed research, not speculation.

The term was coined in the foundational GEO paper published at KDD 2024 by researchers from Princeton, Georgia Tech, and IIT Delhi. Their work, along with subsequent studies from CMU, Berkeley, Columbia, MIT, and Harvard, established that AI visibility can be systematically measured and improved.

The research converges on a clear hierarchy. Off-site brand mentions are the strongest controllable lever — earning mentions across YouTube, Reddit, Wikipedia, review sites, and press creates the foundational signal that makes LLMs aware of your brand. On-site content optimization is the selection mechanism — content structure, statistical density, freshness, and schema markup determine whether retrieved content gets cited.

Think of off-site factors as determining whether you're in the candidate pool, and on-site factors as determining whether you're selected from that pool. Both matter. But if LLMs don't know your brand exists, no amount of on-page optimization will help.

What Sill does differently

Sill measures AI visibility directly — running purchase-intent queries across every major platform and scoring your brand against 18+ peer-reviewed GEO factors.

Most SEO tools tell you where you rank on Google. Sill tells you what happens when someone asks an AI assistant for a recommendation in your category. We run 50 purchase-intent queries across ChatGPT, Perplexity, Gemini, Claude, Copilot, and Google AI Overviews, then score your pages against on-site and off-site factors grounded in published GEO research.
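A sketch of what this kind of measurement looks like in practice (the `share_of_voice` function and the sample answers below are hypothetical illustrations, not Sill's implementation): given a batch of raw answers from one engine, share of voice is the fraction of responses that mention each brand.

```python
from collections import Counter

def share_of_voice(responses, brands):
    """Fraction of responses mentioning each brand.

    `responses` is a list of raw answer strings from one AI engine;
    `brands` is the list of brand names to track. Matching here is a
    naive case-insensitive substring check for illustration only.
    """
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    n = len(responses)
    return {brand: counts[brand] / n for brand in brands}

# Hypothetical engine responses to "best CRM for a 50-person team?"
answers = [
    "Top picks: AcmeCRM and PipeWorks for mid-size teams.",
    "Consider PipeWorks; AcmeCRM is also popular.",
    "PipeWorks leads for teams under 100 people.",
]
sov = share_of_voice(answers, ["AcmeCRM", "PipeWorks"])
```

In this toy sample, AcmeCRM appears in two of three answers and PipeWorks in all three; a real pipeline would repeat this across many queries, engines, and runs before reporting a score.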

Every Sill report includes:

  • AI Share of Voice — how often each platform recommends your brand versus competitors
  • Platform-specific breakdowns — because each engine has different citation DNA
  • Content audit — page-by-page scoring against GEO factors like statistical density, answer capsule presence, heading structure, and content freshness
  • Off-site presence analysis — your visibility across YouTube, Reddit, Wikipedia, review sites, and press
  • Competitive landscape — exactly where competitors outperform you and why
  • Prioritized recommendations — ranked by expected impact, grounded in published evidence

The measurement problem — and our approach

There is no Search Console for LLMs. We built Sill to provide the closest thing that exists.

We're transparent about a hard truth: GEO measurement is fundamentally more difficult than traditional SEO measurement. There are no crawl reports from AI engines, no impression data, and no click-through rates from AI answers. LLM outputs are non-deterministic — the same query can produce different results each time. Citation patterns drift 40–60% month over month.

Our approach accounts for this uncertainty. We sample across many prompts run multiple times, track trends over time rather than point-in-time snapshots, and ground every recommendation in evidence with documented effect sizes. We don't claim precise causation where it can't be proven. We do give you the most comprehensive, research-backed picture of your AI visibility available today.
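One standard way to handle that non-determinism is to treat each run of a prompt as a Bernoulli sample and report an interval rather than a point estimate. A sketch using a Wilson score interval (the function and numbers are illustrative assumptions, not Sill's exact method):

```python
import math

def mention_rate_ci(hits, trials, z=1.96):
    """Estimated mention rate with a 95% Wilson score interval.

    Repeating the same prompt gives `trials` samples, of which `hits`
    mentioned the brand; the interval reflects sampling noise from
    non-deterministic LLM outputs.
    """
    p = hits / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return p, center - half, center + half

# Hypothetical: brand mentioned in 14 of 50 runs of one purchase-intent query.
p, lo, hi = mention_rate_ci(14, 50)
```

The width of the interval is why point-in-time snapshots mislead: with 50 runs, a 28% observed mention rate is compatible with a true rate anywhere from roughly the high teens to the low forties, so trends across repeated samples are the more reliable signal.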

Our mission

Sill exists to give every brand the tools to understand and optimize how AI engines perceive and recommend them.

The brands that understand how AI discovery works will have a compounding advantage over those that don't. This isn't about gaming another algorithm — it's about understanding how a fundamentally new class of information systems evaluates and recommends products and services.

We believe in transparency, scientific rigor, and empowering marketers with the same research that was previously locked behind academic paywalls and enterprise consulting fees. Every recommendation Sill makes traces back to published evidence.

The future of search is generative. We're building the tools to help you thrive in it.

Research cited

  • Aggarwal et al. “GEO: Generative Engine Optimization.” KDD 2024. Princeton / Georgia Tech / IIT Delhi.
  • Wan et al. “What Evidence Do Language Models Find Convincing?” ACL 2024. UC Berkeley.
  • Chen et al. “AI Search Citation Analysis.” 2025. University of Toronto.
  • Wu et al. “AutoGEO: Automated Generative Engine Optimization.” 2025. Carnegie Mellon University.
  • Ahrefs. “75,000 Brand Study: AI Citation Predictors.” 2025.
  • SE Ranking. “129,000 Domain AI Citation Study.” November 2025.
  • SearchAtlas. “21,767 Domain LLM Visibility Analysis.” 2025.
Daniel Wang

Founder · UC Berkeley MIDS

Previously at Nordstrom, Bloomberg, Hexagon (now Octave)