
Keywords are dead. Traffic can't tell you what AI thinks of your brand.

April 2026


For twenty years, traffic and keyword rankings were a reasonable proxy for visibility — and the logic held. Keywords signaled relevance. Rankings drove clicks. Clicks meant someone chose to find out more about you. The chain was tight enough that optimizing for keywords meant optimizing for visibility.

Keywords aren't dead. You still need to be crawlable, indexed, and topically relevant. But they can't be your primary signal anymore. The chain broke — quietly, about two years ago — and most teams are still running the same playbook while the actual game moved somewhere else.

The logic held because discovery had exactly one shape: someone typed a query, Google returned a ranked list, the user clicked something. If they clicked you, you showed up in analytics. If you ranked well, you got clicked. If you got clicked, you were visible. Signal and outcome were coupled, so optimizing for one optimized for the other.

That coupling is gone. Most teams don't know it yet.

What changed

When someone asks ChatGPT "what's the best inventory system for a small manufacturer" or asks Perplexity "how do I reduce customer churn in a subscription business," something happens that didn't exist two years ago: they get an answer. Not a list of links to read. An answer, with citations.

Your brand either appears in that answer or it doesn't.

No click happened. No session was recorded. Your analytics dashboard looks exactly the same whether you were cited in ten thousand AI responses today or zero. The game is being played and your scoreboard doesn't show the score.

This is the thing that matters: AI systems have already formed an opinion about your brand's relevance to your market. You just can't see it in any tool you're currently using.

Traffic was always a proxy. The question is what it was a proxy for.

Traffic wasn't the thing you cared about. You cared about visibility — being present and trusted when a potential customer was making a decision. Traffic was a good proxy because it meant a human had chosen to find out more about you. It was evidence of relevance.

Keyword rankings were a proxy for traffic. Backlinks were a proxy for rankings. Domain authority was a proxy for backlinks. The whole stack was a chain of proxies, and it worked because the underlying reality — that humans discovering your content had to click — gave the chain meaning.

AI answers break the chain at its first link. The human never clicks. The session never fires. But they still got an answer, and they still formed a judgment, and your brand was or wasn't part of it.

A company can rank #1 on Google for every keyword in its category and still be completely invisible in the AI answers those same people are getting. The rankings look great. The citation count is zero.

The inverse is also real. A site with no particular SEO profile (no domain authority, no backlink profile, no high-volume keyword rankings) can be consistently cited by ChatGPT and Perplexity because someone wrote a clear, specific, direct answer to exactly the question people are asking. None of that visibility shows up in any traditional metric.

The citation rate is the new signal

When a RAG system answers a question, it does something specific: it retrieves candidate passages, ranks them by semantic similarity to the query, and synthesizes a response from the top results. The question your content is being evaluated on is not "does this document rank for the query keyword?" It's "does this specific passage directly answer this specific question?"

That's a different test. And it produces different winners.
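The retrieval step above can be sketched in a few lines. This is a toy illustration, not any particular engine's code: the 3-d vectors stand in for real embedding-model output, and both the query and the passages are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def rank_passages(query_vec, passages):
    """Score each candidate passage against the query, the way a RAG
    retriever does: passage by passage, not page by page."""
    return sorted(((cosine(query_vec, vec), text) for text, vec in passages), reverse=True)

# Toy 3-d vectors standing in for real embedding-model output.
query = [0.9, 0.1, 0.3]  # "how do I reduce customer churn in a subscription business?"
passages = [
    ("Churn drops fastest when you fix onboarding in the first 14 days.", [0.88, 0.15, 0.28]),
    ("Our platform is the leading solution for modern teams.", [0.10, 0.90, 0.40]),
]
for score, text in rank_passages(query, passages):
    print(f"{score:.2f}  {text}")
```

Note what wins: the passage that states a specific, direct answer scores highest, regardless of which page it lives on or how that page ranks for the keyword.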

The signal that matters is citation rate: what percentage of AI responses about topics in your space include your brand? How does that compare to competitors? When you are cited, what's being attributed to you, and is that the thing you want to be known for?

This data exists. It's not in Google Search Console. It's not in Ahrefs. It's not in any traditional analytics stack because all of those were built around a model where visibility required a click.
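As a definition, citation rate is just mentions over responses. A minimal sketch, with invented brand names and hand-written snippets standing in for real model output:

```python
def citation_rate(responses, brand):
    """Share of AI responses that mention the brand (naive substring match)."""
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# Hand-written snippets standing in for real model responses.
responses = [
    "For small manufacturers, Acme and Widgetly come up most often.",
    "Popular options include Widgetly and Nuts & Bolts Co.",
    "Acme is frequently recommended for subscription businesses.",
]
rate = citation_rate(responses, "Acme")
print(f"{rate:.0%}")  # → 67%
```

Substring matching is only a first pass; a real audit also has to catch paraphrased mentions and check what claim the citation is attached to, not just that the name appears.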

Why this is worse than it sounds

In the old model, if you were losing visibility, you could see it. Rankings dropped. Traffic fell. Leads softened. There was signal lag, but the signal existed.

In the AI model, you can lose the entire AI-generated conversation about your category — every Perplexity answer, every ChatGPT recommendation, every Gemini response — and your dashboard shows nothing unusual. Traffic might actually be stable because the people who were going to click anyway still click. The loss is invisible and growing.

The companies that will get caught flat-footed aren't the ones ignoring SEO. They're the ones running excellent SEO programs, hitting their traffic targets, showing strong numbers in every traditional metric — while quietly losing every AI-mediated consideration to competitors who figured this out earlier.

What the teams ahead of this are doing differently

The teams that are ahead on this have stopped asking "how do we rank for X?" as their primary question. They're asking a different one: "When someone asks an AI about our category, what does it say, and where do we appear?"

That question requires a different kind of measurement — running the actual queries, reading the actual responses, tracking citation frequency over time, understanding which competitors are being mentioned and why.

It also requires writing differently. Not keyword density. Not backlink velocity. Something more like: does this page directly and specifically answer the question someone is actually asking? Is the answer in the first 150 words? Is it a claim — something specific, falsifiable, and citable — or is it an observation that an LLM has no reason to surface?

The format that wins in AI search is close to good technical writing or good journalism: lead with the answer, support it with specifics, name things precisely. It's not a new trick. It's the format LLMs were trained on because it's what humans find useful when they actually want an answer.
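The "answer in the first 150 words" question lends itself to a rough lint. A heuristic sketch only: the key terms and sample passages are invented, and substring matching is a crude stand-in for judging whether a claim is actually made.

```python
def answer_leads(page_text, key_terms, window_words=150):
    """Heuristic: do all the key claim terms appear within the first
    `window_words` words of the page? (Naive substring matching.)"""
    lead = " ".join(page_text.split()[:window_words]).lower()
    return all(term.lower() in lead for term in key_terms)

good = ("Churn drops fastest when you fix onboarding: in our data, accounts "
        "that finished setup within 14 days renewed at a much higher rate.")
bad = ("In today's fast-moving landscape, businesses face many challenges "
       "and must consider a variety of factors before deciding.")

print(answer_leads(good, ["churn", "onboarding", "14 days"]))  # True
print(answer_leads(bad, ["churn", "onboarding", "14 days"]))   # False
```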

The measurement gap is the strategic gap

Most of the advantage available right now comes from measuring something almost no one is measuring.

Citation rate in AI responses is auditable. You can run the queries your prospects are asking. You can read what ChatGPT and Perplexity say. You can see who they cite, with what claims, for what context. You can track this over time.
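That audit loop can be sketched abstractly. Here `ask` is a stub for illustration; in a real run it would call a model API (ChatGPT, Perplexity, Gemini), and the queries and brand names are invented.

```python
from datetime import date

def run_audit(queries, brands, ask):
    """One audit pass: ask each query, record which brands each answer cites.

    `ask` is whatever function you wire to a real model API; running this
    daily or weekly gives you citation frequency over time.
    """
    rows = []
    for q in queries:
        answer = ask(q).lower()
        cited = [b for b in brands if b.lower() in answer]
        rows.append({"date": date.today().isoformat(), "query": q, "cited": cited})
    return rows

# Stubbed model for illustration; a real run calls an API instead.
def fake_ask(query):
    return "Most guides recommend Acme for this; Widgetly also comes up."

rows = run_audit(
    ["best inventory system for a small manufacturer"],
    ["Acme", "Widgetly", "Nuts & Bolts Co."],
    fake_ask,
)
print(rows[0]["cited"])  # → ['Acme', 'Widgetly']
```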

The teams doing this have a map. Everyone else is navigating by instruments that stopped measuring the terrain.

Traffic isn't going to zero. Keywords aren't meaningless. The fundamentals of making content findable still matter because many RAG systems use search as their retrieval layer. But those are table stakes — necessary, not sufficient.

The signal has shifted. Traffic tells you about the people who clicked. It doesn't tell you anything about the people who asked an AI instead — which is an increasingly large share of the people who are thinking about buying something in your category.


Spotlight tracks your citation rate across ChatGPT, Perplexity, and Gemini — which queries you appear in, which you don't, and what competitors are being cited instead. Free audit takes two minutes.