GEO · AEO Metrics · Updated 2026.04.28

Citation Rate


Also known as: Citation Share

In one line

Citation rate is the share of a defined prompt set in which an AI answer cites your brand or domain — the headline KPI of GEO.

Going deeper

Citation rate is the headline KPI of GEO. The definition is simple: across a fixed prompt set run N times, what share of answers cite your brand or domain? It plays the role click-through rate played in classic search. It exists because keyword rank lost meaning the moment search engines started composing answers — what matters now is whether you cross the threshold of being cited at all.
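The definition above reduces to a single ratio. A minimal sketch, assuming a hypothetical list of per-answer detection results where each entry records whether the brand was cited:

```python
def citation_rate(answers: list[bool]) -> float:
    """Share of answers that cite the brand, from 0.0 to 1.0."""
    if not answers:
        return 0.0
    return sum(answers) / len(answers)

# Example: 40 prompts x 5 runs = 200 answers, 68 of which cite the brand.
answers = [True] * 68 + [False] * 132
rate = citation_rate(answers)  # 0.34
```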

The measurement workflow has five steps. Define a prompt set (30 to 100 questions spanning category, brand, and long-tail queries). Define the surfaces: ChatGPT, Perplexity, AI Overviews, Claude. Run each prompt N times, typically five to ten. Automatically detect whether your brand or domain is cited in each answer. Aggregate by surface and time window. Because models do not pick the same sources every time, the metric only makes sense as an averaged, repeated measurement, never a single shot.
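The five steps can be sketched as a loop. Everything here is illustrative: `query_surface` is a hypothetical stand-in for whatever API or scraper returns an answer, and the brand pattern and domain are placeholders.

```python
import re
from collections import defaultdict

PROMPTS = ["best crm for startups", "top crm tools 2026"]   # step 1: prompt set
SURFACES = ["chatgpt", "perplexity", "ai_overviews"]        # step 2: surfaces
N_RUNS = 5                                                  # step 3: repetitions
BRAND = re.compile(r"\bacme\b|acme\.example", re.I)         # step 4: hypothetical detector

def query_surface(surface: str, prompt: str) -> str:
    """Placeholder: fetch one answer text from the given surface."""
    return "..."  # replace with a real API call or scraper

def measure() -> dict[str, float]:
    hits, totals = defaultdict(int), defaultdict(int)
    for surface in SURFACES:
        for prompt in PROMPTS:
            for _ in range(N_RUNS):                 # averaged, never single-shot
                answer = query_surface(surface, prompt)
                totals[surface] += 1
                if BRAND.search(answer):            # step 4: detect the citation
                    hits[surface] += 1
    return {s: hits[s] / totals[s] for s in SURFACES}  # step 5: aggregate by surface
```

Aggregating by time window on top of this is a matter of stamping each run with a date and grouping, which the same pattern extends to.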

For marketers, citation rate is the fastest read on whether GEO work is moving the needle. A practical KPI structure is three axes — citation rate by surface, by category, and relative to competitors. Villion auto-aggregates these weekly across ChatGPT, Perplexity and AI Overviews, splitting surfaces so you can compare like-for-like instead of squashing them into one global number.

Read alongside the surfaces, the metric becomes richer. Perplexity is the easiest to instrument because every sentence is footnoted. ChatGPT Search needs you to split source-card placement from in-prose mentions. AI Overviews track closely with classic SEO citation patterns because they ride the search index. Compared to share of voice, citation rate is closer to your absolute presence, while SoV captures share against competitors.

Two pitfalls to flag. First, optimising the number alone can backfire. A citation in a negative context or attached to a wrong product is a citation that hurts you. Always read the metric beside sentiment and definition accuracy. Second, treating a single run as the KPI is misleading — model answers are stochastic, so you need an average across repeated runs and ideally a confidence interval before you call a trend.
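On the second pitfall, one way to put an interval around the averaged rate is the Wilson score interval for a proportion. A sketch, not a prescription; the 68-of-200 figures are invented for illustration:

```python
import math

def wilson_interval(cited: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% (by default) confidence bounds for the true citation rate."""
    if total == 0:
        return (0.0, 0.0)
    p = cited / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (centre - margin, centre + margin)

# 200 answers, 68 citations: point estimate 0.34, but the interval spans
# roughly 0.28 to 0.41, so a small week-over-week wobble may be pure noise.
low, high = wilson_interval(68, 200)
```

Before calling a trend, check that the new week's interval clears the old one.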

Sensible next steps: build a 30 to 50 prompt category set, baseline citation rate over four weeks, split KPIs by surface, flag negative-context and factual-error mentions separately, and when the metric moves, diagnose whether the change came from your own pages or from upstream authority signals like press and Wikipedia. Citation rate is the AI-era equivalent of keyword rank — it deserves a permanent slot on the quarterly KPI board.

Related terms

How does your brand show up in AI answers?

Villion measures how your brand appears across ChatGPT, Perplexity and AI Overviews, then automates the work that lifts citation rate and share of voice.

Get a free audit