Hallucination
In one line
Hallucination is when an LLM produces confidently stated information that is simply wrong, and it is one of the biggest threats to citation accuracy in GEO.
Going deeper
When an LLM hallucinates, it writes something that simply isn't true in the same fluent tone it uses for accurate facts. The risk grows with specifics: brand names, prices, specs, founding years.
In GEO this is not just a factual nit; it is a reputation risk. Models really do quote wrong prices, invent features, or attach a competitor's capability to your brand. Users tend to trust the answer at face value, so the fallout shows up in support tickets.
Teams typically attack it from two sides: clean up the source material the model grounds against, and run regular monitoring prompts about your own brand (sketched below). Without both, you usually discover a wrong answer only after it has calcified.
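To make the monitoring side concrete, here is a minimal Python sketch of a recurring brand audit. Everything in it is illustrative: ask_llm is a hypothetical stand-in for whatever chat API you actually call, and the brand name, prompts, and expected facts are placeholders.

```python
# Minimal sketch of a brand-monitoring loop. ask_llm, the prompts, and
# the expected facts are all illustrative placeholders, not a real API.

FACTS = {
    "price": "$49/month",   # the detail we expect answers to contain
    "founded": "2021",
}

PROMPTS = {
    "price": "How much does ExampleBrand cost per month?",
    "founded": "When was ExampleBrand founded?",
}

def ask_llm(prompt: str) -> str:
    """Hypothetical stub; swap in a real chat-completion call."""
    return "ExampleBrand costs $79/month."  # a hallucinated price

def audit() -> list[str]:
    """Return every prompt whose answer is missing the expected fact."""
    failures = []
    for key, prompt in PROMPTS.items():
        answer = ask_llm(prompt)
        if FACTS[key] not in answer:
            failures.append(f"{prompt!r} -> {answer!r} (expected {FACTS[key]!r})")
    return failures

if __name__ == "__main__":
    for line in audit():
        print("MISMATCH:", line)
```

Substring matching is deliberately crude here; a production pipeline would normally normalize answers, extract entities, or use a judge model, and would run the same prompt set on a schedule so a wrong answer is caught before it calcifies.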
Related terms
Grounding
Grounding is the practice of anchoring an LLM's answer to external evidence — retrieved documents, search results, structured data — to push factual accuracy higher.
Source Attribution
Source attribution is the way AI answers expose where the information came from — citation cards, footnotes or inline links that let users verify the claim and click through.
AI Sentiment
AI Sentiment captures the tone (positive, neutral, or negative) of the passages where your brand appears inside AI answers; it is used to weight raw citation counts.
LLMO
LLMO (Large Language Model Optimization) is the work of shaping content, data and context signals so that LLMs understand and cite your brand correctly.
Citation Rate
Citation rate is the share of a defined prompt set in which an AI answer cites your brand or domain — the headline KPI of GEO.
How does your brand show up in AI answers?
Villion measures how your brand appears across ChatGPT, Perplexity and AI Overviews, then automates the work that lifts citation rate and share of voice.
Get a free audit