GEO·AEO Foundations · Updated 2026.04.28

Hallucination

Also known as: AI hallucination, AI factual error

In one line

Hallucination is when an LLM produces confidently stated information that is simply wrong, and it is one of the biggest threats to citation accuracy in GEO.

Going deeper

A hallucinated claim arrives in the same fluent tone the model uses for accurate facts, so nothing in the answer signals that it is false. The risk grows with specifics: brand names, prices, specs, founding years.

In GEO this is not just a factual nit; it is a reputation risk. Models really do quote wrong prices, invent features, or attach a competitor's capability to your brand. Users tend to trust the answer at face value, so the fallout shows up in support tickets.

Teams typically attack it from two sides. One, clean up the source material the model grounds against. Two, run regular monitoring prompts about your own brand. Without both, you usually only discover the wrong answer after it has already calcified.
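The monitoring side lends itself to simple automation. Below is a minimal sketch of the idea in Python: re-run a fixed set of prompts about your brand and check each answer against a hand-maintained fact sheet. Everything here is illustrative, not Villion's implementation or any vendor's API: the ask_model() stub, the "Acme Analytics" brand, the prompts, and the prices are all made-up placeholders.

```python
"""Minimal brand-monitoring sketch (illustrative only)."""

# Prompts you re-run on a schedule to see what models say about your brand.
MONITORING_PROMPTS = [
    "What does Acme Analytics cost per month?",
    "Does Acme Analytics offer an on-premise deployment?",
    "When was Acme Analytics founded?",
]

# Ground truth: strings a correct answer must contain, or must not contain.
FACT_SHEET = {
    "What does Acme Analytics cost per month?": {
        "must_contain": ["$49"],      # current list price
        "must_not_contain": ["$99"],  # an old price models keep repeating
    },
    "Does Acme Analytics offer an on-premise deployment?": {
        "must_contain": [],
        "must_not_contain": ["on-premise", "on-prem"],  # cloud-only product
    },
}


def ask_model(prompt: str) -> str:
    """Stand-in for whatever LLM or answer-engine API you actually query.

    Replace with a real call; the canned answer below lets the script run
    end to end and demonstrates how a hallucination gets flagged.
    """
    return "Acme Analytics costs $99 per month and offers on-premise deployment."


def check_answer(prompt: str, answer: str) -> list[str]:
    """Return human-readable issues found in a single answer."""
    issues = []
    rules = FACT_SHEET.get(prompt, {})
    text = answer.lower()
    for needed in rules.get("must_contain", []):
        if needed.lower() not in text:
            issues.append(f"missing expected fact: {needed!r}")
    for banned in rules.get("must_not_contain", []):
        if banned.lower() in text:
            issues.append(f"possible hallucination: {banned!r}")
    return issues


def run_monitoring() -> None:
    """Ask every monitoring prompt and report any mismatches with the fact sheet."""
    for prompt in MONITORING_PROMPTS:
        answer = ask_model(prompt)
        for issue in check_answer(prompt, answer):
            # In practice you would log this, alert, or open a ticket.
            print(f"[{prompt}] {issue}")


if __name__ == "__main__":
    run_monitoring()
```

The fact sheet is the important part of this sketch: it forces you to write down, in a checkable form, exactly what a correct answer about your brand must and must not say, which is the same material you want in the source pages the model grounds against.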

