
Context Rot

Also known as: Long-Context Degradation

In one line

Context rot is the phenomenon where LLM accuracy degrades as the context grows longer — the empirical reason a giant context window does not automatically mean better answers.

Going deeper

Context rot describes how LLM accuracy degrades as the context window fills up. A model that accepts a million tokens does not use all million tokens equally well. As the context grows, models miss details buried in the middle, over-weight content near the beginning, and blend conflicting facts into a single answer.
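The effect is usually demonstrated with a "needle in a haystack" probe: bury one decisive fact at different depths inside filler text and check whether the model still recovers it. The sketch below illustrates the idea only; `build_probe`, `call_model`, and the filler and needle strings are hypothetical names, and `call_model` stands for whatever wrapper you have around your own LLM client.

```python
# Illustrative "needle in a haystack" probe for context rot.
# call_model is a placeholder: any function that takes a prompt
# string and returns the model's answer as a string.
from typing import Callable

FILLER = "The committee reviewed the quarterly report without comment. "
NEEDLE = "The refund window for enterprise plans is 45 days."
QUESTION = "What is the refund window for enterprise plans?"

def build_probe(total_sentences: int, needle_position: float) -> str:
    """Bury NEEDLE at a relative depth (0.0 = start, 1.0 = end) in filler."""
    sentences = [FILLER] * total_sentences
    index = int(needle_position * (total_sentences - 1))
    sentences[index] = NEEDLE + " "
    return "".join(sentences) + "\n\nQuestion: " + QUESTION

def measure_recall(call_model: Callable[[str], str]) -> None:
    """Probe several depths; context rot shows up as misses, often mid-context."""
    for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
        prompt = build_probe(total_sentences=2000, needle_position=depth)
        answer = call_model(prompt)
        print(f"depth={depth:.2f} hit={'45' in answer}")
```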

The marketing-side takeaway is simple: dumping a giant PDF or full product manual into the prompt is less safe than it looks. In-house assistants loaded with everything at once routinely miss a single critical policy line and produce confidently wrong answers.

In practice, teams compensate by retrieving only the relevant slices via RAG, chunking long documents along semantic boundaries, and placing key information near the beginning or end of the context, as sketched below. The rule of thumb has shifted from 'we have a huge window, fill it' to 'feed only what is needed, cleanly'.
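A minimal sketch of that workflow, under two stated assumptions: paragraph breaks stand in for semantic boundaries, and a toy keyword-overlap score stands in for real embedding-based retrieval. The names `chunk_by_paragraph`, `score`, and `build_context` are illustrative, not any particular library's API.

```python
# Sketch of the mitigation described above: chunk along semantic
# boundaries, keep only the relevant slices, and place the strongest
# evidence at the edges of the assembled context.

def chunk_by_paragraph(document: str) -> list[str]:
    """Split on blank lines, the simplest semantic boundary."""
    return [p.strip() for p in document.split("\n\n") if p.strip()]

def score(chunk: str, query: str) -> int:
    """Toy relevance score: count of words shared with the query."""
    return len(set(chunk.lower().split()) & set(query.lower().split()))

def build_context(document: str, query: str, top_k: int = 4) -> str:
    """Keep only the top_k most relevant chunks and order them for recall."""
    chunks = chunk_by_paragraph(document)
    ranked = sorted(chunks, key=lambda c: score(c, query), reverse=True)[:top_k]
    # Counter the mid-context blind spot: strongest chunk first,
    # second-strongest last, the rest in the middle.
    if len(ranked) > 2:
        ordered = [ranked[0], *ranked[2:], ranked[1]]
    else:
        ordered = ranked
    return "\n\n".join(ordered) + f"\n\nQuestion: {query}"
```

The edge placement in `build_context` reflects the observation above that models weight the beginning and end of the context more reliably than the middle; a production system would swap the keyword score for an embedding similarity search but keep the same shape.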

