Context Rot
In one line
Context rot is the phenomenon where LLM accuracy degrades as the context grows longer — the empirical reason a giant context window does not automatically mean better answers.
Going deeper
Context rot describes how LLM accuracy degrades as the context window fills up. A model that accepts a million tokens does not use all million of them equally well. Longer contexts lead to missed details, over-weighting of early content, and conflicting facts blended into the answer.
The marketing-side takeaway is simple: dumping a giant PDF or full product manual into the prompt is less safe than it looks. In-house assistants loaded with everything at once routinely miss a single critical policy line and produce confidently wrong answers.
In practice, teams compensate by retrieving only the relevant slices via RAG, chunking long documents along semantic boundaries, and placing key information near the beginning or end of the context. The rule of thumb is shifting from 'we have a huge window, fill it' to 'feed only what is needed, cleanly'.
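The tactics above can be sketched in a few lines. This is a minimal illustration, not a production retriever: the function names are hypothetical, paragraph breaks stand in for semantic boundaries, and keyword overlap stands in for a real embedding-based relevance score. The one idea it demonstrates faithfully is placing the strongest matches at the edges of the context rather than burying them in the middle.

```python
def chunk_by_paragraph(document: str) -> list[str]:
    """Split on blank lines -- a crude stand-in for semantic chunking."""
    return [p.strip() for p in document.split("\n\n") if p.strip()]

def score(chunk: str, query: str) -> int:
    """Naive relevance: count query words appearing in the chunk.
    A real system would use embeddings or a retrieval index instead."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in chunk.lower())

def build_context(document: str, query: str, max_chunks: int = 4) -> str:
    """Keep only the top-scoring chunks, then place the two best
    at the start and end of the context to counter 'lost in the middle'."""
    chunks = chunk_by_paragraph(document)
    ranked = sorted(chunks, key=lambda c: score(c, query), reverse=True)
    kept = ranked[:max_chunks]
    if len(kept) > 2:
        # Best chunk first, second-best last, weaker matches in between.
        kept = [kept[0]] + kept[2:] + [kept[1]]
    return "\n\n".join(kept)
```

Feeding `build_context(manual_text, user_question)` to the model instead of the full manual is the shift the paragraph describes: a small, relevant context assembled deliberately, not a giant window filled by default.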
Related terms
Context Window
The context window is the maximum number of tokens an LLM can take in at once — it defines how much content the model can consider in a single prompt.
Lost in the Middle
Lost in the Middle is the well-documented effect where information sitting in the middle of a long context is used much less reliably than what sits at the beginning or end — the empirical basis for 'put the important stuff at the front or back'.
RAG
RAG (Retrieval-Augmented Generation) lets an LLM fetch external documents at answer time and ground its response in them — the technique behind ChatGPT Search, Perplexity and most AI search products.
Context Engineering
Context engineering goes beyond crafting a single prompt — it is the design discipline of deciding which context to assemble and how to feed it to the model, an idea that crystallised in 2024–2025.
LLM
A large language model (LLM) is a neural network trained on massive text corpora to understand and generate human language — the engine behind ChatGPT, Claude, Gemini and similar products.