The Risks of Overcontexting
Overview
Overcontexting — the inclusion of excessive or undifferentiated information — is a common failure mode that degrades output accuracy rather than improving it. This lesson examines the mechanisms by which excess context produces inferior results.
Failure Mechanisms
- The model misprioritizes information, assigning attention to irrelevant content
- Output coherence degrades as competing signals introduce ambiguity
- Conflicting context elements generate reasoning noise that compounds across tokens
The correct heuristic is not "include more to be safe" but rather:
Include precisely what is necessary for correctness — no more, no less.
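One way to act on this heuristic is to filter candidate context items before they ever reach the prompt. The sketch below is illustrative only: `score_relevance`, `prune_context`, and the 0.25 threshold are invented names and values, and the keyword-overlap score is a deliberately crude stand-in for whatever relevance measure a real pipeline would use.

```python
def score_relevance(item: str, query: str) -> float:
    # Crude relevance score: fraction of query terms that appear in the item.
    query_terms = set(query.lower().split())
    item_terms = set(item.lower().split())
    if not query_terms:
        return 0.0
    return len(query_terms & item_terms) / len(query_terms)

def prune_context(items: list[str], query: str, threshold: float = 0.25) -> list[str]:
    # Keep only context items whose relevance to the query clears the threshold,
    # rather than including everything "to be safe".
    return [item for item in items if score_relevance(item, query) >= threshold]

items = [
    "The deploy script targets the staging cluster.",
    "Office plants are watered on Fridays.",
    "Staging deploys require a passing smoke test.",
]
query = "deploy to staging"
kept = prune_context(items, query)  # drops the irrelevant second item
```

The design choice that matters here is not the scoring function but the gate itself: each candidate item must earn its place against the query, which directly enforces "no more, no less".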
Impact Analysis
Excess context is not merely a waste of token capacity. It actively dilutes the signal that should guide the model's response, reducing the probability of correct output. The relationship between context volume and output quality is not linear; beyond the sufficiency threshold, additional context produces diminishing and eventually negative returns.
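The nonlinear relationship described above can be made concrete with a toy model. Everything here is invented for illustration, not an empirical result: quality is modeled as coverage of the needed information multiplied by the fraction of the prompt that is signal, so extra irrelevant tokens dilute quality even once coverage is complete.

```python
def toy_output_quality(relevant_tokens: int, total_tokens: int,
                       needed_tokens: int = 200) -> float:
    # Toy model: quality rises with coverage of the needed information,
    # but is diluted by the fraction of the prompt that is noise.
    if total_tokens == 0:
        return 0.0
    coverage = min(relevant_tokens, needed_tokens) / needed_tokens
    signal_fraction = relevant_tokens / total_tokens
    return coverage * signal_fraction

# Below the sufficiency threshold, adding relevant context helps:
q_sparse = toy_output_quality(relevant_tokens=100, total_tokens=100)      # 0.5
q_sufficient = toy_output_quality(relevant_tokens=200, total_tokens=200)  # 1.0
# Past the threshold, extra irrelevant tokens only dilute the signal:
q_bloated = toy_output_quality(relevant_tokens=200, total_tokens=800)     # 0.25
```

In this toy formulation the curve peaks exactly at sufficiency: padding the prompt from 200 to 800 tokens of mostly noise drops modeled quality from 1.0 to 0.25, mirroring the negative returns the section describes.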
Key Takeaways
- Excess context actively reduces output accuracy
- Noise competes with signal for the model's attention budget
- Beyond the sufficiency threshold, additional context yields negative returns