Context window exhaustion and degradation after compaction
Severity: 7/10 (High)
Claude Code runs out of context-window capacity; after compaction, the context becomes less effective and loses track of earlier instructions, requiring constant re-explanation of project conventions and specifications.
Sources
- Claude Code Gotchas
- Real Challenges of Claude Code in Coding
- Catastrophic Failures of ChatGpt that's creating major problems for ...
- Top Problems with ChatGPT (2025) and How to Fix Them
- Claude Code: The complete guide to AI-Assisted development
- AI Coding Tools Battle for Developer Loyalty in 2025
- Claude Code Trust Crisis: Why Developers Are Jumping Ship
- what are the top 3 things Claude Code users strugg...
- Reddit User Feedback on Major LLM Chat Tools
- MCP: Building the Bridge Between AI and the Real World
Collection History
Even in freshly started chat sessions, the model sometimes references details or conversations that never occurred or fails to recognize clear and recent input from the user. One OpenAI forum user observed that ChatGPT 'fails to retain or recall critical context… often remembering less relevant details while missing key points.'
If there is too little context, the model may simply make things up. If you provide too much, the model bogs down or becomes too expensive to run. The more data you pack into that window, the more fragile the entire setup becomes.
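This trade-off is essentially a token-budgeting problem. A minimal sketch, assuming a rough 4-characters-per-token estimate and a pinned system prompt (both are illustrative assumptions, not Claude Code's actual mechanism):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(system_prompt: str, messages: list[str], budget: int) -> list[str]:
    """Keep the system prompt pinned and drop the oldest messages
    until the remaining history fits within the token budget."""
    used = estimate_tokens(system_prompt)
    kept: list[str] = []
    # Walk from newest to oldest so recent turns survive.
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        used += cost
        kept.append(msg)
    return list(reversed(kept))
```

Trim too aggressively and the project conventions mentioned above fall out of scope; trim too little and you hit the cost and fragility problems instead.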
Both tools struggle with massive codebases due to context window limits, requiring developers to break tasks into smaller chunks.
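"Breaking tasks into smaller chunks" in practice means partitioning files so each batch stays under a token ceiling. A sketch, assuming a greedy grouping strategy and the same rough token estimate (hypothetical illustration, not either tool's implementation):

```python
def chunk_files(files: dict[str, str], max_tokens: int) -> list[list[str]]:
    """Greedily group file names into batches whose combined
    estimated token count stays under max_tokens."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for name, content in files.items():
        cost = max(1, len(content) // 4)  # ~4 chars per token heuristic
        if current and used + cost > max_tokens:
            batches.append(current)  # close the full batch
            current, used = [], 0
        current.append(name)
        used += cost
    if current:
        batches.append(current)
    return batches
```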
One user summarized the compaction problem: "After it compacts the context, it's dumber... As soon as it compacts, the context window is immediately 80% full again... It can't get more than a couple of steps before compacting again and losing its place."
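The complaint above follows arithmetically: if compaction replaces the history with a summary that itself occupies a large fraction of the window, only a thin slice remains for new work. A toy model (all numbers are assumptions for illustration):

```python
def steps_until_recompaction(window: int, summary_fraction: float,
                             tokens_per_step: int) -> int:
    """How many working steps fit between compactions if the
    post-compaction summary already fills summary_fraction of the window."""
    free = window * (1 - summary_fraction)
    return int(free // tokens_per_step)

# With a 200k-token window, a summary filling 80% of it, and ~15k
# tokens consumed per step, only two steps fit before the next compaction.
```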