# Reddit User Feedback on Major LLM Chat Tools
Excerpt
## ChatGPT (OpenAI)

### Common Pain Points and Limitations

**Limited context memory:** A top complaint is ChatGPT’s inability to handle long conversations or large documents without forgetting earlier details. Users frequently hit the context length limit (a few thousand tokens) and must truncate or summarize information. One user noted, *“increasing the size of the context window would be far and away the biggest improvement… That’s the limit I run up against the most.”* When the context is exceeded, ChatGPT forgets initial instructions or content, leading to frustrating drops in quality mid-session.

**Message caps for GPT-4:** ChatGPT Plus users lament the 25-message/3-hour cap on GPT-4 usage (a limit present in 2023). Hitting this cap forces them to wait, interrupting work. Heavy users find this throttling a major pain point.

**Strict content filters (“nerfs”):** Many Redditors feel ChatGPT has become overly restrictive, often refusing requests that previous versions handled. A highly upvoted post complained that *“pretty much anything you ask it these days returns a ‘Sorry, can’t help you’… How did this go from the most useful tool to the equivalent of Google Assistant?”*

**Hallucinations and errors:** Despite its advanced capability, ChatGPT can produce incorrect or fabricated information with confidence. Some users have observed this getting worse over time, suspecting the model was “dumbed down.” For instance, a user in finance said ChatGPT used to calculate metrics like NPV or IRR correctly, but after updates, *“I am getting so many wrong answers… it still produces wrong answers [even after correction]. I really believe it has become a lot dumber since the changes.”* Such unpredictable inaccuracies erode trust for tasks requiring factual precision.

**Incomplete code outputs:** Developers often use ChatGPT for coding help, but they report that it sometimes omits parts of the solution or truncates long code. One user shared that ChatGPT now *“omits code, produces unhelpful code, and just sucks at the thing I need it to do… It often omits so much code I don’t even know how to integrate its solution.”* This forces users to ask follow-up prompts to coax out the rest, or to manually stitch together answers – a tedious process.

**Performance and uptime concerns:** A perception exists that ChatGPT’s performance for individual users declined as enterprise use increased. *“I think they are allocating bandwidth and processing power to businesses and peeling it away from users, which is insufferable considering what a subscription costs!”* one frustrated Plus subscriber opined. Outages or slowdowns during peak times have been noted anecdotally, which can disrupt workflows.
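The "ask for the rest" workflow users describe can be partly automated. As a minimal, hypothetical sketch (not tied to any real SDK), the helper below uses a simple heuristic – an odd number of ``` fences – to flag a reply that was likely cut off mid code block and to produce a follow-up prompt:

```python
from typing import Optional


def looks_truncated(reply: str) -> bool:
    """Heuristic: a reply with an unclosed ``` fence was likely cut off."""
    return reply.count("```") % 2 == 1


def next_prompt(reply: str) -> Optional[str]:
    """Return a follow-up prompt if the reply appears incomplete, else None."""
    if looks_truncated(reply):
        return "Your previous code block was cut off. Continue exactly where you stopped."
    return None


# A reply that opens a code block but never closes it is flagged:
partial = "Here is the function:\n```python\ndef solve(xs):\n    total = 0"
complete = "Done:\n```python\nprint('hi')\n```\n"
assert next_prompt(partial) is not None
assert next_prompt(complete) is None
```

A fence count is of course a rough proxy; a real tool might also check whether the reply ends mid-sentence or mid-statement before re-prompting.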
Source URL: https://cuckoo.network/blog/2025/04/15/reddit-user-feedback-llm-chat-tools-underserved-needs

## Related Pain Points
**Rate limit enforcement disrupts development workflows (7):** Developers encounter frequent `RateLimitError` exceptions that block API calls and slow development cycles. The limits are also opaque: it is unclear whether they are shared across APIs, and there is no clear path to increasing quotas.
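The standard mitigation for intermittent rate-limit errors is retrying with exponential backoff. The sketch below is a generic illustration, not the SDK's own mechanism; `RateLimitError` here is a stand-in class (the OpenAI Python SDK raises an exception of the same name) so the example runs standalone:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for an SDK rate-limit exception (e.g. openai.RateLimitError)."""


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on RateLimitError with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Sleep base_delay * 2^attempt seconds, plus jitter to avoid
            # many clients retrying in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))


# Simulated endpoint: fails twice with a rate limit, then succeeds.
calls = {"n": 0}


def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429: too many requests")
    return "ok"


assert with_backoff(flaky_call, base_delay=0.01) == "ok"
assert calls["n"] == 3
```

In production the wrapped `fn` would be the actual API call, and `base_delay` should respect any retry-after hint the API returns.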
**OpenAI API reliability degradation from rapid feature shipping (7):** OpenAI experiences roughly one incident every 2-3 days, with a major incident on January 8 affecting image prompts across ChatGPT and the API. The pattern reflects a speed-vs-stability tradeoff: rapid shipping of new models, Codex, and image-generation features is compromising reliability.
**Incomplete code outputs and omitted solution parts (7):** Developers report that ChatGPT omits parts of code solutions, truncates long code segments, and provides unhelpful or incomplete code. This forces users to make multiple follow-up prompts or manually stitch together answers, making iterative development tedious.
**Context window exhaustion and degradation after compaction (7):** Claude Code runs out of context window capacity; after compaction, the context becomes less effective and loses track of earlier instructions, requiring constant re-explanation of project conventions and specifications.
**Model regression and quality degradation (7):** Users report that GPT-4 performance has regressed, performing closer to GPT-3.5 than expected, and there is a widespread perception that the model was "dumbed down" over time. Tasks that worked correctly in 2024 now produce incorrect or inconsistent results.
**Limited context window causes information loss (6):** ChatGPT cannot handle long conversations or large documents without hitting context length limits (a few thousand tokens). Users must truncate or summarize information, and when the context is exceeded, ChatGPT forgets initial instructions or content, leading to quality drops mid-session.
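A common client-side workaround is to trim the oldest turns before each request so the conversation fits the budget. The sketch below counts characters as a rough proxy (real systems would count tokens with a tokenizer such as tiktoken) and always keeps the system message pinned; the message shape mirrors the familiar role/content chat format:

```python
def trim_history(messages, budget_chars=8000):
    """Drop the oldest non-system turns until total content fits the budget.

    messages: list of {"role": ..., "content": ...} dicts, oldest first.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def size(msgs):
        return sum(len(m["content"]) for m in msgs)

    while rest and size(system) + size(rest) > budget_chars:
        rest.pop(0)  # forget the oldest exchange first
    return system + rest


history = [{"role": "system", "content": "You are terse."},
           {"role": "user", "content": "x" * 3000},
           {"role": "assistant", "content": "y" * 3000},
           {"role": "user", "content": "z" * 3000}]
trimmed = trim_history(history, budget_chars=7000)
assert trimmed[0]["role"] == "system"  # system prompt always survives
assert sum(len(m["content"]) for m in trimmed) <= 7000
```

Dropping whole turns loses information, which is exactly the complaint above; summarizing evicted turns into a short note is a common refinement.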
**Conservative content policies limiting creative use cases (4):** Anthropic's safety-first approach results in overly cautious responses for creative writing, marketing content, and edgy humor. Users report 23% more declined requests compared to GPT-4 for legitimate creative tasks, frustrating marketing and creative professionals.