community.openai.com
Severe regression in GPT-5 Codex performance
Excerpt
I need to raise a critical issue with GPT-5 Codex. Since the update, coding tasks that GPT-4.1 (and even 4o) handled smoothly are now **4–7 times slower** with GPT-5 Codex. This isn't about "deeper reasoning": these are basic coding workflows that are now painfully delayed, breaking developer productivity.

Key problems:

- **Severe slowdown** compared to GPT-4.1 (minutes instead of seconds).
- **No option to select the old models** (4.1, 4o) that worked much better for fast coding.
- **Flow disruption**: it's impossible to keep a fast development pace when the model "thinks" this long.
- Competitors (Claude Code, DeepSeek, etc.) are noticeably faster right now.

…

I'm not a dev per se; this is more of a personal project to see what AI coding is capable of, but this newer model seems to overcomplicate things the previous model handled just fine, and faster. Now I've once again hit my rate limit. I've been using Codex for a little over a week, and it's only in the last two days that I've started hitting my rate limit. The model frequently has trouble with simple indentation in Python, things I can then go back and correct in a couple of seconds.

…

I signed in today with high hopes after the severe degradation of quality in Claude Code (CC). I'm new to Codex, so perhaps it's my fault that I don't know how to use it properly, but compared to any of the agents in VS Code it's so excruciatingly slow as to be practically unusable. What's even more concerning is that after some refactoring done with CC and GPT-5 (via Copilot), it introduced several errors that it is now incapable of correcting. It's not about the 23€ but about the hype and expectations vs. reality. I really wonder what the real story behind this is: whether it's my lack of understanding or Codex not working as advertised.
Related Pain Points
Code generation regressions and unreliable output quality
(8) Post-update, Codex exhibits significant regressions in previously stable workflows: it generates code with logical inconsistencies, ignores design specifications (e.g., front-end code ignoring provided UI designs), and requires multiple re-runs and manual fixes.
Unclear quota and billing transparency issues
(6) The API does not provide clear feedback on remaining quota or a detailed billing breakdown, so developers cannot easily track usage or understand cost allocation across API calls.
GPT-5 performance degradation on simple tasks
(4) GPT-5 can feel slower than GPT-4o on simpler everyday queries and coding tasks. The community pushed back on this performance degradation for simple coding tasks before OpenAI tuned its model routing.
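Since the API reports no running quota, one common workaround for the transparency pain point above is to accumulate the per-response `usage` counts client-side. A minimal sketch, assuming each response exposes a usage dict with `prompt_tokens` and `completion_tokens` fields (as OpenAI chat-completion responses do); the `UsageTracker` class and the sample numbers are illustrative, not part of any SDK:

```python
from collections import defaultdict

class UsageTracker:
    """Accumulates per-model token usage from API response `usage` blocks."""

    def __init__(self):
        # Running totals of prompt/completion tokens, keyed by model name.
        self.totals = defaultdict(lambda: {"prompt": 0, "completion": 0})

    def record(self, model: str, usage: dict) -> None:
        # Tolerate missing fields: some responses may omit one of the counts.
        self.totals[model]["prompt"] += usage.get("prompt_tokens", 0)
        self.totals[model]["completion"] += usage.get("completion_tokens", 0)

    def total_tokens(self, model: str) -> int:
        t = self.totals[model]
        return t["prompt"] + t["completion"]

# Example with two hypothetical calls:
tracker = UsageTracker()
tracker.record("gpt-5-codex", {"prompt_tokens": 1200, "completion_tokens": 300})
tracker.record("gpt-5-codex", {"prompt_tokens": 800, "completion_tokens": 250})
print(tracker.total_tokens("gpt-5-codex"))  # → 2550
```

This gives a running local count to compare against plan limits, though it cannot recover the provider's own billing breakdown.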