Sources

1577 sources collected

Next.js became the hype of the day (or year), but it's slowly suffocating under the weight of technical debt. There are plenty of big pain points with Next.js:

- Slow and painful dev mode: if you change something and need to check more than one route, you have to wait for each route to compile. The Next.js dev server also uses a few gigabytes of RAM.
- Turbopack complains about perfectly working TS code, struggles to understand `:global` in CSS modules, and is still not production-ready.

12/13/2024Updated 3/22/2026

😬 Why teams like Northflank are moving away from Next.js ⚠️ Major developer frustrations and real-world bottlenecks 💡 The hidden downsides: overengineering, vendor lock-in & performance issues 🚀 Best alternatives in 2025: Astro, Remix, Qwik, and good ol' Vite! … client confusion with RSC, slower development due to the complexity. Real challenges of Next.js: overengineering, the App Router, middleware, edge functions, etc. Simple apps feel too complex; steep learning curve, performance concerns, cold starts on serverless, RSC introducing latency and complexity. Okay, so let me just go with this one. Vendor lock-in, Vercel-optimized features, hard to migrate to AWS, do … static export, RSC, until suddenly a simpler project feels like launching a spaceship. Common complaints: a steep, harder learning curve. With larger apps, complexity grows exponentially; the framework … routes. Tight integration with Vercel for easy deployment, strong documentation and ecosystem. Okay. Cons: increased complexity, poor DX for small projects, performance issues

4/20/2025Updated 1/1/2026

So we had a huge amount of files we needed to transfer to the client. And now we have nearly 200,000 lines of code. And that got even worse, right? Even though it got better with the bundling mechanisms there, Webpack was still an issue for us.

8/24/2025Updated 8/29/2025

We dive deep into the downsides of Next.js, including performance issues, serverless limitations, growing complexity, and vendor lock-in concerns. Whether you're a frontend developer, CTO, or tech enthusiast, this video gives you the full picture. … where developers often share the unfiltered truth. Ah yes, the real talk happens there sometimes. Exactly, and the article sums up some recurring frustrations people are voicing about Next.js: things like slowness during the actual development process, so not just the final site speed but the dev experience. Right, that plus unexpected bugs being introduced and some limitations cropping up around server/client interactions.

4/17/2025Updated 4/19/2025

The web development community is experiencing a wave of criticism toward Next.js, one of the most popular React frameworks. What started as a single developer's blog post about logging difficulties has sparked a broader conversation about the framework's increasing complexity and its tight coupling with Vercel's hosting platform.

### Middleware and Logging Nightmares

The core issue that triggered this discussion centers around Next.js middleware and production logging. Unlike traditional web frameworks where setting up logging is straightforward, Next.js presents unique challenges due to its complex execution model. Developers report spending hours trying to implement basic logging functionality that works across different runtime environments - edge functions, server-side rendering, and client-side code.

The problem stems from Next.js running code in multiple contexts simultaneously. Some code executes on edge servers, some on traditional Node.js servers, and some in browsers. This creates confusion about where logs actually appear and how to maintain consistent logging across the entire application lifecycle.

**Common Next.js Pain Points:**

- **Middleware limitations**: Single middleware file requirement, complex chaining
- **Logging difficulties**: Inconsistent behavior across edge/server/client environments
- **Runtime confusion**: Code execution context unclear (edge vs server vs client)
- **Vercel coupling**: Features optimized for Vercel hosting, problematic elsewhere
- **Documentation gaps**: Missing details about execution contexts and gotchas

### The Vercel Vendor Lock-in Debate

A significant portion of the community discussion focuses on allegations that Next.js is deliberately designed to push developers toward Vercel's paid hosting services. Many developers report that features work seamlessly on Vercel but become problematic when deployed elsewhere. This has led to accusations that the framework's complexity isn't accidental but rather a business strategy. ... 
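One way to make the runtime confusion at least visible in logs is to tag every line with the context it ran in. A minimal TypeScript sketch of such a helper — this is a hypothetical utility, not a Next.js API; the only framework-specific assumption is that Vercel's Edge Runtime exposes a global `EdgeRuntime` string:

```typescript
// Hypothetical context-tagging logger. Not a Next.js API - just a sketch of
// how logs from the three execution environments can be told apart.
type Runtime = "edge" | "server" | "browser";

function detectRuntime(): Runtime {
  // Vercel's Edge Runtime exposes a global `EdgeRuntime` string.
  if (typeof (globalThis as any).EdgeRuntime === "string") return "edge";
  // Browsers expose a global `window`.
  if (typeof (globalThis as any).window !== "undefined") return "browser";
  // Otherwise assume a traditional Node.js server.
  return "server";
}

// Prefix every log line with its context so aggregated logs stay readable.
function log(message: string): string {
  const line = `[${detectRuntime()}] ${message}`;
  console.log(line);
  return line;
}
```

Under Node.js, `log("request received")` prints `[server] request received`; the same call in middleware running on the Edge Runtime would be tagged `[edge]` instead.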
Several developers shared stories of inheriting Next.js projects that were so tightly coupled to Vercel's infrastructure that migrating to other hosting providers proved nearly impossible, sometimes requiring complete rewrites.

### Breaking Changes and API Instability

The community has expressed frustration with Next.js's rapid release cycle and frequent breaking changes. With version 15 recently released, developers note that the framework has introduced 15 major versions in 8 years, each potentially containing backwards-incompatible changes. This creates maintenance burdens for long-term projects and makes it difficult for teams to keep applications updated.

The transition from the Pages Router to the App Router has been particularly controversial. Many developers found the Pages Router intuitive and straightforward, but the newer App Router introduces additional complexity that some argue is unnecessary for most applications.

**Next.js Version History:**

9/2/2025Updated 10/3/2025

In this video I'll share 5 performance killers that are eating your Next.js application's performance. Many developers say that Next.js is slow, when in fact they are using it wrong. ... And even worse, all your images load immediately when the page opens, even the ones below the fold that users might never scroll to. I see developers ship apps with hero images that take 8 seconds to load on mobile connections because they never tested on anything slower than their office Wi-Fi. … Fetching data in useEffect creates a wasted round trip: your component renders, triggers a data fetch, then re-renders with the data. When child components also fetch data, you create a waterfall where each component waits for its parent before making its own request. So here's the exact sequence that happens: the page loads, React hydrates your component, useEffect runs, and then the fetch request starts, and only then does your data begin loading. That's three steps before any data even starts moving. And the problem gets worse when you have nested components that each fetch their own … You might think this is fine because the data loads eventually, but your users are staring at loading spinners for anywhere between 3 and 5 seconds when they could have seen the content immediately. And here's what nobody mentions: every one of those useEffect fetches happens after your JavaScript … The most common culprit is importing entire libraries when you only need one function. For example, importing all of `lodash` pulls in 70 kilobytes when you probably only needed one 2-kilobyte function. So here's what actually happens when you import an entire library: you add one line of code that looks innocent, but your bundle size jumps by 70 kilobytes or more. Most developers never check what they are actually shipping until the app starts to feel slow.
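The hydrate → effect → fetch sequence is why nested latencies add up instead of overlapping. A small TypeScript sketch with fake network calls (all names and delays are invented) contrasts the waterfall with starting both requests at once, as a server component or route loader could:

```typescript
// Simulated network call: resolves after `ms` milliseconds, standing in
// for a real fetch. Names and latencies are invented for the example.
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fakeFetch(name: string, ms: number): Promise<string> {
  await delay(ms);
  return `${name}-data`;
}

// Waterfall: the child's fetch starts only after the parent's finishes,
// mirroring nested components that each fetch in their own useEffect.
async function waterfall(): Promise<number> {
  const start = Date.now();
  await fakeFetch("parent", 50);
  await fakeFetch("child", 50);
  return Date.now() - start; // roughly 100 ms: the latencies add up
}

// Parallel: both requests start immediately, so the latencies overlap.
async function parallel(): Promise<number> {
  const start = Date.now();
  await Promise.all([fakeFetch("parent", 50), fakeFetch("child", 50)]);
  return Date.now() - start; // roughly 50 ms
}
```

With two 50 ms requests the waterfall takes about twice as long as the parallel version; with each extra nesting level the gap widens by another round trip.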
The worst part is that you are not just importing lodash. You are probably also importing Moment.js for dates, entire icon libraries when you use five icons, and analytics packages that pull in dependencies you never asked for. Another common mistake is not using dynamic imports for heavy components like modals, charts, or admin panels that users might never even see. … you import. Backend performance matters just as much as frontend optimizations. The classic mistake is the N+1 query problem: fetching all list items, then making a separate database call for each item's related data. If you're loading, for example, 20 blog posts and fetching the author for each one individually, that's 21 queries when it should be two. … If you have 50 posts on the page, you just made 51 trips to the database. Other common mistakes include missing database indexes on columns you frequently query, fetching entire tables when you only need 10 rows, and not using connection pooling, so your app creates a new database connection on every request. You might think your queries are fast because they work fine in development with 100 rows of test data, but in production with 50,000 rows and no indexes, that same query takes 3 seconds instead of 30 milliseconds. The serverless functions on platforms like … Your API response times will drop from seconds to milliseconds. Analytics, chat widgets, ads, and social media embeds can easily destroy your performance. Loading these scripts synchronously in your document head blocks your entire page from rendering, so users see a blank screen while Google Analytics downloads and executes. … they are all competing for bandwidth and CPU on the initial page load. A slow analytics script can add two to three seconds to your load time, and if their server is having issues, your entire page is stuck waiting.
I've seen perfectly fast apps become unusable because a chat widget took 8 seconds to load and blocked everything else.
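The N+1 pattern called out above, reduced to a toy in-memory example with a query counter (all table and field names are invented), shows why the batched version stays at two queries no matter how long the list grows:

```typescript
// Toy in-memory "database" with a query counter, to make the 1+N vs 2
// trade-off concrete. All names and data are invented for the example.
const posts = [
  { id: 1, authorId: 10 },
  { id: 2, authorId: 11 },
  { id: 3, authorId: 10 },
];
const authors = new Map([
  [10, "Ada"],
  [11, "Grace"],
]);

let queryCount = 0;

function queryPosts() {
  queryCount++; // stands in for: SELECT * FROM posts
  return posts;
}
function queryAuthor(id: number) {
  queryCount++; // stands in for: SELECT ... WHERE id = ?  (one per post!)
  return authors.get(id);
}
function queryAuthorsIn(ids: number[]) {
  queryCount++; // stands in for: SELECT ... WHERE id IN (...)  (one total)
  return new Map(ids.map((id): [number, string | undefined] => [id, authors.get(id)]));
}

// N+1 pattern: one query for the list, then one more per row.
function nPlusOne(): number {
  queryCount = 0;
  for (const post of queryPosts()) queryAuthor(post.authorId);
  return queryCount; // 1 + 3 = 4 queries here; 1 + N in general
}

// Batched pattern: two queries total, regardless of list length.
function batched(): number {
  queryCount = 0;
  const rows = queryPosts();
  queryAuthorsIn([...new Set(rows.map((p) => p.authorId))]);
  return queryCount; // always 2
}
```

In a real app the same effect is achieved with a join or an `IN (...)` clause; ORMs usually expose it as eager loading.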

10/15/2025Updated 3/12/2026

## Developer Productivity Still Faces Friction Points

The report highlights that, despite improvements in tooling and culture, many teams still experience bottlenecks in everyday work:

- Pull requests stuck in review
- Tasks without clear estimates
- Slowdowns in the “inner development loop”

Even with great culture and tooling, friction still exists, especially around planning and execution. Knowing where dev productivity stalls helps us focus improvements where they matter most.

2/5/2026Updated 3/15/2026

Firstly, I apologise for the rant. ... Since then, I haven't been able to dedicate much time to solving *any* of the issues I've outlined in that thread, but what I will say is that Docker has caused me nothing but pain, and I have realised zero benefits from attempting to utilise it. Right from the start, the syntax for docker, docker-compose, and Dockerfiles is confusing and full of edge cases which no one explains to you in the hype of actually discussing it:

- These 'images' you build grow to insane sizes unless you carefully construct and regiment your `RUN`, `COPY`, and other commands.
- Docker complains about empty lines in multi-line RUN commands (which are themselves, as I see it, basically a hack to get around something called a "layer limit"), even if the line contains a comment (which is not an empty line), and does not provide a friendly explanation of how to solve the issue.
- There's basically no good distinction between bind mounts and volumes, and the syntax is even more confusing: declaring a `volumes` entry in a docker-compose.yml? You have no good idea whether you're creating a volume or a bind mount.
- Tutorials and documentation tend to either assume you're a power user who knows this sort of thing, or are so trivial they don't accurately represent a real-world solution, and are therefore basically useless.

I've suffered endless permissions issues trying to run portions of my application, such as being unable to write to log files, or to do trivial things like clearing a cache—issues I have tried a dozen different ways of fixing with zero success. Then, when I run some things from within the Docker container, such as tests, they can take an excruciatingly long time to run—only then did I discover that this is yet another Docker issue. The whole point of Docker is to abstract away the host OS and containerise things, and it can't even do that. … `docker container exec -it php sh`. Docker-sync, Kubernetes, docker-compose, images, containers.
It's legitimately too much. I'm not a dev-ops or infrastructure guy. I just want to write code and have my app work. I don't have the money to employ anyone to solve this for me (I'm not even employing myself yet). … Well, that was just an example, but the truth is that the framework I'm using expects to be able to write to its own internal log file, irrespective of my actions. It's encountering permissions issues not because I'm violating the informal "one container = one unit" rule, but rather because of how permissions are transferred in bind mounts/volumes from the host system in Docker. One problem is that you are using Docker for Mac. Docker is hot trash outside of Linux, because on other platforms it has to run in a virtual machine instead of being a simple container. If you are working on a project by yourself, I don't really recommend using Docker in general. It's just another layer of complexity. Docker is only really useful if you have a team with … This has struck me as messed up ever since I started using Linux; other devs use MacBooks and companies seem to force and mandate them for all developers (business people use ThinkPads or whatever Windows-equipped laptops are around), and yet we end up deploying our software onto Linux servers. All the user-facing stuff is in HTML/CSS, and it would make more sense to run an emulator for Mac or Windows on top of Linux to make sure the frontend stuff looks good in different browsers. … Lots of good information in this post. However, this bit: From personal experience with H2, MSSQL, PCF, and Docker, be picky about H2. H2 is great for prototyping and initial development. However, I've inevitably run into times where syntax differences between H2 and the production MSSQL required writing a different query for each environment. In and of itself this is not a big deal, but over the lifetime of the app it grows and becomes more overhead.
So I recommend ditching H2 as soon as you can; get a copy of whatever the prod DB is, running locally. … That's how I feel about any software development, to be honest. The difference is how often development is interrupted and the amount of yak-shaving that needs to be done. Docker is just yet another complicated bit of machinery that slows down dev once in a while for me (in a previous company it slowed down development a lot). … TBF, your critiques are valid coming from someone who uses Docker for Mac exclusively. But most of this seems like you're just not willing to learn the lingo or research solutions. That's not to say that Docker is fantastic, it definitely has stuff to improve on, but a lot of your issues seem like non-issues to me. Docker isn't meant to be a quick and effortless solution to every coder's problem; it's a toolset all on its own.
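The bind-mount permission failures described in the thread are commonly worked around by running the container process as the host user's uid:gid, so files written through the mount stay writable on both sides. A minimal docker-compose sketch under that assumption — the service, image, and path names are invented for illustration:

```yaml
# Hypothetical docker-compose.yml fragment. Run as the host user so the
# app can write logs/caches on the bind mount without permission errors.
services:
  app:
    image: php:8.2-fpm-alpine
    # Pass host ids in at startup, e.g.: UID=$(id -u) GID=$(id -g) docker compose up
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      # Bind mount: host path on the left, container path on the right.
      - ./app:/var/www/html
      # Named volume: managed by Docker, declared under top-level `volumes:`.
      - app-cache:/var/www/html/cache

volumes:
  app-cache:
```

The fragment also shows the distinction the poster found confusing: a `volumes` entry starting with `./` or `/` is a bind mount, while a bare name refers to a named volume that must also appear under the top-level `volumes:` key.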

4/11/2015Updated 7/8/2025

Docker issues rarely announce themselves clearly. A container that worked perfectly in the staging environment suddenly fails in production. An image that built successfully last week now throws cryptic errors during the build process. Networking between containers that communicated flawlessly for months suddenly drops packets. Volume mounts that preserved data reliably now produce permission errors or data corruption. These are not theoretical scenarios — they are the daily reality of teams running containerized workloads at scale. …

### Image Build Failures and Layer Caching Issues

Dockerfile build processes that worked reliably for months can suddenly break due to upstream base image changes, expired package repository keys, or subtle changes in build context. These issues are particularly frustrating because they block the entire deployment pipeline — no new code can reach production until the build is fixed.

### Data Persistence and Volume Problems

Volume-related Docker issues carry the highest risk because they can result in data loss. Permission mismatches between the container user and the host filesystem, volume driver failures, and orphaned volumes consuming disk space are all problems that require careful, methodical resolution by someone who understands Docker's storage architecture. …

What Docker issues are most common in production? The most frequent production Docker issues include container resource exhaustion causing OOM kills, networking configuration failures between containers, volume permission problems causing data access errors, and image build failures from dependency changes in upstream packages.
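For the OOM kills mentioned above, a quick first diagnostic is to ask Docker itself whether the kernel killed the container and what memory limit it was running under. A command sketch — `myapp` and `myapp-image` are placeholder names:

```shell
# Did the kernel OOM-kill this container?
docker inspect --format '{{.State.OOMKilled}}' myapp

# What memory limit (in bytes; 0 = unlimited) was it running with?
docker inspect --format '{{.HostConfig.Memory}}' myapp

# Re-run with an explicit limit so exhaustion shows up predictably
# instead of destabilizing the host:
docker run --memory 512m --memory-swap 512m myapp-image
```

An unlimited container that gets OOM-killed points at host-level pressure; a limited one points at the application's own footprint.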

2/24/2026Updated 3/23/2026

In another Golang project, a similar situation occurred. When we attempted to install all dependencies locally for development, one of us unwittingly upgraded the version of the Protobuf generator. Consequently, when the code was committed, thousands of changes were generated, even though only one line of code had been updated. After this issue arose, we adopted Docker as a lifesaver. …

## Nothing is perfect

Yeah, Docker is really fast. It only takes anywhere from a few milliseconds to a few seconds to start a Docker container from a Docker image. But how do you feel when, every time you change the code, you have to rebuild the Docker image and restart the container again for debugging? That would be a real nightmare. To avoid it, you can only run the application locally with Docker container dependencies, or rack your brain to find a way to optimize the Dockerfile. Most of the time it's fine, but the real problem occurs in edge cases.

The same issue arises when our team tries to pack all related development tools into a Docker image. While it successfully avoids the problem of different versions of dependencies, this approach encounters a bottleneck as the time to start the application is longer than usual. So what is actually happening? In Docker, each modification to the codebase necessitates rebuilding the image and restarting the container. Despite leveraging build caching, this process can be time-consuming if not managed carefully. It's crucial to recognize that even a minor change in any layer prompts Docker to rebuild all subsequent layers, irrespective of whether alterations were made to those layers themselves. Furthermore, incorporating packages into a Docker image without proper consideration can lead to inefficiencies. Executing `apt-get upgrade` at the onset of your Docker build might replace files within the container image. Consequently, these surplus files generate redundant shadow copies, gradually consuming additional storage space over time.
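The rebuild cost described above is usually tamed by ordering layers from least- to most-frequently changed, so a code edit invalidates only the final layers. A minimal sketch for a Go project (matching the Golang project mentioned; paths and module layout are invented):

```dockerfile
# Dependency layers first: rebuilt only when go.mod/go.sum change.
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download

# Source copied last: editing code invalidates only the layers below here,
# so `go mod download` stays cached across rebuilds.
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app ./cmd/app

# Small final image: the build toolchain never ships to production.
FROM gcr.io/distroless/static
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
```

With this ordering, the expensive dependency download runs only when the module files change, and routine code edits rebuild in seconds.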
One significant issue that is often overlooked is that Docker builds have access to the public internet. If dependencies are pulled directly from the internet during builds, it can be difficult to ensure reproducibility of builds over time: different versions of dependencies may be pulled, leading to inconsistencies between builds. For example, we often include something like `RUN apt-get install ...` in the Dockerfile. This command handles everything necessary for your container to successfully execute your application. However, as mentioned above, this approach doesn't ensure complete reproducibility of the Docker image over time. Each time this command is run, the version of dependencies installed may vary. To mitigate this, we can specify the version of dependencies. However, if that exact version is no longer available, Docker will throw an error.

## What’s next?

So, with all the challenges mentioned above, do we have any way to avoid them peacefully? Certainly, there are various ways to address these problems, but none of them is perfect. Most of them involve optimizing your approach to using Docker. However, I would like to introduce another approach that keeps us away from Docker during development but still allows us to leverage Docker for deployment.
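Version pinning, as the paragraph notes, trades silent drift for loud install failures. A hedged Dockerfile sketch — the package versions are examples only, and pinned versions can disappear from the mirrors over time:

```dockerfile
FROM debian:12.5

# Pin exact package versions so two builds a month apart install the same
# bits. If the mirror drops a pinned version, the build fails loudly
# instead of drifting silently - usually the better failure mode.
# (Version strings below are illustrative, not guaranteed to exist.)
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl=7.88.1-10+deb12u5 \
        ca-certificates=20230311 \
    && rm -rf /var/lib/apt/lists/*
```

Pinning the base image to `debian:12.5` rather than `debian:latest` matters for the same reason: an unpinned base tag is just another dependency fetched from the internet at build time.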

4/19/2024Updated 5/25/2025

on monitoring status, transferring data, and authenticating users are more popular among developers compared to the other topics. Specifically, developers face challenges with web browser issues, networking errors, and memory management. Besides, there is a lack of experts in this domain. Conclusion: Our research findings will guide future work on the development of new tools and techniques, … challenging for developers to solve their issues for the topics of web browsers, networking errors, and memory management. Also, there is a substantial lack of Docker experts in the SoF community when compared to other areas such as web development. The rest of the paper is organized as follows.

Updated 3/18/2024

**Data Quality as a Bottleneck for AI/ML Applications**: Data quality issues are a major hurdle for building AI and machine learning-powered applications. ... The report delves into three main areas: **Developer Productivity**: Despite improvements in culture and tools, developers still face challenges. Issues such as delayed pull requests and tasks lacking clear estimates are common friction points in the development process. **AI’s Impact on Software Development**: Contrary to popular belief, AI’s integration into software development is not as pervasive as one might think.

7/10/2025Updated 10/24/2025