Sources

453 sources collected

TypeScript, while a powerful programming language, has limitations that arise from its type system's attempt to manage dynamically typed JavaScript code. From handling return types and function expressions to the behavior of else statements, developers often encounter challenges when working with TypeScript files. Issues can emerge at compile time, especially when using generic functions, creating an instance, or managing type information. This article explores the blind spots in TypeScript, such as handling function objects, top-level constructs, and dynamically typed scenarios, offering insights into workarounds and practical solutions. … This issue extends beyond people to include their tools and machines. ... TypeScript is no exception: while it can accurately describe 99% of JavaScript features, one percent remains beyond its grasp. This gap doesn’t only consist of reprehensible anti-features. Some JavaScript features that TypeScript doesn’t fully understand can still be useful. Additionally, for some other features, TypeScript operates under assumptions that can’t always align with reality. Like any tool, TypeScript isn’t perfect, and we should be aware of its blind spots. This article addresses three of these blind spots, offers possible workarounds, and explores the implications of encountering them in our code. … In web development, where developers don’t have to manually create every object from a class constructor, this rule is very pragmatic. On one hand, it results in relatively minor semantic errors (Listing 3), but on the other, it can also lead to more significant pitfalls. **Listing 3:** Structural subtyping triggers an error … Regardless of how you approach it, rejecting parameters that are subtypes of a given type or enforcing an exact type at the type level isn’t possible. TypeScript has a blind spot here. But is this truly a problem?
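A minimal sketch of this structural-subtyping blind spot, using the *Set*/*WeakSet* pair the article discusses (the `register` function and the cache object are illustrative, not taken from the article's listings): a parameter typed as *WeakSet* happily accepts a *Set*, because *Set*'s API is a structural superset.

```typescript
// WeakSet's API (add/delete/has) is a structural subset of Set's API,
// so TypeScript accepts a Set wherever a WeakSet is expected.

function register(cache: WeakSet<object>, item: object): void {
  // The caller believes entries here can be garbage-collected.
  cache.add(item);
}

const strongCache = new Set<object>(); // strong references: never collected

// Type-checks without complaint, even though the semantics differ:
// entries in a Set are held strongly and will leak if never removed.
register(strongCache, { id: 1 });
```

There is no type-level way to reject the *Set* here; the substitution can only be caught by convention, review, or runtime checks.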
… In our case, the key factor is that a *Set* and a *WeakSet* have very different semantics, even though the *WeakSet* API is a subset of the API of *Set*. In TypeScript’s type system, this means that *Set* is evaluated as a subtype of *WeakSet*, leading to the assumption of a relationship and substitutability where none exists. This blind spot in the type system leads us to solve a problem that isn’t actually a problem at all, and which we ultimately can’t resolve, especially at the type level. … Imperative programming doesn’t get any easier than this: you take a bunch of variables and manipulate them until the program reaches the desired target state. But as we all know, this programming style can be error-prone. Every *for* loop is an off-by-one error in training. So it makes sense to secure this code snippet as thoroughly as possible with TypeScript. … ### The problem with the imperative iteration Before we add *Combine<K, V>* to the signature of *combine(keys, values)*, we should fire up TypeScript and ask what it thinks of the current state of our function (without return type annotation). The compiler is not impressed (Listing 23). **Listing 23:** Current state of combine() … The truth is, nothing is correct. The operation that *combine(keys, values)* performs is not describable with TypeScript in the way it’s implemented here. The problem is that the result object *obj* mutates from *{}* to *Combine<K, V>* in several intermediate steps during the *for* loop, and that TypeScript doesn’t understand such state transitions. The whole point of TypeScript is that a variable has exactly one type, and it can’t change types (unlike in vanilla JavaScript). However, such type changes are essential in scenarios where objects are iteratively assembled because each mutation represents a new intermediate state on the way from A to B.
TypeScript can’t model these intermediate states, and there is no correct way to equip the *combine(keys, values)* function with type annotations. … ### What to do with intermediate states that can’t be modeled? The TypeScript type system is a huge system of equations in which the compiler searches for contradictions. This always happens for the program as a whole and without executing the program. This means that, by design, TypeScript can’t fully understand various language constructs and features, no matter how hard we try. ... The more pragmatic solution is to accept the possibilities and limitations of our tools and work with what we have. Unmodelable intermediate states are bound to occur when writing low-level imperative code. If the type system can’t represent them, we need to handle them in other ways. Unit tests can ensure that the affected functions do what they’re supposed to do, documentation and code comments are always helpful, and for an extra layer of safety, we can use runtime type-checking if needed.
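Assuming a simplified shape for *combine(keys, values)* and *Combine<K, V>* (the article's actual listings aren't reproduced here), the pragmatic workaround might look like this: deliberately opt the intermediate object out of checking and let tests guarantee correctness instead of the compiler.

```typescript
// Assumed (simplified) shape of Combine<K, V>: each key maps to a value.
type Combine<K extends readonly PropertyKey[], V extends readonly unknown[]> =
  Record<K[number], V[number]>;

function combine<K extends readonly PropertyKey[], V extends readonly unknown[]>(
  keys: K,
  values: V
): Combine<K, V> {
  // `any` deliberately opts out of type checking here: the intermediate
  // states of `obj` (mutating from {} toward Combine<K, V>) cannot be
  // modeled, so correctness must come from tests, not the compiler.
  const obj: any = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = values[i]; // each assignment is a new intermediate state
  }
  return obj;
}

// With `as const`, keys and values keep their literal types:
const point = combine(["x", "y"] as const, [1, 2] as const);
// point: Record<"x" | "y", 1 | 2>
```

The assertion-free `any` escape hatch is confined to the function body, so callers still get a fully typed result.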

12/16/2024 · Updated 3/24/2026

### The Problem: JavaScript in Large-Scale Backend Systems JavaScript’s flexibility is one of its strengths—but also one of its weaknesses in backend development. While the language allows rapid prototyping and development, it can be prone to **runtime errors, inconsistent coding practices, and lack of compile-time safety**. For example: - **No type enforcement:** Functions might receive unexpected parameters, leading to hidden bugs. - **Hard-to-maintain code:** As projects scale, ensuring consistency across a large codebase becomes challenging. - **Refactoring risks:** Without strict type definitions, changing a data structure or function signature can cause unexpected breakages. - **Poor tooling for large teams:** JavaScript lacks some of the safety nets found in strongly typed languages like Java, C#, or Go. In smaller projects, these issues are manageable. But in enterprise-scale systems with **hundreds of thousands of lines of code** and **distributed teams**, the margin for error narrows significantly. This is where TypeScript shines. ... **TypeScript** was designed to address these very pain points by adding: - **Static typing**: Catching errors before code runs. - **Interfaces and generics**: Enforcing contracts between different parts of the application. - **Enhanced tooling**: Better IntelliSense, auto-completion, and refactoring in IDEs. - **Compatibility**: Fully compiles to plain JavaScript, so it works wherever JavaScript does. … ### Common Concerns About the Switch Even with clear advantages, some developers and companies hesitate to adopt TypeScript for Node.js. Common concerns include: - **Learning curve**: Developers new to static typing may find it slower initially. - **Longer development time at the start**: Writing types feels slower for small scripts. - **Refactoring cost**: Migrating a large JavaScript codebase to TypeScript requires careful planning. 
- **Overhead for small projects**: For quick prototypes, TypeScript might feel like overkill. … ### 10. Common Migration Pitfalls and How to Avoid Them **Pitfall 1:** Trying to achieve 100% perfect types from day one. - **Solution:** Allow temporary any types and refine over time. **Pitfall 2:** Ignoring type coverage metrics. - **Solution:** Use tools like TypeStat to track progress. **Pitfall 3:** Forgetting about performance impact in build times. … #### a) Over-Engineering for Small Projects TypeScript adds a layer of complexity that may be unnecessary for very small, short-lived projects. For quick prototypes, pure JavaScript might still be faster. #### b) Developer Skill Gaps Not all JavaScript developers are comfortable with static typing. Companies may face **longer hiring cycles** when looking for developers proficient in both Node.js and TypeScript. #### c) Tooling Overhead Although TypeScript tooling is mature, it still adds build steps, compilation time, and sometimes complex configurations that can be frustrating for newcomers. … ### 8. Predictions for 2025–2030 Here’s how the next few years might unfold: - **2025–2026:** TypeScript solidifies its dominance in backend development, with most new Node.js frameworks offering TS-first design. - **2027–2028:** AI-powered code generation fully integrates with TypeScript to produce “zero-runtime-error” applications for many use cases.
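The compile-time safety argued for above can be made concrete with a small sketch (the `Invoice` type and `totalCents` function are hypothetical examples, not from the article):

```typescript
// A hypothetical backend calculation where static typing pays off.
interface Invoice {
  id: string;
  amountCents: number; // integer cents avoid floating-point drift
}

function totalCents(invoices: Invoice[]): number {
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

const invoices: Invoice[] = [
  { id: "A-1", amountCents: 1250 },
  { id: "A-2", amountCents: 499 },
];

// In plain JavaScript, a call like totalCents([{ id: "A-3", amount: "4.99" }])
// would fail only at runtime (or silently misbehave); TypeScript rejects
// the misnamed, mistyped field at compile time.
const total = totalCents(invoices);
```

This is exactly the "hidden bug from unexpected parameters" category that the excerpt describes: the error surfaces in the editor rather than in production.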

8/14/2025 · Updated 3/11/2026

No matter how experienced you are with TypeScript, you’ll eventually encounter confusing type errors, tricky edge cases, or design dilemmas. Over time, I’ve run into (and solved) most of these headaches in production code. This chapter is your reference guide for avoiding common anti-patterns, troubleshooting recurring issues, and getting quick answers to frequent questions.

## 1. Pitfall: Misusing any in TypeScript

**The Problem:** Using `any` disables type safety entirely.

```ts
let data: any = fetchSomething();
data.toUpperCase(); // runtime error if data is a number
```

**The Fix:** Prefer `unknown` when the type is unclear—it forces narrowing.

```ts
function handle(input: unknown) {
  if (typeof input === "string") {
    return input.toUpperCase();
  }
}
```

…

## 3. Pitfall: Forgetting to Narrow Union Types

**The Problem:** Not narrowing types leads to errors.

```ts
type Result = string | number;

function handle(result: Result) {
  console.log(result.toFixed(2)); // ❌ Error
}
```

**The Fix:** Use type guards.

```ts
function handle(result: Result) {
  if (typeof result === "number") {
    console.log(result.toFixed(2));
  }
}
```

## 4. Pitfall: Forgetting to Export Types Across Files

Always export types you intend to reuse.

```ts
// types.ts
export type User = { name: string };
```

## 5. Pitfall: Confusing interface vs type

**Quick Rule of Thumb:**

- **interface** → Use for object shapes, extension, OOP-style patterns.
- **type** → Use for unions, primitives, mapped types, aliases, and function signatures.

## 6. Pitfall: Over-Engineering with Complex Types

Avoid writing overly “smart” or recursive conditional types.

- Start simple.
- Use runtime helper functions instead of forcing everything into types.
- Document trade-offs for clarity.

## 7. Pitfall: Not Using TypeScript Utility Types

Lean on built-in utilities like:

- Partial<T>
- Required<T>
- Pick<T, K>
- Omit<T, K>
- Record<K, T>

## 8. Pitfall: Skipping strict Mode

Without strict mode, you’ll miss valuable errors. Always enable it:

```ts
{ "compilerOptions": { "strict": true } }
```

Includes checks like noImplicitAny, strictNullChecks, and more.

## 9. Pitfall: Ignoring .d.ts Files for External JS

…

**Quick Recap**

- Avoid common mistakes like misusing any, skipping null checks, or over-engineering types.
- Use strict mode and utility types to simplify your workflow.
- Stick to simple, clear, and maintainable type designs.
- Reference FAQs for quick solutions to recurring TypeScript questions.

**From Zero to Production: Closing the Strictly Typed Series**
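The utility types from pitfall 7 compose nicely in practice; here is a small sketch (the `User` type is a made-up example):

```typescript
// A small sketch of the built-in utility types listed above.
type User = {
  id: number;
  name: string;
  email: string;
};

// Partial<T>: every field optional – handy for patch-style updates.
function updateUser(user: User, patch: Partial<User>): User {
  return { ...user, ...patch };
}

// Omit<T, K>: derive a narrower view instead of redefining it by hand.
type PublicUser = Omit<User, "email">;

// Record<K, T>: a typed lookup table.
const usersById: Record<number, User> = {
  1: { id: 1, name: "Ada", email: "ada@example.com" },
};

const updated = updateUser(usersById[1], { name: "Ada Lovelace" });
const publicView: PublicUser = { id: updated.id, name: updated.name };
```

Because `PublicUser` is derived from `User`, adding a field to `User` later automatically flows into the derived view.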

9/18/2025 · Updated 9/26/2025

www.dennisokeeffe.com

Composition Over Inheritance

We will be covering a few topics where each is ramping up into the next:

1. The prerequisites
   - Code volume
   - Control flow
   - State management
2. The principles
   - Composition over inheritance
   - Parse, don't validate
   - Never throw errors
   - Metadata
   - Define your source of truth
   - Let controllers tell you everything
   - Don't emulate network infrastructure
   - Don't let AI take the driver's seat
   - Generate as much code as possible
   - Write to refactor programmatically
   - Don't go overkill on abstraction layers

… Over-bloat and unnecessary abstractions may be easier to imagine. If you're working in a codebase where you need to jump to ten different definitions in order to understand the inheritance chain or follow the path of the business logic, then you have probably over-engineered the shit out of it. Principles like "composition over inheritance" and "parse, don't validate" can help mitigate volume creep (which I touch on in their own sections), but there are some general guiding principles that I recommend to get around this: …

1. Unpredictable Behavior: When state can be modified from multiple places without clear patterns, applications become unpredictable. Developers can't easily reason about what will happen when code executes.
2. Debugging Nightmares: Without clear state flows, finding the root cause of bugs becomes extremely difficult. A bug might manifest in one component but originate from state modifications elsewhere.
3. Technical Debt Accumulation: Poor state management compounds over time through things like duplicated state, stale state and side-effects.
4. Readability and Maintainability Issues: New developers struggle to understand applications.
5. Performance Problems: Unnecessary re-renders, memory leaks, network request redundancy.

Although this post won't spend too much time on state management, it is also partly related to these topics: … In the above case, we are throwing errors as stand-ins for what could be handled as expected errors.
A non-exhaustive list of problems with this: … A developer cannot grok from our types what can go wrong in an expected way. In my experience, this approach also doesn't really hold up in practice: not all thrown errors are caught and managed correctly, so you end up with hard-to-follow try-catch behavior littered throughout the implementation. **Do**: …

2. There are no try-catch clauses. In the case where an error is thrown from something **unexpected**, we consider this a **defect** and should have systems in place to capture that error and inform the developers (not shown here).
3. Our controller can have an easier time managing responses at the boundary, while our developers working on this can learn a lot about this endpoint and possible responses without diving into the business logic.

If you look at the `Data` and expected error classes, you'll notice the `_tag` property (which I've adopted from EffectTS). I'll talk more about this in the metadata section. I should finish here by saying that "never" here is a bit strong. I've recently heard an engineering manager use the quote "use exceptions for exceptional circumstances", and I find that to be a useful quote around throwing errors in TypeScript. Do so sparingly and with good reason.
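The `_tag` discriminant pattern described above can be sketched as follows. The post's actual `Data` and error classes aren't reproduced here, so these names and shapes are illustrative:

```typescript
// Expected failure modes are values, not thrown exceptions; the
// signature tells a reader everything that can go wrong expectedly.
type UserNotFound = { _tag: "UserNotFound"; userId: string };
type Forbidden = { _tag: "Forbidden"; reason: string };
type Found = { _tag: "Found"; name: string };

type LookupResult = Found | UserNotFound | Forbidden;

function lookupUser(userId: string): LookupResult {
  if (userId === "banned") return { _tag: "Forbidden", reason: "account suspended" };
  if (userId !== "u1") return { _tag: "UserNotFound", userId };
  return { _tag: "Found", name: "Ada" };
}

// The controller exhaustively switches on _tag at the boundary;
// anything actually thrown is, by definition, a defect.
function toStatus(result: LookupResult): number {
  switch (result._tag) {
    case "Found": return 200;
    case "UserNotFound": return 404;
    case "Forbidden": return 403;
  }
}
```

Because `LookupResult` is a closed union, adding a new expected error type forces every `switch` over `_tag` to be revisited at compile time.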

3/16/2025 · Updated 3/26/2026

www.holgerscode.com

My Take: Hype Vs. Reality

## The Security Question: Server Actions and Beyond Let's address the elephant in the room: **yes, Next.js has had security concerns**, particularly around Server Actions. In 2024 and 2025, several security researchers highlighted potential vulnerabilities in how Server Actions could be exploited if developers weren't careful about authorization checks. These were real concerns that the Next.js team took seriously, and they've implemented multiple layers of protection:

1/30/2026 · Updated 3/20/2026

I wrote a blog post the other day about how Next.js Middleware can be useful for working around some of the restrictions imposed by server components. ... From my perspective, Next.js’ App Router has two major problems that make it difficult to adopt: … While exposing the request/response is very powerful, these objects are inherently **dynamic** and affect the entire route. This limits the framework's ability to implement current (caching and streaming) and future (Partial Prerendering) optimizations. > To address this challenge, we considered exposing the request object and tracking where it's being accessed (e.g. using a proxy). But this would make it harder to track how the methods were being used in your code base, and could lead developers to unintentionally opting into dynamic rendering. > Instead, we exposed specific methods from the Web Request API, unifying and optimizing each for usage in different contexts: Components, Server Actions, Route Handlers, and Middleware. … ... It’s not that it’s necessarily incorrect - it’s unexpected. That original post also mentioned a few other subtleties. One common footgun is in how cookies are handled. You can call `cookies().set("key", "value")` anywhere and it will type-check, but in some cases it will fail at runtime. Compare these to the “old” way of doing things where you got a big `request` object and could do anything you wanted on the server, and it’s fair to say that there’s been a jump in complexity. I also need to point out that the “on-by-default” aggressive caching is a rough experience. I’d argue that way more people expect to opt-in to caching rather than dig through a lot of documentation to figure out how to opt-out. … ## Just because something is recommended, doesn’t mean it’s right for you One of my biggest issues with the App Router was just this: Next.js has officially recommended that you use the App Router since before it was honestly ready for production use. 
Next.js doesn’t have a recommendation on whether TypeScript, ESLint, or Tailwind are right for your project (despite providing defaults of Yes on TS/ESLint, No to Tailwind - sorry Tailwind fans), but absolutely believes you should be using the App Router. The official React docs don’t share the same sentiment. They currently recommend the Pages Router and describe the App Router as a “Bleeding-edge React Framework.” When you look at the App Router through that lens, it makes way more sense. Instead of thinking of it as the recommended default for React, you can think of it more like a beta release. The experience is more complicated and some things that were easy are now hard/impossible, but what else would you expect from something that’s still “Bleeding-edge?”
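The `cookies().set()` footgun mentioned above can be modeled in miniature: an API that type-checks in every context but only works in some runtime phases. This is a self-contained illustration of the shape of the problem, not actual Next.js code:

```typescript
// A toy model: set() is valid at the type level everywhere, but the
// framework only permits mutation during certain phases.
type Phase = "render" | "action";
let currentPhase: Phase = "render";

const cookieJar = new Map<string, string>();

function cookies() {
  return {
    get: (key: string) => cookieJar.get(key),
    // Nothing in this signature says set() is only legal in an "action"
    // phase – the restriction lives entirely at runtime.
    set: (key: string, value: string) => {
      if (currentPhase !== "action") {
        throw new Error("Cookies can only be modified in an action");
      }
      cookieJar.set(key, value);
    },
  };
}

// Type-checks, but fails at runtime during the "render" phase:
let renderError: string | undefined;
try {
  cookies().set("theme", "dark");
} catch (e) {
  renderError = (e as Error).message;
}

currentPhase = "action";
cookies().set("theme", "dark"); // same call, now legal
```

The complexity jump the excerpt describes is visible here: the caller must carry context in their head that the type system never sees.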

5/14/2024 · Updated 3/24/2026

## 7 Reasons Why Companies are Thinking of Moving Off Next.js! ### 1️⃣ Complexity with App Router and React Server Components The introduction of the App Router and React Server Components (RSC) in Next.js aimed to enhance performance and developer experience. However, these features have become pain points because of their complexity, and some developers find them challenging. For instance, the App Router’s handling of server and client components can lead to confusion, especially when dealing with navigation and data fetching. Additionally, the mental model required to effectively use RSCs differs significantly from traditional React practices, leading to a steeper learning curve. ### 2️⃣ Performance Concerns Despite Next.js’s reputation for performance, some developers have reported issues, particularly with development server speed and build times. The integration of new features like RSCs and the App Router has, in some cases, led to slower builds and increased memory usage. For example, developers have noted that the development server can become sluggish, requiring frequent restarts due to memory leaks. This hampers productivity and can be frustrating during development. **Did you know:** according to reports on GitHub, dynamic routes in the App Router are reportedly **4x slower** to load than those in the older Pages Router. ### 3️⃣ Limited Flexibility and Customization Next.js provides a set of conventions and built-in features that streamline development. However, these conventions can sometimes limit flexibility, making it challenging to implement custom configurations or workflows. For companies with unique requirements or those needing fine-grained control over their applications, this limitation can be a hindrance. Customizing aspects like routing, data fetching, or build processes may require workarounds or significant effort.
### 4️⃣ Vendor Lock-In Concerns Next.js is developed and maintained by Vercel, and while it’s open-source, some developers express concerns about potential vendor lock-in. Features like Image Optimization and Middleware are tightly integrated with Vercel’s platform, which can make migrating to other hosting providers more complex. This tight coupling may deter companies seeking to maintain flexibility in their infrastructure choices. ### 5️⃣ Unstable Development Experience During Migration Companies that began adopting the new App Router and RSC model mid-project often report a **disjointed developer experience**. Many teams find themselves working with both the old **Pages Router** and the new **App Router** simultaneously, creating inconsistencies in routing logic, layout handling, and data-fetching methods. This hybrid state makes it harder to onboard new developers or maintain code consistency, especially in larger teams. ### 6️⃣ Increased Debugging and Tooling Challenges The abstraction and complexity introduced by RSCs and server/client boundaries also pose challenges for **debugging and observability**. Standard browser dev tools often fall short in offering meaningful stack traces or a clear separation of server vs. client components. Additionally, many popular monitoring and logging tools still lack deep integration with the latest Next.js features. This disconnect can slow down bug fixing and result in longer QA cycles, increasing the overall cost of development. ### 7️⃣ Over-Optimization for Specific Use Cases Next.js’s evolution seems increasingly aligned with **Vercel’s product vision**, which can be frustrating for companies with different infrastructure goals. For example: - Features like **Edge Middleware**, **Incremental Static Regeneration (ISR)**, and **Image Optimization** are tailored to Vercel’s edge network.
- Running these on platforms like AWS, Netlify, or your own servers often leads to degraded performance or extra setup overhead.

4/28/2025 · Updated 3/5/2026

Next.js applications face distinct security challenges due to their hybrid nature. Unlike traditional single-page applications (SPAs) or server-rendered applications, Next.js combines: 1. **Server-Side Rendering (SSR)**: Code execution on the server before sending HTML to clients 2. **Static Site Generation (SSG)**: Pre-built pages that can expose build-time data 3. **API Routes**: Backend functionality within the same codebase 4. **Client-Side Navigation**: Dynamic routing that happens in the browser 5. **Edge Runtime**: Code running at the edge with different security contexts … #### 1. Cross-site scripting (XSS) attacks XSS remains one of the most dangerous vulnerabilities in web applications. In Next.js, XSS can occur through: - Improper use of … - Unvalidated user input in dynamic content - Third-party scripts and dependencies - Server-side rendering of malicious content #### 2. Cross-site request forgery (CSRF) CSRF attacks trick authenticated users into performing unwanted actions. Next.js doesn't include built-in CSRF protection, making applications vulnerable without proper implementation. #### 3. Authentication and authorization flaws Common authentication vulnerabilities in Next.js include: - Insecure session management - Weak token validation - Missing authorization checks on API routes - Client-side only authentication #### 4. API route security issues Next.js API routes can be vulnerable to: - Injection attacks (SQL, NoSQL, command injection) - Rate limiting bypass - Information disclosure through error messages - Missing input validation #### 5. Dependency vulnerabilities The JavaScript ecosystem's reliance on numerous packages creates supply chain risks through: - Outdated dependencies with known vulnerabilities - Malicious packages - Transitive dependency issues … are sent to the browser. Everything else stays on the server (safe). 
Here's how to manage them securely: ## Database security with Next.js **Why database security matters:** Your database contains all your valuable information. If someone gains unauthorized access, they could steal or delete everything. **Common database vulnerabilities:** - SQL injection attacks (malicious code in queries) - Exposed connection strings - Unencrypted sensitive data - Too many database connections Here's how to secure your database properly: ## Security testing and monitoring **Why test security?** Even with all the security measures in place, you need to regularly check for vulnerabilities and monitor for attacks. **What to test:** - Authentication systems (can people break in?) - Input validation (do forms reject malicious data?) - API security (are endpoints properly protected?) - Dependencies (do any libraries have known vulnerabilities?) Here's how to implement security testing: ## Deployment security considerations ... **Use this comprehensive security checklist to systematically audit your Next.js application.** Each section provides actionable security measures organized by priority and implementation complexity. How to use this checklist? Review each section systematically. Start with **Essential** items for immediate security, then progress through **Important** and **Advanced** measures based on your application's needs and risk profile. 
### Authentication Security Audit

|Security Area|Priority|Implementation|Validation|
|--|--|--|--|
|JWT Security|Essential|32+ char secrets, secure storage, environment separation|`echo $JWT_SECRET \| wc -c` ≥ 32|
|Session Management|Essential|HttpOnly cookies, Secure flag, SameSite=Strict, 15-30 min timeout|Browser dev tools → Application → Cookies|
|Password Policy|Essential|8+ chars, complexity, bcrypt cost ≥ 12, account lockout|Test weak passwords, verify hashing|
|Multi-Factor Auth|Important|TOTP support, backup codes, recovery options|Test MFA flow end-to-end|
|OAuth Integration|Important|PKCE implementation, state validation, scope limits|Verify OAuth flow security|
|Role-Based Access|Advanced|RBAC system, server-side checks, least privilege|Test role escalation attempts|

### API Security Measures

#### Input Validation

**Essential**: All endpoints use Zod schemas

- Request payload validation
- Query parameter validation
- File upload restrictions
- Headers validation

**Verification**: …
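The "validate every request payload" item above can be sketched as follows. The checklist names Zod for this job; a hand-rolled type guard shows the same idea without dependencies (the payload shape and handler are hypothetical):

```typescript
// Validate untrusted input at the route boundary before it reaches
// any business logic or database query.
type CreateUserPayload = { name: string; email: string };

function isCreateUserPayload(body: unknown): body is CreateUserPayload {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.name === "string" &&
    b.name.length > 0 &&
    typeof b.email === "string" &&
    b.email.includes("@")
  );
}

function handleCreateUser(body: unknown): { status: number } {
  if (!isCreateUserPayload(body)) {
    return { status: 400 }; // never let unvalidated input reach the DB
  }
  // ...create the user with a now fully typed payload...
  return { status: 201 };
}
```

A schema library like Zod adds richer error messages and composable schemas on top of this pattern, but the boundary discipline is the same.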

7/8/2025 · Updated 3/27/2026

## Common Next.js Mistakes That Affect CWV ### Mistake 1: CSS-in-JS Runtime Overhead Let’s start with one of the simplest and most widespread Next.js mistakes. CSS-in-JS libraries like Styled Components run JavaScript on the client to hash class names and inject styles into the DOM. Each injection causes browser style recalculation for the entire page, significantly hurting **LCP, FCP, and INP**. CSS-in-JS libraries introduce significant client-side overhead: - JavaScript runs in the browser. - Class names are hashed at runtime. - Styles are injected into the HTML dynamically. - Each injection triggers style recalculations. All of this happens during runtime and directly affects INP and FCP. Utility-first approaches like Tailwind avoid this problem by compiling a single CSS file at build time, with no runtime JavaScript and no client-side style recalculations. ### Mistake 2: CSS Modules with Lazy-Loaded Components Even without CSS-in-JS, performance issues can appear when combining: - CSS Modules. - Lazy-loaded components. When a lazily loaded component is mounted, its styles are injected dynamically. If this happens during a user interaction (click, input, form submission), the browser must recalculate styles at a critical moment. This often leads to increased input delay and worse INP. This matters, because: - When a dynamically imported component imports CSS Modules, styles are injected on demand. - Each injection triggers expensive full-page style recalculation. - If this happens during user interaction (button click, typing), it increases **INP** significantly. … ### Mistake 3: Missing Dynamic Imports (next/dynamic) A common mistake is shipping too much JavaScript in the initial bundle. next/dynamic allows components to be loaded only when they are actually needed, which: - Reduces the initial bundle size. - Improves LCP, FCP, and INP. - Improves perceived performance. A typical example is modal dialogs. 
They are rarely visible on initial page load, yet often end up in the main bundle if dynamic imports are not used. … ### Mistake 4: Using Third-Party Font CDNs Web fonts loaded from third-party CDNs introduce a costly request chain: 1. DNS lookup. 2. Request for CSS with @font-face. 3. Request for the font files. Even on fast networks, these extra round trips add latency. **Key optimization recommendations:** … ### Mistake 5: Not Using next/image Using regular `<img>` tags in Next.js means losing built-in image optimizations for the web. next/image provides: - Lazy loading by default. - Automatic image optimization. - Preloading for LCP images. - Responsive image sizes via srcset and sizes. Without proper sizes, the browser may download large desktop images even on mobile devices, which directly hurts LCP. … ### Mistake 6: Unoptimized Third-Party Scripts Many websites are bloated with third-party scripts, and the problem is rarely the scripts themselves, but **when and how they are loaded**. By default, the browser treats all JavaScript as equally important, which can easily block the main thread during critical rendering phases. To avoid this, Next.js provides fine-grained control over script execution timing. … next/script allows precise control over when scripts are loaded: - beforeInteractive – critical scripts - afterInteractive – analytics and widgets - lazyOnload – non-critical scripts - worker – heavy logic off the main thread Using the wrong strategy can severely impact INP and Total Blocking Time. … ### Mistake 8: Sending Too Much Data to the Client Any data passed to a client component is sent to the browser and parsed there. **Real-world example**: a page returning a 7 MB payload from getStaticProps, most of which was never used by the UI but still had to be transferred and parsed by the browser. **Rule:** do as much filtering, mapping, and transformation as possible on the server. Send only what the component actually needs. 
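The server-side trimming rule for Mistake 8 can be sketched as follows: map the raw records down to exactly the fields the component renders before they cross the server/client boundary (the `Article` shape is a hypothetical example):

```typescript
// Raw server-side record: large and partly sensitive.
type Article = {
  id: string;
  title: string;
  body: string;          // large – not needed for a list view
  internalNotes: string; // must never reach the client
};

// The slim shape the list component actually renders.
type ArticleListItem = { id: string; title: string };

function toListPayload(articles: Article[]): ArticleListItem[] {
  // Only id and title are serialized and parsed by the browser.
  return articles.map(({ id, title }) => ({ id, title }));
}

const raw: Article[] = [
  { id: "1", title: "Hello", body: "x".repeat(10_000), internalNotes: "draft" },
];

const payload = toListPayload(raw);
```

Besides shrinking transfer and parse time, the explicit mapping doubles as a safety net: fields like `internalNotes` cannot leak by accident.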
… ### Mistake 9: Too Much Client-Side JavaScript Logic Excessive client-side logic such as filtering and sorting: - Slows down FCP and LCP. - Increases memory usage. - Forces repeated JavaScript execution. Even with React Compiler, memoization itself has a cost. If logic does not depend on browser APIs or direct user interaction, it should live on the server. ### Mistake 10: Heavy Middleware Logic Middleware runs on every request. Problems arise when middleware includes: - Large libraries. - Heavy imports. - Poorly tree-shaken dependencies. This increases TTFB, which directly affects both FCP and LCP. Middleware should do as little work as possible and import only what is strictly necessary.

3/4/2026 · Updated 3/16/2026

You've invested time learning Next.js, built projects with it, and enjoyed its server-side rendering capabilities. But lately, development feels like wading through mud. Your build times are painfully slow, server functions don't work in parallel, and you find yourself mixing different techniques just to get things working. When you make a simple server-side change, you're forced to wait for the entire frontend to rebuild. … Next.js starts simple but gradually introduces complexity that becomes difficult to manage. As one developer put it: "The usual pain points, fundamental reason behind it being complexity of RSC and unmodular architecture with 'God level' build process that has to know every detail of entire app to work." This "God level" build process creates a system where changes in one area necessitate rebuilding large portions of your application. The React Server Components (RSC) implementation, while powerful, adds layers of complexity that can be challenging to debug and optimize. **Development Speed That Tests Your Patience** Perhaps the most common complaint revolves around development speed. The painfully slow development experience drives many developers away from Next.js: "The painfully slow development experience was what caused me to move away." When you're in a productive flow state, waiting for rebuilds after minor changes can be incredibly frustrating. This slowdown is particularly noticeable when making server-side changes, as one developer explains: … These limitations often manifest when trying to build complex backend functionality or when integrating with specific libraries. One developer shared their frustration with Next.js and Three.js compatibility: "After ruining a perfect weekend with Next/ThreeJs incompatibility, I'm over edge as well." 
**The Vercel Dependency Concern** Next.js is developed by Vercel, and while it's technically open-source, there's a growing concern about the tight coupling between the framework and Vercel's platform: "The way Vercel tightly couples NextJS with its own architecture is disappointing." This vendor lock-in concern makes some developers hesitant about building large-scale projects on Next.js, especially if they prefer flexibility in deployment options or are working in environments where Vercel isn't the optimal hosting solution. **Internationalization Headaches** If your application needs to support multiple languages, Next.js might present additional challenges: … This leads to an inconsistent approach where you might use Next.js server functions for some tasks but need to implement alternatives like tRPC for others. The result is often a fragmented codebase with multiple paradigms for handling server-side logic. Once you start using libraries like TanStack Query (formerly React Query) alongside Next.js, you're essentially maintaining parallel caching systems, which further adds to the complexity:

4/4/2025 · Updated 12/12/2025

But then two things hit me: 1. **The content wasn’t showing up immediately on slower connections.** - Users saw a blank screen for a moment before the data appeared. 2. **Google couldn’t crawl my blog posts properly.** - The SEO was practically nonexistent. My blog wasn’t showing up anywhere.

4/10/2025 · Updated 10/1/2025

For one, self-hosting Next.js in a traditional enterprise deployment pipeline is a pain. The framework doesn't lend itself to the common build-once-deploy-anywhere pattern. Because it tightly binds the output to environment variables and runtime settings, you often need a separate build per environment—a frustrating constraint for anyone used to promoting artifacts from staging to production with confidence. Then there's the middleware story. Middleware runs in a weird hybrid runtime that supports some Web APIs and a restricted subset of Node.js. This awkward middle ground feels more like an internal tool built to fit Vercel's infrastructure than a broadly useful feature. In fact, much of Next.js seems increasingly shaped by Vercel's hosting model—which is great if you're all-in on their platform, but less so if you're not. From a developer experience standpoint, things aren't much better. The documentation is sprawling, inconsistent, and full of "old vs new" decisions that beginners have to internalize. Should you use the App Router or Pages Router? `getServerSideProps` or a server component with `fetch`? When do you use the `use client` directive? How does caching even work? The answer, often, is "it depends," followed by hours of documentation spelunking. All of this results in a framework that feels overengineered and unnecessarily complex. For newcomers, the learning curve is steep. You don't just learn React—you also have to learn Next.js's routing model, its rendering modes, its proprietary caching behavior, its deployment quirks, and its middleware runtime.

5/17/2025 · Updated 6/15/2025