Sources
1577 sources collected
devnewsletter.com
State of TypeScript 2026 - The Dev Newsletter
The ecosystem faced sophisticated, automated threats across multiple npm compromises in 2025, alongside critical serialization vulnerabilities in frameworks like Next.js, such as the "React2Shell" RCE (CVE-2025-55182), a CVSS 10.0 vulnerability forcing a reevaluation of security models governing full-stack JavaScript. … ## Security and Supply Chain Pressure The npm ecosystem saw a chain of incidents (s1ngularity, debug/chalk, Shai‑Hulud) that exposed systemic weaknesses in maintainer auth and CI workflows. Security responses now emphasize granular tokens, publish-time 2FA, and stricter release policies. On the app side, React2Shell (CVE-2025-55182) and follow-on issues underscored the risks in RSC serialization, while Angular’s XSS and other runtime CVEs kept security upgrades at the top of 2025’s backlog. ## Standards and Language Trajectory TC39 withdrew Records & Tuples after the proposal failed to reach consensus, while Temporal began shipping in engines even as TypeScript’s standard libs still lack `Temporal` typings (track TypeScript issue #60164). The type-annotations proposal remains early-stage, but it frames the longer-term path: a JS runtime that can ignore type syntax while TS evolves as a superset. Combined with TypeScript 7’s upcoming breaking changes and API shifts, the standards story is about consolidation, stricter defaults, and fewer “magic” features at runtime.
effectivetypescript.com
A Small Year for tsc, a Giant Year for TypeScript - Effective TypeScript
The two big announcements in 2025 were: 1. Microsoft is rewriting the TypeScript compiler and language service in Go. 2. Node.js began supporting TypeScript natively. … When I was developing my inferred type predicates feature, I was struck that the TypeScript in `tsc` is written in a distinctive, low-level style. It often looks more like C than JavaScript. I started to think about how you could turn that into a faster `tsc`. ... The upshot is that, sometime next year, you'll update your packages and everything will get 10x faster. Slow compiler and language service performance has always been one of the biggest complaints about TypeScript. I've experienced this myself on large projects and I'm looking forward to the speed boost. My other hope is that, once the dust settles, we'll see a renewed focus on new language features. ... Impressive stuff! This should work with any version of Node.js after 22.18.0, which was released on July 31st, 2025. (This behavior has been available since Node 22.6.0 last year via `--experimental-strip-types`.) This is a big deal. Ever since Node came out in 2009, people have been running preprocessors in front of it to improve JavaScript in various ways. CoffeeScript was one of the first, then we started using "transpilers" like Babel to get early access to ES2015 features, and now we use TypeScript to get types. In all these cases, we're adding a tool to the stack. It has to be configured, you have to know it exists, and something might go wrong with it. In short, it adds friction. … 2. Since this works by stripping types, you can't use TypeScript's niche runtime features: enums, parameter properties, triple-slash imports, experimental decorators, and member visibility modifiers (`private`).
I've long advised against doing this (see Effective TypeScript Item 72: Prefer ECMAScript Features to TypeScript Features) and, as of TypeScript 5.8, there's an `--erasableSyntaxOnly` flag to keep you away from these. … I want to reiterate that this doesn't do any type checking! Node will happily run programs with clear type errors. `tsc` can strip type annotations, of course, but there are several other tools that do the same thing, like Bloomberg's aptly-named ts-blank-space. Node uses `@swc/wasm-typescript`, which uses WASM for speed.
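A minimal sketch of the distinction the excerpt draws. The non-erasable constructs in the comments are from the excerpt's own list; the erasable alternatives shown are common idioms, not taken from the article:

```typescript
// Rejected under --erasableSyntaxOnly (and by Node's type stripping),
// because each of these emits runtime code:
//
//   enum Level { Info, Warn }                          // enums
//   class Service { constructor(private db: string) {} } // parameter properties
//
// Erasable equivalents: a plain object plus a type alias, and an explicit field.
const Level = { Info: 0, Warn: 1 } as const;
type Level = (typeof Level)[keyof typeof Level];

class Service {
  db: string;
  constructor(db: string) {
    this.db = db;
  }
  describe(): string {
    return `service backed by ${this.db}`;
  }
}

console.log(new Service("postgres").describe()); // service backed by postgres
console.log(Level.Warn); // 1
```

Everything below the commented-out lines is plain JavaScript once the annotations are stripped, which is exactly what makes it runnable by Node directly.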
## Myth 1: It is typed and safe. TypeScript is neither typed nor safe. Being a typed language means the compiler knows, with certainty, the type of every single value, the binary representation of the value in memory, and how to access it. The TypeScript transpiler/compiler cannot and does not know this information; the dynamic nature of JavaScript/ECMAScript prevents it. What TypeScript actually does is try to figure that information out, or let you specify it, but it has no way of actually knowing. … ## Myth 2: It creates fewer bugs Contrary to what many people say, TypeScript does not produce fewer bugs than applications written in JavaScript. That's not my opinion; it's research. The study examined the 15% most popular applications on GitHub. In a nutshell, it found the following: - The use of "any" has a positive impact on bug resolution time. - TypeScript code contains as many or more bugs than JavaScript code. - It takes longer to fix bugs in projects written in TypeScript. … ## Myth 4: Future proof TypeScript is not in any way future proof. Everywhere you can run TypeScript today, you are actually running JavaScript. There is a proposal to add type annotations to the language, but if you read the issues and even the meeting notes, it is clear that: - Annotations will just be treated as comments; you cannot trust the types in any way. - Most TypeScript code will never work, because it uses TypeScript-specific features. - It does not offer any technical benefits. - Many people are against the proposal, some of them inside the committee itself. … ## Problem 1: Encourages named exports Almost every module should export only one thing. There are very few exceptions to this, for example configuration files or a file containing multiple constants. And if you export only one thing, the default export is the obvious candidate. The argument TypeScript advocates usually make is that named exports ease refactoring, but that is technically impossible.
If you use default exports, you hide the identifier/name of the thing you export. That is by definition better, because it removes a dependency: you do not need to know what name something has within a file, only the interface of the thing the file exports. It's great. It means less coupling and less "need to know". It honors the SOLID principles and is good software design. It also encourages the developer to give imported modules good names, and since each file has a different context, it makes sense that different names/identifiers may be used depending on the context. … ## Problem 2: Does not support imports with the .ts extension ECMAScript/JavaScript imports modules via import/dependency specifiers, and every platform I know of converts the dependency specifier to some path or URL it can fetch. If the exact location cannot easily be determined, the platform must perform some kind of traversal (visiting many files before it "knows"). This process is slow, and remarkably slow if it is done over a network. It can be fixed if the developer provides a more specific dependency specifier, for example by adding the file extension. But there is a problem: TypeScript does not support adding the .ts extension, and it does not rewrite dependency specifiers. This is extremely weird, actually: - The TypeScript team knows perfectly well that TypeScript is not supported natively anywhere in the world, so you cannot evaluate a file containing TypeScript code; it must first be converted to JavaScript. - The TypeScript team knows perfectly well that ".ts" is their own file extension, and that if someone imports a ".ts" file from a TypeScript file, it is extremely likely to be a TypeScript file. - The TypeScript transpiler itself rewrites all the code, has access to all dependencies in almost every case, and creates ".js" files by default. … ### Overuse of strict mode Strict mode does not produce fewer bugs, nor is it more readable. Sure, you may believe so, but the research says otherwise.
You should absolutely use "any" if it makes the code shorter and easier to read, yet there is an insane belief that it's bad. It's not! ### Overuse of union types If your function has multiple declarations, use multiple functions. There is a reason parseFloat and parseInt are different functions instead of a single "parseNumber". ### Overuse of types You should not care about the type a specific value has, but about the interface the value has. Instead of doing: … ``` const getName = ({ name = '' } = {}) => name; getName() // OK getName(123) // OK getName(true) // OK getName(undefined) // OK getName(Symbol('test')) // OK getName(null) // TypeError, but this is the only case ``` … ``` const add = (...args) => args.reduce((s, v) => s + v, args.shift()); ```
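The parseFloat/parseInt point can be sketched as follows (the names `parseNumber`, `toInt`, and `toFloat` are illustrative, not from the article): a single entry point with a union-typed mode flag forces internal branching, while two small functions keep each signature simple — the shape the platform itself chose.

```typescript
// Union/flag version: one entry point, branching inside.
function parseNumber(text: string, mode: "int" | "float"): number {
  return mode === "int" ? Number.parseInt(text, 10) : Number.parseFloat(text);
}

// Split version: mirrors the platform's own parseInt/parseFloat pairing.
const toInt = (text: string): number => Number.parseInt(text, 10);
const toFloat = (text: string): number => Number.parseFloat(text);

console.log(parseNumber("3.5", "float")); // 3.5
console.log(toInt("3.5")); // 3
```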
jeffbruchado.com.br
TypeScript in 2025: Why 38.5% of Devs Can't Live Without It
console.log(calculateTotal([ { price: 10 }, { cost: 20 } // Oops! Wrong property ])); // 10 - Silent bug! // And this explodes at runtime console.log(calculateTotal([ { price: 10 }, null // Runtime error! ])); ``` How many hours have you already lost debugging errors that only appear in production? How many bugs were caused by typos in property names? How many `undefined is not a function` have you seen in your life? TypeScript eliminates these problems **even before you run the code**. … const invalidCart = [ { id: '1', name: 'Keyboard', price: 150, quantity: 1 }, { id: '2', name: 'Mouse', cost: 80, quantity: 2 } // ❌ Error: Property 'price' is missing ]; // Compile-time error - null is not allowed const nullCart = null; // ❌ Error: Argument of type 'null' is not assignable to parameter of type 'CartItem[]' ``` The difference? **You discover the bug in 2 seconds in your editor, not in 2 hours debugging production**. … ### Compilation Time TypeScript adds a compilation step to the workflow. In large projects, this can take seconds or minutes. **Solution**: Modern tools like `esbuild`, `swc`, and `vite` drastically reduce build time. ... **Type-Only Imports**: Better tree-shaking and performance **Standard Decorators**: Native ECMAScript decorators **Better Inference**: TypeScript getting smarter and smarter **AI Integration**: IDEs using AI to suggest types automatically
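The excerpt's cart example can be reconstructed as a small typed sketch (the `CartItem` shape and function name are inferred from the snippet, so treat the details as assumptions):

```typescript
interface CartItem {
  id: string;
  name: string;
  price: number;
  quantity: number;
}

function calculateTotal(items: CartItem[]): number {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

const cart: CartItem[] = [
  { id: "1", name: "Keyboard", price: 150, quantity: 1 },
  { id: "2", name: "Mouse", price: 80, quantity: 2 },
];

console.log(calculateTotal(cart)); // 310

// Both of these are rejected at compile time rather than exploding at runtime:
// calculateTotal([{ id: "3", name: "Cable", cost: 20, quantity: 1 }]); // 'cost' is not 'price'
// calculateTotal(null); // 'null' is not assignable to 'CartItem[]'
```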
www.spectrocloud.com
Three common Kubernetes challenges — and how to solve them
Kubernetes has a pretty fearsome reputation for complexity in its own right (as we’ve discussed before). Learning it for the first time and standing up your first cluster, deploying your first application stack… it can be painful. But as any seasoned operator will tell you, it’s when you expand into running Kubernetes in production at scale that you come across the real pain! Let’s delve into three of the most common "growing pains" that we’ve seen in the field: - **Developer productivity** - **Multicluster headaches** - **The edge learning curve** We’ll not only explore the pain, but show you some ways to sidestep these pitfalls. ## Pain 1: Developer Productivity ... Despite the popularity of the term “DevOps,” most developers don’t have the skill set to be cloud native infrastructure or Kubernetes experts. They would much rather be coding features than managing infrastructure (as we have explored in this blog post). Developers just want to consume infrastructure elements such as Kubernetes clusters, and they have little tolerance for delays and hurdles in their way. Unfortunately, it’s not always easy to give them what they want. ... Firing up a new cluster takes work, costs money, and even if you have the capacity to jump right on the request, it also takes time. Which means your developers are kept waiting. … ## Pain 2: Multicluster Headaches Everyone starts with one Kubernetes cluster. But few teams today stay that way. This number quickly grows to three when you split development, staging and production environment clusters. And from there? Well, our research found that already half of those using Kubernetes in production have more than 10 clusters. Eighty percent expect to increase the number or size of their clusters in the next year. … That “future state” description should cover the entire cluster, from its infrastructure to the application workloads that run on top. ...
From the data center and cloud, you might start looking even further afield: to the edge. Organizations are increasingly adopting edge computing to put applications right where they add value: in restaurants, factories and other remote locations. But edge presents unique challenges. The hardware is often low power: Your clusters might be single-node devices. The connectivity to the site may be low bandwidth or intermittent, making remote management difficult. There’s a whole new frontier of security to consider, protecting against hardware tampering. And the killer: When we’re talking about restaurant chains or industrial buildings, compute might need to be deployed to hundreds or thousands of sites. There won’t be a Kubernetes expert at each site — or even a regular IT guy — to help onboard new devices or fix any configuration issues locally. These are big challenges, but there are solutions to help you.
www.cncf.io
5. K8s' Impact On Other...
Development teams continue to struggle with some aspects of Kubernetes. While they love the scalability, high availability, and fault tolerance it offers, many developers find that setting up, configuring, and managing Kubernetes is time consuming and resource intensive. The latest survey shows some areas where more than half of respondents believe Kubernetes has improved things for devs (CI/CD, deployment in general, auto scaling, and building microservices). There are other areas where it hasn’t helped, however. More than half of respondents shared that K8s had neither improved nor worsened architectural refactoring, security, application modularity, and overall system design. In some areas, notably cost (25%), architectural refactoring (15%), and security (13%), developers think Kubernetes has actually made things worse.
javascript-conference.com
TypeScript's Limitations and Workarounds
TypeScript, while a powerful programming language, has limitations that arise from its type system's attempt to manage dynamically typed JavaScript code. From handling return types and function expressions to the behavior of else statements, developers often encounter challenges when working with TypeScript files. Issues can emerge at compile time, especially when using generic functions, creating an instance, or managing type information. This article explores the blind spots in TypeScript, such as handling function objects, top-level constructs, and dynamically typed scenarios, offering insights into workarounds and practical solutions. … This issue extends beyond people to include their tools and machines. ... TypeScript is no exception: while it can accurately describe 99% of JavaScript features, one percent remains beyond its grasp. This gap doesn’t only consist of reprehensible anti-features. Some JavaScript features that TypeScript doesn’t fully understand can still be useful. Additionally, for some other features, TypeScript operates under assumptions that can’t always align with reality. Like any tool, TypeScript isn’t perfect; and we should be aware of its blind spots. This article addresses three of these blind spots, offers possible workarounds, and explores the implications of encountering them in our code. … In web development, where developers don’t have to manually create every object from a class constructor, this rule is very pragmatic. On one hand, it results in relatively minor semantic errors (Listing 3), but on the other, it can also lead to more significant pitfalls. **Listing 3:** Structural subtyping triggers an error … Regardless of how you approach it, rejecting parameters that are subtypes of a given type or enforcing an exact type at the type level isn’t possible. TypeScript has a blind spot here. But is this truly a problem?
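A short sketch of that blind spot (the names here are illustrative, not from the article's listings): excess-property checks catch fresh object literals, but any value behind a variable that is a structural subtype passes without complaint.

```typescript
interface Point2D {
  x: number;
  y: number;
}

function magnitude(p: Point2D): number {
  return Math.hypot(p.x, p.y);
}

// A fresh literal with an extra property is flagged:
// magnitude({ x: 3, y: 4, z: 5 }); // error: 'z' does not exist in type 'Point2D'

// But the same value behind a variable is a structural subtype and is accepted —
// there is no way to make TypeScript reject it at the type level.
const p3d = { x: 3, y: 4, z: 5 };
console.log(magnitude(p3d)); // 5 — the extra 'z' slips through silently
```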
… In our case, the key factor is that a *Set* and a *WeakSet* have very different semantics, even though the *WeakSet* API is a subset of the API of *Set*. In TypeScript’s type system, this means that *Set* is evaluated as a subtype of *WeakSet*, leading to the assumption of a relationship and substitutability where none exists. This blind spot in the type system leads us to solve a problem that isn’t actually a problem at all, and which we ultimately can’t resolve, especially at the type level. … Imperative programming doesn’t get any easier than this: you take a bunch of variables and manipulate them until the program reaches the desired target state. But as we all know, this programming style can be error-prone. Every *for* loop is an off-by-one error in training. So it makes sense to secure this code snippet as thoroughly as possible with TypeScript. … ### The problem with the imperative iteration
Before we add *Combine<K, V>* to the signature of *combine(keys, values)*, we should fire up TypeScript and ask what it thinks of the current state of our function (without return type annotation). The compiler is not impressed (Listing 23). **Listing 23:** Current state of combine() … The truth is, nothing is correct. The operation that *combine(keys, values)* performs is not describable with TypeScript in the way it’s implemented here. The problem is that the result object *obj* mutates from *{}* to *Combine<K, V>* in several intermediate steps during the *for* loop, and that TypeScript doesn’t understand such state transitions. The whole point of TypeScript is that a variable has exactly one type, and it can’t change types (unlike in vanilla JavaScript). However, such type changes are essential in scenarios where objects are iteratively assembled because each mutation represents a new intermediate state on the way from A to B.
TypeScript can’t model these intermediate states, and there is no correct way to equip the *combine(keys, values)* function with type annotations. … ### What to do with intermediate states that can’t be modeled?
The TypeScript type system is a huge system of equations in which the compiler searches for contradictions. This always happens for the program as a whole and without executing the program. This means that, by design, TypeScript can’t fully understand various language constructs and features, no matter how hard we try. ... The more pragmatic solution is to accept the possibilities and limitations of our tools and work with what we have. Unmodelable intermediate states are bound to occur when writing low-level imperative code. If the type system can’t represent them, we need to handle them in other ways. Unit tests can ensure that the affected functions do what they’re supposed to do, documentation and code comments are always helpful, and for an extra layer of safety, we can use runtime type-checking if needed.
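A sketch of the pattern being described. The article's `Combine<K, V>` may be more precise (e.g. tuple-aware); this version assumes a simple `Record`-shaped result, builds the object loosely typed because the intermediate states can't be modeled, and asserts the final shape once at the boundary:

```typescript
function combine<K extends PropertyKey, V>(
  keys: readonly K[],
  values: readonly V[],
): Record<K, V> {
  // TypeScript can't model 'obj' mutating step by step from {} toward the
  // final shape, so we build it loosely typed and assert once at the end.
  const obj: Record<PropertyKey, unknown> = {};
  for (let i = 0; i < keys.length; i++) {
    obj[keys[i]] = values[i];
  }
  return obj as Record<K, V>;
}

const merged = combine(["a", "b"] as const, [1, 2]);
console.log(merged.a, merged.b); // 1 2
```

The `as` assertion is exactly the kind of escape hatch the article suggests backing up with unit tests or runtime checks.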
www.abbacustechnologies.com
Node.js with TypeScript: Should You Make the Switch in 2025?
### The Problem: JavaScript in Large-Scale Backend Systems JavaScript’s flexibility is one of its strengths—but also one of its weaknesses in backend development. While the language allows rapid prototyping and development, it can be prone to **runtime errors, inconsistent coding practices, and lack of compile-time safety**. For example: - **No type enforcement:** Functions might receive unexpected parameters, leading to hidden bugs. - **Hard-to-maintain code:** As projects scale, ensuring consistency across a large codebase becomes challenging. - **Refactoring risks:** Without strict type definitions, changing a data structure or function signature can cause unexpected breakages. - **Poor tooling for large teams:** JavaScript lacks some of the safety nets found in strongly typed languages like Java, C#, or Go. In smaller projects, these issues are manageable. But in enterprise-scale systems with **hundreds of thousands of lines of code** and **distributed teams**, the margin for error narrows significantly. This is where TypeScript shines. ... **TypeScript** was designed to address these very pain points by adding: - **Static typing**: Catching errors before code runs. - **Interfaces and generics**: Enforcing contracts between different parts of the application. - **Enhanced tooling**: Better IntelliSense, auto-completion, and refactoring in IDEs. - **Compatibility**: Fully compiles to plain JavaScript, so it works wherever JavaScript does. … ### Common Concerns About the Switch Even with clear advantages, some developers and companies hesitate to adopt TypeScript for Node.js. Common concerns include: - **Learning curve**: Developers new to static typing may find it slower initially. - **Longer development time at the start**: Writing types feels slower for small scripts. - **Refactoring cost**: Migrating a large JavaScript codebase to TypeScript requires careful planning.
- **Overhead for small projects**: For quick prototypes, TypeScript might feel like overkill. … ### 10. Common Migration Pitfalls and How to Avoid Them **Pitfall 1:** Trying to achieve 100% perfect types from day one. - **Solution:** Allow temporary any types and refine over time. **Pitfall 2:** Ignoring type coverage metrics. - **Solution:** Use tools like TypeStat to track progress. **Pitfall 3:** Forgetting about performance impact in build times. … #### a) Over-Engineering for Small Projects TypeScript adds a layer of complexity that may be unnecessary for very small, short-lived projects. For quick prototypes, pure JavaScript might still be faster. #### b) Developer Skill Gaps Not all JavaScript developers are comfortable with static typing. Companies may face **longer hiring cycles** when looking for developers proficient in both Node.js and TypeScript. #### c) Tooling Overhead Although TypeScript tooling is mature, it still adds build steps, compilation time, and sometimes complex configurations that can be frustrating for newcomers. … ### 8. Predictions for 2025–2030 Here’s how the next few years might unfold: - **2025–2026:** TypeScript solidifies its dominance in backend development, with most new Node.js frameworks offering TS-first design. - **2027–2028:** AI-powered code generation fully integrates with TypeScript to produce “zero-runtime-error” applications for many use cases.
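The gradual-migration advice above (temporary `any`, refine over time) can be sketched as a tsconfig progression. All option names below are real TypeScript compiler options; the phasing itself is a suggestion, not from the article:

```jsonc
{
  "compilerOptions": {
    // Phase 1: let .js and .ts coexist so files can migrate one at a time.
    "allowJs": true,
    "checkJs": false,
    // Phase 2: turn on strictness incrementally instead of all of "strict".
    "noImplicitAny": false, // flip to true once the worst offenders are typed
    "strictNullChecks": true,
    // Phase 3: once type coverage is high, replace the two lines above with:
    // "strict": true
    "outDir": "dist"
  },
  "include": ["src"]
}
```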
bitskingdom.com
Common TypeScript Pitfalls & FAQ | 2025
No matter how experienced you are with TypeScript, you’ll eventually encounter confusing type errors, tricky edge cases, or design dilemmas. Over time, I’ve run into (and solved) most of these headaches in production code. This chapter is your reference guide for avoiding common anti-patterns, troubleshooting recurring issues, and getting quick answers to frequent questions. ## 1. Pitfall: Misusing any in TypeScript **The Problem:** Using any disables type safety entirely. let data: any = fetchSomething(); data.toUpperCase(); // runtime error if data is a number **The Fix:** Prefer unknown when the type is unclear—it forces narrowing. function handle(input: unknown) { if (typeof input === "string") { return input.toUpperCase(); } } … **3. Pitfall: Forgetting to Narrow Union Types** **The Problem:** Not narrowing types leads to errors. type Result = string | number; function handle(result: Result) { console.log(result.toFixed(2)); // ❌ Error } **The Fix:** Use type guards. function handle(result: Result) { if (typeof result === "number") { console.log(result.toFixed(2)); } } **4. Pitfall: Forgetting to Export Types Across Files** Always export types you intend to reuse. // types.ts export type User = { name: string }; ## 5. Pitfall: Confusing interface vs type **Quick Rule of Thumb:** **interface**→ Use for object shapes, extension, OOP-style patterns. **type**→ Use for unions, primitives, mapped types, aliases, and function signatures. **6. Pitfall: Over-Engineering with Complex Types** Avoid writing overly “smart” or recursive conditional types. - Start simple. - Use runtime helper functions instead of forcing everything into types. - Document trade-offs for clarity. **7. Pitfall: Not Using TypeScript Utility Types** Lean on built-in utilities like: - Partial<T> - Required<T> - Pick<T, K> - Omit<T, K> - Record<K, T> ## 8. Pitfall: Skipping strict Mode Without strict mode, you’ll miss valuable errors.
Always enable it: { "compilerOptions": { "strict": true } } Includes checks like noImplicitAny, strictNullChecks, and more. **9. Pitfall: Ignoring .d.ts Files for External JS** … **Quick Recap** - Avoid common mistakes like misusing any, skipping null checks, or over-engineering types. - Use strict mode and utility types to simplify your workflow. - Stick to simple, clear, and maintainable type designs. - Reference FAQs for quick solutions to recurring TypeScript questions. **From Zero to Production: Closing the Strictly Typed Series**
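The first few pitfalls compose into one small runnable sketch (the function names are illustrative): `unknown` instead of `any` at the boundary, plus narrowing a union before calling number-only methods.

```typescript
type ApiResult = string | number;

function format(result: ApiResult): string {
  // Narrow the union before calling number-only methods like toFixed.
  if (typeof result === "number") {
    return result.toFixed(2);
  }
  return result.toUpperCase();
}

function handle(input: unknown): string {
  // 'unknown' forces a check here; 'any' would let a bad call through silently.
  if (typeof input === "string" || typeof input === "number") {
    return format(input);
  }
  return "unsupported input";
}

console.log(format(3.14159)); // "3.14"
console.log(handle("ok")); // "OK"
console.log(handle(null)); // "unsupported input"
```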
www.dennisokeeffe.com
Composition Over Inheritance
We will be covering a few topics where each is ramping up into the next: 1. The prerequisites - Code volume - Control flow - State management 2. The principles - Composition over inheritance - Parse, don't validate - Never throw errors - Metadata - Define your source of truth - Let controllers tell you everything - Don't emulate network infrastructure - Don't let AI take the driver's seat - Generate as much code as possible - Write to refactor programmatically - Don't go overkill on abstraction layers … Over-bloat and unnecessary abstractions may be easier to imagine. If you're working in the codebase where you need to make jumps to ten different definitions in order to understand the inheritance chain or follow the path of the business logic, then you have probably over-engineered the shit out of it. Principles like "composition over inheritance" and "parse, don't validate" can help mitigate volume creep (which I touch on in their own section), but there are some general guiding principles that I recommend to get around this: … 1. Unpredictable Behavior: When state can be modified from multiple places without clear patterns, applications become unpredictable. Developers can't easily reason about what will happen when code executes. 2. Debugging Nightmares: Without clear state flows, finding the root cause of bugs becomes extremely difficult. A bug might manifest in one component but originate from state modifications elsewhere. 3. Technical Debt Accumulation: Poor state management compounds over time through things like duplicated state, stale state and side-effects. 4. Readability and Maintainability Issues: New developers struggle to understand applications. 5. Performance Problems: Unnecessary re-renders, Memory leaks, Network request redundancy.
Although this post won't spend too much time on state management, it is also partly related to these topics: … In the above case, we are throwing errors as stand-ins for what could be handled as expected errors. A non-exhaustive list of problems with this: … . A developer cannot grok from our types what can go wrong in an expected way. In my experience as well, this approach also doesn't really happen in practice. Not all thrown errors are caught and managed correctly, so you end up with hard-to-follow try-catch behavior littered throughout the implementation. **Do**: … . 2. There are no try-catch clauses. In the case where an error is thrown from something **unexpected**, we consider this a **defect** and should have systems in place to capture that error and inform the developers (not shown here). 3. Our controller can have an easier time managing responses at the boundary, while our developers working on this can learn a lot about this endpoint and possible responses without diving into the business logic. If you look at the `Data` and expected error classes, you'll notice the `_tag` property (which I've adopted from EffectTS). I'll talk more about this in the metadata section. I should finish here by saying that "never" here is a bit strong. I've recently heard an engineering manager use the quote "use exceptions for exceptional circumstances", and I find that to be a useful quote around throwing errors in TypeScript. Do so sparingly and with good reason.
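A minimal sketch of the tagged-result style the post describes. The `_tag` convention is from the post (borrowed from EffectTS); the type and function names here are illustrative:

```typescript
type UserNotFound = { _tag: "UserNotFound"; id: string };
type Forbidden = { _tag: "Forbidden"; reason: string };
type Ok<A> = { _tag: "Ok"; value: A };

type GetUserResult = Ok<{ id: string; name: string }> | UserNotFound | Forbidden;

// Expected failures are values in the return type — no throwing in business logic.
function getUser(id: string): GetUserResult {
  if (id === "42") {
    return { _tag: "Ok", value: { id, name: "Ada" } };
  }
  return { _tag: "UserNotFound", id };
}

// The controller can switch exhaustively on _tag: every expected outcome is
// visible in the types, with no try/catch scattered through the implementation.
function toHttpStatus(result: GetUserResult): number {
  switch (result._tag) {
    case "Ok":
      return 200;
    case "UserNotFound":
      return 404;
    case "Forbidden":
      return 403;
  }
}

console.log(toHttpStatus(getUser("42"))); // 200
console.log(toHttpStatus(getUser("7"))); // 404
```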
www.holgerscode.com
My Take: Hype Vs. Reality
## The Security Question: Server Actions and Beyond Let's address the elephant in the room: **yes, Next.js has had security concerns**, particularly around Server Actions. In 2024 and 2025, several security researchers highlighted potential vulnerabilities in how Server Actions could be exploited if developers weren't careful about authorization checks. These were real concerns that the Next.js team took seriously, and they've implemented multiple layers of protection:
www.propelauth.com
It's not just you, Next.js is getting harder to use - PropelAuth
I wrote a blog post the other day about how Next.js Middleware can be useful for working around some of the restrictions imposed by server components. ... From my perspective, Next.js’ App Router has two major problems that make it difficult to adopt: … While exposing the request/response is very powerful, these objects are inherently **dynamic** and affect the entire route. This limits the framework's ability to implement current (caching and streaming) and future (Partial Prerendering) optimizations. > To address this challenge, we considered exposing the request object and tracking where it's being accessed (e.g. using a proxy). But this would make it harder to track how the methods were being used in your code base, and could lead developers to unintentionally opting into dynamic rendering. > Instead, we exposed specific methods from the Web Request API, unifying and optimizing each for usage in different contexts: Components, Server Actions, Route Handlers, and Middleware. … ... It’s not that it’s necessarily incorrect - it’s unexpected. That original post also mentioned a few other subtleties. One common footgun is in how cookies are handled. You can call `cookies().set("key", "value")` anywhere and it will type-check, but in some cases it will fail at runtime. Compare these to the “old” way of doing things where you got a big `request` object and could do anything you wanted on the server, and it’s fair to say that there’s been a jump in complexity. I also need to point out that the “on-by-default” aggressive caching is a rough experience. I’d argue that way more people expect to opt-in to caching rather than dig through a lot of documentation to figure out how to opt-out.
… ## Just because something is recommended, doesn’t mean it’s right for you One of my biggest issues with the App Router was just this: Next.js has officially recommended that you use the App Router since before it was honestly ready for production use. Next.js doesn’t have a recommendation on whether TypeScript, ESLint, or Tailwind are right for your project (despite providing defaults of Yes on TS/ESLint, No to Tailwind - sorry Tailwind fans), but absolutely believes you should be using the App Router. The official React docs don’t share the same sentiment. They currently recommend the Pages Router and describe the App Router as a “Bleeding-edge React Framework.” When you look at the App Router through that lens, it makes way more sense. Instead of thinking of it as the recommended default for React, you can think of it more like a beta release. The experience is more complicated and some things that were easy are now hard/impossible, but what else would you expect from something that’s still “Bleeding-edge?”