Sources
1577 sources collected
blog.openreplay.com
Things to Stop Doing in JavaScript in 2025 - OpenReplay Blog JavaScript moves fast. Code patterns that felt modern three years ago now ship unnecessary bytes, ignore platform improvements, or rely on deprecated APIs. If you’re building production web apps in 2025, here are the JavaScript anti-patterns to avoid—and what to reach for instead. ## Key Takeaways - Deprecated features like `with` statements, `__proto__`, and `String.prototype.substr` should be replaced with modern alternatives. - Legacy libraries such as jQuery, Moment.js, and Lodash can often be replaced with native browser APIs and ES2023–ES2025 features. - Modern CSS now handles many tasks that previously required JavaScript, including container queries, the `:has()` selector, and scroll-linked animations. … - **`with` statements** — Banned in strict mode since ES5. They create ambiguous scope and break optimizations. - **`__proto__`** — Use `Object.getPrototypeOf()` and `Object.setPrototypeOf()` instead. - **`String.prototype.substr`** — Deprecated. Use `slice()` or `substring()`. - **Legacy RegExp statics** like `RegExp.$1` — These are non-standard and unreliable across engines. These aren’t edge cases. Linters flag them for good reason. Modern JavaScript patterns assume you’ve moved on. ## Stop Reaching for Legacy Libraries by Default jQuery, Moment.js, Lodash, and RequireJS solved real problems—in 2015. Today, the platform covers most of their use cases natively. **What to do instead:** - **DOM manipulation** — `querySelector`, `querySelectorAll`, and modern DOM APIs handle what jQuery once did. - **Date handling** — The Temporal API is coming. Until full support lands, use date-fns or native `Intl.DateTimeFormat`. - **Utility functions** — ES2023–ES2025 features like `Object.groupBy()`, new Set methods (`.union()`, `.intersection()`), and iterator helpers (`.map()`, `.filter()` on iterators) replace most Lodash imports. 
- **Module loading** — Native ESM and `import()` make RequireJS and AMD obsolete. Shipping a 30KB library for functionality the browser provides free is a frontend mistake to stop in 2025. … - **`RegExp.escape()`** — Safely escape strings for regex patterns. - **Import attributes and JSON modules** — `import data from './config.json' with { type: 'json' }`. - **Top-level `await`** — Use it in modules without wrapping everything in async IIFEs. … ## Stop Using JavaScript for What CSS Now Handles Modern CSS has absorbed functionality that once required JavaScript. Using JS for these creates unnecessary complexity and hurts performance. **Let CSS handle:** - **Container queries** — Responsive components without `ResizeObserver` hacks. - **`:has()` selector** — Parent selection without DOM traversal. - **Scroll-linked animations** — `animation-timeline: scroll()` replaces scroll event listeners. - **View transitions** — Native page transition effects. … ## Stop Using Mutation Events and Third-Party Cookie Assumptions **Mutation Events** (`DOMSubtreeModified`, `DOMNodeInserted`) are deprecated and perform poorly. Use `MutationObserver` instead—it’s been stable for over a decade. **Third-party cookies** are effectively dead for tracking and cross-site auth flows. Chrome’s deprecation timeline has shifted, but Safari and Firefox blocked them years ago. Build authentication flows with first-party cookies, tokens, or federated identity. Don’t architect around assumptions that break in half your users’ browsers. ## Stop Starting New Projects with CommonJS If you’re writing browser code in 2025 and reaching for `require()` or heavy webpack configurations, pause. Native ESM works everywhere that matters. Lighter bundlers like Vite and esbuild handle the remaining edge cases with minimal configuration. CommonJS still has its place in Node.js libraries targeting older environments. For new frontend code, it’s legacy baggage. 
## Conclusion JavaScript best practices in 2025 aren’t about chasing trends—they’re about recognizing when the platform has caught up to your dependencies. Every deprecated feature you remove, every unnecessary library you drop, and every CSS-native solution you adopt makes your code smaller, faster, and easier to maintain. Audit your current projects. Check your imports. Question whether that utility function needs a library or just a native method you haven’t learned yet. The modern JavaScript patterns are already here—you just have to use them.
news.ycombinator.com
Some features that every JavaScript developer should know in 2025… Swapping variables: Only do this if you don't care about performance (the advice is written like using the array swap hack is categorically better). ... Now then the question is whether it is optimised so. And that’s the problem with categoric statements in a language like JavaScript: if you make arguments about fine performance things, they’re prone to change, because JavaScript performance is a teetering stack of flaming plates liable to come crashing down if you poke it in the wrong direction, which changes from moment to moment as the pile sways.
www.toptal.com
The 10 Most Common JavaScript Issues Developers Face At first, JavaScript may seem quite simple. Yet the language is significantly more nuanced, powerful, and complex than one would initially be led to believe. Many of JavaScript’s subtleties lead to a number of common problems—10 of which we discuss here—that keep code from behaving as intended. It’s important to be aware of and avoid these pitfalls in one’s quest to become a master JavaScript developer. … ## JavaScript Issue No. 1: Incorrect References to `this` There’s no shortage of confusion among JavaScript developers regarding JavaScript’s `this` keyword. As JavaScript coding techniques and design patterns have become increasingly sophisticated over the years, there’s been a corresponding increase in the proliferation of self-referencing scopes within callbacks and closures, which are a fairly common source of … ## JavaScript Issue No. 2: Thinking There Is Block-level Scope As discussed in our JavaScript Hiring Guide, a common source of confusion among JavaScript developers (and therefore a common source of bugs) is assuming that JavaScript creates a new scope for each code block. Although this is true in many other languages, it is … ## JavaScript Issue No. 3: Creating Memory Leaks Memory leaks are almost inevitable issues in JavaScript if you’re not consciously coding to avoid them. There are numerous ways for them to occur, so we’ll just highlight two of their more common occurrences. … ## JavaScript Issue No. 4: Confusion About Equality One JavaScript convenience is that it will automatically coerce any value being referenced in a boolean context to a boolean value. 
But there are cases in which this can be as confusing as it is convenient. The following expressions, for example, are known to be troublesome for many a JavaScript developer: … ## JavaScript Issue No. 5: Inefficient DOM Manipulation JavaScript makes it relatively easy to manipulate the DOM (i.e., add, modify, and remove elements), but does nothing to promote doing so efficiently. A common example is code that adds a series of DOM elements one at a time. Adding a DOM element is an expensive operation, and code that adds multiple DOM elements consecutively is inefficient and likely to perform poorly. … ## JavaScript Issue No. 6: Incorrect Use of Function Definitions Inside `for` Loops Consider this code:

```
var elements = document.getElementsByTagName('input');
var n = elements.length; // Assume we have 10 elements for this example
for (var i = 0; i < n; i++) {
  elements[i].onclick = function() {
    console.log("This is element #" + i);
  };
}
```

… ## JavaScript Issue No. 7: Failure to Properly Leverage Prototypal Inheritance A surprisingly high number of JavaScript developers fail to fully understand, and therefore fully leverage, the features of prototypal inheritance. Here’s a simple example:

```
BaseObject = function(name) {
  if (typeof name !== "undefined") {
    this.name = name;
  } else {
    this.name = 'default';
  }
};
```

… ## Understanding the basics ### What are the common errors in JavaScript? The common errors that developers make while coding in JavaScript include mistaken thinking about how the “this” keyword works, incorrect assumptions about block scoping, and a failure to avoid memory leaks. JavaScript’s evolution over time has left many pitfalls if old coding patterns are followed.
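The Issue No. 6 loop pitfall above (every handler sees the final value of `i`) has a one-token fix in modern JavaScript: block-scoped `let` gives each iteration its own binding. A minimal sketch without the DOM, using stored callbacks in place of click handlers:

```javascript
// With `var`, all callbacks close over the same `i`,
// so every one reports the loop's final value.
const brokenHandlers = [];
for (var i = 0; i < 3; i++) {
  brokenHandlers.push(() => `This is element #${i}`);
}
console.log(brokenHandlers[0]()); // "This is element #3"

// With `let`, each iteration gets a fresh binding of `j`.
const fixedHandlers = [];
for (let j = 0; j < 3; j++) {
  fixedHandlers.push(() => `This is element #${j}`);
}
console.log(fixedHandlers[0]()); // "This is element #0"
```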
cerebrix.org
The Case for Less JavaScript in 2025 - Cerebrix For the past decade, JavaScript has eaten the web. From jQuery to Angular, React, Vue, Next, Astro, Svelte, Solid — every year delivered a new framework promising: ✅ Better DX (developer experience) ✅ Faster state updates ✅ Easier component composition The result? Websites loaded with hundreds of kilobytes of JavaScript, shipping *applications* even for trivial brochure pages. ## The Cracks in the Foundation In 2025, more engineers are waking up to a simple truth: **JavaScript is the most fragile part of the stack.** ✅ It depends on the user’s runtime (the browser) ✅ Network failures or partial asset loads break experiences ✅ Third-party scripts (ads, analytics) compete for CPU ✅ Complex hydration chains can cause subtle bugs that break accessibility No matter how advanced our bundlers or frameworks get, **shoving megabytes of JS down the pipe is a user tax** — especially for anyone on slower or older devices. ## The Performance Reality The median JavaScript payload per page in 2024 exceeded **630KB**, according to the HTTP Archive. The median mobile Lighthouse performance score was **55/100** in 2024 (web.dev). Largest Contentful Paint delays correlated directly with JS execution blocking render. Sure, we can blame “bad code,” but the deeper reason is cultural: we default to JavaScript for everything. … ## The Developer’s Cognitive Load Framework-heavy apps add complex build pipelines, state synchronization, client/server data-fetching race conditions, and TypeScript gymnastics to describe the world. Yes, these are manageable — but for many use cases, they’re simply *unnecessary*. 2025 should be about **shipping fewer moving parts**, not more.
centizen.substack.com
What Developers Love and Hate About JavaScript: A Comprehensive Overview **Constant evolution:** JavaScript continues to evolve with regular ECMAScript updates, introducing new features and improvements that keep the language fresh and modern. **What developers hate about JavaScript** **Browser compatibility issues:** JavaScript works across all modern browsers, but older browsers (like Internet Explorer) can still create compatibility issues that require extra work to resolve. **Dynamic typing can lead to bugs:** While dynamic typing is convenient, it can also lead to hard-to-debug errors, especially in larger codebases where type-related issues may go unnoticed. **Callback hell:** When dealing with multiple nested callbacks, JavaScript code can become unreadable and hard to maintain, leading to the dreaded “callback hell.” **Security vulnerabilities:** JavaScript is vulnerable to attacks like Cross-Site Scripting (XSS), which requires developers to be extra cautious and implement strong security measures. **Performance problems in large apps:** For large-scale applications or data-heavy tasks, JavaScript can experience performance bottlenecks, especially when handling operations on the client side. **Quirky syntax:** JavaScript’s quirks, such as automatic type coercion and implicit global variables, often cause unexpected behavior and confusion for developers. **Inconsistent browser rendering:** Different browsers interpret JavaScript slightly differently, which can lead to inconsistencies in rendering and functionality across platforms. **Scaling issues:** JavaScript can be difficult to scale, especially as applications grow in complexity. Maintaining modularity and performance can become challenging. **Asynchronous programming confusion:** Despite improvements with async/await, asynchronous JavaScript code can still be tricky for beginners to understand and manage, especially in complex use cases. 
**Too many tools and frameworks:** JavaScript’s vast ecosystem can be overwhelming, with numerous frameworks, libraries, and tools to choose from. It can be hard to decide which one is best suited for the job. **Conclusion**
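A hypothetical sketch of the "callback hell" shape described above, next to the flattened async/await equivalent (the `loadUser`/`loadOrders` functions are invented stand-ins for any asynchronous steps):

```javascript
// Callback pyramid: each step nests inside the previous one,
// and error handling must be repeated at every level.
function loadUserCb(id, cb) { setTimeout(() => cb(null, { id }), 0); }
function loadOrdersCb(user, cb) { setTimeout(() => cb(null, [user.id, 42]), 0); }

loadUserCb(1, (err, user) => {
  if (err) return console.error(err);
  loadOrdersCb(user, (err, orders) => {
    if (err) return console.error(err);
    console.log(orders); // nesting deepens with every extra step
  });
});

// The same flow flattened with promises and async/await:
// sequential reads, one try/catch for errors if needed.
const loadUser = (id) => Promise.resolve({ id });
const loadOrders = (user) => Promise.resolve([user.id, 42]);

async function main() {
  const user = await loadUser(1);
  const orders = await loadOrders(user);
  console.log(orders); // [ 1, 42 ]
}
main();
```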
## Common Challenges Faced by JavaScript Developers: Performance Concerns ### 1. Slow Loading Times One of the most common performance issues faced by JavaScript developers is slow loading times. When a website or web application takes too long to load, users are more likely to abandon it, leading to decreased user engagement and potential revenue loss. According to Google, 53% of mobile users will abandon a website if it takes more than three seconds to load. … ### 2. Memory Leaks Memory leaks are another common performance concern for JavaScript developers. A memory leak occurs when a program fails to release memory that is no longer being used, leading to decreased performance and potential crashes. According to a study by IBM, memory leaks can account for up to 75% of all reported JavaScript errors. To prevent memory leaks, JavaScript developers can use tools such as Chrome DevTools to analyze memory usage and identify potential leaks. ... ### 3. Inefficient Code Writing inefficient code is a common mistake that can significantly impact the performance of a JavaScript application. Inefficient code can lead to increased CPU usage, longer processing times, and decreased overall performance. According to a survey by Stack Overflow, 64.9% of developers consider optimizing code performance to be a top priority. To improve code efficiency, JavaScript developers can use techniques such as code splitting, lazy loading, and reducing the number of DOM manipulations. ... ### 4. Cross-Browser Compatibility Ensuring cross-browser compatibility is another challenge faced by JavaScript developers, as different browsers may interpret code differently and lead to inconsistencies in performance. According to StatCounter, Google Chrome is the most popular web browser, with a market share of over 65%. 
To address cross-browser compatibility issues, JavaScript developers should use feature detection rather than browser detection, test their code on multiple browsers and devices, and utilize tools like Babel to transpile code for older browsers. Additionally, keeping up to date with the latest web standards and best practices can help ensure optimal performance across different browser environments. … As JavaScript continues to evolve and become more complex, developers must be vigilant in addressing performance concerns to ensure optimal user experience and business success. By proactively addressing common challenges such as slow loading times, memory leaks, inefficient code, cross-browser compatibility, and lack of performance monitoring, JavaScript developers can improve the performance and scalability of their applications. Remember, performance optimization is an ongoing process, and developers should continuously monitor and improve the performance of their code to deliver a seamless and efficient user experience. … ### Lack of Strong Typing One of the biggest challenges faced by JavaScript developers is the lack of strong typing in the language. Unlike languages like Java or C++, JavaScript is dynamically typed, which means that variables do not have a specified data type. This can lead to potential errors and bugs in the code, especially in larger projects where keeping track of data types becomes more important. … ### Browser Compatibility Another common challenge faced by JavaScript developers is browser compatibility. Different browsers may interpret JavaScript code differently, leading to inconsistencies in how a website functions across different browsers. This can be particularly challenging for developers who need to ensure that their websites work seamlessly on various browsers such as Chrome, Firefox, Safari, and Edge. 
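The feature-detection advice above can be sketched like this: test for the capability itself rather than sniffing the user-agent string. The fallback function here is a hypothetical stand-in, not a real library API:

```javascript
// Browser detection (fragile): breaks when user agents change or lie.
// if (navigator.userAgent.includes('Chrome')) { /* ... */ }

// Feature detection (robust): check that the API actually exists.
function copyText(text) {
  if (typeof navigator !== 'undefined' && navigator.clipboard?.writeText) {
    return navigator.clipboard.writeText(text); // modern async Clipboard API
  }
  return legacyCopy(text); // hypothetical fallback for older browsers
}

function legacyCopy(text) {
  // Stand-in fallback; a real one might use a hidden textarea.
  return Promise.resolve(`copied (fallback): ${text}`);
}
```

The same pattern applies to any newer API: one `if` on the capability, one fallback path, and the code keeps working as browsers evolve.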
To address this challenge, developers can use tools like Babel, which is a JavaScript compiler that converts the latest ECMAScript features into code that is compatible with older browsers. ... … ## The Top Challenges Faced by JavaScript Developers ### Compatibility Issues One of the biggest challenges faced by JavaScript developers is ensuring compatibility across different browsers and devices. With so many options available to users, from Chrome to Safari to Firefox, developers must test their code rigorously to ensure it works seamlessly on all platforms. According to StatCounter, as of July 2024, Chrome has a market share of over 65%, followed by Safari with nearly 18% and Firefox with around 3%. This means that developers must pay special attention to how their code functions across these browsers to provide a consistent user experience. … ## Comments (15) Yo, one of the biggest challenges we face as JavaScript developers is handling asynchronous code. With callbacks, promises, and async/await, it can get real messy real fast. I feel you, man. And debugging JavaScript can be a nightmare. With its dynamic typing and loose rules, it's like a landmine waiting to explode.
2023.stateofjs.com
Features - State of JavaScript 2023 This year we put special emphasis on identifying developer pain points with JavaScript. As one might expect, the **lack of native typing** and **browser inconsistencies** led their respective charts, each affecting nearly a third of developers. [Chart: “What are your main pain points regarding the JavaScript language?” Ranked answers include browser support, TypeScript support, dates, performance, error handling, choice overload, async programming, and security.] [Chart: Browser APIs pain points. Ranked answers include browser support, Safari, lack of documentation, dates, excessive complexity, performance, and choice overload.]
www.wearedevelopers.com
The State of WebDev AI 2025 Results: What Can We Learn? For coding assistants and code generation tools, the biggest pain points were hallucinations and inaccuracies, context and memory limitations, and intrusive suggestions and poor code quality (13%). Most developers can relate to the frustration of using a hallucinating AI while it writes poor-quality code, or forgets what it did a few moments before and writes nonsensical code, so it’s interesting to see these popping up as pain points here.
www.geeksforgeeks.org
5 Common Mistakes to Avoid When Using AWS S3 - GeeksforGeeks Table of Contents - What is AWS S3? - 5 Common Mistakes to Avoid When Using AWS S3 - 1. Misconfiguration of Bucket Permissions - 2. Poor Management of Storage Classes - 3. Ignoring Data Encryption - 4. Failing to Turn on Versioning - 5. Not Monitoring Costs and Usage … ## 5 Common Mistakes to Avoid When Using AWS S3 ### 1. Misconfiguration of Bucket Permissions One of the most impactful and common errors in the use of **AWS S3** is misconfiguration of permissions on buckets, which can expose sensitive data to the public. S3 buckets may hold a tremendous amount of sensitive information, including client information, financial documents, or private content. If these permissions are not set carefully, unauthorized users might gain access, causing data leakage and reputation loss. **Common Scenarios:** - **Public Buckets**: Users more often than not leave their S3 buckets open to the world, allowing anyone on the internet to list and read the data inside. - **Overly permissive access control lists**: Configuring overly permissive ACLs can lead to unnecessary exposure of your data. … ### 2. Poor Management of Storage Classes **AWS S3** offers different storage classes for various use cases, from most active to archival storage. However, many users neglect to match their data to the right class for its access patterns, which can lead to large cost inefficiencies. **Common Scenarios:** - By default, AWS S3 places data in the S3 Standard storage class. S3 Standard is a great fit for data in hot use, but it's relatively expensive for infrequently accessed data. - **Using Inappropriate Storage Classes**: Not migrating data to storage classes such as **S3 Intelligent-Tiering, S3 Glacier, and S3 Glacier Deep Archive**, where possible, can lead to unnecessary charges. 
… - **Intelligent-Tiering**: With S3 Intelligent-Tiering, data is moved automatically between the frequent and infrequent access tiers based on your usage patterns, so you pay only for what you use when you use it, without having to manage data transitions manually. ... ### 5. Not Monitoring Costs and Usage AWS S3 pricing is flexible, but if not monitored closely, costs can spiral out of control fast. In fact, many users fail to monitor their S3 usage, leading to unexpected bills, especially when volumes are big or traffic is heavy. **Common Scenarios:** - **Unnecessary Data Storage**: Keeping infrequently accessed data in S3 for a long period without deleting it or relocating it to cheaper storage classes drives up costs. - **Expensive Data Transfers**: Large volumes of data transferred out of S3 are expensive, so applications with heavy outbound traffic incur very high costs.
There are some challenges when using S3 at scale. Cost management requires careful monitoring, especially with data egress, request rates, and long-term storage. From an administrative standpoint, misconfigured permissions or overly permissive policies can pose risks if not governed properly, which makes strong internal controls and regular reviews essential. Review collected by and hosted on G2.com. … What do you dislike about Amazon Simple Storage Service (S3)? The interface is just... too much. If you're not a pro, finding specific settings in the console feels like a puzzle. Also the pricing is super confusing. It's hard to predict how much we're gonna pay at the end of the month because of all those extra fees for data transfer and requests. I wish it was more straightforward for a regular user who just wants to store stuff. … What do you dislike about Amazon Simple Storage Service (S3)? Pricing can get complicated for large-scale deployments, particularly when there are frequent data transfers and retrievals. Additionally, the learning curve for new users may be quite steep if they are not already familiar with AWS services. … What do you dislike about Amazon Simple Storage Service (S3)? While S3 is extremely reliable, managing complex permissions and policies can be challenging, especially for beginners. Costs can also add up quickly if you rely heavily on versioning or perform frequent data transfers. In addition, working with very large buckets containing millions of objects can feel cumbersome without solid organization and well-defined lifecycle policies. … Amazon Simple Storage Service (S3) has a complex permission model (ACLs, bucket policies, IAM) that can be confusing. It has no built-in folder-level move or rename. 
It also has unexpected costs if you forget lifecycle policies or have high request rates. … What do you dislike about Amazon Simple Storage Service (S3)? The pricing model, while flexible, can be confusing—especially when dealing with multiple storage classes, data retrieval fees, and transfer costs. Also, managing permissions with bucket policies and IAM roles can get complex and error-prone without clear documentation or experience. A better built-in UI for file management in the AWS Console would also improve user experience. … What do you dislike about Amazon Simple Storage Service (S3)? What I like least about Amazon S3 is the pricing structure, which can become complex, especially for those working with large volumes of data and multiple requests. It would also be interesting to have more detailed usage monitoring dashboards natively, without relying on integrations with other tools.
dev.to
Core S3 Performance... This means that S3 wasn’t designed to handle low-latency, high-frequency access or POSIX-style workloads. It’s missing crucial file system features like atomic renames, file locking, shared caching, and sub-millisecond response times. Even though it’s a common practice, treating S3 like a traditional file system often leads to performance bottlenecks, unpredictable behavior, and the need for engineering workarounds. … 1. **“S3 is a POSIX File System”** — S3 does *not* support POSIX semantics. For starters, it lacks 1) atomic renames, 2) file locking, 3) symbolic links, and 4) directory inodes. Applications that depend on these features are prone to failure or unexpected behavior. To compensate, developers have to build complex coordination layers, custom lock services, and copy-delete hacks, which inevitably undermine performance. 2. **“FUSE Adapters Provide Native Semantics”** — While tools like s3fs and Mountpoint for S3 let you mount a bucket, they don’t guarantee genuine filesystem behavior. They locally buffer and asynchronously replay operations, which can cause problems like timeouts, stale reads, out-of-order writes, and caching errors with concurrent access. 3. **“Metadata Operations Are Inexpensive”** — Although each individual `LIST`, `GET Bucket`, and object-metadata call may seem inexpensive, these operations add up, carry API-call overhead, and risk rate throttling. These S3 calls have to traverse distributed indexes and are not meant for high-frequency use. 4. **“Throughput and IOPS Scale Linearly Without Effort”** — S3 imposes rate limits per prefix and throughput restrictions per connection. Without prefix sharding and parallel streams, exceeding these thresholds leads to throttling, higher latencies, and request failures. 5. **“Latency is Negligible”** — In reality, object access latencies can vary significantly. 
If you need fine-grained, random access, then latency can be vastly greater than that of local or block storage. … To prevent this bottleneck, developers need to implement **key-naming strategies** such as hashing or time-based prefixes to distribute requests across partitions. This does, however, introduce additional complexity, as developers must build custom logic for prefix distribution. On top of that, read and list operations often require scanning multiple pseudo-directories to rebuild the complete dataset. … ### c. Latency and IOPS S3 operations introduce 10–100ms of round-trip delay per request, which is much slower than local NVMe or even the sub-millisecond latencies of networked block storage. This added delay is due to the HTTP API processing, authentication, and multi-AZ replication. Performing a high frequency of small-object reads or metadata queries causes delays to accumulate and noticeably slow down random-access workflows. S3’s performance is also limited by API rate caps and network capacity. Unlike block storage, you cannot just adjust IOPS in the settings. Instead, you need to distribute requests across multiple prefixes or set up parallel connections. High-I/O tasks can quickly hit these limits, leading to throttling or higher error rates. ### d. Lack of POSIX Semantics S3 is not a POSIX-compliant file system. It uses a flat object storage model accessible via HTTPS APIs, lacking the hierarchical structure and system-level primitives expected by applications. It thus omits essential POSIX features, including: - **File Locking:** Without `flock()` or `fcntl()`, concurrent systems can’t coordinate writes or avoid race conditions. - **Atomic Renames:** The `rename()` operation isn’t available. Renaming requires copying the object and then deleting the original. - **Symbolic Links:** S3 does not support inodes or links; each object is standalone, identified by its unique key. 
- **Random Writes:** Because objects are immutable, you can’t modify a specific byte range in place. To update, the entire object must be re-uploaded (or use multipart uploads for larger objects). Applications designed for POSIX semantics, especially data-processing tools, may exhibit *unpredictable* behavior on S3. Without point-in-time consistency, locks, or atomic directory operations, workflows encounter data corruption, dropped files, and subtle errors. This fundamental mismatch makes S3 *unsuitable* for workloads that rely on true filesystem behavior. ### Real-World Impact on Workloads These limitations of S3 can, and do, lead to performance bottlenecks. For example, ML training jobs that handle thousands of small files face high per-request latency and prefix throttling, often resulting in wasted compute resources. ETL pipelines must use custom staging and lock services to compensate for S3’s lack of atomic operations. POSIX-dependent tools and research workflows often face race conditions and missed errors. Teams using spot or ephemeral instances have to create local caches or synchronization layers, which can cause startup delays and increase the risk of stale data.
news.ycombinator.com
S3 is showing its age Adding features makes documentation more complicated, makes the tech harder to learn, makes libraries bigger, likely harms performance a bit, increases bug surface area, etc. When it gets too out of hand, people will paper it over with a new, simpler abstraction layer, and the process starts again, only with a layer of garbage spaghetti underneath. … For CAS, one example is backup jobs. You can run backup jobs to S3, but there are some safety issues if you want deduplication and you want to expire old data. > if S3 is too simple CAS isn’t some kind of super complicated, technical thing. It would be nice if S3 had this small, incremental additional feature. ... … Making your own object store that is fast, durable, and available and has another feature is really, really hard to do at scale. It’s far easier to put up with S3 than make your own. rowanseymour 9 months ago ... Sadly, Azure's implementation of its blob store is kind of underwhelming — especially for any kind of infrastructure-level use cases. … Sure, you could chunk up the files even smaller, but then you hit access latency (S3 ain’t that fast). Reliability is always your problem, not something to be punted to another layer of the stack that lets you pretend stuff doesn’t go wrong. Yup, which is why relying on devs to engineer it is a pain in the arse. Having online migration is such a useful tool to avoid accidental overloads when doing maintenance; it’s also a great tool to have when testing config changes.