Sources

1577 sources collected

The 2025 Stack Overflow global developer survey results are fresh out, and PostgreSQL has become the most popular, most loved, and most wanted database for the third consecutive year. Nothing can stop PostgreSQL from consolidating the entire database world!

7/31/2025Updated 3/23/2026

wiki.postgresql.org

Usability Challenges

## Core server management and configuration

- Too much tuning
  - Memory management: too complex; there are few useful guidelines, and most things could be automated
  - Vacuum: should be automatic -- yay, autovacuum
  - Background writer configuration: Who needs that?
  - Write-ahead log configuration: too complicated, should be automatic
  - Free-space map: The server knows full well how much FSM it needs; see also memory management.
- Manageability is lacking
  - User accounts: still no good way to manage pg_hba.conf from SQL
  - Statistics: too much data, but most people don't know what to make of it
  - Configuration files: too long, too many options that most people don't need
  - Plugins: Using external modules is complicated, sometimes risky, and hard to manage.
  - Logging: Logging configurability is great, but the default configuration is less than useful for new users.
  - Tracing: Everything notwithstanding, it is still really hard at times to know what is happening, such as in nested PL/pgSQL calls, in cascaded foreign key actions, and other nested and cascaded contexts.
- Clients: …

No out-of-band monitoring is supported. If pg_ctl launched the postmaster but the postmaster can't start properly functioning backends, the only diagnostics are free-form text logs. This stinks for people trying to manage and automate PostgreSQL installs. An out-of-band monitoring tool is needed that can report things like the port(s) Pg is listening on, any errors produced when trying to start backends, memory status, running queries (without having to start a new backend just to query pg_stat_activity), lock status, etc. …

## Backups, `pg_dump`, `pg_dumpall` and `pg_restore`

- The default encodings/locales selected on Windows and Linux (UTF8) systems are incompatible with each other, so running `pg_dump -Fc -f dbname.backup dbname` on Linux then `pg_restore -C --dbname postgres dbname.backup` on Windows (or vice versa) … `template0` can't be connected to; there's no DB they can always connect to by default. This leads to weird command lines like `pg_restore --create --dbname postgres mydb.backup` to *restore to a newly created database, probably but not necessarily called* `mydb`, not to the … If the user omits … `-Fc` mode. This means that *by default PostgreSQL database dumps cannot be restored correctly unless the user dumps additional information separately!*
- `pg_dump` should include global objects like roles that are referred to by the database being dumped, so that backups are complete and correct by default.
- `pg_dumpall` doesn't support the custom format. You can't make an archive containing all databases on a cluster, or have it spit out one dump file per database plus a globals file. This must be done manually using scripting, and that's rather less than user-friendly. Backups need to be easy to get right by default! …

## PgAdmin-III (First point of contact for most newbies)

- PgAdmin-III usability may be somewhat lacking
- Using the "Restore" dialog with PgAdmin-III and pointing it at a .sql dump produces an unhelpful error message. It should offer to run the SQL dump against the target database, at least when faced with a … *have to be edited by hand before they can be restored*.
- PgAdmin-III uses the unhelpful `.backup` suffix for backups it creates with `pg_dump -Fc` behind the scenes. Backup of *what?* There's nothing in `pg_restore` that says files should have a .backup extension, nor does it encourage them to be created as such, so users who want to restore a backup created from the command line via PgAdmin-III often have to rename the file or change the filter before they can even see it in the file list to restore.
- … `pg_restore`'s `-C` option. That's really counter-intuitive; you should just be able to select the server you want to restore to, or use the restore item in the menu and be prompted for the target server.
- You don't get a choice of the database name to use for the newly created database with PgAdmin-III's Restore, Create Database option; it silently uses the db name in the backup file (and doesn't give you any indication of what it is). This is a …

## Replication

- Built-in replication can't replicate only some databases of a cluster; you have to replicate both my-critically-important-10MB-database and my-totally-unimportant-50GB-database with the same settings, same priority, etc. This is a usability challenge because it means people have to create and manage multiple clusters to control replication groups, and multiple clusters are hard to manage and configure.

See also Usability reviews
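The scripting gap the wiki complains about (no custom-format `pg_dumpall`, no one-file-per-database mode) is usually papered over with a small wrapper. A minimal Python sketch of that wrapper, which only builds the command lines; the database names and output directory are made up for the example:

```python
# The wrapper the wiki says you're forced to write yourself: one globals-only
# dump plus one custom-format (-Fc) archive per database.

def backup_commands(dbnames, outdir="backups"):
    """Command lines for a whole-cluster backup: globals + per-DB -Fc dumps."""
    cmds = [["pg_dumpall", "--globals-only", "-f", f"{outdir}/globals.sql"]]
    for db in dbnames:
        cmds.append(["pg_dump", "-Fc", "-f", f"{outdir}/{db}.dump", db])
    return cmds

if __name__ == "__main__":
    for cmd in backup_commands(["app", "analytics"]):
        print(" ".join(cmd))  # or execute with subprocess.run(cmd, check=True)
```

The `--globals-only` dump captures roles and tablespaces, which is exactly the "additional information" the wiki notes a plain `pg_dump` silently omits.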

5/1/2012Updated 2/1/2025

One of the biggest challenges when using AWS is choosing the right service for specific needs. For example, having multiple services that can run containers is great, but having a lot of options can also be overwhelming and confusing. While you could read the documentation for each service to understand their differences and optimal use cases, that’s not a practical way to narrow down the choices. … ## Evolving how we use AI to help create AWS Documentation ... Creating new documentation using AI for new AWS features or services is challenging because LLMs may not have been trained on the new concepts. Our writers need to provide the initial content building blocks ("content primitives"). To do this, our team of writers produces clear, accurate documentation for these new features to ensure the AI tools can understand and provide thorough and reliable responses.

2/13/2025Updated 3/28/2026

# Deploying to Amazon's cloud is a pain in the AWS younger devs won't tolerate

## They have no need to prove their bona fides

Recently, I was spinning up yet another terribly coded thing for fun, because I believe in making my problems everyone else's problems, and realized something that had been nagging at me for a while: working with AWS is relatively painful. This may strike you as ridiculous, because most of the time in established companies it's not particularly burdensome: you push code to a repo, the CI/CD nonsense (which curiously enough is probably some guy named "Jenkins," who's worked at most of the same places that I have — yet strangely I've never met him in person) fires off, and it winds up in production somehow. But that tooling is exactly my point: without a fair bit of work to set it up, it doesn't exist, at which point working with AWS is a massive pain in the ass. … Starting from zero, if you want to deploy a simple webapp to AWS, you get to create an account, spin up the AWS SSO app (intuitively renamed "IAM Identity Center," and which also requires starting an AWS organization), affiliate a permission set (whatever the hell that is) with an IAM role, log into the SSO panel (which lives at such a hard-to-remember URL that I've built an automatic redirector: for my "shitposting" AWS account I can visit "shitposting.badUX.cloud" and it will direct me to the proper location; … You then either have to do something monstrous with key storage, or set up an OIDC relationship between GitHub (yes, or GitLab, I hear you, please do not email me) and AWS, then prod GitHub Actions if you're sane (or AWS CodeBuild if you're not) into doing the deploy for you. Then you get to figure out what the hell AWS service you deploy this webapp to, whether you integrate with AWS Amplify, whether you use Amazon CodeCatalyst – oh wait, nevermind, it got deprecated recently – and so on.
… You carefully read the documentation, which was originally written by a monk in isolation while being slowly crushed to death by a wine barrel, and allow your resources just the permissions they need to talk to one another — which of course doesn't work. You broaden it again, and it still doesn't work. Then you say "oh screw this," grant it permissions to do anything, put a "TODO" in the comments reminding yourself to fix it, and move on with your life. That TODO will remain there until the last copy of your code is lost in the Great Holographic Library Fire of 2351. … So, back to building our code. Next, we get to tag in S3, CloudFront, Route 53, EC2/Fargate/Lambda+API Gateway, RDS/DynamoDB/something else databaselike, and unless you're insane, billing alarms. All of these are different sections of the AWS console, and don't work together out of the box particularly well. And then you push your code and realize that, on balance, baby seals get more hits than your website does because nobody cares about the things we build anymore. Now, let's contrast this with deploying a simple webapp on, say, Vercel. ... This feels generational to me. For folks of a certain age (Gen X and Millennials), AWS and GCP have made their bones. We came of technical age with the platforms and we're used to their foibles. Azure is of course the Boomer Cloud, but Gen Z is using platforms that aren't designed as tests of skill to let customers prove how much they want something. The thing is, increasingly we're deploying things to platforms not based on their merits, but rather based upon what the LLM selects. Recently I was building a demo for an upcoming re:Invent talk via cyberbullying a robot into doing it for me, and it *actively tried to talk me out of using AWS*, citing its complexity. I eventually won the argument, but here's the thing: that AI is going to train the next generation of developers.
And those developers aren't going to have the patience, institutional knowledge, or masochistic dedication required to navigate AWS's deliberately Byzantine experience. They're going to build on platforms that don't make them prove their worth through suffering.

11/4/2025Updated 3/19/2026

I remember setting up SSO in AWS, where Cognito was involved. It was a mess. It was frustrating. Many bugs. Many parts didn’t even work. That was about 3 years ago. In general, all of AWS works through a very slow web admin; everything is complicated with their IAM/roles/permissions, and they effectively make you program their service instead of providing a clear, intuitive, and simple admin console and SDK.

5/3/2025Updated 10/28/2025

**Introduction**

As more businesses migrate to Amazon Web Services (AWS), they encounter various challenges that can impact efficiency, security, and cost management. Understanding these AWS challenges and implementing effective solutions is crucial for smooth cloud operations. In this blog, we’ll explore common AWS mistakes to avoid, and how businesses can navigate the complexities of AWS migration, implementation, monitoring, and cost management. ...

…

**Common AWS Migration Challenges & How to Solve Them**

**1. Legacy Application Compatibility**

**Challenge:** Older applications may not be designed to run in a cloud environment, leading to performance issues and compatibility problems.

**Solution:**

- Refactor legacy applications to be cloud-native.
- Use AWS tools like AWS Lambda for serverless execution.
- Implement hybrid cloud strategies to bridge the gap between on-premise and cloud infrastructure.

**2. Data Migration Complexity**

**Challenge:** Moving large volumes of data to AWS can be time-consuming and costly.

**Solution:**

- Utilize AWS Database Migration Service (DMS) for seamless database transfers.
- Compress and optimize data before migration.
- Implement a phased migration strategy to minimize downtime.

**3. Security Concerns**

…

**1. Complex Infrastructure Setup**

**Challenge:** Setting up AWS infrastructure can be overwhelming, especially for businesses new to cloud computing.

**Solution:**

- Leverage AWS CloudFormation templates for automated deployments.
- Use the AWS Well-Architected Framework for best practices.
- Get expert guidance through **AWS training** programs.

**2. Scalability Concerns**

**Challenge:** Businesses often struggle with scaling their AWS environment efficiently.

**Solution:**

- Use AWS Auto Scaling to dynamically adjust resources.
- Monitor usage patterns and optimize workloads with AWS Compute Optimizer.
- Implement microservices architecture to enhance scalability.

**3. Cost Management**

**Challenge:** Without proper monitoring, AWS costs can spiral out of control.

**Solution:**

- Use AWS Cost Explorer to analyse spending trends.
- Implement cost allocation tags for better visibility.
- Set budget alerts to avoid unexpected cost spikes.

**AWS Monitoring Challenges & Solutions**

**1. Inadequate Visibility into Performance**

**Challenge:** Without proper monitoring, identifying performance bottlenecks can be difficult.

**Solution:**

- Use Amazon CloudWatch for real-time monitoring.
- Set up AWS X-Ray for tracing application requests.
- Implement performance dashboards for continuous insights.

**2. Managing Complex AWS Environments**

**Challenge:** Handling multiple AWS services across different regions can be complex.

**Solution:**

- Utilize AWS Organizations for centralized management.
- Leverage AWS Control Tower for governance and security.
- Automate routine tasks with AWS Systems Manager.

**3. Alert Fatigue and Noise**

**Challenge:** Overwhelming alerts can lead to missed critical issues.

**Solution:**

- Set up actionable alerts using AWS CloudWatch Alarms.
- Use machine learning-based anomaly detection to prioritize issues.
- Consolidate alerts with AWS EventBridge for better management.

**Navigating AWS Costs: Challenges & Best Practices**

**1. Unpredictable Cost Spikes**

**Challenge:** Fluctuating AWS costs can lead to budget overruns.

**Solution:**

- Implement Reserved Instances and Savings Plans.
- Use AWS Budgets to track and forecast expenses.
- Optimize workloads using AWS Compute Savings Plans.

**2. Resource Underutilization**

**Challenge:** Idle or underused resources can inflate AWS bills.

**Solution:**

- Conduct regular cost audits to identify unused instances.
- Implement auto-scaling to adjust resources based on demand.
- Right-size instances to match workload requirements.

**3. Complex Pricing Models**

**Challenge:** AWS pricing can be difficult to understand.
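The budget-alert advice above comes down to simple arithmetic you can sanity-check yourself: extrapolate month-to-date spend and compare it with the budget. A toy sketch; the dollar figures are invented, and real AWS Budgets forecasting is more sophisticated than this linear extrapolation:

```python
# Naive month-end spend forecast of the kind a budget alert rests on:
# project the average daily spend so far across the whole month.

def forecast_month_end(spend_to_date, day_of_month, days_in_month):
    """Project month-end spend from average daily spend so far."""
    return spend_to_date / day_of_month * days_in_month

def budget_alert(spend_to_date, day_of_month, days_in_month, budget):
    """True when the naive forecast exceeds the monthly budget."""
    return forecast_month_end(spend_to_date, day_of_month, days_in_month) > budget

if __name__ == "__main__":
    # $450 spent by day 10 of a 30-day month projects to $1350.
    print(forecast_month_end(450.0, 10, 30))
    print(budget_alert(450.0, 10, 30, 1000.0))
```

The point of firing the alert on the *forecast* rather than the running total is to warn mid-month, before the budget is actually blown.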

Updated 3/3/2026

## The hidden cost: cognitive load and velocity loss

The part nobody warns you about isn’t the bill. It’s the *brain tax*. AWS doesn’t just charge money; it charges attention. Every new service you touch adds a little more surface area you have to remember, reason about, and explain to Future You when something breaks for reasons that feel personal.

On paper, **Amazon Web Services** gives you infinite flexibility. In practice, that flexibility shows up as decisions you didn’t know you were signing up for. Networking models. Permission boundaries. Execution roles. Service limits. Quotas you only learn about by hitting them. None of these things is individually terrible. The problem is accumulation.

I’ve lost more time than I want to admit debugging IAM policies that were technically correct but emotionally hostile. You start by wanting to give a service access to one thing, and an hour later you’re three tabs deep in docs, muttering “why is this denied” like it’s a personal betrayal.

And that time adds up. Infra reviews start taking longer than feature planning. Pull requests get blocked not on logic, but on configuration. You hesitate to ship small changes because you’re not entirely sure which invisible meter they might spin. This is where velocity quietly leaks out. Not because AWS is slow, but because *thinking about AWS* is slow. It pulls you out of product mode and into platform-operator mode, even when you never wanted that job.

1/16/2026Updated 1/26/2026

You need to have safe and effective rollback. That's useful no matter what your deployment strategy is. However, with continuous deployment, it becomes a constraint and not a choice. You are signing yourself up to have to be good at testing, and to be good at mean time to resolution in a way that you are not in other modes of deployment.
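One way to make that rollback discipline concrete is an automated gate on post-deploy error rates. A hedged sketch; the thresholds below are purely illustrative, not taken from the talk:

```python
# Illustrative deploy gate: compare the post-deploy error rate against a
# pre-deploy baseline and decide whether to roll back automatically.

def should_roll_back(baseline_error_rate, current_error_rate,
                     absolute_ceiling=0.05, relative_factor=2.0):
    """Roll back if errors pass an absolute ceiling or double the baseline."""
    if current_error_rate > absolute_ceiling:
        return True
    return current_error_rate > baseline_error_rate * relative_factor
```

Wiring a check like this into the pipeline is what turns "good at mean time to resolution" from heroics into a default: the bad deploy reverts itself while you read the alert.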

12/5/2025Updated 12/8/2025

You say that. But, for the cheap web hosting I bought for a hobby project, the only options are the painfully slow web interface, or FTP. (And not even SFTP/FTPS. It's unencrypted. And I can't disable the FTP account, either. It's always on, FFS.) So I fired up ncftp and had to remember how to use it. It was like being hit by a cold shower. But it sounds like I got more hits than Corey. (Also, the web panel has an API which sounds like it will support git, when I get it sorted.) … * For example, I have an application from a place I've volunteered. I've got a big archive full of code and I have a backup of a database. The organization doesn't want to run it, which is great, because I would have to guess what to do with this stuff if they did. There's no documentation of what this does, how to install it, what's in the database, and for all I know, there may be components written by the original contractors which aren't in my big archive. I have no way of knowing whether this can run, even though theoretically I do have copies of the files concerned. … AWS specifically has been failing *a lot* for a cloud which claims to be bulletproof. I'm a Gen X-er and largely with you. I've always hated the complexity in default AWS, and I haven't liked the path Azure has followed much better. As an actual fan of JS/TS, I've fought against undue complexity for over half my career. I've seen both spaghetti monoliths and microservice jungles. … My biggest issue with pointing out Google as a positive is they're likely to just kill the product and workflow you've built on with the next update. I don't trust them. And good luck finding a real person when they disconnect your account from logging in. Lol, how long did it take to figure out what region it was in?
… One of the things that got me was how difficult it was to set up environments and systems so that on the dashboard I could see resource usage, and thus spend, against Dev, Test, and Prod(client 1), Prod(client 2)… I’m sure it is doable, as it is "assembler", but… … "People who are trained up and accomplished at its configuration can make a good few quid on the job market." Except there's almost no useful training, and experience is the way you learn things. Manuals are useless and/or hopelessly outdated, if they even exist. That applies also to enterprise users, which means that a simple migration project to AWS takes 18 months. With competent enterprise-level machine-room operators/developers, not just anyone. … AWS have always made life difficult, and the security people have consistently put day-1 startup mentality at a higher priority than customer success, pulling the wool over the eyes of senior management in the name of security. Whenever I see a message like 'unable to find a policy to allow this', all that I have ever wanted in the past 10 years is a simple button which says "Fix this" so that AWS can go away and create the IAM records it needs for me to continue. … No, speaking as a currently designated security troglodyte who’s spoken with some of the chief IAM architects, I would say legacy drift is not the big problem with IAM. The real problem is twofold. Firstly, the AWS APIs are all random SOAP-style verbs instead of REST, and control-plane and data actions are in the same service-URL namespaces, so nothing can be built around object or even object-class permissions, only lists of actions that are unpredictably idempotent or mutating or data-exposing or control-plane-exposing. Secondly, the chief design requirement of IAM is performantly deterministic permission evaluation, not actual security or usability or least privilege.
This leads to choices like making permissions a list of low-level API actions (because that’s where policies are evaluated), the boolean-logic hell that is the deny sandwich and conditions, and strict character limits on policies, such that you can’t explicitly list the actions for a single service like EC2 in a policy and are then forced to use wildcards and guess whether they’re restrictive enough and don’t open you up to the new surprise actions AWS adds without warning. The “fixes” they’ve added, permission boundaries and SCPs, are just more layers of the same flawed design and implementation. So anyone used to sane CRUD permissions has to relearn everything and then discover that their basic security expectations and requirements are impossible to implement in any auditable way in AWS. No, it's nothing to do with that at all. … ~~victims~~ customers end up paying for stuff they didn't or can't use, and so the capacity can be sold to multiple customers. The bad consultancies like it as well, because it makes them irreplaceable and gives more scope for nickel-and-diming. So basically everyone who isn't already locked in doesn't want to play there.
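The evaluation order the commenter describes, where an explicit Deny beats any Allow and everything else falls through to an implicit deny, can be sketched in a few lines. This is a deliberate simplification (no resources, principals, or conditions), and the two-statement policy is made up:

```python
# Sketch of IAM's evaluation order: explicit Deny > matching Allow > implicit deny.
# Statements are simplified dicts with wildcard action matching.
from fnmatch import fnmatchcase

def is_allowed(statements, action):
    """Evaluate simplified IAM-style statements for a single action string."""
    matched_allow = False
    for stmt in statements:
        if any(fnmatchcase(action, pat) for pat in stmt["Action"]):
            if stmt["Effect"] == "Deny":
                return False      # an explicit Deny always wins
            matched_allow = True
    return matched_allow          # nothing matched: implicit deny

# Hypothetical policy: allow all of EC2 except terminating instances.
policy = [
    {"Effect": "Allow", "Action": ["ec2:*"]},
    {"Effect": "Deny",  "Action": ["ec2:TerminateInstances"]},
]
```

Note how the loop cannot short-circuit on an Allow: every statement must still be scanned for a Deny, which is exactly the "deny sandwich" shape the comment is complaining about.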

11/4/2025Updated 12/6/2025

www.youtube.com

AWS: The Pain Points

One way or the other, AWS aren't going to tell you when the service is ready for you. It will be released and it will evolve over time, and it will accrue what you need it to, or it won't, and then you'll find another way to deal with … easier, but if your requirements are too specific, not supported, and unlikely to be supported within the time frame that you need, maybe you need to compose your own service, not for everything, but for this particular thing. And to illustrate that, I'm going to borrow and adapt some slides that Adrian Cockcroft presented a couple of years ago when … heavy-handed, and it's not consistent with the promise of elasticity in the cloud; it is an outlier, which is why with so many of these things, go looking, it's not hard. If you read the fine print here: if you choose to create a NAT gateway in your VPC, you are charged for … if you're paying for support, Trusted Advisor gives you a few tips around how to optimize cost, but that assumes your workload's already live, as does Cost Explorer, which will use past information to determine a sort of three-month forecast. But again, that's lagging information, and what happens this month might not be what happened last month. And probably just a final piece of advice from me: if you are doing this profiling, focus on the big-ticket items, the things that you know will drive higher cost. Trying to get something that is three percent of your estimate to 90% accuracy is probably not time well spent.
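The NAT gateway fine print the talk alludes to comes down to two meters: an hourly charge and a per-GB data-processing charge. A back-of-envelope sketch; the rates are assumptions (roughly us-east-1 list prices, not quoted from the talk), so check the current pricing page before relying on them:

```python
# Back-of-envelope NAT gateway bill: hourly charge plus per-GB processing.
HOURLY_RATE_USD = 0.045   # per NAT-gateway-hour (assumed rate)
PER_GB_USD = 0.045        # per GB processed (assumed rate)

def nat_gateway_monthly_cost(gb_processed, hours=730):
    """Estimate one gateway's monthly cost for a given data volume."""
    return hours * HOURLY_RATE_USD + gb_processed * PER_GB_USD

if __name__ == "__main__":
    # Cost of pushing 1 TiB/month through a single gateway.
    print(round(nat_gateway_monthly_cost(1024), 2))
```

Even at zero traffic the hourly meter alone runs to roughly $30/month per gateway, which is why it keeps showing up as a surprise line item on small accounts.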

8/17/2022Updated 4/2/2025

Today we’ll be covering five of the most challenging topics we identified: Amazon SQS, Elastic Load Balancing, AWS VPCs, AWS Lambda, and Subnets. This list is a subset of dozens of terms and topics we attacked across all three major cloud platforms: AWS, Microsoft Azure, and Google Cloud. You can find our complete walkthrough to Amazon’s thorny topics in the full Cloud Dictionary of Pain. ... You can’t have one subnet across multiple availability zones. You’ll probably hear something along the lines of “one subnet equals one availability zone.” Let’s say you’ve decided to launch a VPC within a particular region, and within that region, AWS offers a set of availability zones. If you’d like to keep some information private—such as a set of customer information in an RDS database—you would launch a private subnet within one availability zone.
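The "one subnet equals one availability zone" rule means subnet planning is really just CIDR arithmetic: carve the VPC block into one subnet per AZ. A small sketch with the stdlib `ipaddress` module; the VPC CIDR and AZ names are illustrative:

```python
# One subnet per availability zone, carved out of the VPC's CIDR block.
import ipaddress

def subnets_per_az(vpc_cidr, azs, new_prefix=20):
    """Assign the first len(azs) subnets of the VPC block, one per AZ."""
    pool = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix)
    return {az: str(next(pool)) for az in azs}

if __name__ == "__main__":
    print(subnets_per_az("10.0.0.0/16", ["us-east-1a", "us-east-1b", "us-east-1c"]))
```

A /16 VPC yields sixteen /20 subnets, so a typical layout (public plus private subnet in each of three AZs) fits comfortably with room to grow.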

6/8/2023Updated 10/11/2025

I do understand that the complexity, including the authorization subsystem, is necessary in the long term. But when you are just trying to whip something up to test an idea, I find it frustrating. … I asked the group that owns our tools related to AWS if they had a template that follows best practices that I could look at - nope. OK, then maybe there's a project I can look at as an example that follows our standards - nope. So as someone with mostly a developer background, it was a lot of frustrating trial and error to fix an issue that I didn't even create. … In AWS it seems you are stuck managing ARNs for every damn object if you want to have anything less than a free-for-all in the account. This is an incredibly intense level of bureaucracy. I can see how the project abstraction could break down for a proper enterprise, which might really need that arbitrarily complex spaghetti of individual objects connected to individual objects. But it would still be better if the default or happy-path approach favored the better engineering practice of self-contained systems connected over few and well-defined interfaces. ...

Dislike: There is no truly safe way to experiment and play around, even in the free tier. I set up billing alerts, but even with that it can be tricky to identify exactly what is costing me money (EBS snapshots, NAT gateways, Route 53 hosted zones, etc) …

I would agree that the WebConsole is a little confusing when it comes to using it

new_guy on Sept 17, 2021

ALSO they routinely send me a billing reminder telling me the invoice is 'overdue' BEFORE they even send the invoice, which frankly would make me move somewhere else if I had the time. It's maddening.

vfulco2 on Sept 17, 2021

fiftyacorn on Sept 17, 2021

padthai on Sept 17, 2021

Dislike: A billion products, most of them half-baked, terrible DX, terrible documentation, pricing all over the place. Examples: regular Sagemaker is much worse than a normal VM, Sagemaker Studio is so-so. CloudFormation is not great and only works with AWS. Smaller products are even worse. I try to avoid dealing directly with AWS APIs (especially their web console) as much as I can and focus on third-party tools like Terraform, Ansible, etc. It makes it tolerable.

Jugurtha on Sept 18, 2021

Several parts of the website display a "Create an AWS Account" button. I am fucking logged in. I have to click on "My Account", just next to a user creation button, for it to display spinning arrows to log me in (again?). Once done, cluster creation took forever in a "Creating" status. There's all that confusion about users and organizations. Root vs. IAM. Adding people or accounts to the "organization" is convoluted as well. Coming from GCP, this fucking blows. I had non-technical people create service accounts and clusters and VMs on GCP and hook them to our product. I'm trying AWS/EKS and Azure/AKS for testing purposes for our product (which hooks to users' clusters, and I have to try this out). I can't find the web console, and the docs talk about installing one. … But other than the alpha products, generally it works very well and is highly reliable.

jerglingu on Sept 17, 2021

Bad: dumb service names, APIs are not at all easy to learn, and much of the documentation is subpar (WorkDocs is the latest pain); feeling some unease with all the downed services this year … Otherwise, experimenting on AWS is very risky, particularly if you'd like to use the pay-per-use services.

jjice on Sept 17, 2021

Dislike: Due to expansive options, it can be tricky to combine pieces together.

codingclaws on Sept 17, 2021

QuinnyPig on Sept 17, 2021

… Cons: It's glaringly obvious that all AWS products are developed by independent teams with little coordination or style-guide enforcement. Documentation ranges from excellent to completely unusable, which does not help the fact that AWS services in general have a far steeper learning curve than they should. (Security, for example, is a nightmare unless you spend a LOT of time learning crap almost no one should ever have to know.) Billing is non-transparent, and far better billing tools are available for free through AWS partners, but effectively only to big companies.

Updated 1/29/2025