news.ycombinator.com
Ask HN: Is Kubernetes still a big no-no for early stages in 2025?
However, hosted K8s options have improved significantly in recent years (all cloud providers have Kubernetes offerings that are pretty much self-managing), and I feel like with LLMs it's become extremely easy to read and write deployment configs. ... At early stage the product should usually be a monolith, and there are a LOT of simple ways to deploy & manage one thing. Probably not an issue for you, but costs will also tend to bump up quite a lot: you will be ingesting way more logs and tons more metrics just for the cluster itself, and you may find yourself paying for more things to help manage & maintain your cluster(s). Security add-ons can quickly get expensive.

…

atmosx 6 months ago

The real challenge isn't setting up the EKS cluster, but configuring everything around it: RBAC, secrets management, infrastructure-as-code, and so on. That part takes experience. If you haven't done it before, you're likely to make decisions that will eventually come back to haunt you, not in a catastrophic way, but enough to require a painful redesign later.

…

(I'm not claiming this is a real architecture that makes sense, just an example of how different layers can be chosen to be managed or unmanaged.)

2. Not correct: IAM authentication is not the preferred connection method, and it has a performance limit of 200 connections per second. It's intended for access by humans, not by your applications. In my experience I've never seen any organization set it up. The other authentication methods are not AWS-specific (Kerberos/password auth). Easy to avoid.

3. Most performance features of RDS have some kind of non-AWS equivalent. AWS isn't reinventing the wheel as a database host.

…

PS. Link in bio

therealfiona 6 months ago

The time sink required for the care and feeding just isn't worth it. I pretty much have to dedicate one engineer about 50% of the year to keeping the dang thing updated. The folks who set it all up did a poor job, and it has been a mess to clean up.
Not for lack of trying, but for lack of those same people being able to refine their work: they got pulled into the new hotness and let the clusters rot. Idk your workload, but mine is not even suited for K8s. The app doesn't like to scale, and if the leader node gets terminated in a scale-down, or an EC2 instance fails, processing stops while the leader is reelected. Hopefully not onto another node that is going down in a few seconds... Most of the app teams stopped trying to scale their apps up and down because of this ...
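The stall described above (processing stops between a leader being terminated and a new one taking over) can be illustrated with a toy lease-based election. This is a minimal sketch of the general technique, not the commenter's app or Kubernetes' actual leader election; the `LeaseStore` class, the node names, and the 3-second TTL are all invented for illustration.

```python
class LeaseStore:
    """Toy stand-in for a shared lease (e.g. a coordination record in
    etcd or a Kubernetes Lease object). Tracks the current holder and
    when its claim expires."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node, now):
        # A node wins the lease if it is unheld, expired, or already its own.
        if self.holder is None or now >= self.expires_at or self.holder == node:
            self.holder = node
            self.expires_at = now + self.ttl
            return True
        return False


def run_simulation():
    lease = LeaseStore(ttl=3.0)
    assert lease.try_acquire("node-a", now=0.0)   # t=0: node-a is leader
    assert lease.try_acquire("node-a", now=2.0)   # t=2: routine renewal, expiry -> 5.0
    # node-a is terminated at t=2.5 (scale-down). node-b cannot take over
    # until the lease expires at t=5.0 -- that window is the stall the
    # comment describes: no leader, so processing stops.
    assert not lease.try_acquire("node-b", now=4.0)
    assert lease.try_acquire("node-b", now=5.1)   # new leader after expiry
    return lease.holder


print(run_simulation())  # prints "node-b"
```

Shortening the TTL shrinks the stall window but makes the lease more fragile under renewal delays; that trade-off is why aggressive scale-down interacts badly with leader-based workloads.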
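On the thread's point that an early-stage monolith is "one thing" to deploy: even when Kubernetes is that deployment target, the manifest for a single monolith is small. A minimal sketch using only the standard library (kubectl accepts JSON manifests as well as YAML); the function name, image tag `registry.example.com/monolith:1.0`, and replica count are placeholders, not anything from the thread.

```python
import json


def monolith_deployment(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }


manifest = monolith_deployment("monolith", "registry.example.com/monolith:1.0")
print(json.dumps(manifest, indent=2))
```

The operational cost the commenters describe comes from everything around a manifest like this (RBAC, secrets, upgrades, observability), not from the manifest itself.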