What would a Kubernetes 2.0 look like

by Bogdanp on 6/19/2025, 12:00 PM with 408 comments

by NathanFlurry on 6/19/2025, 7:41 PM

The #1 problem with Kubernetes is it's not something that "Just Works." There's a very small subset of engineers who can stand up services on Kubernetes without having it fall over in production – not to mention actually running & maintaining a Kubernetes cluster on your own VMs.

In response, there's been a wave of "serverless" startups because the idea of running anything yourself has become understood as (a) a time sink, (b) incredibly error prone, and (c) very likely to fail in production.

I think a Kubernetes 2.0 should consider what it would look like to have a deployment platform that engineers can easily adopt and feel confident running themselves – while the project itself remains a small-ish core orchestrator with strong primitives.

I've been spending a lot of time building Rivet to scratch my own itch of an orchestrator & deployment platform that I can self-host and scale trivially: https://github.com/rivet-gg/rivet

We currently advertise it as the "open-source serverless platform," but I often think of the problem as "what does Kubernetes 2.0 look like?" People are already adopting it to push the limits into things that Kubernetes would traditionally be good at. We've found the biggest strong point is that you can build roughly the equivalent of a Kubernetes controller trivially. This unlocks features like more complex workload orchestration (game servers, per-tenant deploys), multitenancy (vibe-coding per-tenant backends, LLM code interpreters), per-tenant metered billing, more powerful operators, etc.

by fideloper on 6/19/2025, 9:40 PM

"Low maintenance", welp.

I suppose that's true in one sense - in that I'm using EKS heavily, and don't maintain cluster health myself (other than all the creative ways I find to fuck up a node). And perhaps in another sense: it'll try its hardest to run some containers no matter how many times I make it OOMKill itself.

Buttttttttt Kubernetes is almost pure maintenance in reality. Don't get me wrong, it's amazing to just submit some yaml and get my software out into the world. But the trade off is pure maintenance.

The workflow to set up a cluster, decide which chicken-and-egg trade-off you want to get ArgoCD running, register other clusters if you're doing a hub-and-spoke model ... is just, like, one single act in the circus.

Then there's installing all the operators of choice from https://landscape.cncf.io/. I mean that page is a meme, but how many of us run k8s clusters without at least 30 pods running "ancillary" tooling? (Is "ancillary" the right word? It's stuff we need, but it's not our primary workloads).

A repeat circus is spending hours figuring out just the right values.yaml (or, more likely, hours templating it, since we're ArgoCD'ing it all, right?)

> As an aside, I once spent HOURS figuring out how to (incorrectly) pass boolean values around from a Secrets Manager Secret, to a k8s secret - via External Secrets, another operator! - to an ArgoCD ApplicationSet definition, to another values.yaml file.

And then you have to operationalize updating your clusters - and all the operators you installed and painstakingly configured. Given the pace of releases, this is literally pure maintenance that is always present.

Finally, if you're autoscaling (Karpenter in our case), there's a whole other act in the circus (wait, am I still using that analogy?) of replacing your nodes "often" without downtime, which gets fun in a myriad of interesting ways (running apps with state is fun in kubernetes!)

So anyway, there's my rant. Low fucking maintenance!

by otterley on 6/19/2025, 8:23 PM

First, K8S doesn't force anyone to use YAML. It might be idiomatic, but it's certainly not required. `kubectl apply` has supported JSON since the beginning, IIRC. The endpoints themselves speak JSON and grpc. And you can produce JSON or YAML from whatever language you prefer. Jsonnet is quite nice, for example.
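For example, a plain-JSON manifest (here a trivial Namespace, saved as ns.json) applies exactly like its YAML equivalent would:

    {
      "apiVersion": "v1",
      "kind": "Namespace",
      "metadata": { "name": "demo" }
    }

    kubectl apply -f ns.json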

Second, I'm curious as to why dependencies are a thing in Helm charts and why dependency ordering is being advocated, as though we're still living in a world of dependency ordering and service-start blocking on Linux or Windows. One of the primary idioms in Kubernetes is looping: if the dependency's not available, your app is supposed to treat that as a recoverable error and try again until the dependency becomes available. Or crash, in which case the ReplicaSet controller will restart the app for you.
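To make that concrete, a minimal sketch of the idiom in Go (the dependency URL is made up); the app owns the retry, not the deployment order:

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    func main() {
        const dep = "http://billing-svc:8080/healthz" // hypothetical dependency
        for {
            resp, err := http.Get(dep)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    break // dependency is up, carry on
                }
            }
            log.Printf("dependency not ready (err=%v), retrying in 2s", err)
            time.Sleep(2 * time.Second)
        }
        log.Println("dependency ready, starting the app")
        // ... start the real server here
    }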

You can't have dependency conflicts in charts if you don't have dependencies (cue "think about it" meme here), and you install each chart separately. Helm does let you install multiple versions of a chart if you must, but woe be unto those who do that in a single namespace.

If an app truly depends on another app, one option is to include the dependency in the same Helm chart! Helm charts have always allowed you to have multiple application and service resources.

by pm90 on 6/19/2025, 3:19 PM

Hard disagree with replacing yaml with HCL. Developers find HCL very confusing. It can be hard to read. Does it support imports now? Errors can be confusing to debug.

Why not use protobuf, or similar interface definition languages? Then let users specify the config in whatever language they are comfortable with.
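For illustration, a hypothetical proto3 sketch (not anything that exists in k8s today) of what a typed spec could look like; clients could then be generated for whatever language a team prefers:

    syntax = "proto3";
    package k8s.v2;   // hypothetical package

    message Deployment {
      string name     = 1;
      string image    = 2;
      int32  replicas = 3;               // typed, so no "3"-vs-3 ambiguity
      map<string, string> env = 4;       // environment variables
    }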

by mrweasel on 6/19/2025, 3:06 PM

What I would add is "sane defaults", as in unless you pick something different, you get a good enough load balancer/network/persistent storage/whatever.

I'd agree that YAML isn't a good choice, but neither is HCL. Ever tried reading Terraform? Yeah, that's bad too. Inherently we need a better way to configure Kubernetes clusters, and changing out the language only does so much.

IPv6, YES, absolutely. Everything Docker, container and Kubernetes should have been IPv6-only internally from the start. Want IPv4? That should be handled by a special-case ingress controller.

by johngossman on 6/19/2025, 3:04 PM

Not a very ambitious wishlist for a 2.0 release. Everyone I talk to complains about the complexity of k8s in production, so I think the big question is whether you could do a 2.0 with sufficient backward compatibility that it could be adopted incrementally, and still make it simpler. Back compat almost always means complexity increases, as the new system does its new things plus all the old ones.

by mountainriver on 6/19/2025, 6:20 PM

We have started working on a sort of Kubernetes 2.0 with https://github.com/agentsea/nebulous -- still pre-alpha

Things we are aiming to improve:

* Globally distributed

* Lightweight, can easily run as a single binary on your laptop while still scaling to thousands of nodes in the cloud

* Tailnet as the default network stack

* BitTorrent as the default storage stack

* Multi-tenant from the ground up

* Live migration as a first-class citizen

Most of these needs were born out of building modern machine learning products, and the subsequent GPU scarcity. With ML taking over the world though this may be the norm soon.

by nunez on 6/19/2025, 6:13 PM

I _still_ think Kubernetes is insanely complex, despite all that it does. It seems less complex these days because it's so pervasive, but complex it remains.

I'd like to see more emphasis on UX for v2 for the most common operations, like deploying an app and exposing it, then doing things like changing service accounts or images without having to drop into kubectl edit.

Given that LLMs are it right now, this probably won't happen, but no harm in dreaming, right?

by jitl on 6/19/2025, 4:07 PM

I feel like I’m already living in the Kubernetes 2.0 world because I manage my clusters & its applications with Terraform.

- I get HCL, types, resource dependencies, data structure manipulation for free

- I use a single `tf apply` to create the cluster, its underlying compute nodes, related cloud stuff like S3 buckets, etc; as well as all the stuff running on the cluster

- We use terraform modules for re-use and de-duplication, including integration with non-K8s infrastructure. For example, we have a module that sets up a Cloudflare ZeroTrust tunnel to a K8s service, so with 5 lines of code I can get a unique public HTTPS endpoint protected by SSO for whatever is running in K8s. The module creates a Deployment running cloudflared as well as configures the tunnel in the Cloudflare API.

- Many infrastructure providers ship signed well documented Terraform modules, and Terraform does reasonable dependency management for the modules & providers themselves with lockfiles.

- I can compose Helm charts just fine via the Helm terraform provider if necessary. Many times I see Helm charts that are just “create namespace, create foo-operator deployment, create custom resource from chart values” (like Datadog). For these I opt to just install the operator & manage the CRD from terraform directly, or via a thin Helm pass-through chart that just echoes whatever HCL/YAML I put in from Terraform values (see the sketch at the end of this comment).

Terraform’s main weakness is orchestrating the apply process itself, similar to k8s with YAML or whatever else. We use Spacelift for this.
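For the curious, the pass-through pattern mentioned above looks roughly like this (the chart and repository here are made up):

    resource "helm_release" "example_operator" {
      name             = "example-operator"
      repository       = "https://charts.example.com"   # hypothetical repo
      chart            = "example-operator"
      namespace        = "example"
      create_namespace = true

      # values stay in HCL instead of a templated values.yaml
      set {
        name  = "replicaCount"
        value = "1"
      }
    }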

by benced on 6/19/2025, 6:51 PM

I found Kubernetes insanely intuitive coming from the frontend world. I was used to writing code that took in data and made the UI react to it - now I write config and the control plane reconciles resources to match it.

by ziggure on 6/21/2025, 2:06 AM

What I've learned from the comments here is that k8s is essentially already perfect, YAML is actually awesome, and any criticism of k8s just proves the ignorance of the critic. I guess this means k8s 2.0 should look exactly like k8s 1.0.

by liampulles on 6/20/2025, 9:39 AM

The whole reason that Kubernetes is driven off YAML (or JSON) is that the YAML is a source of truth for what the user's intention is. Piping HCL, which has dynamically determined values, directly to the k8s API would make it harder to figure out what the desired state was at apply time when you are troubleshooting issues later.

The easy solution here is to generate the YAML from the HCL (or from helm, or whatever other abstraction you choose) and to commit and apply the YAML.

More broadly, I think Kubernetes has a bit of a marketing problem. There is a core 20% of the k8s API which is really good and then a remaining 80% of niche stuff which only big orgs with really complex deployments need to worry about. You likely don't need (and should not use) that cloud native database that works off CRDs. But if you acknowledge this aspect of its API and just use the 20%, then you will be happy.

by bigcat12345678 on 6/20/2025, 12:36 AM

> Ditch YAML for HCL

I maintained borgcfg 2015-2019

The biggest lesson k8s drew from Borg was to replace BCL (the Borg config language) with YAML (by Brian Grant).

Then this article suggests reversing that.

Yep, knowledge not experienced is just fantasy

by geoctl on 6/19/2025, 3:19 PM

I would say k8s 2.0 needs:

1. gRPC/proto3-based APIs, to make controlling k8s clusters easier from any programming language, not just (practically speaking) Golang as is the case currently. This can even make dealing with k8s controllers easier and more manageable, even though it admittedly might complicate things on the API-server side when it comes to CRDs.

2. PostgreSQL, or a pluggable storage backend, by default instead of etcd.

3. A clear identity-based, L7-aware, ABAC-based access control interface that can be implemented by CNIs, for example.

4. userns applied by default.

5. An easier pluggable per-pod CRI system where microVM- and container-based runtimes can easily co-exist based on the workload type.

by darkwater on 6/19/2025, 3:20 PM

I totally dig the HCL request. To be honest I'm still mad at GitHub, which initially used HCL for GitHub Actions and then ditched it for YAML when they went stable.

by jillesvangurp on 6/20/2025, 6:55 AM

> What would a Kubernetes 2.0 look like

A lot simpler hopefully. It never really took off but docker swarm had a nice simplicity to it. Right idea, but Docker Inc. mismanaged it.

Unfortunately, Kubernetes evolved into a bit of a monster. Designed to be super complicated, full of pitfalls, in need of vast amounts of documentation, training, certification, etc. Layers and layers of complexity, hopelessly overengineered - i.e. lots of expensive hand-holding. My rule of thumb with technology is that if the likes of Red Hat, IBM, etc. get really excited: run away. Because they are seeing lots of dollars for exactly the kind of stuff I don't want in my life.

Leaving Kubernetes 2.0 to the people who did 1.0 is probably just going to lead to more of the same. The people behind it need it to be convoluted and hard to use. That's how they make money. If it was easy, they'd be out of business.

by zdw on 6/19/2025, 2:09 PM

Related to this, a 2020 take on the topic from the MetalLB dev: https://blog.dave.tf/post/new-kubernetes/

by mdaniel on 6/19/2025, 3:26 PM

> Allow etcd swap-out

From your lips to God's ears. And, as they correctly pointed out, this work is already done, so I just do not understand the holdup. Folks can continue using etcd if it's their favorite, but mandating it is weird. And I can already hear the butwhataboutism yet there is already a CNCF certification process and a whole subproject just for testing Kubernetes itself, so do they believe in the tests or not?

> The Go templates are tricky to debug, often containing complex logic that results in really confusing error scenarios. The error messages you get from those scenarios are often gibberish

And they left off that it is crazypants to use a textual templating language for a whitespace sensitive, structured file format. But, just like the rest of the complaints, it's not like we don't already have replacements, but the network effect is very real and very hard to overcome

That barrier of "we have nicer things, but inertia is real" applies to so many domains, it just so happens that helm impacts a much larger audience

by rwmj on 6/19/2025, 3:12 PM

Make there be one, sane way to install it, and make that method work if you just want to try it on a single node or single VM running on a laptop.

by ra7 on 6/19/2025, 11:40 PM

The desired package management system they describe sounds a lot like Carvel's kapp-controller (https://carvel.dev/kapp-controller/). The Carvel ecosystem, which includes its own YAML templating tool called 'ytt', isn't the most user friendly in my experience and can feel a bit over-engineered. But it does get the idea of Kubernetes-native package management using CRDs mostly right.

by akdor1154 on 6/19/2025, 11:26 PM

I think the yaml / HCL and package system overlap..

I wouldn't so much go HCL as something like Jsonnet, Pkl, Dhall, or even (inspiration, not recommendation) Nix - we need something that allows a schema for powering an LSP, with enough expressivity to void the need for Helm's templating monstrosity, and ideally with the ability for users to override things that library/package authors haven't provided explicit hooks for.

Does that exist yet? Probably not, but the above languages are starting to approach it.
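Jsonnet gets surprisingly close already - a small sketch, a function as the "package" plus a late override of a field the author never exposed:

    local deploy(name, image, replicas=2) = {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: name },
      spec: {
        replicas: replicas,
        selector: { matchLabels: { app: name } },
        template: {
          metadata: { labels: { app: name } },
          spec: { containers: [{ name: name, image: image }] },
        },
      },
    };

    // override a nested field the function gave no parameter for
    deploy('web', 'nginx:1.27') + { spec+: { revisionHistoryLimit: 3 } }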

by pjmlp on 6/21/2025, 5:46 AM

Hopefully written in a more developer friendly language like Rust.

Kubernetes is one of the few reasons I need to care about Go.

by jcastro on 6/19/2025, 3:01 PM

For the confusion around verified publishing, this is something the CNCF encourages artifact authors and their projects to set up. Here are the instructions for verifying your artifact:

https://artifacthub.io/docs/topics/repositories/

You can do the same with just about any K8s related artifact. We always encourage projects to go through the process but sometimes they need help understanding that it exists in the first place.

Artifacthub is itself an incubating project in the CNCF, ideas around making this easier for everyone are always welcome, thanks!

(Disclaimer: CNCF Staff)

by bionhoward on 6/20/2025, 3:24 PM

Would Rust be a lot better than HCL as a YAML replacement? I'm just learning Kubernetes and want to say I hope Kubernetes 2.0 uses a “real” programming language with a decent type system.

A big benefit could be for the infrastructure language to match the developer language. However, knowing software, reinventing something like Kubernetes is a bottomless pit type of task, best off just dealing with it and focusing on the Real Work (TM), right?

by aranw on 6/19/2025, 9:27 PM

YAML and Helm are my two biggest pain points with k8s and I would love to see them replaced with something else. CUE for YAML would be really nice. As for replacing Helm, I'm not too sure really. Perhaps with YAML being replaced by CUE maybe something more powerful and easy to understand could evolve from using CUE?
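For a taste of what CUE buys you, a minimal sketch (field names are made up) where the schema and the values live together and `cue vet` rejects violations:

    #App: {
        name:     string
        image:    string
        replicas: int & >=1 & <=5   // typed and range-checked, unlike YAML
    }

    app: #App & {
        name:     "web"
        image:    "nginx:1.27"
        replicas: 3
    }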

by hosh on 6/19/2025, 10:08 PM

While we're speculating:

I disagree that YAML is so bad. I don't particularly like HCL. The tooling I use doesn't care though -- as long as I can still specify things in JSON, then I can generate (not template) what I need. It would be more difficult to generate HCL.

I'm not a fan of Helm, but it is the de facto package manager. The main reason I don't like Helm has more to do with its templating system. Templated YAML is very limiting, when compared to using a full-fledged language platform to generate a datastructure that can be converted to JSON. There are some interesting things you can do with that. (cdk8s is like this, but it is not a good example of what you can do with a generator).

On the other hand, if HCL allows us to use modules, scoping, and composition, then maybe it is not so bad after all.

by dijit on 6/19/2025, 3:00 PM

Honestly; make some blessed standards easier to use and maintain.

Right now running K8S on anything other than cloud providers and toys (k3s/minikube) is a disaster waiting to happen unless you're a really seasoned infrastructure engineer.

Storage/state is decidedly not a solved problem, debugging performance issues in your longhorn/ceph deployment is just pain.

Also, I don't think we should be removing YAML, we should instead get better at using it as an ILR (intermediate language representation) and generating the YAML that we want instead of trying to do some weird in-place generation (Argo/Helm templating) - Kubernetes sacrificed a lot of simplicity to be eventually consistent with manifests, and our response was to ensure we use manifests as little as possible, which feels incredibly bizarre.

Also, the design of k8s networking feels like it fits IPv6 really well, but it seems like nobody has noticed somehow.

by d4mi3n on 6/19/2025, 7:06 PM

I agree with the author that YAML as a configuration format leaves room for error, but please, for the love of whatever god or ideals you hold dear, do not adopt HCL as the configuration language of choice for k8s.

While I agree type safety in HCL beats that of YAML (a low bar), it still leaves a LOT to be desired. If you're going to go through the trouble of considering a different configuration language anyway, let's do ourselves a favor and consider things like CUE[1] or Starlark[2] that offer either better type safety or much richer methods of composition.

1. https://cuelang.org/docs/introduction/#philosophy-and-princi...

2. https://github.com/bazelbuild/starlark?tab=readme-ov-file#de...

by nikisweeting on 6/19/2025, 7:44 PM

It should natively support running docker-compose.yml configs, essentially treating them like swarm configurations and "automagically" deploying them with sane defaults for storage and network. Right now the gap between compose and full-blown k8s is too big.

by ExoticPearTree on 6/20/2025, 9:31 AM

I wish for k8s 2.0 to be less verbose when it comes to deploying anything on it.

I want to be able to say in two or five lines of YAML:

- run this as 3 pods with a max of 5

- map port 80 to this load balancer

- use these environment variables
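Something in the spirit of this, say (purely hypothetical syntax, not anything k8s accepts today):

    app: my-api
    image: ghcr.io/acme/my-api:1.4.2          # hypothetical image
    replicas: { min: 3, max: 5 }
    expose: { port: 80, via: load-balancer }
    env: { LOG_LEVEL: info }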

I don't really care if it's YAML or HCL. Moving from YAML to HCL is going to be an endless issue of "I forgot to close a curly bracket somewhere" versus "I missed an indent somewhere".

by moondev on 6/20/2025, 2:44 AM

I was all ready to complain about HCL because of the horrible ergonomics of multiline strings, which would be a deal breaker as a default config format. I just looked though and it seems they now support it in a much cleaner fashion.

https://developer.hashicorp.com/terraform/language/expressio...

This actually makes me want to give HCL another chance

by dhorthy on 6/20/2025, 4:29 PM

Overall I like this, but I'm confused about this part:

“Yaml doesn’t enforce types but HCL does”

Is the same schema-based validation that is 1) possible client-side with HCL and 2) enforced server-side by k8s not also trivial to enforce client-side in an IDE?

by lukaslalinsky on 6/20/2025, 2:54 AM

What I'd like to see the most is API stability. At small scale, it's extremely hard to catch up with Kubernetes releases, and the whole ecosystem is paced around those. It's just not sustainable running Kubernetes unless you have someone constantly upgrading everything (or you pay someone/something to do it for you). After all these years, we should have a good idea of the range of APIs that are useful, and stick with those.

by smetj on 6/20/2025, 11:13 AM

Kubernetes has all the capabilities to address the needs of pretty much every architecture/design out there. You only need one architecture/design for your particular use case. Nonetheless, you have to carry all that weight with you, even though you will never use it.

by solatic on 6/19/2025, 8:32 PM

I don't get the etcd hate. You can run single-node etcd in simple setups. You can't easily replace it because so much of the Kubernetes API is a thin wrapper around etcd APIs like watch that are quite essential to writing controllers and don't map cleanly to most other databases, certainly not sqlite or frictionless hosted databases like DynamoDB.
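The watch semantics are what controllers lean on; a rough client-go sketch (error handling trimmed):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        client, _ := kubernetes.NewForConfig(cfg)

        // stream every change to pods in a namespace - the primitive controllers build on
        w, _ := client.CoreV1().Pods("default").Watch(context.Background(), metav1.ListOptions{})
        for ev := range w.ResultChan() {
            if pod, ok := ev.Object.(*corev1.Pod); ok {
                fmt.Println(ev.Type, pod.Name) // ADDED / MODIFIED / DELETED
            }
        }
    }

Try serving that on top of a plain relational table without rebuilding etcd's watch machinery.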

What actually makes Kubernetes hard to set up by yourself are a) CNIs, in particular if you intend to avoid cloud-provider-specific CNIs, support all networking (and security!) features, and still have high performance; and b) all the cluster PKI with all the certificates for all the different components, which Kubernetes made an absolute requirement because, well, production-grade security.

So if you think you're going to make an "easier" Kubernetes, I mean, you're avoiding all the lessons learned and why we got here in the first place. CNI is hardly the naive approach to the problem.

Complaining about YAML and Helm is dumb. Kubernetes doesn't force you to use either. The API server expects JSON at the end anyway. Use whatever you like.

by mikeocool on 6/19/2025, 10:40 PM

How about release 2.0 and then don’t release 2.1 for a LONG time.

I get that in the early days such a fast paced release/EOL schedule made sense. But now something that operates at such a low level shouldn’t require non-security upgrades every 3 months and have breaking API changes at least once a year.

by jerry1979 on 6/20/2025, 4:21 AM

Not sure where buildkit is at these days, but k8s should have reproducible builds.

by dzonga on 6/19/2025, 4:20 PM

I thought this would be written along the lines of: an LLM goes through your code and spins up a Railway-style file, then say you have Terraform for the few manual dependencies etc. that can't be easily inferred.

& get automatic scaling out of the box etc. - a more simplified flow rather than wrangling YAML or HCL.

In short, imagine if k8s was a 2-3 (max 5) line docker-compose-like file.

by brikym on 6/20/2025, 3:09 AM

The bit about Helm templating resonated with me. Stringly typed indentation hell.

by zug_zug on 6/19/2025, 2:40 PM

Meh, imo this is wrong.

What Kubernetes is missing most is a 10 year track record of simplicity/stability. What it needs most to thrive is a better reputation of being hard to foot-gun yourself with.

It's just not a compelling business case to say "Look at what you can do with Kubernetes - you only need a full-time team of 3 engineers dedicated to this technology, at the cost of a million a year, to get bin-packing to the tune of $40k."

For the most part Kubernetes is becoming the common tongue, despite all the chaotic plugins and customizations that interact with each other in a combinatoric explosion of complexity/risk/overhead. A 2.0 would be what I'd propose if I was trying to kill Kubernetes.

by woile on 6/19/2025, 8:19 PM

What bothers me:

- it requires too much RAM to run on small machines (1GB RAM). I want to start small but not have to worry about scalability. docker swarm was nice in this regard.

- use KCL lang or CUE lang to manage templates

by fragmede on 6/19/2025, 7:11 PM

Instead of YAML, JSON, or HCL, how about Starlark? It's a stripped-down Python, used in production by Bazel, so it's already got the Go libraries.
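A taste of what that might look like - a hypothetical helper, since Starlark is just functions and dicts:

    def deployment(name, image, replicas = 2, env = {}):
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": replicas,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": name,
                            "image": image,
                            "env": [{"name": k, "value": v} for k, v in env.items()],
                        }],
                    },
                },
            },
        }

    web = deployment("web", "nginx:1.27", replicas = 3, env = {"LOG_LEVEL": "info"})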

by nuker on 6/20/2025, 2:12 PM

No one yet mentioned AWS ECS and Fargate?

by cyberax on 6/19/2025, 8:26 PM

I would love:

1. Instead of recreating the "gooey internal network" anti-pattern with CNI, provide strong zero-trust authentication for service-to-service calls.

2. Integrate with public networks. With IPv6, there's no _need_ for an overlay network.

3. Interoperability between several K8s clusters. I want to run a local k3s controller on my machine to develop a service, but this service still needs to call a production endpoint for a dependent service.

by Dedime on 6/19/2025, 6:20 PM

From someone who was recently tasked with "add service mesh": make service meshes obsolete. I don't want to install a service mesh. mTLS or some other form of encryption between pods should just happen automatically. I don't want some janky-ass sidecar being injected into my pod definition a la linkerd, and now I've got people complaining that cilium's god mode is too permissive. Just have something built-in, please.

by vbezhenar on 6/20/2025, 11:17 AM

I have some experience with Kubernetes, both managed and self-maintained.

So here are my wishes:

1. Deliver Kubernetes as a complete immutable OS image. Something like Talos. It should auto-update itself.

2. Take an opinionated approach. Do not allow multiple implementations of everything, especially something as basic as networking. There should be hooks for integration with the underlying cloud platform, of course.

3. The system must work reliably out of the box. For example, kubeadm clusters are not set up properly when it comes to memory limits. You can easily make a node unresponsive by eating memory in your pod.

4. Implement built-in monitoring. Built-in centralized logs. Built-in UI. Right now, a kubeadm cluster is not usable: you need to spend a lot of time installing Prometheus, Loki, Grafana, configuring dashboards, configuring every piece of software. Those are very different pieces of software from different vendors. It's a mess. It requires a lot of processing power and RAM to work. It should not be like that.

5. Implement user management, with usernames and passwords. Right now you need to set up Keycloak, configure OAuth authentication, complex realm configuration. It's a mess. It requires a lot of RAM to work. It should not be like that.

6. Remove certificates and keys. The cluster should just work, with no need to refresh anything. Join a node and it stays there.

So basically I want something like Linux. Which just works. I don't need to set up Prometheus to look at my 15-min load average or CPU consumption. I don't need to set up Loki to look at logs, I have journald which is good enough for most tasks. I don't need to install a CNI to connect to the network. I don't need to install Keycloak to create a user. It won't stop working because some internal certificate has expired. I also want lower resource consumption. Right now Kubernetes is very hungry: I need to dedicate like 2 GiB RAM to the master node, probably more. I don't even want to know about master nodes. A basic Linux system eats like 50 MiB RAM. I can dedicate another 50 MiB to Kubernetes; the rest is for me, please.

Right now it feels that Kubernetes was created to create more jobs. It's a very necessary system, but it could be so much better.

by 0xbadcafebee on 6/19/2025, 9:00 PM

> Ditch YAML for HCL

Hard pass. One of the big downsides to a DSL is it's linguistic rather than programmatic. It depends on a human to learn a language and figuring out how to apply it correctly.

I have written a metric shit-ton of terraform in HCL. Yet even I struggle to contort my brain into the shape it needs to think of how the fuck I can get Terraform to do what I want with its limiting logic and data structures. I have become almost completely reliant on saved snippet examples, Stackoverflow, and now ChatGPT, just to figure out how to deploy the right resources with DRY configuration in a multi-dimensional datastructure.

YAML isn't a configuration format (it's a data encoding format) but it does a decent job at not being a DSL, which makes things way easier. Rather than learn a language, you simply fill out a data structure with attributes. Any human can easily follow documentation to do that without learning a language, and any program can generate or parse it easily. (Now, the specific configuration schema of K8s does suck balls, but that's not YAML's fault)

> I still remember not believing what I was seeing the first time I saw the Norway Problem

It's not a "Norway Problem". It's a PEBKAC problem. The "problem" is literally that the user did not read the YAML spec, so they did not know what they were doing, then did the wrong thing, and blamed YAML. It's wandering into the forest at night, tripping over a stump, and then blaming the stump. Read the docs. YAML is not crazy, it's a pretty simple data format.
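(For anyone who hasn't seen it, the behavior in question is YAML 1.1 resolving certain bare scalars as booleans - documented, and avoidable by quoting:

    unquoted: no     # YAML 1.1 parsers resolve this to the boolean false
    quoted: "no"     # quoting gives you the string "no"

YAML 1.2 dropped those resolutions, but much of the ecosystem still speaks 1.1.)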

> Helm is a perfect example of a temporary hack that has grown to be a permanent dependency

Nobody's permanently dependent on Helm. Plenty of huge-ass companies don't use it at all. This is where you proved you really don't know what you're talking about. (besides the fact that helm is a joy to use compared to straight YAML or HCL)

by darqis on 6/20/2025, 10:11 AM

Sorry, but k8s is not low maintenance. You have to keep the host systems updated and you have to keep the containers updated. And then you have paradigm changes like the fall-off of Ingress and the emergence of the Gateway API. k8s is very time-intensive; that's why I am not using it. It adds complexity and overhead. That might be acceptable for large organizations, but not for the single dev or a small company.

by donperignon on 6/20/2025, 9:13 AM

Probably like docker swarm

by recursivedoubts on 6/19/2025, 4:17 PM

please make it look like old heroku for us normies

by brunoborges on 6/20/2025, 6:56 AM

I'm not surprised, but somewhat disappointed, that the author did not mention Java EE application servers. IMO one of the disadvantages of that solution compared to Kubernetes was that they were Java-specific and not polyglot. But everything else about installation, management, scaling, upgrading, etc. was quite well done, especially for BEA/Oracle WebLogic.

by Too on 6/20/2025, 5:26 AM

> "Kubernetes isn't opinionated enough"

yes please, and then later...

> "Allow etcd swap-out"

I don't have any strong opinions about etcd, but man... can we please just have one solution, neatly packaged and easy to deploy.

When your documentation is just a list of abstract interfaces, conditions or "please refer to your distribution", no wonder that nobody wants to maintain a cluster on-prem.

by Melatonic on 6/19/2025, 3:15 PM

MicroVMs

by smetj on 6/20/2025, 11:03 AM

Hard no to HCL!

Yaml is simple. A ton of tools can parse and process it. I understand the author's gripes (long files, indentation, type confusions) but then I would even prefer JSON as an alternative.

Just use better tooling that helps you address your problems/gripes. Yaml is just fine.

by AtlasBarfed on 6/20/2025, 4:32 PM

In my opinion we should start at the command line.

If you want to run a program on a computer, the most basic way is to open a command line and invoke the program.

And that executes it on one computer, number of CPUs TBD.

But with modern networking primitives and foundations, why can I not open a command line and have a concise way of orchestrating and execution of a program across multiple machines?

I have tried several times to do this, writing utility code for Cassandra. I got, in my opinion, very temptingly close to being able to do it.

Likewise with docker, vagrant, and yes, kubernetes, with their CLI interfaces for running commands on containers, the CLI fundamentals are also there.

Others taking a shot at this are SaltStack, Ansible and those types of things, but they really seem to be concerned mostly with Enterprise contracts rather than the core of pure CLI execution.

Security is really a pain in the ass when it comes to things like this. Your CLI prompt has a certain security assurance with it. You've already logged in.

That's a side note. One of the frustrations I started running into as I was doing this is the Enterprise obsession with requiring a manual login / TOTP code to access anything. Holy hell do I have to jump through hoops in order to automate things across multiple machines when they have TOTP barriers for accessing them.

The original Kubernetes kind of handwaved a lot of this away by forcing the removal of jump boxes, assuming a flat network plane, etc.

by znpy on 6/19/2025, 6:54 PM

I'd like to add my points of view:

1. Helm: make it official, ditch the text templating. The Helm workflow is okay, but templating text is cumbersome and error-prone. What we should be doing instead is patching objects. I don't know how, but I should be setting fields, not making sure my values contain text that is correctly indented (how many spaces? 8? 12? 16?) - see the sketch below.

2. Can we get a rootless kubernetes already, as a first-class citizen? This opens a whole world of possibilities. I'd love to have a physical machine at home where I'm dedicating only an unprivileged user to it. It would have limitations, but I'd be okay with it. Maybe some setuid-binaries could be used to handle some limited privileged things.
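On point 1, the kind of thing I mean - setting a field on an object rather than splicing indented text - is what a merge patch already does:

    kubectl patch deployment web --type=merge -p '{"spec":{"replicas":3}}'

No template and no counting spaces; a Helm successor built around patching objects would feel like that.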

by tayo42 on 6/19/2025, 3:07 PM

> where k8s is basically the only etcd customer left.

Is that true? Is no one else really using it?

I think one thing k8s would need is some obvious answer for stateful systems (at scale, not MySQL at a startup). I think there are some ways to do it? Where I work, basically everything is on k8s, and then all the databases sit on their own crazy special systems, which they insist are impossible and too costly to support on k8s. I work in the worst of all worlds now supporting this.

Re: comments about how k8s should just schedule pods - Mesos with Aurora or Marathon was basically that. If people wanted that, those would have done better. The biggest users of Mesos switched to k8s.

by jeffrallen on 6/20/2025, 5:04 AM

Simpler?

A guy can dream anyway.

by singularity2001 on 6/19/2025, 4:42 PM

More like wasm?

by fragmede on 6/20/2025, 3:00 AM

Kubernetes 2.0 would just be kubernetes with batteries included

by anonfordays on 6/20/2025, 2:15 AM

So odd seeing all the HCL hate here. It's dead simple to read, much more so than YAML. It grew out of Mitchell's hatred for YAML. If your "developers" are having problems with HCL, it's likely a skills issue.

by jonenst on 6/19/2025, 3:51 PM

What about kustomize and kpt? I'm using them (instead of helm), but:

* kpt is still not 1.0

* both kustomize and kpt require complex setups to programmatically generate configs (even for simple things like replicas = replicas x 2)
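For the simple cases kustomize does have a declarative replicas field in kustomization.yaml, but there's no arithmetic - you have to state the final number:

    resources:
      - deployment.yaml
    replicas:
      - name: web        # target Deployment
        count: 6         # fixed value; no way to say "2x the base"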

by fatbird on 6/19/2025, 3:12 PM

How many places are running k8s without OpenShift to wrap it and manage a lot of the complexity?

by 1oooqooq on 6/20/2025, 12:33 AM

systemd, but distributed. and with config files redone from scratch (and ideally not in yaml)

by rcarmo on 6/19/2025, 9:27 PM

One word: Simpler.

by moomin on 6/19/2025, 4:12 PM

Let me add one more: give controllers/operators a defined execution order. Don’t let changes flow both ways. Provide better ways for building things that don’t step on everyone else’s toes. Make whatever replaces helm actually maintain stuff rather than just splatting it out.

by mootoday on 6/19/2025, 8:52 PM

Why containers when you can have Wasm components on wasmCloud :-)?!

https://wasmcloud.com/