So well put, my good sir, this describes exactly my feelings with k8s. It always starts off all good with just managing a couple of containers to run your web app. Then before you know it, the devops folks have decided that they need to put a gazillion other services and an entire software-defined networking layer on top of it.
After spending a lot of time "optimizing" or "hardening" the cluster, cloud spend has doubled or tripled. Incidents have also doubled or tripled, as has downtime. Debugging effort has doubled or tripled as well.
I ended up saying goodbye to those devops folks, nuking the cluster, booting up a single VM with Debian, enabling the firewall, and using Kamal to deploy the app with Docker. Despite having only a single VM rather than a cluster, things have never been more stable and reliable from an infrastructure point of view. Costs have plummeted as well, it's so much cheaper to run. It's also so much easier and more fun to debug.
And yes, a single VM really is fine, you can get REALLY big VMs which is fine for most business applications like we run. Most business applications only have hundreds to thousands of users. The cloud provider (Google in our case) manages hardware failures. In case we need to upgrade with downtime, we spin up a second VM next to it, provision it, and update the IP address in Cloudflare. Not even any need for a load balancer.
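The Cloudflare flip can even be a one-liner; a rough sketch against their DNS API (zone ID, record ID, hostname and IP here are all placeholders):

curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"app.example.com","content":"203.0.113.42","ttl":60,"proxied":true}'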
People use Kubernetes for way too small things, and it sounds like you don't have the scale for actually running Kubernetes.
My app is a fairly simple node process with some sidecar worker processes. k8s enables me to deploy it 30 times for 30 PRs, trivially, in a standard way, with standard cleanup.
Can I do that without k8s? Yes. To the same standard with the same amount of effort? Probably not. Here, I'd argue the k8s APIs and interfaces are better than trying to do this on AWS ( or your preferred cloud provider ).
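For anyone curious, the per-PR pattern is roughly this (a sketch with made-up names, not necessarily how the parent does it): one namespace per PR, the same manifests everywhere, and cleanup is a single delete.

# hypothetical CI step for PR 1234
PR=1234
kubectl create namespace "pr-$PR"
kubectl apply -n "pr-$PR" -f k8s/
kubectl -n "pr-$PR" set image deployment/web web="registry.example.com/app:pr-$PR"
# when the PR is closed:
kubectl delete namespace "pr-$PR"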
Where things get complicated is k8s itself is borderline cloud provider software. So teams who were previously fine using a managed service are now owning more of the stack, and these random devops heroes aren't necessarily making good decisions everywhere.
So you really have three obvious use cases:
a) You're doing something interesting with the k8s APIs that isn't easy to do on a cloud provider. Essentially, you're a power user.
b) You want a cloud abstraction layer because you're multi-cloud or you want a lock-in bargaining chip.
c) You want cloud semantics without being on a cloud provider.
However, if you're a single developer with a single machine, or a very small team and you're happy working through contended static environments, you can pretty much just put a process on a box and call it done. k8s is overkill here, though not as much as people claim until the devops heroes start their work.
k8s appears to be a corporate welfare jobs program where trillion-dollar multinational monopolistic companies are the only ones who can collectively spend hundreds of millions sustaining it. Since most companies aren't trillion-dollar monopolies, adopting such measures seems like an extremely poor idea.
All it signals to me is that we have to stop letting SV + VC dictate the direction of tech in our industry, because their solutions are unsustainable and borderline useless for the vast majority of use cases.
I'll never forget the insurance companies I worked at that orchestrated every single repo with a k8s deployment, whose cloud spend was easily in the high six figures a month to handle a workload of 100k MAU where the concurrent peak never went above 5,000 users, something the company knew from 40 years of records. They literally had a 20-person team whose entire existence was managing the company's k8s setup. The only reason the company could sustain this was that it's an insurance company (insurance companies are highly profitable, don't let them convince you otherwise; so profitable that the government has to regulate how much profit they're legally allowed to make).
Absolute insanity, unsustainable, and a tremendous waste of limited human resources.
Glad you like it for your node app tho, happy for you.
Is it complex? Yes, but so is the problem it's trying to solve. Is its complexity still nicer and easier to use than the previous generation of multimachine deployment systems? Also yes.
It really confuses me how someone can argue for cloud providers over a decent open solution without realising their argument is simply they don't want to be managing the thing.
And that's fine, most teams shouldn't be neck deep in managing a platform. But that doesn't make the solution bad.
You're going to want most of what K8s has anyway: blue-green deployments, some way to specify how many replicas you want, health checks, etc.
The initial setup cost is annoying if you've never done it before, but in terms of maintenance it's very very easy.
That being said I love exe.dev and have been a happy customer since launch. It's a different use case but they do an amazing job at it. Very, very easy personal cloud dev box. But K8s is very very good too, just for production workloads rather than personal ones!
I run it at home and at work, and while I do hate installing it, once that part is done I've never run into these problems that people claim require a 20-person(!) team to babysit it. Maybe my scale is too small or whatever, but it's hard not to think that maybe they are just "holding it wrong"...
Coolify is full of features, but the UX suffers and they had a nasty breaking bug at one point (related to Traefik if you want to search it.) Dockge is just a simple interface into your running Docker containers and Komodo is a bit harder to understand/come up with a viable deployment model, and has no built-in support for things like databases.
Dokploy is more Heroku-styled: while you can deploy third-party apps (it's just Docker after all), it seems really geared towards and intended for you to be deploying your own apps that you developed, alongside a "managed" database (meaning, the DB is exposed in the UI, includes backup functionality, and can even be temporarily exposed publicly on the internet for debugging.)
Coolify feels a bit like a mix of the two deployment models, while Dockge is "bring your own deployment" and Komodo offers to replace Terraform/Ansible/docker-compose through its own declarative GitOps-style file-based config but lacks features like managed databases, or built-in subdomain provisioning.
Whether it's a worthy mention or not, I'm not sure. I'd like to think it's worthy :)
Disclaimer: I am the maintainer.
Point being, it's not the tools that cause the problem.
Or that guy is just a really bad programmer.
But the point was it was in a comparable situation without the microservices / k8s / whatever pet tech you want to hate on.
I think Amazon ECS is within striking distance, at least. It does less than K8S, but if it fits your needs, I find it an easier deployment target than K8S. There's just a lot less going on.
The deployment files / structure were mostly equivalent with the main differences being I can't shell into ECS and I lose kubectl in favour of looking at the AWS GUI ( which for me is a loss, for others maybe not ).
The main difference is k8s has a lot of optionality, and folks get analysis paralysis with all the potential there. You quickly hit this in k8s when you actually need the addon just to get CloudWatch logs.
This is also where k8s has sharp edges. Since amazon takes care of the rest of the infrastructure for you in ECS, you don't really need to worry about contention and starving node resources resulting in killing your logging daemon, which you could technically do in k8s.
However, you'll note that this is a vendor choice. EKS Auto Mode does away with most of the addons you need to run yourself, simplifying k8s, moving it significantly closer to a vendor supported solution.
Is there a specific reason why you can't shell into ECS? IIRC, I was able to do so by following the guide [0].
[0] https://aws.amazon.com/blogs/containers/new-using-amazon-ecs...
The last 20 years have given us a lot of great primitives for folks to plug in; I think lots of people don't want to wrangle those primitives, they just want to use them.
This is well put and it's very similar to the arguments made when comparing programming languages. At the end of the day you can accomplish the same tasks no matter which interface you choose.
Personally I've never found kubernetes that difficult to use[1]. It has some weird, unpredictable bits, but so does sysvinit or docker, that just ends up being whatever you're used to.
[1] except for having to install your own network mesh plugin. That part sucked.
So having everyone use the same deployment model (and that’s typically k8s) saves effort. I don’t like it for sure
I like to think if we had a K8s environment a lot of this would be built out within it. Having that functionality abstracted away from the developer would be a huge win in my opinion.
This is certainly the case from all the third person accounts I hear. Online. I never actually met a single one that is like that, if anything, those same people are the ones that are first to tell me about their Hetzner setups.
The trouble is that we are literally expected to do this everywhere we go. I've personally advocated for approaches which use say, a pair of dedicated servers, or VMs as in GPs example. If you want it outside of AWS/GCP/Azure, you're regarded as a crazy person. If you don't adopt "best practices" (as defined by vendors) then management are scared. Management very often trust the sales and marketing departments of big vendors more than their own staff. Many of us have given up fighting this, because what it comes down to is a massive asymmetry of information and trust.
The challenge is convincing people that "golden images" and containers share a history, and that kubernetes didn't invent containers: they just solved load balancing and storage abstraction for stateless message architectures in a nice way.
If you're doing something highly stateful, or that requires a heavy deployment (game servers are typically 10's of GB and have rich dynamic configuration in my experience) then kubernetes starts to become round-peg-square-hole. But people buy into it because the surrounding tooling is just so nice; and like GP says: those cloud sales guys are really good at their jobs, and kubernetes is so difficult to run reliably yourself that it gets you hooked on cloud.
There's a literal army of highly charismatic, charming people who are economically incentivised to push this technology and it can be made to work so- the odds, as they say, are against you.
I think this is the crux of the matter. Also, "everybody is doing it, so they must be right" is a very common way of thinking amongst this population.
Around the time of the pandemic, a company wanted to make some Javascript code do a kind of transformation over a large number of web pages (a billion or so, fetched as WARC files from the web archive). Their engineers suggested setting up SmartOS VMs and deploying Manta (which would have allowed the use of the Javascript code in a totally unmodified way -- map-reduce from the command line, which scales with the number of storage/processing nodes), which should have taken a few weeks at most.
After a bit of googling and meeting, the higher ups decided to use AWS Lambdas and Google Cloud Functions, because that's what everyone else was doing, and they figured that this was a sensible business move because the job-market must be full of people who know how to modify/maintain Lambda/GCF code.
Needless to say, Lambda/GCF were not built for this kind of workload, and they could not scale. In fact, the workload was so out-of-distribution, that the GCP folks moved the instances (if you can call them that) to a completely different data-center, because the workload was causing performance problems, for _other_ customers in the original data-center.
Once it became clear that this approach cannot scale to a billion or so web-pages, it was decided to -- no, not to deploy Manta or an equivalent -- but to build a custom "pipeline" from scratch, that would do this. This system was in development for 6 months or so, and never really worked correctly/reliably.
This is the kind of thing that happens when non-engineers can override or veto engineering decisions -- and the only reason they can do that, is because the non-engineers sign the paychecks (it does not matter how big the paycheck is, because market will find a way to extract all of it).
One of the fallacies of the tech-industry (I do not mean to paint with too broad a brush, there are obviously companies out there that know what they are doing) is that there are trade-offs to be made between business-decisions and engineering-decisions. I think this is more a kind of psychological distortion or a false-choice (forcing an engineering decision on the basis of what the job market will be like some day in the future -- during a pandemic no less -- is practically delusional). Also, if such trade-offs are true trade-offs, then maybe the company is not really an engineering company (which is fine, but that is kind of like a shoe-store having a few podiatrists on staff -- it is wasteful, but they can now walk around in white lab-coats, and pretend to be a healthcare institution instead of a shoe-store).
Personally, I believe that the tech industry sustains itself via technical debt, much like the real economy sustains itself on real debt. In some sense, everyone is trying to gaslight everyone else into incurring as much technical debt as possible, so that a way to service the debt can be sold. Most of the technical debt is not necessary, and if people were empowered to just not incur it, I suspect it would orient tech companies towards making things that actually push the state of the art forward.
A lot of criticism of k8s is centered on some imagined perfect PaaS, or on being in a very narrow goldilocks zone where the costs of "serverless" are easier to bear...
This feels like a reminder that everything "Cloud" is still basically the same as IBM's ancient business model. We've always just been renting time on someone else's computers, and those someone else people are always trying to rent more time. The landlords shift, but the game stays the same.
…IBM
…Microsoft
…AWS.”
They're getting kickbacks from cloud vendors. Prove me wrong.
better than nothing, I don't blame em.
Money ain't got no owners, only spenders.
But yeah, let's just spin-up a shadow IT VM with Debian like GP said, it's easy!
That’s literally how they sold AWS in the beginning.
Cloud won not because of costs or flexibility but because it allowed teams to provision their own machines from their budget instead of going through all the red tape with their IT departments creating… a bunch of shadow IT VMs!
Everything old is new again, except it works on an accelerated ten year cycle in the IT industry.
And yes, a dev that's able to do that properly (stress on properly) is indeed a signal of a better overall developer, but they are a minority, and anyway, as orgs scale up there is just so much "side salad" that it becomes a separate dish.
If you really knew Kubernetes, you'd know not to use it. I say that as someone who used to do consulting for it.
The reality is that yet again "making money" completely collides with efficient, quality, sane productive work.
For me, one of the main reasons to leave that space was that I couldn't really deal with the fact that my work collided with a client's success. That said, I have helped clients get off that stuff, and off other things they thought they needed that just wasted time and money. It feels odd going into a company that hired you to consult on a topic only to end up telling them "the best approach for you is not doing that at all". Often, never. People would think "well, what if we have hundreds of thousands or even millions of users", and the reality was that even in those scenarios, once you moved away from the abstract thought and discussed a hypothetical based on their actual product, they realized they'd still be better off without it. Besides, that hypothetical often sat so far in the future that they admitted they'd likely have a completely different setup by then, so preparing for it didn't even make sense.
I think a big thing related to that was/is the microservice craze, where people end up moving to a complex architecture for not many good reasons and then increase complexity way faster than what they actually deliver in terms of the product, because it somehow feels good. I know it does; I've been there. When in reality the outcome often is just a complex mess of what could have been a relatively simple monolith. And these monoliths do work. And in the vast majority of cases they are easy to scale, because your problem switches from "how do we best allocate that huge amount of very different services across our infrastructure" to (for the most part) "how do we spin up our monolith on one more server", which tends to be a much easier problem to tackle.
And nothing stops you from still using everything else if you want. Just because it's a monolith doesn't mean you need to skip any of the cloud offerings, etc. For some reason there seems to be this idea that if you write a monolith you are somehow barred from using modern tooling, infrastructure, services, etc. Not sure where that comes from.
imo this should achieve a nice balance?
SOAs have most utility in scaling teams, not software: creating independent services allows autonomy to independent teams if they apply a few simple patterns for good SOA.
Though imagining the unholy existence of an init system whose only job is to spin up containers, which can contain other inits, OS images, or whatever..... turtles all the way down.
Tech like Flatpak or Snap is closer to Docker than "machines" are — except that Docker has local, virtualized networking built-in as the IPC layer.
I'm not surprised even in the slightest that DevOps workers will slap k8s on everything, to show "real industry experience" in a job market where the resume matches the tools.
Using new technology in something small and unimportant like a setup script is a perfect way to experiment and learn. It would be irresponsible to build something important as the first thing you do in a new language.
But if you're working with others, you should default to using standard industry tools (absent a compelling reason not to) because your work will be handed off to others and passed on to new team members. It's unreasonable to expect that a new Windows or Linux sysadmin or desktop support tech must learn Rust to maintain a workstation setup workflow.
(Apologies to Cake. And coders.)
I mean, I worked with people who were surprised that you can run more applications inside an ec2 vm than just 1 app.
To be fair though, that's true for every profession or skill.
> I mean, I worked with people who were surprised that you can run more applications inside an ec2 vm than just 1 app.
I've seen something similar where people were surprised that you can use object storage (so, effectively, "make HTTP requests") from any server.
Every company used to have a bespoke collection of build, deployment, monitoring, scaling, etc concerns. Everyone had their own practices, their own wikis to try to make sense of what they had.
I think we critically under-appreciate that k8s is a social technology that is broadly applicable. Not just for hosting containers, but as a cloud-native form of thinking, where it becomes much easier to ask: what do we have here, and is it running well, and to have systems that are helping you keep that all on track (autonomic behavior/control loops).
I see such rebellion & disdain for where we are now, but so few people who seem able to recognize and grapple with what absolute muck we so recently have crawled out of.
This is a problem I've run into with enterprise deployments. K8s is often the lowest common denominator that semi-small platform engineering teams arrive on. At my current employer, a platform-managed K8s namespace is the only thing we got in terms of a PaaS offering, so it is what we use. Is it overpowered? Yes. Is it overly complex for our use case? Definitely. Could we basically get by hosting our services on a few cheap mini computers with no performance penalty? Also yes.
The value is not that I got the job done at a day's notice. It's a black mark that I couldn't package it as per industry best practices.
Not doing it would mean being out of a job/work. Whether it is happening correctly is not something decision makers care about, as long as it is getting done somehow.
Everyone else is communicating they are doing Agile while being very far away from it ;)
That's basically why k8s is so compelling. Its tech is fine, but it's a social technology that is known and can be rallied behind, that has consistent patterns that apply to anything you might dream of making "cloud native". What you did to get this script available for use will closely mirror how anyone else would also get any piece of software available.
Meanwhile conventional sys-op stuff was cobbling together "right sized" solutions that work well for the company, maybe. These threads are overrun with "you might not need k8s" and "use the solution that fits your needs", but man, I pity the companies doing their own frontiers-ing to explore their own bespoke "simple" paths.
I do think you are on to something with there not being good taste-making, and not always good oversight.
No Kubernetes whatsoever.
I agree with you.
My personal strategy has always been to start off in docker compose, and break out to a k8s configuration later if I have to start scaling beyond a single box.
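One possible version of that path, assuming you're willing to let a converter like kompose produce the first cut of manifests (the output usually needs hand-editing):

# day one: a single box
docker compose up -d
# later, if one box stops being enough, bootstrap k8s manifests from the same file
kompose convert -f docker-compose.yml -o k8s/
kubectl apply -f k8s/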
You don't set up k8s because your current load can't be handled, you do for future growth. Sometimes that growth doesn't pan out and now you're left with a complex infrastructure that is expensive to maintain and not getting any of the benefit.
and then also package this so that you and other developers can get the infrastructure running locally or on other machines.
This is why you get many folks over-thinking the solution and picking the most hyped technologies and using them to solve the wrong problems without thinking about what they are selling.
You don't need K8s + AWS EC2 + S3 just to host a web app. That tells me they like lighting money on fire and bankrupting the company and moving to the next one.
But given how often I see "you don't need k8s because you're not going to scale that fast", I feel like even professional k8s operators have missed its fundamental design goal :/ (maximizing utilization of finite compute)
All of those other tools are complicated and fragile
1. People expect k8s to be an opinionated platform and it's very happy to let you make a mess
2. People think k8s is supposed to be a cross platform portability layer and ... it maybe can be if you're very careful, but it's mostly not that
3. People compare k8s/cloud/etc to some monolithic application with admin permissions to everything and they compare that to the "difficulty" of dealing with RBAC/IAM/networking/secrets management
4. People don't realize how much more complicated vanilla Linux tooling is, and how much more accidental complexity is involved
Nomad has neither of these problems.
But if its use was confined to this use case, pretty much nobody would be using it (unless as a customer of the organization's infra) and barely would be talking about it (like how there isn't too much talk about Borg).
The reason k8s is a thing in the first place is because it's being used by way too many people for their own good. (Most people who have worked in startups have met too many architecture astronauts.)
If I had to bet, I'd wager that 99% of k8s users are in the “spin a few containers to run your web app” category (for the simple reason that for one billion-dollar tech business using it for legit reasons, there's many thousands early startups who do not).
Teams are free to use EKS internally.
But to quote someone: "you are not Google".
Maybe those devops folks only pay attention to k8s clusters and you're flying under their radar with your single debian VM + Kamal. But the same thinking that results in an overly complex, impossible to debug, expensive to run k8s cluster can absolutely result in the same using regular VMs unless, again, you are just left to your own devices because their policies don't apply to VMs, yet.
The problem usually is you're one mistake away from someone shoving their nose in it. "What are you doing again? What about HA and redundancy? slow rollout and rollback? You must have at least 3 VMs (ideally 5) and can't expose all VMs to the internet of course. You must define a virtual network with policies that we can control and no, wireguard isn't approved. You must split the internet facing load balancer from the backend resources and assign different identities with proper scoping to them. Install these 4 different security scanners, these 2 log processors, this watchdog and this network monitor. Are you doing mtls between the VMs on the private network? what if there is an attacker that gains access to your network? What if your proxy is compromised? do you have visibility into all traffic on the network? everything must flow through this appliance"
And I’m building and happily using Uncloud (https://github.com/psviderski/uncloud) for this (inspired by Kamal). It makes multi-machine setups as simple as a single VM. Creates a zero-config WireGuard overlay network and uses the standard Docker Compose spec to deploy to multiple VMs. There is no orchestrator or control plane complexity. Start with one VM, then add another when needed, can even mix cloud VMs and on-prem.
If you have an app and you want to run a single app, yeah, it's silly to look for K8s.
If you have a beefy server or two that you want to utilize fully, packing in as many apps as possible without clashing dependencies, you want K8s or docker or other containers. That's where K8s enables you to go further.
I bet you can do it some other way, but that's a built-in feature of k8s.
There are no benefits to scaling down in this case. And scaling up won't help handle more load if you've already allocated all resources to running replicas. You need more machines, not more replicas on the existing machine(s).
It all comes down to simple, boring capacity planning and static resource allocation. Fewer moving parts results in fewer failure modes, hence more robust infra and less ops and maintenance work.
You have apps A, B and C (you have N teams and N products), each developed by a different team, that you want to run on that one server; when app A doesn't have much traffic, apps B and C can use more of the compute. Plus you get deployment management aligned across all teams/products.
Radboud University recently announced they're rolling it out for managing containers across the faculty, which is the most "serious install" I know about, but there could be others: https://cncz.science.ru.nl/en/news/2026-04-15_uncloud/
https://uncloud.run/docs/getting-started/install-cli/#instal...
Spend some time learning it, using it to deploy simple apps, and you won't go back to deploying in a VM again imo.
This only gets better with ai-assisted development, any model is going to produce much better results for k8s given the huge training set vs someone's bespoke build rube-goldberg machine.
how could k8s improve my deployment process?
split deployments -- perhaps you want to see how an update impacts something: if error rates change, if conversion rates change, w/e. K8s makes this pretty easy to do via something like a canary or blue green deployment. Likewise, if you need to rollback, you can do this easily as well from a known good image.
Perhaps you need multiple servers -- not for scale -- but to be closer to your users geographically. 1 server in each of 5-10 AZs makes the updates a bit more complicated, especially if you need to do something like a db schema update.
Perhaps your traffic is lumpy and peaks during specific times of the year. Instead of provisioning a bigger VM during these times, you would prefer to scale horizontally and automatically. Likewise, depending on the predictability of the traffic distribution, running a larger machine all the time might be very expensive for only the occasional burst of traffic.
To be very clear, you can do all of this without k8s. The question is, is it easier to do it with or without? IMO, it is a personal decision, and k8s makes a lot of sense to me. If it doesn't make a ton of sense for your app, don't use it.
Also, Kubernetes uses immutable images and containers so you don't have to worry about dependencies or partial deploys.
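For the canary/rollback point above, a minimal sketch of the day-to-day commands (deployment and image names are made up):

kubectl set image deployment/web web=registry.example.com/app:v42
kubectl rollout status deployment/web
# error rates spike? go back to the previous known-good ReplicaSet
kubectl rollout undo deployment/web
kubectl rollout history deployment/web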
when in reality, you can go very bare-bones with k8s, but people pretend like only the most extreme complexity is what's possible because it's not easy to admit that k8s is actually quite practical in a lot of ways, especially for avoiding drift and automation
that's my take on it
knowing when and when not to use k8s, is also a skill
This feels like what us Brits would call "damning with faint praise".
Windows 95 was terrible. Really bad. If you really mean to say that Kubernetes is revolutionary and well-engineered, Windows 2000 would be a much better example.
Scale vertically until you can't, because you're unlikely to hit a limit, and if you do, you'll have enough money to pay someone else to solve it.
Docker is amazing development tooling but it makes for horrible production infrastructure.
Docker Compose is good for running things on a single server as well.
Docker Swarm and Hashicorp Nomad are good for multi-server setups.
Kubernetes is... enterprise and I guess there's a scale where it makes sense. K3s and similar sort of fill the gap, but I guess it's a matter of what you know and prefer at that point.
Throw on Portainer on a server and the DX is pretty casual (when it works and doesn't have weird networking issues).
Of course, there's also other options for OCI containers, like Podman.
IS that a thing still?
> Kubernetes is... enterprise
I would contest that. Its complex, but not enterprise.
Nomad is a great tool for running processes on things. The problem is attaching loadbalancers/reverse proxies to those processes requires engineering. It comes for "free" with k8s with ingress controllers.
Yeah, using it in production. If you don't need the equivalent of CRDs or other complex stuff like network meshes, it's stable and pretty okay! My ingress is just a regular web server image, for example.
> It comes for "free" with k8s with ingress controllers.
Ingress Controllers will keep working but the API is frozen, I think nowadays you're supposed to use Gateway instead: https://gateway-api.sigs.k8s.io/
I'd also contest that k8s is enterprise. Unless by enterprise you just mean over-engineered, in which case I agree.
Show me a Docker setup in use where build caching has been solved optimally for development builds (like, e.g., make did for C 40 or 50 years ago)?
Perhaps you consider Docker layers one of the "rough edges", but I believe instant, iterative development builds are a minimum required for "great development tooling".
I did have great fun optimizing Docker build times, but more in the "it's a great engineering challenge to make this shitty thing build fast" sense.
Something like the following works well in practice:
1) pinned base image (e.g. Ubuntu LTS)
2) your own custom base image in a registry rebuilt whenever you want (e.g. with tools you need for debugging or available across all of your images)
3) your own runtime-specific base image, like a JDK one, can be used later both as a basis for development images with additional tooling, as well as for runtime images of your app
4) your own runtime-specific development images, like one that's based on the JDK image above + Maven, alongside any other development tooling you need
5) your multi-stage application image, where the first stage uses the development image to COPY in the dependency description files you need and then pull the dependencies, then does the build (layer cache takes care of reusing things where possible), and then the second stage is based on the runtime image (e.g. JDK) where you just copy your finished artifact (e.g. .jar file)
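A minimal sketch of how steps 3-5 can fit together for the JDK/Maven case, generating and building a multi-stage Dockerfile (registry and image names are made up):

cat > Dockerfile <<'EOF'
# build stage: based on the development image (step 4: JDK + Maven + tooling)
FROM registry.example.com/base/jdk-maven:21 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -B dependency:go-offline        # this layer is reused until pom.xml changes
COPY src ./src
RUN mvn -B -DskipTests package

# runtime stage: based on the slimmer runtime image (step 3: JDK only)
FROM registry.example.com/base/jdk:21
COPY --from=build /src/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
EOF
docker build -t registry.example.com/team/app:dev .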
If you don't need or want to build your own images, you can fold steps 1-4 into just using upstream images off of Docker Hub or whatever you prefer, but in practice it works pretty okay across numerous stacks. Of course, it's also possible to easily have very high standards in regards to what you mean as "optimal", so Docker probably won't live up to that.

Let's say you're a team of 1-3 technical people building something as an MVP, but don't necessarily want to throw everything away and rewrite or re-architect if it gets traction.
What are your day 1 decisions that let you scale later without over-engineering early?
I'm not disagreeing with you btw. I genuinely don't know a "right" answer here.
I use k3s/Rancher with Ansible and use dedicated VMs on various providers. Using Flannel with wireguard connects them all together.
This, I think, is a reasonable solution, as the main problem with cloud providers is that they are just price gouging.
Even if you just run on 2 nodes with k3s, it seems worth it to me for the standardized tooling. Yes, it is not a $5-a-month setup, but frankly, if what you host can be served by a single $5-a-month VM, I don't particularly care about your insights; they are irrelevant in a work context.
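For reference, the Flannel-over-WireGuard part of a setup like that boils down to roughly this on current k3s (the flag spelling has changed across versions, so treat it as a sketch; <server-ip> and <token> are placeholders):

# on the first VM (server)
curl -sfL https://get.k3s.io | sh -s - server --flannel-backend=wireguard-native
# on each additional VM, possibly at a different provider
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -s - agent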
Do you have experience with Kubernetes solving these issues? Would love to hear more if so.
Currently running podman containers at work and trying to figure out better solutions for monitoring, alerting, etc. Not so worried about scale (my simple python scripts don't need it) but abstracting away the monitoring, alerting, secure secret injection, etc. seems like it'd be a huge win.
I have been building https://github.com/openrundev/openrun, which provides a declarative solution to deploy internal web apps for teams (with SAML/OAuth and RBAC). OpenRun runs on a single-machine with Docker or it can deploy apps to Kubernetes.
All of this just adds so much extra complexity. If I'm running Amazon.com then sure, but your average app is just fine on a single VM.
I took it to its maximum: every service is a piece that can break ---> fewer pieces, fewer potential breakages.
When I can (which is 95% of the time), I add certain other services inside the processes themselves, inside the server exes, and make them activatable at startup (though I want all my infra not to drift, so I use the same set of subservices in each).
But the idea is -- the fewer services, the fewer problems. I just think, even with the trade-offs, it is operationally much more manageable and robust in the end.
If you have an actual need to deploy a few dozen services all talking with each other, k8s isn't a bad way to do it. It has its problems, but it allows your devs to mostly self-service their infrastructure needs vs having to file a ticket for each VM and firewall rule they need. I'm saying that from the perspective of having migrated from the "old way" to a 14-node actual-hardware k8s cluster.
It does make debugging harder, as you pretty much need a central logging solution, but at that scale you want a central logging solution anyway, so it isn't a big jump, and developers like it.
Main problem with k8s is frankly nothing technical, just the "ooh shiny" problem developers have where they see tech and want to use tech regardless of anything
* Built the app (into a self contained .jar, it was a JVM shop)
* Put the app into a Ubuntu Docker image. This step was arguably unnecessary, but the same way Maven is used to isolate JVM dependencies ("it works on my machine"), the purpose of the Docker image was to isolate dependencies on the OS environment.
* Put the Docker image onto an AWS .ami that only had Docker on it, and the sole purpose of which was to run the Docker image.
* Combined the AWS .ami with an appropriately sized EC2.
* Spun up the EC2s and flipped the AWS ELBs to point to the new ones, blue green style.
The beauty of this was the stupidly simple process and complete isolation of all the apps. No cluster that ran multiple diverse CPU and memory requirement apps simultaneously. No K8s complexity. Still had all the horizontal scaling benefits etc.
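Not the original pipeline, but the same shape with today's tooling looks roughly like this (account ID, region and image names are invented):

docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v42 .
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v42
# on the instance (baked into the AMI or its user data), the entire "runtime" is one line:
docker run -d --restart unless-stopped -p 80:8080 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v42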
So I guess I'm a fan. I use a monolith for most of my stuff if I have the choice, but if I'm working somewhere or on something where I have to manage a bunch of services I'm most certainly going to reach for k8s.
There are situations where a single VM, no matter how powerful it is, can do the job.
I don't work that closely with k8s, but have toyed with a cluster in my homelab, etc. Way back before it really got going, I observed some OpenStack folks make the jump to k8s.
Knowing what I knew about OpenStack, that gave me an inkling that what you describe would happen and we'd end up in this place where a reasonable thing exists but it has all of this crud layered on top. There are places where k8s makes sense and works well, but the people surrounding any project are the most important factor in the end result.
Today we have an industry around k8s. It keeps a lot of people busy and employed. These same folks will repeat k8s the next time, so the best thing people who feel they have superior taste can do is press forward with their own ideas, as the behavior won't change.
I'm not familiar with kubernetes, but doesn't it already do SDN out of the box?
Yes and no. Kubernetes defines a specification for network behavior (in the form of CNI), but it contains no actual implementation. You have to install a network plugin basically as the first setup step.
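For example, a freshly bootstrapped cluster's nodes stay NotReady until you pick and apply a CNI yourself; with Flannel that's roughly (the manifest URL may move between releases):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
kubectl get nodes   # nodes only go Ready once the CNI pods are up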
The irony is that "DevOps" was supposed to be a culture and a set of practices, not a job title. The tools that came with it (=Kubernetes) turned out to be so complex that most developers didn't want to deal with them and the DevOps became a siloed role that the movement was trying to eliminate.
That's why I have an ick when someone uses devops as a job title. Just say "System Admin" or "Infrastructure Engineer". Admit that you failed to eliminate the silos.
I am primarily a backend developer but I do a lot of ops / infra work because nobody else wants to do it. I stay as far away from k8s as possible.
Absolutely brilliant. Love it.
FROM scratch
COPY my-static-binary /my-static-binary
ENTRYPOINT ["/my-static-binary"]
Having multiple processes inside one container is a bit of an anti-pattern imo
I once tried to build a simple setup using VM images and the complexity exploded to the point where I'm not sure why anyone should bother.
When building a container you can just throw everything into it and keep the mess isolated from other containers. If you use a VM, you can't use the OCI format, you need to build custom packages for the OS in question. The easiest way to build a custom package is to use docker. After that you need to build the VM images which requires a convoluted QEMU and libvirt setup and a distro specific script and a way to integrate your custom packages. Then after all of this is done you still need to test it, which means you need to have a VM and you need to make it set itself up upon booting, meaning you need to learn how to use cloud-init.
Just because something is "mature" doesn't mean it is usable.
The overhead of docker is basically insignificant and imperceptible (especially if you use host networking) compared to the day to day annoyances you've invited into your life by using VM images. Starting a VM for testing purposes is much slower than starting a container.
Do you pair it with some orchestration (to spin up the necessary VM)?
It's obvious to you, me and the other 2 presumably techie people who've responded within 15 mins that you shouldn't have been using Kubernetes. But you probably work in a company full of techie people, who ended up using Kubernetes.
We have HN, an environment full of techie people here who immediately recognise not to use k8s in 99% of cases, yet in actual paid professional environments, in 99% of cases, the same techie people will tolerate, support and converge on the idea that they should use k8s.
I feel like there's an element of the emperors new clothes here.
Most companies aren't "web scale" ™ and don't need an orchestrator built for google level elasticity, they need a vm autoscaling group if anything.
Most apps don't need such granular control over fs access, network policies, root access, etc, they need `ufw allow 80 && ufw enable`
Most apps don't need a 15 stage, docker layer caching optimized, archive promotion build pipeline that takes 30 minutes to get a copy change shipped to prod, they need a `git clone me@github.com:me/mine.git release_01 && ln -s release_01 /var/www/me/mine/current`
This is coming from someone who has had roles both as a backend product engineer and as a devops/platform engineer, who has been around long enough to remember when "deploy" to prod was Eclipse FTPing PHP files straight to the prod server on file save. I manage clusters for a living for companies that went full k8s and never should have gone full k8s. ECS would have worked for 99% of these apps, if they even needed that.
Just like the js ecosystem went bat shit insane until things started to swing back towards sanity and people started to trim the needless bloat, the same is coming or due for the overcomplexity of devops/backend deployments
"ECS would have worked for 99% of these apps, if they even needed that."
I used to agree with that but is EKS really that much more complicated? Yes you pay for the k8s control plane but you gain tooling that is imho much easier to work with than IaC.
Not long after, I found that the pods were CONSTANTLY getting into some weird state where K8s couldn't rebuild, so I had to forcibly delete the pods and rebuild. I blamed myself, not knowing much about K8s, but it also was extremely frustrating because, as I understood/understand it, the entire purpose of Kubernetes is to ensure a reliable deployment of some combination of pods. If it couldn't do that and instead I had to manually rebuild my cluster, then what was the point?
In the end, I ended up nuking the entire project -- K8s, Docker containers, Python, and Dask -- and instead went with a single Rust binary deployed to an Azure Function. The result was faster (by probably an order of magnitude), less memory, cheaper (maybe -80% cost), and much more reliable (I think around four nines).
That is not what kube is designed for.
This is one of the main fuckups of k8s, the networking is batshit.
The other problems is that secrets management is still an afterthought.
The thing that really winds me up is that it doesn't even scale up that much. 2k nodes and it starts to really fall apart.
Similarly, I suspect (based on your "hardening" grievance) that a lot of your tedium is just that cloud APIs generally push you toward least-privileges with IAM, which is tedious but more secure. And if you implement a comparably secure system on your single VM (isolating different processes and ensuring they each have minimal permissions, firewall rules, etc) then you will probably have strictly more incidents and debugging effort. But you could go the other way and make a god role for all of your services to share and you will spend much less time debugging or dealing with incidents.
Even with a single VM, you could throw k3s on it and get many of the benefits of Kubernetes (a single, unified, standardized, extensible control plane that lots of software already supports) rather than having to memorize dozens of different CLI utilities, their configuration file formats, their path preferences, their logging locations, etc. And as a nice bonus, you have a pretty easy path toward high availability if you decide you ever want your software to run when Google decides to upgrade the underlying hardware.
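On a single VM that really can be as small as (a minimal single-node sketch, no HA):

curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes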
The tools in this space can really help get a few containers in dev/staging/production much more manageable.
As a devops/cloud engineer coming from a pure sysadmin background (you've got a cluster of n machines running RHEL and that's it) i feel this.
The issues i see however are of different nature:
1. résumé-driven development (people get a higher-paying job if they have the buzzwords in their cv)
2. a general lack of core-linux skills. people don't actually understand how linux and kubernetes work, so they can't build the things they need, so they install off-the-shelf products that do 1000 things including the single one they need.
3. marketing, trendy stuff and FOMO... that tell you that you absolutely can't live without product X or that you must absolutely be doing Y
to give you an example of 3: fluxcd/argocd. they're large and clunky, and we're getting pushed to adopt that for managing the services that we run inside the cluster (not developer workloads, but mostly-static stuff like the LGTM stack and a few more things - core services, basically). they're messy, they add another layer of complexity, other software to run and troubleshoot, more cognitive load.
i'm pushing back on that, and frankly for our needs i'm fairly sure we're better off using terraform to manage kubernetes stuff via the kubernetes and helm provider. i've done some tests and frankly it works beautifully.
it's also the same tool we use to manage infrastructure, so we get to reuse a lot of skills we already have.
also it's fairly easy to inspect... I'm doing some tests using https://pkg.go.dev/github.com/hashicorp/hcl/v2/hclparse and i'm building some internal tooling to do static analysis of our terraform code and automated refactoring.
i still think kubernetes is worth the hassle, though (i mostly run EKS, which by the way has been working very well for me)
> Traditional Cloud 1.0 companies sell you a VM with a default of 3000 IOPS, while your laptop has 500k. Getting the defaults right (and the cost of those defaults right) requires careful thinking through the stack.
I wish them a lot of luck! I admire the vision and am definitely a target customer, I'm just afraid this goes the way things always go: start with great ideals, but as success grows, so must profit.
Cloud vendor pricing often isn't based on cost. Some services they lose money on, others they profit heavily from. These things are often carefully chosen: the type of costs that only go up when customers are heavily committed—bandwidth, NAT gateway, etc.
But I'm fairly certain OP knows this.
There's not enough redundancy. You could RAID1 those NVMEs before they get attached to a VM, and that helps with hardware failures, but you get fewer of them to attach. Even if you RAID them, there's not a good way to move that VM to another host if there's a RAM or CPU or other hardware issue on that host.
These VM's with NVME's directly attached have to basically be treated as bare metal servers and you have to do redundancy at the application layer (like database replication).
But again, all of the major cloud services offer these types of machines if you NEED NVME IO speed. There are quirks though. For example, in Azure it seems like you have to be able to expect the VM to be moved whenever Azure feels like it and expect that ephemeral data to be wiped. Whereas in Openstack, we would do local block level migrations if we HAD to move the VM to another host. That block level migration required the VM to be turned off but it did copy the local NVME data to another host. If this happened it was all planned and the particular application had app level redundancy built in so it was not a problem. If the host crashed, that particular VM would just be down till the host was fixed and came back online.
This is the critical point. All hardware fails eventually. The CPU and RAM are, in a real sense, also ephemeral. The only relevant question is what the risk tolerance of the use-case is. If restoring from async backup is sufficient, then embrace ephemerality and keep backups. If you need round-the-clock availability, pick an architecture that lets you fall over gracefully to another machine, and embrace the ephemerality when you inevitably need to do so.
So build resiliency into your application layer.
Using fio
Hetzner (cx23, 2 vCPU, 4 GB): ~3900 IOPS (read/write), ~15.3 MB/s, avg latency ~2.1 ms, 99.9th percentile ≈ 5 ms, max ≈ 7 ms
DigitalOcean (SFO1, 2 GB RAM, 30 GB disk): ~3900 IOPS (same!), ~15.7 MB/s (same!), avg latency ~2.1 ms (same!), 99.9th percentile ≈ 18 ms, max ≈ 85 ms (!!)
using sequential dd
Hetzner: 1.9 GB/s DO: 850 MB/s
Using the low-end plan on both, but this Hetzner is 4 euro and the DO instance is $18.
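(The exact dd invocation wasn't shared; a common rough sequential test looks something like this:)

dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct    # sequential write
dd if=testfile of=/dev/null bs=1M iflag=direct               # sequential read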
RS 1000 G12: AMD EPYC™ 9645, 8 GB DDR5 RAM (ECC), 4 dedicated cores, 256 GB NVMe
Costs 12,79 €
Results with the following command:
fio --name=randreadwrite \
  --filename=testfile \
  --size=5G \
  --bs=4k \
  --rw=randrw \
  --rwmixread=70 \
  --iodepth=32 \
  --ioengine=libaio \
  --direct=1 \
  --numjobs=4 \
  --runtime=60 \
  --time_based \
  --group_reporting
IOPS: read 70.1k, write 30.1k (~100k total)
Throughput: read 274 MiB/s, write 117 MiB/s
Latency: read avg 1.66 ms, P99.9 2.61 ms, max 5.644 ms; write avg 0.39 ms, P99.9 2.97 ms, max 15.307 ms
IOPS: read 325k, write 139k
Throughput: read 1271MB/s, write 545MB/s
Latency: read avg 0.3ms, P99.9 2.7ms, max 20ms; write: 0.14ms, P99.9 0.35ms max 3.3ms
so roughly 100 times iops and throughput of the cloud VMs
Using a Netcup VPS 1000 G12 is more comparable.
read: IOPS=18.7k, BW=73.1MiB/s
write: IOPS=8053, BW=31.5MiB/s
Latency Read avg: 5.39 ms, P99.9: 85.4 ms, max 482.6 ms
Write avg: 3.36 ms, P99.9: 86.5 ms, max 488.7 ms
Here are some "Regular Performance" shared resource stats
Hetzner CPX11 (Ashburn, 2 CPUs, 2GB, 5.49€ or $6.99/month before VAT)
read: IOPS=36.7k, BW=144MiB/s, avg/p99.9/max 2.4/6.1/19.5ms
write: IOPS=15.8k, BW=61.7MiB/s, avg/p99.9/max 2.4/6.1/18.7ms
Hetzner CPX22 (Helsinki, 2 CPUs, 4GB, 7.99€ or $9.49/month before VAT)
read: IOPS=48.2k, BW=188MiB/s, avg/p99.9/max 1.9/5.7/10.8ms
write: IOPS=20.7k, BW=80.8MiB/s, avg/p99.9/max 1.8/5.8/10.9ms
Hetzner CPX32 (Helsinki, 4 CPUs, 8GB, 13.99€ or $16.49/month before VAT)
read: IOPS=48.3k, BW=189MiB/s, avg/p99.9/max 1.9/6.2/36.1ms
write: IOPS=20.7k, BW=81.0MiB/s, avg/p99.9/max 1.8/6.3/36.1ms
Edit: I posted this before reading, and these two are the same ones he points out.
And yes, IO typically happens in 4kb blocks, so you need a decent amount of IOPS to get the full bandwidth.
That latter part is a big deal, too. If I buy 1PB of block storage, I’m decently likely to be running a fancy journaled or WAL-ed or rollback-logged thing on top, and that thing might be completely unable to read from a read only snapshot. So actually reading from a PIT snapshot is a pain regardless of what I paid for it. Even using EBS or similar snapshots is far from being an amazing experience.
If that's true, I wonder if this is a deliberate decision by cloud providers to push users towards microservice architectures with proprietary cloud storage like S3, so you can't do on-machine dbs even for simple servers.
Instead they make the default "meager IOPS" and then charge more to the people who need more.
I remember my work laptop's IOPS beating a single VM on the first SSD-based SAN I deployed as well. Of course, the SAN scaled well beyond it with 1,000 VMs.
Business 101 teaches us that pricing isn't based on cost. Call it top-down vs bottom-up pricing, but the first-principles approach of "it costs me $X to make a widget, so I sell it for $Y = 1.y * $X" is not how pricing works in practice.
The price is what the customer will pay, regardless of your costs.
For example I calculated the cost of a solar install to be approximately: Material + Labour + Generous overhead + Very tidy profit = 10,000€
In practice I keep getting offers for ~14,000€, which will be reduced to 10,000€ with a government subsidy and my request for an itemized invoice is always met with radio silence.
Which it won't be, if at every turn you choose the hyperscaler.
It kinda is, but obscured by GP's formula.
More simply; if it costs you $X to produce a product and the market is willing to pay $Y (which has no relation to $X), why would you price it as a function of $X?
If it costs me $10 to make a widget and the market is happy to pay $100, why would I base my pricing on $10 * 1.$MARGIN?
But that is an equilibrium result, and famously does not apply to monopolies, where elasticity of substitution will determine the premium over the rental rate of capital.
I see the same thing happen with Kubernetes. I've run clusters of various sizes for about half a decade now. I've never once had an incident that wasn't caused by the product itself. I recall one particular incident where we had a complete blackout for about an hour. The people predisposed to hating Kubernetes did everything they could to blame it all on that "shitty k8s system." Turns out the service in question had simply caused a DoS by opening up tens of thousands of ports in a matter of seconds when a particular scenario occurred.
I'm neither in the "k8s is the future" camp nor the "k8s is total trash" camp. It's a good system for when you genuinely need it. I've never understood the other two sides of the equation.
Usually they go hand in hand.
By the time I left, the developers didn't really know anything about how the underlying infrastructure worked. They wrote their Dockerfiles, a tiny little file to declare their deployment needs, and then they opened a platform webpage to watch the full lifecycle.
If you're a single service shop, then yeah, put Docker Compose on it and run an Ansible playbook via GitHub Actions. Done. But for a larger org moving off cloud to bare-metal, I really couldn't see not having k8s there to help buffer some of the pain.
I agree that Kubernetes can help simplify the deployment model for large organizations with a mature DevOps team. It is also a model that many organizations share, and so you can hire for talent already familiar with it. But it's not the only viable deployment model, and it's very possible to build a deployment system that behaves similarly without bringing in Kubernetes. Yes, including automatic preview deployments. This doesn't mean I'm provided a VM and told to figure it out. There are still paved-path deployment patterns.
As a developer, I do need to understand the environment my code runs in, whether it is bare-metal, Kubernetes, Docker Swarm, or a single-node Docker host. It impacts how config is deployed and how services communicate with each other. The fact that developers wrote Dockerfiles is proof that they needed to understand the environment. This is purely a tradeoff (abstracting one system, but now you need to learn a new one.)
Also, those simpler deployments usually burn more money per utilized compute, or involve reinventing 80% of k8s, often badly.
It's up to the individual to choose how much knowledge they want to trade away for convenience. All the containers are just forms of that trade.
You surely meant "much less efficient than"
There also seems to be confusion about what I meant by "bare-metal." I wasn't intending to refer to the server ownership model, but rather the deployment model where you deploy software directly onto an operating system.
When all you have is a hammer, every problem starts to look like a nail. And the people with axes are wondering how (or indeed even why) so many people are trying to chop wood with a hammer. Further, some axewielders are wondering why they are losing their jobs to people with hammers when an axe is the right tool for the job. Easy to hate the hammer in this case.
And the end result is often that you have two tribes that have a totally incorrect idea of even what tools they are using themselves and how, and it's like you swapped them an intentionally wrong dictionary like in a Monty Python sketch.
We run k8s with several VMs in a couple different cloud providers. I’d love it if I could forget about the VMs entirely.
Is there a simpler thing than k8s that gets you all that? Probably. But if you don’t use k8s, aren’t you doomed to reimplement half of it?
Like these things:
- Service discovery or ingress/routing (“what port was the auth service deployed on again?”)
- Declarative configuration across the board, including for scale-out
- Each service gets its own service account for interacting with external systems
- Blue/green deployments, readiness checks, health checks
- Strong auditing of what was deployed and mutated, when, and by whom
I ended up buying a cheap auctioned Hetzner server and using my self-hostable Firecracker orchestrator on top of it (https://github.com/sahil-shubham/bhatti, https://bhatti.sh) specifically because I wanted the thing he’s describing — buy some hardware, carve it into as many VMs as I want, and not think about provisioning or their lifecycle. Idle VMs snapshot to disk and free all RAM automatically. The hardware is mine, the VMs are disposable, and idle costs nothing.
The thing that, although obvious, surprised me most is that once you have memory-state snapshots, everything becomes resumable. I make a browser sandbox, get Chromium to a logged-in state, snapshot it, and resume copies of that session on demand. My agents work inside sandboxes, I run docker compose in them for preview environments, and when nothing’s active the server is basically idle. One $100/month box does all of it.
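For anyone curious what that looks like at the Firecracker level, here is a rough sketch (not bhatti's actual code; the socket path is made up and the exact request fields vary a little between Firecracker releases, so check the snapshot docs for your version): pause the microVM, then write a full memory-plus-device snapshot through its API socket so the same logged-in state can be resumed later.

    import json
    import subprocess

    API_SOCK = "/run/firecracker/chromium-sandbox.sock"  # hypothetical socket path

    def fc(method: str, path: str, body: dict) -> None:
        """Send one request to the Firecracker API over its Unix socket."""
        subprocess.run(
            ["curl", "-sf", "--unix-socket", API_SOCK,
             "-X", method, f"http://localhost{path}",
             "-H", "Content-Type: application/json",
             "-d", json.dumps(body)],
            check=True,
        )

    # Pause the running microVM, then capture a full snapshot (memory + devices).
    fc("PATCH", "/vm", {"state": "Paused"})
    fc("PUT", "/snapshot/create", {
        "snapshot_type": "Full",
        "snapshot_path": "/srv/snaps/chromium.vmstate",
        "mem_file_path": "/srv/snaps/chromium.mem",
    })
    # A fresh Firecracker process can later load these files via /snapshot/load
    # (with resume enabled) and pick up the session exactly where it left off.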
Thank you for sharing!
At home, I've done that with a Zellij session (everything is tied to the session, and quitting Zellij completely means "I'm done with this". Merely disconnecting keeps it running).
My only feedback so far is that a lot of the documentation, though thorough and useful, looks clearly AI-written. That's not bad in and of itself, but it could be more concise. I especially love the "design decisions" section as I learned something new already.
Have you posted it on "Show HN" already? If not, you should.
I am aware of the documentation, it’s what I have been focusing on before I can post on HN. I want to make it a delight to read for other people!
As for the design decisions, I have tried keeping all the plans I made in the repo too. I wouldn’t have been able to make bhatti in a month without LLMs.
Out of interest, what sandboxing solution do you use?
There is already so much software out there which isn't used by anyone. Just take a look at any app store. I don't understand why we are so obsessed with cranking out even more, whereas the obvious use case for LLMs should be to write better software. Let's hope the focus shifts from code generation to something else. There are many ways LLMs can assist in writing better code.
I believe right now we are still in the phase of “how can AI help engineers write better software”, but are slowly shifting to “how can engineers help AI write better software.” This will bring in a new herd of engineers with completely different views on what software is, and how to best go about building computer interactions.
That said, if you look at the apps on your phone, I wager a large proportion don't have these moats. Translation, passwords, budget, reminders, email, to do, project management, messaging, browser, calendar, fitness, games, game tracking, etc.
Jevons paradox would be if, despite software becoming cheaper to produce, the total spend on producing software increased because the increase in production outran the savings.
Jevons paradox applies when demand is very elastic, i.e. small changes in price cause large changes in quantity demanded. It's a property of the market.
He's saying that agents make code much cheaper, therefore there will be a large increase in demand for code. This appears to be exactly what you're describing.
I honestly think this is ideal. Video games aside, I think one day we'll look back and realize just how insane it was that we built software for millions or even billions of users to use. People can now finally build the software that does exactly what they've wanted their software to do without competing priorities and misaligned revenue models working against them. One could argue this kind of software, by definition, is higher quality.
I could see maybe more customization of said software, but not totally fresh. I do agree that people will invent more one-off throwaway software, though.
Tinkering? Even today, people don’t need to understand software. They just need to be able to describe their problems and goals to create an app.
> I mean, are you envisioning that everyone would have their own custom messaging app, for example? Or email?
Well first I think there’s a good chance that most apps as we know them today won’t even exist, and most “apps” will be tool use on APIs. But even then, shopping apps, for example, could be so highly personalized that no two people have the same one.
> I mean, I think most people's demands for those things are all extremely homogenous.
They aren’t, as evidenced by the fact there are many dozens of popular messaging apps with millions of users. Despite the network effects for a messaging app to even be viable.
Also, I’m not talking one-off throwaway apps… these are living, breathing pieces of production-grade software users will mold to fit their needs and evolve with them for years.
I’m not sure what “totally fresh” means
My view is actually the opposite. Software should now be cattle, not pets. We should use one-offs. We should use micro-scale snippets. Speaking natural language should be equivalent to programming. (I know, it's a bit of a pipe dream.)
In that sense, exe.dev (and tailscale) is a bit like pet-driven projects.
Vibe coding or LLM accelerated development is going to turn this on its head. Everyone will be able to afford custom software to fit their specific needs and preferences. Where Salesforce currently has 150,000 customers, imagine 150,000 customers all using their own customised CRM. The scope for software expansion is unbelievably large right now.
In the 70s, it was called "time-sharing". Instead of buying a mainframe, you got a CICS application instance on a mainframe and used that. (tangentially, spare time on these built-out nation-wide dialup-supported networks is what gave birth to CompuServe and GEnie).
In the dot-com era, it was called "application service providers". Salesforce actually started in this era (1999). So did NetSuite. This was the first attempt to be browser-based, but bandwidth and browsers sucked then.
I think PaaS is a more recent software paradigm, albeit a far less successful one.
As for the average quality: it’s unclear.
My intuition is that agents lift up the floor to some degree, but at the same time will lead to more software being produced that’s of mediocre quality, with outliers of higher quality emerging at a higher rate than before.
If you wanted to, you could make an argument about the principal-agent problem: that as hunter-gatherers or subsistence farmers, our quality-versus-quantity decisions only affected us, whereas in a market economy, you could argue that one person's quality-versus-quantity decision affects someone else.
But dismantling capitalism will not solve this problem. It just moves the decision-making to a different group of people. Those people will face the same trade-offs and the same incentives. After the Revolution, even the most loyal comrade will have to contend with the fact that they can choose to provide the honourable working class with more of a thing if they drop the quality.
If you're doing anything complicated, Excel just doesn't make sense anymore. It'll still be the data exchange format (or at least something more advanced than CSV), but it's no longer the only frontend.
"No one uses" is no longer the insult it once was. I don't need or want to make software for every last person on the world to use. I have a very very small list of users (aka me) that I serve very well with most of the software that I generate these days outside of work.
It certainly is for lots of businesses, otherwise they go out of business.
There is something called 'revenue' which they need to make from customers which are their 'users', and that revenue pays for the 'operating costs' which includes payroll, office rent, infrastructure etc.
This just means that it is more important than ever to know what to build, not just how it is built. It is unrealistic for a business to disregard that, build anything they want, and end up with zero users.
No users, No revenue. No revenue, No business.
I agree there is opportunity in making LLM development flows smooth, paired with the flexibility of root-on-a-Linux-machine.
> Time and again I have said “this is the one” only to be betrayed by some half-assed, half-implemented, or half-thought-through abstraction. No thank you.
The irony is that this is my experience of Tailscale.
Finally, networking made easy. Oh god, why is my battery doing so poorly. Oh god, it's modified my firewall rules in a way that's incompatible with some other tool, and the bug tracker is silent. Now I have to understand their implementation, oh dear.
No thank you.
I hope this wasn't interpreted towards exe.dev. That really is a cool service!
Tags permanently erase the user identity from a device, and disable things like Taildrop. When I tried to assign a tag for ACLs, I found that I then could not remove it and had to endure a very laborious process to re-register a Tailscale device that I added to Tailscale for the express purpose of remotely accessing
But yes, I don't think you can ACL based on the hostname.
Part of the reason that we don't (currently) let you do this is that a hostname is a user-reported field, and can change over time; it's not a durable form of identity that you can write ACLs on. One could imagine, for example:
1. Creating an ACL rule that allows hostname "webserver" to hostname "db".
2. (time passes)
3. Hostname "webserver" is deleted/changed to "web"/etc.
4. Someone can now register a user device with the system hostname set to "webserver"
Should they be allowed to inherit the pre-existing ACL rule?
However, you can accomplish something very close to what you're asking for, I think, by defining a "host" in the policy file (https://tailscale.com/docs/reference/syntax/policy-file#host...) that points to a single Tailscale IP. Since we don't allow non-admins to change their Tailscale IP, this uniquely identifies a single device even if the hostname changes, and thus you can write a policy similar to:
"hosts": {
"myhost": "100.64.1.2",
},
"grants": [
{
"src": ["myhost"],
"dst": ["tag:db"],
},
]

Could you rephrase that / elaborate on that? Isn't Tailscale's selling point precisely that they do identity-based networking?
EDIT: Never mind, now I see the sibling comment to which you also responded – I should have reloaded the page. Let's continue there!
I think that's startup-thinking, at least in my experience. Maybe in a small company the DevOps guy does all infra.
In my experience, especially in financial services, who runs the show are platform engineering MDs - these people want maximum control for their software engineers, who they split up into a thousand little groups who all want to manage their own repos, their own deployments, their own everything. It's believed that microservices gives them that power.
I guarantee you devops people hate complexity, they're the ones getting called at night and on the weekend, because it's supposedly always an "infrastructure issue" until proven otherwise.
Also the deployment logs end up in a log aggregation system, and god forbid software developers troubleshoot their own deployments by checking logs. It's an Incident.
Are microservices a past fad yet?
Everything cloud companies provide just costs so much. My own Postgres with an HA setup and backups has cost me 1/10th the price of an RDS or CloudSQL service, running in production over 10 years with no downtime.
I autoscale instances directly off of the metrics harvested from Grafana, and it works fine for us; we have an autoscaler configured via webhooks. Very simple and it has never failed us.
I don't know why I would ever use GCP or AWS anymore.
All my services are fully HA and backups work like a charm every day.
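For what it's worth, that webhook-driven autoscaler pattern fits in a few dozen lines. A sketch assuming Grafana's webhook contact point, which POSTs JSON with a top-level "status" of "firing" or "resolved"; the scale-workers command is a placeholder for whatever provider tooling you actually call:

    import json
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def scale(direction: str) -> None:
        # Placeholder: shell out to your provider tooling (hcloud, terraform,
        # an in-house script). Not a real command.
        subprocess.run(["/usr/local/bin/scale-workers", direction], check=False)

    class AlertHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # Scale out while the alert is firing, back in when it resolves.
            if payload.get("status") == "firing":
                scale("up")
            elif payload.get("status") == "resolved":
                scale("down")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9000), AlertHandler).serve_forever()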
Does a regular 20-something software engineer still know how to turn some eBay servers & routers into a platform for hosting a high-traffic web application? Because that is still a thing you can do! (I've done it last year to make a 50PiB+ data store). I'm genuinely curious how popular it is for medium-to-big projects.
And Hetzner gives you almost all of that economic upside while taking away much of the physical hassle! Why are they not kings of the hosting world, rather than turning over a modest €367M (2021)?
I find it hard to believe that the knowledge to manage a bunch of dedicated servers is that arcane that people wouldn't choose it for this kind of gigantic saving.
Managing servers is fine. Managing servers well is hard for the average person. Many hand-rolled hosting setups I've encountered include fun gems such as:
- undocumented config drift.
- one unit of availability (downtime required for offline upgrades, resizing or maintenance)
- very out of date OS/libraries (usually due to the first two issues)
- generally awful security configurations. The easiest configuration being open ports for SSH and/or database connections, which probably have passwords (if they didn't you'd immediately be pwned)
Cloud architecture might be annoying and complex for many use cases, but if you've ever been the person who had to pick up someone else's "pet" and start making changes or just maintaining it, you'll know why it can be nice to have cloud architecture put some of its constraints on how infra is provisioned, and to be willing to pay for that.
Hetzner is an old-school German company, so it is not surprising to see them act this way. They are very profitable (€165M in 2024) and have very little debt. They also seem to be mostly bootstrapped and are not VC funded.
https://www.northdata.com/Hetzner%20Online%20GmbH,%20Gunzenh...
Whether or not cloud is viable for a company is very individual. It's very hard to pinpoint a size or a use case that will always make cloud the "correct" choice.
OP is not saying they push new versions at such a high frequency they need checks every one minute.
The choice of one minute vs. 15 minutes is an implementation detail, and when architected like this it costs nothing.
I hope that helps. Again this is my own take.
Maybe you meant to say "automatically" instead of "immediately"? Because if you really mean "immediately" then there is still plenty of low-hanging fruit to be had.
But I came across Mythic Beasts (https://www.mythic-beasts.com/) yesterday, similar idea, UK based. Not used them yet but made the account for the next VPS.
It is like 4 lines of config for Postgres; the only line you need to change is the path where Postgres should store its data.
Maybe change the filesystem?
You can use block storage if data matters to you.
Many services do not need to care about data reliability or can use multiple nodes, network storage or many other HA setups.
But there is a middle ground in the form of a VPS, where the hardware is managed by the provider. It's still way, way cheaper than some cloud magic service.
I am sure it's luck, but we have a few Hetzner VPSes in both German locations, and in the last 5 years, as far as I know, they've never been down. On our HTTP monitoring service they show hundreds of days of uptime, and only that little because we restarted them ourselves.
An employee is going to cost anywhere between 8k and 50k per month. Hiring an employee to save 200/month on servers by using a shitty VPS provider is not saving you any money.
If you're looking to invest im fine with only $5M :)
I don't want to make that public; it's my version of an isolated dev environment and it runs on my private Raspberry Pi behind my TV. Costs me nothing.
I hope you have a good success with your service.
Running Shellbox 24/7 is ~25% cheaper than Exe, with 2x storage but 50% of RAM. Exe seems to provide additional features (which I don't need). Not presenting this information upfront and in an easily digestible format makes me suspicious.
I dig the overall aesthetic and may give Shellbox v2 a try.
`ssh you/repo/branch@box.clawk.work` → jump directly into Claude Code (or Codex) with your repo cloned and credentials injected. Firecracker VMs, 19€/mo.
POC, please be kind.
If you want to try it: code `HNPRELAUNCH` on checkout, first month free, then 19€/mo (cancel anytime from your Stripe receipt). Limited to the first 20 redemptions, expires in a week.
Honest feedback on what breaks would mean a lot.
At 19€/mo, are you subsidizing it, given the sharp rise in LLM costs lately?
Or are you heavily restricting model access? Surely there is no Opus?
Just shows I'm the Dropbox commenter. I have what exe provides on my own and am shocked by the value these abstractions provide everyone else!! One-off containers on my own hardware that spin up and spin down, run async agents, etc., Tailscale auth, and the team can share or connect easily by name.
The technology itself in its current form is not valuable
Almost every VC rejected us when we went to get seed funding for Tailscale, we knew none of them. Friends of friends of acquaintances got us meetings. Fundraising is very possible for you if you are committed to building a business. Most important thing is don't think of fundraising as the goal, it is just a tool for building a business. (And some businesses don't need VC funding to work. Some do.)
The biggest challenge is personal: do you want to build a business or do you want to work with cool tech? Sometimes those goals are aligned, but usually they are not. Threading the needle and doing both is difficult, and you always have to prioritize the business because you have to make payroll.
Ha! This made me smile :)
Lean software -> missing features users want -> add features over time -> bloated mess -> we need a smaller rewrite -> Lean software -> ...
Not sure we can move away from CPU/memory/IO budgeting towards total metal saturation, because code isn't what it used to be: no one handles malloc failure any more, we just crash OOM.
The key point is the partner companies. Almost nobody is actually running their own clouds the way they would with various 365 products, AWS, or Azure. They buy the cloud from partners, similar to how they used to (and still do) buy solutions from Microsoft partners. So if you want to "sell cloud" you're probably going to struggle unless you get some of these on board. Which again would probably be hard, because I imagine a lot of what they sell is sort of a package that basically runs on VMs set up as part of the offering they already have.
International visitors might tell us more about benefits of non EU, US or UK nexus companies/legal/rights.
> Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud. But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization. Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow. Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend. It is tempting to dismiss Kubernetes as a scam, artificial make work designed to avoid doing real product work, but the truth is worse: it is a product attempting to solve an impossible problem: make clouds portable and usable. It cannot be done.
Please learn from Unix's mistakes. Learn from Nix. Support create-before-destroy patterns everywhere. Forego all global namespaces you can. Support rollbacks everywhere.
If any cloud provider can do that, cloud IaC will finally stop feeling so fake/empty compared to a sane system like NixOS.
Fine, their UI is different, but I don't see any real difference from other providers.
On that machine you can (easily) make an arbitrary number of VMs.
Each VM has their own URL that you can share (or make private).
See features: https://exe.dev/docs/customization
We're thinking about switching to this pricing model for our own startup[1] (we run sandboxed coding agents for dev teams). We run on Daytona right now for sandboxes. Sometimes I spin up a sandboxed agent to make changes to an app, and then I leave it running so my teammate can poke around and test the running app in the VM, but each second it's running we (and our users) incur costs.
We can either build a bunch of complicated tech to hibernate running sandboxes (there's a lot of tricky edge cases for detecting when a sandbox is active vs. should be hibernated) or we can just provision fixed blocks of compute. I think I prefer the latter.
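A naive sketch of the idle-detection half of that tradeoff (assuming "active" just means recently observed SSH/HTTP traffic per sandbox) shows why the edge cases bite: long-running builds, agents waiting on slow tools, and half-open connections all look idle to something this simple.

    import time

    IDLE_AFTER_SECONDS = 15 * 60

    # sandbox_id -> unix timestamp of last observed activity, assumed to be
    # updated elsewhere by the SSH gateway / HTTP proxy.
    last_activity: dict[str, float] = {}

    def sweep(hibernate) -> None:
        """Hibernate every sandbox that has been quiet for too long."""
        now = time.time()
        for sandbox_id, seen in list(last_activity.items()):
            if now - seen > IDLE_AFTER_SECONDS:
                hibernate(sandbox_id)  # e.g. snapshot to disk, then free the RAM
                del last_activity[sandbox_id]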
Oh, that’s too kind. More like 100x to 1000x. Raw bandwidth is cheap.
I need to fix our transfer pricing. (In fact I'm going to go look at it now.) I set that number when we launched in December, and we were still considering building on top of AWS, so we put a conservative limit based on what wouldn't break the bank on AWS. Now that we are doing our own thing, we can be far more reasonable.
Another one could be Bitwarden, although I don't host my own password manager personally. Or netbird. You get the point
Dedicated servers, as hinted by others here, address the vast majority of issues one may face for any non-enterprise needs. If you know about IOPS and care about them, odds are that running a simple open-source project [1] on top of one is all you need to do to move on with your day.
Need redundancy, etc.? You can complement it with another one in a different provider/region or put CF in front of your box. This is clearly working well enough for some of the commenters here who are able to sell their own service on top of this approach.
Running a cloud data center could be a business like operating a self-storage facility or a car wash. Small investors love this kind of operation.
VMs have a built-in gateway to cloud providers with a fixed URL and no auth. You can top that up via the service itself. No need for your own keys.
So likely a good tool for managing AI agents. And "cloud" is a bit of a stretch, the service is very narrow.
The complete lack of any detailed description of the regions beyond a city name makes it really only suitable for ephemeral/temporary deployments. We don't know what the datacenters are or what redundancy is in place, and there are no backups or anything like that.
The shell command to start a new VM has a --prompt flag to get an LLM to configure the VM for you.
VMs have no public IPv4 address, and the IPv6 address doesn't seem to allow incoming connections.
The only supported inbound connections are via their HTTP proxy.
There is no private networking.
At first I interpreted the complaint about cloud providers not offering nested-virtualization, as something he intends to address by offering it as a feature, but no, instead he means that exe.dev's VM abstraction eschews the need for it.
I'm very curious how they deal with subscription levels/noisy neighbors.
You can see their base docker image here - https://github.com/boldsoftware/exeuntu
52.35.87.134 <- Amazon Technologies Inc. (AT-88-Z)
Our exe.dev web UI still runs on AWS. We also have a few users left on our VM hosts there, as when we launched in December we were considering building on AWS. Now almost all customer VMs are on other bare metal providers or machines we are racking ourselves. We built our own GLB with the help of another vendor's anycast network. You can see that if you try any of the exe.xyz names generated for user VMs.
We would move exe.dev too, but we have a few customers who are compliance sensitive going through it, so we need to get the compliance story right with our own hardware before we can. It is a little annoying being tied to AWS just for that, but very little of our traffic goes through them, so in practice it works.
Hey wait a minute!
Checking the current offering, it's just prepaid cloud capacity with rather low flexibility. It's cheap though, so that is nice, I guess. But does this solve anything new? Anything fly.io or the like doesn't solve?
What is the new idea here? Or is it just the vibes?
As another user notes in this thread, exe.dev isn't that cheap. Their bandwidth pricing is $7/100 GB. The lowest compute tier is $20/mo (Fly.io machines/sprites can go for less than $2/mo).
> Anything fly.io also doesn't solve?
exe.dev is comparable to sprites.dev Fly.io launched recently; but with a different pricing model.
David, by the way of Tailscale, themselves were among early users of Fly.io. I read some of David's commentary on "Cloud 1.0" as taking a dig at their friends at Fly.io, too. This is going to be interesting...
* Insistence on adding costly abstractions to overcome the limitations of non-fungible resources
* Deliberate creation of over or under-sized resource "pieces" instead of letting folks consume what they need
* Deliberate incompatibility with other vendors to enforce lock-in
I pitched a "Universal Cloud" abstraction layer years ago that never got any traction, and honestly this sounds like a much better solution anyhow. When modern virtualization is baked into OS kernels, it doesn't make a whole lot of sense to enforce arbitrary resource sizes or limits other than to inflate consumption.
Kubernetes without all the stuff that makes it a bugbear to administrate, in other words. Let me buy/rent a pool of stuff and use it how I see fit, be it containers or VMs or what-have-you.
In my experience, K8s is a million times better than legacy shit it is usually replacing. The Herokus, the Ansible soup, the Chef/Puppet soup before that etc. The legacy infra that was held together by glue and sweat that everybody was afraid to touch.
Human nature, really.
I've found the quality and simplicity to be an attractive solution for lazy devops when I need to reach for a second computer
One thing I'm confused about is how to create a shared resource, e.g. a Redis server, and connect to it from other VMs. It looks quite cumbersome right now to set up Tailscale or connect via SSH between VMs. Also, what about egress? My guess is that all traffic is billed at $0.07 per GB. It looks like this cloud is made to run stateful agents and personal isolated projects, and distributed systems or horizontal scaling aren't a good fit for it?
Also, I'm curious why not a Railway-like pricing model billed per resource utilization? It's very convenient, and I would argue it is made for the agent era.
I set up a Railway project for my friends and family that spawns a VM with a disk (stateful service) via a Telegram bot and runs an openclaw-like agent; it costs me something like $2 to run 9 VMs like this.
The main reason clouds offer network block devices is abstraction.
EC2 provides the *d VMs that have SSDs with high IOPS at a much lower cost than network SSDs. They are ephemeral, but so is a laptop and its SSD: it can lose the data. From the AWS docs: "If you stop, hibernate, or terminate an instance, data on instance store volumes is lost."
> Finally, clouds have painful APIs. This is where projects like K8S come in, papering over the pain so engineers suffer a bit less from using the cloud.
K8s's main function isn't to paper over existing cloud APIs; that is just a necessity when you deploy it in a cloud. On normal hardware it's just an orchestration layer, and often just a way to pass config from one app to another in a structured format.
> But VMs are hard with Kubernetes because the cloud makes you do it all yourself with lumpy nested virtualization.
Man discovers system designed for containers is good with containers, not VMs. More news at 10.
> Disk is hard because back when they were designing K8S Google didn’t really even do usable remote block devices, and even if you can find a common pattern among clouds today to paper over, it will be slow.
Ignorance. k8s has abstractions over a bunch of types of storage; for example, using Ceph as a backend will just use KVM's Ceph backend, with no extra overhead. It also supports "oldschool" protocols used for VM storage like NFS or iSCSI. It might be slow in some cases in the cloud if the cloud doesn't provide enough control, but that's not k8s's fault.
> Networking is hard because if it were easy you would private link in a few systems from a neighboring open DC and drop a zero from your cloud spend.
He mistakes cloud problems for k8s problems (again). All k8s needs is visibility between nodes. There are multiple providers to achieve that, some with zero tunnelling, just routing. It's still complex, but no more than "run a routing daemon".
I expect his project to slowly reinvent cloud APIs and copy what k8s and other projects did, once he starts hitting the problems those solutions solved. And to do it worse, because instead of researching the whys and why-nots, that person seems to want to throw everything out while learning no lessons.
Do not give him money
(Percentages cited above are tongue-in-cheek, actual numbers are probably different)
Then I started to realize most people who complain are rolling their own which is also not bad since there are products like k3s that are very simple to use.
It seems things start to fall apart when they try to stuff it with all kinds of crazy idiotic controllers and the favorite of the month CNI and CSI. I always shake my head when I see people creating sand castles by setting up stuff like Ceph from within the cluster.
If you want to play with it keep things simple and have all the persistent data outside of the cluster. Use good old NFS instead of the latest longceph horngluster version. Keep databases and the container registry out. Treat it like a compute pool not a virtual datacenter. Stop recursing chickens inside eggs.
A service offering VMs for $20 is a long way from AWS, but I see how it makes sense as a first step. AWS also started with EC2, but in a completely different environment with no competition.
But I don't want to be either of those customers. It means the whole system has an extra layer of abstraction, so they can juggle VMs around. It's why you need slow EBS instead of just getting a flash drive in the same case as the CPU, with 0.01x the latency.
The key to scaling up is to have big-enough hardware on the backend. If Hetzner is renting out bare metal instances then they can only rent out the sizes that they have. If a cloud provider invests in really big single systems, they can offer fractions of those systems to multiple tenants, some of whom scale up to use the entire system, and some who don't. I think that is a win-win.
A fractional VM is also a fungible VM. If the tenant calls to spin up a certain size VM, then the backend can find suitable hardware for it from a menu of sizes. Smaller VMs can slot in anywhere there is room, not just on a designated bare-metal system.
A cloud provider is always going to want to maximize their rack space, wattage/heat, and resource usage. So they will invest in high-density systems at every chance. On the other hand, cloud tenants will have diverse needs, including some fraction of those big computers.
"That must be worst website ever made"
Made me love the site and style even more
I don’t care about how the backend works. Superbase requires magical luck to self host.
A lot of cloud providers have very generous free tiers to hook you, and then the moment things take off, it's a small fortune to keep the servers on.
Starting a DigitalOcean droplet is a single curl call. Starting a Hetzner server is as well. Their APIs are completely fine and known to LLMs.
Why would agents learn exe’s way of setting up / deploying / binding to ports / auth, rather than just ssh’ing into a vm..?
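For reference, "a single curl call" really is about this much work. Here's the equivalent sketch in Python against DigitalOcean's public v2 droplets endpoint; the size and image slugs are just examples, so check their docs for current values.

    import json
    import os
    import urllib.request

    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets",
        method="POST",
        headers={
            "Authorization": f"Bearer {os.environ['DIGITALOCEAN_TOKEN']}",
            "Content-Type": "application/json",
        },
        data=json.dumps({
            "name": "agent-box-1",
            "region": "fra1",
            "size": "s-1vcpu-1gb",        # example slug
            "image": "ubuntu-24-04-x64",  # example slug
        }).encode(),
    )
    with urllib.request.urlopen(req) as resp:
        droplet = json.load(resp)["droplet"]
        print(droplet["id"], droplet["status"])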
Cloud is bad?
Every time I've had an issue or question, it's been the same sympathetic people helping me out. Over email, in plain text.
> The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center.
From the exe.dev pricing page:
> additional data transfer $0.07/GB/month
So at least on the network price promise they don't seem to deliver; it still costs an arm and a leg, like your neighbourhood hyperscaler.
Overall service looks interesting, I like simplicity with convenience, something which packet.net deliberately decided not to offer at the time.
if we go back to the principle that modern computers are really fast, SSDs are crazy fast
and we remove the extra cruft of abstractions - software will be easier to develop - and we wouldn't have people shilling 'agents' as a way for faster development.
ultimately the bottleneck is our own thinking.
simple primitives, simpler thinking.
https://github.com/hetzneronline/community-content/blob/mast...
It also has a CLI, hcloud. Am I getting any value with exe.dev I couldn't get with an 80 line hcloud wrapper?
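To make the comparison concrete, a sketch of what such a wrapper might look like, just shelling out to the hcloud CLI. The server type, image, and SSH key name are placeholders, and the JSON field names assume the CLI output mirrors the Hetzner Cloud API; adjust for your version.

    import json
    import subprocess

    def hcloud(*args: str) -> str:
        return subprocess.run(["hcloud", *args], check=True,
                              capture_output=True, text=True).stdout

    def create(name: str) -> None:
        hcloud("server", "create",
               "--name", name,
               "--type", "cx22",           # example server type
               "--image", "ubuntu-24.04",  # example image
               "--ssh-key", "my-key")      # example key name

    def destroy(name: str) -> None:
        hcloud("server", "delete", name)

    def ips() -> dict[str, str]:
        servers = json.loads(hcloud("server", "list", "-o", "json"))
        return {s["name"]: s["public_net"]["ipv4"]["ip"] for s in servers}

    if __name__ == "__main__":
        create("scratch-1")
        print(ips())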
For agents, declarative plans are still valuable because they are reviewable. The interesting question is whether exe.dev changes the primitive: resource pools for many isolated VM-like processes, or just nicer VPS provisioning.
One of my friends was told to come to a sex party that was all male and he is straight. It soured his relationship with the firm so much he ended up winding down the business.
but i know nothing about what the comment says, just answering your question.
Jokes aside:
- k8s is an insane piece of software. The right tool for a big problem. Not for your toys. Yes, it is crazy difficult to set up and manage. Then what?
- cloud has bad and slow disk. BS. They have perfectly fast NVMe.
Something else? That’s it.
Why am I so confident? I used to set up and manage Kubernetes for 2 years, so I have some experience. Do I use it anymore? Nope. Not the right tool for me. Ansible with some custom Linux tools fits me better.
I also built my own cloud. Or, to say it less loudly: hosting for the websites of https://playcode.io. Yeah, it is hard and comes with a lot of compromises. Like networking: yes, I want to communicate between VMs in any region. Or disks and reliability. What about snapshots? And many bare-metal renters give you only 1 Gbit/s, which is not fine, or they ask way more for a 10 Gbit uplink. So it is easy to end up building something limited and unreliable, or non-scalable.
>One price, no surprises. You get 2 CPUs, 8 GB of RAM, and 25 GB of disk—shared across up to 25 VMs.
This might sound like a good thing compared to the current state of clouds, but what's better than that is having your own. The other day I got a used OptiPlex for $20; it had a 2 TB HDD, a 256 GB SSD, 16 GB of RAM, and a Core i7. That is a one-time payment, not monthly. You can set up Proxmox, have dozens of LXCs and VMs, and even nest more LXCs inside them: your hardware, physically with you, backed up by you, monitored by you, and accessed only by you. If you have stable internet and electricity, there's really no excuse not to invest in your own hardware. A small business can invest in that as well, not just a personal one. Go to rackrat.net and grab a used server if you are a business, or a good workstation for personal use.
> That must be worst website ever made.
the level of confidence (this is a second time founder after all) to put that on their website gives me confidence that they can make this work
"In some tech circles, that is an unusual statement. (“In this house, we curse computers!”) I get it, computers can be really frustrating. But I like computers. I always have. It is really fun getting computers to do things. Painful, sure, but the results are worth it. Small microcontrollers are fun, desktops are fun, phones are fun, and servers are fun, whether racked in your basement or in a data center across the world. I like them all."
The reality: Everyone reading his blog or this HN entry loves computers.
- I'm building a server farm in my homelab.
- I'm doing a small startup to see if this idea works.
- We're taking on AWS by being more cost effective. Funding secured.
I like the way you can tell it what you want and it makes it. Very cool.
Perhaps the VM idea is old. The unit is a worker encapsulated in some deployable container.
In the world of Cloudflare Workers, and especially Durable Objects, you're guaranteed to have exactly one of them running in the world, with a tightly bound database.
The way I think of apps has changed.
My take is devs want a way to say “run this code, persist this info, microsecond latency, never go down, scale within this $ budget”
It’s crazy how good a deal $5/mo cloudflare standard plan is.
Obviously many startups raise millions and they gotta spend millions.
However the new age of scale to zero, wake up in millisecond, process the request and go back to sleep is a new paradigm.
Vs old school of over provision for max capacity you will ever need.
Google has a similar, scale to zero container story but their cold startup time is in seconds. Too slow.
And what does it have to do with the "cloud"? Cloud means one uses cloud-provided services (security, queues, managed databases, etc.) and that's their selling point. This exe.dev is a bare server where I can install what I want, which is fine, but this is not a cloud and, frankly speaking, nothing new.
Is there a name for this style of writing? I come across it regularly.
I'd describe it as forcefully modest, "I'm just a simple guy" kind of thing. With a dash of "still a child on the inside". I always picture it as if the guy from the King of Queens meme wrote it.
"I guess I'm just really into books, heh" - Bezos (obviously non-real, hypothetical quote, meant to illustrate the concept)
This style is also very prevalent in Twitter bios.
Since it's a "literary" style that is quite common, I'm sure it has been characterized and named.
GPT says it's "aw-shucks", but I think that's a different thing.
If you want to run a website in the cloud, you start with an API, right? A CRUD API with commands like "make me a VPC with subnet 1.2.3.4/24", "make me a VM with 2GB RAM and 1 vCPU", "allow tcp port 80 and 443 to my VM", etc. Over time you create and change more things; things work, everybody's happy. At some point, one of the things changes, and now the website is broken. You could use Terraform or Ansible to try to fix this, by first creating all the configs to hopefully be in the right state, then re-running the IaC to re-apply the right set of parameters. But your website is already down and you don't really want to maintain a complex config and tool.
You can't avoid this problem because the cloud's design is bad. The CRUD method works at first to get things going. But eventually VMs stop, things get deleted, parameters of resources get changed. K8s was (partly) made to address this, with a declarative config and server which constantly "fixes" the resources back to the declared state. But K8s is hell because it uses a million abstractions to do a simple thing: ensure my stuff stays working. I should be able to point and click to set it up, and the cloud should remember it. Then if I try to change something like the security group, it should error saying "my dude, if you remove port 443 from the security group, your website will go down". Of course the cloud can't really know what will break what, unless the user defines their application's architecture. So the cloud should let the user define that architecture, have a server component that keeps ensuring everything's there and works, and stops people from footgunning themselves.
Everything that affects the user is a distributed system with mutable state. When that state changes, it can break something. So the system should continuously manage itself to fix issues that could break it. Part of that requires tracking dependencies, with guardrails to determine if a change might break something. Another part requires versioning the changes, so the user (or system) can easily roll back the whole system state to before it broke. This abstraction is complicated, but it's a solution to a complex problem: keeping the system working.
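A toy sketch of the reconcile-and-guard loop being argued for here, with a made-up resource model (none of this is exe.dev's API); the point is only the shape: declared state, observed state, drift correction, and a refusal to apply a change that would break a declared dependency.

    import time

    # Declared ("desired") state, the thing the user defined once.
    declared = {
        "vm:web": {"running": True, "ports": [80, 443]},
        "sg:web": {"allow": [80, 443]},
    }
    # vm:web only works if sg:web keeps allowing its ports.
    dependencies = {"vm:web": ["sg:web"]}

    def observe() -> dict:
        # Placeholder: query the provider's API for the actual state.
        return {}

    def apply_resource(name: str, desired: dict) -> None:
        # Placeholder: push one drifted resource back to its declared state.
        print(f"reconciling {name} -> {desired}")

    def breaks_dependents(name: str, proposed: dict) -> bool:
        # Guardrail: refuse to drop port 443 from sg:web while something
        # declared still depends on it. Real dependency tracking is the hard part.
        if name == "sg:web" and 443 not in proposed.get("allow", []):
            return any(name in deps for deps in dependencies.values())
        return False

    def request_change(name: str, proposed: dict) -> None:
        if breaks_dependents(name, proposed):
            raise ValueError(f"refusing: changing {name} would break a dependent resource")
        declared[name] = proposed

    def reconcile_forever(interval: float = 30.0) -> None:
        while True:
            actual = observe()
            for name, desired in declared.items():
                if actual.get(name) != desired:
                    apply_resource(name, desired)  # drift detected: fix it back
            time.sleep(interval)

    # request_change("sg:web", {"allow": [80]})  # raises instead of silently breaking the site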
No cloud deals with this because it's too hard. But your cloud is extremely simple, so it might work. Ideally, every resource in your cloud (exe.dev) should work this way. From your team membership settings, to whether a proxy is public, the state of your VM, your DNS settings, the ssh keys allowed, email settings, http proxy integration / repo integration settings / their attachments, VM tags & disk sizes, etc. Over time your system will add more pieces and get more complex, to the point that implementing these system protections will be too complex and you won't even consider it. But your system is small right now, so you might be able to get it working. The end result should be less pain for the user because the system protects them from pain (fixing broken things, preventing breaking things), and more money for you because people like systems that don't break. But it's also possible nobody cares about this stuff until the system gets really big, so maybe your users won't care. It would be nice to have a cloud that fixes this tho.
> $160/month
50 VM
25 GB disk+
100 GB data transfer+
100GB/mo is <1mbps sustained
lmao
These are nice declarative statements but have almost no meaningful substance.
> Setup scripts have a maximum size. Use indirection. [What's the maximum size?]
> Shelley is a coding agent. It is web-based, works on mobile. [Cool model bro. Any details you want to share?]
> $20 a month
2025 or 2005, what's the difference?
For that money I can get 5 big bare metal boxes on OVH with fast SSDs, put k0s on them, fast deploy with kluctl, cloudflare tunnels for egress. Backups to a cheap S3 bucket somewhere. I'll never look at another cloud provider.