Is the Future “AWS for Everything”?
A theme running through my book is the idea that efficiency improvements, and the various methods for making products cheaper over time, have historically been dependent on some degree of repetition, on running your production process over and over again. Higher production volume means larger, more efficient factories. It means more opportunities to use dedicated, high-speed, continuous process production equipment, or to implement efficiency-improving methods like Design for Manufacturing or Statistical Process Control. It means more incentive to develop new, better production technology. It means more opportunities to fall down the learning curve. The list goes on.
If you’re only going to run your process once, or just a handful of times, these opportunities are considerably narrowed. It’s obviously hard to justify the time and effort it takes to design a really efficient production process or invent some new manufacturing equipment if that process is constantly changing.
An example of this playing out in practice is the different cost trends of cars vs. car repair. In inflation-adjusted terms, cars have steadily gotten cheaper over time. The cost of car repair, on the other hand, has steadily gotten more expensive, rising mostly at the rate of overall wages (and recently, even faster).
Much of this difference comes down to the nature of the processes at work. Cars are manufactured via a repetitive, high-volume process that spits out nearly identical models by the hundreds of thousands or millions. Car manufacturers can justify spending billions of dollars designing a new model of car and the process for making it, because that cost will get spread out over a huge number of cars. Repairing a damaged car, on the other hand, is different: for a given model, any given repair process will be run a much smaller number of times, or maybe only once (since cars might get damaged in accidents in unique ways). A repair facility will need to accommodate a huge number of different models and model years, each damaged in different ways. There’s much less opportunity to design an efficient, highly automated repair process.
There are some complications to this basic pattern — the Toyota Production System and its descendants were designed to get mass-production-style benefits for a much more variable production process by making that process more flexible — but they don’t change the fundamental logic.
Thus, for things that we can repetitively produce in very large volumes — cars, transistors, LCD screens, corn — we’ve gotten good at making them very cheaply. Things produced in much smaller volumes, or where we need to adapt our process on the fly based on the specific situation, are much harder to produce cheaply. One way of thinking about services, which tend to get more expensive in inflation-adjusted terms over time, is that they’re things which generally require a lot of situation-specific adaptation, and can’t be produced via some high-volume, highly repetitive process.
An important aspect of this is automation. I’m fond of pointing out that it’s generally possible to build a machine to perform any particular task (and it has been for quite some time). If you’re going to do some task thousands or millions of times, it’s long been possible to automate that task with some sort of dedicated machine. (People skeptical of humanoid robots are very fond of pointing out how this sort of hard automation is far more efficient than a human-shaped robot at doing some task.) The challenge with automation has historically been flexibility: creating a machine that can make adjustments on the fly, perhaps changing the sequence of tasks completely as the situation changes, the way a human can. Even if the hardware itself can be used to perform a variety of different tasks, information processing capabilities have been limited; it has taken a lot of time and effort to get any particular automated process working, which could only be justified if those costs could be amortized over a sufficiently large volume. This is why the car industry has historically been by far the biggest user of industrial robots: it has the right combination of very high production volumes and frequent (but not too frequent) process changes, since models change yearly.
But this is changing: automation technology is getting more and more flexible. Computer vision has advanced, billions of dollars are being poured into developing humanoid robots, and a panoply of AI technologies are making it possible for an automated system to flexibly respond in a highly variable environment. Self-driving cars are one example. Being able to drive between any two given points, responding to situations or disruptions as they appear — traffic lights, pedestrians, other cars — is exactly the sort of thing that automation has historically been very bad at, but that technology is now chipping away at.
As automation technology gets better and better, I have been thinking about how it will get pushed into areas requiring low-volume production or situation-specific adaptation that previously have been resistant to it. One potential trajectory is that with better, more flexible automation, “minimum efficient scale” — the size of an operation you need to be competitive — shrinks. With sufficiently capable robots, for instance, it might become possible to efficiently produce things in really small-footprint, low-overhead factories. The idea of “microfactories” is something people are enthusiastic about: you often see it in various prefab construction startups, but that excitement has spread elsewhere. The premise of the (now-defunct) EV startup Arrival was building cars using these sorts of highly flexible microfactories.
But another possible trajectory is in the opposite direction: large-scale, highly efficient production operations which capture significant economies of scale, but which produce a very wide range of outputs. Factories producing millions of different products in low volumes, or even quantities of one. I’m tentatively calling this idea “AWS for everything.”
AWS and flexible automation
AWS (Amazon Web Services) is Amazon’s cloud computing business. The idea of it (and of other similar offerings like Microsoft Azure and Google Cloud Platform) is that instead of needing to set up your own computing infrastructure to do things like host a website or store large amounts of data, you can just rent it from Amazon. Amazon builds the data centers, sets up the servers, and creates the software tools and infrastructure that other people can use to set up and manage their computing needs.
Making this work as a business demands a huge amount of expensive infrastructure; even before AI, Amazon and other cloud computing companies spent a huge amount of money building data centers in various regions. But as Ben Thompson notes, AWS “benefits tremendously from economies of scale.” The more customers AWS has, the more efficiently it can use its infrastructure, similar to how electric utilities wanted lots of customers to reduce demand variability and achieve higher utilization rates. Thus with AWS you get a highly variable output — millions of different websites and computing tasks — supported by large-scale infrastructure investments. You can very quickly use Amazon’s infrastructure to perform whatever computing task you’re interested in, from hosting a small website to processing terabytes of data, without needing to build or operate any of that infrastructure yourself.
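The utilization logic here can be made concrete with a toy simulation. The sketch below (all numbers invented for illustration) shows why pooling many independent, bursty demand streams lets shared capacity, which must be sized for near-peak load, run at a much higher average utilization:

```python
import random

# Toy model (all numbers invented): each customer demands 10 units of
# capacity on a given day with 10% probability, and 0 otherwise.
# Shared infrastructure must be sized for near-peak demand, so the
# smoother the pooled demand, the higher the average utilization.
random.seed(0)

def capacity_needed(n_customers, days=2000, peak_quantile=0.999):
    """Capacity covering the peak_quantile of pooled daily demand."""
    totals = sorted(
        sum(10 if random.random() < 0.1 else 0 for _ in range(n_customers))
        for _ in range(days)
    )
    return totals[int(peak_quantile * (days - 1))]

for n in (1, 10, 100, 1000):
    cap = capacity_needed(n)
    avg_demand = n * 1.0  # expected daily demand per customer: 10 units x 10%
    print(f"{n:4d} customers: capacity {cap:5d}, utilization {avg_demand / cap:.0%}")
```

With one customer, capacity has to cover the full burst, so average utilization is around 10%; with a thousand pooled customers, the bursts average out and utilization climbs toward 80%. This is the same statistical effect that electric utilities (and AWS) exploit.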
This same basic logic applies to physical automation. If you have machinery or equipment that can perform different sorts of tasks or produce a variety of different goods, and an effective software control layer that can tell each piece of equipment what it should be doing and where material should be routed, you can automatically produce a very large variety of different things. And the larger your operations, the lower your marginal costs of production: the more you produce, the greater your equipment utilization rate, and the more you can capture other economies of scale, such as using more efficient high-volume equipment.
Historically, this sort of highly automated, highly flexible production operation has been impractical: setting up any particular automated process took a great deal of time and effort, and the technology didn’t exist for that automation to respond flexibly to a highly variable environment. So automated production lines, even ones that used flexible technology like robotics, could only be justified for high-volume production, and the range of variation they could accommodate was fairly limited.
But as automation and AI get better, this becomes much less true. If your software is smart enough, and your equipment flexible enough, you can set up some new process to produce some new widget on the fly, automatically working out what the process steps need to be and how to route the material through the various machines, without the time and effort historically required to dial it in. And if your volumes are high enough — if you’re producing enough different widgets, each with its own route through a sequence of machines, sharing processing steps where possible — your costs for each individual unit of production might be very low indeed, even as you produce a wide variety of different things. So I can imagine having very large-volume production operations, which obtain large economies of scale and produce a wide variety of different outputs. Huge warehouses filled with all sorts of different machines, materials, parts, and components being routed between them, paths and tasks changing on the fly, a panoply of different goods rolling off the equivalent of the assembly line, each one sent to its final destination by low-cost, small-scale delivery vehicles like drones or Austin Vernon’s pallet EVs. Customers could spin up production on this rented equipment and start producing whatever they wanted without having to build their own factory. These sorts of operations wouldn’t displace traditional mass-production style processes (which will still have a substantial cost advantage), but would exist alongside them.
(You probably don’t even need to completely automate the hardware side, so long as you have a sufficiently intelligent control layer. Uber’s mapping software can direct a driver to where they need to go, leaving the driver to actually turn the wheel and work the controls. Amazon has similar software that tells its distribution center workers where to pick up and bring packages. So you can imagine humans acting as much of the “connective tissue” in this sort of production process, being directed by software telling them where to go and what to do to maximize utilization.)
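At its core, this kind of control layer is a scheduling problem: given a new product’s list of process steps, pick a capable machine (or directed human) for each step on the fly. Here is a deliberately minimal sketch; all machine names and operations are hypothetical, and a real system would also handle timing, transport, and failures:

```python
# Hypothetical machines and the operations each can perform.
MACHINES = {
    "laser_1":     {"cut"},
    "laser_2":     {"cut"},
    "brake_1":     {"bend"},
    "weld_cell":   {"weld"},
    "paint_booth": {"coat"},
}

def route(process_steps, load):
    """Assign each step to the least-loaded machine that supports it.

    This only illustrates on-the-fly routing: a new product needs
    nothing but a step list, no per-product line setup.
    """
    path = []
    for step in process_steps:
        candidates = [m for m, ops in MACHINES.items() if step in ops]
        if not candidates:
            raise ValueError(f"no machine can perform {step!r}")
        machine = min(candidates, key=load.__getitem__)
        load[machine] += 1
        path.append(machine)
    return path

load = {m: 0 for m in MACHINES}
print(route(["cut", "bend", "weld", "coat"], load))  # ['laser_1', 'brake_1', 'weld_cell', 'paint_booth']
print(route(["cut", "coat"], load))                  # ['laser_2', 'paint_booth']
```

Note how the second widget automatically lands on the idle laser: the scheduler balances utilization across shared equipment with no human process engineering per product.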
AWS for everything
You can see the seeds of this “AWS for everything” concept in some businesses that exist today. In manufacturing, there are fabricators like SendCutSend, OSH Cut, or JLCPCB that specialize in high-mix production. You send your part design to SendCutSend: their software automatically checks whether it can be fabricated using their equipment (laser cutters, CNC machines, etc.), and they send you back the part a few days later. According to SendCutSend’s founder Jim Belosic, this model only works because of economies of scale: being able to efficiently spread the costs of their millions of dollars of equipment. As he said on Tool or Die:
The key with high mix is that it actually works at scale. The larger volume of high mix, the easier things get...Especially with sheet cutting. With sheet cutting, the software side of us, it allows us to take hundreds of different customers, with a quantity of one part each, and put them onto a sheet, like tetris, nested all together, and run it all at once. So we only do one setup, for potentially dozens or hundreds of customers, we do one load into the machine, we only retrieve the material once. And we have really good sheet utilization, we have almost no scrap. It’s probably one of the lowest in the industry.
It doesn’t work though, when you only do a few. If I was to run one of those customers at a time, we’d be bankrupt.
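The arithmetic behind that quote is easy to sketch. Here’s a toy cost model (all numbers invented, and nesting idealized as perfect area packing) comparing one machine run per customer against nesting every customer’s part onto shared sheets:

```python
import math

# Toy model (all numbers invented) of why high-mix sheet cutting only
# works at scale: batching many customers' one-off parts onto shared
# sheets amortizes the fixed setup cost of each machine run.

SHEET_AREA = 100.0   # usable area per sheet, arbitrary units
SETUP_COST = 40.0    # load material, program, unload: paid once per run
CUT_COST = 0.5       # variable cutting cost per unit of part area

def run_cost(part_areas):
    """Cost of cutting a set of parts nested together in one batch."""
    sheets = math.ceil(sum(part_areas) / SHEET_AREA)  # idealized nesting
    return sheets * SETUP_COST + CUT_COST * sum(part_areas)

parts = [3.0] * 200  # 200 customers, one small part each

one_at_a_time = sum(run_cost([p]) for p in parts)
batched = run_cost(parts)

print(f"one run per customer: {one_at_a_time:,.0f}")  # one run per customer: 8,300
print(f"all nested together:  {batched:,.0f}")        # all nested together:  540
```

The setup cost dominates at quantity one, which is exactly why “if I was to run one of those customers at a time, we’d be bankrupt”: batching turns 200 setups into 6, while the variable cutting cost stays the same.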
SendCutSend has grown rapidly — founded in 2018, they recently passed $100 million in annual revenue — but they still work hard to maintain flexibility, using equipment that doesn’t require months of downtime to reprogram or configure when processes change. They’re also expanding their offerings. They started with laser cutting, later added CNC machining, and now offer welding of single parts. They’ve also gradually expanded the range of materials that they offer. You can imagine that as automation gets better and better, this sort of business model could continue to be extended, going to multi-part welding, assembly, and eventually entirely finished goods.
And it’s not just manufacturing where this sort of production model might emerge. I was inspired to write this essay after reading a really great essay about lab automation at Owl Posting, speculating that various lab automation startups might converge on being “AWS for biotech”: large, automated labs that can spread the costs of their automation over a large number of experiments run for different customers. Right now much of this sort of lab work isn’t automated, not because it’s not possible to automate but because it’s not repeated enough to be worth it in any particular lab. Centralize all those experiments in one place, and maybe that changes:
If you are to accept that lab centralization (as in, cloud labs) means you can most efficiently use lab robotics—which feels like a pretty uncontroversial argument—it also means that the further you lean into this, the more able you are to vertically integrate upstream. If you’re running enough experiments such that your robots are constantly humming, you can justify producing your own reagents. If you’re producing your own reagents, your per-experiment costs drop. If your per-experiment costs drop, you can offer lower prices. If you offer lower prices, you attract more demand. If you attract more demand, your robots stay even busier. If your robots stay even busier, you can justify producing even more of your own inputs. And so on, ad infinitum, until you devour the entirety of the market, and the game of biology becomes extraordinarily cheap and easy for everyone to play in.
I’m not a scientist, but I can imagine how this sort of model could apply to other areas of scientific research as well — chemistry, materials research, etc.
How far could this model be pushed? I opened this essay talking about car repair, which has risen in cost far faster than the actual production of cars. I’ve been in car accidents where the damage was relatively minor, but that nevertheless cost a large fraction of the entire value of the car to repair, due to the un-optimized, un-automated, labor-intensive repair process. Could we have some sort of large, centralized car repair facility, spreading the cost of its automated equipment (heavy industrial robot arms, lifts, welding robots, perhaps even metal fabrication equipment) across a huge number of repaired cars?
It’s not obvious to me whether this would work for car repair. Whether “AWS for everything” will work in a given industry will depend on the specifics of that industry, the costs and capabilities of the equipment available, and what scaling effects look like. If equipment is relatively inexpensive, and there aren’t substantial economies of scale at work, I wouldn’t expect this sort of production arrangement to necessarily make sense. A few years ago people were very enthusiastic about this sort of model for cooking, with “ghost kitchens”: commercial kitchens without any sort of dine-in option, preparing food for delivery-only restaurants. Some of the supposed advantage of ghost kitchens was that they required much less space, and could be located outside of expensive, high-traffic areas (since you didn’t need any sort of dine-in option). But ghost kitchens were also expected to have economies of scale: multiple different “restaurants” could be served from the same facility, possibly taking advantage of batch ingredient prep or high-capacity equipment. But while ghost kitchens are still around, they don’t seem to have been the enormous success they were originally predicted to be. (Possibly this will change if food prep automation gets much better, but that’d be somewhat surprising to me.)
So for many industries the “AWS for everything” model won’t work. But I nevertheless think there’s a good chance that certain kinds of production — manufacturing, certain sorts of scientific research, other capital-intensive services — will be organized this way in the future.
Thanks to Austin Vernon for reading a draft of this. All errors are my own.


Well, I generally agree with your article, but I think you must note that car manufacturers have optimized their manufacturing processes at the expense of maintenance processes. When you have to remove body parts in order to replace a headlight, the efficiency of the repair process breaks down completely. I have seen cars like the Ford Explorer where it takes about 6 hours of mechanic time to remove all the parts, replace the headlight assembly, and reinstall everything.
A second factor is that manufacturers build things in a way that you can't replace a simple part, such as a light bulb; instead you have to replace the whole headlight assembly. This is particularly evident if you've ever had to replace a valve body on a Subaru transmission.
Great article. Just one request: don't use AWS as your analogy. I get that it's very well known and therefore a great hook, but the economics just don't match up.
AWS is far too expensive to be justified based on economies of scale (in hardware at least). The joke is that with AWS, companies effectively buy the hardware over again every few months. The real benefit of AWS is that it works around two serious market failures:
1/ Company CTOs' congenital unwillingness to pay for software. This happens in large part because they are incapable of differentiating between good and bad software. And one slice of said software happens to be extremely bad but also nominally free, and that short-circuits people's brains.
2/ Large companies and their entrenched bureaucracies that gate-keep basic IT services like new machines and storage drives.
AWS fixes both these problems. It pretends companies are paying for hardware when they are really paying for management software. With products like managed databases, this pretense is even thinner. We can also see that services which offer only colocation (e.g. Hetzner) are only able to charge a tiny fraction of what the major cloud providers charge.
It fixes the second problem because it very loudly advertises its abilities. When the underlying system is capable of provisioning a new cloud bucket in seconds, it is much harder for the resident incompetent IT department to claim that they need seven pages of docs and four weeks to provide the same service.
Which is not to say that the cloud providers haven't innovated in hardware. They have! But this nets them only around a 2x reduction in cost. And their prices are more like 10x _higher_ than just the underlying hardware.