Here’s the abstract to the study, which gives a good sense of what MIT found. I guess that’s literally what an abstract is supposed to do, isn’t it? Anyway, here:

Holy crap, that’s an alarming reference, as much emission output as all data centers? We’ll get to that more in a bit.

I mean, think about it: all cars today are driven using energy-hungry computing power, it’s just that the computer in question lives in your skull and is powered by fistfuls of Pizza Rolls and lots of caffeine. And, if we think about driving in terms of computational efficiency, human driving is incredible, because you’re just re-purposing your brain-computer to drive the car, and you’d need to keep that thing running anyway, so the computational cost is really zero.

But once we move to automated driving, we end up having to add a completely separate computational system to drive the car, and that computer is doing a hell of a lot of work, and that sort of computation takes a lot of energy. How much energy? Well, look at this:

These data centres currently produce around 0.14 gigatonnes of greenhouse gas emissions per year, equivalent to the entire output of Argentina or around 0.3 per cent of global emissions, according to the researchers.

So, if AVs really catch on in a big way, the advanced, deep neural-network-running computers that drive them could create emissions equal to all the computer data centers currently running, which produce as much greenhouse gas as Argentina! Holy crap.

And that doesn’t really factor in other consequences, like how the constant demand for power from computational hardware would necessarily reduce the available range of an electric automated vehicle, possibly spurring the need for larger batteries, which means more weight, which means less efficiency and greater energy demand, and so on. Like I said, nothing is free. If you want your car to drive itself, that takes energy.

To compute the amount of energy needed and emissions produced, the study used four main variables: the number of AVs operating on Earth, how many hours each vehicle operates, the power of each vehicle’s computer, and the amount of emissions produced per unit of electricity used to power the car and its systems. (I’ve sketched that arithmetic below.)

The study doesn’t just point out this issue; it also suggests some possible ways to mitigate it, mostly via the development of specialized hardware optimized for just the sorts of tasks AVs require.

It’s also worth noting that nothing is certain here by a long shot. Even the basic premise that there will be vast numbers of Level 5-ish AVs on the road is by no means guaranteed, for example, and if that many AVs are deployed, it’s very possible they could have positive environmental effects too, as they could lessen private car ownership and make trips more efficient in ways not possible today. Or they could end up driving as much or even more, even if there are fewer cars on the road. The point is nothing is really known yet, and while the study makes an excellent fundamental point – computation in these cars demands significant energy – how the overall deployment of widespread AVs will affect the environment really isn’t clear at all.
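Just to make that four-variable framework concrete, here’s a rough sketch of how the numbers multiply out. The fleet size, hours, and computer power below mirror the scenario widely reported around the study (a billion AVs, an hour a day, an 840-watt computer); the grid carbon intensity is my own round number, so treat the result as ballpark, not MIT’s figure.

# Rough sketch of the study's four-variable accounting. Fleet size, hours/day,
# and computer power mirror the widely reported scenario (1 billion AVs,
# 1 hour/day, 840 W); the 0.5 kg CO2/kWh grid intensity is my own assumption.
def annual_av_compute_emissions_gt(fleet_size, hours_per_day, computer_kw, kg_co2_per_kwh):
    kwh_per_year = fleet_size * hours_per_day * 365 * computer_kw
    return kwh_per_year * kg_co2_per_kwh / 1e12  # kg of CO2 -> gigatonnes

print(annual_av_compute_emissions_gt(1e9, 1.0, 0.84, 0.5))
# ~0.15 Gt/year, right around the ~0.14 Gt the researchers attribute to data centers today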
I personally think that AVs could be less computationally demanding if there’s more infrastructural assistance for their use, and if the focus is on Level 4 – that is, restricted to a particular operating domain – as opposed to the near-magical Level 5, which can operate anywhere, anytime. But infrastructure assistance has its own significant hurdles, including getting actual standards in place and buy-in from government agencies, all of which are highly demanding in different ways.

It really is amazing that this has been glossed over for as long as it has been; currently deployed Level 2 systems like Tesla FSD Beta may be taking a toll on range, but it doesn’t seem that a real careful test of this has been undertaken. It seems a lot of the computation happens regardless of whether the system is active or not (operating in an observational “shadow mode”), but I can’t completely confirm this.

The more AVs that get deployed into the world, the more demand for power. It’s also possible that AVs may encourage more driving, since cars could hypothetically do things like drop you off at work, return home, and then come back to pick you up later. This could ease demand for parking spaces, but increase driving. There are so many unknowns here. But we do know that computation takes power, often nontrivial amounts of it. No way around that.

Humans (when paying attention) can anticipate changing circumstances better than our current AVs can. Take the scenario where the car in front of you is braking because a truck in front of them is turning. A human can recognize the scenario, braking less and coasting into the space they’ve left between them and the car ahead, anticipating that it will resume its previous speed. You’d then regain your following distance by accelerating appropriately. In my experience, the AV has to “play it safe,” braking as much as the car in front of it and maintaining its following distance at all times.

Funny how a certain EV millionaire is also developing ways to leave the planet.

And hydroelectric… hooboy, do I know hydro. Thank you, day job and curiosity. Hydroelectric dams invariably and without exception have absolutely massive ecological impacts. Dozens to hundreds of square miles of land are flooded, and downstream water delivery is severely curtailed and sometimes even completely eliminated. Which destroys vital wetlands, shoals, wildlife habitats, and the like. Which is why many perfectly serviceable old hydroelectric plants, instead of being refit, are being very carefully deconstructed. (Because you can’t just tear them down, especially after a bunch of people built homes in dried-up wetlands.) Which is also why nobody except China (holy shit, just look at the real history of Three Gorges) and India is aggressively building hydroelectric, and overhauls are being very, very heavily scrutinized. And India’s tearing up treaties building theirs. (Water rights: don’t even google that shitshow. Seriously.)

Any new hydroelectric dam in sane countries is looking very long and hard first and foremost at the ecological impacts, not just “hey, we can make zappies for cheap.” And this is despite the fact that it’s not just ‘clean,’ but also economical, by far the most reliable, and incredibly efficient. Properly constructed and operated turbines can run continuously for decades. They’re extremely affordable compared to many other turbines, and they’re extremely low maintenance. Hydroelectric is genuinely really, really good stuff.
Hell, OPP’s 15-generator plant (Horseshoe Falls, not Niagara) was built in 1905, didn’t get upgraded to 60Hz until 1972–1976, and kept running until 1999. How’s that for reducing waste?

But none of those benefits are realized if drying up a river results in billions of dollars in storm damage on an annual basis. Or if you cause the extinction of several species that turn out to be vital to the ecosystem. Or you just look at Lake Mead; Hoover Dam’s capacity dropped by over 40% from low water levels, and Lake Powell got so low upstream that Glen Canyon’s output had to be reduced to almost nothing.

I see immense value in them for use cases like industrial areas and the like, no doubt about that. But to expect people to be able to only half-ass drive and then take over in a split second is insane. By insane, I mean that there is not any training to prove a person is capable of responding correctly. It should be a whole separate category of licensing to be able to “operate,” one that requires hours of in-class and in-the-field testing. My 70-year-old neighbor should not be able to operate any level of AV without some sort of update to their 54-year-old license. Tesla, Supercruise, or whatever else is out there… it’s a Trolley, I’m talking about a Trolley…. I’m sorry.

I’m going to need a more comprehensible measurement. How about a number of ’73 Beetles? *resist – second bad pun, still not sorry.

I recall a NYC traffic engineer talking about how taxis and “disruptive” services like Uber travel about 1.6 miles for every mile a car driven by the occupant drives. Even if you reduce the number of cars on the road, you increase the number of miles driven. This applies equally to the self-driving taxis that will likely be more profitable, and thus be pushed over car ownership, when/if self-driving cars become popular. 60% more miles driven will almost certainly dwarf any power usage by the onboard computers.

An EV (or any car, really) absolutely obliterates my power-hungry desktop in power usage, and this thing is perfectly capable of training computer vision models. Given that we can train neural networks that can then run on a smartphone, I think it is a bit naive to assume that future cars will use anywhere near 60% more energy to power their onboard computers. Even 10 or 20% being sent to the onboard computer sounds like a major engineering failure.

Here’s the thing though: if you don’t want to worry about driving and just want to get to your destination safe, unharmed, and comfortable, we already have a solution. (You can even get a sleeper car, or, for example, a…) They are called trains and streetcars. Unfortunately, no one wants to invest in public transportation this side of the pond, so here we are.

And that’s not even considering any of the potential energy usage changes from widespread adoption of autonomous vehicles. For example, people could buy fewer cars. It takes 10,000 to 20,000 kWh of energy to produce one car (largely depending on weight), so that could be a substantial offset. Given this wide range of possible inputs, there is zero utility in the conclusions made in this article.
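For a sense of scale on that offset, here’s a rough comparison. The embodied-energy figure comes from the comment above; the onboard-computer power, daily driving time, and vehicle life are my own assumptions for illustration, not numbers from the study.

# Rough scale check on the "people could buy fewer cars" offset above.
# The 10,000-20,000 kWh embodied-energy range is from the comment; the
# compute power, daily driving time, and vehicle life are assumptions.
embodied_kwh_per_car = (10_000, 20_000)  # energy to manufacture one car
compute_kw = 1.0                         # assumed onboard AV computer draw
hours_per_day = 2                        # assumed daily driving time
years = 10                               # assumed vehicle lifetime
lifetime_compute_kwh = compute_kw * hours_per_day * 365 * years
print(lifetime_compute_kwh, embodied_kwh_per_car)
# ~7,300 kWh of onboard compute over the car's life -- the same order of
# magnitude as building one car, so the offset isn't trivial under these assumptions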
The handwringing over the energy cost of the computing power belies a fair lack of understanding of how much energy’s actually available in an EV as well – a Tesla’s got an 80kWh battery. They’re saying the compute budget’s gotta stay under 1.2kW, which is an absolutely gobsmacking amount of juice to play with for dedicated hardware, and rounds to about a 10% hit on range if you drive the car for 7 hours. You could power your house for a week on what your car uses to ferry you around for half a day.

I recently talked a friend out of a gas engine for his next vehicle and got him into a ’17 A8L TDI, and he is blown away and now a diesel convert. He keeps telling me how he gets 42 mpg on the highway with a huge barge of an AWD car and simply can’t believe how sporty 466 ft-lbs of torque is at low rpm for “only 240hp.” We need more actually efficient vehicles.

News flash: efficiency is literally always going to mean more emissions. Even the fancy new homogeneous-charge compression-ignition gas engines claiming diesel-like efficiency started emitting diesel-like levels of NOx. EVs use rare earth minerals and come with AI and datacenter pollution, plus batteries’ internal resistance means all our hard work to charge them is bleeding away as self-discharge, which no one talks about (but they would if gas evaporated as quickly). I’ll just go against the fad and say I don’t think 9,000-lb tanks going 0-60 in 2 seconds with self-discharging, fire-starting rare earth batteries are doing anything to save our environment. Small, aluminum, non-rusting two-seater diesel commuter cars are what we need, but nobody will build them. But we can get aluminum frames for batteries. And we can put 7L diesels into trucks no problem. We are misguided.

Is this not yet happening? Should I check my FTX balance?

Blockchain and NFTs, on the other hand, have no proven use case. Idiots like Elon misuse machine learning because they don’t understand the current limitations, and don’t want to hear anything but “yes” from their employees.

That’s the kind of box doing the hard work behind the scenes, figuring out what collection of pixels is a kid and which is a cone, no matter how poorly it does it. 11 kilowatts, continuous, for each one. That’s 264 kWh, per machine, per day. Requiring dozens to hundreds of them to run image classification in order to construct inference structures that the relative equivalent of a broken solar-powered pocket calculator in your car can handle. So in addition to all the other waste, you absolutely have to include multiple datacenters in the 1-9MW and 10MW+ bands at minimum, and more likely in the 100MW+ class. Yes, megawatts. Do the math. Just 4 H100’s is a megawatt of demand. 32 of them is 8.5MW continuous, not including necessary storage (~7-20kW per cabinet), networking (+1-2kW per cabinet), and ancillary systems (anywhere from 4kW to 50kW per cabinet). And we haven’t even talked about cooling.

East Cermak (Lakeside) is now leasing over 120MW of power for just 1.1 million square feet. Lots of that space is taken up by 50+ generators, 20+ 30,000-gallon tanks of diesel, large gothic hallways, and four fiber vaults. The “iDataCenter” (Maiden, NC) is about 500k square feet and has over 80MW of supporting solar alone. You really have no concept at all of just how much power these facilities actually use without having first-hand experience at that scale. The best way that I can give you some idea as to the scale? The combined output of the two largest nuclear power plants in the US is 62,973GWh annually. That would only fulfill 30% of global datacenter power usage for 2018.
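That nuclear comparison is easy enough to sanity-check with the figures quoted above; the only outside number here is my recollection that published estimates of global data-center electricity use for 2018 sit around 200 TWh.

# Sanity check on the nuclear-plant comparison, using the figures above.
two_largest_plants_gwh = 62_973   # combined annual output quoted above
share_of_datacenter_use = 0.30    # "only 30% of global datacenter power usage for 2018"
implied_global_twh = two_largest_plants_gwh / share_of_datacenter_use / 1_000
print(implied_global_twh)
# ~210 TWh/year implied for 2018 -- in the same ballpark as commonly cited
# estimates of roughly 200 TWh, so the comparison holds up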
“The rate of increase in hardware efficiency needed in many scenarios to contain emissions is faster than the current rate.” Which is a colossal understatement. MIT’s vastly underestimated the necessary gains (it’s not 1.1x unless you ignore necessary precision gains; with those, it’s likely 10x). And this ain’t coming. Period.

Everyone’s rushing headlong into ARM as the savior, but in fact, ARM’s only trick is offering lower performance at lower per-watt cost. The world’s “greenest” supercomputer is the Lenovo ThinkSystem SR670 V2 at Flatiron. Period. It achieves 65.091 GFLOPS/watt. It is powered by Intel Xeon Platinum 8362’s and 80 Nvidia H100’s. Each node is 10kW. That’s every single node. Number two is ORNL’s HPE/Cray Frontier TDS at 62.684, using EPYCs and MI250Xs. 74 cabinets of them. Eating over 21MW continuous, excluding cooling. ARM doesn’t even make the list. No, I don’t mean “doesn’t make the top 10.” I mean it does not make the list, at all. And no, Fugaku doesn’t count. The A64FX is a Fujitsu vector processor core with an ARMv8.2 (circa 2016) tacked on to be the interface.

Moore’s Law is long dead, just like the company its author founded and the company its author was CEO of later on! (But not according to Gelsinger, because talk about a reality distortion field. Putting two dies on one package does not count, Pat.) It’s been dead over a decade. And the net performance difference between an AMD EPYC (Zen 1) from 2017 and an AMD EPYC Genoa (2022) on a per-core basis is not that big. EPYC 7601 (32c/64t, 2.2/2.6GHz) to EPYC 7502 (32c/64t, 2.5/3.35GHz) saw best-case gains of about 20%, mostly from clock and cache. And 7502 to 7513 (32c/64t, 2.6/3.65GHz) saw another ~18% typical. So in a 5-year span, the greatest leap in CPU performance is not “X + Y + Z” – that’s not how it works – it’s actually about 25% on average, the overall difference between the 7601 and the 7513. Certain workloads got much, much higher benefit (as much as 40%), others not as much (as little as 5%). That’s nowhere near doubling. And the power increased from 180W to 200W.

And as MIT succinctly points out: what is required at minimum is doubling efficiency, every year. This can be split between performance and power equally for simplicity. So to meet these requirements, the 7601 to 7513 would have needed not 500% performance gains, but a 1,600% performance gain over 5 years. There is not a single class or piece of silicon out there that has doubled in performance, or increased in performance and efficiency at that rate, in many decades. And no, GPUs don’t even come close to meeting it either. The A100 to H100 would have needed 38,800 GFLOPS (DP-FMA) at 250W; it got 33,500 at 700W.

And like it or not? This isn’t fixable. Because physics – not just a suggestion, it’s the LAW! You want Moore’s Law with double the transistors every year, then you get double the power every year. You want to shrink the process to offset it? You can’t. Source-to-drain leakage and materials long ago hit the wall, and publicized process ‘shrinks’ are marketing bullshit. TSMC’s “3 nanometer” has zero relation to any physical measurement. It’s actually 48nm gate pitch, 24nm minimum theoretical, if EUV multi-patterning litho gets sorted – and that’s a big if. 5nm was supposed to be 42nm/24nm; instead it ended up 51-57nm/28-36nm – not even close to the IRDS roadmap. “3nm” doesn’t even hit the IRDS “7nm” roadmap that was supposed to be volume by 2019! Like I said.
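To put that “double every year” requirement in plain numbers against the ~25%-over-five-years CPU figure above, here’s the compounding; this is just arithmetic on the numbers already quoted, nothing more.

# Compounding gap: efficiency doubling every year vs. the observed CPU gains above.
years = 5
required_gain = 2 ** years   # doubling every year for 5 years -> 32x
observed_gain = 1.25         # ~25% net improvement, EPYC 7601 -> 7513, per the comment above
print(required_gain, observed_gain, required_gain / observed_gain)
# 32x required vs. ~1.25x delivered: a shortfall of roughly 25x over just five years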
They’re underselling and understating just how screwed we are, and just how permanently unfixable it is. Because it’s not about “oh, nobody’s investing.” Everybody’s pouring cash into this. Mountains of it. It’s not more profitable to be behind the curve, because smaller gates mean smaller dies, which mean selling more parts per fixed-size wafer. Even worse, immersion lithography (pre-7nm) gets double the throughput of wafers per day over EUV until you hit 4+ patterning layers. So the new processes mean much, much lower output potential. Samsung’s 7nm capped at 1,500 wafers/day, TSMC N7+ at 1,000 wafers/day. TSMC is still hoping to get up to 2,000 wafers/day on 5nm versus 6,000+ on immersion. Which is why everyone in the know has been increasingly concerned for years now. You can spend a million a month on lawyers to bankrupt the competition, but you can’t sue physics out of existence. And you can’t spend physics into submission either. Trust them. They’ve tried. It didn’t work.

Still, the technology to make “self driving” safe enough to not require human interaction (in other words, what would need to happen to make it useful, beyond driver aids that make up for our lack of attention rather than replace it) just isn’t there yet. When a system has a 0.5% false positive rate, that sounds good. So does one with a 0.5% false negative rate. But applied across the literally trillions of miles Americans drive every year, that would add up to a hell of a lot of accidents (rough math sketched below). And that’s not just a quote.

Sorry, but the concern just doesn’t sound valid to me. The study even points out that improvements in computing efficiency (pretty certain to occur) will eliminate this problem. So, yeah, automotive news doesn’t say it, but tech news sometimes does. And it is almost entirely based on faulty assumptions.
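Here’s that rough math on the 0.5% error-rate point from a couple of comments up. Every input is an illustrative assumption, especially how often a safety-relevant classification actually happens; none of these figures come from the study.

# Back-of-the-envelope on the 0.5% error-rate point above. All inputs are
# illustrative assumptions; the decisions-per-mile figure in particular is a guess.
miles_per_year = 3e12        # roughly the order of annual US vehicle-miles traveled
decisions_per_mile = 1       # assume one safety-relevant classification per mile
false_negative_rate = 0.005  # the 0.5% figure from the comment
misses = miles_per_year * decisions_per_mile * false_negative_rate
print(f"{misses:.1e}")
# ~1.5e10 missed detections per year under these assumptions -- even if only a tiny
# fraction of those lead to incidents, the absolute numbers get big fast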
