> A diesel fuel tank for 400 litres of diesel weighs roughly 350 kg (the tank itself is relatively light; diesel is 0.84 kg/L). A battery pack storing equivalent energy would weigh on the order of 16 tonnes at current lithium-ion energy densities. That’s not just additional weight. It’s 16 tonnes of payload that disappears.
And yet, electric semis exist that don't carry 16-ton batteries. The fallacy is comparing thermal energy rather than useful energy: most of the diesel goes into heating the universe rather than moving the truck. Truck engines are relatively efficient, but they're still combustion engines. EV trucks are now a reality.
Mercedes-Benz's eActros 600, one of the flagship battery-electric long-haul trucks now in series production, uses three 207 kWh LFP battery packs for 621 kWh of installed capacity. Under realistic conditions it delivers about 500 km of range on a full charge at a 40-ton gross combination weight, and opportunity charging during driver breaks enables well over 1,000 km of daily travel. That's 4-6 tons of battery, not 16.
Volvo Trucks' current flagship, the FH Electric, has 360–540 kWh of batteries (four to six packs) and achieves up to ~300 km of range in typical heavy-duty operation. Its forthcoming FH Aero Electric long-haul variant has been announced with ~780 kWh of battery capacity targeting ~600 km of range. That's around 3 tons of battery.
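To put numbers on why the 16-tonne figure is misleading, here's a quick back-of-the-envelope in Python. All figures are rough illustrative assumptions (diesel energy content, engine and drivetrain efficiencies, pack-level energy density), not manufacturer data:

```python
# Battery mass needed to match the *useful* (at-the-wheels) energy of a
# 400 L diesel tank, rather than its raw thermal energy.
DIESEL_LITRES = 400
DIESEL_KWH_PER_L = 9.8        # assumed thermal energy content of diesel
ENGINE_EFFICIENCY = 0.42      # assumed best-case truck diesel efficiency
EV_DRIVETRAIN_EFFICIENCY = 0.90
PACK_WH_PER_KG = 160          # assumed pack-level LFP energy density

thermal_kwh = DIESEL_LITRES * DIESEL_KWH_PER_L        # ~3900 kWh in the tank
wheel_kwh = thermal_kwh * ENGINE_EFFICIENCY           # ~1650 kWh actually moves the truck
battery_kwh = wheel_kwh / EV_DRIVETRAIN_EFFICIENCY    # battery needed for the same range
battery_kg = battery_kwh * 1000 / PACK_WH_PER_KG
battery_tonnes = battery_kg / 1000

print(f"thermal energy: {thermal_kwh:.0f} kWh")
print(f"useful energy:  {wheel_kwh:.0f} kWh")
print(f"battery needed: {battery_kwh:.0f} kWh, roughly {battery_tonnes:.1f} t")
```

Even matching the full ~1,300 km diesel range comes out around 11 t, not 16, and real trucks don't size packs for that: they size for ~500 km plus break charging, which is how they land at 3-6 t.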
The battery weight comes at the cost of payload, though the EU allows an extra 2 tons for zero-emission trucks. And a lot of trucks aren't fully loaded anyway. Also, the weight limits have a lot to do with safety issues around diesel trucks and their brake systems, issues that electric trucks largely avoid. Regenerative braking and lots of torque at low speeds mean they could safely move more weight than is currently allowed. And adding more axles to distribute the weight can address road damage concerns.
With mandatory 45-minute breaks every 4.5 hours, trucks can just top up as needed. With normal truck driver hours that's 1 or 2 breaks in a working day. There's a growing number of chargers all over Europe, and these trucks routinely drive everywhere from Scandinavia to Iberia to the Balkans and everything in between. There are of course still many places where more/better chargers are needed, but these ranges are usable and practical enough that you can get loads from A to B in most of Europe with only minimal charging-time delays relative to diesel trucks. It's early days and charging infrastructure is rapidly improving. But the point is that electric trucks work just fine today. There are no fundamental load or distance limitations here, though more infrastructure is of course needed to scale.
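A quick sanity check on the break-charging arithmetic. The charger power is an assumption (MCS targets ~1 MW, but common CCS truck chargers today deliver more like 350-400 kW), and the consumption figure is roughly what the eActros numbers imply (621 kWh / ~500 km):

```python
# Range recovered during one mandatory driver break at an assumed charge rate.
BREAK_MINUTES = 45
CHARGER_KW = 400              # assumed average charging power during the break
CONSUMPTION_KWH_PER_KM = 1.2  # assumed heavy-truck consumption at 40 t

energy_added_kwh = CHARGER_KW * BREAK_MINUTES / 60          # kW * hours -> kWh
range_added_km = energy_added_kwh / CONSUMPTION_KWH_PER_KM  # kWh / (kWh/km) -> km

print(f"one {BREAK_MINUTES} min break at {CHARGER_KW} kW adds "
      f"~{energy_added_kwh:.0f} kWh, i.e. ~{range_added_km:.0f} km of range")
```

So each mandatory break roughly buys back half a charge's worth of driving, which is how ~500 km of nominal range turns into 1,000+ km days.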
Lighter batteries will make trucks slightly more efficient, but price and longevity matter much more. Sodium-ion, with a lifespan of well over a million miles, looks like it could revolutionize trucking over the next decade. LFP is already in common use today. NMC is lighter but has a shorter lifespan.
I started adding CLIs for a few things last week. Initially just for myself, but it didn't take me long to figure out that codex / claude code / etc. are pretty good at figuring out CLIs as well. And at creating them. If you have APIs, generating a usable CLI for them is pretty straightforward, with lots of nice features, documentation, bash/zsh autocomplete support, and other bells and whistles. Doing that manually is a lot of repetitive work; having it generated doesn't have to take much time at all.
The combination with skills is where it really shines, and you can generate those as well for your shiny new CLI. Once that's in place, you can drive your API agentically to do non-trivial things with it.
One of my OSS projects, jillesvangurp/ktsearch, now has such a CLI. Ktsearch is a Kotlin multiplatform library for Elasticsearch and Opensearch. The new CLI compiles to JVM and native Linux/Mac binaries. I've been playing with this for the last week and adding a few features. It's very nice to have around if you deal with Opensearch/Elasticsearch clusters. No more messy curl commands and JSON blobs.
And I've gotten codex to use it for me for a few things already.
I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
Threading is hard, especially when threads share a lot of state. Memory management with multiple threads sharing data is hard and ideally minimized. What's optimal also depends heavily on the type of workload: not all workloads are IO-bound or require sharing a lot of state.
Using threads for blocking IO on server requests was popular 20 years ago in e.g. Java, but these days non-blocking IO is preferred for both single- and multi-threaded systems. Elasticsearch, for example, uses threading and non-blocking IO across CPU cores and cluster nodes to provide horizontal scalability for indexing. It sticks to just one indexing thread per CPU core, of course, but it has additional thread pools and generally more threads than CPU cores in total.
A lot of workloads where the CPU is the bottleneck but there is some IO benefit from threading: other threads can progress while one waits for IO. If the amount of context switching can be limited, that can be fine. For loads that are embarrassingly parallel with little or no IO and very limited context sharing, one thread per CPU core tends to be optimal. It's really when you have more threads than cores that context switching becomes a factor, and what's optimal there depends very much on how much shared state there is and whether you're IO- or CPU-limited.
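The "one worker per core for embarrassingly parallel work" pattern can be sketched in a few lines. Python here (processes rather than threads, because of the GIL), but the sizing principle is the same in any language; the function names are illustrative:

```python
# One worker per core for an embarrassingly parallel, CPU-bound job:
# independent chunks, no shared state, no oversubscription.
import os
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    # Pure CPU work on an independent chunk.
    return sum(i * i for i in chunk)

def parallel_sum_of_squares(n, workers=None):
    workers = workers or os.cpu_count()  # one worker per core
    # Strided partition: each worker gets a disjoint slice of 0..n-1.
    chunks = [range(start, n, workers) for start in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(100_000))
```

With more workers than cores you'd just pay for context switching here, since no worker ever blocks on IO.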
In general, concurrency and parallelism tend to be harder in languages that predate widespread threading and multi-core CPUs and lack good primitives for them. Python only recently started addressing the GIL obstacle, and a big motivation for creating Rust was just how hard this stuff is in C/C++ without creating a lot of deadlocks, crash bugs, and security issues. It's not impossible with the right frameworks, a lot of skill, and discipline, of course. But Rust has earned a well-deserved reputation for being fast and safe for this kind of thing. Likewise, functional languages like Elixir are more naturally suited to running on systems with lots of CPUs and threads.
> I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
To further muddy the waters: if your process is not bottlenecked at the CPU, a modern chip may be more efficient in terms of power draw (directly, and through secondary effects like reduced cooling needs) running at a fraction of its top speed. Moving at a low clock that's fast enough not to become the bottleneck, instead of bursting to full speed and then waiting, can be the optimum.
Of course there are a bunch of chip-specific optimisations here if you like complexity. Some chips are better off running all cores slowly; others, which can completely power down idle cores, are better off running a few cores faster, to optimise power use while getting the same job done in the same wall-clock time.
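The trade-off is easy to see in a toy model. This is a deliberately simplified sketch, not a real chip: it assumes dynamic power scales ~f³ (voltage rising roughly with frequency) and a constant idle power, both made-up numbers:

```python
# Toy model: energy to finish a fixed job before a deadline at clock `freq`.
# Dynamic power ~ f^3 while busy; a constant idle power burns while waiting.
def job_energy(freq, work=1.0, idle_power=0.2, deadline=1.0):
    busy_time = work / freq           # faster clock -> less time spent busy
    assert busy_time <= deadline, "clock too slow to meet the deadline"
    dynamic = freq ** 3 * busy_time   # race-to-idle pays f^2 per unit of work
    idle = idle_power * (deadline - busy_time)  # then idles until the deadline
    return dynamic + idle

burst = job_energy(freq=2.0)   # burst to full speed, then wait
steady = job_energy(freq=1.0)  # just fast enough to meet the deadline
print(f"burst: {burst:.2f}, steady: {steady:.2f}")
```

In this model slow-and-steady wins; crank `idle_power` up (a chip that can't power-gate idle cores) and race-to-idle starts looking better, which is exactly the chip-specific part.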
>"just how hard doing this stuff is in C/C++ without creating a lot of dead locks, crash bugs, and security issues"
In my opinion this is mostly a problem for novices, or for people who only know how to program inside a very limited and restrictive environment. I write multithreaded business backends in modern C++ that accept outside HTTP requests for processing and do some heavy math lifting. Requests expected to take a short time are processed immediately; long-running ones go to separate thread pools, which also manage throttling of background tasks, etc.
I did not find any of it particularly hard. All my "dangerous" stuff is centralized, was debugged to death years ago, and is used and reused across multiple products. Stuff runs for years and years without a single hiccup. To me it is a non-issue.
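The dispatch pattern described above (short requests inline, long-running ones to a separate, throttled pool) looks roughly like this. Python rather than C++ for brevity; the names, pool sizes, and throttle limit are all illustrative:

```python
# Short requests run inline; long ones go to a bounded worker pool, with a
# semaphore throttling how much background work can pile up.
import threading
from concurrent.futures import ThreadPoolExecutor

LONG_POOL = ThreadPoolExecutor(max_workers=4)   # separate pool for slow work
LONG_SLOTS = threading.BoundedSemaphore(8)      # at most 8 long tasks in flight

def handle_request(task, expected_long=False):
    if not expected_long:
        return task()                           # short: process immediately
    if not LONG_SLOTS.acquire(blocking=False):
        raise RuntimeError("busy, try later")   # throttled: shed load
    def run():
        try:
            return task()
        finally:
            LONG_SLOTS.release()                # free the slot when done
    return LONG_POOL.submit(run)                # long: hand off, return a future

quick = handle_request(lambda: 2 + 2)
slow = handle_request(lambda: sum(range(100_000)), expected_long=True)
print(quick, slow.result())
```

The point of centralizing it like this is exactly what the comment says: the locking and lifetime handling live in one well-tested place instead of being re-derived per feature.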
I do realize that the situation is much tougher for those who write OS kernels, but that is a very specialized skill and they would know better what to do.
A key difference is that it sounds like you need to create and otherwise interact with that sort of code on a regular basis.
Most devs spend most of their time, sometimes all of it, on tasks that are either naturally sequential or don't benefit enough from threading over the safer option of multiple independent processes. So when they do come across a problem that is inherently parallelizable and needs the highest performance, it's not a familiar situation for them. Familiarity can make some rather complex processes feel simple.
The same can be said for event-loop-driven concurrency. For those who don't work that way often, the collection of potential race conditions there can feel daunting, so they appreciate their chosen platform holding their hand a bit.
I checked yesterday. The cheapest VM I could get from them was something like 25 euros/month. The one I get from Hetzner was 6/month and will now be 8/month. That's a 3x difference. A little cheaper than GCP/AWS, but not a whole lot. I went with Hetzner based on that, as I'm trying to reduce an 800 euro/month Elastic Cloud + GCP bill to < 100/month. Even with the price increases, I should get below 100/month.
I just started the process of migrating to them yesterday. They are still very affordable, just a bit less so. I'm estimating that our quite lean GCP setup cost will be cut to about 20-25% when I'm done. So it doesn't change the decision I made to go with them, literally yesterday morning.
It's all a bit barebones and primitive, but I don't mind. I spent yesterday tweaking some ansible scripts with codex to set up stuff like bastion hosts and NAT networking. I expect to have most of the rest ready in a few days.
The benefits of having an uncomplicated docker compose and boring tech stack. No microservices. Just a monolith.
One issue I don't have a solution for yet is disk encryption and encrypted bucket content. Probably solvable, but not natively supported. It might trigger compliance issues with some of our customers.
I always found that compliance issue with encrypted drives a bit funny.
The provider has the keys.
So the drive encryption has no practical application.
The drives in, say, GCP aren't even real drives; the blocks are chunked over a distributed pool of storage. You can't just grab a drive and walk away with an OS or a data volume, you'd just get random junk. So what's the encryption going to do?
I guess it's harder to attach your drive to someone else's VM, but ultimately, since the provider has the key, it doesn't actually change anything there either, except that you need another API call to attach a drive and maybe there are different permissions on your drive's key than on the drive itself?
idk, it feels like weird theatre that the providers get away with because they're so big; there's no practical way of even checking whether they're following through on drive encryption. So it really is "here, you can input a secret key that you choose, we promise to use it *wink*".
Totally absent any verifiable outcome, or actual threat model.
You're one of those that's going to laugh at the idea of "verifiable cloud" that Apple and others are pushing? How dare you stand in the way of AI everywhere!
A lot of programming language preferences are based on the assumption that people will be using them. As soon as it's LLMs using them, a lot of what motivates those choices becomes less valid.
I've been doing a few projects that are definitely outside my comfort zone with LLMs and it's fine. I can read the code, but I just don't have the muscle memory to produce it.
Translating software that has a lot of tests is easy for LLMs. I think we'll be seeing a lot more of that in the coming years, but it will take some time for people to build up trust in these tools. Good test harnesses are a key enabler.
The inevitable cleanup that will follow this could be done the same way. Refactoring like that can be done in more bite sized chunks, which makes it easier to review what is happening and control how it is done.
Japan has a lot of potential for wind and geothermal power. And much of it isn't too bad for solar either.
The madness with hydrogen in Japan is that they produce most of it from imported LNG. If they solved domestic clean energy, they'd have no need for hydrogen in transport. EVs are a lot more efficient than hydrogen vehicles, so they'd need a lot less clean energy to power them.
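The efficiency gap is easy to quantify. The stage efficiencies below are ballpark assumptions (commonly cited ranges, not measured data), but the conclusion is robust to the exact numbers:

```python
# Rough well-to-wheel comparison: how much of the input electricity
# actually reaches the wheels on each path.
def chain(*efficiencies):
    out = 1.0
    for e in efficiencies:
        out *= e
    return out

# Hydrogen: electrolysis -> compression/transport -> fuel cell -> motor
h2 = chain(0.70, 0.85, 0.55, 0.90)
# Battery: charging -> battery round trip -> motor
ev = chain(0.95, 0.92, 0.90)

print(f"hydrogen: ~{h2:.0%} of input electricity reaches the wheels")
print(f"battery:  ~{ev:.0%}")
print(f"hydrogen needs ~{ev / h2:.1f}x the clean generation for the same driving")
```

Under these assumptions, roughly 30% vs 80%: a hydrogen fleet would need well over twice the clean generation capacity of an EV fleet doing the same driving.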
Japan is slowly and belatedly figuring out that physics and economics just won't favor hydrogen, ever. The Mirai is an exercise in futility; it has never made any economic sense whatsoever. Toyota at this point grudgingly produces more EVs per quarter than it ever produced hydrogen vehicles in total. They only sell a few hundred per year at this point. The only reason they still make them at all is that they're subsidized to do so.
EV depreciation is a very different beast. Basically, EVs are still being sold at a higher price point than their actual cost justifies in some markets. Part of that is manufacturers being a bit behind on cost cutting, and part is the market incentivizing selling vehicles at inflated prices.
If you strip that away, you get to the more reasonable price points already becoming common across Asia, Australia, and even the EU market right now. There you can find reasonably priced new vehicles at around 25K euros or even below 20K. A few years ago those vehicles didn't exist, and ASPs were closer to 40-50K for a cheap one. So the second-hand value of those older vehicles has indeed depreciated enormously, because they simply aren't worth as much relative to the much cheaper newer generation of cars. They were obsoleted by a better and cheaper generation.
Hydrogen cars, by contrast, are sold at a loss. They always have been. That's why Toyota, hydrogen's biggest proponent, now sells more EVs pretty much every quarter than they ever built hydrogen cars in total.
The better/cheaper generation of hydrogen cars never materialized, and probably never will. The hydrogen distribution network never happened either, because as it turns out, making hydrogen is really expensive. So aside from a few heavily subsidized filling stations, the economics are so terrible that they tend to shut down as soon as the subsidies run out. That's why hydrogen cars are relatively worthless second hand: you're better off buying a second-hand EV, and since those have depreciated a lot, hydrogen cars simply can't be worth more.
And since there is no realistic prospect of ever producing hydrogen cars or hydrogen at price points that can match those of EVs and electricity, hydrogen-based transport is at this point dead as a doornail.
I've been staring at the same space for a while from the point of view of needing to integrate our user facing application with various hardware platforms provided by partner companies specializing in mostly RTLS solutions.
Because these companies are almost universally not very good at software (bad UI/UX, weird bespoke SDKs, lots of proprietary components, etc.), there's a lot of wheel reinvention, integration issues, etc. And the worst part is that none of this stuff delivers any value until you build typically very bespoke services on top of it. The software they ship is effectively low-level middleware: a necessary evil that the end user doesn't care about.
In my view, what's lacking here is good open source components and standardized protocols, and out-of-the-box software experiences that deliver real value using those. Companies default to building walled gardens and low-level SDKs, and they just aren't very good at it. It's a mess of low-quality, low-level software and a lot of messy integration projects. You can't order any of this stuff on Amazon and expect it to be plug and play.
Ironically, the home automation market is much more mature than industry at this point. You even have some interoperability for devices, protocols, etc., and some pretty decent open integration platforms, slick mobile applications, and so on. That does not exist for large-scale industrial/business usage outside a few narrow verticals/niches (e.g. fire alarms, AGV tracking in automotive). Consumers are much more critical and less forgiving than companies when it comes to buying stuff: if it doesn't work, they'll return it.
Something like the Apple AirTag is science fiction for asset tracking in an industrial context. A polished, easy-to-use end user experience that works out of the box simply does not exist. It's easier to track your luggage than it is to track expensive machinery across a supply chain.
The widespread software incompetence is holding back the IoT industry as a whole. They all talk big about topics like ESG, energy savings, smart buildings, fancy tracking, etc. But when you pick it apart, it's a bunch of chips in a fugly 3D-printed housing with some MQTT and Grafana on the side. They throw it over the fence with some proprietary SDK, and good luck building anything useful with it. You're typically looking at expensive integration and consulting projects just to connect some hardware thingies to some ERP thingies.
We're trying to fix this in our company; just so we can remove the friction of getting companies started with our software solutions. It's hard poking through all the BS in this industry.
The core issue is that the user/developer experience of industrial IoT is nowhere near where it should be. I understand where you're coming from, and I feel the same way. Building a great developer experience is something we deeply care about; exactly for that reason, we started building our own Device SDKs.
We also have support for open protocols such as MQTT and HTTP+SSE, but the Device SDKs enable us to provide a richer set of capabilities. Our SDKs actually speak a custom protocol we developed for higher efficiency. We're also going to add many more features such as automatic telemetry collection and tracing support, which is more feasible with a plug-and-play SDK.
Another big issue you pointed out is documentation; a key part of developer experience is always great docs. A compelling model might be standalone open source tooling that works independently, with an integrated platform that ties it all together, creating a strong ecosystem.
I've been using Home Assistant with a bunch of Zigbee and Wifi devices at home, and it's been pretty stable. However, for an industrial context, there are already many other hurdles, having a platform handle a lot of the cloud infra and connectivity & monitoring is really helpful.
The problem with custom protocols and SDKs is vendor dependence. When you go out of business or pivot, what happens to my product? I was already burned once by Google cloud IoT...
You’re completely right. Vendor lock-in and platforms shutting down are real risks. We also used Google IoT Core in our previous startup and would have been burned by its shutdown as well.
In fact, the idea for a unified IoT platform came from dealing with the complexity of setting up so many different Google Cloud services just to get data ingestion working.
I think a healthy balance between open source and commercial platforms is possible. We want to compete on reliability, UX, and features while building open device-side tooling and protocols that give users the ability to switch or self-host if they choose. We’re far from that today, but it’s the direction we want to pursue.