This is an amazing talk, and I think I have seen it on HN before. It lays out the problem of operating systems on modern computers well. I'm not sure it offers the right solution, though: it responds to the problem of the "operating system" abstraction by proposing a system with the same set of abstraction layers.
I think the folks at Oxide are probably closer to the right track by silo-busting the BIOS, OS, and hypervisor layers of a modern VM-hosting stack.
Edit: I should also add that this talk lays out a huge, gaping hole in the field of OS research, which might be its most important contribution.
But it's not an OS problem; the OS being written that way is only a symptom, because the OS can't touch the real hardware.
Cantrill and Roscoe both pointed at the SoC vendors: if there's a new problem to solve, they add more proprietary, undocumented cores and enclaves with tightly held secret functions and OSes of their own, all of which are invisible to the OS. Some of them are even designed at odds with the user's interests, such as DRM goop.
This gets back to the war on general-purpose computing, as Cory Doctorow put it. The hardware is ceasing to work in the user's interest and is starting to work for everyone else in the stack, against the user.
I have a background in digital circuits and computer architecture, and I completely disagree with you that these SoC components should be run by the traditional CPU cores. Most of these cores have strict latency guarantees or security boundaries that are much harder to meet when you are sharing silicon with the general-purpose code running on the machine, and since the dedicated cores are so small, moving their work onto the main CPU would not meaningfully save silicon.
OSes are generally terrible at providing strict latency guarantees. If not doing [X] every 10 microseconds will crash the CPU, you should not be asking the operating system to do [X]. This is the case for audio systems, radio systems, PCIe, and almost every other complicated I/O function. These could all be done with hardware state machines, but it is better (cheaper) to use a small dedicated CPU instead.
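As a rough illustration (my own sketch, not from the talk or any particular chip), this is the shape of the loop such a dedicated core runs. The register addresses, the tick flag, and the FIFO are hypothetical; the point is that nothing else is scheduled on this core, so the 10-microsecond deadline actually holds:

    #include <stdint.h>

    /* Hypothetical memory-mapped registers, for illustration only. */
    #define TIMER_FLAG   (*(volatile uint32_t *)0x40001000u) /* 10 us tick flag   */
    #define FIFO_STATUS  (*(volatile uint32_t *)0x40002000u) /* FIFO status       */
    #define FIFO_DATA    (*(volatile uint32_t *)0x40002004u) /* FIFO data port    */
    #define FIFO_LOW     (1u << 0)                           /* "nearly empty" bit */

    static uint32_t next_sample(void)
    {
        /* Placeholder for whatever the block actually needs every period. */
        static uint32_t phase;
        return phase++;
    }

    void firmware_main(void)
    {
        for (;;) {
            while (!TIMER_FLAG) {
                /* Busy-wait on the hardware tick: no scheduler, no page
                 * faults, no other tenant on this core. */
            }
            TIMER_FLAG = 0;                 /* acknowledge the tick */

            if (FIFO_STATUS & FIFO_LOW)     /* refill before the consumer runs dry */
                FIFO_DATA = next_sample();
        }
    }

Try writing the equivalent as a user-space thread under a general-purpose OS and the deadline becomes a statistical hope rather than a guarantee.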
When I refer to security boundaries, a lot of people think "security through obscurity" is the idea. It is not. It is more like closing all the ports on a server and applying "API security" ideas to hardware. It is a lot easier to secure a function that has a simple queue interface to the main cores than one that shares the main cores - Spectre and Meltdown have shown us that the latter might be impossible. Yes, these secure enclaves are used for DRM crap and other nonsense, so I can see why you might not like the existence of that core, but even if you erased the DRM software from it and made it work entirely for the user, you would still want the boundary.
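To make the "simple queue interface" concrete, here is a hedged sketch (my own, not any vendor's actual API) of a mailbox between the main cores and a dedicated core; the struct layout and the doorbell register are made up for the example:

    #include <stdint.h>
    #include <string.h>

    #define MAX_PAYLOAD 64u

    struct mailbox_msg {
        uint32_t opcode;                 /* small, enumerable command set   */
        uint32_t len;                    /* payload length, bounded below   */
        uint8_t  payload[MAX_PAYLOAD];   /* copied, never shared by pointer */
    };

    #define DOORBELL (*(volatile uint32_t *)0x50000000u) /* hypothetical doorbell */

    /* Host side: everything crosses the boundary by value, so the attack
     * surface is this one narrow message format, not shared silicon. */
    static int mailbox_send(volatile struct mailbox_msg *slot,
                            uint32_t opcode, const void *buf, uint32_t len)
    {
        if (len > MAX_PAYLOAD)
            return -1;
        slot->opcode = opcode;
        slot->len = len;
        memcpy((void *)slot->payload, buf, len);
        DOORBELL = 1;                    /* tell the dedicated core to look */
        return 0;
    }

    /* Dedicated-core side: copy locally and validate everything; the host
     * is untrusted by construction. */
    static void mailbox_service(const volatile struct mailbox_msg *slot)
    {
        struct mailbox_msg local;
        local.opcode = slot->opcode;
        local.len = slot->len;
        if (local.len > MAX_PAYLOAD)
            return;                      /* reject malformed requests */
        memcpy(local.payload, (const void *)slot->payload, local.len);
        /* ... dispatch on local.opcode against a small, audited command table */
    }

Auditing an interface like this is tractable in a way that auditing every instruction the main cores might speculatively execute is not.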
Not to mention that every modern server motherboard has a baseboard management controller, which controls power and resets and therefore cannot be part of the CPU.
From a hardware perspective, these SoCs and motherboards really need to be heterogeneous systems. It's up to the system software to work with that. Heterogeneous SoCs really have nothing to do with taking power away from the user. The user can program all of these cores, but we live with abstractions that make it very hard.
It's fine to have many smaller CPUs in the system, for all the purposes you state. The software that runs on them needs to be open, though, and something we can understand, fix, or even replace completely as needed. The operation of the components also needs to be documented so that it's possible to do that not just in principle but in practice.
We've added several small cores of our own to the systems that make up the Oxide rack, but, critically, we control them with our own (open source) software stack. The large-scale host CPUs where hypervisor workloads live can communicate with those smaller cores (service processor, root of trust, etc.) in constrained ways, across boundaries based on security and responsibility.
It helps a lot that you are working with server-style computing, where you can realistically do this.
Things like audio and radio functions (e.g. Bluetooth and Wi-Fi) have algorithms that are highly proprietary and often patent-encumbered. The hardware architectures for radios are similarly weird and proprietary. That kind of thing would end up as a binary blob (at best) even in a fully open-source environment.
Hot take: I don't mind if Dolby or a Wi-Fi chipset vendor hides its software from me, as long as it has a very strictly defined role and a very narrow I/O interface.
There is another area that computer scientists are not studying much: new architectures for computers. Is von Neumann the best and only option? It seems so-called computer scientists are not really studying computers at all; they are studying applications. The fundamental science of computers is the hardware architecture and the operating system, and computer science almost completely ignores basic research in both of them.
It is also a worry that the hidden code in the SoC's subsystems might be hidden for a reason, namely to give control of computers to someone other than the user and the OS. That seems a perfect way to compromise all computers while giving an illusion of security. That's why the Intel Management Engine has been controversial (TPMs too), but in truth there are many processors on these SoCs, each with enormous security implications, that the OS does not control. This has been an issue for decades - I remember people running code on the floppy drive MCU on Amigas - so it has been known about for a long time. Cell phones are intentionally designed so the OS has no control over basic cellular radio functionality: there is a totally separate processor, with its own non-public firmware, controlling the radio. Whoever writes the code for these subsystems has enormous power with very little oversight.
How different is this really from the fact that hardware designs themselves are proprietary? You can do computing without (what we commonly call) software, and much hardware does exactly that - so even if 100% of the software running on your system were open source, that doesn't mean your system is not compromised. You can very easily build a hardware keylogger and have it send the password over a cable (sending it over the internet through pure hardware would be quite hard, but modulating a signal shouldn't be). Note that even if the hardware design were open, you would still have a hell of a problem trying to check whether an IC actually matches the open design.
The only real solution to these problems is regulation and trust. There is just no realistic way to check that your system is not doing some of the things you don't want it to do. Instead, you have to go to the source and make sure you can trust the vendors you buy from, and for that to work, they themselves have to have hiring, shipping, and similar practices that allow this type of trust.
https://youtu.be/36myc8wQhLo
And yes, this is (as ever) a call to arms on fixing a massive security problem we all have.