There are lots of “competitors” to the ESP32, but pretty much none of them except STM has the software for developers, which makes their offerings close to useless.
The only way to gain critical mass is relentless deep commitment to developer tools/libraries and documentation.
In many cases these competitors actually try to hide their documentation and keep secret how their chips work.
I’d be interested to hear if there are other chip vendors with software/documentation as impressive as Espressif’s, but I haven’t heard of any except, as I said, STM.
> gain critical mass is relentless deep commitment to developer tools/libraries and documentation
Which seems (IMHO) to be why the likes of Texas Instruments have failed. They've made a few attempts at jumping on the IoT bandwagon over the last decade and should have been in a perfect position to capitalize on the huge uptake, but it just didn't fit their profit-margin goals, I'm guessing.
Honestly speaking, RISC-V is so far a turn-off when choosing it for a design.
ARM has by far the best tooling in the industry, which is more open than not; the baseline is built on the GNU toolchain, with only the fancier debug tools being paid/locked down.
RISC-V has not yet benefited from its openness as far as tooling goes.
However, RISC-V would still be infinitely better than Cadence, and them wanting $100k for a basic (and terrible) debugger with coresight-like functionality.
It just means it isn't made for hobbyists... Hardware companies don't care about the lack of good software - they just pay some embedded code monkey a lowish salary to 'just make it work'.
It's a pretty uncharitable description, but yeah. There are only two types of companies that do this though: companies where they're working at such a scale that spending a few extra months to years of developer time doesn't make a dent compared to the BOM savings. You're more likely to get vendor support by name recognition at these kinds of companies, so it's not as bad as the hobbyist experience. The second kind is where either the project or the company can't afford the capital outlay for better documented chips (or support packages). Stay the heck away from these if you value your sanity or know your own worth.
Sadly no - apart from a very tiny number of silicon valley companies, everyone else in embedded tech seems to pay the people who write verilog and assembly the same as the people who make the schematics and the mechanical designs. That's sometimes only 40% of what people who write python and nodejs get in the same city...
I think that's partly due to market forces and the value produced per developer. A few webdevs can write software that enables an entire web company in a few months. Due to the poor state of the embedded development ecosystem, the same is not true in hardware, compounded by the time/effort required to work around bugs in the hardware plus working in error-prone C. Then embedded developers/companies don't share workarounds via open source, so every team must spend time working around already-solved issues. Personally, I'm using the ESP32 largely because of the open-source SDK, which didn't require an NDA or some crappy IDE, and then I ported Nim to it. But normally it might take 2-3x more embedded devs to create products with profits comparable to what similar devs generate in pure software.
If you're doing automation of some kind in the bay, embedded has roughly comparable payscales to other dev jobs. Beyond that, no. You'll generally be paid slightly less than webdevs (but still far more than most engineers make).
I have been involved in embedded systems for more than 25 years now, and I barely do "pure" embedded development these days. That is, projects that comprise only designing a specific PCB, or writing the firmware drivers and top-level application for it.
Except for a couple of years I have resided mostly outside of California, but when doing consulting I have mostly engaged with companies in the Bay Area.
In the past when I did pure embedded development I had to fight tooth and claw to get rates close to $100/hour, while at the same time iOS app developers or webdevs were starting at $120-$140/hour easily, even after the mobile app craze. And that's despite taking into consideration that I am both a hardware and a software engineer. Rates outside of California were much lower ($45-$70/hour at the time), which was one of the reasons I pushed hard to find my clients in the Bay Area instead.
These days, since I have more experience and business contacts, I have diversified into more complex projects that have embedded components and pay better since they belong to regulated industries. Even now I work with full-stack software programmers in AWS/GCP cloud apps, React/Vue frameworks, modern databases, and connected middleware who make around $200/hour. For comparison, in recent years I have been involved in the development of a medical device where I did the hardware design (MCU/FPGA, signal acquisition), HDL/firmware programming (Verilog, RTOS, Linux, BLE), and technical host tools (C++, Python, Qt) and was paid $180/hour, and lately a mobile robot project where I did the hardware architecture, integration, and system control loops (MCUs), and prototyped the high-level software application (RT-Linux, ROS, Nvidia Jetson) for $150/hour. An acquaintance in the valley who was working on a similar project but entirely in the ML/CV stack was billing $270/hour. That may be a bit of an extreme example since ML is hot today, but nevertheless.
Is this the same in Europe or outside of the Bay Area? It really isn't fair, since you are doing the actual full stack (from F_freaking_PGA to C++/Vue on the host) while the ML type is invoking an amalgamation of libraries to do something with a 90% confidence level.
Basically, if there are more customers but smaller orders, the wasteful overhead of customers reverse engineering the product will grow. But then there's more incentive for HW companies to overtake the competition by offering better docs.
This. If it doesn’t work with the Arduino IDE/PlatformIO, I would rather not use it. What these manufacturers should do is invest in integrating with these SDKs instead of creating their own. And they should document everything to the level that IPFS is documented (just an example of excellent documentation that comes to mind). Do those two things and you’ve got a winner, even if the device itself isn’t technically the superior choice.
Because, to be frank, Arduino is a bit of a toy platform.
It is really good for beginners and for hobbyists to make embedded devices available to a wider audience, but nobody in their right mind would build an actual product on top of Arduino.
People trying to build "real" products on top of Arduino usually do so in spite of, not thanks to, the platform.
Once you start trying to build a polished experience, not a quick hack, you quickly run into limitations that are solvable by dropping to lower levels. I ran into this when I tried to make an ESP8266 temperature sensor - the Arduino stuff didn't have working power management, so it would heat up skewing the measurements, nor anything like threading or coroutines for network handlers, so a single stuck network request would block all others. Ironically, the Arduino port included a coroutines implementation under the hood, but used it in a silly way with a single coroutine. I rebuilt the thing on top of the ESP SDK, stole the thread switching code from Arduino, and used it to build a nice low power version that could handle up to 4 requests at once.
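The blocking problem described here is generic, not ESP-specific: when a single task services every network handler, one stalled request stalls them all. A minimal sketch of the fix in plain Python, with threads standing in for the SDK's tasks/coroutines (all names invented for illustration, and the 0.5 s join timeout is an arbitrary bound on how long we wait for a stuck request):

```python
import queue
import threading
import time

def slow_request(name, delay, results):
    # Stand-in for a network request that takes `delay` seconds.
    time.sleep(delay)
    results.put(name)

def serve_concurrently(requests):
    # One worker per request, so a stuck request no longer blocks the
    # rest -- the same idea as giving each handler its own coroutine.
    results = queue.Ueue() if False else queue.Queue()
    workers = [threading.Thread(target=slow_request, args=(n, d, results),
                                daemon=True)
               for n, d in requests]
    for w in workers:
        w.start()
    for w in workers:
        w.join(timeout=0.5)  # bounded wait: don't hang on a stuck request
    done = []
    while not results.empty():
        done.append(results.get())
    return sorted(done)

# A "stuck" request (2 s) alongside two fast ones: the fast ones still finish.
print(serve_concurrently([("a", 0.05), ("stuck", 2.0), ("b", 0.05)]))
# → ['a', 'b']
```

With a single sequential loop instead, the 2-second request would delay every request queued behind it, which is exactly the single-coroutine behavior described above.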
I don't think I've ever seen a well engineered product built on Arduino that wasn't trivial. They might've hacked on it until the user experience is good, but then you peek under the hood and it's clear the engineering isn't good. And that ends up creeping into the user experience in the end, or worse. (OnlyKey comes to mind, which is built on Arduino and also its authors clearly do not have the competence to be writing security/crypto code, but they seem unwilling to accept any criticism).
I know of a good production line that has seen improvements in efficiency and safety due to automation with Arduino. I’m sure there are better ways but in terms of bang for buck, it’s hard to see how a non-programmer could have done better.
On things I've personally worked on, I tend to just use the basic libraries supplied by the vendor, or even just implement everything myself from reading data sheets. This is obviously realistic mainly for very small and simple systems, which is what I've usually worked with.
Almost every consumer 3D printer out there runs on top of Arduino. Sure that’s a small market compared to all electronics. But it’s still a big market in absolute terms.
This was true three years ago. Plenty of 3D printers are now using ARM. The Monoprice MP Mini Delta has been out about two years now with a price of $175 US. Marlin 2.0 has support for ARM, I think it would be accurate to say "Almost every consumer 3D printer out there runs on top of Marlin", although there are a few alternatives like Smoothie.
That is going to put a dent in ARM's revenue. Even if that particular chip isn't a big seller, it means ST now has a drop-in replacement for ARM in their other SoCs. Soon people will realize that the ISA is often irrelevant; it's the peripherals, libraries, and tools that matter.
That's one opinion. Another ARM engineer who contributed one heck of a lot more to ARM's success (designed ARM7TDMI, Thumb, Thumb2) says RISC-V is great. See at 51 minutes in this https://www.youtube.com/watch?v=_6sh097Dk5k