I know this is kind of a stupid little thing, but I really wish they'd rebranded more completely if they were going to do this. "That box has a MIPS processor in it. But not a MIPS processor, a MIPS RISC-V processor." We're going to end up with OSes listing things like MIPS, MIPS64, and MIPS-brand RISC-V on their "supported platforms" pages.
To be fair this isn't an announcement of a product or anything like that, so their line of RISC-V processors could have distinct branding e.g. FooBar RISC-V Processors (by MIPS).
Totally agree though that they should probably be careful and put some thought into how they brand these.
Those are versions of the same(ish) ISA and are (usually?) backwards compatible. I'd mind less if MIPS-the-company made CPUs that were RISC-V with a MIPS-the-ISA compatibility mode, or otherwise actually had continuity; it's the apparent plan to throw away the ISA but keep the name that's jarring.
Cortex-M0+ is ARMv6-M. ARMv8 also has an M variant; Cortex-M33 is ARMv8-M Mainline. ARMv6-M and ARMv8-M have quite a lot in common, as v8-M is a superset of v6-M. Or did you mean ARMv8-A? Well, ARMv8-A and ARMv8-M are intentionally not compatible, as the A variant is primarily 64-bit (some implementations support AArch32 though) and support for the Thumb instruction set has been dropped.
MIPS had their chance with MIPS Open and bungled it. If they hadn't, then there wouldn't have been the same need for RISC-V.
It's a shame because MIPS had a perfectly usable architecture with 64-bit support and a well supported toolchain.
It's sad to see it abandoned because it was historically one of the first successful RISC architectures that was used in everything from the DECstation to the PlayStation.
> MIPS had their chance with MIPS Open and bungled it. If they hadn't, then there wouldn't have been the same need for RISC-V.
I would, someday, like to hear a history of how we got to RISC-V being needed. MIPS never actually made it to being open source, but I thought SPARC[0] and POWER[1] both did make it, quite a long time ago now. J-core is a thing[2], although that may have happened too late, and OpenRISC[3] is apparently decades old now but just... never went anywhere? I'm probably missing context, but it seems like we should have had at least 2 and possibly as many as 5 open source processor designs by the time RISC-V took off. And now it has taken off, the ecosystem has latched onto it, and it will almost certainly come to dominate, but I still don't understand why it was needed at all or why it succeeded when everything else failed.
[1] https://en.wikipedia.org/wiki/OpenPOWER_Foundation - possibly not actually FOSS like RISC-V; unclear to me whether IBM is just using "open" to mean "we share with other companies who are working with us"
[2] "J-core is a clean-room open source processor and SOC design using the SuperH instruction set, implemented in VHDL and available royalty and patent free under a BSD license." https://j-core.org/
SPARC was an open standard, but at first only as a 32-bit version, with 64-bit updates coming later. And at that point, interest in open source and open hardware simply didn't exist to the same degree. In the 'modern' period, nobody has any interest in working with something Oracle controls.
POWER was always 'fake open'; it was basically a marketing vehicle, and there were not actually open chips being built with it. Only in response to RISC-V did some of that change.
SuperH was still under patent when RISC-V started, and it was simply much less well known as well. By the time it was actually going, RISC-V was already happening and had far more momentum, and RISC-V was 64-bit from the beginning.
As with so many things, Sun could have changed the world much more fundamentally had they understood open source better.
There was OpenRISC, but that was really more a chip design than an ISA, and it was also only 32-bit. OpenRISC was basically some students starting to implement a design from a David Patterson textbook. It was also under a license that people didn't like.
So really you needed an ISA that was designed to be architecture-independent, with both 32-bit and 64-bit variants, that was 'future-proof' in terms of license, design and so on, and, importantly, with actual working chips and softcores available for people to tinker with. Berkeley did that, and they taped out many chips while designing RISC-V. Other universities jumped on board, and ETH was producing and taping out cores. From that point it snowballed.
One could imagine an alternative future where SuperH takes off, universities adopt it, and Berkeley does a 64-bit SuperH or something like that.
I'm not a huge fan of register windows (an original SPARC – and Berkeley RISC I – feature) but IIRC MIPS does in fact have hardware interlocks. Branch delay slots aren't a big deal.
I don't see a huge technical need for RISC-V. As I understand it the main motivations were 1) it was supposed to be open/IP free 2) it was supposed to be small/modular and 3) it allowed Berkeley to do ISA research.
I consider MIPS to be simple enough for grad students to implement and fairly modular as well, with the added advantage that it was relatively complete and mature at the time. It could have continued to be improved as well. The main benefits I see to RISC-V are non-technical: IP licensing and current popularity/trendiness/support in both research and industry.
(As a side note: for extreme simplicity and pedagogic suitability, I rather like Niklaus Wirth's (very) similarly-named RISC5.)
Fair enough, but if the claim to fame over the current dominant ISA is that there isn't weird baggage (big deal or not), there is an opportunity for another new ISA without that baggage (important or not)...
It does allow you to do without a distinct "move reg1, reg2" instruction and instead build it from "add r0, reg1, reg2" (or OR, or EOR, r0 with reg1 into reg2). I guess one could synthesize a two's-complement NEG instruction by just SUBing the positive value from r0, and so on.
For advanced implementations, r0 can also be used without considering register dependencies, since it discards all writes and always reads 0. Your "xor r0,r0" idiom, by contrast, could stall a long pipeline if the preceding instruction depended on the old value of r0 in a calculation before the xor cleared it.
Lastly, I guess one has to see it in the light of its day: loading registers with 0 was, and probably still is, rather common at loop starts and so on, so it was deemed useful. Just like some FPUs have instructions to load constants like pi, e and ln(2), it seemed a good thing to spend a certain amount of transistors on something commonly used.
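For what it's worth, the synthesis trick above can be sketched in a few lines. This is a toy model (not any real ISA's encoding or register names), just to show how MOV and NEG fall out of ADD/SUB once one register is hardwired to zero:

```python
MASK = 0xFFFFFFFF  # 32-bit registers

class Regs:
    """Toy register file with a hardwired zero register at index 0."""
    def __init__(self):
        self.r = [0] * 32
    def read(self, i):
        return 0 if i == 0 else self.r[i]   # r0 always reads as zero
    def write(self, i, v):
        if i != 0:                          # writes to r0 are discarded
            self.r[i] = v & MASK

def add(regs, rd, rs1, rs2):
    regs.write(rd, regs.read(rs1) + regs.read(rs2))

def sub(regs, rd, rs1, rs2):
    regs.write(rd, regs.read(rs1) - regs.read(rs2))

regs = Regs()
regs.write(5, 42)
add(regs, 6, 0, 5)   # "MOV r6, r5" synthesized as ADD r6, r0, r5
sub(regs, 7, 0, 5)   # "NEG r7, r5" synthesized as SUB r7, r0, r5
print(regs.read(6))  # 42
neg = regs.read(7)
print(neg - (1 << 32) if neg & 0x80000000 else neg)  # -42 in two's complement
```

Real RISC-V pseudo-instructions like `mv` and `neg` expand the same way, against x0.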
> "xor r0,r0" would possibly stall a long pipeline
but the same advanced implementation that can break dependencies when reading/writing to r0 can do the same when handling xor r0,r0. Indeed that's exactly what many CPUs do.
I'm not qualified to say whether a zero register is better or not, but I will mention that it basically requires your ISA to be 3-operand, which has a cost in terms of instruction length: you save bits by having a smaller set of instructions, but need more bits to increase the number of registers.
RISCs are usually 3 operands anyway for many reasons, so in practice it doesn't make a difference I guess.
A fixed zero register is a trivial "increase in the number of registers". To be fair there's very little real-world code that even comes close to making use of 16 registers, let alone 32. So if you really wanted to optimize use of the encoding space, that would be something to focus on first.
Sorry for the confusion; by "increase the number of registers" I meant the arity of an instruction (3 operands for RISC vs the typical 2 for x86). So you need an additional ceil(log2(num_registers)) bits to encode the extra operand.
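A quick back-of-the-envelope check of that point (my own arithmetic, not from the thread): each register operand field needs ceil(log2(N)) bits for N architectural registers, so adding a third operand costs one more such field.

```python
import math

def operand_bits(num_regs: int, arity: int) -> int:
    """Total bits spent on register-operand fields in one instruction."""
    return math.ceil(math.log2(num_regs)) * arity

print(operand_bits(32, 3))  # RISC-V-style: 3 operands x 5 bits = 15
print(operand_bits(16, 2))  # x86-64-style: 2 operands x 4 bits = 8
print(operand_bits(32, 3) - operand_bits(32, 2))  # extra operand costs 5 bits
```

With 32 registers that extra operand eats 5 bits of a 32-bit instruction word, which is why encoding space is such a recurring argument in these threads.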
The author of my linked article also really didn't care for branch delay slots, which I am assuming was a carryover from the original pipeline that is long gone from modern designs.
I know that ARM removed many performance-adverse aspects in their 64-bit conversion (conditional execution, direct moves into the program counter), and I wonder how much was done for the 64-bit MIPS and SPARC instruction sets to make them more practical and less of a historical anachronism.
As I see it, it's not the best outcome for the computing world; evolving MIPS and SPARC and making them open seems to me like it would have been a better outcome. MIPS is still in widespread use after all.
The way I take Dave Patterson's point (ISAs should be open) is that it would be nice if ARM and x86 were open as well.
Hyperscalers make it easier to launch a new ISA - instead of convincing a lot of manufacturers that build diverse products with it, all of which you need to more or less manage into compatibility so that your software doesn't get diluted, now you need to convince one or two hyperscalers that your solution will offer the same capacity in less space for less power. That, of course, after some compiler enablement, but that's something every ISA needs to do. You can start by building a beefy server chip instead of a full lineup from thin laptops all the way up to beefy servers.
Still used in some consumer devices with older Mediatek (MTK) wireless SoCs and older Qualcomm (QCA) wireless SoCs. Qualcomm has moved onto arm uarch, in addition to largely shuttering their switch ASIC lineup (lots of MIPS in that product family). MTK seems to be moving in the arm direction as well.
Sometimes, we have vendors like Mikrotik who love the old QCA MIPS lineup and shove those ancient SoCs into everything they can.
But the older MTK MIPS chips still seem to find a lot of new hardware releases. I recently picked up a TPLink WiFi 6 AP because it used Mediatek wifi chips, which are well supported in the mainline kernel. Was a little surprised to see it still used a Mediatek MIPS SoC as the main glue between the various wireless chips.
If you see a WiFi 6 AP that only has WiFi6 on 5GHz, and WiFi4/n on 2.4GHz, a good chance it is using a MTK MIPS WiFi 4/n SoC + a MTK WiFi 6 PCIe IC, with the SoC providing 2.4GHz and 5GHz being provided by something like the MT7915 or similar. The Ubiquiti U6 Lite and U6 LR are examples of this, as are the Belkin RT3200 / Linksys E8450.
> Sometimes, we have vendors like Mikrotik who love the old QCA MIPS lineup and shove those ancient SoCs into everything they can.
Mikrotik don't really shove MIPS in anything new! Practically everything they've launched in the last few years is either ARM or ARM64. They seem to particularly love the IPQ-4018/4019 SoCs.
I agree, Mikrotik have largely moved on to the IPQ4000 series for their wireless products, and many of their advanced switches use the switch ASIC's onboard ARM core(s) (CRS305, CRS309, CRS317, CRS328, etc.) without an external management SoC.
However, some of their advanced switches (CRS312 12 10GbE RJ45, CRS354 48 GbE 4 SFP+ 2 QSFP+, CRS504 4 QSFP28, CRS326 variant with 24 SFP+ 2 QSFP+, etc) will often use a QCA9531 MIPS SoC as their management chip.
I was surprised to see their latest switch, the new CRS504 (4x 100GbE) [0] used a very advanced Marvell switch chip, with a QCA9531 attached to it. MIPS lives!
> But the older MTK MIPS chips still seem to find a lot of new hardware releases.
Yeah, the MT7621 is still a popular choice at the lower end. You'll see it bundled with a WiFi 6 radio sometimes too. You'll see the single-core MT7620 at the even lower end as well.
> If you see a WiFi 6 AP that only has WiFi6 on 5GHz, and WiFi4/n on 2.4GHz, a good chance it is using a MTK MIPS WiFi 4/n SoC + a MTK WiFi 6 PCIe IC, with the SoC providing 2.4GHz and 5GHz being provided by something like the MT7915 or similar. The Ubiquiti U6 Lite and U6 LR are examples of this, as are the Belkin RT3200 / Linksys E8450.
The U6-LR and E8450/RT3200 use the MT7622, which is dual-core ARM Cortex-A53's. The inbuilt 2.4ghz radio is essentially still a MT7615 block doing 4x4 11n though.
There's also the MT7986 ("Filogic 830"), which is essentially the same thing but with a 11ax capable radio instead. However I've only seen a couple products using it right now, with quick search reveals one only announced in the last few days too...
Edit: Well not really the same thing, it's quad-core, on a smaller process and has revised offload, but you get the gist...
What happened is that SGI dropped it, and then it got passed around from one company to another while it consistently lost more and more market share to ARM.
Until MIPS was mostly irrelevant as an arch.
So eventually the company realized that trying to build only on MIPS was not going to work, so they open-sourced MIPS itself, and now they are trying to use their knowledge to be part of the RISC-V ecosystem.
But since they were late to that game as well, I am skeptical.
Well, they didn't even do that, they just made a big announcement that they were opening up which as far as I could tell just meant they put a link to their sales staff on a web page rather than making you look them up.
That's not what mature means. Mature means that all the edge cases have been explored, all kinds of random numerical computations have been expressed, toolchains and debuggers exist in many forms and are well supported, the chips have scaled from dishwashers to rad-hardened satellites, etc.
I'm sorry, I am the GP who used the word "mature" up there. This is not what I meant. I was talking about there being a greater software community around RISC-V compared to MIPS these days. E.g. having kernel, software, compiler support etc...
People are throwing stones, but I suspect you didn't mean mature but perhaps "modern"? RISC-V doesn't carry all the endless legacy of 30+ years of MIPS variants and is in most ways cleaner and more streamlined (I do fear RISC-V is rushing towards its own mess, but that's another story).
MIPS (the ISA) _could_ have had RISC-V's current position, but I don't think MIPS (the company) could have survived. Whether the phoenix can survive this reinvention remains to be seen. It's a very crowded space, especially for in-order processors.
Yep. Amazing when you consider that Windows CE ran on MIPS hardware (NEC VR3xxx and VR4xxx chips). Casio's Cassiopeia series for example.
It's not quite a "what could have been" because they were really eclipsed by the StrongARM at that point, as I recall, but it's very notable how quickly people lost interest in MIPS.
It lived on for quite a while with a strong position in the networking market later on.
But then instead of focusing on networking/infrastructure hardware like they could have, they went on a wild goose chase... to try to gain a place in phones.
To be fair, PowerPC also tried an extended post-AIM life in networking. I don't know if the big-endianness of some networking stacks had anything to do with it.
The earliest Killer networking cards had a PPC chip onboard. One could even see it from the Windows Device manager as such :D
I see NXP has largely slowed development of their old Freescale PPC lineup (formerly Motorola's PPC and logic division) in favor of arm chips.
> The earliest Killer networking cards had a PPC chip onboard.
In my collection I have an IBM server that has two Pentium II processors and, IIRC, three PowerPCs handling specialized chores such as the network and disk array.
The Telum processor is only part of the story of their new mainframe too. While it's the Telum that runs the application code, there are many other different processors (and some Telums with different microcode loaded on boot) performing specialized jobs. The machine can have up to 256 Telum cores, but there's a maximum of 200 that can be dedicated to user code. The remaining cores will be working to ensure the user code doesn't need to wait for anything.
The PSP (Playstation Portable) was a MIPS device too.
My last encounter with MIPS was a payment terminal I worked on in 2014, which sadly never made it to market. There was a company making 'secure' variants with a variety of hardware features suited to payment terminals - a hardware TRNG, a couple of pins which transmitted and received a TRNG signal constantly, so that you could attach a tamper-detection wire and brick the device if it broke, key-erasure features etc.
Not sure; I'm not read up on RISC-V, but I don't know if anyone would want it, since AFAIK two's complement means fewer logic gates and a slimmer instruction set.
One's complement arithmetic doesn't break in the few odd cases that two's complement does, though, so there could be that.