> It sounds like you’re thinking of Kwisatz Haderach from Dune.
> The spelling/pronunciation gets mangled a lot (“quiz-atz haderach,” “kwitzatteracht,” etc.), but the original term is Kwisatz Haderach.
I asked it if Hacker News was in its training data and gave it the website, and it gave me the first "I don't know anything about that" I've ever seen from it.
> IPv6 has some quirks that make it harder to digest.
Almost every point in your list is wrong.
> - link local gateway address, makes it hard to understand why the subnet does not have a gateway from the same address space
IPv4 has link-local addresses, too. Those are the 169.254.X.X addresses that you see on Windows machines. IPv6 adds nothing new.
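The symmetry is easy to check with Python's stdlib `ipaddress` module (the addresses below are arbitrary documentation examples):

```python
import ipaddress

# Both families reserve a link-local range: 169.254.0.0/16 for IPv4
# (RFC 3927) and fe80::/10 for IPv6 (RFC 4291).
print(ipaddress.ip_address("169.254.1.10").is_link_local)  # True
print(ipaddress.ip_address("fe80::1").is_link_local)       # True
print(ipaddress.ip_address("192.0.2.1").is_link_local)     # False
```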
> - privacy extensions: it is very hard to explain to people why they have 3-4 IPv6 addresses assigned to their computer
Well then, don’t use them. Configure the machines with one address each, just like before. If you want the (arguable) advantages of the privacy extensions, they are available, but not mandatory.
> - multicast instead of broadcast
IPv4 always had multicast, too. IPv6 is simplified by considering the broadcast concept to be a kind of multicast.
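Both halves of that claim can be demonstrated with the stdlib `ipaddress` module (192.0.2.0/24 is a documentation network):

```python
import ipaddress

# IPv4 has had multicast since RFC 1112 (224.0.0.0/4)...
print(ipaddress.ip_address("224.0.0.1").is_multicast)          # True, the all-hosts group

# ...and IPv4 subnets also carry a dedicated broadcast address:
print(ipaddress.ip_network("192.0.2.0/24").broadcast_address)  # 192.0.2.255

# IPv6 drops broadcast entirely; the all-nodes multicast group
# ff02::1 plays the role that subnet broadcast used to.
print(ipaddress.ip_address("ff02::1").is_multicast)            # True
```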
> - way too many ways for autoconfiguration (SLAAC, DHCPv6)
SLAAC is just link-local addresses, which you already mentioned above. Did you mean NDP with router advertisements?
If you did, you have a small point, but DHCPv6 is still there, as always. IPv6 just offers an additional feature for the simple case where a host only needs an IP address, a netmask, and a router address.
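That "simple case" mechanism can be sketched concretely. Below is the classic modified-EUI-64 address formation SLAAC originally used, assuming a /64 prefix learned from a router advertisement; the prefix and MAC are documentation examples, and modern stacks usually prefer stable random identifiers (RFC 7217) instead:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine an advertised /64 prefix with a modified EUI-64
    interface identifier derived from the interface's MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.ip_network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8::/64", "00:1a:2b:3c:4d:5e"))
# 2001:db8::21a:2bff:fe3c:4d5e
```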
> - no real tentative mapping to what people were used to. Every IPv6 presentation I did had to start with “forget everything you know about IPv4”
That’s the complete opposite of my experience. Almost everything in IPv6 works exactly the same as with IPv4.
You're being obtuse. Every point in the original comment is correct; you just disagree that they're issues. The original comment also doesn't state that they are issues, just that they are differences.
• link-local addresses

Autoconfiguration addresses exist in V4, but they are used entirely differently. Interfaces do not get a link-local address if they have a DHCP-assigned or statically configured address. In V6 it is extremely common to use a link-local address as the gateway; in V4 this basically never happens.
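One concrete consequence: a link-local address is only meaningful together with a specific interface, which is why V6 routing tables pair fe80:: gateways with a zone/scope identifier. Python 3.9+ models this directly (`eth0` below is a placeholder interface name):

```python
import ipaddress

# A link-local gateway only makes sense per interface, so it carries a
# zone/scope id -- there can be a different fe80::1 on every link.
gw = ipaddress.IPv6Address("fe80::1%eth0")
print(gw.is_link_local)  # True
print(gw.scope_id)       # eth0
```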
> The original comment also doesn't state they are issues just that they are differences.
My point is that, in most cases, these aren’t differences, since IPv4 does the same thing as IPv6. Therefore, the claim that IPv6 “has some quirks that make it harder to digest [than IPv4]” is incorrect.
> Interfaces do not have link local addresses if they have a DHCP or statically configured address
I could be wrong, but I seem to recall that Windows machines always have an IPv4LL address?
> in V6 it is extremely common to use a link local address as the gateway
The eternal problem with companies like Tailscale (and Cloudflare, Google, etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem alive. What the internet would need is something like IPv6 with automatic encryption via IPsec, with IKE keyed from DNSSEC. But Tailscale has every incentive to prevent such things from being widely and compatibly implemented, because that would destroy their business. Their whole business depends on the problem persisting.
> What the internet would need is something like IPv6 with automatic encryption via IPSEC, with IKE provided by DNSSEC.
I understand the appeal of this vision, but I think history has shown that it's not consistent with the realities of incremental deployment. One of the most important factors in successful deployment is the number of different independent actors who need to change in order to get some value; the lower this number the easier it is to get deployment. By very rough analogy to the effectiveness of medical treatments, we might call it the Number To Treat (NTT).
By comparison to the technologies which occupy the same ecological niches on the current Internet, all of the technologies you list have comparatively higher NTT values. First, they require changing the operating system[0], which has proven to be a major barrier. The vast majority of new protocols deployed in the past 20 years have been implementable at the application layer (compare TLS and QUIC to IPsec). The reason for this is obviously that the application can unilaterally implement and get value right away without waiting for the OS.
IPv6 requires not only that you update your OS but that basically everyone else on the Internet upgrade to IPv6 as well. By contrast, you can just throw a NAT on your network and presto, you have new IP addresses. It's not perfect, but it's fast and easy. Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Even if we stipulate that the specific technologies you mention would be better than the alternatives if we had them -- which I don't -- being incrementally deployable is a huge part of good design.
[0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
> First, they require changing the operating system
This was done very quickly with IPv6; most major vendors had adequate support very early. This shows that it can be done when the companies involved actually want to do it.
> IPv6 requires you not only to update your OS
Blatantly false. AFAIK, all mainstream OSs today have enough IPv6 support to work adequately in a theoretical IPv6-only environment.
> Even the WebPKI has somewhat better NTT properties than DNSSEC: you can get a certificate for any domain you own without waiting for your TLD to start signing (admittedly less of an issue now, but we're well into path dependency).
Wait for CDS and CDNSKEY record support to become more widespread among TLDs (some support it today, and from what I can see, the number is increasing). Then you won't even need your registrar to be involved in your DNSSEC deployment; you can just enable DNSSEC in your DNS server and let it deploy automatically.
> being incrementally deployable is a huge part of good design.
Oh, agreed.
> [0] DNSSEC doesn't strictly require this, but if you want it to integrate with IKE, it does.
Yes, this kind of new feature would have to be implemented in a backwards compatible way, with fallback to normal connections if the other end does not support it. One idea would be to put KEY records in the reverse lookup zones; only if such a record exists will you get automatic IPsec.
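The DNS name where such a hypothetical KEY record would live is just the standard ip6.arpa reverse mapping of the peer's address, one nibble per label. Python's stdlib `ipaddress` module computes it (2001:db8::1 is a documentation address; the KEY-record placement itself is the idea sketched above, not an existing standard):

```python
import ipaddress

# Standard reverse-DNS name for an IPv6 address: expand to 32 nibbles,
# reverse them, append ".ip6.arpa". A KEY record published at this name
# could signal "this host supports automatic IPsec".
addr = ipaddress.ip_address("2001:db8::1")
name = addr.reverse_pointer
print(name)  # ends with "8.b.d.0.1.0.0.2.ip6.arpa"
```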
Most tech businesses exist because problems exist. Tailscale delivers a solution that's available today. The only alternative is to sit and wait for IPv6. I don't imagine Tailscale is against IPv6 any more than security professionals are against memory-safe programming languages.
I thought that too and I've written a very similar comment before. But in fact Tailscale's main product seems to be the zero trust stuff, not dealing with IPv4. At least that's what they say...
To someone who was around at the time, this sounds silly. Is the Commodore 64 then a 16-bit machine, because its address pointers are 16 bits? No, the Amiga and related 68000-based machines were generally considered to be 16-bit machines, and their predecessors were all considered to be 8-bit machines.
The 6510 operates internally as an 8-bit processor. The 68000 operates internally as a 32-bit processor for the most part: its registers are 32 bits wide and the instruction set operates natively on 32-bit values (instruction words themselves are 16-bit aligned).
We don't consider the original IBM 5150 PC to be an "8-bit" machine even though the situation is very similar to the 68000 - internal 16-bit operation, but 8-bit data bus.
The 68000 series has always been 32-bit, even if some implementations have used 16-bit connectivity to the rest of the board. Thus, the Amiga has also always been a 32-bit platform.
Would you consider early version of MacOS to be running on a "24-bit" platform, since the high byte of pointers was often used for non-addressing functionality? No, the 68k Mac has also always been a 32-bit platform, since day one, albeit one that wasn't always "32-bit clean". The Amiga never had this issue, however.
Many years ago, terminal emulators allowed keyboard rebinding via escape sequences. This is why it was then common knowledge never to “cat” untrusted files, and to use a program to display them instead: either a pager, like “less”, or a text editor.
I believe there were even more substantial issues in some terminal emulators, where escape sequences could write to arbitrary files or even execute programs. I think it's still very reasonable advice to avoid dumping arbitrary bytes into the terminal stream, even if only to avoid screwing up the state of the terminal.
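A minimal sketch of that advice in Python: strip ANSI escape sequences and C0/DEL control characters (other than newline and tab) before echoing untrusted text. This is illustrative, not a complete terminal-security filter:

```python
import re

# Matches CSI sequences (ESC [ ... final), OSC sequences (ESC ] ... BEL
# or ESC \), and single-character escapes. Deliberately conservative.
ANSI_ESCAPE = re.compile(r"\x1b(\[[0-?]*[ -/]*[@-~]|\][^\x07\x1b]*(\x07|\x1b\\)|[@-Z\\-_])")

def sanitize(text: str) -> str:
    text = ANSI_ESCAPE.sub("", text)
    # Drop remaining control characters (keep newline and tab).
    return "".join(ch for ch in text
                   if ch in "\n\t" or (ord(ch) >= 0x20 and ord(ch) != 0x7F))

print(sanitize("hello\x1b[31m world\x1b[0m\x07"))  # hello world
```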
> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.
No, that’s not the IPv4 design. That’s an incredibly ugly hack to cope with IPv4 address shortage. It was never meant to work this way. IPv6 fixes this to again work like the original, simpler design, without “local” addresses or NAT.
> In IPv6 each host has multiple global addresses.
Not necessarily. You can quite easily give each host one, and only one, static IPv6 address, just like with old-style IPv4.
The problem here is the IPv6 design. It has multiple configuration mechanisms, and ALL of them suck.

Manual address entry is clumsy because of IPv6 address length; stateless RA is limited and doesn't allow network introspection; stateless DHCPv6 is pointless; stateful DHCPv6 is not supported by the most widely deployed OS. There's also prefix delegation, which needs stateful DHCPv6.