> This experience fits a pattern I keep running into with European business-facing APIs and services. Something is always a little bit broken.
I feel like this isn't just business services though.
American engineers are used to working for either big tech or "Silicon Valley inc." European engineers are used to working for Volkswagen, Ikea or Ryanair. Very different kinds of businesses who treat tech very differently.
Over here, competing on user experience and attracting users with a slick interface that people love to use isn't really something most companies think about (and so they get their lunch eaten by the Americans).
Nowhere is the European mentality more evident than in cybersecurity, where outdated beliefs still dominate. In this mentality, everybody is out to get you (and that notably includes your vendors, your business partners and your customers), so all infrastructure has to be on prem, open source is free and hence suspicious by definition, obscurity is the best kind of security, encryption doesn't work so data should go over custom fiber, and if you have to expose an API on the public internet, an Authorization header isn't enough, it should also require mTLS behind a layer of IPsec.
And we're still forcing periodic password changes or requiring a number, an upper-case letter, a symbol...
I'm a European engineer and I can confirm that our tech is often broken and customer-facing people are usually obtuse and hostile about it. We don't even seem to properly implement our own legal requirements. Sometimes, Americans implement the GDPR better than we do.
First, children also have a right to free speech. It is perhaps even more important than for adults, as children are not empowered to do anything but speak.
Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".
Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.
> First, children also have a right to free speech.
Maybe I'm wrong (I haven't read all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That to me looks like a significant difference.
As for your other two points, I can't really argue against those because they are obviously valid but also very hypothetical and so in that context sure, everything is possible I suppose.
That said, something has to be done at some point, because it's obvious that these platforms are having a profound impact on society as a whole. And I don't care about the kids; I'm talking in general.
Under most of these laws, most websites with user-generated content qualify.
I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.
> but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.
I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media", but missing from these short summaries is how social media is defined, which is obviously an important detail.
Seems to me that an "easy" solution would be to implement some sort of size cap; that way you could easily leave old-school forums out.
It would not be a perfect solution, but it's probably better than including every site with user-generated content.
> I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky)
An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):
1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.
2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.
3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.
These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.
[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.
[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.
The chilling effect of tying identity to speech means it directly affects free speech. The Founding Fathers of the US wrote under many pseudonyms. If you think you may be punished for your words, you might not speak out.
We know we cannot trust service providers on the internet to take care of our identifying data. We cannot ensure they won't turn that data over to a corrupt government entity.
Therefore, we can not guarantee free speech on these platforms if we have a looming threat of being punished for the speech. Yes these are private entities, but they have also taken advantage of the boom in tech to effectively replace certain infrastructure. If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.
> If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.
Are private social media platforms "public services"? And also, you mentioned constitutional rights. Which constitution are we talking about here? These are global-scale issues; I don't think we should default to the US constitution.
> We know we cannot trust service providers on the internet to take care of our identifying data.
Nobody needs to trust those. I can, right now, use my government-issued ID to identify myself online using a platform that's run by the government itself. And if your rebuttal is that we can't trust the government either, then yeah, I don't know what to say.
Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.
> Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.
This is somewhat central to the case for remaining anonymous.
Protesters and observers are having their passports cancelled or their TSA precheck revoked due to speech. You cannot trust the government to abide by the first amendment.
Private services sell your data to build a panopticon, then sell that data indirectly to the government.
Therefore, tying your anonymous speech to a legal identity puts one at risk of being punished by the government for protected speech.
> You cannot trust the government to abide by the first amendment.
Again, this is a global issue. There is no first amendment here where I live. But the issue of the power these platforms have at a global level is a real one and something has to be done in general to deal with that. The problem is what should we do.
It's weird how different and hyper-local the social media landscape was back then. It's not just that every country had their own thing, it's also that they were all very different concepts and ideas.
Poland's social media of choice was "Nasza Klasa" (lit. "Our Class"), the American alternative was called "Classmates" as far as I know. It was intended as a service that let you re-unite with your old classmates, designed with the way the Polish school system worked in mind. It was used for far more than that though, and was quite popular among kids who were still at school.
We're still in that era with messaging apps somehow. While the local alternatives have mostly died out, the world is now a patchwork of WhatsApp, Messenger and Telegram, with islands of iMessage, Line, KakaoTalk and WeChat thrown into the mix. Most countries have basically standardized on one of these, but they can't agree on which one.
Most of my local friends here in the United States were really into LiveJournal and Xanga for a couple years before Myspace went live. That might have been more the younger crowd's scene though.
Every time I open the dev tools on Safari (to reverse-engineer some random broken website that doesn't let me do what I need to and forces me to write yet another Python script using Beautifulsoup4), Google logs me out of all of my accounts.
To add insult to injury, Google's auth management is so broken that if I log in to the "wrong" account first by accident (E.G. when joining a work meeting from Calendar.app), that account now becomes primary for Google Search / Youtube, and there's no way to change that without logging back out from all accounts and then logging into them again.
Anthropic, OpenAI and Google have real user data that they can use to influence their models. Chinese labs have benchmarks. Once you realize this, it's obvious why this is the case.
You can have self-hosted models. You can have models that improve based on your needs. You can't have both.
I'm going to claim that the majority of those users are optimizing for cost and not correctness and therefore the quality of data collected from those sessions is questionable. If you're working on something of consequence, you're not using those platforms. If you're a tinkerer pinching pennies, sure.
This is a weird dichotomy and I don't agree with it. You don't need to have bags of money to burn to work on serious things. You also can value correctness if you're poor.
ChatGPT, Gemini and Claude are banned in China. Chinese model providers are getting absolutely massive amounts of very valuable user feedback from users in China.
Changes like these lend even more credibility to the approach of putting everything on port 443 over TLS, and distinguishing protocols based on hostname / HTTP path.
Wireguard over 443/udp is also a neat trick. No need to make it look like quic although I wouldn't be surprised if someone takes the effort to make it that stealthy.
If everything was on port 443, why would we even need ports?
The ports are there for a reason; it is idiotic to serve everything over HTTP, as you would need a mechanism to distinguish the different flows of traffic anyhow.
Preventing the traffic from being distinguished is the whole premise. Port 23 gets blocked because everyone uses it for telnet, and everyone expects bad actors to know that. If everything moves to 443, we'll end up with a variety of routing systems and no focal point for attack. The only alternative is to disallow port filtering in core internet infrastructure.
We can either have a standard and accept that bad actors will use it against us, or we can accept the chaos that results from abandoning it.
> The only alternative is to disallow port filtering in core internet infrastructure
I think this is an acceptable alternative. In the same way that your mail service is legally required to deliver your mail as part of their universal service obligation (without reading it).
You've got it wrong. It doesn't have to be HTTP[S] traffic.
Reverse proxies can disambiguate based on the SNI. I could run telnetd on port 23, but have port 23 firewalled off, and have my reverse proxy listening on port 443 with TLS forward anything going to telnet.mydomain.com to telnetd. Obviously, my client would need to support that, but a client-side proxy could easily handle that just as well.
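For the curious, pulling the SNI out of a ClientHello is mostly a matter of walking fixed-length fields (per RFC 8446 and RFC 6066); this is what nginx's `ssl_preread` and similar proxies do before deciding where to forward. A rough Python sketch, with a synthetic `build_client_hello` helper included purely for illustration:

```python
import struct

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal, synthetic TLS 1.2 ClientHello carrying an SNI
    extension -- just enough structure for the parser below."""
    name = hostname.encode("ascii")
    # SNI extension: type 0x0000, server_name_list with one host_name entry
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list
    extensions = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"             # client_version: TLS 1.2
        + b"\x00" * 32          # random (zeroed for the sketch)
        + b"\x00"               # session_id length: 0
        + b"\x00\x02\x13\x01"   # one cipher suite
        + b"\x01\x00"           # one compression method: null
        + extensions
    )
    handshake = b"\x01" + struct.pack("!I", len(body))[1:] + body  # type + 24-bit length
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(data: bytes):
    """Walk the ClientHello and return the SNI hostname, or None."""
    if len(data) < 5 or data[0] != 0x16:       # not a TLS handshake record
        return None
    pos = 5                                     # skip record header
    if data[pos] != 0x01:                       # not a ClientHello
        return None
    pos += 4                                    # handshake type + 24-bit length
    pos += 2 + 32                               # client_version + random
    sid_len = data[pos]; pos += 1 + sid_len     # session_id
    cs_len = struct.unpack_from("!H", data, pos)[0]; pos += 2 + cs_len
    comp_len = data[pos]; pos += 1 + comp_len
    if pos + 2 > len(data):
        return None                             # no extensions present
    ext_total = struct.unpack_from("!H", data, pos)[0]; pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from("!HH", data, pos); pos += 4
        if ext_type == 0x0000:                  # server_name extension
            # skip list length (2 bytes) and name_type (1 byte)
            name_len = struct.unpack_from("!H", data, pos + 3)[0]
            return data[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

A reverse proxy would run something like `extract_sni` on the first bytes of the connection, then forward the raw stream to whichever backend the hostname maps to.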
Yes, you can run any service on any port. But tunneling telnet over another protocol seems like it would just move the problem. I don't know too much about SNI, but if "Reverse proxies can disambiguate based on the SNI", wouldn't your network service provider also be able to filter based on SNI?
You would need to agree on a protocol and you would gain all the advantages but also the disadvantages of the tunneling protocol.
> wouldn't your network service provider also be able to filter based on SNI?
Two things:
1. Only if they knew that the hostname in question is indeed being used for telnet tunneling. You can set that host name to whatever you want.
2. Encrypted SNI is a thing.
> You would need to agree on a protocol and you would gain all the advantages but also the disadvantages of the tunneling protocol.
Yeah, admittedly the entire thing is a bit contrived. If your client is capable of speaking the tunneling protocol, then likely you'd just use the tunneling protocol itself, rather than using it to tunnel telnet.
Protocol multiplexing/demultiplexing is a feature of software like sslh, nginx, and HAProxy, and they don't need to listen on multiple ports to speak multiple protocols or connect multiple services. Many advanced reverse proxies can do this with stream sniffing of some flavor.
People already do actually run everything through port 443 simultaneously.
Protocol multiplexing exists. But you will have to agree on a single protocol, which I view as impossible since different applications have different requirements.
If you route all your traffic through HTTPS, that comes with the upsides, for example the security layer (TLS), but also the downsides, for example header overhead. Currently we have an overarching stack: IP routes packets between hosts, and the transport layer (TCP/UDP) divides traffic into different ports at the host; these ports speak different protocols. If you move the multiplexing higher up the OSI stack, you are violating the principle of separation and making your stack less flexible: you are mixing OSI layers 4 (transport) up through 6 (presentation). Conflating these layers can lead to big problems, as this includes the transport layer, where for example the difference between UDP and TCP lives.
The beauty of the network stack is that there are certain layers that separate responsibility. This allows the stack to apply to wildly different scenarios. However I do agree that there should be no filtering applied on behalf of the customers.
> Protocol multiplexing exists. But you will have to agree on a single protocol
I may be misunderstanding your message here, but the requirement to agree on a single protocol isn't true when you're using multiplexing. I think you're confusing tunneling with multiplexing.
With multiplexing, you have multiple protocols listening on a single port. The multiplexer server sniffs the first few bytes of what the client sends to determine what protocol is being used, then decides which back-end to forward the connection to.
Neither the client nor the final back-end need to be aware that multiplexing is happening, and likely aren't.
Through this, you can use both HTTPS and Telnet on port 443 without the Telnet client needing to have any changes done.
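The sniffing step described above can be sketched in a few lines; the heuristics are roughly what sslh uses, and are illustrative rather than exhaustive. Note that protocols where the server speaks first (as with a bare telnet session) send nothing to sniff, which is why sslh falls back to a default backend after a timeout:

```python
def sniff_protocol(first_bytes: bytes) -> str:
    """Guess the protocol from the first few bytes a client sends,
    the way a multiplexer like sslh does. Heuristics, not proofs."""
    if first_bytes[:1] == b"\x16":          # TLS handshake record type
        return "tls"
    if first_bytes.startswith(b"SSH-"):     # SSH identification string
        return "ssh"
    if first_bytes.startswith((b"GET ", b"POST ", b"PUT ",
                               b"HEAD ", b"DELETE ", b"OPTIONS ")):
        return "http"
    return "unknown"                        # punt to a default backend

# A multiplexer would peek (MSG_PEEK) so the chosen backend still
# sees the full, untouched byte stream from the client.
```

Neither end needs modifying: the multiplexer just routes the connection and pipes bytes both ways afterwards.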
1. Because people vote with their wallets and not their mouths, and most companies would rather have a cost accident (quickly refunded by AWS) than everything going down on a Saturday and not getting back up until finance can figure out their stuff.
2. Because realtime cost control is hard. It's just easier to fire off events, store them somewhere, and then aggregate at end-of-day (if that).
I strongly suspect that the way major clouds do billing is just not ready for answering the question of "how much did X spend over the last hour", and the people worried about this aren't the ones bringing the real revenue.
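The end-of-day pipeline described in point 2 amounts to a trivial batch rollup over stored events; a minimal sketch, where the `(timestamp, account, cost)` event shape is an assumption for illustration:

```python
from collections import defaultdict
from datetime import datetime, timezone

def aggregate_hourly(events):
    """Roll raw usage events up into per-(account, hour) spend totals --
    the kind of batch job that can only answer 'how much did X spend
    in the last hour' well after the fact."""
    totals = defaultdict(float)
    for ts, account, cost in events:
        # Truncate the Unix timestamp to the containing UTC hour.
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).replace(
            minute=0, second=0, microsecond=0)
        totals[(account, hour)] += cost
    return dict(totals)
```

Real-time cost control would instead require updating a counter on every metered API call and checking it against a limit inline, which is exactly the hard part the parent comment is pointing at.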
> I strongly suspect that the way major clouds do billing is just not ready for answering the question of "how much did X spend over the last hour", and the people worried about this aren't the ones bringing the real revenue.
See: Google's AI Studio. It's built on Google Cloud infrastructure, so billing updates are slow, which peeves users used to instant billing data with Anthropic and OpenAI.
> and the people worried about this aren't the ones bringing the real revenue.
It's this one. If you're in a position to refund a "cost accident", then clearly you don't have to enforce cost controls in real time, and the problem becomes much easier at billing-cycle granularity; a user setting a cost limit generally doesn't care if you're a bit late to best-effort throttle them.
So I guess we'll have a system whose API is "open and interoperable", meaning "spread across 3000 pages of 5 ETSI TS PDFs that nobody can understand, with the only integration environment available after an expensive security audit, and requiring you to send an email to an email address that hasn't existed for the last 5 years."
It's an anti-brute-force mechanism. It's not for you, it's for all the other accounts that an unattested phone (or a bot posing as an unattested phone that just stole somebody's credentials via some 0-day data exfiltration exploit) may be trying to access.
Sure, banks could probably build a mechanism that lets some users opt out of this, just as they could add a Klingon localization to their apps. There just isn't enough demand.
If you work on mobile apps you will notice that full attestation is too slow to put in the login path. [This might be better than it used to be, now in 2026].
I don't think a good security engineer would rely on attestation as a "front line" anti-brute-force control, since bypasses are not that rare. But yeah, you might incorporate it into the flow, just like captchas, rate limiting, fingerprints etc. and all the other controls you need for web anyway.
I know I'm quibbling. My concern is that future where banks can "trust the client" is a future of total big tech capture of computing platforms, and I know banks and government don't really care, but I do.
> you work on mobile apps you will notice that full attestation is too slow to put in the login path
Hm, Play Integrity isn't that slow on Android, from my experience.
> don't think a good security engineer would rely on attestation as "front line" anti brute force control since bypasses are not that rare
I'm not privy to device-wide bypasses of Play Integrity on devices that ship with a Trusted Execution Environment (which is pretty much all ARM-based Androids), Secure Element, and/or Hardware Root of Trust, but I'd appreciate it if you have some significant exploit writeups (on Pixels, preferably) for me to look at?
> My concern is that future where banks can "trust the client" is a future of total big tech capture of computing platforms
A valid concern. In the case of smart & personal devices like Androids though, the security is warranted due to the nature of the workloads it tends to support (think Pacemaker / Insulin monitoring apps; government-issued IDs; financial instruments like credit cards; etc) and the ubiquity & proliferation of the OS (more than half of all humanity) itself.
A monitoring app doesn't even interact with systems you don't own. Just put a liability disclaimer for running modified versions.
> warranted
Decided by whom? And why is Google trusted, not me? At minimum, I shouldn't face undue hardship with the government due to refusing to deal with a third party, unless we first remove most of Google's rights to set the terms.
Funny that you say that, but the best artificial pancreas so far, which is completely free and open source, will soon be much harder to install on any Android phone without every user getting a valid key from Google.
In Germany, doctors even recommend these tools if they work, because they make patients who know what they are doing healthier and safer.
Naturally, hundreds of other diabetics and I have already contacted our EU representatives due to the changes Google is planning to make in their platform.
> I'm not privy to device-wide bypasses of Play Integrity that ship with Trusted Execution Environment (which is pretty much all ARM based Androids), Secure Element, and/or Hardware Root of Trust, but I'd appreciate if you have some significant exploit writeups (on Pixels, preferably) for me to look at?
Hi, you don't have to break the control on the strongest device. You only have to break it on the weakest device that's not blacklisted.
The situation is getting better as you note, but in the past the problem was that a lot of customers have potatoes, and you get a lot of support calls when you lock them out.
Correct. And the end of ownership, privacy, and truth too. If something can betray you on someone else's orders, it's not yours in the first place. You'll own nothing and if you aren't happy, good luck living in the woods.
Ntfy pays Apple/Google for the ability to deliver notifications to you. They use the free plan as a "gateway drug." It's just a cost of business to them, a marketing tactic to acquire paid users, no different in principle than plastering ads on billboards.
You can't set up your own Ntfy server (at least not without also having a private copy of the Ntfy app).
(Things may be different on the F-Droid side, but many custom notification servers are a battery-life and privacy concern nevertheless.)