I will make this one argument in favor of casinos: at least we have coevolved with them. They've been around for centuries. We collectively recognize the dangers. We are not collectively blindsided by them.
Individually, yeah, by all means they can prey on people. But they're on the list of things that have been preying on people for centuries, like alcohol, and all kinds of other things. Ring-fencing casinos has a track record of some success at containing them.
I mean, sure, I'd love to wake up tomorrow and find the human race has advanced to the point that, down to every last individual, we no longer gamble and that industry vanishes in a puff of smoke. I am as far into the belief that they are immoral as it is practically possible to be. But they are at least a known and knowable risk.
These prediction markets are blindsiding us. We could put up with them for another few decades until we coevolve further with them, or we could just, you know, not. Just end them now. Plus, prediction markets have a certain meta-ness to them that casinos largely lack, which will keep them fresh and coevolving their own new ways to predate on us. Casinos have basically reached their final form; prediction markets could take yet more decades to get there, and it's possible there isn't a stable endgame with them at all. Or, again, we could just end them here and now.
The libertarian argument for prediction markets is really beautiful.
It's just a pity it basically depends on all participants in the prediction market being basically unaware that they are participating in a prediction market, and being oblivious to the incentives to create the outcomes they are predicting by the very act of predicting it with money.
But other than that minor detail, that little minor catastrophic flaw in the foundation, it's a beautiful argument.
I think we can call it as a society. It's a failure. We can go back to banning them, not just for moral reasons, but just pragmatic ones. The theory doesn't work. The supposed benefits don't manifest, and "unanticipated" costs to everyone do. We did the experiment. (Again.) We can close this out now.
You shouldn't model it as incredibly hard to fake. It isn't. It's harder than typing a stolen password into a web site, but if you set out to do it, it's not that much harder.
This is the primary reason I'm against biometrics used for identity. Yeah, the privacy invasion is a problem, but I think that's completely dominated by the fact that if everyone uses it, it will be leaked, and once leaked, can indeed be quite practically faked. If used as a password, it's a password you can never change. That is useless.
The cost of overcoming a security measure should be greater than the value of the thing it is protecting. The cost of, for instance, replicating a fingerprint given a photo of it is basically a home hobbyist project for the weekend. Check out YouTube for many people who have done exactly that and give instructions on how. When the cost of bypass is "home hobbyist project on a weekend", the value of what it should be expected to protect is correspondingly low.
(In fact I don't even use it on my cell phone, with all its access to bank accounts and Amazon accounts and other ways to spend my real money. The idea of using, as the password to all that stuff, something I leave arbitrary copies of sitting right on my screen is completely absurd. Everything important is locked behind codes and passwords. It's less convenient than fingerprints, but at least codes and passwords offer actual security.)
You also have to bear in mind the costs of the biometrics gathering. If you have a physical guard watching someone do a retinal scan and verifying that they have put their real eye up to it, you're at least on track to something that takes a lot of resources to overcome, especially in combination with other techniques of identification. If you don't have that, we're back to "how cheaply can we replicate whatever passes for a retina with this scanner", and that's likely to be cheaper than most people think. Most real-world biometric deployments sit in places where attackers can mount arbitrary attacks with impunity.
You might find it interesting to learn a bit about information theory. The entire purpose of your specific number is precisely to identify which number in that list is yours. Having the list of all possible numbers is irrelevant. Conceptually you can model that as everyone having that list, all the time. But that's not enough to do anything with, because having that entire list means you have zero information.
If you say "it starts with an 8", you've eliminated 90% of the possibilities. Now you have log2(10) bits of information, but you haven't nailed it down yet. Each additional digit you give adds log2(10) more bits, until you've nailed it down.
This is a common misconception people have. I remember someone who claimed to have copyrighted all possible melodies by virtue of having printed them out and thus enumerated them. But that is meaningless, because the entire job of naming a specific melody is precisely the nailing down of which one you mean. Expanding the list of possibilities you might mean is actually a reduction in the amount of information, despite the superficial appearance of listing more numbers out. When you expand the possibilities out to "all possible instances of the thing", you're actually at the minimum of information, not the maximum.
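The arithmetic above can be sketched directly: information gained equals log2 of the factor by which the candidate set shrinks. A minimal illustration (the 10-digit-number example is assumed for concreteness):

```python
import math

def info_bits(possibilities_before: int, possibilities_after: int) -> float:
    """Bits of information gained by narrowing the candidate set."""
    return math.log2(possibilities_before / possibilities_after)

total = 10 ** 10  # all possible 10-digit numbers

# Holding the entire list narrows nothing: zero information.
print(info_bits(total, total))        # 0.0

# "It starts with an 8" eliminates 90% of the candidates.
print(info_bits(total, total // 10))  # ~3.32, i.e. log2(10)

# Each further digit adds another log2(10) bits; ten digits
# together pin down exactly one number out of 10^10.
print(10 * info_bits(10, 1))          # ~33.2, equal to log2(10**10)
```

Note the symmetry: the total bits needed to single out one number is just log2 of the size of the full list, however you dole those bits out.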
I've acquired a sense for at least some of the bots. There's a set of bots that post a high-engagement post about once a day to an implausibly large range of subreddits, with implausible regularity. Judging by the fact that I remove them and most other subs don't, most subs have not figured this out yet.
There is an obvious solution to that problem, which I haven't wanted to put out there, but I've become increasingly suspicious that it's already been figured out anyhow, which is to limit a specific user account to a specific "persona" with plausible interests and posting rates.
And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.
I know Reddit is concerned about this at the corporate level, but I'm not sure they realize this is possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. And it will be years before the masses realize this and stop visiting, and by the time that happens all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN, but it's still an almost negligible fraction of the total userbase of something like Reddit today. That will change.
Reddit's also famous for NSFW content. There are also stories about users harassing people who post in the "wrong" subreddit (e.g. political subreddits of the opposite view).
Reddit has a P/E ratio higher than Nvidia. Go ahead and think about that for a little while, and then try to explain it with anything other than its value being that of a bot-driven propaganda-pushing device for sale.
Out of curiosity, has anyone noticed a non-negligible presence of bots in threads on HN? I haven't, but I'm not sure if that's because I'm bad at spotting them or because HN is good at getting rid of them or because HN is a niche platform.
Yes, they’re very identifiable. New or resurrected account makes multi-paragraph comments on random topics with “insights” that read like AI, even if they don’t have em-dashes or “it’s not X it’s Y” (and sometimes they do).
Fortunately and in fairness to this site, they’ve become rarer, and most seem to be flagged within hours. Usually I look at the comments to confirm, and most are already dead.
I made a post here a bit ago where one of the few replies I got was one of these conversational ad-bots, albeit on the more obvious side. It was getting flagged which gives me hope that HN is good at filtering it, but I also mildly worry I'm (or we're) just missing it when it's subtle. I do suspect it's a huge volume in terms of comment count either way though.
I have suspicions, but there are fewer signals available to the general public on HN, so it's harder to tell.
Well... to be more precise... I'm abundantly positive there are bots and shills here in a general sense. But when it comes to identifying specific accounts as bots or shills, it gets difficult. Yeah, a lot of us have gotten pretty good at identifying the "default LLM voice", but it is trivial to kick it out of that.
I have done some formal writing with AI, and I always feed it a sample of my own writing to emulate. It doesn't do it perfectly. For instance, I'm a semi-colon kind of guy and it still em-dashes without more explicit instructions to avoid them. But what comes out the other end would definitely pass most people's "default LLM voice" sniff test; it eliminates most of the tells [1] people look for. (I just checked. The resulting output may actually be "better" at avoiding the tells than my own actual text...)
The upshot of all of that is that we are approaching a point with the current AIs that with just a bit of clever prompting it may take many, many kilobytes of text for someone to form a justified (!) opinion that some set of posts is actually AI.
Enormous latency is all relative. A network drive on a local network holding a swap file would still outperform some number of computers I've owned that put their swap on much, much slower hard drives of their time. Of course, nobody was trying to swap two gigabytes to these drives as that would have been 10 times their capacity....
It runs Linux with Windows underneath it, hence Windows is the subsystem being subordinate (in the most literal sense where it simply means "order" with no further implications) to Linux.
Per wongarsu's post, something like the OS/2 Subsystem is an OS/2 system with Windows beneath it, but the OS/2 Subsystem is much smaller and less consequential, thus subsidiary (in the auxiliary sense) to Windows as a whole.
Isn't marketing fun?
This is how we end up with hundreds of products that provide "solutions" to your business problems and "retain customers" and upwards of a dozen other similar phrases they all slather on their frontpages, even though one is a distributed database, one is a metrics analysis system, one handles usage-based billing, one is a consulting service, one is a hosted provider for authentication... so frustrating trying to figure out just what a product is sometimes with naming conventions that make "Windows Subsystem for Linux" look like a paragon of clarity. At least "Linux" was directly referenced and it wasn't Windows Subsystem for Alternate Binary Formats or something.
This is in the class of things where even if the specific text doesn't trace to a true story, it has certainly happened somewhere, many times over.
In the math space it's not even quite as silly as it sounds. Something can be both "obvious" and "true", but it can take some substantial analysis to make sure the obvious thing is true by hitting it with the corner cases and possibly exceptions. There is a long history of obvious-yet-false statements. It's also completely sensible for something to be trivially true, yet be worth some substantial analysis to be sure that it really is true, because there's also a history of trivial-yet-false statements.
I could analogize it in our space to "code so simple it is obviously bug free" [1]... even code that is so simple that it is obviously bug free could still stand to be analyzed for bugs. If it stands up to that analysis, it is still "so simple it is obviously bug free"... but that doesn't mean you couldn't spend hours carefully verifying that, especially if you were deeply dependent on it for some reason.
Heck I've got a non-trivial number of unit tests that arguably fit that classification, making sure that the code that is so simple it is bug free really is... because it's distressing how many times I've discovered I was wrong about that.
[1]: In reference to Tony Hoare's "There are two ways to write code: write code so simple there are obviously no bugs in it, or write code so complex that there are no obvious bugs in it."
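A hypothetical illustration of the point above (the function and its edge cases are invented here, not from any real codebase): even a one-liner that is "obviously" bug free still earns a unit test that walks the corner cases, because that's exactly where the obvious-yet-false statements hide.

```python
def clamp(x, lo, hi):
    """Constrain x to the range [lo, hi]. So simple it is
    'obviously' bug free... which is why we still test it."""
    return max(lo, min(x, hi))

# The test that verifies the obvious really is true,
# including the degenerate corner cases:
assert clamp(5, 0, 10) == 5     # in range: unchanged
assert clamp(-1, 0, 10) == 0    # below range: pinned to lo
assert clamp(11, 0, 10) == 10   # above range: pinned to hi
assert clamp(3, 7, 7) == 7      # degenerate range: still correct
```

A classic way to get this wrong is writing `min(lo, max(x, hi))` with the bounds swapped; the code still looks "obviously" right at a glance, and only the corner-case assertions catch it.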
The problem is people want to use 2026 tools to write their code but they want to be judged by 2016 standards.
In 2016, if I saw 10,000 lines of code, that carried a certain proof-of-work with it. They probably couldn't help but give the code some testing as they were working up to that point. We know there has to have been a certain amount of thought in it. They've been living with it for some months, guaranteed.
In 2026, 10,000 lines of code means they spent a minimum amount of money on tokens. 10,000 lines can be generated pretty quickly in a single task, if it's something like "turn this big OpenAPI spec into an API in my language". It's entirely possible 90%+ of the project hasn't actually been tested, except by the unit tests the AI wrote itself, which is a great start, but not more than that for code that hasn't ever actually run in any real scenario from the real world.
Nothing about any of that is intrinsically wrong. But the standards have to be shifted. While the bar for a "Show HN" should perhaps not be high, it should probably be higher than "I typed a few things into a text box". And that's not because that's necessarily "bad" either, but because of the mismatch between valuable human attention and the cheapness of being able to make a draw on it.
It's kind of a bummer in some sense... but then again, honestly, the space of things that can be built with an idea and a few prompts to an AI was frankly fairly well covered even before AI coding tools. Already I had a list of "projects we've already seen a lot of so don't expect the community to shower you with adulation" for any language community I've spent any significant time in. AI has grown the list of "projects I've seen too many times" a bit, but a lot of what I've seen is that we're getting an even larger torrent of the same projects we already had too many of before.
> 2026 tools to write their code but they want to be judged by 2016 standards.
That's basically the entire AI landscape atm.
I keep seeing people do things like spend a weekend building a product and then charge ridiculous prices for it, with the justification that that's what such products would've cost a few years ago.
For some reason, it doesn't click for them that those prices were a reflection of the effort it took to get to that point, and that the situation has changed.
Really apt comment, and I think it applies to a broader domain than just coding. People want others to judge their super fancy slide deck or new branding by that same 2016 standard, essentially fabricating accomplishment for themselves.
I've always preferred to think of normalization as more about "removing redundancy" than in the frame it is normally presented. Or, to put it another way, rather than "normalizing" which has as a benefit "removing redundancy", raise the removing of redundancy up to the primary goal which has as a side benefit "normalization".
A nice thing about that point of view is that it fits with your point; redundancy is redundancy whether you look at it with a column-based view or a row-based view.
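A toy sketch of that framing (table and field names are invented for illustration): start from a denormalized set of rows where a fact repeats, and removing the redundancy produces the normalized form as a side effect.

```python
# Denormalized orders: the customer's city repeats on every row,
# so the rows can disagree with each other after a botched update.
denormalized = [
    {"order_id": 1, "customer": "alice", "city": "Oslo",   "total": 40},
    {"order_id": 2, "customer": "alice", "city": "Oslo",   "total": 15},
    {"order_id": 3, "customer": "bob",   "city": "Bergen", "total": 99},
]

# Removing the redundancy: each fact is stated exactly once.
customers = {row["customer"]: row["city"] for row in denormalized}
orders = [
    {"order_id": r["order_id"], "customer": r["customer"], "total": r["total"]}
    for r in denormalized
]

print(customers)  # each city appears once; updates happen in one place
```

The redundancy was the problem; the two-table layout is just what's left once it's gone, which is why the framing works regardless of whether you store rows or columns.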