Is it? I’ve seen AI hallucinations, but they seem to be increasingly rare these days.
Much of the AI antipathy reminds me of Wikipedia in the early-to-mid 2000s. I remember feeling amazed by it, but I also remember a lot of ranting by skeptics about how anyone could put anything on there, and therefore it was unreliable, not to be used, and doomed to fail.
20 years later and everyone understands that Wikipedia may have its shortcomings, and yet it is still the most impressive, useful advancement in human knowledge transfer in a generation.
I think robust crowdsourcing is probably the biggest capital-A Advancement in humanity's capabilities that came out of the internet, and there's a huge disparity in results that comes from how that capability is structured and used. Wikipedia designed protocols, laws, and institutions that leverage crowdsourcing to be the most reliable de facto aggregator of human knowledge. Social media designed protocols, laws, and institutions to rot people's brains, surveil their every move, and enable mass disinformation to take over the public imagination on a regular basis.
I think LLMs as a technology are pretty cool, much like crowdsourcing is. We finally have pretty good automatic natural language processing that scales to large corpora. That's big. But the part of the software industry that is driving most of the development, deployment, and ownership of this technology is mostly doing uninspired and shitty things with it. I have some hope that better orgs and distributed communities will accomplish some cool and maybe even monumental things with LLMs over time, but right now the field is bleak. Not because the technology isn't impressive (although, impressive as it is, it's still being oversold), but because Silicon Valley is full of rotten institutions with broken incentives, the same ones that brought us social media and subscriptions to software. My hope for the new world a technology will bring about will never rest with corporate aristocracy, but with the more thoughtful institutions and the distributed open source communities that actually build good shit for humanity, time and time again.
Words are something made up to express whatever the speaker/author intends them to, so there is really no such thing as correct or incorrect there. A dictionary can hint at the probability of someone else understanding a word absent of other context, which makes for a useful tool, but that is something quite different to establishing correctness.
As for statements that can actually be incorrect, proving their correctness has always been impossible, but we accept the human consensus as a close enough approximation. With that, verifying 'correctness' to the degree that is possible is actually quite easy: validate the answer across many different LLMs trained on the human consensus. They will not all hallucinate identically. If convergence is found, then you have also found the human consensus. That doesn't prove correctness (we have never had a way to do that), but it is equivalent to how we have always established what we believe is correct.
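The convergence check described above can be sketched roughly like this. Note that `query_model` and the model names are stand-ins, not a real API: here the calls are stubbed with canned answers, and a real version would hit several independent providers.

```python
from collections import Counter

def query_model(model: str, prompt: str) -> str:
    """Stand-in for a real LLM API call; returns canned answers here."""
    canned = {
        "model-a": "Paris",
        "model-b": "Paris",
        "model-c": "Lyon",  # one model "hallucinates"
    }
    return canned[model]

def consensus(prompt: str, models: list[str], threshold: float = 0.6):
    """Ask several independent models the same question and return the
    majority answer, or None if no answer clears the agreement threshold."""
    answers = [query_model(m, prompt).strip().lower() for m in models]
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count / len(answers) >= threshold else None

print(consensus("Capital of France?", ["model-a", "model-b", "model-c"]))
# two of three stubbed models agree, so "paris" is the consensus
```

This only measures agreement, not truth, which is exactly the point of the comment above: convergence recovers the consensus, and the consensus is the best proxy for correctness we have ever had.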
It is a fundamental property of the universe. Whether or not it is useful is immaterial. Humans are unable to read minds. They can only make up words and use them as they intend. There is no other way.
Despite your insistence, I think you will find that the human consensus is that it is useful. The human consensus is especially biased in this case, I will grant you that, but it seems few humans wish they were bears in the forest. Our ability to communicate so effectively in such a messy, imperfect environment is what has enabled us to be unlike all the other animals.
It might not sound like it should work on paper, but in the real world it does.
Turns out that because we've defined "words" as a thing that means a thing, now there are rules around "language" and "words". So while you're welcome to invent whatever combination of sounds you prefer to mean what you like, those sounds can be "correct" or "incorrect" as soon as other people become involved, because now you've entered into a social construct that extends beyond yourself.
So again your conclusion is technically correct, in a navel-gazing "the universe is what I perceive" sort of way, but counterproductive to use as a building block for communication.
There is no correct or incorrect here, but I will say it looks perfectly fine to me — naturally, as anything goes. I don't understand it. Is that what you are trying to communicate? There are many words I don't understand; even ones used commonly enough to be found in the dictionary. That is nothing new.
Here's the magic: I don't need to understand. Nobody is born with the understanding. Where communication is desired, we use other devices to express lack of understanding and keep trying to convey intent until a shared understanding is reached. I don't yet understand what that means, but assuming you are here in good faith, I eventually will as you continue to work to communicate your intent behind it.
I know computer people who spend their days writing in programming languages that never talk back struggle with this concept, but one's difficulties in understanding the world around them don't define that world.
> there are rules around "language" and "words".
If you are trying to suggest that there is some kind of purity test, it is widely recognized that what is often called Frisian is the closest living thing to English as it used to be spoken. What you are writing looks nothing like it. If there are English rules, why don't you follow them? The answer, of course, is that the only "rules" are the ones you decide to make up in the moment. Hence English today is different from English yesterday, and very different from English centuries ago.
this is important, i feel like a lot of people are falling into the "stop liking what i don't like" way of thinking. Further, there are a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you. ...or leave it as is.
You're right, though I think a lot of the pushback is due to the way companies are pushing AI usage onto employees. Not that complaining on HN will help anything...
And often incorrect! (and occasionally refuses to answer)