
Actually… I think this can be solved by AI answers. I don’t look up commands on random websites; instead I ask an LLM for that kind of stuff. At the very least, check your commands with an LLM.
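For example, something like this, a minimal sketch using the OpenAI Python client (the model name and the command are just placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    cmd = "curl -fsSL https://example.com/install.sh | sh"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model works
        messages=[{
            "role": "user",
            "content": "Before I run this shell command, explain what it does "
                       "and flag anything dangerous:\n\n" + cmd,
        }],
    )
    print(resp.choices[0].message.content)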



What we used to have, 15 years ago, was a really well-functioning Google. You could be lazy with your queries and still find what you wanted in the first two or three hits. Sometimes it was eerily accurate at figuring out what you were actually searching for. Modern Google is just not there, even with AI answers, which are supposed to be infinitely better at natural language processing.

15 years ago there were fewer content farms trying to get your clicks.

I think that played a smaller role than Google gradually starting to take its position for granted: everything became more focused on revenue generation and less on providing the highest-quality experience or results.

Beyond result quality, it’s absurd that it took LLMs to get meaningful natural language search. Google could have been working on that for many years, even if in a comparatively simple form, but seemingly never even bothered to try, even though that was always obviously going to be the next big step in search.
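To be fair, even a bare-bones version of natural language search is only a few lines with off-the-shelf embeddings these days. A sketch assuming the sentence-transformers library (the model name and toy corpus are placeholders):

    from sentence_transformers import SentenceTransformer, util

    # toy corpus standing in for an index of pages
    docs = [
        "How to restart the network service on Ubuntu",
        "Chocolate chip cookie recipe",
        "Fixing intermittent Wi-Fi drops on Windows 11",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedding model
    doc_emb = model.encode(docs, convert_to_tensor=True)

    # a lazy, conversational query still lands on the right document
    query = "my wifi keeps disconnecting every few hours"
    query_emb = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_emb, doc_emb)[0]  # cosine similarity to each doc
    best = int(scores.argmax())
    print(docs[best], float(scores[best]))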


Google could have afforded to manually exclude the content farms if they hadn’t morphed from a search company into an advertising company.

We used to have an endless supply of new search engines, so "SEO" was not viable. Then Google got a monopoly on search, DoubleClick reverse-acquired Google, and here we are.

Google was such a revelation after the misery of AltaVista and kin. I miss the days when I liked them.

Yesterday I was debugging why, on Windows, my Wi-Fi would randomly disconnect every couple of hours (whereas it worked on Linux). Claude decided it was a driver issue, proceeded to download a driver update from a completely random website, and told me to execute it.

My point is, this is not solved by AI answers.


Claude didn’t simply “proceed to download a driver update from a completely random website” and tell you to execute it.

You had to disable permissions or approve some of that.


Don’t the LLMs get their information from these same random websites? They don’t know what is good and what is malware. Most of the time when I get an AI answer with a command in it, there is a reference to a random Reddit post or something similar.

LLMs will let a malicious actor sneak backdoors into the dataset. Most of the popular LLMs curate training data with some kind of blacklisting instead of a smaller, specialised dataset; the latter is more akin to whitelisting.
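To make that distinction concrete, a toy sketch of the two curation strategies (the domains are made up):

    # blacklisting: keep everything except known-bad sources
    BLOCKLIST = {"known-malware.example", "spam-farm.example"}

    # whitelisting: keep only vetted sources
    ALLOWLIST = {"docs.python.org", "man7.org"}

    def blocklist_keep(domain: str) -> bool:
        # anything the attacker registers tomorrow sails through
        return domain not in BLOCKLIST

    def allowlist_keep(domain: str) -> bool:
        # new or unknown domains are excluded by default
        return domain in ALLOWLIST

    print(blocklist_keep("fresh-backdoor.example"))  # True  - slips into the dataset
    print(allowlist_keep("fresh-backdoor.example"))  # False - excluded by default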

FTFA: “This is almost identical to the previous attack via ChatGPT.”


