
> The solution is the same: punish people for their crimes, don’t punish people for wanting to know things.

The LLMs aren't being punished for wanting* to know things.

The problem for LLMs is, they're incredibly gullible and eager to please, and it's been really difficult to stop them helping any human who asks, even when a normal human looking at the same transcript would say "this smells like the user wants to do a crime".

One use-case people reach for here is authors writing a novel about a crime. Do they need to know all the details? Mythbusters, in (one of?) their Breaking Bad episode(s?), investigated hydrofluoric acid, plus a mystery extra ingredient they didn't broadcast because (a) it made the stuff much more effective and (b) the name of the ingredient wasn't important, only the difference it made.

* Don't anthropomorphise yourself

Ironically, it reads to me like they're talking about the users wanting to know things, not the LLM.


