
This guy is convinced that LLMs don't work unless you specifically anthropomorphize them.

To me, this seems like a dangerous belief to hold.




That feels like a somewhat emotional argument, really. Let's strip it down.

Within the domain of social interaction, you are committing yourself to Type II errors (false negatives), and to divergent training for the different scenarios.

It's a choice! But the price of a false negative (treating a human, or a sufficiently advanced agent, badly) probably outweighs the cumulative advantages (if any). Can you say what the advantages might even be?

Meanwhile, I think the frugal choice is to have unified training and accept Type I errors (false positives) instead. Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that.
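
To make the trade-off concrete, here's a rough sketch of the expected-cost comparison; the probability and both costs are invented placeholders, not measurements:

  # Rough sketch of the argument above. All numbers are assumed, not measured.
  P_HUMAN = 0.5             # assumed chance the counterpart is a real person
  COST_FALSE_NEGATIVE = 10  # treating a human badly (assumed to be expensive)
  COST_FALSE_POSITIVE = 1   # being polite to a bot (mild embarrassment at worst)

  # Divergent policy: classify first, and sometimes misclassify humans as bots.
  def expected_cost_divergent(miss_rate):
      return P_HUMAN * miss_rate * COST_FALSE_NEGATIVE

  # Unified policy: treat everyone the same, so every bot is a "false positive".
  def expected_cost_unified():
      return (1 - P_HUMAN) * COST_FALSE_POSITIVE

  print(expected_cost_divergent(miss_rate=0.2))  # 1.0 under these assumptions
  print(expected_cost_unified())                 # 0.5 under these assumptions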


What are you talking about?

TL;DR: "you're gonna end up accidentally being mean to real people when you didn't mean to."

I meant to.

I want a world in which AI users need to stay in the closet.

AI users should fear shame.


> I want a world in which AI users need to stay in the closet.

> AI users should fear shame.

> created: 3 days ago

Unsolicited advice: stop trolling and see a therapist.


Reading elsewhere here, you've had some really bad experiences, I think.

[flagged]


I know what false positives and false negatives are. I don't understand the user's incoherent response to my comment.

It's funny you're chasing them down the threads with personal attacks.

edit: Okay, I see. The comment you're replying to is a troll. Just flag them.


Do I need to believe you are real before I respond? Not automatically. What I am initially engaging with is a surface-level thought expressed via HN.


