
my most commonly repeated prompt; would be nice if they baked it into the tool itself:

"No emojis. be concise. no suggestions unless I explicitly ask for them. answer questions like the machine you are. Don't try and add personality or humour; remember you're a robot."




> Don't try and add personality or humour; remember you're a robot."

> remember you're a robot."

The anthropomorphization juxtaposed to the actual command is a bit ironic.


It really does make you wonder why all the models seem to require that. In principle, it shouldn't be a property of LLMs, and lol no, it's not an "emergent property".

Post-training and "human preference" according to "data". I don't know a single developer who uses these tools for work who prefers that, though. But I also don't know anyone who uses LLMs a lot just "for fun" either; it might just be vastly different preferences between the two userbases.

LLMs are text prediction engines. Starting the prompt with “you are a helpful assistant” helps make the subsequent predicted text more in line with that of a helpful assistant.
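A minimal sketch of the point above: a "system prompt" is just text prepended to the sequence the model predicts from. The `<|system|>`-style delimiter tokens here are illustrative assumptions; real chat templates vary per model.

```python
# Hypothetical sketch: the "helpful assistant" framing is just prepended
# text that steers next-token prediction. Delimiter tokens are made up;
# real models use their own chat templates.
def build_prompt(system: str, user: str) -> str:
    """Flatten a chat into the single text sequence the model predicts from."""
    return f"<|system|>{system}\n<|user|>{user}\n<|assistant|>"

prompt = build_prompt("You are a helpful assistant.", "Summarize this thread.")
```

The model never "knows" it is an assistant; it completes text that reads as if an assistant wrote it.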

lol I once added a similar “you’re a machine so just do as you’re told” to a prompt and it answered back: “I’m not a machine, I’m Claude, a helpful assistant” and refused to do what I asked, because it claimed I didn’t have the authority to make the decision I’d asked it to convey in writing.


I'd add "no ass-kissing"

I like it. Have you tried putting this in your LLM system prompt?
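For anyone trying that, a minimal sketch of pinning the quoted instruction as a system message. The model name and the payload shape are assumptions; any chat API with a "system" role works the same way.

```python
# Hypothetical sketch: pin the instruction once as a system message
# instead of repeating it in every user prompt.
SYSTEM_PROMPT = (
    "No emojis. Be concise. No suggestions unless I explicitly ask for them. "
    "Answer questions like the machine you are."
)

def make_request(user_message: str) -> dict:
    """Build a chat-completion payload with the instruction pinned as system."""
    return {
        "model": "claude-sonnet-4",  # assumption: any chat model name
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }
```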

these are the reasons why I like Claude! When you just talk to it normally, it recognizes and adapts and does none of these things

need prompt macros

Absolutely right! You must be fun at parties.


