Hacker News

You can’t reason about inference (or training) of LLMs on the semantic level. You can’t predict the output of an LLM for a specific input other than by running it. If you want the output to be different in a specific way, you can’t reason with precision that a particular modification of the input, or of the weights, will achieve the desired change (and only that change) in the output. Instead, it’s like a slot machine that you just have to try running again.
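The sampling step is one concrete source of that slot-machine behavior: at each token the model draws from a probability distribution over the vocabulary, so the same prompt can yield different outputs run to run. A toy sketch in plain Python (no real model; the logits are made up, and the function name is just for illustration):

```python
import math
import random

def sample_token(logits, temperature=0.8, rng=random):
    """Softmax with temperature, then draw one token index at random."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1
```

The highest-logit token is only *more likely*, not guaranteed, which is exactly why you can't predict a specific output without running the thing.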

The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.




> That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

Right, which is the point: LLMs are much more like human coworkers than compilers in terms of how you interact with them. Nobody would say that there's no point to working with other people because you can't predict their behavior exactly.


This thread is about what software developers like. It’s common knowledge that many programmers like working with computers because that’s different in specific ways from working with people. So saying that LLMs are just like people doesn’t help here.

Yeah, and you are sugarcoating it - the stereotype that some programmers actively dislike socializing exists for a reason.

Working with people != socializing, those are two very different things.

You can be professional and collaborate productively at work with people who you don't like at a personal level and have no intention of socializing with. The Mythbusters were the best example of this.

I get along great with all colleagues but I stopped joining them for coffee and watercooler smalltalk since we don't vibe and have nothing in common, so not only is it a waste of my time, it's also an energy drain for me to focus and fake interest in forced social interactions. But that doesn't mean we can't be productive together at technical stuff. I do think my PoV resonates with most people.


There is quite a bit of overlap - both require social skills.

Yeah there's a reason there is a stereotype/trope about programmers not liking people. I like a lot of people. But I would hate to work with many of them even if I like hanging out with them.

That said, I do like having an LLM that I can treat like the crappy bosses on TV treat their employees. When it gets something totally wrong I can yell at it and it'll magically figure out the right solution, but still keep a chipper personality. That doesn't work with humans.


Kinda funny how we managed to type the exact same thought at the same time.

You beat me by two mins :)

Be careful. You get good at what you practice.

> LLMs are much more like human coworkers than compilers in terms of how you interact with them.

Human coworkers are much more predictable. A workplace where people acted like LLMs would be a complete zoo. Imagine asking for an endpoint modification and getting back a broken backend. Or brainstorming with a PM and the reply is "you're absolutely right, whatever I was saying was completely wrong, but let me repeat it in a different manner".


> Or brainstorming with a PM and the reply is "you're absolutely right, whatever I was saying was completely wrong, but let me repeat it in a different manner".

As if this isn't incredibly common...?


Nobody would say:

"...there's no point to working with other people because you can't predict their behavior exactly."

Because you CAN predict coworker behavior to a useful point. E.g., they'll probably reply to that email on Monday. They'll probably show you a video that you find less amusing than they do.

With LLMs you can't be quite sure whether they will make something up, forget a key detail, hide a mistake that will obviously be found out when everything breaks, etc. Stupid things that most employable people wouldn't do, like building a car and forgetting the wheels.


> LLMs are much more like human coworkers

Specifically, they are like Julius, the colleague managers like but who is a drag on everyone else.

https://ploum.net/2024-12-23-julius-en.html


What are your inputs and outputs? If the inputs are zip files and the outputs are uncompressed text, don't use an LLM. If the inputs are English strings and the outputs are localized strings, LLMs are way more accurate than any procedural code you might attempt for the purpose. Plus, changing the style of the outputs by modifying inputs/weights is also easier: you just need to provide a few thousand samples rather than think of every case. Super relevant for human coding - how many hobbyists or small businesses have teams of linguists on staff?
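The first half of that contrast (deterministic input/output, so don't use an LLM) is exactly what a few lines of ordinary code handle perfectly every time. A minimal sketch of the zip example, using Python's stdlib; the function name is just for illustration:

```python
import io
import zipfile

def decompress_texts(zip_bytes: bytes) -> dict:
    """Deterministic task: extract every member of a zip archive as UTF-8 text.
    No sampling, no surprises - the same bytes in always give the same text out."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return {name: zf.read(name).decode("utf-8") for name in zf.namelist()}
```

The localization case is the opposite: there is no closed-form mapping from English to, say, Japanese, which is why a learned model beats any procedural attempt there.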


