
> no living thing can get away with making so many mistakes before it's learned anything

If you consider that LLMs have already "learned" more than any one human in this world is able to learn, and still make those mistakes, that suggests there may be something wrong with this approach...



Not so: "per example" is not "per wall clock".

To a limited degree, they can compensate for being such slow learners (per example) because the transistors doing the learning are faster (per wall clock) than biological synapses, to roughly the same degree that you walk faster than continental drift. (Not a metaphor: it really is that scale of difference.)
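
For concreteness, here's a quick back-of-the-envelope check of that comparison as a Python sketch; the figures (GHz-scale transistors, ~1 Hz cortical firing, ~1.4 m/s walking, ~3 cm/year drift) are rough order-of-magnitude assumptions of mine, not numbers from the thread:

  # Rough sanity check of the walking-vs-continental-drift comparison.
  transistor_hz = 1e9                       # ~GHz switching in silicon
  synapse_hz = 1.0                          # ~1 Hz typical cortical firing rate

  walking_m_per_s = 1.4                     # typical human walking pace
  drift_m_per_s = 0.03 / (365 * 24 * 3600)  # ~3 cm/year of continental drift

  print(f"silicon / synapse ratio: {transistor_hz / synapse_hz:.1e}")      # ~1.0e+09
  print(f"walking / drift ratio:   {walking_m_per_s / drift_m_per_s:.1e}") # ~1.5e+09

Both ratios come out around 10^9, so the comparison holds as an order-of-magnitude claim (neurons firing closer to 100 Hz would shave a couple of orders off, which is why "to a limited degree" is doing real work here).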

However, this doesn't work in all domains. When there isn't enough training data and self-play isn't enough… well, this is why we don't have Level 5 self-driving cars, just a whole bunch of anecdotes about various self-driving cars that work for some people and don't work for others: it didn't generalise, the edge cases are too numerous, and learning from them is too slow.

So, are LLMs bad at… I dunno, making sure all the references they cite genuinely support the conclusions they draw before declaring a task complete (I think that's still a current failure mode)… specifically because they're fundamentally different to us*, or just because they're really slow learners?

* They *definitely are* fundamentally different to us, but is this causally why they make this kind of error?


But humans do the same thing. For how many eons did we attribute everything to God's will, without a scientific thought in our heads? It's really easy to be wrong when the consequences don't lead to your death, or are actually beneficial. The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.


> The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

Some machines, maybe. But attention-based LLMs aren't those machines.


I'm not sure. Look at what they're already doing with feedback in code generation: the LLM makes a "hallucination", generates the wrong idea, then tests its code only to find out it doesn't compile. It then revises the idea and tries again.
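
To make that loop concrete, here's a minimal Python sketch of the generate-compile-retry cycle; ask_llm is a hypothetical stand-in for whatever model client you'd actually use, and the compile step just shells out to cc:

  import os, subprocess, tempfile

  def ask_llm(prompt: str) -> str:
      """Hypothetical model call; plug in your actual client here."""
      raise NotImplementedError

  def generate_until_it_compiles(task: str, max_attempts: int = 5):
      prompt = f"Write a C program that does the following:\n{task}"
      for _ in range(max_attempts):
          code = ask_llm(prompt)
          with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
              f.write(code)
              path = f.name
          result = subprocess.run(["cc", path, "-o", os.devnull],
                                  capture_output=True, text=True)
          os.unlink(path)
          if result.returncode == 0:
              return code  # it compiles: declare success
          # The key step: feed the compiler's errors back so the model revises.
          prompt = (f"Your previous program failed to compile:\n{result.stderr}\n"
                    f"Fix it. The original task was:\n{task}")
      return None  # gave up after max_attempts

Compiling isn't the same as being correct, of course, but even this crude loop is "experience" of a sort: the model's next attempt is conditioned on the consequences of its last one.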


A few minutes' worth of “personal experience” doesn't really deserve the “personal experience” qualifier.


Why not? It's just a minor example of what's possible, to show the general concept has already started.


> show the general concept has already started

The same way a toddler creeping is the start of the general concept of space exploration.


Yes. And even so, it shows remarkable effectiveness already.


Like a car engine or a combine. The problem isn't the effectiveness of the tool for its purpose, it's the religion around it.


We seem to be talking past one another. All I was talking about was the facts of how these systems perform, without any reverence at all.

But to your point, I do see a lot of people very emotionally and psychologically committed to pointing out how deeply magical humans are, and how impossible we are to replicate in silicon. We have a religion about ourselves; we truly do have main character syndrome. It's why we mistakenly thought the earth was at the center of the universe for eons. But even with that disproved, our self-importance remains boundless.


> I do see a lot of people very emotionally and psychologically committed to pointing out how deeply magical humans are, and how impossible we are to replicate in silicon.

This is a straw man. The question isn't whether this is possible (that's an open question); it's whether we are already there, and the answer is pretty straightforward: no, we aren't. (And the current technology isn't going to bring us anywhere near that.)



