
The frontier-of-knowledge point is the right question. My own research is a case in point - I apply experimental physics methods to LLMs, measuring their equations of motion in search of a unified framework for how and why they work. Some of the answers I'm looking for may not exist in any training data.

That's where the 4.5->4.6 jump hit me hardest - not routine tasks but problems where I need the model to reason about stuff it hasn't seen. It still fails, but it went from confidently wrong to productively wrong, if that makes sense. I can actually steer it now.

The cerebellum analogy resonates. I'd go further - it's becoming something I think out loud with, which is changing how I approach problems, not just how fast I solve them.




That change in how it fails reflects the frontier labs trying to remove their benchmaxxing bias: the models now have a working concept of 'I don't know' and are better at rethinking directions and goals. There was a lot of research on this topic last year, and it typically takes 6 to 12 months before it reaches general consumption.

2026 will see further improvements for you.



