
This is a very narrow view of how LLMs can interact to improve inference accuracy. Different LLMs have different capabilities. They can be used in coordination to improve results.
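One simple form of coordination is majority voting across models (sometimes called self-consistency): ask several models the same question and keep the answer most of them agree on. A minimal sketch, with stub functions standing in for real API calls to distinct models:

```python
from collections import Counter

# Hypothetical stand-ins for three different LLM backends.
# In practice each would be a call to a separate model's API.
def model_a(question):
    return "Paris"

def model_b(question):
    return "Paris"

def model_c(question):
    return "Lyon"

def majority_vote(question, models):
    """Query several models and return the most common answer.

    When models make largely independent errors, an answer that a
    majority of them agree on is more often correct than the output
    of any single model.
    """
    answers = [m(question) for m in models]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(majority_vote("What is the capital of France?",
                    [model_a, model_b, model_c]))
# Prints "Paris": two of the three stub models agree.
```

Voting is only the simplest scheme; other coordination patterns (one model drafting, another critiquing or verifying) exploit the same idea that different models have different failure modes.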

Your objections rest on an unsupported assumption that "we can't do better." Meanwhile, we have plenty of evidence of massive, continual improvement in the very areas you hold up as problematic.


