
Hallucination usually isn't random; it's a confidence problem. When certainty about the context drops, the model still has to emit tokens, so it commits to a plausible-sounding guess rather than abstaining. I've had success mitigating that by letting multiple small models "cross-check" each other instead of one large model rambling to fill the gap. Feels more like peer review than single-brain improvisation.
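
Here's a minimal sketch of what that cross-checking might look like, assuming an OpenAI-compatible chat endpoint. The model names, the 2/3 agreement quorum, and the crude answer normalization are all illustrative placeholders, not a specific recipe from the comment above:

  from collections import Counter
  from openai import OpenAI  # assumes an OpenAI-compatible endpoint

  client = OpenAI()
  MODELS = ["small-model-a", "small-model-b", "small-model-c"]  # placeholders

  def ask(model: str, question: str) -> str:
      resp = client.chat.completions.create(
          model=model,
          temperature=0,  # near-deterministic answers make agreement meaningful
          messages=[{"role": "user", "content": question}],
      )
      return resp.choices[0].message.content.strip()

  def cross_check(question: str, quorum: float = 2 / 3) -> str | None:
      # Normalize crudely so "Paris." and "paris" count as the same answer.
      answers = [ask(m, question).lower().rstrip(".") for m in MODELS]
      best, votes = Counter(answers).most_common(1)[0]
      # Abstain instead of guessing when the models disagree; their
      # disagreement is the low-confidence signal a single model papers over.
      return best if votes / len(MODELS) >= quorum else None

Returning None on disagreement is the whole point: you surface "we don't know" as an explicit outcome instead of letting one model improvise past it.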

