
Very concerned with this direction of training on "counterfactual events such as whether the Waymo Driver could have safely driven more confidently instead of yielding in a particular situation." Seems dicey. This could lead to a less safe Waymo. Since the counterfactuals will be generated, I suspect the generations will be biased toward survivor situations: most video footage in the training data will come from environments where people reacted well, not those that ended in tragedy. That emboldens Waymo on generated best-case data. THIS IS DANGEROUS!!!
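
To make the survivorship-bias worry concrete, here is a toy Monte Carlo sketch (my own illustration, with invented numbers; nothing here reflects Waymo's actual pipeline). If crashes rarely survive into the footage that seeds a generator, a risk estimate fit to the surviving clips comes out biased low:

    # Toy sketch: survivorship bias in seed footage (all numbers invented).
    import random

    random.seed(0)
    TRUE_CRASH_RATE = 0.05  # assumed true risk of an assertive maneuver

    outcomes = [random.random() < TRUE_CRASH_RATE for _ in range(100_000)]

    # Unbiased estimate: every outcome is observed.
    all_rate = sum(outcomes) / len(outcomes)

    # Survivor-biased estimate: suppose only 10% of crash footage survives
    # to seed the generator, while benign footage always does.
    kept = [o for o in outcomes if not o or random.random() < 0.10]
    biased_rate = sum(kept) / len(kept)

    print(f"all data: {all_rate:.4f}  survivors only: {biased_rate:.4f}")

Run it and the survivor-only estimate lands near 0.005 instead of 0.05: an order of magnitude of risk goes missing before any generation even starts.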



Not at all. It's not the counterfactual they're generating; it's the "too rare to capture often enough to train a response to" scenarios they're generating.

They're implying that without the model having knowledge, even approximate, of a scene to react to, it simply doesn't react at all; it just "yields" to the situation until it passes. In my experience taking Waymos almost daily, this holds.

I would rather not have the Waymo yield to a tornado, rising floodwaters, or a charging elephant...


Driving is always a balance between speed and safety. If you want ultimate safety, you just sit in the driveway. But obviously that isn't useful. So functionally, one of the most important things a self-driving system decides is "how fast is it safe to drive right now?" Slower is not always better; it has to balance safety with productivity.

Not entering a roundabout when it's clearly safe to do so is a mark against you at a driving exam. So is always driving at 5 mph. It's just not that simple.
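
To make that tradeoff concrete, here is a minimal sketch (my own framing with invented constants, not Waymo's planner): pick the speed that minimizes a combined cost of travel time and collision risk, where risk grows with speed.

    # Toy sketch: speed as a cost tradeoff (all constants invented).
    def total_cost(speed_mph, trip_miles=5.0, risk_weight=1.0):
        hours = trip_miles / speed_mph       # driving slower costs time
        risk = (speed_mph / 100.0) ** 2      # assume risk grows ~quadratically
        return hours + risk_weight * risk

    best = min(range(5, 71), key=total_cost)
    print(f"lowest-cost speed: {best} mph")  # neither 5 mph nor flat-out

With these made-up constants the minimum lands around 29 mph: crawl and the time cost dominates, speed and the risk term does. The hard engineering is in estimating that risk curve per situation, which is exactly where the rare-event data comes in.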


