Neat! What happens when the simulated data is hallucinated/incorrect?
In the example videos, the snowy Golden Gate Bridge clip shows the bridge as a single road with three lanes total. In reality it's a divided highway: three lanes on each side of the barrier, six lanes total.
What happens when the car “learns” to drive on the incorrect simulated three-lane version? Will it, say, hug the rightmost lane the next time it's on the real bridge?
Ideally it would learn a relationship between the sensor input and the correct actions, even if that sensor input isn't realistic for the real Golden Gate Bridge.
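To make that concrete, here's a toy sketch (entirely hypothetical; the observation format, the policy, and the lane numbers are my invention, not anything from the actual system): if the policy conditions on what the sensors show rather than memorizing "this is the Golden Gate Bridge", the unrealistic sim scene isn't necessarily a problem, so long as the real observations stay in-distribution.

```python
# Toy sketch, not any real AV stack: everything here (the observation
# format, the policy, the lane counts) is hypothetical.

def policy(observed_lanes: int) -> int:
    """Map a *perceived* lane layout to a target lane index (0 = leftmost).

    Stands in for a learned sensor-input -> action mapping, trained only
    on the hallucinated 3-lane sim bridge.
    """
    return observed_lanes - 1  # learned "keep right" behavior

# On the hallucinated 3-lane sim bridge:
print(policy(3))  # -> 2 (rightmost of three)

# On the real bridge, the camera on one side of the divider still sees
# roughly 3 lanes, so the same observation-conditioned mapping produces
# the same sensible action; nothing bridge-specific was memorized:
print(policy(3))  # -> 2

# The worry in the parent comment is the out-of-distribution case, e.g.
# a lane configuration the sim never produced:
print(policy(4))  # -> 3; pure extrapolation, untested in sim
```

On that framing, the question reduces to whether what the car actually sees on the real bridge looks enough like some road the model was trained on in sim.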