> Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn’t suddenly grow and concludes that it’s just got closer to you.
> Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you’ll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is.
I need help understanding the mechanics here.
What exactly is the flicker described? What does it have to do with shading being recalculated? If hypothetically our eyes did not flicker, how would that affect our depth perception?
As a follow up - what would be involved in emulating this in a virtual system?
[Edit: Removed discussion not relevant to parent's question for reposting as a top-level comment.]
I've been assuming that this bit about "eye flickers" is boyd's way of describing, for a general audience, the phenomenon of saccades: automatic, very fast eye movements that let the brain gather enough information to construct a full visual scene through an eye whose area of sharp focus (the fovea) is very narrow.
I'm not sure how that affects shape-from-shading, though, because saccades are movements of the eyeballs only, not of the head: the eye rotates roughly in place, so the viewing position barely changes. Saccades therefore don't affect angles of reflection or refraction, which would seem to make it impossible for them to change the shading of a given scene.
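To make that argument concrete: the view-dependent part of shading (e.g. a specular highlight) depends on the eye's *position*, not its gaze direction. A minimal sketch using the standard Blinn-Phong specular term (my own illustration, not from the article):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def specular(point, normal, light_pos, eye_pos, shininess=32):
    """Blinn-Phong specular term at a surface point -- the view-dependent
    part of shading. Note it takes the eye's POSITION, not gaze direction."""
    l = unit(light_pos - point)
    v = unit(eye_pos - point)
    h = unit(l + v)  # half-vector between light and view directions
    return max(np.dot(normal, h), 0.0) ** shininess

point  = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 0.0, 1.0])
light  = np.array([1.0, 1.0, 2.0])

eye_a = np.array([0.0, 0.0, 1.0])           # original head position
eye_b = eye_a + np.array([0.05, 0.0, 0.0])  # head translated ~5 cm sideways

# Head translation moves the eye position, so the specular term changes:
print(specular(point, normal, light, eye_a))
print(specular(point, normal, light, eye_b))

# A saccade only rotates the eyeball in place: eye_pos is unchanged,
# so nothing in this computation -- and hence the shading -- changes.
```

Running this shows two different specular values for the two head positions, while any pure rotation of the eye at `eye_a` leaves the value identical, which is the point being made above.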
The fact is that for VR to be accurate, we'd have to track eye movement, and even changes in the eye's focus, very quickly. Our brains almost certainly use both to unconsciously judge the distance of an object, and both can happen independently of head movement.
I can easily see that such adjustments of the picture aren't present in immersive 3D projections. Add to that the claim that we males are much less aware of incorrect shading, and it's easy for me to imagine that women have bigger problems under these conditions.
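One concrete example of what fast eye tracking would buy a VR system: vergence (how far the two eyes' gaze directions converge inward) encodes fixation distance, which could then drive depth-of-field and other distance cues. A minimal sketch of the geometry, assuming symmetric fixation straight ahead (my own illustration; the numbers are typical, not from the article):

```python
import math

def fixation_distance(ipd_m, vergence_deg):
    """Estimate the distance at which the eyes are converged from the
    vergence angle (the angle between the two eyes' gaze directions).

    With symmetric fixation straight ahead, each eye rotates inward by
    half the vergence angle, so: tan(vergence/2) = (ipd/2) / distance.
    """
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# An eye tracker that reports per-eye gaze directions lets the system
# measure the vergence angle and invert it to recover fixation depth.
ipd = 0.064  # typical interpupillary distance, ~64 mm
for target in (0.5, 1.0, 2.0):  # fixation distances in metres
    vergence = 2.0 * math.degrees(math.atan((ipd / 2.0) / target))
    print(f"{target} m -> vergence {vergence:.2f} deg -> "
          f"recovered {fixation_distance(ipd, vergence):.3f} m")
```

Note how small the vergence angles get beyond a metre or two, which is one reason tracking would have to be very precise as well as very fast.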