Hacker News

To me, the best part is that most of it is just an emergent property of raytracing. Once you implement the core physics needed for any raytracer, things like lenses and aperture blades just work.

I can't wait for full-scene realtime raytracing to hit the consumer market.



In a similar vein, when raycasting was being added to second life I built a rudimentary approximation of a camera. It took some time to put together an image because of the limits of SL scripting and the rate limits on raycasting, but the first successful image had incredibly obvious vignetting. The solution was changing the size of my 'aperture' equivalent.

It's always very cool when simulation behavior starts to resemble real behavior.


This is where I'm losing track of what's going on here. I generally understand the principles from at least the first half of the video (the pinhole camera and lenses) and can imagine how they work in real life, but this is Blender. It's not real life. How does this work in BLENDER? Is it because the underlying physics models in Blender are so accurate that one can recreate even these increasingly convoluted real-life effects?


Well, that's the best part of it: the underlying physics model is really easy!

A pinhole camera requires zero additional physics, just a simulation of light rays passing through a small hole. A lens just requires Snell's Law - which is pretty much trivial. Adding dispersion is just a matter of using a formula instead of a constant for the refractive index of the lens, and doing the raytracing using random wavelengths instead of a single "composite" one.
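
To make that concrete, here's a minimal sketch of the Snell's law refraction step in Python (the vector form a toy raytracer would use; `refract` is just an illustrative helper name, not anything from Blender):

```python
import math

def refract(incident, normal, n1, n2):
    """Refract a unit direction vector at a surface using Snell's law.

    incident, normal: unit 3-vectors as tuples (normal points toward the
    incoming ray); n1, n2: refractive indices of the two media.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -sum(i * n for i, n in zip(incident, normal))
    ratio = n1 / n2
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no refracted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(ratio * i + (ratio * cos_i - cos_t) * n
                 for i, n in zip(incident, normal))
```

That's the whole lens model: trace each ray to the glass surface, call this twice (entering and exiting), and focusing, depth of field, and aberrations fall out for free.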

Raytracers capable of doing this are implemented in a few hundred lines of code as a toy project. The difficult part is getting it to render quickly - and of course modeling the scene properly.


The physics of light is relatively simple. Blender wants to look photorealistic, so Cycles (a branched path tracer, commonly known as a ray tracer) almost perfectly models how light works. That means these emergent properties just work.

It wouldn't work under a rasteriser (Blender has one, called Eevee) as those operate using a complex series of approximations of reality that are more difficult to create but run far faster. But because Cycles (slower, but more realistic) operates by mimicking how light works this kind of thing is entirely possible.


Also keep in mind that Cycles and most renderers use a simplified version of the physics: rather than path-trace a full spectrum of wavelengths, they use a three-channel RGB model, which means certain optical properties behave differently. For instance, you don't get chromatic aberration with the simulated lens.
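
The reason spectral rendering matters here: the refractive index of glass depends on wavelength, so blue light bends more than red, and that's what smears colors at lens edges. A common sketch of this is Cauchy's equation; the coefficients below are roughly BK7-glass-like, purely for illustration:

```python
def cauchy_index(wavelength_um, a=1.5046, b=0.00420):
    """Cauchy's equation n(lambda) = A + B / lambda^2, a simple dispersion
    formula. Coefficients here are illustrative, roughly BK7-like glass.
    wavelength_um: wavelength in micrometers."""
    return a + b / wavelength_um ** 2
```

A spectral renderer would pick a random wavelength per ray and feed `cauchy_index(wavelength)` into the Snell's law step; an RGB renderer uses one constant index for all three channels, so every "color" bends identically and the aberration vanishes.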


Raytracing rendering moved to physically based rendering a while ago: the core idea is to approximate the physics of light as accurately as can reasonably be done while staying fast. It's not like it's solving a wave equation directly; it's using statistical sampling of a model of how light interacts with different interfaces between mediums. This is basically the easiest way to get photorealistic renders: the human brain is very good at noticing when your scene has non-physical lighting (not in a 'aha, the specular highlights violate conservation of energy' way, but in a 'this looks fake' way).
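
The "statistical sampling" part is just Monte Carlo integration. A tiny self-contained example of the same estimator a path tracer uses, applied to a lighting integral with a known answer (the cosine integral over the hemisphere equals pi):

```python
import math
import random

def hemisphere_irradiance(samples=100_000, seed=1):
    """Monte Carlo estimate of the integral of cos(theta) over the
    hemisphere of directions, which a path tracer evaluates to gather
    light at a surface point. The exact answer is pi."""
    random.seed(seed)
    pdf = 1.0 / (2.0 * math.pi)  # uniform hemisphere sampling density
    total = 0.0
    for _ in range(samples):
        cos_theta = random.random()  # uniform cos(theta) <=> uniform solid angle
        total += cos_theta / pdf     # the estimator: f(x) / pdf(x)
    return total / samples
```

A real path tracer replaces `cos_theta` with "trace a ray in that direction and see what light comes back", but the averaging trick is identical, and it's also why renders start noisy and clean up as samples accumulate.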

(This approach was started by some researchers who basically said 'well, let's make a simple scene for real, measure everything, and make our render look identical': https://en.wikipedia.org/wiki/Cornell_box )



