Cool! Just to clarify the idea: it's not rendering on screen (I was first curious how you'd do that from Rust in a simple example), but rather it saves image frames.
When I wrote my ray tracer, I set up a simple websocket server to stream pixels to a canvas element.[1] It's a trivial way to do this without a ton of GUI code.
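To sketch the idea: the renderer frames each scanline as a binary message and pushes it over a websocket, and the browser side blits it into an ImageData at the right row. The message layout below (row index, width, then raw RGBA bytes) is a made-up framing for illustration, not the commenter's actual protocol; the `websockets` package in the comments is a real Python library.

```python
import struct

def pack_scanline(y, rgba_row):
    # Hypothetical framing: 4-byte little-endian row index and pixel
    # count, followed by the raw RGBA bytes for that row. The browser
    # would decode the header and write the pixels into a canvas.
    return struct.pack("<II", y, len(rgba_row) // 4) + bytes(rgba_row)

# Server side (not run here), using the `websockets` package:
# import asyncio, websockets
# async def handler(ws):
#     for y, row in render():              # render() yields (y, rgba) pairs
#         await ws.send(pack_scanline(y, row))
# async def main():
#     async with websockets.serve(handler, "localhost", 8765):
#         await asyncio.Future()           # serve forever
# asyncio.run(main())
```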
Interesting. I'd guess you could do it without GUI, but you still need access to something like WSI / Vulkan to actually get to the screen if you want to do it directly.
In the decade I spent working on RenderMan at Pixar, I learned just how immensely useful it is to have an image viewer running in a separate process, talking to the renderer over a socket or pipe. (The Image Tool, or "It", is RenderMan's viewer.) Having it stay up even if you kill the render (or it crashes for some reason), and being able to flip back and forth to easily compare test renders across recompiles, is game-changing.
If I were to start writing a new renderer, the first thing I'd do is to hook it up to an external image viewer over some protocol. These days, I find myself liking TEV (https://github.com/Tom94/tev) a lot as a simple open-source image viewer that supports this and most other basic features that I'd want. See the links in the README for Python and Rust implementations of its protocol.
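The renderer's side of such a setup is just length-prefixed packets written to a TCP socket the viewer listens on. The sketch below uses a hypothetical wire format (length prefix, opcode, null-terminated image name, dimensions, pixel data); tev's real format is defined in its own repo, so treat the opcode and layout here purely as placeholders for the shape of the idea.

```python
import struct

UPDATE_IMAGE = 3  # hypothetical opcode, not tev's actual value

def make_update_packet(name, width, height, pixels):
    # Null-terminated image name lets the viewer track multiple renders.
    payload = struct.pack("<B", UPDATE_IMAGE) + name.encode() + b"\0"
    payload += struct.pack("<II", width, height) + bytes(pixels)
    # Length prefix counts the whole packet, itself included.
    return struct.pack("<I", 4 + len(payload)) + payload

# Usage (viewer process not included):
# import socket
# s = socket.create_connection(("localhost", 14158))
# s.sendall(make_update_packet("test_render", 2, 2, bytes(16)))
```

The nice property is exactly the one described above: the viewer process owns the window and the image history, so the renderer can crash, be killed, and be recompiled freely.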
“Minimal code to put pixels on the screen?” is a question I’ve seen often enough that I’ve made a little gist to link whenever it comes up: https://gist.github.com/CoryBloyd/6725bb78323bb1157ff8d4175d... It requires SDL (https://www.libsdl.org/), but making a window and putting some pixels in it, on all the world’s varied platforms, is the premier feature of that lib.
Yeah, I figured you could use SDL. But probably more interesting to do it more directly with Rust and some WSI functions to get the needed surface (though it would be a lot more code).
With multithreaded rendering, is it possible to use SDL?
Anecdotally, in my CPU-based ray tracers (I have one in C++ and one in Python so far), I have found that SDL-based rendering causes noticeable slowdowns and becomes quite useless after the initial novelty wears off.
I've checked the code under the hood of SDL and it does pretty much the best you can do for fast uploads of an image from the CPU to the GPU.
My CPU-based ray tracer uses the code I linked. SDL_LockTexture, go wide with threads writing into the locked buffer, SDL_UnlockTexture. If I skip the actual ray tracing, I get 1400 FPS uploading a 1024x1024 image.
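The lock/write-wide/unlock pattern doesn't need anything SDL-specific to illustrate: each thread owns a disjoint band of rows in one shared pixel buffer, so no synchronization is needed beyond the final join. A minimal sketch in Python, with a `bytearray` standing in for the locked texture memory (the SDL calls would bracket the `render` function):

```python
import threading

def render_band(buf, width, y0, y1):
    # Each thread writes only its own rows, so the bands never overlap
    # (this mirrors multiple threads writing into SDL's locked texture).
    for y in range(y0, y1):
        for x in range(width):
            i = (y * width + x) * 4
            buf[i:i + 4] = bytes((x % 256, y % 256, 0, 255))

def render(width, height, nthreads=4):
    buf = bytearray(width * height * 4)   # the "locked" RGBA pixel buffer
    band = height // nthreads
    threads = [threading.Thread(
                   target=render_band,
                   args=(buf, width, t * band,
                         height if t == nthreads - 1 else (t + 1) * band))
               for t in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return buf  # in SDL: UnlockTexture, RenderCopy, RenderPresent here
```

In the real C code this corresponds to `SDL_LockTexture` before spawning the workers and `SDL_UnlockTexture` after the join, which is what keeps the upload path fast: the threads write directly into the buffer SDL hands back.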