
ray tracer

you can find the code for this project on gitlab.

a youtube idea

what made me want to write a tiny ray tracer is a series of youtube videos by the amazing sebastian lague.

seeing the pretty pictures produced by relatively simple code made me want to try to implement something similar.

but what is ray-tracing

the reversibility of light

the idea behind ray-tracing is relatively straightforward: it consists of simulating rays of light.

there is, however, an assumption, which is actually used in physics: given a light ray coming from a source point s and reaching a point m, a light ray leaving m in the opposite direction would follow the same path and reach s.

this allows shooting rays from the camera instead of shooting rays from each and every light source in the rendered scene.

a ray can then be fired from the camera in the direction of each pixel of the screen, which is represented by a square orthogonal to the direction of the camera.

a camera shooting rays in different directions
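as a sketch, generating one unit ray direction per pixel could look like this (the camera model, names, and parameters are my assumptions, not the project's actual code):

```python
import math

def camera_rays(width, height, fov_deg=60.0):
    """one unit ray direction per pixel of a virtual screen.

    the camera sits at the origin and looks down -z; the screen is a
    square orthogonal to the viewing direction.
    """
    half = math.tan(math.radians(fov_deg) / 2)
    rays = []
    for j in range(height):
        row = []
        for i in range(width):
            # map pixel centers to [-half, half] on both axes
            x = ((i + 0.5) / width * 2 - 1) * half
            y = (1 - (j + 0.5) / height * 2) * half
            # normalize (x, y, -1) into a unit direction
            length = math.sqrt(x * x + y * y + 1)
            row.append((x / length, y / length, -1 / length))
        rays.append(row)
    return rays
```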

ray propagation

after a ray is fired from the camera, there are two possible situations:

- the ray hits an object in the scene: it bounces off the surface in a new direction, and its color is updated according to the material of the object
- the ray escapes the scene: it takes the color of the environment and its journey ends

specular direction

because each ray follows one particular path, which is random due to the random diffuse direction introduced at each collision, firing only one ray per pixel is not enough: the resulting color would not be representative of the light actually received by the camera.
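the two kinds of bounce direction can be sketched in a few lines; the function names and the rejection-sampling approach for the diffuse direction are my own assumptions, not the project's actual code:

```python
import math
import random

def reflect(d, n):
    """mirror (specular) reflection of direction d about unit normal n."""
    dot = sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - 2 * dot * n[i] for i in range(3))

def random_diffuse(n):
    """random unit direction in the hemisphere around unit normal n,
    sampled by rejection: draw a random direction on the sphere and
    keep it only if it points away from the surface."""
    while True:
        v = tuple(random.gauss(0, 1) for _ in range(3))
        length = math.sqrt(sum(c * c for c in v))
        v = tuple(c / length for c in v)
        if sum(v[i] * n[i] for i in range(3)) > 0:
            return v
```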

therefore, instead of firing one ray per pixel, many rays are fired, and their colors are averaged to obtain the final color of the pixel.
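averaging many stochastic samples into one pixel color could be sketched like this (`trace_once` and its signature are hypothetical):

```python
def pixel_color(trace_once, samples=128):
    """average many stochastic ray colors into one pixel color.

    trace_once() fires a single ray through the pixel and returns an
    (r, g, b) tuple; it is assumed to be random because of the diffuse
    bounces introduced at each collision.
    """
    r = g = b = 0.0
    for _ in range(samples):
        cr, cg, cb = trace_once()
        r += cr
        g += cg
        b += cb
    return (r / samples, g / samples, b / samples)
```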

some kind of anti-aliasing

to smoothen the image, the rays fired from each pixel can be given slightly different directions. this is like adding a little bit of divergence to the rays.

i used the same technique as in my fractal renderer, which relies on fibonacci lattices, to get a uniform distribution of direction offsets.

without divergence / with divergence
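a fibonacci lattice on a disk gives roughly uniform offsets; here is a minimal sketch, assuming the offsets are applied to the ray directions in the image plane (the function name and parameters are mine, not the project's):

```python
import math

def fibonacci_disk(n, radius):
    """n roughly uniform 2d offsets inside a disk of the given radius,
    placed on a fibonacci lattice: radius grows like sqrt(i/n) while
    the angle advances by the golden angle at every point."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        r = radius * math.sqrt((i + 0.5) / n)
        theta = i * golden_angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```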

signed distance functions

to render more complex shapes, it is possible to use signed distance functions (sdfs). those are functions that take a point in 3d space and output the minimum distance to the associated object, with a negative sign when the point is inside it. a lot of them are illustrated on this amazing page by inigo quilez.

2d sdfs for different shapes (image source)
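for instance, here are two classic sdfs in python, following inigo quilez's formulas (the function names are mine):

```python
import math

def sd_sphere(p, radius):
    """signed distance from point p to a sphere centered at the origin:
    negative inside, zero on the surface, positive outside."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - radius

def sd_box(p, half_extents):
    """signed distance from p to an axis-aligned box centered at the
    origin, following inigo quilez's formula."""
    q = [abs(p[i]) - half_extents[i] for i in range(3)]
    outside = math.sqrt(sum(max(c, 0.0) ** 2 for c in q))
    inside = min(max(q[0], max(q[1], q[2])), 0.0)
    return outside + inside
```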

ray-marching

the issue with a general sdf object is that there is no analytical formula for the point where a ray collides with the object, let alone an analytical formula for the normal at that point.

a common technique for finding these essential components is ray-marching.

ray-marching consists of moving along the ray direction by small steps until the distance to the sdf object becomes very small; the associated point is then used as the collision point.

steps taken along the ray direction until hitting the object

the idea is to:

- evaluate the sdf at the current point along the ray
- step forward along the ray direction
- repeat until the returned distance falls below a small threshold, and use the resulting point as the collision point

note that this has to be done inside a volume, for example a sphere, around the sdf object, because we can't afford to ray-march every single ray fired from the camera.
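the stepping loop can be sketched with sphere tracing, a common variant where the sdf value itself is used as a safe step size (all names and thresholds here are assumptions, not the project's actual code):

```python
def ray_march(origin, direction, sdf, max_steps=256, eps=1e-4, max_dist=100.0):
    """sphere tracing: advance along the ray by the sdf value until the
    distance is tiny (hit) or grows too large (miss).

    direction must be a unit vector; returns the collision point, or
    None when the ray never gets close to the object.
    """
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return p  # close enough: use this point as the collision point
        t += d  # the sdf guarantees we cannot overshoot the surface
        if t > max_dist:
            break
    return None
```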

what about normals?

now that it is possible to get the point where the ray collides with the sdf object, we need the normal at that point, otherwise shading is impossible.

what's quite convenient is that the gradient of the signed distance function is, by definition, a vector orthogonal to the surfaces of equal distance of the function. what's even more convenient is that it is easy to get a good numerical approximation of the gradient. all that's left to do is normalize it, and we get the normal!

the gradient at point x and three surfaces of equal distance
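a central-difference approximation of the gradient, normalized into a unit normal, can be sketched as follows (the function name and the step size h are assumptions):

```python
def sdf_normal(sdf, p, h=1e-4):
    """approximate the surface normal at p as the normalized gradient
    of the sdf, using central finite differences on each axis."""
    grad = []
    for i in range(3):
        lo, hi = list(p), list(p)
        lo[i] -= h
        hi[i] += h
        grad.append((sdf(hi) - sdf(lo)) / (2 * h))
    norm = sum(g * g for g in grad) ** 0.5
    return tuple(g / norm for g in grad)
```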

some renders

here are some images i've rendered using my software :)

i like how it turns out in the dark
5 balls stuck in between two infinite mirrors
a cube minus a sphere