ray tracer
you can find the code for this project on gitlab.
a youtube idea
what made me want to write a tiny ray tracer is a series of youtube videos by the amazing sebastian lague.
seeing the pretty pictures produced by a relatively simple code made me want to try and implement something similar.
but what is ray-tracing ?
the reversibility of light
the idea behind ray-tracing is relatively straightforward: it consists of simulating rays of light.
there is, however, an assumption, one actually used in physics: if a light ray coming from a source point s reaches a point m, then a ray leaving m in the opposite direction follows the same path and reaches s.
this allows shooting rays from the camera instead of shooting rays from each and every light source in the rendered scene.
a ray can then be fired from the camera towards every pixel of the screen, which is represented by a square orthogonal to the direction of the camera.
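as an illustration, here is a tiny python sketch of how such per-pixel ray directions could be generated (the conventions, a camera at the origin looking down +z with a given vertical field of view, are assumptions made for the sketch, not necessarily those of the actual code):

```python
import math

def camera_rays(width, height, fov_deg):
    # one normalized ray direction per pixel, for a camera at the origin
    # looking down +z (conventions chosen for this sketch)
    half = math.tan(math.radians(fov_deg) / 2)
    aspect = width / height
    rays = []
    for py in range(height):
        for px in range(width):
            # map pixel centers to the image plane at z = 1
            x = (2 * (px + 0.5) / width - 1) * half * aspect
            y = (1 - 2 * (py + 0.5) / height) * half
            norm = math.sqrt(x * x + y * y + 1)
            rays.append((x / norm, y / norm, 1 / norm))
    return rays
```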
ray propagation
after a ray is fired from the camera, there are two possible situations:
- the ray does not hit any object and "escapes to infinity": its color is set to some default environmental light
- the ray hits an object:
  - the color of the pixel is updated based on the color of the object
  - the point of collision and the normal of the object at that point are computed and used to find the specular direction and a random direction called the diffuse direction
  - a new ray is fired in a direction that is a linear interpolation between the diffuse direction and the specular direction, based on the smoothness of the object, which expresses whether the object is mirror-smooth or matte
  - the same process is then applied to the new ray
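a simplified python sketch of this bounce logic (the function names and the rejection sampling used for the diffuse direction are just one way to do it):

```python
import math
import random

def reflect(d, n):
    # specular direction: mirror the incoming direction d about the normal n
    k = 2 * (d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    return (d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2])

def random_diffuse(n):
    # random unit vector in the hemisphere around n, by rejection sampling
    while True:
        v = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        m = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
        if 0 < m <= 1:
            v = (v[0] / m, v[1] / m, v[2] / m)
            if v[0] * n[0] + v[1] * n[1] + v[2] * n[2] > 0:
                return v

def bounce_direction(d, n, smoothness):
    # lerp from the diffuse to the specular direction, then renormalize
    s = reflect(d, n)
    r = random_diffuse(n)
    v = tuple(r[i] + (s[i] - r[i]) * smoothness for i in range(3))
    m = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / m, v[1] / m, v[2] / m)
```

with smoothness 1 the bounce is a perfect mirror reflection, with smoothness 0 it is fully diffuse.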
because each ray follows one particular path, made random by the diffuse direction introduced at each collision, firing a single ray per pixel is not enough: the color obtained would not be representative of the light actually received by the camera.
therefore, many rays are fired per pixel, and their colors are averaged to obtain the final color of the pixel.
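the averaging itself is as simple as it sounds; a minimal sketch, where sample is any function that traces one ray and returns an rgb color:

```python
def average_color(sample, n_samples):
    # fire n_samples rays through the same pixel and average their colors
    r = g = b = 0.0
    for _ in range(n_samples):
        cr, cg, cb = sample()
        r, g, b = r + cr, g + cg, b + cb
    return (r / n_samples, g / n_samples, b / n_samples)
```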
some kind of anti-aliasing
to smooth the image, the rays fired from each pixel can be given slightly different directions. this is like adding a little bit of divergence to the rays.
i used the same technique as in my fractal renderer, which relies on fibonacci lattices, to get a uniform distribution of direction offsets.
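for reference, here is a minimal python sketch of a fibonacci lattice on the unit sphere (in practice the offsets would be restricted to a small cone around each pixel direction, which this sketch doesn't do):

```python
import math

def fibonacci_sphere(n):
    # n roughly uniform unit vectors via a fibonacci lattice on the sphere
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n    # uniform in height
        r = math.sqrt(1 - z * z)     # radius of the horizontal circle
        theta = golden_angle * i     # spin by the golden angle each step
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points
```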
signed distance functions
to render more complex shapes, it is possible to use signed distance functions (sdfs). these are functions that take a point in 3d space and output the minimum distance to the associated object. many of them are illustrated on this amazing page by inigo quilez.
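the simplest example is probably the sphere: its sdf is just the distance to the center minus the radius. a quick python version:

```python
import math

def sd_sphere(p, center, radius):
    # distance to the surface: negative inside, zero on it, positive outside
    dx = p[0] - center[0]
    dy = p[1] - center[1]
    dz = p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius
```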
ray-marching
the issue with a general sdf object is that there is no analytical formula to get the point where a ray will collide with the object, let alone an analytical formula for the normal at that point.
a standard technique for finding these essential components is ray-marching.
ray-marching consists of moving along the ray direction in steps until the distance to the sdf object becomes very small; the point reached is then used as the collision point.
the idea is to:
- compute the distance d to the nearest point of the object using the sdf
- if the distance d is smaller than a small epsilon, stop iterating and keep the current position as the collision point
- take a step along the ray direction by exactly the distance d
- repeat
note that this has to be done inside a bounding volume, for example a sphere around the sdf object, because we can't afford to ray-march every single ray fired from the camera.
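put together, the loop above looks something like this in python (eps, max_steps and max_dist are arbitrary values chosen for the sketch):

```python
def ray_march(origin, direction, sdf, eps=1e-4, max_steps=256, max_dist=100.0):
    # step along the ray by the distance returned by the sdf: this is always
    # safe, since the object is at least that far away in every direction
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:
            return p  # close enough: use this as the collision point
        t += d
        if t > max_dist:
            break
    return None  # the ray escaped without hitting the object
```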
what about normals ?
now that it is possible to get the point where the ray collides with the sdf object, we need the normal at that point, otherwise shading is impossible.
what's quite convenient is that the gradient of the signed distance function is, by definition, a vector orthogonal to the function's surfaces of equal distance. what's even more convenient is that a good numerical approximation of the gradient is easy to compute. all that's left to do is normalize it, and we get the normal !
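concretely, a central-difference version of this could look like the following (the step size h is an arbitrary choice for the sketch):

```python
def sdf_normal(sdf, p, h=1e-5):
    # approximate the gradient of the sdf by central differences, normalize it
    grad = []
    for i in range(3):
        lo = list(p)
        hi = list(p)
        lo[i] -= h
        hi[i] += h
        grad.append((sdf(hi) - sdf(lo)) / (2 * h))
    m = (grad[0] ** 2 + grad[1] ** 2 + grad[2] ** 2) ** 0.5
    return (grad[0] / m, grad[1] / m, grad[2] / m)
```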
some renders
here are some images i've rendered using my software :)