Ray Tracing in LinearMAX
This paper details the ray tracing system LinearMAX uses to render scenes.
Ray tracing is a rendering technique that creates photorealistic scenes by mimicking, in reverse, how our eyes actually see. Animation studios use ray tracing to create beautiful animated works, video game studios use it to give players greater realism and better graphics, and architects and car manufacturers use it to model buildings and cars as realistically as they can. These are only a few of its modern uses, and as computers become more powerful and AI-accelerated ray tracing hits its stride, the technique is only going to become more popular.
How it Works:
In essence, the ray tracing system implemented in LinearMAX solves the “rendering equation” at each pixel on the screen.
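For reference, the equation in its standard form (the terms below refer to it) is:

Lo(x, ωo) = Le(x, ωo) + ∫Ω fr(x, ωi, ωo) Li(x, ωi) (n · ωi) dωi

where x is the point being shaded, ωo is the direction toward the camera, ωi ranges over the hemisphere Ω above the surface, and n is the surface normal.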
While this equation looks intimidating, once you understand how it works it is not as hard to implement as one would think. To define terms: Lo is the total outgoing light at the point we care about, Le is the light that point emits on its own, and the integral is the total light reflected toward the camera from every direction in the hemisphere above the surface.
We will tackle the easiest parts of the solution first and finish with the more complex sections. Keep in mind that this is not a tutorial on the specific implementation, but an overview of the technique and how it is solved at a high level of abstraction.

First, to solve for the emission value, all we need is the emission color of the object, which can be set as one of its material properties. Easy enough.

Next we tackle the integral. Since ideally we would gather the light from every possible direction in a hemisphere around the hit point, and each of those directions requires solving the same equation again, the integral is infinitely recursive and we can't solve it directly. We can, however, get very close using Monte Carlo integration, which randomly samples directions in the hemisphere and averages the results. The great thing about Monte Carlo integration is that it is guaranteed to converge on the actual value of the integral as the number of samples grows.

The first part of the integrand we have to solve for is the Bidirectional Reflectance Distribution Function, the "fr" term. The BRDF describes the color and material properties of the surface the pixel we care about belongs to. It describes how the simulated light rays interact with the object and can be driven by an albedo, specular, and smoothness value: the smoother the material, the more heavily the specular color is used compared to the albedo.

Next, we multiply the BRDF by the dot product of the surface normal and the hemisphere sample's direction, which weights rays of light that strike the surface more directly higher. Instead of weighting the value we get from each ray after the fact, we can take better samples of the hemisphere, so that directions are selected with probability proportional to the cosine between the sample direction and the surface normal. This cosine-weighted sampling drastically speeds up noise reduction, since important samples become far more common than ineffectual ones.
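As a rough sketch of the sampling step, here is cosine-weighted hemisphere sampling and the Monte Carlo estimate it produces, written in Python rather than the HLSL LinearMAX actually uses. All names here are illustrative, not LinearMAX's code; the sampler uses the standard disk-projection (Malley's) construction:

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_weighted_sample(n):
    """Sample a hemisphere direction around normal `n` with
    pdf(w) = cos(theta) / pi: sample a uniform disk, project up."""
    r = math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - r * r))
    # Build an orthonormal basis (t, b, n) around the normal.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    t = normalize(cross(n, a))
    b = cross(n, t)
    return tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))

# Monte Carlo check: estimate the hemisphere integral of cos^2(theta),
# whose exact value is 2*pi/3. With cosine-weighted sampling each sample
# contributes f(w) / pdf(w) = pi * cos(theta).
random.seed(0)
n = (0.0, 0.0, 1.0)
samples = [cosine_weighted_sample(n) for _ in range(100_000)]
estimate = sum(math.pi * dot(w, n) for w in samples) / len(samples)
```

Because the pdf is proportional to the cosine term in the integrand, the cosine never has to be applied as an explicit weight; it is baked into which directions get sampled.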
Finally, the last term in the equation is the actual value of the light coming into the point from the sampled direction. This is the difficult part, because that incoming light is defined recursively by the same equation: for each ray we have to solve the equation again at every bounce. This makes it a very expensive calculation when it must be done for every ray shot out of the camera. We run all of this code on the GPU, and in LinearMAX it is written in HLSL, High Level Shader Language.
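To make the recursion concrete, here is a toy grayscale version in Python (again, not LinearMAX's HLSL). The escape probability, albedo, sky radiance, and bounce limit are all made-up values for illustration. With cosine-weighted sampling the cosine and the pdf cancel, so each diffuse bounce simply multiplies the recursive result by the albedo:

```python
import random

MAX_BOUNCES = 5
SKY_RADIANCE = 1.0   # light returned when a ray misses everything
ALBEDO = 0.8         # reflectance of the one diffuse surface in the toy scene
EMISSION = 0.0       # the surface emits no light of its own

def trace(depth=0):
    """One grayscale sample of incoming light, in recursive form.
    In this toy 'scene' a ray escapes to the sky with probability 0.5;
    otherwise it hits the diffuse surface and bounces again."""
    if depth > MAX_BOUNCES:
        return 0.0                      # terminate the recursion
    if random.random() < 0.5:
        return SKY_RADIANCE             # ray missed: sample the sky
    # Hit: outgoing light = emitted light + albedo * recursive incoming light
    return EMISSION + ALBEDO * trace(depth + 1)

# Average many samples, exactly as Monte Carlo integration prescribes.
random.seed(1)
n_samples = 200_000
pixel = sum(trace() for _ in range(n_samples)) / n_samples
```

A real renderer replaces the coin flip with a scene intersection test, and on the GPU the recursion is typically unrolled into a loop that carries a running "throughput" factor instead of calling itself.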
This extremely brief overview of the ray tracing system implemented in LinearMAX aims to give the reader a very basic understanding of the process by which ray tracing can be handled.