Thursday, August 27, 2015

Voxel Occlusion

Real-time rendering systems (like the ones in game engines) have two big problems to solve: first, determining what is visible from the camera's point of view, then rendering what is visible.

While rendering is now largely a solved problem, finding out what is potentially visible remains difficult. There is a long history of hackery around this topic: BSP trees, PVS, portals, etc. (The acronyms in this case make it sound simpler than it is.) These approaches perform well in some cases, then fail miserably in others. What works indoors breaks in large open spaces. To make it worse, these visibility structures take a long time to build. For an application where the content constantly changes, they are a very poor choice or not practical at all.

Occlusion testing, on the other hand, is a dynamic approach to visibility. The idea is simple: using a simplified model of the scene, we can predict when some portions of the scene become eclipsed by other parts of the scene.

The challenge is how to do it very quickly. If the test is not fast enough, it could still be faster to render everything than to test and then render only the visible parts. It is necessary to find simplified models of the scene geometry. Naturally, these simple, approximate models must cover as much of the original content as possible.

Voxels and clipmap scenes make it very easy to perform occlusion tests. I wrote about this before: Covering the Sun with a finger.

We just finished a new, improved version of this system, and we were ecstatic to see how good the occluder coverage turned out to be. In this post I will show how it can be done.

Before anything else, here is a video of the new occluder system in action:


A Voxel Farm scene is broken down into chunks. For each chunk, the system computes several quads (four-vertex polygons) that are fully inscribed in the solid section of the chunk. They are also as large as possible. A very simple example is shown here, where a horizontal platform produces a series of horizontal quads:


These quads do not need to be axis-aligned. As long as they remain inside the solid parts of the cell, they can go in any direction. The following image shows occluder quads going at different angles:


Here is how it works:

Each chunk is generated or loaded as a voxel buffer. You can imagine this as a 3D matrix, where each element is a voxel.
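
To keep the later steps concrete, here is a minimal sketch of what such a buffer could look like. The names, the 32x32x32 resolution and the one-byte-per-voxel format are assumptions made for illustration, not the actual Voxel Farm layout:

```cpp
#include <array>
#include <cstdint>

// Hypothetical dense voxel buffer for one chunk. The 32^3 size and the
// one-byte-per-voxel layout are assumptions for illustration only.
constexpr int SIZE = 32;

struct VoxelBuffer {
    std::array<uint8_t, SIZE * SIZE * SIZE> data{}; // 0 = empty, non-zero = solid

    bool isSolid(int x, int y, int z) const {
        return data[(z * SIZE + y) * SIZE + x] != 0;
    }
};
```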

The voxel buffer is scanned along each main axis. The following images depict the process of scanning along one direction. Below is a representation of the 3D buffer as a slice. If this were a top-down view, you could imagine it as a vertical wall going at an angle:

For each direction, two 2D voxel buffers are computed. One stores where the ray enters the solid and the second where the ray exits the solid.
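
Here is a rough sketch of that scan for a single direction, reusing the hypothetical VoxelBuffer above. Only the Z axis is shown, and the real implementation certainly differs, but it captures the idea of recording an entry depth and an exit depth per column:

```cpp
#include <vector>

// Entry/exit depth maps for one scan direction. A value of -1 means the ray
// through that column never hit a solid voxel.
struct DepthMaps {
    std::vector<int> entry; // depth where the ray first enters solid material
    std::vector<int> exit;  // depth where the ray last leaves solid material
};

// Scan along +Z: for every (x, y) column, record the first and last solid
// voxel. This simplified version ignores interior gaps in a column.
DepthMaps scanAlongZ(const VoxelBuffer& chunk) {
    DepthMaps maps;
    maps.entry.assign(SIZE * SIZE, -1);
    maps.exit.assign(SIZE * SIZE, -1);
    for (int y = 0; y < SIZE; ++y)
        for (int x = 0; x < SIZE; ++x)
            for (int z = 0; z < SIZE; ++z)
                if (chunk.isSolid(x, y, z)) {
                    if (maps.entry[y * SIZE + x] < 0)
                        maps.entry[y * SIZE + x] = z; // first solid hit
                    maps.exit[y * SIZE + x] = z;      // keep updating: last solid hit
                }
    return maps;
}
```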

For each 2D buffer, the maximum solid rectangle is computed. A candidate rectangle can grow if the neighboring point in the buffer is also solid and its depth value does not differ by more than a given threshold.
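
Below is a brute-force sketch of that growth rule, continuing with the SIZE constant and the 2D depth buffers from the sketches above. The search itself is surely not how the engine does it; the interesting part is the acceptance test: every cell must be solid, and the depth spread across the rectangle must stay within the threshold:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

struct Rect { int x0, y0, x1, y1; }; // inclusive cell bounds inside a 2D buffer

// One way to read the growth rule: a rectangle is acceptable if every cell in
// it is solid and the depth spread stays within the threshold.
bool rectIsValid(const std::vector<int>& depth, Rect r, int threshold) {
    int minD = INT_MAX, maxD = INT_MIN;
    for (int y = r.y0; y <= r.y1; ++y)
        for (int x = r.x0; x <= r.x1; ++x) {
            int d = depth[y * SIZE + x];
            if (d < 0) return false; // empty column: the rectangle cannot cover it
            minD = std::min(minD, d);
            maxD = std::max(maxD, d);
        }
    return maxD - minD <= threshold;
}

// Greedy growth from a seed cell (assumed solid): keep extending right and
// down while the rectangle remains acceptable. A real implementation would
// search far more cleverly than this.
Rect growFromSeed(const std::vector<int>& depth, int sx, int sy, int threshold) {
    Rect r{sx, sy, sx, sy};
    bool grew = true;
    while (grew) {
        grew = false;
        Rect right{r.x0, r.y0, r.x1 + 1, r.y1};
        if (right.x1 < SIZE && rectIsValid(depth, right, threshold)) { r = right; grew = true; }
        Rect down{r.x0, r.y0, r.x1, r.y1 + 1};
        if (down.y1 < SIZE && rectIsValid(depth, down, threshold)) { r = down; grew = true; }
    }
    return r;
}
```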

Each buffer can produce one quad, shown in blue and green in the following image:
Here is another example where a jump in depth (5 to 9) makes the green occluder much smaller:

In fact, if we run the function that finds the maximum rectangle again on the second 2D buffer, it will return another quad, this time covering the missing piece:
Once we have the occluders for all chunks in a scene, we can test very quickly whether a given chunk in the scene is hidden behind other chunks. Our engine does this using a software rasterizer, which renders the occluder quads into a depth buffer. This buffer can then be used to test all chunks in the scene. If a chunk's area on screen is covered in the depth buffer by closer depths, it means the chunk is not visible.
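
Here is a hedged sketch of that final test. The names, the float depth convention (smaller means closer) and the assumption that the chunk's screen bounds have already been projected and clamped are all placeholders, and the rasterizer that fills the buffer from the occluder quads is omitted:

```cpp
#include <vector>

// A small software depth buffer; the resolution and the "smaller is closer"
// float convention are assumptions.
struct OcclusionBuffer {
    int res;
    std::vector<float> depth;
    explicit OcclusionBuffer(int r) : res(r), depth(r * r, 1.0f) {}
};

// Conservative test for one chunk. The caller projects the chunk's bounding
// box to screen space, clamps it to the buffer, and passes the chunk's
// nearest depth. The chunk is hidden only if every texel it touches already
// holds a strictly closer occluder depth.
bool chunkIsOccluded(const OcclusionBuffer& buf,
                     int minX, int minY, int maxX, int maxY,
                     float chunkNearDepth) {
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x)
            if (buf.depth[y * buf.res + x] >= chunkNearDepth)
                return false; // this texel is not covered by a closer occluder
    return true;
}
```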

This depth buffer can be very low resolution. We currently use a 64x64 buffer to make sure the software rasterization is fast. Here you can see what the buffer looks like:


It is also possible to skip our rasterizer test entirely and feed the quads to a system like Umbra. What really matters is not how the test is performed, but how good and simple the occluder quads are.

While this can still be improved, I'm very happy with this system. It is probably the best optimization we have ever done.


Wednesday, August 12, 2015

Texture Grain

So far we have been using triplanar mapping for texturing voxels. This is good enough for terrain, but it does not always look right for man-made structures. With triplanar mapping, textures are always aligned to the universal coordinate planes. If you are building features at an angle, the texture orientation ignores that and continues to follow the universal grid. That has been a thorny problem for all voxel creators and enthusiasts out there.
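
For context, this is roughly what triplanar mapping does: the UVs come straight from the world position projected onto the three coordinate planes, blended by the surface normal. The generic sketch below (not Voxel Farm's actual shader) shows why an angled beam cannot influence the grain, since the feature's own orientation never enters the computation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Blend weights for the three planar projections, taken from the normal.
Vec3 triplanarWeights(Vec3 n) {
    Vec3 w{std::fabs(n.x), std::fabs(n.y), std::fabs(n.z)};
    float sum = w.x + w.y + w.z;
    return {w.x / sum, w.y / sum, w.z / sum};
}

// The three sets of UVs are just the world position dropped onto the
// coordinate planes; the geometry's orientation plays no part in them.
Vec2 uvPlaneX(Vec3 p) { return {p.y, p.z}; }
Vec2 uvPlaneY(Vec3 p) { return {p.x, p.z}; }
Vec2 uvPlaneZ(Vec3 p) { return {p.x, p.y}; }
```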

It does not have to be like that anymore. Take a look at this previously impossible structure:


It shows two wooden beams going at an angle. The texture grain properly follows the line direction.

It is not only about lines; this can be applied to any tool. In the following video you can see how curves also preserve the texture grain:


This new ability is a nice side effect of including UV mapping information in voxels. It is no different from the zebras and leopards we stamped into the world in earlier videos, except that in this case the tool comes up with the UV mapping information on the fly.
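
As a purely hypothetical illustration of the on-the-fly part: a line tool could assign each voxel a U coordinate from its position projected onto the stroke direction, so the grain follows the beam instead of the world axes. The function and its parameters below are invented for the example:

```cpp
struct Vec3f { float x, y, z; };

// Hypothetical: U coordinate for a point on a beam defined by a start point
// and a normalized direction, so the texture repeats every tileSize world
// units along the beam. V could come from a perpendicular axis the same way.
float beamU(Vec3f p, Vec3f beamStart, Vec3f beamDir, float tileSize) {
    Vec3f d{p.x - beamStart.x, p.y - beamStart.y, p.z - beamStart.z};
    float along = d.x * beamDir.x + d.y * beamDir.y + d.z * beamDir.z;
    return along / tileSize;
}
```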

UV-mapped voxels are not yet released in Voxel Farm; we only just got to work on the tools so they can take full advantage of them, but it really looks to me like this will take our engine to a whole new level.