We are proud to announce that our Volume Cone Renderer is now operational, after months of research and intensive development of never-before-seen 3D volumetric data compression technology.
We have been quiet about this because it was a project with a high risk of failure (it might never have shipped; we could not tell in advance whether it was feasible).
I can finally say that I have achieved my lifelong dream of being involved in bringing the first next-gen ray-traced game to market, ahead of Epic or Crytek.
Obviously this has not been possible without extending Irrlicht beyond its limits. We now have:
- OpenCL devices that can be linked to the OpenGL driver or used separately (Intel HD graphics can double as an OpenCL device alongside your NVIDIA card)
- OpenGL and OpenCL interop
- OpenGL compute shaders, sparse and bindless textures
Just by integrating compute shaders we were able to vastly speed up the FFT water computation and enable proper HDR.
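The core of FFT water is turning a wave spectrum into a spatial heightfield with an inverse 2D FFT; that transform is what the compute shaders accelerate. A toy sketch of the idea (the spectrum shape, grid size, and falloff here are our own illustrative choices, not the actual shader's parameters):

```python
import numpy as np

# Toy FFT-water illustration: synthesize a random wave spectrum, then invert
# it into an N x N heightfield. A real ocean sim would use a Phillips-style
# spectrum; the crude 1/k falloff below is just a stand-in for the demo.
rng = np.random.default_rng(0)
N = 64
kx, ky = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
k = np.hypot(kx, ky)
amplitude = np.where(k > 0, 1.0 / k, 0.0)          # more energy at low freq
phase = rng.uniform(0, 2 * np.pi, (N, N))           # random wave phases
spectrum = amplitude * np.exp(1j * phase)
heights = np.fft.ifft2(spectrum).real               # spatial water heights
print(heights.shape)  # (64, 64)
```

Animating the phases per frame and re-running the inverse FFT gives moving water, which is why offloading it to a compute shader pays off.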
Here is the first screenshot from our Volume Compressor (it voxelizes 3D meshes and compresses them into our own format).
We are now working really hard to transcode all of our assets into the 3D volumetric format. Some of the assets (like the grass blocks) basically have to be sculpted or created in high resolution in Blender and ZBrush, as no such game assets exist. For the compression alone (we are squeezing 48 GB of data into 2 GB, a 24:1 ratio) we have rented 4 Amazon GPGPU instances.
It will be yet another 4 or 5 months before all the assets are ready in HD and volumetric formats
The compressed voxel format and decompression algorithm will be an open spec, unencumbered by any patents, to be released around the time version 1.0 of BaW comes out. We really look forward to it becoming the standard for next-generation assets. From what I can divulge so far, we record the following material properties at every point:
--Albedo Color Channel + Transparency/Opacity
--Refraction Index + Physical Density
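To make the per-point channels concrete, here is a hypothetical packing of one uncompressed voxel record. The layout, field widths, and names are entirely our assumptions for illustration; the actual spec is unreleased.

```python
import struct

# Hypothetical voxel record layout (NOT the real BaW spec, which is not yet
# published): albedo + opacity as four uint8 channels, then refraction index
# and physical density as two float32 values.
def pack_voxel(albedo_rgba, refraction_index, density):
    r, g, b, a = (int(round(c * 255)) for c in albedo_rgba)
    return struct.pack("<4B2f", r, g, b, a, refraction_index, density)

record = pack_voxel((1.0, 0.5, 0.0, 1.0), 1.33, 0.8)
print(len(record))  # 12 bytes per voxel in this sketch, before compression
```

Even a modest layout like this shows why compression matters: a 1024^3 volume of 12-byte voxels is 12 GB raw.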
We retrieve normals from the partial derivatives of the tricubic interpolation functions for the optical density, although we are experimenting with a special normal-compression algorithm for higher-quality lighting.
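The idea of deriving a normal from the density field can be sketched with central differences in place of the analytic tricubic derivatives the post describes (a simplification on our part): the normal is the normalized gradient of the scalar density.

```python
import numpy as np

# Sketch: surface normal as the normalized gradient of a scalar density
# field, using central differences rather than the analytic tricubic
# derivatives the renderer actually uses.
def density_normal(field, x, y, z):
    grad = np.array([
        field[x + 1, y, z] - field[x - 1, y, z],
        field[x, y + 1, z] - field[x, y - 1, z],
        field[x, y, z + 1] - field[x, y, z - 1],
    ], dtype=float)
    n = np.linalg.norm(grad)
    return grad / n if n > 0 else grad

# Density increasing along +x gives a gradient (and normal) along +x.
field = np.fromfunction(lambda x, y, z: x.astype(float), (4, 4, 4))
print(density_normal(field, 1, 1, 1))  # ~ [1. 0. 0.]
```

Tricubic interpolation gives the same gradient in closed form, with smoother results than the discrete stencil used here.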
Because of the new data format, we had to change the render pipeline completely. For example, we now barely use OpenGL functionality for rendering (except for dynamic/moving and animated geometry and particles).
There are a lot of advantages:
--Real Reflections (as we trace rays through volumes we can reflect and bend them)
--Quick Ambient Occlusion (not SSAO which is physically inaccurate)
--Implicit Depth-Of-Field (we can simulate the focusing of a lens by moving the tip of the traced dual cone)
--No "conventional" aliasing (due to our volume mipmapping scheme, filtering, and tracing technique, we don't trace discrete points but sections of a volume)
--Physically Accurate Refraction and glass rendering
--Order Independent Transparency
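The depth-of-field bullet above can be sketched numerically: a dual cone has its tip at the focal distance, so the traced footprint shrinks to zero there and widens on either side, which is exactly what a defocused lens does. The parameters below are illustrative, not values from the renderer.

```python
# Sketch of the dual-cone depth-of-field idea: the cone's footprint radius is
# zero at the focal plane (the cone tip) and grows linearly with distance from
# it. 'aperture' and 'focus_dist' here are made-up illustrative parameters.
def cone_radius(t, focus_dist, aperture):
    """Footprint radius at ray parameter t for a dual cone tipped at focus."""
    return abs(t - focus_dist) * aperture

for t in (0.0, 5.0, 10.0):
    print(t, cone_radius(t, focus_dist=5.0, aperture=0.1))
```

Moving the tip (`focus_dist`) refocuses the image for free; no separate blur pass is needed, since the wider footprint naturally samples a coarser volume mip level.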
This is a non-realtime screenshot of a quick test we put together (we haven't redesigned the lighting system yet, so we pretty much had to brute-force "path trace" this).
We still have a few issues to sort out, such as how to integrate conventional particle rendering with volume rendering.
But hopefully all of these problems are minor and can be worked around.