Virtual Reality Rendering Framework
This framework is based on liblava and extends it with the features necessary for developing and evaluating new stereo rendering strategies. A special focus lies on remote rendering for standalone consumer HMDs over WiFi.
To support multiple APIs as well as local and remote rendering, a Headset interface was introduced (a minimal sketch follows the list) with the following implementations:
- OpenXR Headset, which uses the OpenXR standard developed by Khronos to communicate with an HMD.
- OpenVR Headset (legacy), which uses the OpenVR API developed by Valve to communicate with HMDs via the SteamVR platform.
- Remote Headset, which uses a custom protocol to communicate with a standalone HMD running a custom application.
- Emulated Headset, which is a fallback solution to test the rendering techniques without needing an actual HMD attached.
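For illustration, a minimal sketch of what such an abstraction might look like (all names, types, and signatures here are hypothetical, not the framework's actual API):

```cpp
#include <array>
#include <glm/glm.hpp>

// Hypothetical sketch of a headset abstraction; names and signatures
// are illustrative, not the framework's actual API.
struct eye_pose {
    glm::mat4 view;       // per-eye view matrix
    glm::mat4 projection; // per-eye projection matrix
};

class headset {
public:
    virtual ~headset() = default;

    // connect to the device, start the remote session, or set up emulation
    virtual bool initialize() = 0;

    // block until the runtime provides the poses for the next frame
    virtual std::array<eye_pose, 2> wait_for_poses() = 0;

    // hand the rendered eye images to the runtime, encoder, or window
    virtual void submit_frame() = 0;
};
```

Each of the four implementations above would then sit behind this one interface, so the rest of the framework does not need to know whether it talks to OpenXR, SteamVR, a remote client, or an emulator.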
Rendering
This framework is intended for investigating different stereo rendering strategies. Thus, a stereo strategy interface was created to easily switch between strategies and compare them (see the sketch after this list). Currently, the following stereo strategies exist:
- Naive Stereo Forward: renders the image for one eye at a time using forward shading.
- Naive Stereo Deferred: renders the image for one eye at a time using deferred shading.
- Multi View Stereo: renders both images simultaneously using multi-view rendering.
- Depth Peeling Reprojection: a custom rendering technique described below.
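Analogous to the headset sketch above, the strategy interface could be imagined as a small abstract class (again, names and signatures are hypothetical):

```cpp
#include <array>
#include <glm/glm.hpp>
#include <vulkan/vulkan.h>

// Hypothetical sketch of a stereo strategy abstraction; names and
// signatures are illustrative, not the framework's actual API.
struct eye_matrices {
    glm::mat4 view;
    glm::mat4 projection;
};

class stereo_strategy {
public:
    virtual ~stereo_strategy() = default;

    // record all commands needed to produce both eye images
    virtual void render(VkCommandBuffer cmd,
                        std::array<eye_matrices, 2> const& eyes) = 0;
};
```

Switching between the four strategies then amounts to swapping out the concrete implementation behind such an interface, which makes side-by-side comparisons straightforward.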
In general, the framework supports shadow mapping and approximate global illumination using light propagation volumes. To evaluate the performance and quality of the different rendering techniques, the framework provides utility functions for measuring GPU times and for capturing images that can be compared externally against a ground truth.
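The timing utilities themselves are not shown here; as a rough sketch, GPU times can be measured with standard Vulkan timestamp queries (the wrapper functions below are hypothetical, while the Vulkan calls are part of the core API):

```cpp
#include <cstdint>
#include <vulkan/vulkan.h>

// Sketch of GPU time measurement with Vulkan timestamp queries; the
// framework's actual utilities may be structured differently.
void record_timed_pass(VkCommandBuffer cmd, VkQueryPool pool) {
    vkCmdResetQueryPool(cmd, pool, 0, 2);
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, pool, 0);
    // ... record the render pass to be measured ...
    vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, pool, 1);
}

// Call after the command buffer has finished executing.
// timestamp_period comes from VkPhysicalDeviceLimits::timestampPeriod
// and converts timestamp ticks to nanoseconds.
double gpu_time_ms(VkDevice device, VkQueryPool pool, float timestamp_period) {
    uint64_t stamps[2];
    vkGetQueryPoolResults(device, pool, 0, 2, sizeof(stamps), stamps,
                          sizeof(uint64_t),
                          VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WAIT_BIT);
    return (stamps[1] - stamps[0]) * timestamp_period * 1e-6;
}
```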
Depth Peeling Reprojection
Depth Peeling Reprojection is a rendering technique that aims to reduce the duplicate shading work that occurs when rendering separate images for the left and right eye in virtual reality applications. Instead of rendering the scene from two perspectives, it renders the first two depth layers from a single perspective, similar to Mara et al., Deep G-Buffers for Stable Global Illumination Approximation. The goal of this approach is to have more information available when reprojecting the resulting images into each eye and, thus, fewer artifacts in disoccluded regions. Especially when streaming the result wirelessly to remote clients, it is critical to have reprojection strategies that handle lost or delayed frames gracefully.
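To make the reprojection step concrete, here is a minimal CPU-side sketch of how a pixel from the single rendered perspective could be reprojected into one eye's clip space, assuming glm and Vulkan-style NDC (the framework's actual reprojection runs on the GPU and may differ in detail):

```cpp
#include <glm/glm.hpp>

// Hypothetical sketch of the reprojection math; illustrative only.
glm::vec4 reproject_to_eye(glm::vec2 uv, float depth,
                           glm::mat4 const& center_inv_view_proj,
                           glm::mat4 const& eye_view_proj) {
    // reconstruct the world-space position behind this pixel from the
    // center view's depth buffer (Vulkan NDC, depth in [0, 1])
    glm::vec4 ndc(uv * 2.0f - 1.0f, depth, 1.0f);
    glm::vec4 world = center_inv_view_proj * ndc;
    world /= world.w;
    // project the point into the eye's clip space; where the first
    // depth layer is disoccluded for this eye, the second layer
    // provides the missing surface
    return eye_view_proj * world;
}
```

The second depth layer is what distinguishes this approach from a plain single-view reprojection: pixels that become visible only from one eye's offset viewpoint can be filled from the peeled layer instead of being left as holes.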