Efficient Hair Rendering with a GPU Cone Tracing Approach
Rendering human hair is challenging because of the high super-sampling rates required to render thin hair fibers without noticeable aliasing. In addition, current state-of-the-art bounding volume hierarchies (BVHs) are not well suited to hair rendering: axis-aligned bounding boxes (AABBs) do not tightly bound hair primitives, which negatively impacts the intersection tests. Both limitations can severely degrade rendering performance. This article describes a GPU cone tracing approach coupled with a hybrid bounding volume hierarchy to tackle these problems. The hybrid BVH makes use of both oriented and axis-aligned bounding boxes. Experiments show that the approach drastically reduces the super-sampling required to produce aliasing-free images while minimizing the number of intersection tests, achieving speedups of up to 4x, depending on the scene.