A Shader-Based Ray Tracing Engine

2021 ◽  
Vol 11 (7) ◽  
pp. 3264
Author(s):  
Sukjun Park ◽  
Nakhoon Baek

Recently, ray tracing techniques have been widely adopted to produce high-quality images and animations. In this paper, we present our design and implementation of a real-time ray-traced rendering engine. We achieved real-time performance for triangle primitives by implementing the ray tracing pipeline in GPGPU (general-purpose graphics processing unit) compute shaders. To accelerate the engine, we used a set of acceleration techniques, including a bounding volume hierarchy, its roped representation, joint up-sampling, and bilateral filtering. Our current implementation shows remarkable speed-ups with acceptable error values. Experimental results show accelerations of 2.5–13.6 times, with error values below 3% at the 95% confidence level. Our next step will be to improve the behavior of the bilateral filter.
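
The roped BVH representation mentioned above is what makes GPU traversal stackless: each node carries precomputed "rope" links to the next node to visit on a hit or a miss, so no per-thread stack is needed. Below is a minimal CPU-side Python sketch of the idea; the node fields (aabb_min, aabb_max, hit_link, miss_link, tri_index) are hypothetical stand-ins, not the paper's actual compute-shader layout.

```python
import numpy as np

# Hypothetical roped-BVH node layout (illustrative, not the paper's structure):
#   hit_link  : next node index if the ray hits this node's AABB
#   miss_link : next node index if it misses (follows the precomputed rope)
#   tri_index : triangle index for leaf nodes, -1 for inner nodes

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned bounding box?"""
    t0 = (box_min - origin) * inv_dir
    t1 = (box_max - origin) * inv_dir
    t_near = np.minimum(t0, t1).max()
    t_far = np.maximum(t0, t1).min()
    return t_far >= max(t_near, 0.0)

def traverse_roped_bvh(nodes, origin, direction):
    """Stackless traversal: follow hit/miss ropes until the sentinel -1."""
    inv_dir = 1.0 / direction          # assumes no zero components, for brevity
    hits = []
    node_id = 0                        # start at the root
    while node_id != -1:
        n = nodes[node_id]
        if ray_aabb_hit(origin, inv_dir, n["aabb_min"], n["aabb_max"]):
            if n["tri_index"] >= 0:    # leaf: record candidate triangle
                hits.append(n["tri_index"])
            node_id = n["hit_link"]    # descend (or advance past a leaf)
        else:
            node_id = n["miss_link"]   # skip the whole subtree via the rope
    return hits
```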

Author(s):  
Vasco Costa ◽  
João Madeiras Pereira ◽  
Joaquim A. Jorge

Accurately rendering occlusions is required when ray tracing objects to achieve more realistic renderings of scenes. Indeed, soft phenomena such as shadows and ambient occlusion can be achieved with stochastic ray tracing techniques. However, computing randomized, incoherent ray-object intersections can be inefficient. This is problematic in Graphics Processing Unit (GPU) applications, where thread divergence can significantly lower throughput. The authors show how this issue can be mitigated using classification techniques that sort rays according to their spatial characteristics. Still, classifying occlusion terms requires sorting millions of rays; this cost is offset by savings in rendering time, which result from a more coherent ray distribution. The authors survey and test different ray classification techniques to identify the most effective one. The best results were achieved when sorting rays with a compress-sort-decompress approach using 32-bit hash keys.
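
To make the compress-sort-decompress idea concrete, here is a minimal NumPy sketch: rays are hashed into 32-bit keys by quantizing origin and direction, runs of identical keys are compressed, the much shorter run list is sorted, and the result is decompressed back into a ray permutation. The hash layout and bit allocation are illustrative assumptions; the paper's exact hash function is not reproduced here.

```python
import numpy as np

def ray_hash32(origins, directions, scene_min, scene_size, bits=(10, 10, 6, 6)):
    """Pack a quantized ray origin (x, y) and direction (theta, phi) into one
    32-bit key. Illustrative only; assumes unit-length direction vectors."""
    o = np.clip((origins - scene_min) / scene_size, 0.0, 1.0 - 1e-7)
    theta = np.arccos(np.clip(directions[:, 2], -1.0, 1.0)) / np.pi        # [0, 1]
    phi = (np.arctan2(directions[:, 1], directions[:, 0]) + np.pi) / (2 * np.pi)
    ox = (o[:, 0] * (1 << bits[0])).astype(np.uint32)
    oy = (o[:, 1] * (1 << bits[1])).astype(np.uint32)
    dt = (np.clip(theta, 0.0, 1.0 - 1e-7) * (1 << bits[2])).astype(np.uint32)
    dp = (np.clip(phi, 0.0, 1.0 - 1e-7) * (1 << bits[3])).astype(np.uint32)
    return (ox << (bits[1] + bits[2] + bits[3])) | (oy << (bits[2] + bits[3])) \
           | (dt << bits[3]) | dp

def compress_sort_decompress(keys):
    """Group rays by key without fully sorting every ray: collapse runs of
    identical keys (compress), sort the short run list, expand (decompress)."""
    n = len(keys)
    run_starts = np.concatenate(([0], np.nonzero(np.diff(keys))[0] + 1))
    run_ends = np.concatenate((run_starts[1:], [n]))
    order = np.argsort(keys[run_starts], kind="stable")   # sort compressed runs
    ray_ids = np.arange(n)
    return np.concatenate([ray_ids[run_starts[i]:run_ends[i]] for i in order])
```

The compression step pays off because coherent ray generation produces long runs of identical keys, so the sort operates on far fewer elements than one key per ray.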


2021 ◽  
Vol 20 (3) ◽  
pp. 1-22
Author(s):  
David Langerman ◽  
Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today's world of mixed-reality devices. These innovations provide a platform that can leverage improving depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications on low-power hardware accelerators. The authors parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. They demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
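
Filter-based depth upsampling typically combines a low-resolution depth map with a high-resolution guide image so that depth edges snap to image edges. The following is a minimal joint bilateral upsampling sketch in Python/NumPy, assuming a grayscale guide; the naive per-pixel loops and parameter names are illustrative, not the authors' FPGA/GPU implementation.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample depth_lo (H/scale x W/scale) to the guide's resolution (H x W).
    Weights combine spatial distance with guide-intensity similarity, so the
    upsampled depth follows edges in the high-resolution guide image."""
    H, W = guide_hi.shape
    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale          # position in low-res grid
            wsum, dsum = 0.0, 0.0
            for j in range(-radius, radius + 1):
                for i in range(-radius, radius + 1):
                    ly = min(max(int(round(cy)) + j, 0), depth_lo.shape[0] - 1)
                    lx = min(max(int(round(cx)) + i, 0), depth_lo.shape[1] - 1)
                    gy = min(ly * scale, H - 1)     # guide pixel under sample
                    gx = min(lx * scale, W - 1)
                    ws = np.exp(-(i * i + j * j) / (2 * sigma_s ** 2))
                    dg = guide_hi[y, x] - guide_hi[gy, gx]
                    wr = np.exp(-(dg * dg) / (2 * sigma_r ** 2))
                    w = ws * wr
                    wsum += w
                    dsum += w * depth_lo[ly, lx]
            out[y, x] = dsum / max(wsum, 1e-12)
    return out
```

The per-pixel weight computation is embarrassingly parallel, which is why the algorithm maps well to both FPGA pipelines and GPU thread grids.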


2020 ◽  
Vol 32 ◽  
pp. 03054
Author(s):  
Akshata Parab ◽  
Rashmi Nagare ◽  
Omkar Kolambekar ◽  
Parag Patil

Vision is one of the most essential human senses, and it plays a major role in human perception of the surrounding environment. For people with visual impairment, however, the definition of vision is different: they are often unaware of dangers in front of them, even in familiar environments. This study proposes a real-time guiding system that solves the navigation problem for visually impaired people, allowing them to travel without difficulty. The system helps by detecting objects and providing the necessary information about each object, such as what it is, its location, the detection precision, and its distance from the user. All of this information is conveyed through audio commands, so that users can navigate freely anywhere, anytime, with little or no assistance. Object detection is performed using the You Only Look Once (YOLO) algorithm. Because capturing the video and sending it to the main module must happen at high speed, a Graphics Processing Unit (GPU) is used; this enhances the overall speed of the system and helps deliver the necessary instructions to the visually impaired user as quickly as possible. The process starts with capturing real-time video, sending it for analysis and processing, and obtaining the computed results, which are conveyed to the user through a hearing aid. As a result, blind or visually impaired people can perceive the surrounding environment and travel freely from source to destination on their own.
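
As a rough illustration of the detection stage described above, here is a Python sketch that runs a YOLO network on the GPU via OpenCV's DNN module. The model files (yolov3.cfg, yolov3.weights, coco.names), the thresholds, and the pyttsx3 text-to-speech suggestion are all assumptions for illustration; the study does not specify its exact YOLO variant or audio stack.

```python
import cv2                     # OpenCV with the DNN module
import numpy as np

# Hypothetical model files; the study does not name its exact YOLO weights.
CFG, WEIGHTS, NAMES = "yolov3.cfg", "yolov3.weights", "coco.names"

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # run inference on the GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)     # (needs a CUDA build of OpenCV)
classes = open(NAMES).read().splitlines()

def detect(frame, conf_thresh=0.5, nms_thresh=0.4):
    """One YOLO forward pass; returns (label, confidence, [x, y, w, h]) tuples."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    boxes, confs, ids = [], [], []
    for out in outputs:
        for det in out:             # det = [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confs.append(conf)
                ids.append(cls)
    keep = cv2.dnn.NMSBoxes(boxes, confs, conf_thresh, nms_thresh)
    return [(classes[ids[i]], confs[i], boxes[i]) for i in np.array(keep).reshape(-1)]

# The audio side could use any text-to-speech engine, e.g. (assumed) pyttsx3:
#   tts = pyttsx3.init()
#   for label, conf, box in detect(frame):
#       tts.say(f"{label} ahead"); tts.runAndWait()
```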


2012 ◽  
Vol 3 (7) ◽  
pp. 1557 ◽  
Author(s):  
Kenneth K. C. Lee ◽  
Adrian Mariampillai ◽  
Joe X. Z. Yu ◽  
David W. Cadotte ◽  
Brian C. Wilson ◽  
...  

2021 ◽  
Vol 87 (5) ◽  
pp. 363-373
Author(s):  
Long Chen ◽  
Bo Wu ◽  
Yao Zhao ◽  
Yuan Li

Real-time acquisition and analysis of three-dimensional (3D) human body kinematics are essential in many applications. In this paper, we present a real-time photogrammetric system consisting of a stereo pair of red-green-blue (RGB) cameras. The system incorporates a multi-threaded, graphics processing unit (GPU)-accelerated solution for real-time extraction of 3D human kinematics. A deep learning approach is adopted to automatically extract two-dimensional (2D) human body features, which are then converted to 3D features through photogrammetric processing, including dense image matching and triangulation. The multi-threading scheme and GPU acceleration enable real-time acquisition and monitoring of 3D human body kinematics. Experimental analysis verified that the system processing rate reached ∼18 frames per second. The effective detection distance reached 15 m, with a geometric accuracy better than 1% of the distance within a range of 12 m. The real-time measurement accuracy for human body kinematics ranged from 0.8% to 7.5%. These results suggest that the proposed system is capable of real-time acquisition and monitoring of 3D human kinematics with favorable performance, showing great potential for various applications.
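
The triangulation step that lifts matched 2D body features to 3D can be summarized with the direct linear transform (DLT). Below is a minimal NumPy sketch, assuming calibrated 3x4 projection matrices P1 and P2 from the stereo calibration; the paper's full dense-matching pipeline is not reproduced here.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Direct linear transform: recover the 3D point X with x1 ~ P1 @ X and
    x2 ~ P2 @ X for one matched feature pair.
    P1, P2 : 3x4 camera projection matrices (from stereo calibration)
    x1, x2 : (u, v) pixel coordinates of the same body feature in each view."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # least-squares solution: last right vector
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize to metric coordinates
```

OpenCV's cv2.triangulatePoints(P1, P2, pts1, pts2) implements the equivalent operation in batch form, which is a natural fit for the per-frame, multi-joint workload described above.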

