Generating and Rendering Procedural Clouds in Real Time on Programmable 3D Graphics Hardware

Author(s): M Mahmud Hasan, M Sazzad Karim, Emdad Ahmed


2009, Vol 2009, pp. 1-7
Author(s): Hanli Zhao, Xiaogang Jin, Jianbing Shen, Shufang Lu

Mouse picking is the most commonly used intuitive operation for interacting with 3D scenes in a variety of 3D graphics applications. High performance is necessary for such operations in order to give users fast responses. This paper proposes a fast and reliable mouse picking algorithm for 3D triangular scenes using graphics hardware. Our approach uses a multi-layer rendering algorithm to perform the picking operation in linear time complexity. The object-space ray-triangle intersection test is implemented in a highly parallelized geometry shader. After applying hardware-supported occlusion queries, only a small number of objects (or sub-objects) are rendered in subsequent layers, which accelerates picking. Experimental results demonstrate the high performance of our novel approach. Due to its simplicity, our algorithm can be easily integrated into existing real-time rendering systems.
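The abstract does not give the shader code, but a standard choice for the object-space ray-triangle test it refers to is the Möller–Trumbore intersection, evaluated once per triangle against the pick ray. The sketch below shows that test as plain CPU-side TypeScript for reference only; in the paper the test runs inside a geometry shader, and the names here (Vec3, rayTriangleHit, EPS) are illustrative assumptions rather than the paper's own.

```typescript
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const dot = (a: Vec3, b: Vec3): number =>
  a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Moeller-Trumbore test: returns the ray parameter t at the hit point,
// or null if the pick ray misses this triangle.
function rayTriangleHit(
  orig: Vec3, dir: Vec3, v0: Vec3, v1: Vec3, v2: Vec3
): number | null {
  const EPS = 1e-8;
  const e1 = sub(v1, v0);
  const e2 = sub(v2, v0);
  const pvec = cross(dir, e2);
  const det = dot(e1, pvec);
  if (Math.abs(det) < EPS) return null;   // ray parallel to the triangle plane
  const invDet = 1 / det;
  const tvec = sub(orig, v0);
  const u = dot(tvec, pvec) * invDet;
  if (u < 0 || u > 1) return null;        // outside barycentric range
  const qvec = cross(tvec, e1);
  const v = dot(dir, qvec) * invDet;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, qvec) * invDet;
  return t > EPS ? t : null;              // hit in front of the ray origin
}
```

A picking pass would keep the smallest positive t over all tested triangles; the paper's contribution is in restricting which objects reach this test per layer via occlusion queries, not in the intersection formula itself.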


2012, Vol 1 (3), pp. 49-61
Author(s): Michael Auer

Parallel processing methods in Geographic Information Systems (GIS) are traditionally used to accelerate the calculation of large data volumes with sophisticated spatial algorithms. The same acceleration can also be applied to build real-time GIS applications that improve the responsiveness of user interactions with the data. This paper presents a method that enables this approach for Web GIS applications. It uses the JavaScript 3D graphics API (WebGL) to perform client-side parallel real-time computations of 2D or 2.5D spatial raster algorithms on the graphics card. The potential of this approach is evaluated using an example implementation of a hillshade algorithm. Performance comparisons of parallel and sequential computations reveal acceleration factors between 25 and 100, depending mainly on whether the environment is mobile or desktop.
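As a rough illustration of the idea, the sketch below expresses a hillshade pass as a WebGL fragment shader embedded in TypeScript, so that every raster cell is shaded in parallel on the GPU. This is a minimal sketch under stated assumptions: the uniform names, the elevation encoding in the red channel, and the central-difference gradient are illustrative choices, not details taken from the paper.

```typescript
// Fragment shader source for a per-cell hillshade; the elevation raster is
// bound as a texture and each fragment computes slope, aspect and shading
// independently, which is what makes the GPU version parallel.
const hillshadeFrag = `
precision highp float;
uniform sampler2D uElevation;   // elevation encoded in the red channel (assumption)
uniform vec2  uTexelSize;       // 1.0 / raster resolution in texels
uniform float uCellSize;        // ground distance per raster cell
uniform float uAzimuth;         // sun azimuth in radians
uniform float uAltitude;        // sun altitude in radians
varying vec2  vUv;

float elev(vec2 offset) {
  return texture2D(uElevation, vUv + offset * uTexelSize).r;
}

void main() {
  // Central differences approximate the terrain gradient.
  float dzdx = (elev(vec2(1.0, 0.0)) - elev(vec2(-1.0, 0.0))) / (2.0 * uCellSize);
  float dzdy = (elev(vec2(0.0, 1.0)) - elev(vec2(0.0, -1.0))) / (2.0 * uCellSize);

  float slope  = atan(length(vec2(dzdx, dzdy)));
  float aspect = atan(dzdy, -dzdx);

  // Standard hillshade: illumination from a sun at (azimuth, altitude).
  float zenith = 1.5707963 - uAltitude;
  float shade  = cos(zenith) * cos(slope)
               + sin(zenith) * sin(slope) * cos(uAzimuth - aspect);

  gl_FragColor = vec4(vec3(clamp(shade, 0.0, 1.0)), 1.0);
}`;
```

A sequential JavaScript loop over the same raster would evaluate the identical formula cell by cell, which is the comparison behind the reported 25x to 100x speedups.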


2015, Vol 75 (4)
Author(s): Ajune Wanis Ismail, Mark Billinghurst, Mohd Shahrizal Sunar

In this paper, we describe a new tracking approach for object handling in Augmented Reality (AR). Our approach improves the standard vision-based tracking system during the marker extraction and detection stages. It transforms a unique tracking pattern into a set of vertices that can support interactions such as translation, rotation, and copying. This is based on a robust real-time computer vision algorithm that tracks a paddle that a person uses for input. A paddle pose pattern is constructed in a one-time calibration process, and through a vertex-based calculation of the camera pose relative to the paddle we can show 3D graphics on top of it. This allows the user to look at virtual objects from different viewing angles in the AR interface and perform 3D object manipulation. The approach was implemented using marker-based tracking to improve tracking accuracy and robustness when manipulating 3D objects in real time. We demonstrate our improved tracking system with a sample Tangible AR application and describe how the system could be improved in the future.
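The abstract does not spell out the tracker internals, but the final step it describes, drawing 3D graphics on top of the paddle once the camera pose relative to it is known, can be sketched as below. The Pose4 layout, applyPose, and placeOnPaddle are hypothetical names introduced only for illustration and do not come from the paper.

```typescript
type Pose4 = number[];            // 4x4 column-major camera-from-paddle transform
type Vertex = [number, number, number];

// Transform a point expressed in paddle coordinates into camera coordinates.
function applyPose(pose: Pose4, p: Vertex): Vertex {
  const [x, y, z] = p;
  return [
    pose[0] * x + pose[4] * y + pose[8]  * z + pose[12],
    pose[1] * x + pose[5] * y + pose[9]  * z + pose[13],
    pose[2] * x + pose[6] * y + pose[10] * z + pose[14],
  ];
}

// Place every vertex of the virtual object relative to the tracked paddle;
// re-running this each frame keeps the object registered to the paddle as
// the user moves it, so it can be inspected from any viewing angle.
function placeOnPaddle(pose: Pose4, objectVertices: Vertex[]): Vertex[] {
  return objectVertices.map((v) => applyPose(pose, v));
}
```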

