rendering pipeline
Recently Published Documents

TOTAL DOCUMENTS: 74 (five years: 19)
H-INDEX: 5 (five years: 1)

Author(s): Fabrizio Cutolo, Nadia Cattari, Marina Carbone, Renzo D'Amato, Vincenzo Ferrari

Author(s): Sheldon Andrews, Loic Nassif, Kenny Erleben, Paul G. Kry

We present a novel meso-scale model for computing anisotropic and asymmetric friction at contacts in rigid body simulations, based on surface facet orientations. The main idea behind our approach is to compute a direction-dependent friction coefficient determined by an object's roughness. Specifically, we target the regime where friction depends on asperity interlocking, but at a scale where surface roughness is also a visual characteristic of the surface. A GPU rendering pipeline rasterizes the surfaces using a shallow-depth orthographic projection at each contact point in order to sample facet normal information from both surfaces, which we then combine to produce direction-dependent friction coefficients that can be used directly in typical LCP contact solvers, such as the projected Gauss-Seidel method. We demonstrate our approach with a variety of rough textures, where the roughness is visible both in the rendering and in the motion produced by the physical simulation.
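The direction-dependent coefficients above feed into a standard LCP contact solver. As a hedged illustration (not the authors' code), here is a minimal projected Gauss-Seidel sweep for a small LCP; in a real contact solver, the anisotropic friction coefficients would additionally set per-direction bounds on the tangential unknowns. The `pgs` name and the toy system are assumptions for illustration only.

```python
def pgs(M, q, iters=100):
    """Projected Gauss-Seidel for the LCP: z >= 0, M z + q >= 0, z . (M z + q) = 0."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            w_i = q[i] + sum(M[i][j] * z[j] for j in range(n))  # residual (M z + q)_i
            z[i] = max(0.0, z[i] - w_i / M[i][i])               # project onto z_i >= 0
    return z

# a small symmetric positive-definite system; the exact solution is z = [1/3, 1/3]
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]
z = pgs(M, q)
```

Per-contact, one would clamp the tangential components of `z` to bounds proportional to the (direction-dependent) friction coefficient times the normal impulse, which is exactly where the paper's sampled coefficients enter.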


2021, Vol 40 (3), pp. 1-21
Author(s): Yang Zhou, Lifan Wu, Ravi Ramamoorthi, Ling-Qi Yan

In computer graphics, the two main approaches to rendering and visibility are ray tracing and rasterization. However, a limitation of both approaches is that they essentially use point sampling. This is the source of noise and aliasing, and it also leads to significant difficulties for differentiable rendering. In this work, we present a new rendering method, which we call vectorization, that computes 2D point-to-region integrals analytically, eliminating point sampling in 2D integration domains such as pixel footprints and area lights. Our vectorization revisits the concept of beam tracing and handles the hidden surface removal problem robustly and accurately. That is, for each intersecting triangle inserted into the viewport of a beam in arbitrary order, we are able to maintain all the visible regions formed by intersections and occlusions, thanks to our Visibility Bounding Volume Hierarchy structure. As a result, our vectorization produces perfectly anti-aliased visibility; accurate, analytic shading and shadows; and, most importantly, fast and noise-free gradients with automatic differentiation or finite differences, which directly enables differentiable rendering without any changes to our rendering pipeline. Our results are inherently high quality and noise-free, and our gradients are one to two orders of magnitude faster than those computed with existing differentiable rendering methods.
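To see why an analytic point-to-region integral is noise-free while point sampling is not, consider a toy integral over a unit pixel footprint. This sketch is not the paper's vectorization; the function `f(x, y) = x + y` and the helper names are illustrative assumptions.

```python
import random

def mc_estimate(n, seed=0):
    """Point-sampled (Monte Carlo) estimate of the integral of x + y over [0, 1]^2."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        total += x + y
    return total / n  # noisy: the value depends on the seed and the sample count

def analytic():
    """Closed-form integral of x + y over the unit square: 1/2 + 1/2 = 1."""
    return 1.0
```

The analytic value is exact at any resolution, which is also why its derivative with respect to scene parameters is noise-free, while every Monte Carlo estimate carries seed-dependent error that differentiation amplifies.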


Complexity, 2021, Vol 2021, pp. 1-12
Author(s): Leiming Li, Wenyao Zhu, Hongwei Hu

For VR systems, one of the core tasks is to present people with a realistic and immersive 3D simulation environment. This paper uses real-time computer graphics technology, three-dimensional modeling technology, and binocular stereo vision technology to study multivisual animated character objects in virtual reality: it designs a binocular stereo vision animation system, designs and produces a three-dimensional model, and develops a virtual multivisual animation scene application. The main research content includes the basic graphics rendering pipeline process and an analysis of each stage of the rendering pipeline. It analyzes the 3D graphics algorithms used in the three-dimensional geometric transformations of computer graphics and studies the basic texture techniques, the basic lighting model, and the other image output processes used in the fragment processing stage. Combined with the development needs of the subject, the principles of 3D animation rendering software and 3D graphics modeling are studied, and the solid 3D models displayed in the virtual reality scene are designed and produced. The article also illustrates, by example, the application of virtual reality in multivisual animated character design, and thus has practical value and application prospects.
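The "basic lighting model" applied in the fragment processing stage is typically Lambertian diffuse shading, where intensity is proportional to max(0, N · L). A minimal sketch under that assumption (the function names are illustrative, not from the paper):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def lambert(normal, light_dir, albedo=1.0):
    """Lambertian diffuse intensity: albedo * max(0, N . L)."""
    n, l = normalize(normal), normalize(light_dir)
    return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))
```

A surface facing the light receives full intensity; a surface lit edge-on or from behind receives none, which is what the max clamp enforces.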


2021, Vol 60 (02)
Author(s): Yanxin Guan, Xinzhu Sang, Shujun Xing, Yuanhang Li, Yingying Chen, ...

Author(s): Egor Komarov, Dmitry Zhdanov, Andrey Zhdanov

Caustic illumination frequently appears in real life; however, this type of illumination is especially hard to render in real time. Some current solutions can render the caustic illumination caused by a water surface, but these methods do not apply to arbitrary geometry. In this article, we present a real-time caustics rendering method that uses the DirectX Raytracing API and integrates into the rendering pipeline. The method is based on additional forward caustic visibility maps and backward caustic visibility maps, created for the light sources and the virtual camera, respectively. The article presents the algorithm of the developed real-time caustics rendering method and the results of testing its implementation on test scenes. We analyze how rendering speed depends on the depth of specular ray tracing, the number of light sources, and the number of rays per pixel. Testing shows promising results that can be used in the modern game industry to increase the realism of virtual world visualization.
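As a rough, hedged illustration of the general idea behind a light-side (forward) caustic map, not the article's DXR implementation, one can trace rays from a light through a refracting interface and accumulate where they land. Everything here (the wavy water surface, the bin counts, the function names) is a toy assumption.

```python
import math

def refract(dx, dy, nx, ny, eta):
    """Snell refraction of unit direction (dx, dy) at unit normal (nx, ny); eta = n1/n2."""
    cos_i = -(dx * nx + dy * ny)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    t = eta * cos_i - math.sqrt(k)
    return (eta * dx + t * nx, eta * dy + t * ny)

def caustic_map(num_rays=1000, bins=50, depth=1.0):
    """Bin floor hits of vertical rays refracted at a wavy air-water interface."""
    hist = [0] * bins
    for i in range(num_rays):
        x = i / (num_rays - 1)                      # ray enters straight down at x
        slope = 0.1 * math.cos(8.0 * math.pi * x)   # h'(x) of the wavy surface
        m = math.sqrt(slope * slope + 1.0)
        nx, ny = -slope / m, 1.0 / m                # upward unit surface normal
        d = refract(0.0, -1.0, nx, ny, 1.0 / 1.33)  # air -> water
        if d is None:
            continue
        hit_x = x + d[0] * (depth / -d[1])          # march to the floor at y = -depth
        b = min(bins - 1, max(0, int(hit_x * bins)))
        hist[b] += 1
    return hist
```

Bins where refracted rays converge receive far more hits than bins they diverge from; that uneven concentration of light is the caustic pattern.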


2021
Author(s): Mark Wesley Harris, Sudhanshu Semwal

The graphics rendering pipeline is key to generating realistic images and is a vital part of computational design, modeling, games, and animation. Perhaps the largest limiting factor in rendering is time; the processing required for each pixel inevitably slows rendering and produces a bottleneck that limits the speed and potential of the rendering pipeline. We applied deep generative networks to the complex problem of rendering an animated 3D scene. Novel datasets of annotated image blocks were used to train an existing attentional generative adversarial network to output renders of a 3D environment. The annotated Caltech-UCSD Birds-200-2011 dataset served as a baseline for comparing loss and image quality. While our work does not yet generate production-quality renders, we show how our method of combining existing machine learning architectures with novel text and image processing has the potential to produce a functioning deep rendering framework.


Author(s): M.V.I Salgado, H.A.D.D Hettiarachchi, T.U Munasinghe, K.A.U Fernando, Ishara Gamage, ...

2020, Vol 2020 (28), pp. 199-204
Author(s): Abhijith Punnappurath, Michael S. Brown

A camera's image signal processor (ISP) is dedicated hardware that performs a series of processing steps to render a captured raw sensor image to its final display-referred output suitable for viewing and sharing. It is often desirable to be able to revert – or de-render – the ISP-processed image back to the original raw sensor image. Undoing the ISP rendering, however, is not an easy task. This is because ISPs perform many nonlinear routines in the rendering pipeline that are difficult to invert. Moreover, modern cameras often apply scene-specific image processing, resulting in a wide range of possible ISP parameters. In this paper, we propose a modification to the ISP that allows the ISP-rendered image to be reverted back to a raw image. Our approach works by appending a fixed sampling of the raw sensor values to all captured images. The appended raw samples comprise no more than 8 rows of pixels in the full-sized image and represent a negligible overhead given that 12–16 MP sensors typically have 3000 rows of pixels or more. The appended pixels are rendered along with the captured image to the final output. From these rendered raw samples, a reverse mapping function can be computed to undo the ISP processing. We demonstrate that this method performs almost on par with competing state-of-the-art approaches for ISP de-rendering while offering a practical solution that is integrable into current camera ISP hardware.
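The reverse mapping can be fitted from the appended samples alone. As a toy, hedged sketch (a real ISP is far more complex and scene-dependent; `isp_render` below is a stand-in gamma curve, an assumption, not the paper's pipeline), even a piecewise-linear inverse built from the (rendered, raw) sample pairs recovers raw values well:

```python
def isp_render(raw):
    """Stand-in ISP: a simple gamma curve (real ISPs are nonlinear and scene-specific)."""
    return raw ** (1.0 / 2.2)

# the appended samples: known raw values and their ISP-rendered counterparts
raw_samples = [i / 16.0 for i in range(17)]
rendered_samples = [isp_render(r) for r in raw_samples]

def de_render(y):
    """Piecewise-linear reverse map fitted from the (rendered, raw) sample pairs."""
    for k in range(len(rendered_samples) - 1):
        y0, y1 = rendered_samples[k], rendered_samples[k + 1]
        if y0 <= y <= y1:
            t = (y - y0) / (y1 - y0)
            return raw_samples[k] + t * (raw_samples[k + 1] - raw_samples[k])
    return raw_samples[-1]
```

Because the samples went through the same ISP as the image, no knowledge of the ISP's internal routines or parameters is needed to fit the inverse.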

