The Graphics Rendering Pipeline

2019 ◽  
pp. 29-46
2018 ◽  
pp. 11-27
Author(s):  
Tomas Akenine-Möller

2008 ◽  
Vol 08 (02) ◽  
pp. 209-222
Author(s):  
Haoyu Peng ◽  
Hua Xiong ◽  
Zhen Liu ◽  
Jiaoying Shi

Existing parallel graphics rendering systems support only a single-level parallel rendering pipeline. This paper presents a novel high-performance parallel graphics rendering architecture on a PC cluster driving a tiled display wall. It employs a hybrid sort-first and sort-last architecture based on a new rendering and scheduling structure, the dynamic rendering team (DRT), in which multiple PCs rather than a single PC act as a rendering node. Each DRT is responsible for one projector area of the tiled display wall, and together the DRTs naturally form an outer-level parallel rendering pipeline. Inside each DRT, an optimized parallel Rendering-Composing-Display (R-C-D) pipeline restructures the serial workflow into a parallel one. The optimized parallel R-C-D pipeline, together with the outer parallel rendering pipelines across DRTs, forms a nested parallel pipeline architecture that greatly improves the overall rendering performance of our system. Experiments show that a parallel rendering system with the proposed architecture and nested parallel rendering pipelines, called Parallel-SG, can render over 13M triangles at an average speed of 12 frames per second, without any acceleration techniques, on a tiled display wall driven by a 5×3 projector array.
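
To make the nested pipeline concrete, the sketch below (Python, purely illustrative, not the authors' implementation) mimics the two levels of parallelism: an outer pool with one DRT per projector tile of the 5×3 wall and, inside each DRT, several render workers feeding a compose-and-display step. All class, method, and worker names are hypothetical.

```python
# Illustrative sketch only: nested parallelism in the spirit of DRTs.
from concurrent.futures import ThreadPoolExecutor

class DRT:
    """One dynamic rendering team: several render nodes feeding one compositor."""
    def __init__(self, tile_id, num_render_nodes=3):
        self.tile_id = tile_id
        self.num_render_nodes = num_render_nodes
        self.render_pool = ThreadPoolExecutor(max_workers=num_render_nodes)

    def render_part(self, frame, node):
        # Placeholder for the geometry rendered by one PC of the team.
        return f"tile{self.tile_id}-frame{frame}-node{node}"

    def compose(self, partial_images):
        # Placeholder for compositing the partial images into one tile image.
        return "+".join(partial_images)

    def display(self, frame, image):
        # Placeholder for pushing the composed image to this tile's projector.
        print(f"DRT {self.tile_id}: frame {frame} -> {image}")

    def process_frame(self, frame):
        # Inner R-C-D pipeline: parallel Render, then Compose, then Display.
        parts = list(self.render_pool.map(
            lambda node: self.render_part(frame, node),
            range(self.num_render_nodes)))
        self.display(frame, self.compose(parts))

# Outer pipeline: one DRT per projector of the 5x3 tiled display wall.
drts = [DRT(tile_id) for tile_id in range(5 * 3)]
with ThreadPoolExecutor(max_workers=len(drts)) as wall:
    for frame in range(2):                              # two example frames
        list(wall.map(lambda drt, f=frame: drt.process_frame(f), drts))
```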


2021 ◽  
Author(s):  
Mark Wesley Harris ◽  
Sudhanshu Semwal

The graphics rendering pipeline is key to generating realistic images and is a vital process in computational design, modeling, games, and animation. Perhaps the largest limiting factor of rendering is time: the processing required for each pixel inevitably slows rendering down and produces a bottleneck that limits the speed and potential of the rendering pipeline. We applied deep generative networks to the complex problem of rendering an animated 3D scene. Novel datasets of annotated image blocks were used to train an existing attentional generative adversarial network to output renders of a 3D environment. The annotated Caltech-UCSD Birds-200-2011 dataset served as a baseline for comparison of loss and image quality. While our work does not yet generate production-quality renders, we show how our method of using existing machine learning architectures and novel text and image processing has the potential to produce a functioning deep rendering framework.
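
As a rough illustration of the training setup described above, the following sketch shows a single conditional-GAN update on (text-annotation, image-block) pairs in PyTorch. It is a heavily simplified stand-in: the attentional architecture is omitted, and every dimension and module layout is a hypothetical placeholder rather than the network the authors used.

```python
# Illustrative sketch only: one conditional-GAN step on annotated image blocks.
import torch
import torch.nn as nn

EMB, NOISE, PIXELS = 64, 32, 32 * 32 * 3   # annotation embedding, latent z, flat image block

# Toy generator and discriminator standing in for the attentional GAN.
G = nn.Sequential(nn.Linear(EMB + NOISE, 256), nn.ReLU(), nn.Linear(256, PIXELS), nn.Tanh())
D = nn.Sequential(nn.Linear(EMB + PIXELS, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(text_emb, real_blocks):
    """One adversarial update on a batch of (annotation embedding, image block) pairs."""
    b = text_emb.size(0)
    z = torch.randn(b, NOISE)
    fake = G(torch.cat([text_emb, z], dim=1))

    # Discriminator: real image blocks vs. generated renders, both conditioned on the text.
    d_real = D(torch.cat([text_emb, real_blocks], dim=1))
    d_fake = D(torch.cat([text_emb, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones(b, 1)) + bce(d_fake, torch.zeros(b, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator: try to fool the updated discriminator.
    loss_g = bce(D(torch.cat([text_emb, fake], dim=1)), torch.ones(b, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Dummy batch standing in for annotated image blocks extracted from rendered frames.
print(train_step(torch.randn(8, EMB), torch.rand(8, PIXELS) * 2 - 1))
```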


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1387
Author(s):  
Oswaldo Sebastian Peñaherrera-Pulla ◽  
Carlos Baena ◽  
Sergio Fortes ◽  
Eduardo Baena ◽  
Raquel Barco

Cloud Gaming is a cutting-edge paradigm in video game provision in which the graphics rendering and game logic are computed in the cloud. This allows thin-client systems with far more limited capabilities to offer an experience comparable to traditional local and online gaming while requiring less hardware. In exchange, this approach stresses the communication network between the client and the cloud, so it is necessary to know how to configure the network in order to provide the service with the best quality. To that end, the present work defines a novel framework for Cloud Gaming performance evaluation. The system is implemented in a real testbed and evaluates the Cloud Gaming approach over different transport networks (Ethernet, WiFi, and LTE (Long Term Evolution)) and scenarios, automating the acquisition of the gaming metrics. From this, the impact on the overall gaming experience is analyzed, identifying the main parameters involved in its performance. Hence, future lines for QoE-based (Quality of Experience) optimization of Cloud Gaming are established, an aspect of particular relevance for new-generation networks such as 4G and 5G (Fourth and Fifth Generation of Mobile Networks).
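
The sketch below gives a flavour of how per-scenario gaming-metric acquisition might be automated; it is not the paper's testbed. The metric names (frame latency, bitrate, frame rate), scenario labels, and dummy sample values are assumptions made purely for illustration.

```python
# Illustrative sketch only: aggregating per-session gaming metrics per transport scenario.
import statistics
from collections import defaultdict

class MetricCollector:
    def __init__(self):
        self.samples = defaultdict(list)   # scenario -> list of per-frame metric dicts

    def record(self, scenario, latency_ms, bitrate_kbps, fps):
        self.samples[scenario].append(
            {"latency_ms": latency_ms, "bitrate_kbps": bitrate_kbps, "fps": fps})

    def summary(self, scenario):
        rows = self.samples[scenario]
        lat = sorted(r["latency_ms"] for r in rows)
        return {
            "mean_latency_ms": statistics.mean(lat),
            "p95_latency_ms": lat[int(0.95 * (len(lat) - 1))],
            "mean_bitrate_kbps": statistics.mean(r["bitrate_kbps"] for r in rows),
            "mean_fps": statistics.mean(r["fps"] for r in rows),
        }

collector = MetricCollector()
# Dummy samples standing in for measurements taken over each access network.
for scenario, base_lat in [("Ethernet", 8), ("WiFi", 20), ("LTE", 45)]:
    for i in range(100):
        collector.record(scenario, base_lat + (i % 7), 12000 - 30 * (i % 5), 60 - (i % 3))
    print(scenario, collector.summary(scenario))
```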


2003 ◽  
Vol 3 (2) ◽  
pp. 170-173 ◽  
Author(s):  
Karthik Ramani ◽  
Abhishek Agrawal ◽  
Mahendra Babu ◽  
Christoph Hoffmann

New and efficient paradigms for web-based collaborative product design in a global economy will be driven by increased outsourcing, increased competition, and pressure to reduce product development time. We have developed Computer Aided Distributed Design and Collaboration (CADDAC), a collaborative shape design system based on a three-tier (client-server-database) architecture. CADDAC has a centralized geometry kernel and constraint solver. The server side provides support for solid modeling, constraint-solving operations, data management, and synchronization of clients. The client side performs real-time creation, modification, and deletion of geometry over the network. In order to keep the clients thin, many computationally intensive operations are performed at the server; only the graphics rendering pipeline operations are performed on the client side. A key contribution of this work is a flexible architecture that decouples Application Data (Model), Controllers, Viewers, and Collaboration. This decoupling keeps new feature development modular and easy to develop and manage.
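
The sketch below illustrates, in simplified in-process form, the thin-client decoupling described above: the server owns the geometry model and synchronizes all clients, while clients only issue edits and render whatever the server broadcasts. It is not CADDAC code; all class and method names are hypothetical.

```python
# Illustrative sketch only: thin clients around a centralized geometry server.
class GeometryServer:
    """Server tier: solid modeling, constraint solving, data management, client sync."""
    def __init__(self):
        self.model = {}          # feature id -> feature parameters
        self.clients = []

    def attach(self, client):
        self.clients.append(client)

    def apply(self, op, feature_id, params=None):
        # Centralized kernel: only the server mutates the shared model.
        if op in ("create", "modify"):
            self.model[feature_id] = params
        elif op == "delete":
            self.model.pop(feature_id, None)
        self._broadcast()

    def _broadcast(self):
        # Push the display form of the model to every connected client.
        display_list = list(self.model.items())
        for client in self.clients:
            client.update_view(display_list)

class ThinClient:
    """Client tier: issues edits, then only runs the graphics rendering pipeline."""
    def __init__(self, name, server):
        self.name = name
        self.server = server
        server.attach(self)

    def edit(self, op, feature_id, params=None):
        self.server.apply(op, feature_id, params)

    def update_view(self, display_list):
        # Placeholder for the client-side rendering of the synchronized geometry.
        print(f"{self.name} renders {len(display_list)} feature(s): {display_list}")

server = GeometryServer()
alice, bob = ThinClient("alice", server), ThinClient("bob", server)
alice.edit("create", "block1", {"w": 10, "h": 4})
bob.edit("modify", "block1", {"w": 12, "h": 4})
```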

