Stylized rendering techniques for scalable real-time 3D animation

Author(s):  
Adam Lake ◽  
Carl Marshall ◽  
Mark Harris ◽  
Marc Blackstein


2014 ◽
Vol 2014 ◽  
pp. 1-8
Author(s):  
Won-Sun Lee ◽  
Seung-Do Kim ◽  
Seongah Chin

Subsurface scattering, which simulates the path of light through a material in a scene, is one of the advanced rendering techniques in the field of computer graphics. Since it requires many expensive operations, it cannot be easily implemented in real-time smartphone games. In this paper, we propose a subsurface scattering-based object rendering technique optimized for smartphone games. We employ a subsurface scattering method tailored to real-time smartphone games, and an example game is designed to validate that the proposed method operates seamlessly in real time. Finally, we present comparison results between the bidirectional reflectance distribution function, the bidirectional scattering distribution function, and our proposed subsurface scattering method on a smartphone game.
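The paper's own shader and optimizations are not reproduced here; as a hedged illustration of the kind of low-cost approximation that makes subsurface-scattering-like shading feasible on mobile GPUs, the sketch below shows wrap lighting, a common single-term stand-in for diffusely scattered light. The function names and the wrap parameter are illustrative assumptions, not the authors' method.

```python
import numpy as np

def lambert_diffuse(n, l):
    """Standard Lambertian diffuse term: max(N.L, 0)."""
    return max(np.dot(n, l), 0.0)

def wrap_diffuse(n, l, wrap=0.5):
    """Wrap-lighting approximation of subsurface scattering.

    Light is allowed to 'wrap' around the terminator, softening the
    falloff as if some light had scattered through the surface.
    wrap = 0 reduces to plain Lambert; wrap = 1 lights the whole sphere.
    """
    return max((np.dot(n, l) + wrap) / (1.0 + wrap), 0.0)

# Example: a point just past the terminator still receives light with wrap lighting.
n = np.array([0.0, 0.0, 1.0])                      # surface normal
l = np.array([0.0, np.sin(np.radians(100.0)),      # light 10 degrees past grazing
              np.cos(np.radians(100.0))])
l /= np.linalg.norm(l)

print(lambert_diffuse(n, l))   # 0.0  -> hard cutoff with plain Lambert
print(wrap_diffuse(n, l))      # > 0  -> soft, translucent-looking falloff
```

Because the wrap term is a single extra addition and division per shaded point, it costs almost nothing on a mobile GPU compared with path-based scattering models.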


2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Yangzi Dong ◽  
Chao Peng

Achieving efficient rendering of a large animated crowd with a realistic visual appearance is a challenging task when players interact with a complex game scene. We present a real-time crowd rendering system that efficiently manages multiple types of character data on the GPU and integrates seamlessly with level-of-detail and visibility-culling techniques. The character data, including vertices, triangles, vertex normals, texture coordinates, skeletons, and skinning weights, are stored as either buffer objects or textures according to their access requirements at the rendering stage. Our system preserves the view-dependent visual appearance of individual character instances in the crowd and executes with a fine-grained parallelization scheme. We compare our approach with existing crowd rendering techniques, and the experimental results show that it achieves better rendering performance and visual quality. Our approach is able to render a large crowd composed of tens of thousands of animated instances in real time by managing each type of character data in a single buffer object.
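The authors' GPU data layout and parallelization scheme are not reproduced here; as a hedged sketch of the per-instance level-of-detail selection and visibility culling that such a crowd system relies on, the example below picks an LOD index from camera distance and culls instances with a conservative view-cone test. The thresholds, the cone-based culling stand-in, and all names are illustrative assumptions.

```python
import numpy as np

def select_lod_and_cull(positions, cam_pos, cam_dir, fov_cos=0.5,
                        lod_distances=(20.0, 60.0, 150.0), max_distance=300.0):
    """Return (visible_mask, lod_index) for each crowd instance.

    positions: (N, 3) world-space instance centers
    cam_pos, cam_dir: camera position and normalized view direction
    fov_cos: cosine of the half-angle of a conservative view cone
    lod_distances: thresholds separating LOD levels 0..len(lod_distances)
    """
    to_inst = positions - cam_pos                       # (N, 3)
    dist = np.linalg.norm(to_inst, axis=1)              # (N,)

    # Conservative cone test stands in for full frustum culling.
    cos_angle = (to_inst @ cam_dir) / np.maximum(dist, 1e-6)
    visible = (cos_angle >= fov_cos) & (dist <= max_distance)

    # LOD index grows with distance: 0 = full detail.
    lod = np.searchsorted(np.asarray(lod_distances), dist)
    return visible, lod

# Example: 10,000 instances scattered on the ground plane.
rng = np.random.default_rng(0)
pos = rng.uniform([-200, 0, -200], [200, 0, 200], size=(10000, 3))
vis, lod = select_lod_and_cull(pos, cam_pos=np.array([0.0, 2.0, 0.0]),
                               cam_dir=np.array([0.0, 0.0, 1.0]))
print(vis.sum(), "visible, per-LOD counts:", np.bincount(lod[vis], minlength=4))
```

In a GPU-resident system like the one described, the same per-instance test would run in a compute pass and write compacted instance lists, rather than looping on the CPU.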


2018 ◽  
Author(s):  
William Michael Carter

We are in an era in which digital technology enhances the method and practice of archaeology. In our rush to embrace these technological advances, however, Virtual Archaeology has become a practice for visualizing the archaeological record, yet it is still searching for its methodological and theoretical base. I submit that Virtual Archaeology is the digital making and interrogating of the archaeological unknown. By wayfaring means, through the synergy of the maker, digital tools, and material, archaeologists make meaning of the archaeological record by engaging known archaeological data with the crafting of new knowledge through multimodal reflection and the tacking and cabling of archaeological knowledge within virtual space. Through the 3D (re)imagination of a 16th-century pre-contact Iroquoian longhouse, and by means of community paradata blogging and participatory research, this paper addresses how archaeologists negotiate meaning-making through presence and phenomenology, while also addressing the foundations of the London Charter: namely agency, authority, authenticity, and transparency when virtually representing constructed archaeological knowledge. By combining Ontario Late Woodland longhouse excavation data, archaeological literature, historical accounts, and linguistic research with 3D animation and visual effects production methodologies, and by deploying these assets in a real-time game engine and head-mounted immersive digital platform, archaeologists can interact with, visualize, and interrogate archaeological norms, constructs, and notions. I advocate that by using Virtual Archaeology, archaeologists build meaning by making within 3D space, and that by deploying these 3D assets within a real-time, immersive platform they are able to readily negotiate the past in the present.


2021 ◽  
Vol 11 (16) ◽  
pp. 7687
Author(s):  
Jie Huang ◽  
Guoqing Tian ◽  
Jiancheng Zhang ◽  
Yutao Chen

Unmanned aerial vehicle (UAV) light shows (UAV-LS) have a wow factor and offer advantages in environmental friendliness and controllability compared to traditional fireworks. In this paper, a UAV-LS system is developed that includes a collision-free formation-transformation trajectory planning algorithm, a software package that facilitates animation design and real-time monitoring and control, and the hardware design and realization. In particular, a dynamic task assignment algorithm based on graph theory is proposed to reduce both the impact of UAV collision avoidance on task assignment and the frequency of task assignment during formation transformation. In addition, the software package consists of an animation interface for formation drawing and 3D animation simulation, which supports the monitoring and control of UAVs through a real-time monitoring application. The developed UAV-LS system hardware consists of decision-making, real-time kinematic (RTK) global positioning system (GPS), wireless communication, and UAV platform subsystems. Outdoor experiments using six quadrotors are performed, and details of the implementation of high-accuracy positioning, communication, and computation are presented. The results show that the developed UAV-LS system can successfully complete a light show and that the proposed task assignment algorithm performs better than traditional static ones.
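The paper's dynamic, collision-aware assignment algorithm is not reproduced here; as a hedged sketch of the baseline assignment step in a formation transformation, the example below matches each UAV to one target slot so that total travel cost is minimized, using the classic Hungarian algorithm. The squared-distance cost and the six-agent scenario are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_targets(current, targets):
    """Match UAVs to formation slots minimizing total squared travel distance.

    current, targets: (N, 3) arrays of UAV and slot positions.
    Returns (slot index per UAV, total assigned cost).
    """
    diff = current[:, None, :] - targets[None, :, :]   # (N, N, 3) pairwise offsets
    cost = np.einsum("ijk,ijk->ij", diff, diff)        # squared-distance cost matrix
    rows, cols = linear_sum_assignment(cost)           # Hungarian algorithm
    return cols, cost[rows, cols].sum()

# Example: six quadrotors moving from a line into a hexagon at 10 m altitude.
current = np.column_stack([np.arange(6.0) * 2.0, np.zeros(6), np.full(6, 10.0)])
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
targets = np.column_stack([5.0 * np.cos(angles), 5.0 * np.sin(angles), np.full(6, 10.0)])

assignment, total_cost = assign_targets(current, targets)
print("UAV -> slot:", assignment, "| total squared distance:", round(total_cost, 2))
```

A dynamic scheme like the one proposed would re-run or locally repair such an assignment only when collision avoidance perturbs the trajectories, rather than recomputing it at a fixed rate.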

