Art-Directed Composition in Dynamic Real Scenes

2021
Author(s):
Lohit Petikam

Art direction is crucial for films and games to maintain a cohesive visual style. It involves carefully controlling visual elements such as lighting and colour to unify the director's vision of a story. With today's computer graphics (CG) technology, 3D animated films and games have become increasingly photorealistic. Unfortunately, art direction with CG tools remains laborious. Since realistic lighting can work against artistic intent, art direction is almost impossible to preserve in real-time and interactive applications. New live applications such as augmented and mixed reality (AR and MR) now demand automatically art-directed compositing under unpredictably changing real-world lighting.

This thesis addresses the problem of dynamically art-directed 3D composition into real scenes. Realism is a basic component of art direction, so we begin by optimising scene-geometry capture in realistic composites. We find low perceptual thresholds for retaining perceived seamlessness with respect to optimised real-scene fidelity. We then propose new techniques for automatically preserving art-directed appearance and shading for virtual 3D characters. Our methods allow artists to specify their intended appearance under different lighting conditions. Unlike previous work, artists can direct and animate stylistic edits that automatically adapt to changing real-world environments. We achieve this with a new framework for look development and art direction built on a novel latent space of varied lighting conditions. For more dynamic stylised lighting, we also propose a new framework for art-directing stylised shadows using novel parametric shadow-editing primitives. This is a first approach that preserves art direction and stylisation under varied lighting in AR/MR.
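As a concrete (if much simplified) illustration of the look-development idea above — artists key appearance edits to representative lighting conditions, and the system blends them as real-world lighting changes — the sketch below interpolates artist-specified look parameters over a hand-crafted lighting descriptor. The descriptor and the inverse-distance weighting are assumptions standing in for the thesis's learned latent space, not its actual method.

```python
import numpy as np

def lighting_descriptor(mean_rgb, dominant_dir):
    """Toy lighting descriptor: mean environment colour plus dominant light
    direction (a stand-in for a learned latent space of lighting conditions)."""
    d = np.asarray(dominant_dir, dtype=float)
    return np.concatenate([np.asarray(mean_rgb, float), d / np.linalg.norm(d)])

def blend_looks(current, keyed_lightings, keyed_looks, eps=1e-6):
    """Inverse-distance blend of artist-specified look parameters.

    keyed_lightings: (N, D) descriptors the artist authored looks for.
    keyed_looks:     (N, K) look parameters (e.g. tint, rim-light strength).
    Returns the look to apply under the `current` lighting descriptor."""
    d = np.linalg.norm(keyed_lightings - current, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ keyed_looks

# Artist keys two looks: one for warm indoor light, one for cool daylight.
keys = np.stack([lighting_descriptor([0.9, 0.7, 0.5], [0, -1, 0]),
                 lighting_descriptor([0.6, 0.7, 0.9], [0.3, -1, 0.2])])
looks = np.array([[1.0, 0.2],    # [shadow warmth, rim strength] (hypothetical)
                  [0.4, 0.8]])
now = lighting_descriptor([0.7, 0.7, 0.7], [0.1, -1, 0.1])
print(blend_looks(now, keys, looks))   # look blended for the current lighting
```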


2021
pp. 1-17
Author(s):
Andrew Fedorovich Lemeshev
Dmitry Dmitrievich Zhdanov
Boris Khaimovich Barladyan

The paper addresses the problem of visual perceptual discomfort inherent in mixed reality systems; more precisely, determining the lighting parameters of virtual-world objects so that they correspond to the lighting conditions of the real world into which the virtual objects are embedded. The paper proposes an effective solution to the problem of reconstructing the coordinates of a light source from an RGBD image of the real world. A detailed description of the algorithm is given, along with the results of a numerical experiment on reconstructing the coordinates of light sources in a model scene. The accuracy of coordinate recovery is analyzed, and the method's limitations are considered: inaccurate determination of the boundaries of objects and their shadows, and the absence of connected boundary regions linking objects to their shadows in the RGBD image of the scene.
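The reconstruction step admits a compact generic formulation: each matched object-boundary point and shadow-boundary point defines a ray toward the light, and the light position is the least-squares intersection of those rays. The sketch below implements only that generic geometric core, with synthetic correspondences standing in for the paper's RGBD boundary extraction.

```python
import numpy as np

def light_from_shadow_pairs(object_pts, shadow_pts):
    """Least-squares point-light position from object/shadow boundary pairs.

    Each shadow point S_i and the object point P_i that casts it define a ray
    from S_i through P_i toward the light; we find the 3D point x minimizing
    the summed squared distance to all such rays."""
    P, S = np.asarray(object_pts, float), np.asarray(shadow_pts, float)
    D = P - S
    D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit directions toward light
    A, b = np.zeros((3, 3)), np.zeros(3)
    for s, d in zip(S, D):
        M = np.eye(3) - np.outer(d, d)             # projects orthogonally to the ray
        A += M
        b += M @ s
    return np.linalg.solve(A, b)

# Synthetic check: a light at (1, 5, 2) casting shadows onto the z = 0 plane.
light = np.array([1.0, 5.0, 2.0])
obj = np.array([[0.5, 1.0, 1.0], [2.0, 2.0, 0.8], [1.5, 0.5, 1.2]])
t = (light[2] / (light[2] - obj[:, 2]))[:, None]   # extend light->object rays to z = 0
shadows = light + t * (obj - light)
print(light_from_shadow_pairs(obj, shadows))       # ≈ [1. 5. 2.]
```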


2019
Vol 2019 (1)
pp. 237-242
Author(s):
Siyuan Chen
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitted radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, overlaid on a real-world luminous environment, until it appeared the whitest. The CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing a virtual stimulus overlaid on the real world.
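The "degree of chromatic adaptation" finding can be made concrete with the standard CAT02 von Kries-style transform from CIECAM02, in which a factor D controls how completely the observer adapts to the adopted white. The sketch below is a textbook rendering of that transform, not the authors' experimental model; the sample values are illustrative only.

```python
import numpy as np

# CAT02 matrix from CIECAM02: XYZ -> sharpened cone-like responses.
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, white_src, white_dst, D):
    """Von Kries-style adaptation from white_src to white_dst.

    D in [0, 1] is the degree of chromatic adaptation: D = 1 means the observer
    fully adapts to the source white; D < 1 means partial adaptation, as the
    study reports for virtual stimuli overlaid on the real world."""
    rgb = M_CAT02 @ np.asarray(xyz, float)
    ws = M_CAT02 @ np.asarray(white_src, float)
    wd = M_CAT02 @ np.asarray(white_dst, float)
    gain = D * (wd / ws) + (1.0 - D)   # partial von Kries gains per channel
    return np.linalg.solve(M_CAT02, gain * rgb)

# A surface colour seen under warm illuminant A, mapped toward D65 with
# incomplete adaptation (D = 0.8 is an illustrative value, not the study's).
xyz = [40.0, 35.0, 20.0]
white_A, white_D65 = [109.85, 100.0, 35.58], [95.047, 100.0, 108.883]
print(cat02_adapt(xyz, white_A, white_D65, D=0.8))
```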


2019
Vol 12 (4)
pp. 1-33
Author(s):
Telmo Adão
Luís Pádua
David Narciso
Joaquim João Sousa
Luís Agrellos
...

This article presents MixAR, a full-stack system that visualizes virtual reconstructions seamlessly integrated into the real scene (e.g., overlaid on ruins) and that visitors can freely explore in situ. In addition to operating with several tracking approaches to cope with a wide variety of environmental conditions, the MixAR system implements an extended-environment feature that gives visitors insight into surrounding points of interest to visit during mixed reality experiences (rough positional tracking). A procedural modelling tool streamlines the production of augmentation models. Tests carried out with participants in an in-field MR experience to assess comfort, satisfaction, and presence/immersion are also presented, together with their results. Ease of adapting to the experience, a desire to see the system in museums, and heightened curiosity and motivation were positive points in the evaluation. Regarding sickness and comfort, the low number of complaints is satisfactory. Model illumination/re-lighting must be addressed in future work to improve user engagement with the experiences provided by the MixAR system.
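The extended-environment feature — surfacing nearby points of interest from rough positional tracking — reduces, at its simplest, to a geofenced proximity test against the visitor's GPS fix. The sketch below is a hypothetical illustration of that idea (the POI names, coordinates, and radius are invented), not MixAR's implementation.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearby_pois(visitor, pois, radius_m=75.0):
    """Return (distance, name) pairs for POIs within radius_m of the visitor."""
    hits = []
    for name, pos in pois.items():
        d = haversine_m(*visitor, *pos)
        if d <= radius_m:
            hits.append((d, name))
    return sorted(hits)

# Hypothetical ruin-site POIs (coordinates invented for illustration).
pois = {"Roman tower": (41.1460, -7.7550), "Chapel ruin": (41.1465, -7.7542)}
print(nearby_pois((41.1462, -7.7548), pois))
```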


Author(s):  
Arvid Bell
Alexander Bollfrass

Current wargaming techniques are effective training and research instruments for military scenarios with fixed tools and boundaries on the problem. Control cells composed of officiants who adjudicate and evaluate moves enforce these boundaries. Real-world crises, however, unfold in several dimensions in a chaotic context, a condition requiring decision-making under deep uncertainty. In this article, we assess how pedagogical exercises can be designed to capture this level of complexity and describe a new framework for developing deeply immersive exercises. We propose a method for designing crisis environments that are dynamic, deep, and decentralized (3D). These obviate the need for a control cell and enhance the usefulness of exercises in preparing military and policy practitioners by better replicating real-world decision-making dynamics. The article presents the application of this 3D method, which integrates findings from wargame and negotiation simulation design into immersive crisis exercises. We share observations from the research, design, and execution of "Red Horizon," an immersive crisis exercise held three times at Harvard University with senior civilian and military participants from multiple countries, and further explore connections to contemporary trends in international relations scholarship.


2006
Vol 5 (3)
pp. 53-58
Author(s):
Roger K. C. Tan
Adrian David Cheok
James K. S. Teh

For better or worse, technological advancement has changed the world: at a professional level, working executives are required to spend more hours in the office or on business trips, while at a social level the population (especially the younger generation) is glued to the computer, playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system that allows pets to play mixed reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait within a predefined area in the real world; an infra-red camera tracks the pet's movements and translates them into the system's virtual world, mapping them to a virtual pet avatar running after a virtual human avatar. The human player plays by controlling the human avatar's movements in the virtual world, which in turn drives the physical bait in the real world, so the bait moves as the human avatar does. This unique way of playing computer games gives rise to a whole new form of mixed reality interaction between pet owners and their pets, bringing technology and its influence on leisure and social activities to the next level.
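The game loop described above is, at its core, a pair of coordinate mappings: the IR camera's pixel-space pet position is mapped into the virtual arena, and the human avatar's virtual position is mapped back to the physical bait actuator's workspace. Below is a minimal sketch of those two mappings; all frame extents are invented for illustration and are not the system's actual calibration.

```python
def remap(value, src_min, src_max, dst_min, dst_max):
    """Linearly remap value from one coordinate range to another."""
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

def camera_to_arena(px, py, cam=(640, 480), arena=(10.0, 10.0)):
    """IR-camera pixel -> virtual-arena coordinates (drives the pet avatar)."""
    return (remap(px, 0, cam[0], 0.0, arena[0]),
            remap(py, 0, cam[1], 0.0, arena[1]))

def arena_to_bait(x, y, arena=(10.0, 10.0), table=(1.2, 1.2)):
    """Virtual human-avatar position -> physical bait coordinates (metres)."""
    return (remap(x, 0.0, arena[0], 0.0, table[0]),
            remap(y, 0.0, arena[1], 0.0, table[1]))

# Pet detected at pixel (320, 120); human avatar standing at arena (7.5, 2.5).
print(camera_to_arena(320, 120))   # -> (5.0, 2.5): pet avatar position
print(arena_to_bait(7.5, 2.5))     # -> (0.9, 0.3): bait target on the table
```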


2019
Vol 5 (10)
pp. 79
Author(s):
Tunai Porto Marques
Alexandra Branzan Albu
Maia Hoeberechts

Underwater images are often acquired in sub-optimal lighting conditions, particularly at great depths, where the absence of natural light demands the use of artificial lighting. Low-lighting images pose a challenge for both manual and automated analysis, since regions of interest can have low visibility. A new framework capable of significantly enhancing these images is proposed in this article. The framework is based on a novel dehazing mechanism that considers local contrast information in the input images and offers a solution to three common disadvantages of current single-image dehazing methods: oversaturation of radiance, lack of scale-invariance, and creation of halos. A novel low-lighting underwater image dataset, OceanDark, is introduced to assist in the development and evaluation of the proposed framework. Experimental results and a comparison with other underwater-specific image enhancement methods show that the proposed framework significantly improves visibility in low-lighting underwater images of different scales without creating undesired dehazing artifacts.
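For context, the canonical single-image dehazing baseline such frameworks build on is the dark channel prior (He et al.): estimate per-pixel transmission from local channel minima, take the atmospheric light from the haziest pixels, and invert the haze imaging model. The sketch below is that generic baseline only, not the proposed contrast-aware mechanism; its oversaturation, scale, and halo issues are exactly what the paper addresses.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dcp(img, patch=15, omega=0.95, t0=0.1):
    """Generic dark-channel-prior dehazing for a float RGB image in [0, 1].

    Haze model: I = J * t + A * (1 - t). Estimate A and t, then invert for J."""
    # Dark channel: min over colour channels, then min over a local patch.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission from the dark channel of the A-normalized image.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Invert the haze model to recover the scene radiance J.
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Usage sketch: out = dehaze_dcp(hazy_rgb_array_in_0_1)
```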

