Supporting Motion Capture Acting Through a Mixed Reality Application

Author(s):  
Daniel Kade ◽  
Rikard Lindell ◽  
Hakan Ürey ◽  
Oğuzhan Özcan

Current and future animations seek more human-like motions to create believable animations for computer games, animated movies and commercial spots. A widely used technology is motion capture, which records actors' movements to enrich the motions and emotions of digital avatars. However, a motion capture environment poses challenges to actors, such as short preparation times and the need to rely heavily on their acting and imagination skills. To support these actors, we developed a mixed reality application that displays digital environments during a performance while letting the actor see both the real and the virtual world. We tested our prototype with six traditionally trained theatre and TV actors. The actors indicated that our application helped them get into the demanded acting moods with fewer unwanted emotions. The acting scenario was also better understood, with less need for explanation, than when the scenario was merely discussed, as is common in theatre acting.

2018 ◽  
pp. 1780-1807


2006 ◽  
Vol 5 (3) ◽  
pp. 53-58 ◽  
Author(s):  
Roger K. C. Tan ◽  
Adrian David Cheok ◽  
James K. S. Teh

For better or worse, technological advancement has changed the world. At a professional level, demands on working executives require more hours in the office or on business trips; at a social level, the population (especially the younger generation) is glued to the computer, playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system that allows pets to play mixed reality computer games with humans via custom-built technologies and applications. During game-play, the real pet chases a physical movable bait within a predefined area in the real world; an infra-red camera tracks the pet's movements and translates them into the virtual world of the system, mapping them onto a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world, which in turn drives the physical movable bait in the real world, so the bait moves as the human avatar does. This unique way of playing a computer game gives rise to a whole new form of mixed reality interaction between pet owners and their pets, bringing technology and its influence on leisure and social activities to the next level.
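The coupling the abstract describes, where the physical bait tracks the human avatar, amounts to mapping coordinates between the virtual arena and the physical play area. A minimal sketch of such a mapping (the function name and arena bounds are hypothetical illustrations, not taken from the paper):

```python
def virtual_to_real(pos_v, virtual_bounds, real_bounds):
    """Map the human avatar's 2D position in the virtual arena to the
    bait's target position in the physical arena via linear rescaling.
    Bounds are ((x_min, y_min), (x_max, y_max)) corner pairs."""
    (vx0, vy0), (vx1, vy1) = virtual_bounds
    (rx0, ry0), (rx1, ry1) = real_bounds
    x = rx0 + (pos_v[0] - vx0) / (vx1 - vx0) * (rx1 - rx0)
    y = ry0 + (pos_v[1] - vy0) / (vy1 - vy0) * (ry1 - ry0)
    return (x, y)
```

In practice the bait controller would move toward this target position continuously while the infra-red tracker streams the pet's position back into the virtual world along the inverse mapping.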


2015 ◽  
Vol 78 (2-2) ◽  
Author(s):  
Ismahafezi Ismail ◽  
Mohd Shahrizal Sunar ◽  
Hoshang Kolivand

Realistic humanoid 3D character movement is very important in computer games, movies, virtual reality and mixed reality environments. This paper presents a technique for deforming motion style using Motion Capture (MoCap) data in a computer animation system. Using MoCap data, natural human action styles can be deformed. However, the humanoid structure hierarchy in MoCap data is very complex. Our method allows a humanoid character to respond naturally to user motion input. Unlike existing 3D humanoid character motion editors, our method produces a realistic final result and simulates new dynamic humanoid motion styles through a simple user interface control.
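The hierarchy the abstract mentions is the tree of joints in the MoCap skeleton; style deformation layers per-joint offsets on top of the captured joint angles, which then propagate down the chain. The sketch below illustrates this general idea on a simplified 2D kinematic chain; it is an illustrative reconstruction under assumed conventions, not the authors' system:

```python
import numpy as np

def forward_kinematics(parents, local_angles, bone_lengths, style_offsets=None):
    """Walk the joint hierarchy (parents[i] is the index of joint i's
    parent, -1 for the root; parents must precede children) and accumulate
    2D joint positions. style_offsets adds per-joint angle deformations
    on top of the captured MoCap angles, so a style edit on one joint
    propagates to all of its descendants."""
    local_angles = np.asarray(local_angles, dtype=float)
    if style_offsets is not None:
        local_angles = local_angles + np.asarray(style_offsets, dtype=float)
    n = len(parents)
    world_angle = np.zeros(n)
    pos = np.zeros((n, 2))
    for i in range(n):
        p = parents[i]
        if p < 0:                      # root joint sits at the origin
            world_angle[i] = local_angles[i]
        else:                          # accumulate parent rotation, offset by bone
            world_angle[i] = world_angle[p] + local_angles[i]
            pos[i] = pos[p] + bone_lengths[i] * np.array(
                [np.cos(world_angle[i]), np.sin(world_angle[i])])
    return pos
```

Real MoCap skeletons use 3D rotations (often quaternions) per joint, but the propagation of offsets through the parent chain works the same way.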


2021 ◽  
Vol 3 (1) ◽  
pp. 6-7
Author(s):  
Kathryn MacCallum

Mixed reality (MR) provides new opportunities for creative and innovative learning. MR supports the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time (MacCallum & Jamieson, 2017). The MR continuum links both virtual and augmented reality: virtual reality (VR) immerses learners within a completely virtual world, while augmented reality (AR) blends the real and the virtual world. MR embraces the spectrum between the real and the virtual; the mix of the virtual and real worlds may vary depending on the application. The integration of MR into education provides specific affordances which make it uniquely suited to supporting learning (Parson & MacCallum, 2020; Bacca, Baldiris, Fabregat, Graf & Kinshuk, 2014). These affordances give students unique opportunities to learn and to develop 21st-century learning capabilities (Schrier, 2006; Bower, Howe, McCredie, Robinson, & Grover, 2014).

In general, most integration of MR in the classroom tends to treat students as consumers of these experiences. However, enabling students to create their own experiences allows a wider range of learning outcomes to be incorporated into the learning experience. Making students the creators and designers of their own MR experiences provides a unique opportunity to integrate learning across the curriculum and supports the development of computational thinking and stronger digital skills. The integration of student-created artefacts in particular has been shown to provide greater engagement and outcomes for all students (Ananiadou & Claro, 2009).

In the past, the development of student-created MR experiences has been difficult, especially due to the steep learning curve of technology adoption and the overall expense of acquiring the necessary tools. The recent development of low-cost mobile and online MR tools and technologies has, however, provided new opportunities for a scaffolded approach to the development of student-driven artefacts that does not require significant technical ability (MacCallum & Jamieson, 2017). Owing to these advances, students can now create their own MR digital experiences, which can drive learning across the curriculum.

This presentation explores how teachers at two high schools in New Zealand have started to explore and integrate MR into their STEAM classes. It draws on the results of a Teaching and Learning Research Initiative (TLRI) project investigating the experiences and reflections of a group of secondary teachers exploring the use and adoption of mixed reality (augmented and virtual reality) for cross-curricular teaching. The presentation will show how these teachers have started to engage with MR to support the principles of student-created digital experiences integrated into STEAM domains.


2016 ◽  
Author(s):  
Felix D. Schönbrodt ◽  
Jens B. Asendorpf

Computer games are advocated as a promising tool for bridging the gap between the controllability of a lab experiment and the mundane realism of a field experiment. At the same time, many authors stress the importance of observing real behavior instead of asking participants about possible or intended behaviors. In this article we introduce an online virtual social environment inhabited by autonomous agents, including the virtual spouse of the participant. Participants can freely explore the virtual world and interact with any other inhabitant, allowing the expression of spontaneous and unprompted behavior. We investigated the usefulness of this game for assessing interactions with a virtual spouse and their relations to intimacy and autonomy motivation as well as relationship satisfaction with the real-life partner. Both the intimacy motive and satisfaction with the real-world relationship showed significant correlations with aggregated in-game behavior, indicating that some transference between the real world and the virtual world took place. In addition, a process analysis of interaction quality revealed that relationship satisfaction and the intimacy motive had different effects on the initial status and the time course of interaction quality. Implications for psychological assessment using virtual social environments are discussed.


Proceedings ◽  
2019 ◽  
Vol 31 (1) ◽  
pp. 83
Author(s):  
Lasala ◽  
Jara ◽  
Alamán

During the last decade some technologies have achieved the maturity necessary for the widespread emergence of virtual worlds, and many people are living an alternative life in them. One objective of this paper is to argue that the blending of real and virtual worlds has been happening for centuries and is in fact the mark of "civilization". This project presents a proposal to improve student motivation in the classroom through a new form of recreation of a mixed reality environment. To this end, two applications have been created that work together across the real and virtual environments: "Virtual Craft" and "Virtual Touch". Virtual Craft relates to the real world, and Virtual Touch to the virtual world. These applications are in constant communication with each other, since both students and teachers carry out actions that influence the real or virtual world. Gamification mechanics were used in the recreated environment to motivate students to carry out the activities assigned by the teacher. To evaluate the proposal, a pilot experiment with Virtual Craft was carried out at a secondary school in Valls (Spain).


2021 ◽  
pp. 1-17
Author(s):  
Andrew Fedorovich Lemeshev ◽  
Dmitry Dmitrievich Zhdanov ◽  
Boris Khaimovich Barladyan

The paper deals with the problem of visual perception discomfort inherent in mixed reality systems; more precisely, with determining lighting parameters for objects of the virtual world that correspond to the lighting conditions of the real world into which those objects are embedded. The paper proposes an effective solution to the problem of reconstructing the coordinates of a light source from an RGBD image of the real world. A detailed description of the algorithm and the results of a numerical experiment on reconstructing the coordinates of light sources in a model scene are given. The accuracy of coordinate recovery is analyzed, and the limitations of the method are considered: inaccuracy in determining the boundaries of objects and their shadows, and the lack of interconnected boundary regions of objects and their shadows in the RGBD image of the scene.
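The kind of reconstruction the abstract outlines can be illustrated as follows: deproject paired object and shadow pixels into 3D using the depth channel and camera intrinsics, then recover the light position as the least-squares intersection of the rays running from each shadow point through the object point that casts it. The helper names and the pinhole camera model below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth into camera-space 3D
    under a standard pinhole model with focal lengths fx, fy and
    principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def estimate_light_position(object_pts, shadow_pts):
    """Estimate a point light's 3D position as the least-squares
    intersection of rays from each shadow point through the object point
    casting it: minimize the summed squared distance from the unknown
    light to every ray."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p_obj, p_sh in zip(object_pts, shadow_pts):
        d = p_obj - p_sh
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p_sh
    return np.linalg.solve(A, b)         # fails if all rays are parallel
```

With noisy boundaries, each object/shadow pair yields a slightly wrong ray, which is why the abstract flags boundary inaccuracy as the main limitation: the least-squares intersection degrades as the rays scatter.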


2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitting radiations. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus overlaid on a real-world luminous environment until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
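The "degree of chromatic adaptation" the study measures is commonly modeled with a von Kries-style transform in which a factor D controls how completely the visual system adapts to the adopted white (D = 1: complete adaptation; D = 0: none). A sketch using the CAT02 matrix from CIECAM02 follows; this is a generic illustration of the concept, not necessarily the model used in the study:

```python
import numpy as np

# CAT02 matrix from CIECAM02: XYZ -> sharpened cone-like (LMS) responses
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def von_kries_adapt(xyz, xyz_white_src, xyz_white_dst, D=1.0):
    """Adapt a tristimulus value from a source white to a destination
    white with a von Kries scaling in CAT02 space, blended by the
    degree-of-adaptation factor D."""
    lms = M_CAT02 @ xyz
    lms_ws = M_CAT02 @ xyz_white_src
    lms_wd = M_CAT02 @ xyz_white_dst
    gain = D * (lms_wd / lms_ws) + (1.0 - D)   # per-channel von Kries gain
    return np.linalg.inv(M_CAT02) @ (gain * lms)
```

A lower measured D for stimuli overlaid on the real world, as the study reports, means the adjusted "whitest" settings fall between the two illuminant whites rather than fully tracking the ambient CCT.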


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential for use in architecture and design, where buildings can be superimposed on existing locations to render 3D visualisations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that achieves real-time occlusion by harnessing deep learning, using a semantic segmentation technique for outdoor landscape design simulation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
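Once a segmentation network has labeled the real occluders in a camera frame (e.g. existing trees or buildings standing in front of the planned structure), occlusion handling reduces to suppressing the virtual layer wherever the occluder mask is set. A minimal compositing sketch, illustrative only and not Dr Fukuda's system:

```python
import numpy as np

def composite_with_occlusion(real_rgb, virtual_rgb, virtual_alpha, occluder_mask):
    """Blend a rendered virtual layer over the real camera frame, keeping
    real pixels wherever the per-pixel occluder_mask (1 = real object in
    front) says a real object should hide the virtual content.
    real_rgb, virtual_rgb: (H, W, 3) float arrays in [0, 1];
    virtual_alpha, occluder_mask: (H, W) float arrays in [0, 1]."""
    alpha = virtual_alpha * (1.0 - occluder_mask)  # zero virtual coverage behind occluders
    alpha = alpha[..., None]                       # broadcast over RGB channels
    return alpha * virtual_rgb + (1.0 - alpha) * real_rgb
```

Running the segmentation network per frame is what makes the occlusion "real-time": the mask tracks moving occluders without any manual matting.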


Author(s):  
Mark Grimshaw-Aagaard

Mark Grimshaw-Aagaard addresses the role of sound in the creation of presence in virtual and actual worlds. He argues that imagination is a central part of the generation and selection of perceptual hypotheses—models of the world in which we can act—that emerge from what Grimshaw-Aagaard calls the “exo-environment” (the sensory input) and the “endo-environment” (the cognitive input). Grimshaw-Aagaard further divides the exo-environment into a primarily auditory and a primarily visual dimension and he deals with the actual world of his own apartment and the virtual world of first-person-shooter computer games in order to exemplify how we perceptually construct an environment that allows for the creation of presence.

