Co-Reading as a Generative Ontology

Author(s):  
Simon Biggs

This paper discusses the immersive full-body motion tracking installation Dark Matter, developed by the author and completed in early 2016. The paper outlines the conceptual focus of the project, including the use of the metaphor of dark matter to explore questions around interactive systems and assemblage. The primary technical considerations involved in the project are also outlined. 'Co-reading' is proposed as a framework for a generative ontology, within the context of assemblage theory, deployed within a multimodal multi-agent interactive system.

2017 ◽  
Vol 26 (2) ◽  
pp. 228-246 ◽  
Author(s):  
Tanmay Randhavane ◽  
Aniket Bera ◽  
Dinesh Manocha

The simulation of human behaviors in virtual environments has many applications. In many of these applications, situations arise in which the user has a face-to-face interaction with a virtual agent. In this work, we present an approach for multi-agent navigation that facilitates face-to-face interaction between a real user and a virtual agent that is part of a virtual crowd. To predict whether or not the real user is approaching a virtual agent for a face-to-face interaction, we describe a model of approach behavior for virtual agents. We present a novel interaction velocity prediction (IVP) algorithm that is combined with human body motion synthesis constraints and facial actions to improve the behavioral realism of virtual agents. We combine these techniques with full-body virtual crowd simulation and evaluate their benefits in a user study conducted with an Oculus HMD in an immersive environment. Results of this user study indicate that virtual agents using our interaction algorithms appear more responsive and elicit more reactions from users. Our techniques thus enable face-to-face interactions between a real user and a virtual agent and improve the sense of presence experienced by the user.
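As a rough illustration of the kind of test an interaction velocity prediction might perform, here is a minimal hypothetical sketch (not the authors' IVP algorithm; the function name and the heading and time-to-contact thresholds are assumptions): a user is predicted to be approaching an agent when their velocity points roughly at the agent and the time-to-contact is short.

```python
import numpy as np

def predicts_interaction(user_pos, user_vel, agent_pos,
                         angle_thresh_deg=15.0, ttc_thresh_s=3.0):
    """Guess whether the user is approaching the agent for a
    face-to-face interaction: the user's velocity must point
    roughly at the agent and the time-to-contact must be short.
    (Illustrative simplification, not the published algorithm.)"""
    to_agent = np.asarray(agent_pos, float) - np.asarray(user_pos, float)
    dist = np.linalg.norm(to_agent)
    speed = np.linalg.norm(user_vel)
    if speed < 1e-6:
        return False  # user is standing still: no approach
    cos_angle = np.dot(user_vel, to_agent) / (speed * dist)
    heading_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    time_to_contact = dist / speed
    return bool(heading_deg < angle_thresh_deg and
                time_to_contact < ttc_thresh_s)
```

For example, a user two meters from an agent and walking straight toward it at 1 m/s would be flagged as approaching, while the same user walking away would not.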


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245717
Author(s):  
Shlomi Haar ◽  
Guhan Sundar ◽  
A. Aldo Faisal

Motor-learning literature focuses on simple laboratory tasks because of their controlled nature and the ease of applying manipulations to induce learning and adaptation. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual-reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining a sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls, calculated by the physics engine. We then ran our short-term motor-learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, seeing it all in VR. We found short-term motor-learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can be used for learning real-world tasks in a highly controlled environment, which enables applying visual manipulations, common in laboratory tasks and rehabilitation, to a real-world full-body task. Embodied VR makes it possible to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning components, thus addressing current questions of motor learning in real-world tasks. Such a setup could potentially be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to a lack of embodiment.
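The trajectory validation step described above amounts to comparing matched samples of the real ball path and its physics-engine counterpart. A minimal sketch of such a comparison, assuming time-aligned 2-D table coordinates (the function name and choice of RMSE as the error metric are illustrative, not the authors' exact procedure):

```python
import numpy as np

def trajectory_rmse(real_xy, virtual_xy):
    """Root-mean-square distance between matched samples of a
    real ball trajectory and the corresponding virtual one.
    Both inputs are (N, 2) arrays of time-aligned positions."""
    real = np.asarray(real_xy, dtype=float)
    virt = np.asarray(virtual_xy, dtype=float)
    assert real.shape == virt.shape, "trajectories must be time-aligned"
    per_sample_dist_sq = np.sum((real - virt) ** 2, axis=1)
    return float(np.sqrt(np.mean(per_sample_dist_sq)))
```

A small RMSE over many shots would indicate that the virtual balls track the physical ones closely enough to preserve the sense of embodiment.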


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2108
Author(s):  
Maik Boltes ◽  
Juliane Adrian ◽  
Anna-Katharina Raytarowski

For our understanding of the dynamics inside crowds, reliable empirical data are needed; such data could improve safety and comfort for pedestrians and support the design of models reflecting the real dynamics. A well-calibrated camera system can extract absolute head positions with high accuracy. The inclusion of inertial sensors, or even self-contained full-body motion capturing systems, allows the relative tracking of occluded people or body parts and can capture the locomotion of the whole body even in dense crowds. The newly introduced hybrid system maps the trajectory of the top of the head, coming from a full-body motion tracking system, to the head trajectory of a camera system in global space. The fused data enable the analysis of possible correlations among all observables. In this paper we present an experiment of people passing through a bottleneck and show by example the influences of bottleneck width and motivation on the overall movement, velocity, stepping locomotion and rotation of the pelvis. The hybrid tracking system opens up new possibilities for analyzing pedestrian dynamics inside crowds, such as the space requirement while passing through a bottleneck. The system allows linking any body motion to characteristics describing the situation of a person inside a crowd, such as the density or the movements of other participants nearby.
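Mapping a motion-capture head trajectory into the camera system's global space requires estimating a rigid transform between the two coordinate frames. A minimal 2-D sketch of one standard way to do this, a Kabsch-style least-squares alignment over corresponding head positions (an illustration of the general technique, not the authors' exact calibration procedure):

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rotation R and translation t mapping src points
    onto dst points (Kabsch algorithm in 2-D), so that
    R @ src_i + t approximates dst_i for every correspondence."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With R and t in hand, every sample of the motion-capture head trajectory can be transformed into the camera's global frame before fusing the two data streams.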


Author(s):  
Shlomi Haar ◽  
Guhan Sundar ◽  
A. Aldo Faisal

Motor-learning literature focuses on simple laboratory tasks because of their controlled nature and the ease of applying manipulations to induce learning and adaptation. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual-reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining a sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls, calculated by the physics engine. We then ran our learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, seeing it all in VR. We found learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can be used for learning real-world tasks in a highly controlled environment, which enables applying visual manipulations, common in laboratory tasks and rehabilitation, to a real-world full-body task. Embodied VR makes it possible to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning components, thus addressing current questions of motor learning in real-world tasks. Such a setup can be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to a lack of embodiment.


2020 ◽  
Author(s):  
Karl K. Kopiske ◽  
Daniel Koska ◽  
Thomas Baumann ◽  
Christian Maiwald ◽  
Wolfgang Einhäuser

Most humans can walk effortlessly across uniform terrain, even without paying much attention to it. However, most natural terrain is far from uniform, and we need visual information to maintain stable gait. In a controlled yet naturalistic environment, we simulated terrain difficulty through slip-like perturbations that were either unpredictable (experiment 1) or sometimes preceded by visual cues (experiment 2), while recording eye and body movements using mobile eye tracking and full-body motion tracking. We quantified the distinct roles of eye and head movements in adjusting gaze on different time scales. While motor perturbations mainly influenced head movements, eye movements were primarily affected by visual cues, both immediately following slips and, to a lesser extent, over 5-minute blocks. We found that gaze parameters had already adapted after the first perturbation in each block, with little transfer between blocks. In conclusion, gaze-gait interactions in experimentally perturbed naturalistic walking are adaptive, flexible, and effector-specific.
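Separating the two effectors' roles rests on the standard relation that gaze direction in the world is the sum of head orientation in the world and eye orientation in the head. A minimal 1-D (horizontal-angle) sketch of that decomposition and of summarizing each effector's share of gaze displacement (the function name and the displacement-share summary are illustrative assumptions, not the authors' analysis):

```python
import numpy as np

def effector_contributions(gaze_world_deg, head_world_deg):
    """Split a horizontal gaze trace into head and eye components,
    using gaze-in-world = head-in-world + eye-in-head (angles in
    degrees), then report each effector's share of the total
    absolute gaze displacement."""
    gaze = np.asarray(gaze_world_deg, dtype=float)
    head = np.asarray(head_world_deg, dtype=float)
    eye = gaze - head                         # eye-in-head trace
    head_disp = np.sum(np.abs(np.diff(head)))  # head movement total
    eye_disp = np.sum(np.abs(np.diff(eye)))    # eye movement total
    total = head_disp + eye_disp
    return {"head_share": head_disp / total,
            "eye_share": eye_disp / total}
```

Computing such shares separately for cued and uncued slips, and early versus late in a block, is one simple way to expose the effector-specific effects the study describes.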

