Gaze Coordination of Groups in Dynamic Events – A Tool to Facilitate Analyses of Simultaneous Gazes Within a Team

2021 ◽  
Vol 12 ◽  
Author(s):  
Frowin Fasold ◽  
André Nicklas ◽  
Florian Seifriz ◽  
Karsten Schul ◽  
Benjamin Noël ◽  
...  

The performance and success of a group working as a team toward a common goal depend on the individuals’ skills and the collective coordination of their abilities. On the perceptual level, individual gaze behavior is reasonably well investigated. The coordination of visual skills within a team, however, has been examined only in laboratory studies; the transfer of this knowledge to field studies and its applicability to real-life situations have so far been neglected. This is mainly because a methodological approach, along with a suitable evaluation procedure, for analyzing gaze coordination within a team during highly dynamic events outside the lab is still missing. This study was therefore conducted to develop a tool for investigating the coordinated gaze behavior of a team of three people pursuing a common goal in a dynamic real-world scenario: a three-person basketball referee team adjudicating a game. Mobile eye-tracking devices and a custom-built software tool for simultaneously analyzing the gaze data of three participants allowed, for the first time, the coordinated gaze behavior of three people to be investigated in a highly dynamic setting. Overall, the study provides a new method for investigating the coordinated gaze behavior of a three-person team in specific tasks. The method can also be applied to research questions about teams in dynamic real-world scenarios, offering a deeper look at the interactions and behavior patterns of people in group settings (for example, in team sports).
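The kind of simultaneous analysis such a tool performs can be sketched minimally: resample the three referees' gaze streams onto a common timeline and measure how often all three fixate the same area of interest. All timestamps, AOI labels, and helper names below are hypothetical illustrations, not the study's actual software:

```python
from bisect import bisect_right

def sample_at(stream, t):
    """Return the AOI of the most recent sample at or before time t (ms)."""
    times = [ts for ts, _ in stream]
    i = bisect_right(times, t) - 1
    return stream[i][1] if i >= 0 else None

def joint_attention_ratio(streams, t0, t1, step):
    """Fraction of a common timeline on which all streams fixate the same AOI."""
    n_steps = int(round((t1 - t0) / step)) + 1
    hits = 0
    for k in range(n_steps):
        t = t0 + k * step
        aois = [sample_at(s, t) for s in streams]
        if aois[0] is not None and all(a == aois[0] for a in aois):
            hits += 1
    return hits / n_steps

# Hypothetical gaze streams of three referees: (timestamp_ms, fixated AOI)
ref_a = [(0, "ball"), (1000, "player7"), (2000, "ball")]
ref_b = [(0, "ball"), (1200, "basket"), (2100, "ball")]
ref_c = [(0, "ball"), (900, "player7"), (2000, "ball")]

ratio = joint_attention_ratio([ref_a, ref_b, ref_c], 0, 2500, 100)
```

Resampling onto a shared clock sidesteps the fact that mobile eye trackers rarely deliver frame-aligned samples across devices.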

In real life, fixing faults in appliances, vehicles, and everyday equipment is difficult. Normally, mechanics or service centers are approached to set things right; nowadays users attempt minor repairs themselves, and, for lack of knowledge, technicians are hired to resolve the defects. Augmented reality (AR) helps with the minor repair of appliances and vehicles by turning real-world problems into interactive, digitally manipulated ones. Information about the environment and its objects is overlaid on the real-world scene with the help of a smartphone to sort out day-to-day issues. With advanced AR technologies, AR cameras are incorporated into the smartphone application so that the user can interact with information in the surrounding world through digital manipulation. The Unity game engine, which supports more than 25 platforms, is used for an efficient and flexible AR workflow. Likewise, Blender, an open-source 3D graphics tool, is used to create animated films, 3D print models, motion graphics, and the interactive 3D application that assists the user. C#, a versatile language, handles user input and manipulates objects to switch between the scenes displayed to the user. Other software, such as Vuforia, the Android SDK, and the Java JDK, is used to build and deploy APK files to smartphones. The application starts with a user interface into which 3D objects created in Blender are loaded. It opens a troubleshooting guide for selecting the car brand, since brands differ in their repair techniques, and then presents video solutions for the repairs through the AR camera. In this way, troubleshooting techniques are revealed to the user so that minor faults can be repaired by the users themselves.


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Felix Wang ◽  
Julian Wolf ◽  
Mazda Farshad ◽  
Mirko Meboldt ◽  
Quentin Lohmeyer

Eye tracking (ET) has been shown to reveal the wearer’s cognitive processes through measurement of the central point of foveal vision. Traditional ET evaluation methods, however, have not been able to account for the wearer’s use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. In an evaluation of two AOIs in a real surgical procedure, incorporating the near-peripheral field of vision increased the proportion of interpretable fixation data considerably, from 23.8% to 78.3% for the AOI screw and from 4.5% to 67.2% for the AOI screwdriver. Additionally, the evaluation of a multi-OGD time-series representation showed the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
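Once the AOI has been detected, the OGD computation itself reduces to a point-to-region distance. A minimal sketch, assuming the detected AOI is approximated by an axis-aligned bounding box (the box coordinates and gaze points below are hypothetical, not taken from the study):

```python
import math

def object_gaze_distance(gaze, box):
    """Minimal 2D Euclidean pixel distance from a gaze point to an AOI
    bounding box (x_min, y_min, x_max, y_max); 0 if the gaze lies inside."""
    gx, gy = gaze
    x0, y0, x1, y1 = box
    dx = max(x0 - gx, 0, gx - x1)  # horizontal gap to the box, 0 if overlapping
    dy = max(y0 - gy, 0, gy - y1)  # vertical gap to the box, 0 if overlapping
    return math.hypot(dx, dy)

# Hypothetical per-frame AOI detection for a screw (pixel coordinates)
screw_box = (300, 200, 340, 260)
foveal = object_gaze_distance((310, 230), screw_box)      # gaze on the AOI
peripheral = object_gaze_distance((300, 300), screw_box)  # near-peripheral gaze
```

Evaluating this distance per video frame yields the continuous gaze-based time series described above; thresholding it separates foveal fixations from near-peripheral ones.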


2021 ◽  
Vol 15 ◽  
Author(s):  
Jan Drewes ◽  
Sascha Feder ◽  
Wolfgang Einhäuser

How vision guides gaze in realistic settings has been researched for decades. Human gaze behavior is typically measured in laboratory settings that are well controlled but feature-reduced and movement-constrained, in sharp contrast to real-life gaze control, which combines eye, head, and body movements. Previous real-world research has shown environmental factors such as terrain difficulty to affect gaze; however, real-world settings are difficult to control or replicate. Virtual reality (VR) offers the experimental control of a laboratory yet approximates the freedom and visual complexity of the real world (RW). We measured gaze data in 8 healthy young adults during walking in the RW and during simulated locomotion in VR. Participants walked along a pre-defined path inside an office building, which included different terrains such as long corridors and flights of stairs. In VR, participants followed the same path in a detailed virtual reconstruction of the building. We devised a novel hybrid control strategy for movement in VR: participants did not actually translate; forward movements were controlled by a hand-held device, while rotational movements were executed physically and transferred to the VR. We found significant effects of terrain type (flat corridor, staircase up, and staircase down) on gaze direction, on the spatial spread of gaze direction, and on the angular distribution of gaze-direction changes. The factor world (RW vs. VR) affected the angular distribution of gaze-direction changes, saccade frequency, and head-centered vertical gaze direction. The latter effect vanished when gaze was referenced to a world-fixed coordinate system and was likely due to specifics of headset placement, which cannot confound any other analyzed measure. Importantly, we did not observe a significant interaction between the factors world and terrain for any of the tested measures. This indicates that differences between terrain types are not modulated by the world.
The overall dwell time on navigational markers did not differ between worlds. The similar dependence of gaze behavior on terrain in the RW and in VR indicates that our VR captures real-world constraints remarkably well. High-fidelity VR combined with naturalistic movement control therefore has the potential to narrow the gap between the experimental control of a lab and ecologically valid settings.
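One of the measures above, the angular distribution of gaze-direction changes, can be derived directly from a sequence of gaze-direction samples. A minimal sketch, with hypothetical (azimuth, elevation) samples in degrees:

```python
import math

def gaze_change_angles(directions):
    """Directions (degrees, counterclockwise from horizontal) of successive
    gaze-direction changes, from (azimuth, elevation) samples in degrees."""
    angles = []
    for (a0, e0), (a1, e1) in zip(directions, directions[1:]):
        da, de = a1 - a0, e1 - e0
        if da or de:  # skip samples with no direction change
            angles.append(math.degrees(math.atan2(de, da)) % 360)
    return angles

# Hypothetical gaze samples: a rightward, an upward, then a leftward shift
angles = gaze_change_angles([(0, 0), (5, 0), (5, 5), (0, 5)])
```

Histogramming these angles per condition gives the angular distributions that were compared between terrains and between worlds.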


2014 ◽  
Vol 25 (4) ◽  
pp. 233-238 ◽  
Author(s):  
Martin Peper ◽  
Simone N. Loeffler

Current ambulatory technologies are highly relevant for neuropsychological assessment and treatment, as they provide a gateway to real-life data. Ambulatory assessment of cognitive complaints, skills, and emotional states in natural contexts provides information with greater ecological validity than traditional assessment approaches. This issue presents an overview of current technological and methodological innovations, opportunities, problems, and limitations of these methods designed for the context-sensitive measurement of cognitive, emotional, and behavioral function. The usefulness of selected ambulatory approaches is demonstrated and their relevance for an ecologically valid neuropsychology is highlighted.


2013 ◽  
Vol 9 (2) ◽  
pp. 173-186 ◽  
Author(s):  
Mari Wiklund

Asperger syndrome (AS) is a form of high-functioning autism characterized by qualitative impairment in social interaction. People with AS typically show abnormal nonverbal behaviors, often manifested in the avoidance of eye contact. Gaze constitutes an important interactional resource, and an AS person’s tendency to avoid eye contact may affect the fluidity of conversations and cause misunderstandings. For this reason, it is important to know precisely how this avoidance is done and how it affects the interaction. The objective of this article is to describe the gaze behavior of preadolescent AS children in institutional multiparty conversations. Methodologically, the study is based on conversation analysis and a multimodal study of interaction. The findings show that three main patterns are used for avoiding eye contact: 1) fixing one’s gaze straight ahead; 2) letting one’s gaze wander around; and 3) looking at one’s own hands when speaking. The informants of this study do not look at their interlocutors at all at the beginning or in the middle of their turn; however, they sometimes turn to look at the interlocutors at the end of their turn. This shows that these children are able to use gaze as a source of feedback. When listening, looking at the speaker also seems to be easier for them than looking at the listeners when speaking.


This five-volume survey of research on psychology is part of a series undertaken by the ICSSR since 1969 covering various disciplines in the social sciences. Volume One of this survey, Cognitive and Affective Processes, discusses developments in the study of cognitive and affective processes within the Indian context. It offers an up-to-date assessment of theoretical developments and empirical studies in the rapidly evolving fields of cognitive science, applied cognition, and positive psychology. It also analyses how pedagogy responds to a shift in the practices of knowing and learning. Additionally, drawing upon insights from related fields, it proposes epithymetics (desire studies) as an upcoming field of research, and the volume investigates the impact of evolving cognitive and affective processes in Indian research and real-life contexts. The development of cognitive capability distinguishes human beings from other species: it allows the creation and use of complex verbal symbols, facilitates imagination, and empowers humans to function at an abstract level. However, much of the vitality characterizing human life is owed to diverse emotions and desires. This has made the study of cognition and affect a frontier area of psychology. With this in view, this volume focuses on delineating cognitive-scientific contributions, cognition in the educational context, diverse applications of cognition, the psychology of desire, and positive psychology. The five chapters comprising this volume approach scholarly developments in the fields of cognition and affect in innovative ways and address basic as well as applied issues.


2021 ◽  
Author(s):  
Amarildo Likmeta ◽  
Alberto Maria Metelli ◽  
Giorgia Ramponi ◽  
Andrea Tirinzoni ◽  
Matteo Giuliani ◽  
...  

Abstract: In real-world applications, inferring the intentions of expert agents (e.g., human operators) can be fundamental to understanding how possibly conflicting objectives are managed, helping to interpret the demonstrated behavior. In this paper, we discuss how inverse reinforcement learning (IRL) can be employed to retrieve the reward function implicitly optimized by expert agents acting in real applications. Scaling IRL to real-world cases has proved challenging, as typically only a fixed dataset of demonstrations is available and further interactions with the environment are not allowed. For this reason, we resort to a class of truly batch model-free IRL algorithms and present three application scenarios: (1) the high-level decision-making problem in a highway driving scenario, (2) inferring user preferences in a social network (Twitter), and (3) the management of the water release in the Como Lake. For each of these scenarios, we provide a formalization, experiments, and a discussion to interpret the obtained results.
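The truly batch model-free algorithms used here are beyond a short sketch, but the core IRL idea of recovering reward weights that explain demonstrated behavior can be illustrated with a simple max-margin feature-matching toy, which is not the paper's method; the feature names, trajectories, and candidate weights below are all hypothetical:

```python
def feature_expectations(trajectory, gamma=0.9):
    """Discounted empirical feature expectations of one demonstrated
    trajectory, where each step is a tuple of feature values."""
    mu = [0.0] * len(trajectory[0])
    for t, phi in enumerate(trajectory):
        for i, f in enumerate(phi):
            mu[i] += (gamma ** t) * f
    return mu

def best_weights(expert_mu, other_mus, candidates):
    """Pick the reward weights maximizing the margin between the expert's
    return and the best alternative return (max-margin flavor)."""
    def ret(w, mu):
        return sum(wi * mi for wi, mi in zip(w, mu))
    return max(candidates,
               key=lambda w: ret(w, expert_mu) - max(ret(w, m) for m in other_mus))

# Hypothetical highway features per step: (keeps_lane, tailgates)
expert_mu = feature_expectations([(1, 0), (1, 0)])
other_mus = [feature_expectations([(0, 1), (0, 1)])]
weights = best_weights(expert_mu, other_mus, [(1, 0), (0, 1), (0.5, 0.5)])
```

Because it needs only the fixed demonstration dataset and no further environment interaction, this kind of scoring respects the batch constraint the abstract describes, if not its actual algorithms.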


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Wiebren Zijlstra ◽  
Eleftheria Giannouli

Abstract
Background: Based on a conceptual framework, Kuspinar and colleagues analysed life-space mobility in community-dwelling older adults. However, a number of earlier mobility studies that used the same framework remained undiscussed. This correspondence article addresses similarities and differences between these studies and highlights issues that need to be addressed to improve our understanding of mobility determinants in older adults.
Findings: Despite differences in methodological approach as well as in detailed results, the studies share one important outcome: regardless of the specific choice of potential mobility determinants, only a low to moderate proportion of mobility could be explained.
Conclusions: Our present understanding of the determinants of mobility in community-dwelling older adults is limited. A consistent terminology that takes into account the different aspects of mobility, the use of objective methods to assess real-life mobility, and the monitoring of changes in real-life mobility in response to interventions will contribute to furthering our understanding of mobility determinants.


Author(s):  
Marcelo N. de Sousa ◽  
Ricardo Sant’Ana ◽  
Rigel P. Fernandes ◽  
Julio Cesar Duarte ◽  
José A. Apolinário ◽  
...  

Abstract: In outdoor RF localization systems, particularly where line of sight cannot be guaranteed or where multipath effects are severe, information about the terrain may improve the performance of the position estimate. Given the difficulties in obtaining real data, a ray-tracing fingerprint is a viable option. Nevertheless, although they present good simulation results, systems trained with simulated features only suffer performance degradation when employed to process real-life data. This work intends to improve localization accuracy when using ray-tracing fingerprints together with a small amount of field data obtained from an adverse environment where a large number of measurements is not an option. We employ machine learning (ML) algorithms to explore the multipath information, selecting random forest and gradient boosting, both considered efficient tools in the literature. In a strict simulation scenario (simulated data for training, validating, and testing), we obtained the same good results found in the literature (error around 2 m). In a real-world system (simulated data for training, real data for validating and testing), both ML algorithms resulted in a mean positioning error of around 100 m. We also obtained experimental results for noisy features (with artificially added Gaussian noise) and mismatched features (with a subset of features nulled). The simulations carried out in this work revealed that enhancing the ML model with a few real-world data points improves the overall localization performance. Of the ML algorithms employed herein, we also observed that, under noisy conditions, random forest achieved a slightly better result than gradient boosting, while the two achieved similar results in the mismatch experiment.
This work’s practical implication is that multipath information, once rejected by older localization techniques, now represents a significant source of information whenever we have prior knowledge with which to train the ML algorithm.
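The random forest and gradient boosting models themselves are not reproduced here, but the underlying idea of a ray-tracing fingerprint database enhanced with a few field measurements can be sketched with simple nearest-neighbour fingerprint matching; all positions and feature values below are hypothetical:

```python
import math

def localize(fingerprints, measurement):
    """Nearest-neighbour fingerprint matching: return the stored position
    whose multipath feature vector is closest to the measured one."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(fingerprints, key=lambda pos: dist(fingerprints[pos], measurement))

# Hypothetical ray-tracing fingerprints: position (m) -> multipath features (dB)
simulated = {(0, 0): [-60.0, -75.0],
             (10, 0): [-70.0, -65.0],
             (0, 10): [-80.0, -80.0]}
# A few field measurements override simulated entries where we could measure
field = {(10, 0): [-72.0, -63.0]}
database = {**simulated, **field}

estimate = localize(database, [-71.0, -64.0])
```

In practice, the paper's ML regressors learn a smooth mapping from multipath features to position instead of snapping to the nearest fingerprint, but the role of the few real measurements, correcting the simulated feature space toward field conditions, is the same.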


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are presented to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues expressing the visual attention of an observer. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to show that embedding saliency features within the visualization helps us understand the visual attention of an observer.
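One simple way to ask which saliency feature is prominent for observed gaze, in the spirit of the approach above though far simpler than the proposed visualization, is to average each feature map at the fixated pixels; a minimal sketch with hypothetical 3x3 feature maps and fixations:

```python
def mean_saliency_at_gaze(feature_map, gaze_points):
    """Average value of a saliency feature map (2D grid) at gaze pixels."""
    vals = [feature_map[y][x] for x, y in gaze_points]
    return sum(vals) / len(vals)

# Hypothetical saliency feature maps: intensity vs. colour contrast
intensity = [[0.1, 0.9, 0.1],
             [0.1, 0.9, 0.1],
             [0.1, 0.9, 0.1]]
colour    = [[0.8, 0.2, 0.2],
             [0.2, 0.2, 0.2],
             [0.2, 0.2, 0.8]]
gaze = [(1, 0), (1, 1), (1, 2)]  # hypothetical (x, y) fixation pixels

scores = {"intensity": mean_saliency_at_gaze(intensity, gaze),
          "colour": mean_saliency_at_gaze(colour, gaze)}
prominent = max(scores, key=scores.get)
```

A feature whose map scores high at fixated pixels is a plausible visual clue for the observer's attention; features scoring near the map's background level can be de-emphasized in the visualization.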

