The impact of visual gaze direction on auditory object tracking

2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Ulrich Pomper ◽  
Maria Chait

2018 ◽  
Vol 14 (2) ◽  
pp. 66-76
Author(s):  
O.I. Shabalina ◽  
M.R. Contreras ◽  
L. Feng

The present research explores the impact of native language on the perception patterns of monolingual students from China, Russia, Mexico, and the USA, and of bilingual students from India. The findings, obtained with verbal protocols, demonstrate statistically significant differences in the way representatives of different cultures perceive information and support the hypothesis that the principles of sentence organization in native languages determine the focus of perception and gaze direction in individuals. In particular, at α = 0.01, 0.05, and 0.005, American students focus on the object and demonstrate linear gaze direction, whereas Russian, Chinese, Mexican, and Indian students focus on the field and demonstrate chaotic, unstructured gaze direction. These differences in perception patterns explain the co-existence of local and Western approaches to advertisement layout design in national and multicultural markets around the world, making them an important consideration in global advertising.


2017 ◽  
Vol 36 (2) ◽  
pp. 180-198 ◽  
Author(s):  
Fen Lin ◽  
Mike Yao

This study explores how accompanying text affects the way an individual views and interprets a painting. We randomly assigned participants to view 20 paintings from the classical era with factual information, contextualized background information, or no information displayed next to them. We then recorded their visual gaze with an eye-tracking device and asked them to evaluate the paintings. The results show that how people view a painting and how they evaluate it are two distinct cognitive processes. Contextual information serves to orient the viewing process: the accompanying text influences visual attention and gaze patterns but has limited impact on the hedonic evaluation of the paintings. Instead, hedonic evaluation is more of a taste acquired through education and socialization. This study offers an empirical footnote to discussions of the cognitive assumptions in sociological studies of art and cultural phenomena.


2021 ◽  
Vol 11 (4) ◽  
pp. 1963
Author(s):  
Shanshan Luo ◽  
Baoqing Li ◽  
Xiaobing Yuan ◽  
Huawei Liu

The Discriminative Correlation Filter (DCF) has been widely adopted in visual object tracking thanks to its excellent accuracy and high speed. Nevertheless, DCF-based trackers perform poorly in long-term tracking, for two main reasons: first, they adapt poorly to significant appearance changes over long sequences and are prone to tracking failure; second, they lack a practical re-detection module to recover the target after a failure. In this work, we propose a new long-term tracking strategy to address these issues. First, we make full use of the target's static and dynamic information by introducing motion features into our long-term tracker, yielding a more robust tracker. Second, we introduce a low-rank sparse dictionary learning method for re-detection; this re-detection module exploits the correlations among training samples and alleviates the impact of occlusion and noise. Third, we propose a new reliability evaluation method that drives an adaptive update scheme and switches between the tracking module and the re-detection module as needed. Extensive experiments demonstrate that the proposed approach clearly improves precision and success rate over state-of-the-art trackers.
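The switch between tracking and re-detection can be illustrated with a small sketch. The snippet below is a minimal, hypothetical outline rather than the authors' implementation: it assumes a DCF-style `tracker` object exposing `correlate`, `update`, and `score`, a `redetector` exposing `propose`, and uses the peak-to-sidelobe ratio, a common reliability measure for correlation response maps, as the switching criterion.

```python
import numpy as np

def peak_to_sidelobe_ratio(response):
    """Reliability score of a 2D correlation response map (higher = more confident)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Exclude an 11x11 window around the peak when estimating sidelobe statistics.
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def track_frame(frame, tracker, redetector, psr_threshold=8.0):
    """One step of a long-term loop: track, assess reliability, re-detect if needed."""
    response, bbox = tracker.correlate(frame)        # DCF response over the search region
    if peak_to_sidelobe_ratio(response) >= psr_threshold:
        tracker.update(frame, bbox)                  # confident: update the filter online
        return bbox
    # Unreliable response: fall back to global re-detection (e.g., dictionary-based proposals).
    candidates = redetector.propose(frame)
    return max(candidates, key=lambda b: tracker.score(frame, b))
```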


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1528
Author(s):  
Xiaofei Qin ◽  
Yipeng Zhang ◽  
Hang Chang ◽  
Hao Lu ◽  
Xuedian Zhang

In visual object tracking, the Siamese network tracker based on the region proposal network (SiamRPN) has achieved promising results in both speed and accuracy. However, it does not consider the relationships and differences among the long-range context information of various objects. In this paper, we add a global context block (GC block), which is lightweight and can effectively model long-range dependencies, to the Siamese network part of SiamRPN so that the tracker can better understand the tracking scene. At the same time, we propose a novel convolution module, called the cropping-inside selective kernel block (CiSK block), based on selective kernel convolution (SK convolution, a module proposed in selective kernel networks), and use it in the region proposal network (RPN) part of SiamRPN, which can adaptively adjust the size of the receptive field for different types of objects. We make two improvements to SK convolution in the CiSK block. First, in the fusion step of SK convolution, we use both global average pooling (GAP) and global maximum pooling (GMP) to enhance global information embedding. Second, after the selection step of SK convolution, we crop out the outermost pixels of the features to reduce the impact of padding operations. The experimental results show that on the OTB100 benchmark we achieve an accuracy of 0.857 and a success rate of 0.643, and on the VOT2016 and VOT2019 benchmarks we achieve expected average overlap (EAO) scores of 0.394 and 0.240, respectively.
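To make the two modifications to SK convolution concrete, here is a minimal PyTorch sketch, an illustrative reconstruction rather than the authors' code: a two-branch selective-kernel style block whose fusion step combines global average pooling and global max pooling, and whose output is cropped by one pixel on each border to discard padding-contaminated positions. The branch kernels, channel reduction ratio, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CiSKBlock(nn.Module):
    """Sketch of a cropping-inside selective-kernel style block (assumed structure)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # 5x5 effective field
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.select = nn.Linear(hidden, channels * 2)  # attention logits for the two branches

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        u = u3 + u5
        # Fusion: embed global context with both average and max pooling (GAP + GMP).
        gap = u.mean(dim=(2, 3))
        gmp = u.amax(dim=(2, 3))
        z = self.fc(gap + gmp)
        # Selection: channel-wise soft attention between the two branches.
        logits = self.select(z).view(x.size(0), 2, x.size(1))
        attn = torch.softmax(logits, dim=1)
        out = attn[:, 0, :, None, None] * u3 + attn[:, 1, :, None, None] * u5
        # Crop the outermost pixels to reduce the influence of padding operations.
        return out[:, :, 1:-1, 1:-1]

# Example: a 256-channel feature map of size 31x31 becomes 29x29 after the crop.
# feat = torch.randn(1, 256, 31, 31); out = CiSKBlock(256)(feat)  # -> (1, 256, 29, 29)
```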


2019 ◽  
Vol 20 (4) ◽  
Author(s):  
Mateusz Jarosz ◽  
Piotr Nawrocki ◽  
Leszek Placzkiewicz ◽  
Bartlomiej Sniezynski ◽  
Marcin Zieinski ◽  
...  

Two common channels through which humans communicate are speech and gaze. Eye gaze is an important mode of communication: it allows people to better understand each other's intentions, desires, interests, and so on. The goal of this research is to develop a framework for gaze-triggered events which can be executed on a robot and on mobile devices and which allows experiments to be performed. We experimentally evaluate the framework and the techniques for extracting gaze direction, based on a robot-mounted camera or a mobile-device camera, that are implemented in the framework. We investigate the impact of light on the accuracy of gaze estimation, and also how the overall accuracy depends on user eye and head movements. Our research shows that light intensity is important, and the placement of the light source is crucial. All the robot-mounted gaze detection modules we tested were found to be similar with regard to accuracy. The framework we developed was tested in a human-robot interaction experiment involving a job-interview scenario. The flexible structure of this scenario allowed us to test different components of the framework in varied real-world scenarios, which was very useful for progressing towards our long-term research goal of designing intuitive gaze-based interfaces for human-robot communication.
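As an illustration of what a gaze-triggered event might look like in such a framework, the sketch below (hypothetical, not the published framework's API) fires a callback once the estimated gaze direction, expressed as yaw/pitch angles from a robot-mounted or mobile-device camera, dwells within a tolerance of a target direction for a minimum time.

```python
import time

class GazeEventDispatcher:
    """Minimal sketch: fire callbacks when estimated gaze stays on a target direction
    for a minimum dwell time. Gaze estimates are (yaw, pitch) angles in degrees."""

    def __init__(self, target_yaw, target_pitch, tolerance_deg=5.0, dwell_s=0.8):
        self.target = (target_yaw, target_pitch)
        self.tolerance = tolerance_deg
        self.dwell = dwell_s
        self._on_target_since = None
        self._callbacks = []

    def on_trigger(self, callback):
        self._callbacks.append(callback)

    def update(self, yaw, pitch, timestamp=None):
        """Feed one gaze estimate (e.g., from the per-frame gaze direction module)."""
        now = timestamp if timestamp is not None else time.monotonic()
        on_target = (abs(yaw - self.target[0]) <= self.tolerance and
                     abs(pitch - self.target[1]) <= self.tolerance)
        if not on_target:
            self._on_target_since = None
            return
        if self._on_target_since is None:
            self._on_target_since = now
        elif now - self._on_target_since >= self.dwell:
            for cb in self._callbacks:
                cb()
            self._on_target_since = None  # avoid repeated triggers until gaze leaves and returns

# Usage: dispatcher = GazeEventDispatcher(0.0, 0.0)
# dispatcher.on_trigger(lambda: print("gaze event"))
# then call dispatcher.update(yaw, pitch) for every processed frame.
```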


2021 ◽  
pp. 174702182110077
Author(s):  
Giulia Mattavelli ◽  
Daniele Romano ◽  
Andrew Young ◽  
Paola Ricciardelli

The gaze cueing effect involves the rapid orientation of attention to follow the gaze direction of another person. Previous studies have reported reciprocal influences between social variables and the gaze cueing effect, with modulation of gaze cueing by social features of face stimuli and modulation of observers' social judgments by the validity of the gaze cues themselves. However, it remains unclear which social dimensions can affect, and be affected by, gaze cues. We used computer-averaged prototype face-like images with high and low levels of perceived trustworthiness and dominance to investigate the impact of these two fundamental social impression dimensions on the gaze cueing effect. Moreover, by varying the proportions of valid and invalid gaze cues across three experiments, we assessed whether gaze cueing influences observers' impressions of dominance and trustworthiness through incidental learning. Bayesian statistical analyses provided clear evidence that the gaze cueing effect was not modulated by facial social trait impressions (Experiments 1-3). On the other hand, there was uncertain evidence of incidental learning of social evaluations following the gaze cueing task: a decrease in perceived trustworthiness for non-cooperative low-dominance faces (Experiment 2) and an increase in dominance ratings for faces whose gaze behaviour contradicted expectations (Experiment 3) appeared, but further research is needed to clarify these effects. Thus, this study confirms that attentional shifts triggered by gaze direction involve a robust and relatively automatic process, which could nonetheless influence social impressions depending on the perceived traits and gaze behaviour of the faces providing the cues.
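For reference, the gaze cueing effect in such experiments is typically quantified as the difference in mean reaction time between invalid-cue and valid-cue trials. Below is a minimal sketch assuming a per-trial table with hypothetical column names; it is illustrative only and not the analysis code used in the study, which relied on Bayesian statistics.

```python
import pandas as pd

def gaze_cueing_effect(trials: pd.DataFrame) -> pd.Series:
    """Cueing effect per participant: mean RT on invalid-cue trials minus mean RT on
    valid-cue trials, using correct responses only. Positive values indicate that
    gaze direction shifted attention toward the cued location.
    Assumed columns: 'participant', 'validity' ('valid'/'invalid'), 'rt_ms', 'correct'."""
    correct = trials[trials["correct"]]
    mean_rt = correct.groupby(["participant", "validity"])["rt_ms"].mean().unstack()
    return mean_rt["invalid"] - mean_rt["valid"]
```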


2008 ◽  
Vol 61 (3) ◽  
pp. 491-504 ◽  
Author(s):  
Paola Ricciardelli ◽  
Jon Driver

Several past studies have considered how perceived head orientation may be combined with perceived gaze direction in judging where someone else is attending. In three experiments we tested the impact of different sources of information by examining the role of head orientation in gaze-direction judgements when presenting: (a) the whole face; (b) the face with the nose masked; (c) just the eye region, removing all other head-orientation cues apart from some visible part of the nose; or (d) just the eyes, with all parts of the nose masked and no head-orientation cues present other than those within the eyes themselves. We also varied time pressure on gaze-direction judgements. The results showed that gaze judgements were not solely driven by the eye region. Gaze perception can also be affected by parts of the head and face, but in a manner that depends on the time constraints for gaze-direction judgements. While "positive" congruency effects were found under time pressure (i.e., faster left/right judgements of seen gaze when the seen head deviated towards the same side as that gaze), the opposite applied without time pressure.

