direction judgment
Recently Published Documents

TOTAL DOCUMENTS: 23 (FIVE YEARS: 8)
H-INDEX: 3 (FIVE YEARS: 0)

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261063
Author(s):  
Sachiyo Ueda ◽  
Kazuya Nagamachi ◽  
Junya Nakamura ◽  
Maki Sugimoto ◽  
Masahiko Inami ◽  
...  

Visual perspective taking is inferring how the world looks to another person. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment, and which factors of the avatar are responsible for the facilitation effect. We used a task that involved reporting how an object looks by a simple direction judgment, either from the avatar’s position or from the position of an empty chair. We found that the humanoid avatar’s presence improved task performance. Furthermore, the avatar’s facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards than when there was only an empty chair facing forwards. This suggests that the avatar does not simply attract spatial attention; rather, the posture of the avatar is crucial for the facilitation effect. In addition, when the directions of the head and the torso were opposite (i.e., an impossible posture), the avatar’s facilitation effect disappeared. Thus, visual perspective taking might not be facilitated by the avatar when its posture is biomechanically impossible, because we cannot embody it. Finally, even when the head of an avatar in a possible posture was covered with a bucket, the facilitation effect was found with the forward-facing avatar rather than the backward-facing avatar. That is, the head/gaze direction cue, or presumably the belief that the visual stimulus to be judged can be seen by the avatar, was not required. These results suggest that explicit perspective taking is facilitated by embodiment towards humanoid avatars.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yue Zhang ◽  
Qiqi Hu ◽  
Xinwei Lai ◽  
Zhonghua Hu ◽  
Shan Gao

Abstract Previous studies have shown that humans have a left spatial attention bias in cognition and behaviour. However, whether there exists a leftward perception bias of gaze direction has not been investigated. To address this gap, we conducted three behavioural experiments using a forced-choice gaze direction judgment task. The point of subjective equality (PSE) was employed to measure whether there was a leftward perception bias of gaze direction and, if there was, whether this bias was modulated by face emotion. The results of experiment 1 showed that the PSE of fearful faces was significantly positive as compared to zero, an effect not found in angry, happy, or neutral faces, indicating that participants were more likely to judge the gaze direction of fearful faces as directed to their left-side space, namely a leftward perception bias. With the response keys counterbalanced between participants, experiment 2a replicated the findings of experiment 1. To further investigate whether the variation in gaze direction perception was driven by emotional or low-level features of faces, experiments 2b and 3 used inverted faces and inverted eyes, respectively. The results revealed similar leftward perception biases of gaze direction in all types of faces, indicating that gaze direction perception was biased by emotional information in faces rather than by low-level facial features. Overall, our study demonstrates a fear-specific leftward perception bias in the processing of gaze direction. These findings shed new light on cerebral lateralization in humans.
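The PSE measure used in the abstract above can be illustrated with a short fit: a minimal sketch assuming a logistic psychometric function and fabricated response rates (the paper’s actual stimuli, gaze angles, and data are not given here).

```python
# Hypothetical sketch: estimating the point of subjective equality (PSE)
# from forced-choice gaze-direction responses. Angles and response rates
# are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, slope):
    """Psychometric function: P("right") as a function of gaze angle.
    By construction, logistic(pse, pse, slope) == 0.5."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Gaze direction in degrees (negative = left, 0 = direct, positive = right)
angles = np.array([-8.0, -4.0, -2.0, 0.0, 2.0, 4.0, 8.0])
# Proportion of "looking right" responses at each angle (fabricated data
# illustrating a positive PSE, i.e. a leftward perception bias)
p_right = np.array([0.02, 0.10, 0.25, 0.40, 0.65, 0.88, 0.99])

(pse, slope), _ = curve_fit(logistic, angles, p_right, p0=[0.0, 1.0])

# A positive PSE means a gaze angle slightly to the right is still judged
# as direct, so direct gaze tends to be reported as leftward.
print(f"PSE = {pse:.2f} deg, slope = {slope:.2f}")
```

A significantly positive fitted PSE, compared against zero across participants, is what the abstract reports for fearful faces.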


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chun Liu ◽  
Jian Li

Automatic ship detection, recognition, and counting are crucial for intelligent maritime surveillance, timely ocean rescue, and computer-aided decision-making. A pretrained YOLOv3 model is trained on sample images for ship detection, and the detection model is built by adjusting and optimizing its parameters. Object recognition and selection combine the target’s HSV color histogram features and LBP local features with the deep learning model, which is efficient at extracting object characteristics. Since tracked targets are subject to drift and jitter, a self-correction network is designed that combines regression-based direction judgment with a target counting method using variable time windows, which better realizes automatic detection, tracking, and self-correction of the number of moving objects in water. The method in this paper shows stability and robustness and is applicable to the automatic analysis of waterway videos and the extraction of statistics.
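The feature combination described above (an HSV color histogram plus LBP texture features) can be sketched as follows. This is a minimal NumPy stand-in operating on made-up patch data, not the authors’ pipeline, which also relies on a trained YOLOv3 detector and a full CV library.

```python
# Hypothetical sketch of the appearance descriptor named above: an HSV
# color histogram concatenated with a local-binary-pattern (LBP) histogram.
import numpy as np

def hsv_histogram(hsv_image, bins=8):
    """Concatenated per-channel histogram of an HSV image (H, W, 3) in [0, 1]."""
    feats = [np.histogram(hsv_image[..., c], bins=bins, range=(0.0, 1.0))[0]
             for c in range(3)]
    hist = np.concatenate(feats).astype(float)
    return hist / hist.sum()  # normalize so patches of any size are comparable

def lbp_histogram(gray):
    """Basic 8-neighbour LBP: each pixel's neighbours are thresholded
    against the centre pixel and packed into an 8-bit code."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                       # interior pixels (centres)
    # Offsets of the 8 neighbours, clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code += (nb >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def ship_descriptor(hsv_patch, gray_patch):
    """Combined appearance descriptor for one detected ship patch."""
    return np.concatenate([hsv_histogram(hsv_patch), lbp_histogram(gray_patch)])

rng = np.random.default_rng(0)
patch_hsv = rng.random((32, 32, 3))   # stand-in for a detected ship crop
patch_gray = rng.random((32, 32))
desc = ship_descriptor(patch_hsv, patch_gray)
print(desc.shape)  # (280,)  -- 3 channels x 8 bins + 256 LBP codes
```

In a tracking loop, descriptors like this could be compared between frames (e.g., by histogram distance) to keep target identities stable against drift and jitter.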


2021 ◽  
Author(s):  
Sachiyo Ueda ◽  
Kazuya Nagamachi ◽  
Maki Sugimoto ◽  
Masahiko Inami ◽  
Michiteru Kitazaki

Abstract Visual perspective taking is inferring how the world looks to another person. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment. We used a task that involved reporting how an object looks by a simple direction judgment, either from the avatar’s position or from an empty chair’s position. We found that the humanoid avatar’s presence improved task performance. Furthermore, the avatar’s facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards than when there was only an empty chair facing forwards. When the directions of the head and the torso were opposite (i.e., an impossible posture), the avatar’s facilitation effect disappeared. Performance was best when both the head and the torso faced forward, followed by the condition in which both faced backward, then the condition in which the torso faced the stimulus while the head faced away, and worst when the head faced the stimulus while the torso faced away. Thus, visual perspective taking might not be facilitated by the avatar when its posture is biomechanically impossible. These results suggest that the facilitation effect is based not only on attention capture but also on visual perspective taking of the humanoid avatar.


2021 ◽  
Vol 11 ◽  
Author(s):  
Takahiro Kawabe

When an elastic material (e.g., fabric) is horizontally stretched (or compressed), the material is compressed (or extended) vertically, the so-called Poisson effect. In a different case of the Poisson effect, when an elastic material (e.g., rubber) is vertically squashed, the material is horizontally extended. In both cases, the visual system receives image deformations involving horizontal expansion and vertical compression. How does the brain disentangle the two cases and accurately distinguish stretching from squashing events? Manipulating the relative magnitude of the deformation of a square between the horizontal and vertical dimensions in two-dimensional stimuli, we asked observers to judge the force direction in the stimuli. Specifically, the participants reported whether the square was stretched or squashed. In general, the participants’ judgments depended on the relative deformation magnitude. We also checked the anisotropic effect of deformation direction [i.e., horizontal vs. vertical stretching (or squashing)] and found that the participants’ judgments were strongly biased toward horizontal stretching. We further observed that an asymmetric deformation pattern, which indicated the specific context of the force direction, was a strong cue for the force direction judgment. We suggest that the brain judges the force direction in the Poisson effect on the basis of assumptions about the relationship between image deformation and force direction, in addition to the relative image deformation magnitudes between the horizontal and vertical dimensions.
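The ambiguity described above can be made concrete with simple strain arithmetic: a sketch assuming a made-up Poisson ratio, where transverse strain equals -ν times axial strain. Both events produce horizontal expansion and vertical compression, but their relative magnitudes differ, which is the cue the abstract says observers used.

```python
# Illustrative strain arithmetic for the two ambiguous events. The Poisson
# ratio is an assumed value for this sketch; the paper used 2-D image
# deformations of a square, not physical strain measurements.
NU = 0.4  # assumed Poisson ratio of the material

def stretch_horizontally(strain):
    """Horizontal stretch by `strain` -> (horizontal, vertical) image strain."""
    return (strain, -NU * strain)

def squash_vertically(strain):
    """Vertical squash by `strain` -> (horizontal, vertical) image strain."""
    return (NU * strain, -strain)

h1, v1 = stretch_horizontally(0.10)  # stretching: |horizontal| > |vertical|
h2, v2 = squash_vertically(0.10)     # squashing:  |horizontal| < |vertical|
print(h1, v1)
print(h2, v2)
```

In both cases the horizontal strain is positive and the vertical strain negative, so only the ratio of their magnitudes (1/ν vs. ν) distinguishes stretching from squashing.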


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Simin Li ◽  
Jingkun Wang ◽  
Wentao Zhang ◽  
Hao DU ◽  
Xianming Xiong

2020 ◽  
Author(s):  
Sachiyo Ueda ◽  
Kazuya Nagamachi ◽  
Maki Sugimoto ◽  
Masahiko Inami ◽  
Michiteru Kitazaki

Abstract Understanding another’s viewpoint is the ability to infer others’ minds, which is important for successful social communication. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment. We used a task that involved reporting how an object looks by a simple direction judgment, either from the avatar’s position or from an empty chair’s position. We found that the humanoid avatar’s presence improved task performance. Furthermore, the avatar’s facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards. This suggests that the facilitation effect is based not only on attention capture but also on embodied perspective-taking toward the avatar.


2019 ◽  
Vol 84 (759) ◽  
pp. 479-486
Author(s):  
Yusuke NISHIJIMA ◽  
Yoshiki IKEDA ◽  
Marina NISHIKAWA ◽  
Jaeyoung HEO ◽  
Kotaroh HIRATE
Keyword(s):  
