Visual Attentional Allocation of Top-down and Bottom-up Cues in Three-Dimensional Space

2011 ◽  
Vol 179-180 ◽  
pp. 1322-1326
Author(s):  
Ru Ting Xia

The aim of the present experiment was to investigate visual attentional allocation of top-down and bottom-up cues in three-dimensional (3D) space. Near and far stimuli were presented with a 3D attention measurement apparatus. Two experiments were conducted to examine top-down and bottom-up control of visual attention. In Experiment 1, the location of the target was cued by means of location information; in Experiment 2, a color cue was given by a brief change of color at the target location. Observers were required to judge whether the target was presented nearer than the fixation point or farther than it. The results of Experiments 1 and 2 show that both the location cue and the color cue affected reaction time, and that shifts of attention were faster from far to near than in the reverse direction. These findings suggest that (1) attention in 3D space may be operated by both location-based and color-based control that incorporates depth information, and (2) the shift of visual attention in 3D space is asymmetric in depth.
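
As a rough illustration of how the reaction-time contrast described above could be scored, here is a minimal Python sketch; the trial fields, condition labels and RT values are hypothetical and not taken from the study.

```python
# Minimal sketch (not the authors' code): scoring a 3D cue-target
# experiment of the kind described above. Trial fields, condition
# names and RT values are hypothetical.
from statistics import mean

trials = [
    # cue_depth / target_depth are "near" or "far" relative to fixation;
    # rt is the response time in ms for a correct near/far judgment.
    {"cue_type": "location", "cue_depth": "far",  "target_depth": "near", "rt": 412},
    {"cue_type": "location", "cue_depth": "near", "target_depth": "far",  "rt": 455},
    {"cue_type": "color",    "cue_depth": "far",  "target_depth": "near", "rt": 405},
    {"cue_type": "color",    "cue_depth": "near", "target_depth": "far",  "rt": 448},
]

def mean_rt(cue_depth, target_depth):
    """Mean RT for attention shifts from cue_depth to target_depth."""
    return mean(t["rt"] for t in trials
                if t["cue_depth"] == cue_depth and t["target_depth"] == target_depth)

# The asymmetry reported above: far-to-near shifts faster than near-to-far.
print("far -> near:", mean_rt("far", "near"), "ms")
print("near -> far:", mean_rt("near", "far"), "ms")
```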

2010 ◽  
Vol 3 (9) ◽  
pp. 739-739 ◽  
Author(s):  
T. Kimura ◽  
T. Miura ◽  
S. Doi ◽  
Y. Yamamoto

2013 ◽  
Vol 319 ◽  
pp. 343-347
Author(s):  
Ru Ting Xia ◽  
Xiao Yan Zhou

This research aimed to reveal characteristics of the visual attention of low-vision drivers. Near and far stimuli were presented by means of a three-dimensional (3D) attention measurement system that simulated the traffic environment. We measured subjects' reaction times while attention shifted under three kinds of simulated peripheral environmental illuminance (daylight, twilight and dawn conditions). Subjects were required to judge whether the target was presented nearer than the fixation point or farther than it. The results showed that the peripheral environmental illuminance had an evident influence on drivers' reaction times: reaction times were slower in the dawn and twilight conditions than in the daylight condition, and the distribution of attention favored nearer space over farther space; that is, shifts of attention in 3D space showed an anisotropy in depth. The results suggest that (1) visual attention may be operated by both precueing and stimulus-driven control that incorporates depth information, and (2) the anisotropy of attention shifting depends on the distance over which attention moves and was more pronounced in the dawn condition than in the daylight and twilight conditions.
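
One way the reaction times could be summarized by illuminance condition and shift direction to expose the anisotropy described above is sketched below; the numbers are invented for illustration only.

```python
# Hedged sketch: summarizing reaction times by simulated illuminance
# condition (daylight / twilight / dawn) and depth shift direction.
# All values are illustrative, not the study's data.
from collections import defaultdict
from statistics import mean

records = [
    # (illuminance, shift_direction, rt_ms)
    ("daylight", "far_to_near", 420), ("daylight", "near_to_far", 455),
    ("twilight", "far_to_near", 470), ("twilight", "near_to_far", 515),
    ("dawn",     "far_to_near", 495), ("dawn",     "near_to_far", 560),
]

grouped = defaultdict(list)
for cond, direction, rt in records:
    grouped[(cond, direction)].append(rt)

for (cond, direction), rts in sorted(grouped):
    pass  # keys only; means printed below

for (cond, direction), rts in grouped.items():
    print(f"{cond:9s} {direction:12s} mean RT = {mean(rts):.0f} ms")

# The anisotropy reported above would show up as a larger
# near_to_far minus far_to_near difference in the dawn condition.
```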


1989 ◽  
Vol 62 (2) ◽  
pp. 582-594 ◽  
Author(s):  
J. F. Soechting ◽  
M. Flanders

1. The accuracy with which subjects pointed to targets in extrapersonal space was assessed under a variety of experimental conditions. 2. When subjects pointed in the dark to remembered target locations, they made substantial errors. Errors in distance, measured from the shoulder to the target, were sometimes as much as 15 cm. Errors in direction, also measured from the shoulder, were smaller. 3. An analysis of the information transmitted by the location of the subject's finger about the location of the target showed that the information about the target's distance was consistently lower than the information about its direction. 4. The errors in distance persisted when subjects had their arm in view and pointed in the light to remembered target locations. 5. The errors were much smaller when subjects used a pointer to point to the target or when they were asked to reproduce the position of their finger after it had been passively moved to the target. 6. From these findings we conclude that subjects have a reasonably accurate visual representation of target location and are able to effectively use kinesthetically derived information about target location. We therefore suggest that errors in pointing result from errors in the sensorimotor transformation from the visual representation of the target location to the kinematic representation of the arm movement.
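
The distance and direction errors described above are both measured from the shoulder; the short Python sketch below illustrates one way such errors could be computed from shoulder-centred coordinates. The coordinates and the helper function are my own illustration, not the authors' analysis.

```python
# Sketch: distance error as the difference of radial distances from the
# shoulder, direction error as the angle between the two shoulder-centred
# rays. Coordinates are hypothetical, in centimetres, shoulder at origin.
import math

def distance_and_direction_error(target, finger):
    r_t = math.sqrt(sum(c * c for c in target))
    r_f = math.sqrt(sum(c * c for c in finger))
    dot = sum(a * b for a, b in zip(target, finger))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (r_t * r_f)))))
    return r_f - r_t, angle

# Example: the finger undershoots the target's distance by roughly 11 cm
# while the pointing direction is nearly correct.
target = (30.0, 10.0, 40.0)
finger = (23.0, 9.0, 31.0)
d_err, a_err = distance_and_direction_error(target, finger)
print(f"distance error: {d_err:+.1f} cm, direction error: {a_err:.1f} deg")
```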


2011 ◽  
Vol 55-57 ◽  
pp. 401-406
Author(s):  
Ru Ting Xia ◽  
Jian Fan ◽  
Bai Shao Zhan ◽  
Shun’ichi Doi

This paper introduces the design of a visual attention measurement system based on visual attention theory. The measurement system simulates the traffic environment and driving conditions in order to examine drivers' reaction time and judgment accuracy. It is applicable to research on top-down and bottom-up control of visual attention in three-dimensional (3D) space. The design covers the settings of observation location, stimulus location, moving speed and environmental illuminance, as well as the design of the software control interface. The paper describes how stimuli are presented by the control system and details the design of variable peripheral environmental illuminance and observation targets for the detection tasks. The results demonstrate that this measurement system can be used to examine reaction time and judgment accuracy under three environmental illuminance conditions.
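
A minimal configuration sketch of the kind of parameters the paper lists (observation location, stimulus location, moving speed, environmental illuminance and a control interface) is given below; all identifiers and values are assumptions, not the system's actual API.

```python
# Hypothetical configuration sketch for a trial in a 3D attention
# measurement system of the kind described above. Parameter names
# and values are assumptions, not taken from the original system.
from dataclasses import dataclass

@dataclass
class TrialConfig:
    observer_distance_m: float      # observation location (distance to fixation)
    stimulus_depth_m: float         # stimulus location in depth
    stimulus_speed_m_s: float       # moving speed of the stimulus
    illuminance_condition: str      # "daylight", "twilight" or "dawn"

def present_trial(cfg: TrialConfig) -> None:
    """Placeholder for the control interface: log what the real system
    would present while it records reaction time and judgment accuracy."""
    print(f"Present stimulus at {cfg.stimulus_depth_m} m, "
          f"moving at {cfg.stimulus_speed_m_s} m/s, "
          f"under {cfg.illuminance_condition} illuminance "
          f"(observer at {cfg.observer_distance_m} m).")

present_trial(TrialConfig(1.2, 0.8, 0.1, "twilight"))
```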


2018 ◽  
Vol 13 (1) ◽  
pp. 155892501801300
Author(s):  
Xuan Luo ◽  
Gaoming Jiang ◽  
Honglian Cong ◽  
Yan Zhao

An adaptive force model is proposed to achieve a better trade-off between accuracy and speed in cloth simulation in three-dimensional (3D) space. The proposed force model can be expressed in a general mathematical form determined by the distance between the clothing and the human body. The paper defines how a continuous adaptive area can be established from shape "blocks". Within a specific block, the force model is expressed in terms of the gravity of the clothing, the forces of the adjacent blocks and the anti-force of the human body on the block. In this manner, the force model of the desired clothing can be obtained through a general mathematical expression. Simulations and experimental results demonstrate that acceptable clothing simulation in 3D space can be achieved at higher speed, saving about 20.2% of runtime, which verifies the efficiency of the proposed scheme.
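
The block-wise force balance described above (gravity of the clothing, forces of the adjacent blocks and the anti-force of the body) can be sketched as a simple vector sum; the Python below is an illustration under assumed values, not the paper's model implementation.

```python
# Illustrative sketch only: the net force on one clothing "block" as the
# sum of its gravity, the forces exerted by adjacent blocks and the
# body's reaction ("anti-force"). Vector values are assumptions.
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def net_block_force(mass_kg: float,
                    adjacent_forces: List[Vec3],
                    body_reaction: Vec3,
                    g: float = 9.81) -> Vec3:
    """Sum gravity, adjacent-block forces and the body's reaction force."""
    total: Vec3 = (0.0, -mass_kg * g, 0.0)   # gravity, y axis points up
    for f in adjacent_forces:
        total = add(total, f)
    return add(total, body_reaction)

# Example: a 5 g block pulled slightly by its neighbours and pushed
# outward by the body surface it rests against (forces in newtons).
print(net_block_force(0.005,
                      [(0.01, 0.02, 0.0), (-0.005, 0.015, 0.0)],
                      (0.0, 0.012, 0.02)))
```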


Perception ◽  
2021 ◽  
Vol 50 (3) ◽  
pp. 231-248
Author(s):  
Xiaoyuan Liu ◽  
Qinyue Qian ◽  
Lingyun Wang ◽  
Aijun Wang ◽  
Ming Zhang

It has been well documented that spatial inhibition of return (IOR) is affected by the self-prioritization effect (SPE) in the two-dimensional plane. However, it remains unknown how spatial IOR interacts with the SPE in three-dimensional (3D) space. By constructing a virtual 3D environment, Posner's classic two-dimensional cue-target paradigm was applied to 3D space. Participants first associated labels for themselves, their best friends, and strangers with geometric shapes in a shape-label matching task, and then performed Experiment 1 (referential information appeared as the cue label) and Experiment 2 (referential information appeared as the target label) to investigate whether the IOR effect could be influenced by the SPE in 3D space. The study showed that when the cue was temporarily associated with a self-referential shape and appeared in far space, the IOR effect was smallest. When the target was temporarily associated with a self-referential shape and appeared in near space, the IOR effect disappeared. This study suggests that the IOR effect was affected by the SPE when attention was oriented or reoriented in 3D space, and that the IOR effect decreased or disappeared when affected by the SPE in 3D space.
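
As context for the effect sizes discussed above, the sketch below shows how an IOR effect is conventionally quantified in a cue-target paradigm (mean RT at the cued location minus mean RT at the uncued location); the condition labels and RTs are illustrative, not the study's data.

```python
# Hedged sketch of the conventional IOR measure: cued-location RT minus
# uncued-location RT, where a positive value indicates inhibition of
# return. All numbers below are invented for illustration.
from statistics import mean

def ior_effect(rt_cued_ms, rt_uncued_ms):
    return mean(rt_cued_ms) - mean(rt_uncued_ms)

# Hypothetical pattern: a smaller IOR effect when the cue is a
# self-referential shape appearing in far space.
self_far  = ior_effect([512, 520, 508], [500, 505, 498])
other_far = ior_effect([540, 548, 536], [498, 503, 495])
print(f"self-cue, far space:  IOR = {self_far:.1f} ms")
print(f"other-cue, far space: IOR = {other_far:.1f} ms")
```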


Perception ◽  
1993 ◽  
Vol 22 (7) ◽  
pp. 767-769 ◽  
Author(s):  
Richard D Wright ◽  
Russell W C Day

Two types of spiral-motion aftereffects were elicited by a single pattern: subjects reported seeing the pattern expand in the two-dimensional viewing plane or bulge toward them in three-dimensional space. Under binocular-viewing conditions reports of two-dimensional translations predominated. But when depth information was restricted under monocular-viewing conditions, reports of three-dimensional translations were more frequent. It appears that the bistability of these aftereffects can be influenced by the degree of depth information available about a stimulus pattern.

