What Visual Targets Are Viewed by Users With a Handheld Mobile Magnifier App

2021 ◽  
Vol 10 (3) ◽  
pp. 16
Author(s):  
Gang Luo

Author(s):  
Sander Martens ◽  
Addie Johnson ◽  
Martje Bolle ◽  
Jelmer Borst

The human mind is severely limited in processing concurrent information at a conscious level of awareness. These temporal restrictions are clearly reflected in the attentional blink (AB), a deficit in reporting the second of two targets when it occurs 200–500 ms after the first. However, we recently reported that some individuals do not show a visual AB, and presented psychophysiological evidence that target processing differs between “blinkers” and “nonblinkers”. Here, we present evidence that visual nonblinkers do show an auditory AB, which suggests that a major source of attentional restriction as reflected in the AB is likely to be modality-specific. In Experiment 3, we show that when the difficulty in identifying visual targets is increased, nonblinkers continue to show little or no visual AB, suggesting that the presence of an AB in the auditory but not in the visual modality is not due to a difference in task difficulty.


2010 ◽  
Vol 6 (5) ◽  
pp. 639-645 ◽  
Author(s):  
Joshua M. Carlson ◽  
Karen S. Reinke ◽  
Pamela J. LaMontagne ◽  
Reza Habib

1991 ◽  
Vol 31 (4) ◽  
pp. 693-715 ◽  
Author(s):  
James W. Gnadt ◽  
R. Martyn Bracewell ◽  
Richard A. Andersen

2021 ◽  
Vol 49 (12) ◽  
pp. 1-11
Author(s):  
Cheng Kang ◽  
Nan Ye ◽  
Fangwen Zhang ◽  
Yanwen Wu ◽  
Guichun Jin ◽  
...  

Although studies have investigated the influence of the emotionality of primes on the cross-modal affective priming effect, it is unclear whether this effect is driven by the arousal or by the valence of the primes. We explored how the valence and arousal of primes influence the cross-modal affective priming effect. In Experiment 1 we manipulated the valence of primes (positive vs. negative) matched for arousal. In Experiments 2 and 3 we manipulated the arousal of primes under positive and negative valence, respectively. Affective words served as auditory primes and affective faces as visual targets in a priming task. The results suggest that the valence of primes modulated the cross-modal affective priming effect, whereas their arousal did not. A cross-modal affective priming effect occurred only when the primes were positive; negative primes produced no priming effect. In addition, for positive but not negative primes, higher prime arousal facilitated the processing of subsequent targets. These findings are significant for understanding how affective information interacts across modalities.


2018 ◽  
Vol 27 (2) ◽  
pp. 349-360 ◽  
Author(s):  
Bahram Kheradmand ◽  
Julian Cassano ◽  
Selena Gray ◽  
James C. Nieh

1994 ◽  
Vol 71 (3) ◽  
pp. 1250-1253 ◽  
Author(s):  
G. S. Russo ◽  
C. J. Bruce

1. We studied neuronal activity in the monkey's frontal eye field (FEF) in conjunction with saccades directed to auditory targets. 2. All FEF neurons with movement activity preceding saccades to visual targets were also active preceding saccades to auditory targets, even when such saccades were made in the dark. Movement cells generally had comparable bursts for aurally and visually guided saccades; visuomovement cells often had weaker bursts in conjunction with aurally guided saccades. 3. When these cells were tested from different initial fixation directions, movement fields associated with aurally guided saccades, like fields mapped with visual targets, were a function of saccade dimensions, not of the speaker's spatial location. Thus, even though sound localization cues are chiefly craniotopic, the crucial factor for FEF discharge before aurally guided saccades was the location of the auditory target relative to the current direction of gaze. 4. Intracortical microstimulation at the sites of these cells evoked constant-vector saccades, not goal-directed saccades. The direction and size of electrically elicited saccades generally matched the cell's movement field for aurally guided saccades. 5. Thus FEF activity appears to play a role in aurally guided as well as visually guided saccades. Moreover, visual and auditory target representations, although initially obtained in different coordinate systems, appear to converge to a common movement-vector representation at the FEF stage of saccadic processing, appropriate for transmittal to saccade-related burst neurons in the superior colliculus and pons.
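The coordinate transform described in point 3 can be made concrete with a minimal sketch. The function names and the representation (2-D direction vectors in degrees) are illustrative assumptions, not the study's notation: sound localization cues give a target position in head-centered (craniotopic) coordinates, and the movement vector the FEF encodes is that position minus the current gaze direction.

```python
# Minimal sketch (hypothetical names): craniotopic auditory target
# position -> gaze-relative saccade vector. Angles are in degrees,
# expressed as (horizontal, vertical) pairs in head-centered coordinates.

def saccade_vector(target_head_deg, gaze_head_deg):
    """Return the gaze-relative saccade vector for a head-centered target."""
    tx, ty = target_head_deg
    gx, gy = gaze_head_deg
    # The movement vector depends on where the eyes currently point,
    # not on the speaker's fixed position in head coordinates.
    return (tx - gx, ty - gy)

# The same speaker yields different movement vectors from different
# initial fixation directions, as in the paper's fixation manipulation:
print(saccade_vector((10.0, 0.0), (0.0, 0.0)))    # (10.0, 0.0)
print(saccade_vector((10.0, 0.0), (-10.0, 0.0)))  # (20.0, 0.0)
```

This is why a cell's movement field, mapped with auditory targets, tracks saccade dimensions rather than speaker location: changing the initial fixation changes the vector even though the craniotopic target position is unchanged.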


2007 ◽  
Vol 97 (2) ◽  
pp. 1068-1077 ◽  
Author(s):  
Nikolaos Smyrnis ◽  
Asimakis Mantas ◽  
Ioannis Evdokimidis

In previous studies we observed a pattern of systematic directional errors when humans pointed to memorized visual target locations in two-dimensional (2-D) space. This directional error was also observed in the initial direction of slow movements toward visual targets and of movements to kinesthetically defined targets in 2-D space. In this study we used a perceptual experiment in which subjects decided whether an arrow pointed in the direction of a visual target in 2-D space, and we observed a systematic distortion in direction discrimination known as the "oblique effect": direction discrimination was better for cardinal directions than for oblique ones. We then used an equivalent measure of direction discrimination in a task where subjects pointed to memorized visual target locations and showed the presence of a motor oblique effect. Finally, we modeled the oblique effect in the perceptual and motor tasks using a quadratic function. The model successfully predicted the observed direction-discrimination differences in both tasks; furthermore, the model parameter related to the shape of the function did not differ between the motor and perceptual tasks. We conclude that a similarly distorted representation of target direction is present for memorized pointing movements and perceptual direction discrimination.
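The abstract says the oblique effect was modeled with a quadratic function but does not give its form. The sketch below is one plausible parameterization, assumed for illustration only (the function names, `base`, and `gain` values are hypothetical, not fitted parameters from the study): discrimination thresholds grow quadratically with angular distance from the nearest cardinal axis, so performance is best at 0/90/180/270 degrees and worst at oblique directions.

```python
# Hypothetical sketch of a quadratic oblique-effect model. All names and
# parameter values are illustrative assumptions, not the paper's model.

def cardinal_distance_deg(theta_deg):
    """Angular distance (deg) from theta to the nearest cardinal direction."""
    return min(abs((theta_deg % 90.0) - d) for d in (0.0, 90.0))

def discrimination_threshold(theta_deg, base=2.0, gain=0.002):
    # base: threshold at a cardinal direction; gain: quadratic coefficient
    # (the shape parameter that the study found did not differ between the
    # motor and perceptual tasks).
    d = cardinal_distance_deg(theta_deg)
    return base + gain * d ** 2

print(discrimination_threshold(0.0))   # cardinal direction: 2.0
print(discrimination_threshold(45.0))  # oblique direction: ~6.05
```

Under this parameterization, comparing fitted `gain` values across the pointing and arrow-judgment tasks would be the natural test of whether the two tasks share the same distorted representation of direction.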

