Visual Distractors
Recently Published Documents


TOTAL DOCUMENTS: 65 (13 in the past five years)

H-INDEX: 14 (2 in the past five years)

2021 · pp. 108238
Author(s): Vera Ferrari, Francesca Canturi, Maurizio Codispoti

2021 · Vol 11 (1)
Author(s): Jacques Pesnot Lerousseau, Gabriel Arnold, Malika Auvray

Abstract: Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, the idea that it reflects a mixture of both has emerged over the past decade. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenology were measured. Our study revealed that, after training, sound identification engaged processes shared with vision: participants' performance in identifying sounds was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
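
To make the conversion step concrete, below is a minimal sketch of a vOICe-style image-to-sound mapping, one common scheme for devices of this kind: the image is swept column by column, vertical position sets pitch, and brightness sets loudness. The function name and all parameters are illustrative assumptions; the device used in the study may implement a different mapping.

```python
import numpy as np

def image_to_sound(image, sr=44100, sweep_s=1.0, f_lo=500.0, f_hi=5000.0):
    # Scan the image column by column, left to right. Each row is assigned
    # a sine-wave frequency (top row = highest pitch), and each pixel's
    # brightness scales the amplitude of its row's sine. All parameter
    # values here are illustrative, not the study device's settings.
    n_rows, n_cols = image.shape
    col_len = int(sr * sweep_s / n_cols)                 # samples per column
    freqs = np.logspace(np.log10(f_hi), np.log10(f_lo), n_rows)
    t = np.arange(col_len) / sr
    chunks = []
    for c in range(n_cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)   # one sine per row
        chunks.append((image[:, c][:, None] * tones).sum(axis=0))
    sound = np.concatenate(chunks)
    return sound / (np.abs(sound).max() + 1e-9)          # normalize to [-1, 1]

# A bright diagonal from top-left to bottom-right becomes a falling pitch sweep.
wave = image_to_sound(np.eye(32))
```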


Sensors · 2020 · Vol 20 (21) · pp. 6386
Author(s): Tomer Elbaum, Yoram Braw, Astar Lev, Yuri Rassovsky

Clinical decision-making may be enhanced by combining psychophysiological sensors with computerized neuropsychological tests. The current study explored the utility of integrating an eye tracker with a commercially available continuous performance test (CPT), the MOXO-dCPT. The performance of adult attention-deficit/hyperactivity disorder (ADHD) patients and healthy controls (n = 43 and n = 42, respectively) was compared in the integrated system. The MOXO-dCPT has four stages, which differ in their combinations of ecological visual and auditory dynamic distractors. By examining participants' performance in each stage, we showed that: (a) ADHD patients spend significantly more time gazing at irrelevant areas of interest (AOIs) than healthy controls; (b) visual distractors are particularly effective in altering ADHD patients' eye movements, suggesting their enhanced utility in diagnostic procedures; (c) combining gaze-direction data with conventional CPT indices improves group prediction compared to conventional indices alone. Overall, the findings indicate the utility of eye-tracker-integrated CPTs and their enhanced diagnostic precision. They also suggest that attention-grabbing visual distractors may offer a promising path for evolving existing CPTs, shortening their duration while enhancing diagnostic precision.
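
As a hedged illustration of finding (c), the sketch below combines simulated conventional CPT indices with a simulated gaze index (time on irrelevant AOIs) in a logistic-regression classifier and compares cross-validated accuracy with and without the gaze feature. The data, feature names, and model are invented for illustration and are not the study's dataset or analysis code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 85                                     # ~43 ADHD + 42 controls, as above
group = rng.integers(0, 2, n)              # 1 = ADHD, 0 = control (simulated)

# Two conventional CPT indices (e.g., omission errors, RT variability)...
cpt = rng.normal(group[:, None] * 0.4, 1.0, size=(n, 2))
# ...plus one gaze-based index: time spent on irrelevant AOIs (simulated as
# more group-separable, mirroring the finding that gaze adds information).
gaze = rng.normal(group * 0.8, 1.0)[:, None]

acc_cpt = cross_val_score(LogisticRegression(), cpt, group, cv=5).mean()
both = np.hstack([cpt, gaze])
acc_both = cross_val_score(LogisticRegression(), both, group, cv=5).mean()
print(f"CPT indices only: {acc_cpt:.2f}  CPT + gaze: {acc_both:.2f}")
```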


2020 · Vol 82 (7) · pp. 3479-3489
Author(s): Lars-Michael Schöpper, Tarini Singh, Christian Frings

Abstract: When responding to two events in a sequence, the repetition or change of stimuli and the accompanying response can benefit or interfere with response execution: full repetition leads to performance benefits, while partial repetition leads to costs. Additionally, even distractor stimuli can be integrated with a response and, upon repetition, lead to benefits or interference. Recently, it has been suggested that not only identical but also perceptually similar distractors retrieve a previous response (Singh et al., Attention, Perception, & Psychophysics, 78(8), 2307-2312, 2016): participants discriminated four visual shapes appearing in five different shades of grey, the latter being irrelevant to the task. Exact distractor repetitions yielded the strongest distractor-based retrieval effect, which decreased with increasing dissimilarity between shades of grey. In the current study, we expand on these findings by conceptually replicating Singh et al. (2016) with multimodal stimuli. In Experiment 1 (N = 31), participants discriminated four visual targets accompanied by five auditory distractors. In Experiment 2 (N = 32), participants discriminated four auditory targets accompanied by five visual distractors. We replicated the generalization of distractor-based retrieval: the retrieval effect decreased with increasing distractor dissimilarity. These results show not only that generalization of distractor-based retrieval occurs in multimodal feature processing, but also that these processes can operate on distractors perceived in a different modality from the target.
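
For readers unfamiliar with how such binding effects are scored, here is a toy computation of a distractor-based retrieval effect as the response-repetition by distractor-repetition interaction in mean RTs. Column names and numbers are hypothetical, not the authors' data or code.

```python
import pandas as pd

# Trial-level toy data: each row is one prime-probe pair; `rt` is the probe RT.
df = pd.DataFrame({
    "response":   ["rep", "rep", "chg", "chg"] * 2,   # response repeated?
    "distractor": ["rep", "chg"] * 4,                 # distractor repeated?
    "rt":         [480, 510, 530, 505, 490, 515, 525, 500],
})

m = df.groupby(["response", "distractor"])["rt"].mean().unstack()
# Retrieval shows up as an interaction: distractor repetition helps when the
# response also repeats, and hurts (or helps less) when the response changes.
effect = (m.loc["rep", "chg"] - m.loc["rep", "rep"]) \
       - (m.loc["chg", "chg"] - m.loc["chg", "rep"])
print(m, f"\ndistractor-based retrieval effect: {effect:.1f} ms")
```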


2019 · Vol 82 (4) · pp. 1682-1694
Author(s): Siyi Chen, Zhuanghua Shi, Xuelian Zang, Xiuna Zhu, Leonardo Assumpção, ...

Abstract: It is well established that statistical learning of visual target locations in relation to constantly positioned visual distractors facilitates visual search. In the present study, we investigated whether such a contextual-cueing effect would also work crossmodally, from touch onto vision. Participants responded to the orientation of a visual target singleton presented among seven homogeneous visual distractors. Four tactile stimuli, two to different fingers of each hand, were presented either simultaneously with or prior to the visual stimuli. The identity of the stimulated fingers provided the crossmodal context cue: in half of the trials, a given visual target location was consistently paired with a given tactile configuration. The visual stimuli were presented above the unseen fingers, ensuring spatial correspondence between vision and touch. We found no evidence of crossmodal contextual cueing when the two sets of items (tactile, visual) were presented simultaneously (Experiment 1). However, a reliable crossmodal effect emerged when the tactile distractors preceded the onset of the visual stimuli by 700 ms (Experiment 2). Crossmodal cueing disappeared again when, after an initial learning phase, participants flipped their hands, making the tactile distractors appear at different positions in external space while their somatotopic positions remained unchanged (Experiment 3). In all experiments, participants were unable to explicitly discriminate learned from novel multisensory arrays. These findings indicate that search-facilitating context memory can be established across vision and touch. However, in order to guide visual search, the (predictive) tactile configurations must be remapped from their initial somatotopic format into a common external representational format.
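
A contextual-cueing effect of this kind is typically scored as the RT advantage for repeated (predictive) configurations over newly generated ones, as in the minimal sketch below; the numbers are hypothetical, not the study's data.

```python
import numpy as np

# Mean search RTs (ms) per learning epoch: repeated configurations speed up
# over epochs, novel ones change little, so the difference grows with learning.
rt_repeated = np.array([620.0, 605.0, 592.0, 583.0])   # predictive contexts
rt_novel    = np.array([624.0, 621.0, 619.0, 617.0])   # newly generated ones
cueing = rt_novel - rt_repeated                        # positive = facilitation
print("contextual-cueing effect per epoch (ms):", cueing)
```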

