spatial cueing
Recently Published Documents


TOTAL DOCUMENTS: 133 (five years: 31)
H-INDEX: 16 (five years: 1)

2021
Author(s): Simon R. Steinkamp, Gereon R. Fink, Simone Vossel, Ralph Weidner

2021
Author(s): Ebru Ger, Stephanie Wermelinger, Maxine de Ven, Moritz M. Daum

Adults and infants as young as 4 months old follow pointing gestures. Although adults have been shown to orient faster to index-finger pointing than to other hand shapes, it is not known whether hand shape influences infants' following of pointing. In this study, we used a spatial cueing paradigm on an eye tracker to investigate whether, and to what extent, adults and 12-month-old infants orient their attention in the direction of pointing gestures with different hand shapes: index finger, whole hand, and pinky finger. Adults showed a cueing effect, that is, shorter saccadic reaction times (SRTs) to congruent than to incongruent targets, for all hand shapes; however, the index finger did not trigger a larger cueing effect than the other hand shapes. This contradicts previous findings and is discussed with respect to differences in methodology. Infants showed a cueing effect only for the whole hand, not for the index finger or the pinky finger. Infants predominantly point with the whole hand before 12 months of age. The current results thus suggest that infants' perception of pointing gestures may be linked to their own production of pointing gestures. Infants may come to show a cueing effect for the conventional index-finger pointing shape only after their first year, possibly once they start to point predominantly with the index finger.
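The cueing effect reported here is simply the difference between mean SRTs on incongruent and congruent trials, computed separately per hand shape. A minimal sketch of that computation, assuming hypothetical trial-level data with `hand_shape`, `congruent`, and `srt_ms` columns (names and values are illustrative, not from the paper):

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial (illustrative values).
trials = pd.DataFrame({
    "hand_shape": ["index", "index", "whole_hand", "whole_hand", "pinky", "pinky"],
    "congruent":  [True, False, True, False, True, False],
    "srt_ms":     [245.0, 268.0, 230.0, 266.0, 250.0, 259.0],
})

# Mean SRT per hand shape and congruency condition.
means = trials.groupby(["hand_shape", "congruent"])["srt_ms"].mean().unstack()

# Cueing effect = incongruent minus congruent mean SRT (positive = facilitation).
cueing_effect = means[False] - means[True]
print(cueing_effect)
```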


2021
Author(s): Daniel Jenkins

Multisensory integration describes the cognitive processes by which information from various perceptual domains is combined to create coherent percepts. For consciously aware perception, multisensory integration can be inferred when information in one perceptual domain influences subjective experience in another. Yet the relationship between integration and awareness is not well understood. One current question is whether multisensory integration can occur in the absence of perceptual awareness. Because unconscious perception yields no subjective experience to report, researchers have had to develop novel tasks to infer integration indirectly. For instance, Palmer and Ramsey (2012) presented auditory recordings of spoken syllables alongside videos of faces speaking either the same or different syllables, while masking the videos to prevent visual awareness. The conjunction of matching voices and faces predicted the location of a subsequent Gabor grating (target) on each trial. Participants indicated the location/orientation of the target more accurately when it appeared in the cued location (cued with 80% validity), so the authors inferred that auditory and visual speech events were integrated in the absence of visual awareness. In this thesis, I investigated whether these findings generalise to the integration of auditory and visual expressions of emotion. In Experiment 1, I presented spatially informative cues in which congruent facial and vocal emotional expressions predicted the target location, with and without visual masking. I found no evidence of spatial cueing in either awareness condition. To investigate the lack of spatial cueing, in Experiment 2, I repeated the task with aware participants only and had half of them explicitly report the emotional prosody. A significant spatial-cueing effect was found only when participants reported the emotional prosody, suggesting that audiovisual congruence can cue spatial attention during aware perception. It remains unclear whether audiovisual congruence can cue spatial attention without awareness, and whether such effects genuinely imply multisensory integration.
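In the Palmer and Ramsey (2012) design described above, the congruent audiovisual cue predicted the target's location on 80% of trials. A minimal sketch of how such a spatially predictive trial list could be generated (parameters and names are assumptions for illustration, not taken from the thesis):

```python
import random

def make_trials(n_trials=100, validity=0.8, seed=1):
    """Build a trial list in which the cued side predicts the target side
    on `validity` proportion of trials."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cued_side = rng.choice(["left", "right"])
        valid = rng.random() < validity
        # On invalid trials the target appears on the opposite side.
        target_side = cued_side if valid else ("right" if cued_side == "left" else "left")
        trials.append({"cued_side": cued_side, "target_side": target_side, "valid": valid})
    return trials

trials = make_trials()
print(sum(t["valid"] for t in trials) / len(trials))  # approx. 0.8
```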




2021, Vol. 49 (9), pp. 1-6
Author(s): Jiyou Gu, Huiqin Dong

Using a spatial-cueing paradigm in which trait words served as visual cues and gender words served as auditory targets, we examined whether cross-modal spatial attention is influenced by gender stereotypes. Results of an experiment with 24 participants indicate that they tended to attend to targets in the valid-cue condition (i.e., when cues appeared at the same position as the targets), regardless of the modality of cues and targets, which is consistent with the cross-modal attention effect found in previous studies. Participants tended to attend to targets that were stereotype-consistent with the cues only when the cues were valid, which shows that stereotype-consistent information facilitated visual–auditory cross-modal spatial attention. These results suggest that cognitive schemas, such as gender stereotypes, affect cross-modal spatial attention.
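The design described here crosses cue validity with stereotype consistency, and the key result is a consistency benefit confined to valid-cue trials. A minimal sketch of how the four cell means and the within-validity consistency benefit could be computed, assuming hypothetical trial-level reaction-time data (column names and values are illustrative):

```python
import pandas as pd

# Hypothetical trial-level data: reaction times (ms) by cue validity and
# stereotype consistency between cue and target (illustrative values).
trials = pd.DataFrame({
    "valid":      [True, True, False, False, True, True, False, False],
    "consistent": [True, False, True, False, True, False, True, False],
    "rt_ms":      [420.0, 455.0, 480.0, 478.0, 430.0, 460.0, 476.0, 482.0],
})

# Cell means of the 2 (validity) x 2 (consistency) design.
cells = trials.groupby(["valid", "consistent"])["rt_ms"].mean().unstack()
print(cells)

# Consistency benefit within each validity condition
# (inconsistent minus consistent; positive = facilitation by consistency).
benefit = cells[False] - cells[True]
print(benefit)  # pattern above: clear benefit only when valid == True
```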


Author(s): Christian Büsel, Christian Valuch, Harald R. Bliem, Pierre Sachse, Ulrich Ansorge

In spatial cueing, cues presented at the target position (valid condition) can capture visual attention and facilitate responses to the target relative to cues presented away from the target position (invalid condition). If cues and targets carry different features, the updating of the object representation required between the cue and target displays sometimes counteracts and even reverses facilitation in valid conditions, resulting in an inverted validity effect. Previous studies reached partly divergent conclusions about the conditions under which object-file updating occurs, and little is known about the exact nature of the processes involved. Object-file updating has so far been investigated by manipulating cue–target similarity in task-relevant target features, but other features that change between the cue and target displays might also contribute to it. This study examined the conditions under which object-file updating counteracts validity effects by systematically varying task-relevant (color), response-relevant (identity), and response-irrelevant (orientation) features between cue and target displays. The results show that object-file updating is largely restricted to task-relevant features. In addition, the difficulty of the search task affects the degree to which object-file updating costs interact with spatial cueing.
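On this account, the observed validity effect is the net of spatial facilitation at the cued location and an object-file updating cost incurred when a task-relevant feature changes between cue and target. A small arithmetic sketch of how a large enough updating cost can invert the validity effect (all values are illustrative assumptions, not estimates from the study):

```python
# Illustrative RT model (ms): net validity effect = invalid RT minus valid RT.
BASE_RT = 450.0          # hypothetical baseline response time
SPATIAL_BENEFIT = 30.0   # facilitation when the target appears at the cued location
UPDATING_COST = 45.0     # assumed cost of updating the object file when a
                         # task-relevant feature changes from cue to target

def validity_effect(feature_changes: bool) -> float:
    """Invalid-minus-valid RT; a negative value is an inverted validity effect."""
    # Updating applies only at the cued (valid) location, where the target
    # is attributed to the same object file as the cue.
    valid_rt = BASE_RT - SPATIAL_BENEFIT + (UPDATING_COST if feature_changes else 0.0)
    invalid_rt = BASE_RT
    return invalid_rt - valid_rt

print(validity_effect(feature_changes=False))  # +30.0: ordinary validity effect
print(validity_effect(feature_changes=True))   # -15.0: inverted validity effect
```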

