Phonological Similarity Effects in Memory for Serial Order of Cued Speech

2001 ◽  
Vol 44 (5) ◽  
pp. 949-963 ◽  
Author(s):  
Jacqueline Leybaert ◽  
Josiane Lechat

Experiment I investigated memory for serial order by congenitally, profoundly deaf individuals, 6–22 years old, for words presented via Cued Speech (CS) without sound. CS is a system that resolves the ambiguity inherent in speechreading through the addition of manual cues. The phonological components of CS are mouth shape, hand shape, and hand placement. Of interest was whether serial-order recall was lower for lists of words similar in both mouth shape and hand placement, in mouth shape only, or in hand placement only than for control lists designed to minimize these similarities. Deaf participants performed worse on all three similar lists than on the control lists, suggesting that deaf individuals use the phonology of CS to support their recall. In Experiment II, the same lists were administered to two groups of hearing participants. One group, experienced producers of CS, received the CS stimuli without sound; the other group, unfamiliar with CS, received the CS stimuli audiovisually. Participants experienced with CS showed no effect of hand-placement similarity, suggesting that this effect may be related to the linguistic experience of the deaf participants. The recency effect was greater in the hearing group provided with sound, indicating that the traces left by auditory stimuli are perceptually more salient than those left by the visual stimuli encountered in CS.
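The dependent measure in serial-recall experiments of this kind is typically strict serial-position scoring: an item counts as correct only if recalled in its presented position. A minimal sketch, using hypothetical items rather than the study's stimuli:

```python
# Strict serial-position scoring, the usual measure in serial-recall
# tasks. The word lists here are hypothetical, not the CS stimuli.

def serial_recall_score(presented, recalled):
    """Proportion of items recalled in their exact presented position."""
    correct = sum(1 for p, r in zip(presented, recalled) if p == r)
    return correct / len(presented)

# A "similar" list (e.g., shared hand placement) and one response:
presented = ["bal", "mal", "pal", "val", "gal"]
recalled  = ["bal", "pal", "mal", "val", "gal"]   # items 2 and 3 swapped
print(serial_recall_score(presented, recalled))   # 0.6
```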

2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.
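The analysis implied here is a two-factor comparison (metre type × visual condition). A hedged sketch of how such a design might be analyzed; the data frame, accuracy scores, and column names are assumptions for illustration, not the study's data:

```python
# 2 (metre: simple/complex) x 2 (condition: dynamic/static) ANOVA sketch.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "accuracy":  [0.72, 0.55, 0.70, 0.58, 0.75, 0.52, 0.68, 0.60],
    "metre":     ["simple", "complex"] * 4,
    "condition": ["dynamic"] * 4 + ["static"] * 4,
})

model = ols("accuracy ~ C(metre) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects + interaction
```

With data shaped like the abstract's results, the table would show a significant metre effect but no condition effect.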


Animals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 2233
Author(s):  
Loïc Pougnault ◽  
Hugo Cousillas ◽  
Christine Heyraud ◽  
Ludwig Huber ◽  
Martine Hausberger ◽  
...  

Attention is defined as the ability to process one aspect of the environment selectively over others and is at the core of all cognitive processes such as learning, memorization, and categorization. Evaluating and comparing attentional characteristics between individuals and across situations is therefore an important aspect of cognitive studies. Recent studies have shown the value of analyzing spontaneous attention in standardized situations, but data are still scarce, especially for songbirds. The present study adapted three tests of attention (towards visual non-social, visual social, and auditory stimuli) as tools for future comparative research in the European starling (Sturnus vulgaris), a species well known to show individual variation in social learning and engagement. Our results reveal that attentional characteristics (glances versus gazes) vary with the stimulus broadcast: more gazes towards unusual visual stimuli and species-specific auditory stimuli, and more glances towards species-specific visual stimuli and hetero-specific auditory stimuli. By revealing individual variations, this study shows that these tests constitute a very useful and easy-to-use tool for evaluating spontaneous individual attentional characteristics and their modulation by a variety of factors. Our results also indicate that attentional skill is not a uniform concept but depends on the modality and the stimulus type.
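The glance/gaze distinction suggests a simple duration-based coding of looks. A sketch under the assumption of a 1 s cut-off and invented look durations; the paper defines its own criterion:

```python
# Code looks as glances (short) vs. gazes (long) by duration threshold.
# The 1.0 s cut-off and the sample durations are assumptions.

def classify_looks(durations_s, gaze_threshold_s=1.0):
    """Split look durations (seconds) into glances and gazes."""
    glances = [d for d in durations_s if d < gaze_threshold_s]
    gazes = [d for d in durations_s if d >= gaze_threshold_s]
    return glances, gazes

looks = [0.3, 2.1, 0.5, 1.8, 0.2]       # hypothetical looks at one stimulus
glances, gazes = classify_looks(looks)
print(len(glances), len(gazes))          # 3 2
```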


1954 ◽  
Vol 100 (419) ◽  
pp. 462-477 ◽  
Author(s):  
K. R. L. Hall ◽  
E. Stride

A number of studies on reaction time (R.T.) latency to visual and auditory stimuli in psychotic patients have been reported since the first investigations on the personal equation were carried out. The general trends of the work up to 1943 are well summarized by Hunt (1944), while Granger's (1953) review of "Personality and visual perception" contains a summary of the studies on R.T. to visual stimuli.


2017 ◽  
Vol 30 (7-8) ◽  
pp. 763-781 ◽  
Author(s):  
Jenni Heikkilä ◽  
Kimmo Alho ◽  
Kaisa Tiippana

Audiovisual semantic congruency during memory encoding has been shown to facilitate later recognition memory performance. However, it is still unclear whether this improvement is due to multisensory semantic congruency or to semantic congruency per se. We investigated whether dual visual encoding facilitates recognition memory in the same way as audiovisual encoding. During encoding, participants memorized auditory or visual stimuli paired with a semantically congruent, incongruent, or non-semantic stimulus in the same modality or in the other modality. Subsequent recognition memory performance was better when the stimulus had initially been paired with a semantically congruent stimulus than when it had been paired with a non-semantic stimulus. This congruency effect was observed with both audiovisual and dual visual stimuli. The present results indicate that not only multisensory but also unisensory semantically congruent stimuli can improve memory performance. Thus, the semantic congruency effect is not solely a multisensory phenomenon, as has previously been suggested.
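Recognition performance in designs like this is often summarized with the signal-detection measure d′ = z(hit rate) − z(false-alarm rate). A minimal sketch with made-up rates, not the study's data:

```python
# d' (sensitivity) from hit and false-alarm rates; rates are invented.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.85, 0.20))  # congruent-pair condition (hypothetical)
print(d_prime(0.75, 0.20))  # non-semantic-pair condition (hypothetical)
```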


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.
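For illustration only, a minimal behavioural version of the reported comparison, with invented per-participant mean RTs; the paper's actual analysis rests on EEG evoked potentials rather than this plain paired test:

```python
# Paired comparison of mean RTs (ms) per participant; values invented
# to mirror the abstract's reported direction (visual faster).
from scipy.stats import ttest_rel

visual_rt   = [310, 295, 330, 305, 320, 298]
auditory_rt = [345, 330, 360, 338, 355, 342]

t, p = ttest_rel(visual_rt, auditory_rt)
print(f"t = {t:.2f}, p = {p:.4f}")  # negative t: visual RTs are shorter
```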


1975 ◽  
Vol 40 (1) ◽  
pp. 3-7 ◽  
Author(s):  
Gerda Smets

Ss take more time to perceive interesting/displeasing stimuli than uninteresting/pleasing ones. This is consistent with the results of earlier experiments; however, we used a different operationalization of looking time, based on binocular rivalry. Each of six stimulus pairs was presented in a stereoscope. One member of each pair was interesting but displeasing in comparison with the other member. Stimulus complexity was controlled. Owing to binocular rivalry, Ss perceived only one pattern at a time. Twenty Ss were asked to indicate which pattern they actually saw by pushing one of two buttons. For each stimulus pair, we registered how long each button was pushed during each of six successive minutes. Unlike other operationalizations, this one is less dependent on S's decision about which stimulus to look at or for how long. It has the advantage of being bound up more exclusively with relations of similarity and dissimilarity between stimulus elements, and it allows systematic, continuous manipulation of exposure time. There was no significant interaction between looking time and exposure time.
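The dependent measure, total dominance time per pattern within each one-minute bin, can be computed from button-press intervals. A sketch with hypothetical (start, stop, button) tuples in seconds:

```python
# Dominance duration per pattern per one-minute bin from button presses.
# Press intervals are hypothetical, covering two minutes of viewing.
presses = [(0.0, 14.2, "A"), (14.2, 21.0, "B"), (21.0, 47.5, "A"),
           (47.5, 60.0, "B"), (60.0, 83.1, "A"), (83.1, 120.0, "B")]

def dominance_per_minute(presses, n_minutes=2):
    totals = {m: {"A": 0.0, "B": 0.0} for m in range(n_minutes)}
    for start, stop, button in presses:
        for m in range(n_minutes):
            lo, hi = 60.0 * m, 60.0 * (m + 1)
            overlap = max(0.0, min(stop, hi) - max(start, lo))
            totals[m][button] += overlap
    return totals

print(dominance_per_minute(presses))
# minute 0: A dominant 40.7 s, B 19.3 s; minute 1: A 23.1 s, B 36.9 s
```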


1974 ◽  
Vol 38 (2) ◽  
pp. 417-418 ◽  
Author(s):  
Robert Zenhausern ◽  
Claude Pompo ◽  
Michael Ciaiola

Simple and complex reaction times to visual stimuli were tested under 7 levels of accessory stimulation (white noise). Only the highest level of stimulation (70 dB above threshold) lowered reaction time; the other levels had no effect.
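A one-way comparison across the 7 noise levels would look roughly like this; the RT values are invented for illustration, with only the top level showing faster responses:

```python
# One-way ANOVA of simple RT (ms) across 7 accessory white-noise levels.
from scipy.stats import f_oneway

levels = [
    [248, 252, 250], [249, 251, 247], [250, 253, 249], [251, 248, 252],
    [249, 250, 251], [250, 249, 248], [232, 235, 230],  # 70 dB: faster
]
F, p = f_oneway(*levels)
print(f"F = {F:.2f}, p = {p:.4f}")
```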


2012 ◽  
Vol 25 (0) ◽  
pp. 65 ◽  
Author(s):  
Vanja Kovic ◽  
Jovana Pejovic

A number of studies have demonstrated sound-symbolism effects in adults and in children. Moreover, ERP studies have recently shown that sensitivity to sound-symbolic label–object associations arises within 200 ms of object presentation (Kovic et al., 2010). It was argued that this effect may reflect a more general process of auditory–visual feature integration in which properties of auditory stimuli facilitate mapping to specific visual features. Here we demonstrate that the sound-symbolism effect is design dependent, namely, it occurs only when mapping from auditory to visual stimuli and not vice versa. Two groups of participants were recruited to solve a categorization task. They were presented with 12 visual stimuli, half of them rounded and the other half angular shapes. One group was trained to classify the angular objects as 'takete' and the rounded ones as 'maluma', whereas the other group mapped 'takete' to rounded and 'maluma' to angular shapes. Moreover, half of each group heard the label before seeing the object, whereas the other half was given the label after perceiving the object. The results revealed the sound-symbolism effect only in the group trained on the auditory-to-visual mapping, not in the one trained on the visual-to-auditory mapping. Thus, despite the previous findings, we demonstrate that the sound-symbolism effect is not significant per se but design dependent, and we argue that a sound brings up a mental image that is more constrained than the sounds brought up by a picture.
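The design crosses two factors: label–shape mapping and label timing. A small sketch enumerating the resulting four groups; the factor names and labels are mine, not the authors':

```python
# Enumerate the 2 (mapping) x 2 (label timing) cells of the design.
from itertools import product

mapping = ["takete->angular, maluma->rounded",   # congruent
           "takete->rounded, maluma->angular"]   # incongruent
order = ["label before object (auditory->visual)",
         "label after object (visual->auditory)"]

for i, (m, o) in enumerate(product(mapping, order), start=1):
    print(f"group {i}: {m}; {o}")
```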

