The intuitive estimation of means with auditory presentation

1969 ◽  
Vol 17 (6) ◽  
pp. 331-332
Author(s):  
G. Lowe


Author(s):  
Ana Franco ◽  
Julia Eberlen ◽  
Arnaud Destrebecqz ◽  
Axel Cleeremans ◽  
Julie Bertels

Abstract. The Rapid Serial Visual Presentation (RSVP) procedure is widely used in visual perception research. In this paper, we propose an adaptation of this method that can be used with auditory material and enables the assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently asked to perform a detection task on Rapid Serial Auditory Presentation (RSAP) streams, in which they had to detect a target syllable within a short speech stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: the second and third syllables of each word were responded to faster than the first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.
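
The RSAP measure hinges on the statistical structure of the familiarization stream: syllable-to-syllable transitional probabilities are high within a nonsense word and lower at word boundaries, so word-medial and word-final syllables are more predictable than word-initial ones. Below is a minimal Python sketch of building such a stream and labeling syllable positions for an RT analysis; the syllable inventory, word list, and stream length are hypothetical placeholders, not the authors' materials.

```python
# Minimal sketch (hypothetical materials): an artificial speech stream built from
# trisyllabic nonsense words, in the spirit of statistical-learning designs.
import random

WORDS = [("tu", "pi", "ro"), ("go", "la", "bu"), ("bi", "da", "ku"), ("pa", "do", "ti")]

def build_stream(n_words=300, seed=0):
    """Concatenate randomly ordered words, avoiding immediate repetition, so that
    within-word transitional probability is 1.0 and between-word probability is
    roughly 1/3 (three possible following words)."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in WORDS if w is not prev])
        stream.extend(word)
        prev = word
    return stream

def syllable_positions(stream):
    """Label each syllable with its within-word position (1, 2, or 3);
    detection RTs are then compared across these positions."""
    return [(syl, i % 3 + 1) for i, syl in enumerate(stream)]

stream = build_stream()
print(stream[:9])
print(syllable_positions(stream)[:9])
```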


Author(s):  
Wienke Wannagat ◽  
Gesine Waizenegger ◽  
Gerhild Nieding

Abstract. In an experiment with 114 children aged 9–12 years, we compared the ability to establish local and global coherence of narrative texts under auditory and audiovisual (auditory text and pictures) presentation. The participants listened to a series of short narrative texts, in each of which a protagonist pursued a goal. Following each text, we collected the response time to a query word associated with either a near or a distant causal antecedent of the final sentence. Analysis of these response times indicated that audiovisual presentation has advantages over auditory presentation for accessing information relevant to establishing both local and global coherence, but there are indications that this effect may be slightly more pronounced for global coherence.
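
The dependent measure here is the query response time, crossed by presentation format (auditory vs. audiovisual) and antecedent distance (near vs. far, standing in for local vs. global coherence). The sketch below shows one simple way such trials can be aggregated into condition means; the field names and example values are purely illustrative and are not data from the study.

```python
# Hypothetical analysis sketch: mean query RT per presentation format and
# antecedent distance. The records below are illustrative placeholders.
from statistics import mean
from collections import defaultdict

trials = [
    {"format": "auditory",    "distance": "near", "rt_ms": 1210},
    {"format": "auditory",    "distance": "far",  "rt_ms": 1345},
    {"format": "audiovisual", "distance": "near", "rt_ms": 1120},
    {"format": "audiovisual", "distance": "far",  "rt_ms": 1190},
]

def condition_means(rows):
    """Group trials by (format, distance) and return the mean RT per condition."""
    groups = defaultdict(list)
    for r in rows:
        groups[(r["format"], r["distance"])].append(r["rt_ms"])
    return {cond: mean(rts) for cond, rts in groups.items()}

print(condition_means(trials))
```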


Author(s):  
Todd D. Hollander ◽  
Michael S. Wogalter

Signal words, such as DANGER and WARNING, have been used in print (visual) warnings with the intention of evoking different levels of perceived hazard. However, there is limited research on whether auditory presentation of these words connotes different levels of perceived hazard. In the present study, five voiced signal words were used to produce sound clips, each composed of the word spoken three times, and were manipulated according to the following factors: speaker gender, word unit duration (fast, slow), and inter-word interval (short, long), with the sound level held constant. Results indicate that sound clips with short word unit duration were given higher carefulness ratings than those with long word unit duration (ps < .01). The pattern of ratings across the signal words was similar to that found in research using print presentations. Implications for the design of voiced warnings are described.
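
The stimulus set described above follows a fully crossed factorial design. A small sketch of enumerating the sound-clip conditions is given below; only DANGER and WARNING are named in the abstract, so the remaining signal words listed here are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' stimulus set): sound-clip conditions
# crossing speaker gender, word unit duration, and inter-word interval for each
# voiced signal word, with sound level held constant.
from itertools import product

SIGNAL_WORDS = ["DANGER", "WARNING", "CAUTION", "NOTICE", "DEADLY"]  # last three are assumed
GENDERS = ["male", "female"]
WORD_DURATIONS = ["fast", "slow"]   # word unit duration
INTERVALS = ["short", "long"]       # inter-word interval

conditions = [
    {"word": w, "gender": g, "duration": d, "interval": i, "repetitions": 3}
    for w, g, d, i in product(SIGNAL_WORDS, GENDERS, WORD_DURATIONS, INTERVALS)
]

print(len(conditions))  # 5 words x 2 genders x 2 durations x 2 intervals = 40 clips
```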


2012 ◽  
Vol 107 (5) ◽  
pp. 1421-1430 ◽  
Author(s):  
Ruth M. Nicol ◽  
Sandra C. Chapman ◽  
Petra E. Vértes ◽  
Pradeep J. Nathan ◽  
Marie L. Smith ◽  
...  

How do human brain networks react to dynamic changes in the sensory environment? We measured rapid changes in brain network organization in response to brief, discrete, salient auditory stimuli. We estimated network topology and distance parameters in the immediate central response period, <1 s following auditory presentation of standard tones interspersed with occasional deviant tones in a mismatch-negativity (MMN) paradigm, using magnetoencephalography (MEG) to measure synchronization of high-frequency (gamma band; 33–64 Hz) oscillations in healthy volunteers. We found that global small-world parameters of the networks were conserved between the standard and deviant stimuli. However, surprising or unexpected auditory changes were associated with local changes in clustering of connections between temporal and frontal cortical areas and with increased interlobar, long-distance synchronization during the 120- to 250-ms epoch (coinciding with the MMN-evoked response). Network analysis of human MEG data can resolve fast local topological reconfiguration and more long-range synchronization of high-frequency networks as a systems-level representation of the brain's immediate response to salient stimuli in the dynamically changing sensory environment.
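
The global parameters referred to here (clustering and characteristic path length, the ingredients of small-world measures) can be computed from a thresholded sensor-by-sensor synchronization matrix. The sketch below illustrates this generic step with the networkx library on a random placeholder matrix; the matrix and threshold are assumptions for illustration, not MEG data or the authors' pipeline.

```python
# Sketch only: global graph parameters from a thresholded synchronization matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_sensors = 20
sync = rng.random((n_sensors, n_sensors))   # stand-in for pairwise gamma-band synchronization
sync = (sync + sync.T) / 2                  # symmetrize
np.fill_diagonal(sync, 0.0)

threshold = 0.6                             # keep only strongly synchronized pairs
adjacency = (sync > threshold).astype(int)
graph = nx.from_numpy_array(adjacency)

clustering = nx.average_clustering(graph)
# Characteristic path length requires a connected graph; use the largest component.
largest = graph.subgraph(max(nx.connected_components(graph), key=len))
path_length = nx.average_shortest_path_length(largest)
print(f"clustering={clustering:.3f}, path length={path_length:.3f}")
```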


2001 ◽  
Vol 13 (1) ◽  
pp. 121-143 ◽  
Author(s):  
Nicolas Dumay ◽  
Abdelrhani Benraïss ◽  
Brian Barriol ◽  
Cécile Colin ◽  
Monique Radeau ◽  
...  

Phonological priming between bisyllabic (CV.CVC) spoken items was examined using both behavioral (reaction times, RTs) and electrophysiological (event-related potentials, ERPs) measures. Word and pseudoword targets were preceded by pseudoword primes. Different types of final phonological overlap between prime and target were compared: critical pairs shared the last syllable, the rime, or the coda, while unrelated pairs were used as controls. Participants performed a target shadowing task in Experiment 1 and a delayed lexical decision task in Experiment 2. RTs were measured in the first experiment and ERPs were recorded in the second. The RT experiment was carried out under two presentation conditions: in Condition 1 both primes and targets were presented auditorily, while in Condition 2 the primes were presented visually and the targets auditorily. Priming effects were found in the unimodal condition only. RTs were fastest for syllable overlap, intermediate for rime overlap, and slowest for coda overlap and controls, which did not differ from one another. ERPs were recorded under unimodal auditory presentation. ERP results showed that the amplitude of the auditory N400 component was smallest for syllable overlap, intermediate for rime overlap, and largest for coda overlap and controls, which did not differ from one another. In both experiments, the priming effects were larger for word than for pseudoword targets. These results are best explained by the combined influences of nonlexical and lexical processes, and a comparison of the reported effects with those found in monosyllables suggests the involvement of rime and syllable representations.
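
The three related conditions differ only in how much of the final CV.CVC syllable is shared between prime and target: the whole syllable, its rime (vowel plus coda), or only its coda. The sketch below makes that classification explicit; the syllable representation and example segments are hypothetical, not the experimental items.

```python
# Hedged sketch: classifying the final-overlap condition between a prime and a
# target, given the last (CVC) syllable of each item. Segments are illustrative.
from typing import NamedTuple

class FinalSyllable(NamedTuple):
    onset: str    # initial consonant of the last syllable
    nucleus: str  # vowel
    coda: str     # final consonant

def overlap_type(prime: FinalSyllable, target: FinalSyllable) -> str:
    if prime == target:
        return "syllable"                                   # whole last syllable shared
    if (prime.nucleus, prime.coda) == (target.nucleus, target.coda):
        return "rime"                                       # vowel + coda shared
    if prime.coda == target.coda:
        return "coda"                                       # only the final consonant shared
    return "unrelated"

print(overlap_type(FinalSyllable("b", "u", "l"), FinalSyllable("m", "u", "l")))  # rime
```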


2000 ◽  
Vol 10 (11) ◽  
pp. 2565-2586 ◽  
Author(s):  
Bärbel Schack ◽  
Peter Rappelsberger ◽  
Christoph Anders ◽  
Sabine Weiss ◽  
Eva Möller

Neuronal activity during information processing and muscle activity are generally characterized by oscillations. Mostly, widespread areas are involved, and electrophysiological signals are measured at different sites of the cortex or of the muscle. In order to investigate functional relationships between different components of multidimensional electrophysiological signals, coherence and phase analyses have turned out to be useful tools. These parameters allow the investigation of synchronization phenomena with regard to oscillations at defined frequencies or within frequency bands. Coherence and phase are closely connected spectral parameters; coherence may be understood as a measure of phase stability. Whereas coherence describes the amount of common information with regard to oscillations within certain frequency bands, the corresponding phase, from which the time delays of these oscillations can be computed, hints at the direction of information transfer through oscillation. Coherence and phase analysis of surface EMG during continuous activity of deep and superficial muscles shows distinct differences due to the volume conduction properties of myoelectrical signals. Superficial activity is therefore characterized by significant coherence and stable phase relationships, which, additionally, can be used to determine motor unit action potential (MUAP) propagation velocity along the fibre direction without the application of invasive methods. Deep muscle activity lacks significant coherence. Mental processes can be very brief, and cooperation between different areas may be highly dynamic. For this reason, in addition to the usual Fourier estimation of coherence and phase, a two-dimensional approach of adaptive filtering was developed to estimate coherence and phase continuously in time. Statistical and dynamic properties of the instantaneous phase are discussed. In order to demonstrate the value of this method for studying higher cognitive processes, it was applied to EEG recorded during word processing. During visual presentation of abstract nouns, an information transfer through the propagation of oscillations from visual areas to frontal association areas in the α1-frequency band could be verified within the first 400 ms. In contrast, in the case of auditory presentation, positive phases from the temporal electrode locations T3 and T4 towards the occipital areas appear within the interval of 300–600 ms. The α1-band predominantly seems to reflect sensory processing and attention processes.
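
For a single pair of signals, the two spectral quantities described above can be estimated with standard Welch-type routines: coherence quantifies the stability of the phase relation per frequency, and the cross-spectral phase can be converted into a time delay at a given frequency. The following sketch uses SciPy on synthetic signals; the sampling rate, frequencies, and the 20 ms lag are illustrative assumptions, and it shows only the ordinary Fourier estimate, not the adaptive time-varying estimator developed in the paper.

```python
# Illustrative sketch: magnitude-squared coherence, cross-spectral phase, and a
# phase-derived time delay between two synthetic signals.
import numpy as np
from scipy.signal import coherence, csd

fs = 250.0                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
delay = 0.02                                 # y lags x by 20 ms
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * (t - delay)) + 0.5 * rng.standard_normal(t.size)

f, cxy = coherence(x, y, fs=fs, nperseg=512)   # coherence: phase stability per frequency
f, pxy = csd(x, y, fs=fs, nperseg=512)         # cross-spectrum: its angle is the phase
idx = np.argmax(cxy)                           # frequency bin with the strongest coupling
phase = np.angle(pxy[idx])                     # phase difference in radians
# Convert phase to a delay; positive means y lags x under SciPy's conj(X)*Y convention.
lag = -phase / (2 * np.pi * f[idx])
print(f"peak coherence at {f[idx]:.1f} Hz, estimated delay {lag * 1000:.1f} ms")
```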

