Influence of Regular Rhythmic Versus Textural Sound Sequences on Semantic and Conceptual Processing

2021 ◽  
Vol 39 (2) ◽  
pp. 145-159
Author(s):  
Laure-Hélène Canette ◽  
Philippe Lalitte ◽  
Barbara Tillmann ◽  
Emmanuel Bigand

Conceptual priming studies have shown that listening to musical primes triggers semantic activation. The present study used a free semantic evocation task to further investigate (1) how rhythmic vs. textural structures affect the number of words evoked after a musical sequence, and (2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without an identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after listening to musical sequences of each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences, and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and to different extents, possibly via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as the perceived emotions.
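
The word and lemma tally described above lends itself to a short sketch. The following is a minimal illustration assuming transcribed verbal responses and a spaCy French pipeline; the model name and helper function are my assumptions, not the authors' reported tooling.

```python
# Minimal sketch of tallying words and distinct lemmas per verbal response.
# Assumes spaCy with the French model "fr_core_news_sm" (an assumption; the
# paper does not specify its transcription or lemmatization pipeline).
import spacy

nlp = spacy.load("fr_core_news_sm")

def count_words_and_lemmas(transcript: str) -> tuple[int, int]:
    """Return (word count, distinct lemma count) for one transcribed response."""
    doc = nlp(transcript)
    words = [tok for tok in doc if tok.is_alpha]       # drop punctuation/digits
    lemmas = {tok.lemma_.lower() for tok in words}     # distinct lemmas
    return len(words), len(lemmas)

# Compare evocations pooled by condition (rhythmic vs. textural sequences).
print(count_words_and_lemmas("forêt sombre, pluie, machines lointaines"))
```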

2011 ◽  
Vol 23 (11) ◽  
pp. 3241-3253 ◽  
Author(s):  
Annett Schirmer ◽  
Yong Hao Soh ◽  
Trevor B. Penney ◽  
Lonce Wyse

It is still unknown whether sonic environments influence the processing of individual sounds in the same way that discourse or sentence context influences the processing of individual words. One obstacle to answering this question has been the failure to dissociate perceptual (i.e., how similar are sonic environment and target sound?) and conceptual (i.e., how related are sonic environment and target?) priming effects. In this study, we dissociate these effects by creating prime–target pairs with a purely perceptual or both a perceptual and a conceptual relationship. Perceptual prime–target pairs were derived from perceptual–conceptual pairs (i.e., meaningful environmental sounds) by shuffling the spectral composition of primes and targets so as to preserve their perceptual relationship while making them unrecognizable. Hearing both original and shuffled targets elicited a more positive N1/P2 complex in the ERP when targets were related to a preceding prime than when they were unrelated. Only related original targets reduced the N400 amplitude. Related shuffled targets tended to decrease the amplitude of a late temporo-parietal positivity. Taken together, these effects indicate that sonic environments influence first the perceptual and then the conceptual processing of individual sounds. Moreover, the influence on conceptual processing is comparable to the influence linguistic context has on the processing of individual words.
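
One plausible reading of the shuffling step is a frequency-band permutation applied identically to prime and target, which preserves their mutual spectral relationship while destroying recognizability. The sketch below is my reconstruction under that assumption, not the authors' published procedure.

```python
# Hedged sketch: permute contiguous FFT bands of prime and target with the SAME
# random permutation, preserving their spectral relationship while rendering
# each sound unrecognizable. The paper's exact shuffling method may differ.
import numpy as np

rng = np.random.default_rng(0)
prime = rng.standard_normal(44100)    # placeholder audio (1 s at 44.1 kHz);
target = rng.standard_normal(44100)   # in practice, the recorded sounds

def shuffle_spectrum(signal: np.ndarray, perm: np.ndarray) -> np.ndarray:
    spec = np.fft.rfft(signal)
    bands = np.array_split(spec, len(perm))            # contiguous frequency bands
    shuffled = np.concatenate([bands[i] for i in perm])
    return np.fft.irfft(shuffled, n=len(signal))

perm = rng.permutation(32)            # one permutation shared by the pair
prime_shuffled = shuffle_spectrum(prime, perm)
target_shuffled = shuffle_spectrum(target, perm)
```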


2021 ◽  
Vol 42 (03) ◽  
pp. 237-247
Author(s):  
Eric Branda ◽  
Tobias Wurzbacher

A requirement for modern hearing aids is to evaluate the listening environment and automatically apply appropriate gain and feature settings for optimal hearing in that environment. This has predominantly been achieved via the hearing aids' acoustic sensors, which measure acoustic characteristics such as the amplitude and modulation of the incoming sound sources. However, acoustic information alone is not always sufficient to give a clear indication of the soundscape and the user's listening needs. User activity, such as being stationary or in motion, can drastically change those needs. Recently, hearing aids have begun using integrated motion sensors to feed further information into the hearing aid's decision-making process when determining the listening environment. Specifically, accelerometer technology has proven to be an appropriate solution for motion sensor integration in hearing aids. Recent investigations have shown benefits of integrated motion sensors in both laboratory and real-world ecological momentary assessment measurements. The combination of acoustic and motion sensors provides the hearing aids with data to better optimize their features in anticipation of the user's listening needs.
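
The fusion logic can be caricatured in a few lines. This is an illustrative toy classifier, not any vendor's algorithm; the feature names, thresholds, and class labels are invented for the example.

```python
# Toy sketch of acoustic + motion sensor fusion in a hearing aid classifier.
# Thresholds and class labels are invented for illustration only.
import numpy as np

def is_in_motion(accel_xyz: np.ndarray, var_threshold: float = 0.5) -> bool:
    """Crude motion detector: variance of accelerometer magnitude over a window."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)      # accel_xyz: (n_samples, 3)
    return magnitude.var() > var_threshold

def classify_environment(level_db: float, modulation: float, moving: bool) -> str:
    if level_db > 70 and modulation > 0.5:
        return "speech in noise"                       # acoustics alone suffice
    if level_db <= 50 and moving:
        return "quiet while walking"                   # motion breaks the tie
    if level_db <= 50:
        return "quiet, stationary"
    return "noise"

# A quiet scene is treated differently once the accelerometer reports walking.
print(classify_environment(45.0, 0.1, moving=True))
```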


2017 ◽  
Vol 16 (4-5) ◽  
pp. 418-430 ◽  
Author(s):  
Gert Herold ◽  
Florian Zenger ◽  
Ennes Sarradj

Microphone arrays can be used to detect sound sources on rotating machinery. For this study, experiments with three different axial fans, featuring backward-skewed, unskewed, and forward-skewed blades, were conducted in a standardized fan test chamber. The measured data are processed using the virtual rotating array method. Subsequent application of beamforming and deconvolution in the frequency domain allows the localization and quantification of separate sources as they appear at different regions of the blades. By evaluating broadband spectra of the leading and trailing edges of the blades, phenomena governing the acoustic characteristics of the fans at different operating points are identified. This enables a detailed discussion of the influence of blade design on the radiated noise.
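
The localization step rests on frequency-domain beamforming. Below is a generic conventional (delay-and-sum) beamformer sketch for a fixed source grid; the virtual-rotating-array resampling and the deconvolution step are omitted, and the function is my construction rather than the authors' code.

```python
# Generic conventional frequency-domain beamformer (delay-and-sum) sketch.
# The virtual-rotating-array resampling and deconvolution steps are omitted.
import numpy as np

def beamform_map(csm: np.ndarray, mic_pos: np.ndarray, grid: np.ndarray,
                 freq: float, c: float = 343.0) -> np.ndarray:
    """csm: (M, M) cross-spectral matrix; mic_pos: (M, 3); grid: (G, 3)."""
    k = 2.0 * np.pi * freq / c
    result = np.empty(len(grid))
    for g, point in enumerate(grid):
        r = np.linalg.norm(mic_pos - point, axis=1)      # mic-to-point distances
        steer = np.exp(-1j * k * r) / r                  # monopole steering vector
        steer /= np.linalg.norm(steer)
        result[g] = np.real(steer.conj() @ csm @ steer)  # output power at point
    return result
```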


2010 ◽  
Vol 22 (5) ◽  
pp. 1026-1035 ◽  
Author(s):  
Daniele Schön ◽  
Sølvi Ystad ◽  
Richard Kronland-Martinet ◽  
Mireille Besson

Two experiments were conducted to examine the conceptual relation between words and nonmeaningful sounds. In order to reduce the role of linguistic mediation, sounds were recorded in such a way that it was highly unlikely that listeners could identify the source that produced them. Related and unrelated sound–word pairs were presented in Experiment 1, and the order of presentation was reversed in Experiment 2 (word–sound). Results showed that, in both experiments, participants were sensitive to the conceptual relation between the two items and categorized them as related or unrelated with good accuracy. Moreover, a relatedness effect developed in the event-related brain potentials between 250 and 600 msec, although with a slightly different scalp topography for word and sound targets. Results are discussed in terms of similar conceptual processing networks, and we propose a tentative model of the semiotics of sounds.
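
The 250–600 msec relatedness effect boils down to a mean-amplitude comparison between conditions. The sketch below assumes epoched single-electrode data with a 200 ms pre-stimulus baseline at 500 Hz; these layout choices are mine, not parameters reported in the abstract.

```python
# Sketch of the relatedness effect as a mean-amplitude difference in the
# 250-600 ms window. Sampling rate and baseline length are assumptions.
import numpy as np

FS = 500          # Hz (assumed)
BASELINE = 0.2    # s of pre-stimulus signal at the start of each epoch (assumed)

def window_mean(epochs: np.ndarray, start: float = 0.250, end: float = 0.600) -> float:
    """epochs: (n_trials, n_samples) for one electrode; mean amplitude in window."""
    i0, i1 = int((BASELINE + start) * FS), int((BASELINE + end) * FS)
    return epochs.mean(axis=0)[i0:i1].mean()   # average trials, then the window

rng = np.random.default_rng(3)
related = rng.standard_normal((100, 512))      # placeholder epochs
unrelated = rng.standard_normal((100, 512))
relatedness_effect = window_mean(unrelated) - window_mean(related)
```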


2017 ◽  
Vol 29 (8) ◽  
pp. 1402-1414 ◽  
Author(s):  
Regine Bader ◽  
Axel Mecklinger

ERP old/new effects have been associated with different subprocesses of episodic recognition memory. The notion that recollection is reflected in the left parietal old/new effect seems to be uncontested. However, the association between episodic familiarity and the mid-frontal old/new effect remains controversial. It has been argued that the mid-frontal old/new effect is functionally equivalent to the N400 and hence merely reflects differences in conceptual fluency between old and new items; on this view, it is related to episodic familiarity only in situations in which conceptual fluency covaries with familiarity. Alternatively, the old/new effect in this time window may reflect an interaction of episodic familiarity and conceptual processing, with each making a unique functional contribution. To test this latter account, we manipulated conceptual fluency and episodic familiarity orthogonally in an incidental recognition test: visually presented old and new words were preceded by either conceptually related or unrelated auditory prime words. If the mid-frontal old/new effect is functionally distinguishable from conceptual priming effects, an ERP contrast reflecting pure priming (correct rejections in the related vs. unrelated condition) and a contrast reflecting priming plus familiarity (hits in the related vs. correct rejections in the unrelated condition) should differ in scalp distribution. As predicted, the pure priming contrast had a right-parietal distribution, as typically observed for the N400 effect, whereas the priming plus familiarity contrast was significantly more frontally accentuated. These findings indicate that old/new effects in this time window are driven by unique functional contributions of episodic familiarity and conceptual processing.
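
The two critical contrasts can be written down directly. In this sketch the condition ERPs are placeholder per-electrode mean amplitudes, and the unit-length normalization is one common way of comparing scalp distributions, not necessarily the authors' exact statistics.

```python
# Sketch of the two ERP contrasts and a distribution-only comparison.
# Condition topographies are placeholders for per-electrode mean amplitudes.
import numpy as np

rng = np.random.default_rng(1)
n_electrodes = 64
cr_related, cr_unrelated, hits_related = (
    rng.standard_normal(n_electrodes) for _ in range(3)
)

pure_priming = cr_related - cr_unrelated          # correct rejections: related - unrelated
priming_plus_fam = hits_related - cr_unrelated    # hits related - CRs unrelated

def unit_length(topo: np.ndarray) -> np.ndarray:
    """Remove overall amplitude so only the scalp *distribution* is compared."""
    return topo / np.linalg.norm(topo)

topo_difference = unit_length(priming_plus_fam) - unit_length(pure_priming)
```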


1987 ◽  
Vol 96 (5) ◽  
pp. 573-577 ◽  
Author(s):  
Colin Painter ◽  
John M. Fredrickson ◽  
Timothy Kaiser ◽  
Roanne Karzon

An electromagnetic artificial larynx was implanted in two volunteer laryngectomees. Both patients were able to communicate well, but the voice quality still needed improvement. Therefore, in this investigation, listener judgments of 22 different sound sources were obtained with a view to incorporating the preferred speech sound in a new version of the device. Electroglottograms were used as sound sources in a speech synthesizer, and sentences with different voice qualities were produced for the listening tests. The results of the listening tests showed a distinct preference for waveforms corresponding to a long completely open phase, a very brief completely closed phase, and an abrupt closing gesture. The optimum acoustic characteristics will be used by electrical engineers to manufacture a new version of the artificial larynx with improved voice quality.
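
The preferred waveform shape can be parameterized with a Rosenberg-style glottal pulse; the model choice and the parameter values below are illustrative assumptions, not the synthesizer actually used in the study.

```python
# Rosenberg-style glottal pulse with the preferred shape: long open phase,
# abrupt closing gesture, very brief closed phase. Parameters are illustrative.
import numpy as np

def glottal_pulse(n: int, open_frac: float = 0.9, close_frac: float = 0.08) -> np.ndarray:
    """One pitch period of n samples; open_frac sets the open-phase length."""
    t = np.arange(n) / n
    rise_end = open_frac - close_frac
    pulse = np.zeros(n)
    rising = t < rise_end
    pulse[rising] = 0.5 * (1.0 - np.cos(np.pi * t[rising] / rise_end))  # slow opening
    closing = (t >= rise_end) & (t < open_frac)
    pulse[closing] = np.cos(0.5 * np.pi * (t[closing] - rise_end) / close_frac)  # abrupt closure
    return pulse  # the remaining ~10% of the period is the closed phase

# ~100 Hz source at a 20 kHz sampling rate, ready to drive a vocal-tract filter.
voice_source = np.tile(glottal_pulse(200), 50)
```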


2005 ◽  
Vol 41 (1) ◽  
pp. 33-75 ◽  
Author(s):  
VYVYAN EVANS

In this paper I argue that the lexeme time constitutes a lexical category of distinct senses instantiated in semantic memory. The array of distinct senses constitutes a motivated semantic network organised with respect to a central sense termed the SANCTIONING SENSE. The senses associated with time are derived by virtue of the interaction between the Sanctioning Sense, conceptual processing and structuring, and context. Hence, semantic representations, cognitive mechanisms, and situated language use are appealed to in accounting for the polysemy associated with time. The model adduced is termed PRINCIPLED POLYSEMY. The conclusion which emerges, in keeping with recent studies in lexical semantics, most notably Lakoff (1987), Pustejovsky (1995), Tyler & Evans (2003) and Evans (2004), is that the lexicon is not an arbitrary repository of unrelated lexemes; rather, the lexicon exhibits a significant degree of systematicity, and productivity. In order to adduce what constitutes a distinct sense, I introduce three criteria: (1) a meaning criterion, (2) a concept elaboration criterion and (3) a grammatical criterion. A further claim is that the lexicon exhibits significant redundancy. This position is at odds with SINGLE-MEANING APPROACHES to polysemy, which posit highly underspecified lexical META-ENTRIES, such as the generative approach of Pustejovsky (1995) or the monosemy position of Ruhl (1989). That is, I propose that lexical items constitute highly granular categories of senses, which are encoded in semantic memory (=the lexicon). This necessitates a set of criteria for determining what counts as a distinct sense without deriving a proliferation of unwarranted senses, a criticism which has been levelled at some studies of word-meaning in cognitive linguistics (e.g. Lakoff 1987).
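
The proposed network structure can be caricatured as a small data structure. This toy encoding is my construction, not Evans's formalism, and the sense labels are illustrative.

```python
# Toy encoding of a radial sense network for "time": senses link back to a
# central Sanctioning Sense, and each record notes which of the three criteria
# license it as distinct. My construction, not Evans's formalism.
from dataclasses import dataclass

@dataclass
class Sense:
    name: str
    derived_from: str | None              # None marks the Sanctioning Sense
    meaning_criterion: bool = True        # distinct meaning
    elaboration_criterion: bool = True    # distinct concept-elaboration patterns
    grammatical_criterion: bool = False   # distinct grammatical behaviour

network = [
    Sense("duration (sanctioning)", None),
    Sense("moment ('the time has come')", "duration (sanctioning)",
          grammatical_criterion=True),
    Sense("commodity ('spend/save time')", "duration (sanctioning)"),
]
```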


2009 ◽  
Vol 21 (10) ◽  
pp. 1882-1892 ◽  
Author(s):  
Jérôme Daltrozzo ◽  
Daniele Schön

The cognitive processing of concepts, that is, abstract general ideas, has mostly been studied with language. However, other domains, such as music, can also convey concepts. Koelsch et al. [Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A. D. Music, language and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7, 302–307, 2004] showed that 10 sec of music can influence the semantic processing of words. However, the length of the musical excerpts did not allow the authors to study the effect of words on musical targets. In this study, we replicated Koelsch et al.'s findings using 1-sec musical excerpts (Experiment 1). This allowed us to study the reverse influence, namely, of a linguistic context on the conceptual processing of musical excerpts (Experiment 2). In both experiments, we recorded behavioral and electrophysiological responses while participants were presented with 50 related and 50 unrelated pairs (context/target). Experiments 1 and 2 showed a larger N400 component of the event-related brain potentials to targets following a conceptually unrelated compared to a related context. The presence of an N400 effect with musical targets suggests that music may convey concepts. The relevance of these results for the comprehension of music as a structured set of conceptual units and for the domain specificity of the mechanisms underlying N400 effects is discussed.
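
The N400 effect itself reduces to a difference wave tested across participants. The window, sampling layout, and one-sample test below are conventional choices of mine, not parameters reported in the abstract.

```python
# Sketch: per-participant N400 effect (unrelated minus related mean amplitude
# in an assumed 300-500 ms window), then a one-sample test across participants.
import numpy as np
from scipy import stats

FS, BASELINE = 500, 0.1                   # Hz and seconds, both assumed

def n400_effect(erp_unrelated: np.ndarray, erp_related: np.ndarray) -> float:
    """erp_*: (n_samples,) participant averages; returns the window difference."""
    i0, i1 = int((BASELINE + 0.300) * FS), int((BASELINE + 0.500) * FS)
    return (erp_unrelated - erp_related)[i0:i1].mean()

rng = np.random.default_rng(2)
effects = rng.normal(-1.0, 1.0, size=20)  # placeholder per-participant effects (uV)
t_stat, p_value = stats.ttest_1samp(effects, popmean=0.0)
```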


Statyba ◽  
1998 ◽  
Vol 4 (4) ◽  
pp. 311-315 ◽  
Author(s):  
Vytautas Stauskis ◽  
Vytautas Kunigėlis

The paper examines the acoustic characteristics of explosion-type pulsed sound sources of four types: a Calibre 8 sound gun, a start gun, a Calibre 16 hunting gun, and a toy gun. The latter was included both because of its short pulse duration and for comparison purposes. Correct selection of a source is very important because it largely determines the results of acoustic measurements. Certain requirements are set for a sound source. In order to concentrate as much energy as possible at the given moment, the signal bandwidth-duration product must be as large as possible. The range of frequencies to be excited depends on the pulse duration. The latter also determines whether interference phenomena will occur in the room and whether individual reflections will merge. The experiments were conducted in a room of 12 m². The distance between the microphone and the pulsed sound source was 1 m. The structure of reflections depends on the pulse by means of which the sound field is excited. The smallest number of reflections is generated by the sound gun; during a 20 ms experiment, the amplitudes of these reflections almost coincided with the direct sound amplitude. The sound gun also emits more sound energy than the other sources. When the sound field is excited by means of a start gun or a hunting gun, the reflection structure, by amplitude, is very different from that produced by the sound gun. A dense reflection structure is formed by the toy gun, but it emits less energy. The structure of reflections generated by the hunting gun is acceptable, but its shots are very unstable, which is a major drawback in an experiment. The shots from the sound gun differ among themselves by only about 0.1% in amplitude, i.e., they are sufficiently stable. Among the four sound sources, the best reflection structure is produced by the sound gun, which is characterised both by the longest pulse duration (about 0.55 ms) and the highest levels of emitted energy. The pulse duration of the other three guns is almost equal, at about 0.15 ms, i.e., 3.6 times shorter than that of the sound gun. The forms of the signals emitted by these sound sources are also very different. The spectrum of each sound source was established on the basis of the Fourier transformation. The spectrum depends largely on the type of gun by means of which the sound field is excited. The maximum width of the spectrum generated by the sound gun occupies almost two octaves, from 500 to 2000 Hz, and the radiation in this range is quite uniform. The spectra of the start gun and the hunting gun are similar, but these guns emit less sound energy than the sound gun, and the structure of reflections generated by them is also quite different. The toy gun radiates energy in a narrower band, the width of which occupies about half an octave, with a maximum at 2000 Hz. This is not very good because too little low- and medium-frequency sound energy is radiated.
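
The duration and spectrum measurements map onto a short analysis sketch; the envelope threshold and dB scaling below are my choices, not the paper's exact procedure.

```python
# Sketch of the pulse analysis: duration from the envelope, spectrum via the
# FFT. The -20 dB threshold and normalization are illustrative choices.
import numpy as np

def pulse_duration(x: np.ndarray, fs: float, thresh_db: float = -20.0) -> float:
    """Seconds during which |x| stays above thresh_db relative to its peak."""
    envelope = np.abs(x)
    above = np.flatnonzero(envelope > envelope.max() * 10 ** (thresh_db / 20))
    return (above[-1] - above[0]) / fs

def pulse_spectrum(x: np.ndarray, fs: float):
    """Magnitude spectrum in dB relative to its maximum."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, 20.0 * np.log10(mag / mag.max())
```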

