Complex Tones: Recently Published Documents

TOTAL DOCUMENTS: 303 (last five years: 24)
H-INDEX: 40 (last five years: 3)

2021 · Vol 10 (47) · pp. 125-141
Author(s): Vladyslava Akkurt, Tetiana Korolova, Oleksandra Popova

This paper investigates the prosodic means that convey the modality of persuasion in a prosecutor's courtroom speech. The material under study consists of English and Ukrainian prosecutors' speeches (total duration: 16 hours). Examination of the experimental material reveals common and language-specific characteristics of the prosody components (melody, loudness, tempo, timbre, and sentence stress) in English and Ukrainian. The pragmatic load of prosodic semantics and the correlation between its parameters have been established. In both English and Ukrainian, an utterance becomes emphatic through the following prosodic means of persuasion in a prosecutor's speech: 1) changes of tempo; 2) changes of voice pitch; 3) replacement of the rising tone with the falling one and vice versa; 4) use of complex tones; 5) use of an interrupted ascending or descending scale; 6) change of sentence-stress type; 7) division of a sense group into two or more parts. These findings lead to the conclusion that, with respect to the typological similarity of prosody in the compared languages, the parameters of the pitch component of intonation are the most informative in differentiating attitudinal meanings. The specific interaction between prosodic and grammatical means in expressing persuasion in Ukrainian and English prosecutors' speech is determined by the degree of difference between the grammatical and lexical systems of the compared languages.


Author(s): Joseph D Wagner, Alice Gelman, Kenneth E. Hancock, Yoojin Chung, Bertrand Delgutte

The pitch of harmonic complex tones (HCTs), which are common in speech, music, and animal vocalizations, plays a key role in the perceptual organization of sound. Unraveling the neural mechanisms of pitch perception requires animal models, but little is known about complex pitch perception by animals, and some species appear to use different pitch mechanisms than humans. Here, we tested rabbits' ability to discriminate the fundamental frequency (F0) of HCTs with missing fundamentals, using a behavioral paradigm inspired by foraging behavior in which rabbits learned to harness a spatial gradient in F0 to find the location of a virtual target within a room for a food reward. Rabbits were initially trained to discriminate HCTs with F0s in the range 400-800 Hz and with harmonics covering a wide frequency range (800-16,000 Hz), and were then tested with stimuli differing in spectral composition, to test the role of harmonic resolvability (Experiment 1), in F0 range (Experiment 2), or in both F0 and spectral content (Experiment 3). Together, these experiments show that rabbits can discriminate HCTs over a wide F0 range (200-1600 Hz) encompassing the range of conspecific vocalizations, and can use either the spectral pattern of harmonics resolved by the cochlea for higher F0s or temporal envelope cues resulting from the interaction between unresolved harmonics for lower F0s. The qualitative similarity of these results to human performance supports the use of rabbits as an animal model for studies of pitch mechanisms, provided that species differences in cochlear frequency selectivity and in the F0 range of vocalizations are taken into account.
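A concrete, hedged illustration of the kind of stimulus described above: the Python sketch below builds a missing-fundamental harmonic complex tone whose harmonics are confined to the 800-16,000 Hz band quoted in the abstract. The sampling rate, equal harmonic amplitudes, and cosine phases are illustrative assumptions, not the study's actual synthesis parameters.

```python
import numpy as np

def missing_fundamental_hct(f0, fs=48000, dur=0.5, f_lo=800.0, f_hi=16000.0):
    """Sketch of a harmonic complex tone (HCT) whose harmonics lie between
    f_lo and f_hi, so the fundamental itself is absent. Equal amplitudes and
    cosine phase are assumptions, not necessarily what the study used."""
    t = np.arange(int(fs * dur)) / fs
    freqs = [n * f0 for n in range(1, int(f_hi // f0) + 1) if n * f0 >= f_lo]
    tone = sum(np.cos(2 * np.pi * f * t) for f in freqs)
    return tone / np.max(np.abs(tone))  # normalize to +/-1

# Example: F0 = 400 Hz with harmonics restricted to 800-16,000 Hz, so the
# 400 Hz component is physically absent but its pitch is still heard.
stimulus = missing_fundamental_hct(400.0)
```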


2021 · Vol 11 (12) · pp. 1592
Author(s): Devin Inabinet, Jan De La Cruz, Justin Cha, Kevin Ng, Gabriella Musacchia

The perception of harmonic complexes provides important information for musical and vocal communication. Numerous studies have shown that musical training and expertise are associated with better processing of harmonic complexes; however, it is unclear whether the perceptual improvement associated with musical training is universal to different pitch models. The current study addresses this issue by measuring discrimination thresholds of musicians (n = 20) and non-musicians (n = 18) for diotic (same sound to both ears) and dichotic (different sounds to each ear) sounds of four stimulus types: (1) pure sinusoidal tones (PT); (2) four-harmonic complex tones (CT); (3) iterated rippled noise (IRN); and (4) interaurally correlated broadband noise, called the "Huggins" or "dichotic" pitch (DP). Frequency difference limens (DLFs) for each stimulus type were obtained via a three-alternative forced-choice adaptive task requiring selection of the interval with the highest pitch, yielding the smallest perceptible fundamental frequency (F0) distance (in Hz) between two sounds. Musical skill was measured by an online test of musical pitch, melody, and timing maintained by the International Laboratory for Brain, Music and Sound Research. Musicianship, length of musical experience, and self-evaluation of musical skill were assessed by questionnaire. Results showed that musicians had smaller DLFs in all four conditions, with the largest group difference in the dichotic condition. DLF thresholds were related to both subjective and objective musical ability. In addition, subjective self-report of musical ability was shown to be a significant variable in group classification. Taken together, the results suggest that music-related plasticity benefits multiple mechanisms of pitch encoding and that self-evaluation of musicality can be reliably associated with objective measures of perception.
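Of the four stimulus types, iterated rippled noise is the least self-explanatory; the minimal delay-and-add sketch below shows the general idea. The delay of 1/F0, the number of iterations, and the gain are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np

def iterated_rippled_noise(f0, n_iter=8, gain=1.0, fs=48000, dur=0.5, seed=0):
    """Sketch of IRN: broadband noise is repeatedly delayed by 1/F0 and added
    back to itself, which imposes a pitch near F0. The iteration count and
    gain here are illustrative, not the study's parameters."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(fs * dur))
    delay = int(round(fs / f0))              # delay in samples, roughly 1/F0
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(delay), noise[:-delay]])
        noise = noise + gain * delayed       # delay-and-add iteration
    return noise / np.max(np.abs(noise))

irn = iterated_rippled_noise(200.0)          # IRN with a pitch near 200 Hz
```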


2021
Author(s): Emma Holmes

Pitch discrimination is better for complex tones than for pure tones, but how more subtle differences in timbre affect pitch discrimination is not fully understood. This study compared pitch discrimination thresholds for flat-spectrum harmonic complex tones with those for natural sounds played by musical instruments of three different timbres (violin, trumpet, and flute). To investigate whether natural familiarity with sounds of particular timbres affects pitch discrimination thresholds, the study recruited musicians who were trained on one of the three instruments. We found that flautists and trumpeters could discriminate smaller differences in pitch for artificial flat-spectrum tones, despite their unfamiliar timbre, than for sounds played by musical instruments, which are regularly heard in everyday life (particularly by musicians who play those instruments). Furthermore, thresholds were no better for the instrument a musician was trained to play than for the other instruments, suggesting that even extensive experience listening to and producing sounds of particular timbres does not reliably improve pitch discrimination thresholds for those timbres. The results show that timbre familiarity provides minimal improvements to auditory acuity, and that physical acoustics (i.e., the presence of equal-amplitude harmonics) determines pitch discrimination thresholds more than experience with natural sounds and timbre-specific training does.


Symmetry · 2021 · Vol 13 (9) · pp. 1748
Author(s): Dawei Shen, Claude Alain, Bernhard Ross

The presence of binaural low-level background noise has been shown to enhance the transient evoked N1 response at about 100 ms after sound onset. This increase in N1 amplitude is thought to reflect noise-mediated efferent feedback facilitation from the auditory cortex to lower auditory centers. To test this hypothesis, we recorded auditory-evoked fields using magnetoencephalography while participants were presented with binaural harmonic complex tones embedded in binaural or monaural background noise at signal-to-noise ratios of 25 dB (low noise) or 5 dB (high noise). Half of the stimuli contained a gap in the middle of the sound. Source activities were measured in the bilateral auditory cortices. The onset and gap N1 responses increased with low binaural noise, but high binaural and low monaural noise did not affect the N1 amplitudes. P1 and P2 onset and gap responses were consistently attenuated by background noise, with distinct effects of noise level and of binaural versus monaural presentation. Moreover, the evoked gamma synchronization was also reduced by background noise, and it showed a lateralized reduction for monaural noise. The effects of noise on the N1 amplitude follow a bell-shaped characteristic that could reflect an optimal representation of acoustic information for transient events embedded in noise.
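For readers who want to reproduce the 25 dB and 5 dB conditions, the hedged sketch below shows one generic way to scale a noise masker relative to a tone using RMS levels; the abstract does not describe the study's actual calibration, so the function and its parameters are assumptions.

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Scale `noise` so the RMS signal-to-noise ratio equals `snr_db`, then
    mix it with `signal`. Generic sketch, not the study's calibration."""
    rms_s = np.sqrt(np.mean(signal ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    target_rms_n = rms_s / (10.0 ** (snr_db / 20.0))   # desired noise RMS
    return signal + noise * (target_rms_n / rms_n)

# Example: a 500 Hz tone embedded in white noise at 25 dB (low noise)
# and at 5 dB (high noise).
fs, dur = 48000, 0.5
t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * 500.0 * t)
low_noise_mix = add_noise_at_snr(tone, np.random.randn(t.size), 25.0)
high_noise_mix = add_noise_at_snr(tone, np.random.randn(t.size), 5.0)
```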


2021 · pp. 1-9
Author(s): Yang-Soo Yoon, Ivy Mills, BaileyAnn Toliver, Christine Park, George Whitaker, ...

Purpose: We compared frequency difference limens (FDLs) in normal-hearing listeners under two listening conditions: sequential and simultaneous. Method: Eighteen adult listeners participated in three experiments. FDLs were measured using a method of limits applied to the comparison frequency. In the sequential listening condition, the tones were presented with a half-second interval between them; in the simultaneous listening condition, the tones were presented at the same time. In the first experiment, one of four reference tones (125, 250, 500, or 750 Hz), presented to the left ear, was paired with one of four starting comparison tones (250, 500, 750, or 1000 Hz), presented to the right ear. The second and third experiments used the same testing conditions as the first, except that the comparison tones were two- and three-tone complexes, respectively. The subjects were asked whether the tones sounded the same or different. When a subject chose "different," the comparison frequency was decreased by 10% of the frequency difference between the reference and comparison tones. FDLs were determined when the subjects chose "same" three times in a row. Results: FDLs were significantly broader (worse) with simultaneous listening than with sequential listening for the two- and three-tone complex conditions, but not for the single-tone condition. FDLs were narrowest (best) with the three-tone complex under both listening conditions. FDLs broadened as the testing frequencies increased for the single tone and the two-tone complex, but did not broaden at frequencies above 250 Hz for the three-tone complex. Conclusion: The results suggest that sequential and simultaneous frequency discriminations are mediated by different processes at different stages in the auditory pathway for complex tones, but not for pure tones.
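The stopping rule described in the Method section (step the comparison toward the reference by 10% of the current difference after each "different" response, and take the FDL once the listener reports "same" three times in a row) can be summarized in the short sketch below. The `listener_says_same` callable is a hypothetical stand-in for the actual subject, and the bookkeeping details are assumptions.

```python
def measure_fdl(reference_hz, start_comparison_hz, listener_says_same):
    """Sketch of the descending method of limits described above: after each
    "different" response the comparison frequency moves toward the reference
    by 10% of the current difference; the FDL is taken once the listener
    reports "same" three times in a row."""
    comparison = start_comparison_hz
    same_in_a_row = 0
    while same_in_a_row < 3:
        if listener_says_same(reference_hz, comparison):
            same_in_a_row += 1
        else:
            same_in_a_row = 0
            comparison -= 0.10 * (comparison - reference_hz)  # 10% step toward the reference
    return comparison - reference_hz   # frequency difference limen in Hz

# Example with a simulated listener who stops hearing differences below 20 Hz:
fdl = measure_fdl(250.0, 500.0, lambda ref, comp: abs(comp - ref) < 20.0)
```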


Acta Acustica · 2021 · Vol 5 · pp. 49
Author(s): Jussi Jaatinen, Jukka Pätynen, Tapio Lokki

The relationship between perceived pitch and harmonic spectrum in complex tones is ambiguous. In this study, 31 professional orchestra musicians participated in a listening experiment in which they adjusted successively presented low-register complex tones to unison pitch. The tones ranged from A0 to A2 (27.6–110 Hz) and were derived from acoustic instrument samples at three different dynamic levels. Four orchestra instruments were chosen as sources of the stimuli: double bass, bass tuba, contrabassoon, and contrabass clarinet. In addition, a sawtooth tone with 13 harmonics was included as a synthetic reference stimulus. The deviation of the subjects' tuning adjustments from unison was greatest for the lowest tones but remained unexpectedly large for the higher tones as well, even though all participants had long experience in accurate tuning. Previous studies have proposed the spectral centroid and Terhardt's virtual pitch theory as useful predictors of how the envelope of a harmonic spectrum influences the perceived pitch. However, neither of these concepts was supported by our results. According to a principal component analysis of the spectral differences between the presented tone pairs, the contrabass clarinet-type spectrum, in which every second harmonic is attenuated, lowered the perceived pitch of a tone compared with tones having the same fundamental frequency but a different spectral envelope. In summary, the pitches of the stimuli were perceived as undefined and highly dependent on the listener, the spectrum, and the dynamic level. Despite their high professional level, the subjects did not perceive a common, unambiguous pitch for any of the stimuli. The contrabass clarinet-type spectrum lowered the perceived pitch.
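Because the spectral centroid is one of the predictors the study evaluated, a minimal sketch of how it can be computed from a magnitude spectrum follows; the Hann window and the 13-harmonic sawtooth test signal (built here on A1, 55 Hz) are illustrative choices, not the study's analysis settings.

```python
import numpy as np

def spectral_centroid(x, fs):
    """Sketch of the spectral centroid: the amplitude-weighted mean frequency
    of the magnitude spectrum. The Hann window is an assumption."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Example: a sawtooth-like tone built from 13 harmonics of A1 (55 Hz),
# mirroring the synthetic reference stimulus described above.
fs, dur, f0 = 48000, 0.5, 55.0
t = np.arange(int(fs * dur)) / fs
saw13 = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, 14))
print(spectral_centroid(saw13, fs))   # centroid in Hz
```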


2020 · Vol 148 (4) · pp. 2463
Author(s): Satoshi Okazaki, Minoru Tsuzaki

2020
Author(s): Emily J. Allen, Juraj Mesik, Kendrick N. Kay, Andrew J. Oxenham

Frequency-to-place mapping, or tonotopy, is a fundamental organizing principle from the earliest stages of auditory processing in the cochlea to subcortical and cortical regions. Although cortical maps are referred to as tonotopic, previous studies employed sounds that covary in spectral content and higher-level perceptual features such as pitch, making it unclear whether these maps are inherited from cochlear organization and are indeed tonotopic, or instead reflect transformations based on higher-level features. We used high-resolution fMRI to measure BOLD responses in 10 participants as they listened to pure tones that varied in frequency or complex tones that independently varied in either spectral content or fundamental frequency (pitch). We show that auditory cortical gradients are in fact a mixture of maps organized both by spectral content and by pitch. Consistent with hierarchical organization, primary regions were tuned predominantly to spectral content, whereas higher-level pitch tuning was observed bilaterally in surrounding non-primary regions.
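The key methodological point, that spectral content and F0 can be varied independently, can be illustrated with bandpass-weighted harmonic complexes as in the hedged sketch below; the Gaussian-in-log-frequency weighting and all parameter values are assumptions, not the study's actual stimuli.

```python
import numpy as np

def bandpassed_complex(f0, center_hz, bw_octaves=1.0, fs=48000, dur=0.5):
    """Sketch of a harmonic complex whose F0 (pitch) and spectral region
    (center_hz) are set independently: harmonics of f0 are weighted by a
    Gaussian in log frequency around center_hz (an illustrative choice)."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    for n in range(1, int((fs / 2) // f0) + 1):
        f = n * f0
        w = np.exp(-0.5 * (np.log2(f / center_hz) / (bw_octaves / 2.0)) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# Same pitch, different spectral content ...
a = bandpassed_complex(f0=200.0, center_hz=1000.0)
b = bandpassed_complex(f0=200.0, center_hz=4000.0)
# ... and same spectral content, different pitch.
c = bandpassed_complex(f0=100.0, center_hz=2000.0)
d = bandpassed_complex(f0=400.0, center_hz=2000.0)
```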

