Female listeners’ personality attributions to male speakers: The role of acoustic parameters of speech

2009 ◽  
Vol 4 (3) ◽  
pp. 155-165 ◽  
Author(s):  
Ákos Gocsál
Phonology ◽  
2018 ◽  
Vol 35 (1) ◽  
pp. 79-114 ◽  
Author(s):  
Alessandro Vietti ◽  
Birgit Alber ◽  
Barbara Vogt

In the Southern Bavarian variety of Tyrolean, laryngeal contrasts undergo a typologically interesting process of neutralisation in word-initial position. We undertake an acoustic analysis of Tyrolean stops in word-initial, word-medial intersonorant and word-final contexts, as well as in obstruent clusters, investigating the role of the acoustic parameters VOT, prevoicing, closure duration, and F0 and H1–H2* measured on following vowels in implementing the contrast, if any. Results show that stops contrast word-medially via [voice] (supported by the acoustic cues of closure duration and F0), and are neutralised completely in word-final position and in obstruent clusters. Word-initially, neutralisation is subject to inter- and intraspeaker variability, and is sensitive to place of articulation. Aspiration plays no role in implementing laryngeal contrasts in Tyrolean.
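For a voiced stretch of signal, the F0 cue mentioned above can be estimated with a simple autocorrelation peak-picker. The sketch below is a pure-Python toy run on a synthetic tone, not the analysis pipeline used in the study:

```python
import math

def estimate_f0(samples, sample_rate, fmin=75.0, fmax=400.0):
    """Estimate fundamental frequency from the autocorrelation peak.

    A deliberately simple sketch: real F0 measurements on post-release
    vowels would use a robust tracker over hand-segmented material.
    """
    n = len(samples)
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        # Unnormalised autocorrelation at this lag.
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthetic "vowel": a 150 Hz sine sampled at 16 kHz for 100 ms.
fs = 16000
tone = [math.sin(2 * math.pi * 150 * t / fs) for t in range(fs // 10)]
f0 = estimate_f0(tone, fs)
```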


Author(s):  
Emilia Parada-Cabaleiro ◽  
Anton Batliner ◽  
Markus Schedl

Music listening is widely used as an inexpensive and safe method to reduce self-perceived anxiety. This strategy rests on the emotivist assumption that emotions are not only recognised in music but also induced by it. Yet the acoustic properties of musical works capable of reducing anxiety are still under-researched. To fill this gap, we explore whether the acoustic parameters relevant in music emotion recognition are also suitable for identifying music with relaxing properties. As an anxiety indicator, we take the score on the positive statements of the six-item Spielberger State-Trait Anxiety Inventory, a self-reported measure ranging from 3 to 12. A user study with 50 participants assessing the relaxing potential of four musical pieces was conducted; subsequently, the acoustic parameters were evaluated. Our study shows that when using classical Western music to reduce self-perceived anxiety, tonal music should be considered. It also indicates that harmonicity is a suitable indicator of relaxing music, while the role of scoring and dynamics in reducing non-pathological listener distress should be investigated further.
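The anxiety measure described above (positive statements of the six-item STAI, total range 3 to 12) can be scored in a few lines. The reverse-scoring convention and example wording below are assumptions for illustration, not details taken from the paper:

```python
def stai6_positive_score(responses):
    """Score the three positively worded STAI-6 items (e.g. 'I feel calm').

    Each response is on a 1-4 Likert scale. Positive items are assumed
    to be reverse-scored so that a higher total (range 3-12) means
    higher self-perceived anxiety; check the instrument's manual for
    the exact procedure.
    """
    if len(responses) != 3 or any(not 1 <= r <= 4 for r in responses):
        raise ValueError("expected three responses on a 1-4 scale")
    return sum(5 - r for r in responses)

# A very calm listener (all 4s) scores the minimum of 3.
score = stai6_positive_score([4, 4, 4])
```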


2019 ◽  
Vol 5 (1) ◽  
pp. 274-283
Author(s):  
Zorana Đorđević ◽  
Dragan Novković

Abstract The overall experience of religious practice is significantly affected by the acoustical properties of temples. The divine service is the most important act in the Orthodox Church, demanding both intelligibility of speech for preaching and adequate acoustics for Byzantine chanting as a form of song-prayer. In order to better understand the role of sound in these historical sacral spaces, this paper explores the acoustics of two well-preserved Orthodox churches, at the Ljubostinja and Naupara monasteries, built in the last building period of medieval Serbia (1371–1459). These represent two types of the Morava architectural style – a triconch combined with a developed and with a compressed inscribed cross, respectively. Using EASERA software, we measured the impulse response for two sound-source positions – in the altar and in the southern chanting apse, the main points from which the Orthodox service is conducted. The acoustic parameters thus obtained (RT, EDT, C50 and STI) were further analysed, pointing out the differences in the experience of sound between naos and narthex, as well as how the position of the sound source influenced that experience. Finally, we compared the results with previous archaeoacoustic research on churches from the same building period.
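The reverberation-time parameter (RT) reported above is conventionally derived from the measured impulse response by Schroeder backward integration. The sketch below illustrates that standard method on a synthetic exponential decay; measurement software such as EASERA additionally applies band filtering and noise compensation:

```python
import math

def rt60_from_impulse_response(ir, sample_rate):
    """Estimate RT60 by Schroeder backward integration, extrapolating
    the decay slope fitted between -5 dB and -25 dB (a T20 estimate).
    """
    # Backward-integrated energy decay curve (EDC).
    edc, total = [], 0.0
    for s in reversed(ir):
        total += s * s
        edc.append(total)
    edc.reverse()
    # Decay curve in dB relative to its start.
    db = [10 * math.log10(e / edc[0]) for e in edc]
    # Least-squares line through the -5 .. -25 dB portion.
    pts = [(i / sample_rate, d) for i, d in enumerate(db) if -25 <= d <= -5]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_d = sum(d for _, d in pts) / n
    slope = (sum((t - mean_t) * (d - mean_d) for t, d in pts)
             / sum((t - mean_t) ** 2 for t, _ in pts))
    return -60.0 / slope  # time to decay by 60 dB

# Synthetic exponential decay with a known RT60 of 0.5 s.
fs = 8000
a = 3 * math.log(10) / 0.5          # amplitude decay rate for RT60 = 0.5 s
ir = [math.exp(-a * t / fs) for t in range(2 * fs)]
rt = rt60_from_impulse_response(ir, fs)
```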


2019 ◽  
Vol 32 (4-5) ◽  
pp. 401-427 ◽  
Author(s):  
Kosuke Motoki ◽  
Toshiki Saito ◽  
Rui Nouchi ◽  
Ryuta Kawashima ◽  
Motoaki Sugiura

Abstract We have seen a rapid growth of interest in cross-modal correspondences between sound and taste over recent years. People consistently associate higher-pitched sounds with sweet/sour foods, while lower-pitched sounds tend to be associated with bitter foods. The human voice is key in broadcast advertising, and the role of voice in communication generally is partly characterized by acoustic parameters of pitch. However, it remains unknown whether voice pitch and taste interactively influence consumer behavior. Since consumers prefer congruent sensory information, it is plausible that voice pitch and taste interactively influence consumers’ responses to advertising stimuli. Based on the cross-modal correspondence phenomenon, this study aimed to elucidate the role played by voice pitch–taste correspondences in advertising effectiveness. Participants listened to voiceover advertisements (at a higher or lower pitch than the original narrator’s voice) for three food products with distinct tastes (sweet, sour, and bitter) and rated their buying intention (an indicator of advertising effectiveness). The results show that the participants were likely to exhibit greater buying intention toward both sweet and sour food when they listened to higher-pitched (vs lower-pitched) voiceover advertisements. The influence of a higher pitch on sweet and sour food preferences was observed in only two of the three studies: studies 1 and 2 for sour food, and studies 2 and 3 for sweet food. These findings emphasize the role that voice pitch–taste correspondence plays in preference formation, and advance the applicability of cross-modal correspondences to business.
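The study describes its manipulation only as raising or lowering the original narrator's pitch. As a purely illustrative sketch of how a semitone shift maps to a frequency ratio, the resampling below shifts pitch but, unlike the duration-preserving methods typically used for voiceover stimuli (e.g. PSOLA), also shortens or lengthens the signal by the same factor:

```python
import math

def shift_pitch_by_resampling(samples, semitones):
    """Shift pitch by linear-interpolation resampling.

    The pitch ratio for a shift of s semitones is 2 ** (s / 12);
    resampling by that ratio scales every frequency (and the duration).
    """
    ratio = 2 ** (semitones / 12)
    out_len = int(len(samples) / ratio)
    out = []
    for i in range(out_len):
        pos = i * ratio
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        # Linear interpolation between neighbouring input samples.
        out.append((1 - frac) * samples[j] + frac * nxt)
    return out

# A 200 Hz tone shifted up one octave should play back at ~400 Hz.
fs = 16000
tone = [math.sin(2 * math.pi * 200 * t / fs) for t in range(fs)]  # 1 s
up = shift_pitch_by_resampling(tone, 12)
```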


2015 ◽  
Vol 43 (4) ◽  
pp. 890-913 ◽  
Author(s):  
EVA MURILLO ◽  
ALMUDENA CAPILLA

Abstract Gestures and vocal elements interact from the early stages of language development, but the role of this interaction in the language learning process is not yet completely understood. The aim of this study is to explore the influence of gestural accompaniment on the acoustic properties of vocalizations in the transition to first words. Eleven Spanish children aged 0;9 to 1;3 were observed longitudinally in a semi-structured play situation with an adult. Vocalizations were analyzed using several acoustic parameters based on those described by Oller et al. (2010). Results indicate that declarative vocalizations have fewer protosyllables than imperative ones, but only when they are produced with a gesture. Protosyllable duration and f(0) are more similar to those of mature speech when produced with pointing and a declarative function than when produced with reaching gestures and imperative purposes. The proportion of canonical syllables produced increases with age, but only when vocalizations are combined with a gesture.


2010 ◽  
Vol 104 (3) ◽  
pp. 1426-1437 ◽  
Author(s):  
Katherine I. Nagel ◽  
Helen M. McLendon ◽  
Allison J. Doupe

Songbirds, which, like humans, learn complex vocalizations, provide an excellent model for the study of acoustic pattern recognition. Here we examined the role of three basic acoustic parameters in an ethologically relevant categorization task. Female zebra finches were first trained to classify songs as belonging to one of two males and then asked whether they could generalize this knowledge to songs systematically altered with respect to frequency, timing, or intensity. Birds' performance on song categorization fell off rapidly when songs were altered in frequency or intensity, but they generalized well to songs that were changed in duration by >25%. Birds were not deaf to timing changes, however; they detected these tempo alterations when asked to discriminate between the same song played back at two different speeds. In addition, when birds were retrained with songs at many intensities, they could correctly categorize songs over a wide range of volumes. Thus although they can detect all these cues, birds attend less to tempo than to frequency or intensity cues during song categorization. These results are unexpected for several reasons: zebra finches normally encounter a wide range of song volumes but most failed to generalize across volumes in this task; males produce only slight variations in tempo, but females generalized widely over changes in song duration; and all three acoustic parameters are critical for auditory neurons. Thus behavioral data place surprising constraints on the relationship between previous experience, behavioral task, neural responses, and perception. We discuss implications for models of auditory pattern recognition.
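Of the three stimulus manipulations above, intensity is the simplest to implement: a decibel gain corresponds to an amplitude factor of 10^(dB/20). A minimal sketch of that mapping (not the stimulus-preparation code used in the study):

```python
import math

def scale_intensity(samples, gain_db):
    """Scale a stimulus by a gain in decibels (+20 dB = 10x amplitude)."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

def rms(samples):
    """Root-mean-square level, proportional to perceived intensity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A 440 Hz tone boosted by 20 dB has ten times the RMS amplitude.
fs = 16000
tone = [math.sin(2 * math.pi * 440 * t / fs) for t in range(fs)]
louder = scale_intensity(tone, 20)
```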


Loquens ◽  
2017 ◽  
Vol 3 (2) ◽  
pp. 033 ◽  
Author(s):  
Joaquim Llisterri ◽  
María J. Machuca ◽  
Antonio Ríos ◽  
Sandra Schwab

The acoustic and perceptual correlates of stress in Spanish have usually been studied at the word level, but few investigations have considered them in a wider context. The aim of the present work is to assess the role of fundamental frequency, duration and amplitude in the perception of lexical stress in Spanish when the word is part of a sentence. An experiment has been carried out in which the participants (39 listeners, 20 from Costa Rica and 19 from Spain) had to identify the position of the lexical stress in words presented in isolation and in the same words embedded in sentences. The stimuli in which the position of the stress was not correctly identified have been acoustically analysed to determine the cause of identification errors. Results suggest that the perception of lexical stress in words within a sentence depends on the stress pattern and on the relationship between the values of the acoustic parameters responsible for the prominence of the stressed vowel and those corresponding to the adjacent unstressed vowels.
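The relationship between a stressed vowel and its adjacent unstressed vowels can be expressed as per-parameter prominence ratios. The helper and the values below are purely illustrative assumptions, not the study's actual analysis or data:

```python
def prominence_ratios(stressed, unstressed_neighbors):
    """Ratio of the stressed vowel's value to the mean of its
    unstressed neighbours, for f0 (Hz), duration (ms) and amplitude
    (RMS). A ratio above 1 marks the parameter as cueing prominence.
    """
    ratios = {}
    for key in ("f0", "duration", "amplitude"):
        mean_unstressed = (sum(v[key] for v in unstressed_neighbors)
                           / len(unstressed_neighbors))
        ratios[key] = stressed[key] / mean_unstressed
    return ratios

# Hypothetical measurements for one stressed vowel and its neighbours.
r = prominence_ratios(
    {"f0": 220.0, "duration": 90.0, "amplitude": 0.30},
    [{"f0": 200.0, "duration": 60.0, "amplitude": 0.25},
     {"f0": 190.0, "duration": 60.0, "amplitude": 0.23}],
)
```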


2016 ◽  
Vol 17 (1) ◽  
pp. 26-47 ◽  
Author(s):  
Péter Pongrácz ◽  
Nikolett Czinege ◽  
Thaissa Menezes Pavan Haynes ◽  
Rosana Suemi Tokumaru ◽  
Ádám Miklósi ◽  
...  

Abstract Excessive dog barking is among the leading sources of noise pollution worldwide; however, the reasons why barking annoys people have remained uninvestigated. Our questions were: is the annoyance rating affected by the acoustic parameters of barks; do the attributed inner state of the dog and the nuisance caused by its barks correlate; and do the gender and country of origin affect the subjects’ sensitivity to barking? Participants from Hungary (N = 100) and Brazil (N = 60) were tested with sets of 27 artificial bark sequences. Subjects rated each bark according to the inner state of the dog and the annoyance caused by that particular bark. Subjects from both countries found high-pitched barks the most annoying; however, harsh, fast-pulsing, low-pitched barks were also unpleasant. Men found high-pitched barks more annoying than women did. Annoyance ratings correlated positively with assumed negative inner states of the dog, while positive emotional ratings correlated negatively with the annoyance level. This is the first indication that acoustic features that were selected for effective vocal signalling may be annoying to human listeners. Among the explanations for this effect, the role of affective communication and the similar bioacoustics of particular animal vocalizations and baby cries are discussed.
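One standard way to quantify the rating correlations reported above is Pearson's r; whether the study used Pearson or a rank correlation is not stated, and the data below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up ratings: annoyance vs. attributed negative inner state.
annoyance = [1, 2, 3, 4, 5]
negative_inner_state = [1.2, 2.1, 2.9, 4.2, 4.8]
r = pearson_r(annoyance, negative_inner_state)
```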

