paralinguistic information
Recently Published Documents


TOTAL DOCUMENTS: 46 (FIVE YEARS: 7)

H-INDEX: 5 (FIVE YEARS: 2)

2021 · Vol 15
Author(s): Hatice Zora, Valéria Csépe

How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger for the deviation in affective prosody than for the combined deviation in pitch accent and affective prosody, in line with previous research indicating not only a larger MMN response to affective prosody than to neutral prosody but also a smaller MMN response to multidimensional deviants than to unidimensional ones. The results further showed a significant P3a response to the affective prosody change, relative to the pitch accent change, at around 300 ms, in accordance with previous findings of an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
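For readers unfamiliar with how oddball responses such as the MMN and P3a are quantified, the sketch below illustrates the generic deviant-minus-standard difference-wave computation on synthetic single-electrode data. The sampling rate, epoch layout, and time windows are illustrative assumptions, not the analysis pipeline of the study above.

```python
import numpy as np

# Hypothetical epoched EEG from one electrode (not data from the study):
# rows are trials, columns are samples at 500 Hz, with 100 ms (50 samples)
# of pre-stimulus baseline at the start of each 700 ms epoch.
FS = 500
BASELINE_SAMPLES = 50
rng = np.random.default_rng(0)
standard_epochs = rng.normal(size=(200, 350))
deviant_epochs = rng.normal(size=(40, 350))

def erp(epochs):
    """Average across trials and subtract the mean of the pre-stimulus baseline."""
    avg = epochs.mean(axis=0)
    return avg - avg[:BASELINE_SAMPLES].mean()

def mean_amplitude(wave, t_start_ms, t_end_ms):
    """Mean amplitude of a wave in a post-stimulus window given in milliseconds."""
    start = BASELINE_SAMPLES + int(t_start_ms / 1000 * FS)
    end = BASELINE_SAMPLES + int(t_end_ms / 1000 * FS)
    return wave[start:end].mean()

# Deviant-minus-standard difference wave; the MMN is typically quantified
# around 200 ms post-stimulus and the P3a around 300 ms.
diff_wave = erp(deviant_epochs) - erp(standard_epochs)
print(f"MMN window (150-250 ms): {mean_amplitude(diff_wave, 150, 250):.3f}")
print(f"P3a window (250-350 ms): {mean_amplitude(diff_wave, 250, 350):.3f}")
```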


2019 · Vol 2019 · pp. 1-12
Author(s): Rami S. Alkhawaldeh

Human speech carries paralinguistic information that is used in many voice-recognition applications. The speaker’s gender is one of the pivotal attributes to be detected from a voice, a task that involves certain complications. To distinguish gender from a voice signal, a set of techniques is employed to determine relevant features and to build a model from a training set; this model is then used to classify a voice signal as male or female. The contributions are three-fold: (i) providing an analysis of well-known voice-signal features using a prominent dataset, (ii) studying machine learning models from different theoretical families for classifying voice gender, and (iii) applying three prominent feature selection algorithms to find promising subsets of features that improve the classification models. The experimental results show that some features are more important than others and are vital for enhancing the classification models’ performance. Experimentation reveals a best recall value of 99.97% overall, a best recall value of 99.7% for the deep learning (DL) and support vector machine (SVM) models, and, with feature selection, a best recall value of 100% for the SVM technique.
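As a rough illustration of the kind of pipeline the abstract describes (acoustic features, a classifier such as an SVM, and filter-based feature selection evaluated by recall), here is a minimal scikit-learn sketch on synthetic stand-in data; the feature count, selector, and hyperparameters are assumptions and do not reproduce the paper’s models or results.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: one row per recording, columns are acoustic
# features (e.g. mean f0, spectral centroid, ...); labels 0 = male, 1 = female.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)

# Filter-style feature selection (ANOVA F-score) followed by an RBF SVM.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),
    SVC(kernel="rbf", C=1.0),
)

# Recall is the metric reported in the abstract; estimate it by cross-validation.
scores = cross_val_score(model, X, y, cv=5, scoring="recall")
print(f"Mean cross-validated recall: {scores.mean():.3f}")
```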


IBRO Reports · 2019 · Vol 6 · pp. S545
Author(s): Haruka Yamasato, Yuta Tamai, Kazuyuki Matsumoto, Shizuko Hiryu, Khota Kobayasi


2019
Author(s): Susanne Schötz, Joost van de Weijer, Robert Eklund

This study investigates domestic cat meows in different contexts and mental states. Measures of fundamental frequency (f0) and duration as well as f0 contours of 780 meows from 40 cats were analysed. We found significant effects of recording context and of mental state on f0 and duration. Moreover, positive (e.g. affiliative) contexts and mental states tended to have rising f0 contours while meows produced in negative (e.g. stressed) contexts and mental states had predominantly falling f0 contours. Our results suggest that cats use biological codes and paralinguistic information to signal mental state.
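The measurements reported above (f0, duration, and rising versus falling f0 contours) can be approximated with standard tools; the sketch below uses librosa’s pYIN pitch tracker on a hypothetical recording. The file name, frequency range, and the crude first-half/second-half contour rule are assumptions, not the authors’ method.

```python
import numpy as np
import librosa

# Hypothetical recording; sr=None keeps the file's native sampling rate.
y, sr = librosa.load("meow.wav", sr=None)
duration_s = len(y) / sr

# Track f0 over time; the search range here is an illustrative assumption.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=100.0, fmax=1200.0, sr=sr)
f0_voiced = f0[~np.isnan(f0)]  # keep voiced frames only (assumes some exist)

# Crude contour label: compare mean f0 of the first and second halves.
half = len(f0_voiced) // 2
contour = "rising" if f0_voiced[half:].mean() > f0_voiced[:half].mean() else "falling"

print(f"duration: {duration_s:.2f} s, "
      f"mean f0: {f0_voiced.mean():.1f} Hz, contour: {contour}")
```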


Author(s): Bernd J. Kröger

This chapter outlines a comprehensive neurocomputational model of voice and speech perception based on (i) already established computational models and (ii) neurophysiological data on the underlying neural processes. Neurocomputational models of speech perception comprise auditory as well as cognitive modules in order to extract sound features as well as linguistic information (linguistic content). A model of voice and speech perception in addition needs to process paralinguistic information such as the gender, age, and emotional or affective state of the speaker. It is argued here that the modules of a neurocomputational model of voice and speech perception need to interact with modules that go beyond unimodal auditory processing because, for example, the processing of paralinguistic information is closely related to visual facial perception. Thus, this chapter describes neural modelling of voice and speech perception in relation to general communication and social-interaction processes, which makes it necessary to develop a hypermodal processing approach.
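Purely as an illustration of the modular decomposition sketched in this abstract (auditory feature extraction, linguistic and paralinguistic modules, and a hypermodal stage that also takes visual facial cues into account), a toy Python sketch follows. All module names, data types, and rules are invented placeholders and do not represent Kröger’s model.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    linguistic: str          # e.g. a recognized word (placeholder)
    speaker_gender: str      # paralinguistic attribute
    speaker_affect: str      # paralinguistic attribute

def auditory_module(audio):
    """Extract low-level sound features (placeholder)."""
    return {"features": audio}

def linguistic_module(features):
    """Map sound features to linguistic content (placeholder)."""
    return "hello"

def paralinguistic_module(features, visual_cues=None):
    """Infer speaker attributes; may combine auditory and visual (facial) cues."""
    gender = "female" if visual_cues == "female_face" else "unknown"
    return gender, "neutral"

def hypermodal_integration(audio, visual_cues=None):
    """Combine the module outputs into a single multimodal percept."""
    feats = auditory_module(audio)["features"]
    gender, affect = paralinguistic_module(feats, visual_cues)
    return Percept(linguistic_module(feats), gender, affect)

print(hypermodal_integration(audio=[0.0, 0.1], visual_cues="female_face"))
```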


Author(s): Claudia Roswandowitz, Corrina Maguinness, Katharina von Kriegstein

The voice contains elementary social communication cues, conveying speech, as well as paralinguistic information pertaining to the emotional state and the identity of the speaker. In contrast to vocal-speech and vocal-emotion processing, voice-identity processing has been less explored. This seems surprising, given the day-to-day significance of person recognition by voice. A valuable approach to unravel how voice-identity processing is accomplished is to investigate people who have a selective deficit in recognizing voices. Such a deficit has been termed phonagnosia. This chapter provides a systematic overview of studies on phonagnosia and how they relate to current neurocognitive models of person recognition. It reviews studies that have characterized people who suffer from phonagnosia following brain damage (i.e. acquired phonagnosia) and also studies which have examined phonagnosia cases without apparent brain lesion (i.e. developmental phonagnosia). Based on the reviewed literature, the chapter emphasizes the need for a careful behavioural characterization of phonagnosia cases by taking into consideration the multistage nature of voice-identity processing and the resulting behavioural phonagnosia subtypes.


Author(s): Jennifer L. Agustus, Julia C. Hailstone, Jason D. Warren

This chapter summarizes the clinical features, cognitive mechanisms, and neuroanatomical substrates of voice-processing disorders associated with the major dementias. Although disturbances of voice processing are rarely the leading feature of these diseases, impaired perception or recognition of voice identity and non-verbal vocal signals contributes to daily-life disability in the dementias and constitutes a significant source of distress for patients and caregivers. The brain networks targeted in particular diseases provide a substrate for the characteristic clinico-anatomical phenotypes that define different dementias and, more particularly, for the development of voice-processing deficits, as these networks overlap closely with those implicated in the processing of voices in the healthy brain. The chapter first reviews key clinical and neuroanatomical characteristics of common dementias that affect voice processing and considers the challenges of assessing voice processing in these diseases. It then outlines a taxonomy of voice-processing symptoms and deficits in the dementias, related to the perception and recognition of voices as complex ‘auditory objects’ that signal speaker identity as well as much other paralinguistic information. The extent to which deficits may be selective for voice attributes versus other domains of non-verbal sound and person knowledge, and the demands of integrating vocal with other sensory information, are considered. The chapter surveys the neuroanatomical correlates of disordered voice processing in neurodegenerative syndromes, and concludes by proposing a framework for understanding voice processing in the dementias and by indicating directions for future work.

