voice production
Recently Published Documents

TOTAL DOCUMENTS: 378 (FIVE YEARS: 81)
H-INDEX: 32 (FIVE YEARS: 2)

2021 ◽  
Vol 3 (2) ◽  
pp. 87-97
Author(s):  
Adrián Castillo-Allendes ◽  
Francisco Contreras-Ruston ◽  
Jeff Searl

This reflection paper addresses the importance of the interaction between voice perception and voice production, emphasizing auditory-vocal integration processes that are not yet widely reported in the voice-clinician literature. Given the above, this article seeks to 1) highlight the important link between voice production and voice perception and 2) consider whether this relationship might be exploited clinically for diagnostic purposes and therapeutic benefit. Existing theories of speech production and its interaction with auditory perception provide context for discussing why the evaluation of auditory-vocal processes could help identify associated origins of dysphonia and inform the clinician about appropriate management strategies. Incorporating auditory-vocal integration assessment through sensorimotor adaptation paradigm testing could prove to be an important addition to clinical voice assessment protocols. Further, if future studies can specify the means to manipulate and enhance a person’s auditory-vocal integration, the efficiency of voice therapy could be increased, leading to improved quality of life for people with voice disorders.
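The sensorimotor adaptation paradigm mentioned above typically shifts a speaker's auditory feedback and measures the opposing drift in their produced pitch. As a rough illustration only (the trial count, shift size, and correction gain below are hypothetical, not taken from the paper), a simple error-correction model of such a run can be sketched as:

```python
import numpy as np

def simulate_adaptation(n_trials=50, shift_cents=100.0, gain=0.15):
    """Toy model of a pitch-shift sensorimotor adaptation run.

    On each trial the speaker hears their own F0 shifted upward by
    `shift_cents`; a fraction `gain` of the perceived error is
    subtracted from the next production (an opposing response).
    """
    produced = np.zeros(n_trials)  # deviation from baseline F0, in cents
    for t in range(1, n_trials):
        heard_error = produced[t - 1] + shift_cents  # auditory feedback error
        produced[t] = produced[t - 1] - gain * heard_error
    return produced

adapt = simulate_adaptation()
# Production drifts toward -shift_cents, partially cancelling the shift.
```

In this toy model, a speaker with weak auditory-vocal integration would correspond to a small `gain` and therefore slower, less complete compensation.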


2021 ◽  
Vol 3 (2) ◽  
pp. 47-56
Author(s):  
Bruno Murmura ◽  
Filippo Barbiera ◽  
Francesco Mecorio ◽  
Giovanni Bortoluzzi ◽  
Ilaria Orefice ◽  
...  

Introduction. The rapid technological evolution of Magnetic Resonance Imaging (MRI) has recently offered a great opportunity for the analysis of voice production. Objectives. This article aims to describe the main physiological principles underlying voice production (in particular the role of the vocal tract) and to give an overview of the literature on MRI of the vocal tract, in order to analyze both present results and future perspectives. Method. A narrative review was performed by searching the MeSH terms “vocal tract” and “MRI” in the PubMed database; the retrieved studies were then selected by relevance. Results. The main fields described in the literature concern the technical feasibility and optimization of MRI sequences, modifications of the vocal tract in vowel or articulatory phonetics, modifications of the vocal tract in singing, 3D reproduction and segmentation of the vocal tract, and description of the vocal tract in pathological conditions. Conclusions. MRI is potentially the best method to study vocal tract physiology during voice production. The most recent studies have achieved good results in representing changes in the vocal tract during the emission of vowels and singing. Further developments in MR technique are necessary to allow an equally detailed study of the faster movements involved in articulated speech, which will open fascinating perspectives for clinical use.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Louisa Traser ◽  
Carmen Schwab ◽  
Fabian Burk ◽  
Ali Caglar Özen ◽  
Michael Burdumy ◽  
...  

Abstract. Respiratory kinematics are important for the regulation of voice production. Dynamic MRI is an excellent tool to study respiratory motion, providing high-resolution cross-sectional images. Unfortunately, in clinical MRI systems images can only be acquired in a horizontal subject position, which does not take into account gravitational effects on the respiratory apparatus. To study the effect of body posture on respiratory kinematics during phonation, 8 singers were examined both in an open-configuration MRI with a rotatable gantry and in a conventional horizontal MRI system. During dynamic MRI the subjects sang sustained tones at different pitches in both supine and upright body positions. Sagittal images of the respiratory system were obtained at 1–3 images per second, from which 6 anatomically defined distances were extracted to characterize movements in the anterior, middle, and posterior sections of the diaphragm as well as the rib cage (diameter at the height of the 3rd and 5th ribs) and the anterior–posterior position of the diaphragm cupola. Regardless of body position, singers maintained their general principles of respiratory kinematics, with combined diaphragm and thorax muscle activation for breath support. This was achieved by expanding the chest an additional 20% during inspiration when singing in the supine position, but not during breathing alone. The diaphragm was cranially displaced in the supine position for both singing and breathing, and its motion range increased. These results facilitate a more realistic extrapolation of research data obtained in a supine position.
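The chest-expansion figure reported above is a relative change in a measured rib-cage diameter between rest and inspiration. As a minimal sketch (the centimeter values below are illustrative, not measurements from the study), the computation looks like:

```python
def percent_expansion(diameter_rest_cm, diameter_inspired_cm):
    """Relative inspiratory expansion of a rib-cage diameter, in percent."""
    return 100.0 * (diameter_inspired_cm - diameter_rest_cm) / diameter_rest_cm

# Illustrative values only: diameter at the height of the 3rd rib,
# upright vs. supine singing, for one hypothetical subject.
upright = percent_expansion(24.0, 26.0)   # expansion while upright
supine = percent_expansion(24.0, 26.5)    # larger expansion while supine
extra = supine - upright                  # additional expansion when supine
```

The same per-frame ratio can be applied to each of the six distances extracted from the sagittal image series.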


2021 ◽  
Author(s):  
Snehal Poojary

<p>Numerous studies over the past decade have investigated making human animation as realistic as possible, especially facial animation. Consider facial animation for human speech: animating a face to match a speech recording requires a lot of effort. Much of the process has now been automated to make it easier for the artist to create facial animation, along with lip sync, from speech provided by the user. While these systems concentrate on the mouth and tongue, where the articulation of speech takes place, very little effort has gone into understanding and recreating the exact motion of the neck during speech. The neck plays an important role in voice production, and hence it is essential to study its motion. The purpose of this research is to study the motion of the neck during speech. This research makes two contributions. First, predicting the motion of the neck around the strap muscles for a given speech. This is achieved by training a program on position data from markers placed on the neck, along with the corresponding speech analysis data. Second, understanding the basic motion of the neck during speech, which will help an artist understand how the neck should be animated.</p>
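The first contribution described above, learning a mapping from speech analysis data to neck-marker positions, can be sketched in its simplest form as a linear regression. The data below are synthetic stand-ins (the thesis does not specify its model or features), so this is only an illustration of the training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-frame speech features (e.g. energy, pitch,
# spectral measures) and the vertical position of one neck marker.
n_frames, n_features = 200, 6
X = rng.normal(size=(n_frames, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.05 * rng.normal(size=n_frames)  # marker position (mm)

# Fit a linear map from speech features to marker position.
X1 = np.hstack([X, np.ones((n_frames, 1))])        # add a bias column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Predict the marker trajectory for unseen speech frames.
X_new = rng.normal(size=(10, n_features))
pred = np.hstack([X_new, np.ones((10, 1))]) @ w
```

A real system would likely use richer audio features and a nonlinear or temporal model, but the pairing of per-frame speech features with per-frame marker positions is the same.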



2021 ◽  
Author(s):  
Ana Laura Cazarin ◽  
Eladio Cardiel ◽  
Laura I. Garay-Jimenez ◽  
Pablo Rogelio Hernandez ◽  
Victor Manuel Valadez Jimenez

Author(s):  
Camilo Rodriguez Fandiño ◽  
Ana Maria Salazar Montes

Introduction: recent research has described changes in the production of voice pitch and timbre that can occur in older adulthood. Such changes may be indicators of early cognitive alterations, even in preclinical stages of cognitive impairment. The purpose of this study was to identify relevant findings in the literature on acoustic analysis in older people with cognitive impairment. Materials and methods: a systematic literature review was carried out, consulting the following databases: PlosOne, Science Direct, PubMed/PMC, and Google Scholar. Search terms included: acoustic analysis, Alzheimer’s disease, mild cognitive impairment, prosody, voice analysis, and voice production; empirical articles describing acoustic analysis in older adult populations with cognitive impairment were included. The assessment was performed independently by two evaluators, who determined the risk of bias in the review. Fifty-nine articles related to the topic were found, of which only 25 met the inclusion criteria. Results: the reviewed articles identified changes in linguistic and paralinguistic prosody and in the timbre and pitch of the voice associated with cognitive impairment in older adults. Conclusion: acoustic analysis protocols could be a good tool to support the differential clinical diagnosis of cognitive impairment in old age and a good opportunity for risk identification in preclinical stages of dementia.
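Two of the most common acoustic measures in this literature are mean fundamental frequency and local jitter (cycle-to-cycle period variability). As a minimal sketch, assuming glottal cycle durations have already been extracted from a recording (the values below are illustrative), they can be computed as:

```python
import numpy as np

def mean_f0_and_jitter(periods_s):
    """Mean fundamental frequency (Hz) and local jitter (%) from a
    sequence of consecutive glottal cycle durations in seconds.

    Local jitter here is the mean absolute difference between
    consecutive periods, divided by the mean period.
    """
    periods = np.asarray(periods_s, dtype=float)
    mean_period = periods.mean()
    mean_f0 = 1.0 / mean_period
    jitter = 100.0 * np.abs(np.diff(periods)).mean() / mean_period
    return mean_f0, jitter

# Illustrative cycle lengths around 8 ms (~125 Hz) with slight variation.
periods = [0.0080, 0.0081, 0.0079, 0.0080, 0.0082]
f0, jitter_pct = mean_f0_and_jitter(periods)
```

Elevated jitter and reduced pitch range are among the prosodic changes the reviewed studies associate with cognitive decline.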


Author(s):  
Raissa Bezerra Rocha ◽  
Wamberto José Lira de Queiroz ◽  
Marcelo Sampaio de Alencar

Abstract. This paper presents a proposal for a source-filter model of voice production, more precisely of voiced sounds. The model generates the signal using linear time-invariant systems and takes into account the biophysics of phonation and the cyclostationary characteristics of the voice signal, related to the vibrational behavior of the vocal cords. The model suggests that the oscillation frequency of the vocal cords is a function of their mass and length, controlled by the longitudinal tension applied to them. The mathematical description of the glottal excitation model is presented, along with a closed-form expression for the power spectral density of the signal that excites the glottis. The voice signal, whose parameters can be adjusted for the detection and classification of glottal pathologies, is also presented. As a result, the output of each block of the diagram that represents the proposed model is analysed, including a power spectral density comparison between emulated voice, original voice, and the classic source-filter model. The Log Spectral Distortion is computed, yielding values below 1.40 dB, an acceptable distortion in all cases.
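The Log Spectral Distortion reported above compares two power spectral densities on a dB scale. One common definition (the paper may use a variant) is the root-mean-square of the per-frequency dB differences, which can be sketched as:

```python
import numpy as np

def log_spectral_distortion(psd_a, psd_b, eps=1e-12):
    """Root-mean-square difference, in dB, between two power spectral
    densities sampled on the same frequency grid (one common LSD form)."""
    a = 10.0 * np.log10(np.asarray(psd_a) + eps)  # eps guards log10(0)
    b = 10.0 * np.log10(np.asarray(psd_b) + eps)
    return np.sqrt(np.mean((a - b) ** 2))

# Two spectra differing by a uniform 0.1 dB offset give an LSD of ~0.1 dB.
freqs = np.linspace(0.0, np.pi, 256)
psd_ref = 1.0 / (1.0 + freqs**2)          # smooth illustrative spectrum
psd_model = psd_ref * 10 ** (0.1 / 10)    # uniform 0.1 dB offset
lsd = log_spectral_distortion(psd_ref, psd_model)
```

On this scale, the sub-1.40 dB values reported for the emulated voice indicate that its spectral envelope stays close to the original across frequency.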

