Tongue Postures and Tongue Centers: A Study of Acoustic-Articulatory Correspondences Across Different Head Angles

2022 ◽  
Vol 12 ◽  
Author(s):  
Chenhao Chiu ◽  
Yining Weng ◽  
Bo-wei Chen

Recent research on body and head positions has shown that postural changes may induce varying degrees of change in acoustic speech signals and articulatory gestures. While the preservation of formant profiles across different postures is suitably accounted for by the two-tube model and perturbation theory, it remains unclear whether this preservation results from the accommodation of tongue postures. Specifically, whether the tongue accommodates changes in head angle to maintain the target acoustics is yet to be determined. The present study examines vowel acoustics and their correspondence with the articulatory maneuvers of the tongue, including both tongue postures and movements of the tongue center, across different head angles. The results show that vowel acoustics, including pitch and formants, are largely unaffected by upward or downward tilting of the head. These preserved acoustics may be attributed to lingual gestures that compensate for the effects of gravity. Our results also reveal that tongue postures in response to head movements appear to be vowel-dependent, and that the tongue center may serve as an underlying drive that covaries with changes in head angle. These results imply a close relationship between vowel acoustics and tongue postures, as well as a target-oriented strategy for different head angles.
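The tube model invoked above rests on the simplest uniform-tube approximation of the vocal tract. As a hedged illustration (not part of the study itself), the odd-quarter-wavelength resonances of a uniform tube closed at the glottis and open at the lips can be computed as follows; the default tract length and speed of sound are illustrative values:

```python
def tube_formants(length_m=0.17, c=350.0, n_formants=3):
    """Resonances (Hz) of a uniform tube closed at one end and open at the
    other: F_n = (2n - 1) * c / (4 * L), the classic neutral-vowel model."""
    return [(2 * n - 1) * c / (4 * length_m) for n in range(1, n_formants + 1)]

# A 17 cm tract with c = 350 m/s gives resonances near 515, 1544, 2574 Hz.
print([round(f, 1) for f in tube_formants()])
```

Perturbation theory then describes how constrictions along the tube shift these resonances, which is why a head tilt that leaves the tongue constriction unchanged also leaves the formants largely intact.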

2019 ◽  
Vol 14 (1) ◽  
Author(s):  
Julian Mangesius ◽  
Thomas Seppi ◽  
Rocco Weigel ◽  
Christoph Reinhold Arnold ◽  
Danijela Vasiljevic ◽  
...  

Abstract
Background: The present study investigates the intrafractional accuracy of a frameless thermoplastic mask used for head immobilization during stereotactic radiotherapy. Non-invasive masks cannot completely prevent head movements. Previous studies attempted to estimate the magnitude of intrafractional inaccuracy by means of pre- and postfractional measurements only; however, this may not be sufficient to accurately capture head movements occurring during the fraction itself.
Materials and methods: Intrafractional deviation of mask-fixed head positions was measured in five patients during a total of 94 fractions by means of close-meshed repeated ExacTrac measurements (every 1.4 min) conducted during the entire treatment session. A median of six (range: 4 to 11) measurements was recorded per fraction, delivering a dataset of 453 measurements.
Results: Random errors (SD) for the x, y and z axes were 0.27 mm, 0.29 mm and 0.29 mm, respectively. Median 3D deviation was 0.29 mm. Of all 3D intrafractional motions, 5.5% and 0.4% exceeded 1 mm and 2 mm, respectively. A moderate correlation between treatment duration and mean 3D displacement was determined (rs = 0.45). Mean 3D deviation increased from 0.21 mm (SD = 0.26 mm) in the first 2 min to a maximum of 0.53 mm (SD = 0.31 mm) after 10 min of treatment time.
Conclusion: Pre- and post-treatment measurement is not sufficient to adequately determine the range of intrafractional head motion. Thermoplastic masks provide reliable interfractional and intrafractional immobilization for image-guided stereotactic hypofractionated radiotherapy. Greater positioning accuracy may be obtained by reducing treatment duration (< 6 min) and applying intrafractional correction.
Trial registration: Clinicaltrials.gov, NCT03896555, registered 01 April 2019 (retrospectively registered).
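The per-axis random errors and 3D deviations reported above follow directly from the repeated ExacTrac position samples. A minimal sketch of the computation, using hypothetical displacements rather than the study's data:

```python
import math
from statistics import stdev, median

def random_errors(samples):
    """Per-axis random error: SD of repeated (x, y, z) displacements in mm."""
    xs, ys, zs = zip(*samples)
    return stdev(xs), stdev(ys), stdev(zs)

def deviation_3d(x, y, z):
    """Euclidean 3D displacement from the setup position."""
    return math.sqrt(x * x + y * y + z * z)

# hypothetical intrafractional displacements (mm) for one fraction
samples = [(0.1, -0.2, 0.3), (0.2, 0.1, -0.1), (-0.3, 0.2, 0.2), (0.0, -0.1, 0.4)]
print(median([deviation_3d(*s) for s in samples]))
```

Pooling such per-fraction deviations across all 94 fractions yields the median 3D deviation and the fraction of motions exceeding 1 mm or 2 mm quoted in the results.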


2009 ◽  
pp. 439-461
Author(s):  
Lynne E. Bernstein ◽  
Jintao Jiang

The information in optical speech signals is phonetically impoverished compared to the information in acoustic speech signals presented under good listening conditions. But high lipreading scores among prelingually deaf adults show that optical speech signals are in fact rich in phonetic information. Hearing lipreaders are not as accurate as deaf lipreaders, but they too demonstrate perception of detailed optical phonetic information. This chapter briefly sketches the historical context of, and impediments to, knowledge about optical phonetics and visual speech perception (lipreading). We review findings on deaf and hearing lipreaders, and then turn to recent results on the relationships between optical speech signals and visual speech perception. We extend the discussion of these relationships to the development of visual speech synthesis, and advocate for a close relationship between visual speech perception research and the development of synthetic visible speech.


2011 ◽  
Vol 115 (4) ◽  
pp. 733-742 ◽  
Author(s):  
Siveshigan Pillay ◽  
Jeannette A. Vizuete ◽  
J. Bruce McCallum ◽  
Anthony G. Hudetz

Background: The nucleus basalis of Meynert of the basal forebrain has been implicated in the regulation of the state of consciousness across normal sleep-wake cycles. Its role in the modulation of general anesthesia was investigated.
Methods: Rats were chronically implanted with bilateral infusion cannulae in the nucleus basalis of Meynert and epidural electrodes to record the electroencephalogram in frontal and visual cortices. Animals were anesthetized with desflurane at a concentration required for the loss of righting reflex (4.6 ± 0.5%). Norepinephrine (17.8 nmol) or artificial cerebrospinal fluid was infused at 0.2 μl/min (1 μl total). Behavioral response to infusion was measured by scoring orofacial, limb, and head movements, and postural changes.
Results: Behavioral responses were higher after norepinephrine (2.1 ± 1) than after artificial cerebrospinal fluid (0.63 ± 0.8) infusion (P < 0.01, Student t test). Responses were brief (1-2 min), repetitive, and more frequent after norepinephrine infusion (P < 0.0001, chi-square test). Electroencephalogram delta power decreased after norepinephrine in frontal (70 ± 7%) but not in visual cortex (P < 0.05, Student t test). Simultaneously, electroencephalogram cross-approximate entropy between frontal and visual cortices increased from 3.17 ± 0.56 to 3.85 ± 0.29 after norepinephrine infusion (P < 0.01, Student t test). Behavioral activation was predictable from the decrease in frontal delta power (logistic regression, P < 0.05).
Conclusions: Norepinephrine infusion into the nucleus basalis of Meynert can modulate anesthetic depth, presumably by ascending activation of the cortex. The transient nature of the responses suggests a similarity with the microarousals normally observed during natural sleep and may imply a mechanism for transient awareness under light anesthesia.
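Cross-approximate entropy, the coupling measure used above, quantifies the asynchrony of two signals by comparing template matches at embedding dimensions m and m+1. A minimal sketch of one common formulation (the parameters m and r here are illustrative defaults, not the paper's settings):

```python
import numpy as np

def cross_apen(u, v, m=2, r=0.2):
    """Cross-approximate entropy between two equal-length, standardized
    series (one common formulation); lower values indicate stronger coupling."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()

    def phi(m):
        n = len(u) - m + 1
        U = np.array([u[i:i + m] for i in range(n)])
        V = np.array([v[j:j + m] for j in range(n)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(U[:, None, :] - V[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)  # fraction of v-templates matching each u-template
        return np.log(c + 1e-12).mean()

    return phi(m) - phi(m + 1)
```

An increase in this measure between frontal and visual channels, as reported in the results, indicates reduced predictability of one cortical signal from the other, i.e. greater functional independence.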


2016 ◽  
Vol 12 (1) ◽  
pp. 27-33 ◽  
Author(s):  
M.T. Engell ◽  
H.M. Clayton ◽  
A. Egenvall ◽  
M.A. Weishaupt ◽  
L. Roepstorff

The objectives were to compare the sagittal plane posture of the pelvis, trunk and head of elite dressage riders when riding actively to train the horse versus sitting passively and following the horse's movements at trot, and to evaluate the effects of these changes in rider posture on load distribution over the horse's back. Synchronised motion capture and saddle mat data from seven elite dressage riders were used to measure minimal and maximal angles and range of motion (ROM) for the pelvic, trunk and head segments, the angle between the pelvis and trunk segments, the phase-shift between pitching motions of the pelvis and trunk, and pelvic translation relative to the saddle. Non-parametric statistical tests compared variables between the two rider postures. In the passive rider posture the pelvis, trunk and head showed two pitching cycles per stride. Maximal posterior and anterior pelvic rotation occurred, respectively, early and late in the horse's diagonal stance phase. Compared with the pelvic movements, trunk movements were slightly delayed and head movements were out of phase. In the active rider posture the pelvis and trunk pitched further posteriorly throughout the stride. Most of the riders showed similar sagittal plane movements of the axial body segments, but with some notable individual variations.


2001 ◽  
Vol 86 (4) ◽  
pp. 1729-1749 ◽  
Author(s):  
Brian D. Corneil ◽  
Etienne Olivier ◽  
Frances J. R. Richmond ◽  
Gerald E. Loeb ◽  
Douglas P. Munoz

Electromyographic (EMG) activity was recorded in up to 12 neck muscles in four alert monkeys whose heads were unrestrained to describe the spatial and temporal patterns of neck muscle activation accompanying a large range of head postures and movements. Some head postures and movements were elicited by training animals to generate gaze shifts to visual targets. Other spontaneous head movements were made during orienting, tracking, feeding, expressive, and head-shaking behaviors. These latter movements exhibited a wider range of kinematic patterns. Stable postures and small head movements of only a few degrees were associated with activation of a small number of muscles in a reproducible synergy. Additional muscles were recruited for more eccentric postures and larger movements. For head movements during trained gaze shifts, movement amplitude, velocity, and acceleration were correlated linearly and agonist muscles were recruited without antagonist muscles. Complex sequences of reciprocal bursts in agonist and antagonist muscles were observed during very brisk movements. Turning movements of similar amplitudes that began from different initial head positions were associated with systematic variations in the activities of different muscles and in the relative timings of these activities. Unique recruitment synergies were observed during feeding and head-shaking behaviors. Our results emphasize that the recruitment of a given muscle was generally ordered and consistent, but that strategies for coordination among the various neck muscles were often complex and appeared to depend on the specifics of musculoskeletal architecture, posture, and movement kinematics that differ substantially among species.


2020 ◽  
Author(s):  
Walter F. Bischof ◽  
Nicola C Anderson ◽  
Michael T. Doswell ◽  
Alan Kingstone

How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omni-directional panoramic scenes, composed of both landscape and fractal images, using a virtual-reality (VR) viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations and the distribution of saccade directions; the spatial distribution of head positions and the distribution of head shifts; as well as the relation between eye and head movements. The results show that, for landscape scenes, eye and head behaviour best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic VR environment, and the results demonstrate that a VR environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behaviour.


2021 ◽  
Vol 15 ◽  
Author(s):  
Omid Abbasi ◽  
Nadine Steingräber ◽  
Joachim Gross

Recording brain activity during speech production using magnetoencephalography (MEG) can help us to understand the dynamics of speech production. However, these measurements are challenging due to induced artifacts from several sources, such as facial muscle activity and movements of the lower jaw and head. Here, we aimed to characterize speech-related artifacts, focusing on head movements, and subsequently present an approach to remove these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions/orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach combining regression analysis and signal space projection (SSP) to correct the induced artifacts in the MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. Our proposed artifact rejection approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. As the presented approach was shown to remove artifacts arising from head movements induced by overt speech in the MEG, it will facilitate research addressing the neural basis of speech production with MEG.
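The regression-plus-SSP cleanup described above can be sketched in a generic, library-free form. This is a minimal illustration under simplifying assumptions, not the authors' pipeline; in practice one would work on the continuous head-position traces with an MEG toolbox such as MNE-Python. The idea: first regress the measured head-movement trace out of every channel, then project out the dominant spatial pattern estimated from artifact-dominated segments.

```python
import numpy as np

def regress_out(data, regressor):
    """Remove a head-movement regressor from each channel by least squares.
    data: (n_channels, n_times); regressor: (n_times,)."""
    r = regressor - regressor.mean()
    X = np.column_stack([r, np.ones_like(r)])  # slope + intercept per channel
    beta, *_ = np.linalg.lstsq(X, data.T, rcond=None)
    return data - (X @ beta).T

def ssp_project(data, artifact_segments, n_proj=1):
    """Signal space projection: remove the leading spatial component(s)
    estimated from artifact-dominated segments (both args: channels x times)."""
    U, _, _ = np.linalg.svd(artifact_segments, full_matrices=False)
    P = np.eye(data.shape[0]) - U[:, :n_proj] @ U[:, :n_proj].T
    return P @ data
```

Regression handles the part of the artifact that follows the measured head trace linearly; SSP then removes whatever spatially stereotyped residue remains.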


2020 ◽  
Author(s):  
Omid Abbasi ◽  
Nadine Steingräber ◽  
Joachim Gross

Abstract Recording brain activity during speech production using magnetoencephalography (MEG) can help us to understand the dynamics of speech production. However, these measurements are challenging due to induced artifacts from several sources, such as facial muscle activity and movements of the lower jaw and head. Here, we aimed to characterise speech-related artifacts and subsequently present an approach to remove these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions/orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach combining regression analysis and signal space projection (SSP) to correct the induced artifacts in the MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. Our proposed artifact rejection approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. As the presented approach was shown to remove artifacts induced by overt speech in the MEG, it will facilitate research addressing the neural basis of speech production with MEG.


1988 ◽  
Vol 102 ◽  
pp. 343-347
Author(s):  
M. Klapisch

Abstract A formal expansion of the CRM (collisional-radiative model) in powers of a small parameter is presented. The terms of the expansion are products of matrices; inverses are interpreted as effects of cascades. It will be shown that this allows for the separation of the different contributions to the populations, thus providing a natural classification scheme for processes involving atoms in plasmas. Sum rules can be formulated, allowing the population of the levels, in some simple cases, to be related in a transparent way to the quantum numbers.
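The structure of such an expansion can be illustrated schematically with a Neumann series, under the assumption that the rate matrix splits as $M = M_0 + \varepsilon M_1$ with $\varepsilon$ small (an illustrative form, not necessarily the paper's exact formulation). The level populations $\mathbf{n}$ solving $M\,\mathbf{n} = \mathbf{s}$ then expand as

```latex
\mathbf{n} = (M_0 + \varepsilon M_1)^{-1}\,\mathbf{s}
           = \sum_{k=0}^{\infty} \left(-\varepsilon\, M_0^{-1} M_1\right)^{k} M_0^{-1}\,\mathbf{s},
```

valid when the spectral radius of $\varepsilon M_0^{-1} M_1$ is below one. Each term is a product of matrices, and each factor $M_0^{-1}$ can be read as a cascade through the unperturbed system, matching the interpretation sketched above.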

