The effect of visual feedback and training in auditory-perceptual judgment of voice quality

2015, Vol 42 (1), pp. 1-8
Author(s): Ben Barsties, Mieke Beers, Liesbeth Ten Cate, Karin Van Ballegooijen, Lilian Braam, ...

2017, Vol 60 (6S), pp. 1818-1825
Author(s): Yana Yunusova, Elaine Kearney, Madhura Kulkarni, Brandon Haworth, Melanie Baljko, ...

Purpose The purpose of this pilot study was to demonstrate the effect of augmented visual feedback on acquisition and short-term retention of a relatively simple instruction to increase movement amplitude during speaking tasks in patients with dysarthria due to Parkinson's disease (PD). Method Nine patients diagnosed with PD, hypokinetic dysarthria, and impaired speech intelligibility participated in a training program aimed at increasing the size of their articulatory (tongue) movements during sentences. Two sessions were conducted: a baseline and training session, followed by a retention session 48 hr later. At baseline, sentences were produced in normal, loud, and clear speaking conditions. Game-based visual feedback regarding the size of the articulatory working space (AWS) was presented during training. Results Eight of nine participants benefited from training, increasing their sentence AWS to a greater degree following feedback as compared with the baseline loud and clear conditions. The majority of participants were able to demonstrate the learned skill at the retention session. Conclusions This study demonstrated the feasibility of augmented visual feedback via articulatory kinematics for training movement enlargement in patients with hypokinesia due to PD. Supplemental Materials: https://doi.org/10.23641/asha.5116840
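The abstract does not spell out how the articulatory working space (AWS) is computed; a common operationalization in kinematic work is the area (or volume) of the convex hull enclosing the articulator trajectory. The sketch below, on made-up data, assumes midsagittal (x, y) tongue-sensor positions and uses SciPy's convex hull as the AWS estimate.

```python
# Hypothetical sketch: AWS as the convex hull of 2-D tongue-sensor positions
# recorded during a sentence. The study's exact AWS definition is not given in
# the abstract; this assumes a midsagittal (x, y) trajectory and hull area.
import numpy as np
from scipy.spatial import ConvexHull

def articulatory_working_space(xy: np.ndarray) -> float:
    """Return the convex-hull area of an (N, 2) array of tongue positions."""
    hull = ConvexHull(xy)
    return hull.volume  # for 2-D input, .volume is the enclosed area

# Example: compare a baseline sentence with a feedback-trained repetition (fake data).
rng = np.random.default_rng(0)
baseline = rng.normal(scale=4.0, size=(500, 2))  # smaller movements
trained = rng.normal(scale=6.0, size=(500, 2))   # enlarged movements
print(articulatory_working_space(trained) / articulatory_working_space(baseline))
```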


Author(s): Rania Hodhod, Shamim Khan, Shuangbao Wang

The growing number of reported cyber-attacks poses a difficult challenge to individuals, governments, and organizations. Adequate protection of information systems urgently requires a cybersecurity-educated workforce trained with a curriculum that covers the essential skills required for different cybersecurity work roles. The goal of the CyberMaster expert system is to assist inexperienced instructors with cybersecurity course design. It is an intelligent system that uses visual feedback to guide the user through the design process. Initial test executions show the promise of such a system in addressing the enormous shortage of cybersecurity experts available to design courses and training programs.
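CyberMaster's internals are not described in the abstract; the hypothetical sketch below only illustrates the kind of skill-coverage check an expert system could turn into visual feedback for an instructor. The role-to-skill mapping and function names are invented for illustration.

```python
# Illustrative gap analysis: compare the topics an instructor has planned against
# the skills expected for a chosen cybersecurity work role. The mapping below is
# hypothetical, not taken from CyberMaster or any curriculum framework.
ROLE_SKILLS = {
    "incident_responder": {"log analysis", "malware triage", "network forensics"},
    "security_analyst": {"threat modeling", "log analysis", "risk assessment"},
}

def course_gaps(role: str, planned_topics: set[str]) -> dict[str, set[str]]:
    """Return which required skills are covered and which are still missing."""
    required = ROLE_SKILLS[role]
    return {"covered": required & planned_topics, "missing": required - planned_topics}

print(course_gaps("incident_responder", {"log analysis", "phishing awareness"}))
# -> covered: {'log analysis'}; missing: {'malware triage', 'network forensics'}
```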


2020, Vol 238 (12), pp. 2857-2864
Author(s): D. Quarona, M. Raffuzzi, M. Costantini, C. Sinigaglia

Abstract Action and vision are known to be tightly coupled with each other. In a previous study, we found that repeatedly grasping an object without any visual feedback might produce a perceptual aftereffect when the object was later presented visually in a perceptual judgment task. In this study, we explored whether and how such an effect could be modulated by presenting the object behind a transparent barrier. Our conjecture was that if perceptual judgment relies, at least in part, on the same processes and representations as those involved in action, then one should expect a slowdown in judgment performance when the target object appears to be out of reach. This is what we found. The result indicates that not only acting upon an object but also being prevented from acting upon it can affect how the object is perceptually judged.
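To make the reported slowdown concrete, here is a minimal sketch under assumptions (the abstract does not state the statistics used): per-participant mean judgment reaction times in the reachable and barrier conditions, compared with a paired t-test. All numbers are made up.

```python
# Hypothetical per-participant mean judgment RTs (ms) with the object reachable
# vs. behind a transparent barrier, compared with a paired t-test.
import numpy as np
from scipy import stats

rt_reachable = np.array([612, 580, 655, 701, 590, 634, 618, 660])  # fabricated
rt_barrier = np.array([648, 610, 671, 735, 622, 659, 640, 688])    # fabricated

t, p = stats.ttest_rel(rt_barrier, rt_reachable)
print(f"mean slowdown = {np.mean(rt_barrier - rt_reachable):.1f} ms, t = {t:.2f}, p = {p:.4f}")
```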


Revista CEFAC, 2017, Vol 19 (6), pp. 831-841
Author(s): Maria Fabiana Bonfim de Lima Silva, Sandra Madureira, Luiz Carlos Rusilo, Zuleica Camargo

ABSTRACT Purpose: to present a methodological approach for interpreting perceptual judgments of vocal quality made by a group of evaluators using the Vocal Profile Analysis Scheme script. Methods: a cross-sectional study based on 90 speech samples from 25 female teachers with voice disorders and/or laryngeal changes. Prior to the perceptual judgment, three perceptual tasks were performed to select the samples presented to five evaluators through the Experiment MFC 3.2 script (Praat software). Next, a sequence of tests was applied, based on successive analyses of inter- and intra-evaluator behavior. Data were treated statistically (Cochran and Selenor tests). Results: the analysis of the evaluators' performance made it possible to identify those with the best results in terms of reliability and closeness to the most experienced evaluator, with one evaluator excluded. The cluster analysis also made it possible to draw a voice quality profile of the group of speakers studied. Conclusions: the proposed methodological approach made it possible to identify evaluators whose judgments were grounded in phonetic knowledge and to draw a vocal quality profile of the group of samples analyzed.
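The abstract cites a Cochran test for the evaluators' judgments; assuming this refers to Cochran's Q on binary ratings (e.g., a vocal-quality setting judged present or absent), a minimal sketch on invented 0/1 ratings from three evaluators looks like this.

```python
# Cochran's Q for k evaluators giving binary judgments to the same n samples.
# The ratings matrix below is fabricated for illustration only.
import numpy as np
from scipy.stats import chi2

def cochrans_q(ratings: np.ndarray) -> tuple[float, float]:
    """ratings: (n_samples, k_evaluators) array of 0/1 judgments; returns (Q, p)."""
    k = ratings.shape[1]
    col = ratings.sum(axis=0)   # per-evaluator totals
    row = ratings.sum(axis=1)   # per-sample totals
    n = ratings.sum()           # grand total of positive judgments
    q = (k - 1) * (k * np.sum(col**2) - n**2) / (k * n - np.sum(row**2))
    return q, chi2.sf(q, df=k - 1)

ratings = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0],
                    [1, 1, 0], [1, 0, 1], [0, 1, 0], [1, 1, 1]])
q, p = cochrans_q(ratings)
print(f"Q = {q:.2f}, p = {p:.3f}")
```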


2019, Vol 4 (3), pp. 538-541
Author(s): Jessica S. Kisenwether, Denis Anson

Purpose The purpose of this study was to determine whether visual feedback can compensate for the absence of side tone in controlling vocal quality changes, specifically loudness, during speakerphone use. Method Ten men and 10 women held two 5-min conversations in pairs under audio-only and audiovisual communication conditions. Acoustical data and the number of conversational collisions (communication partners trying to speak at the same time) were compared across conditions. Results There were no statistically significant differences in acoustical measures of voice quality between audio-only and audiovisual conversations; however, vocal intensity was consistently 4 times more powerful than average face-to-face conversational intensity in both conditions. The number of conversational collisions was significantly lower in the audiovisual condition than in the audio-only condition. Conclusion Results suggest that visual feedback allowed for modulation of conversational flow (fewer conversational collisions) but not of vocal quality. Visual feedback did not overcome the absence of side tone and resulted in the same increased conversational loudness observed in the audio-only condition. As a result, remote conversational partners such as clients and telehealth practitioners are more susceptible to developing vocal health issues.
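The 4-times-power claim translates directly into decibels; the short sketch below shows the conversion, with the power ratio as the only input (the study's absolute SPL values are not reproduced here).

```python
# A 4x increase in acoustic power corresponds to roughly a 6 dB rise in level,
# since level difference in dB is 10 * log10 of the power ratio.
import math

power_ratio = 4.0
level_increase_db = 10 * math.log10(power_ratio)
print(f"A {power_ratio:g}x power ratio is a {level_increase_db:.1f} dB increase.")  # ~6.0 dB
```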


2018, Vol 14 (01), pp. 186
Author(s): Robinson Jiménez, Oscar Avies Sanchez, Mauricio Mauledeox

This article describes the development of a remote laboratory environment used for testing and training sessions in robotics tasks. The environment is built around two robotic arms, a network link, an Arduino board with an Ethernet shield, and an IP camera. The remote laboratory provides remote control of the robotic arms with visual feedback of the robots' actions through the camera; with a group of test users, performance in telecontrol tasks of up to 92% was obtained.
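The article's communication protocol is not given here; the client-side sketch below is a hypothetical illustration of telecontrol over a TCP link to the Arduino Ethernet shield. The host, port, and command format are invented for the example.

```python
# Hypothetical telecontrol client: assumes the Arduino Ethernet shield runs a simple
# TCP server that accepts newline-terminated "joint,angle" commands for one arm.
# Host, port, and command syntax are assumptions, not taken from the article.
import socket

ARDUINO_HOST = "192.168.0.50"  # hypothetical lab address
ARDUINO_PORT = 5000            # hypothetical port

def send_joint_command(joint: int, angle_deg: int) -> None:
    """Send a single joint command to the remote arm and print its reply."""
    with socket.create_connection((ARDUINO_HOST, ARDUINO_PORT), timeout=2.0) as sock:
        sock.sendall(f"{joint},{angle_deg}\n".encode("ascii"))
        print(sock.recv(64).decode("ascii", errors="replace"))

# Example: move joint 2 to 45 degrees while watching the IP camera feed.
# send_joint_command(2, 45)
```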

