Speech-Driven Facial Animations Improve Speech-in-Noise Comprehension of Humans

2022 · Vol 15
Author(s): Enrico Varano, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic, ...

Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker's face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person's face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN) and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield an even higher audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.
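For readers who want a concrete picture of the end-to-end setup described above, the following PyTorch sketch pairs a still-image identity encoder and an audio-feature encoder with a frame decoder, plus a frame discriminator. It is a minimal illustration under assumed module names, layer sizes, and audio features, not the authors' actual GAN architecture.

```python
# Minimal sketch (not the authors' model): an end-to-end talking-face GAN in the spirit
# of the abstract, mapping (still identity image, audio feature window) -> video frame.
# All module names, sizes, and features are illustrative assumptions.
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    def __init__(self, audio_dim=128, id_channels=3):
        super().__init__()
        # Encode the identity image (a still face) into a feature map.
        self.id_encoder = nn.Sequential(
            nn.Conv2d(id_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Encode a short window of audio features (e.g. log-mel frames).
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(),
            nn.Linear(256, 64),
        )
        # Decode identity + audio features back into an RGB frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, id_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, still_image, audio_window):
        id_feat = self.id_encoder(still_image)                 # (B, 64, H/4, W/4)
        a = self.audio_encoder(audio_window)                   # (B, 64)
        a = a[:, :, None, None].expand(-1, -1, id_feat.size(2), id_feat.size(3))
        return self.decoder(torch.cat([id_feat, a], dim=1))    # (B, 3, H, W)

class FrameDiscriminator(nn.Module):
    """Judges whether a frame is a natural video frame or a synthesized one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, frame):
        return self.net(frame)

# Shape check with dummy data: one 64x64 identity image and one 128-dim audio window.
G, D = FrameGenerator(), FrameDiscriminator()
frame = G(torch.randn(1, 3, 64, 64), torch.randn(1, 128))
print(frame.shape, D(frame).shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 1])
```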


Author(s): Shuaitao Zhang, Yuliang Liu, Lianwen Jin, Yaoxiong Huang, Songxuan Lai

A new method is proposed for removing text from natural images. The challenge is to first accurately localize text at the stroke level and then replace it with a visually plausible background. Unlike previous methods that require image patches to erase scene text, our method, namely the ensconce network (EnsNet), can operate end-to-end on a single image without any prior knowledge. The overall structure is an end-to-end trainable FCN-ResNet-18 network with a conditional generative adversarial network (cGAN). The features of the former are first enhanced by a novel lateral connection structure and then refined by four carefully designed losses: a multiscale regression loss and a content loss, which capture the global discrepancy between features at different levels, and a texture loss and a total variation loss, which primarily target filling the text regions and preserving the realism of the background. The latter is a novel local-sensitive GAN, which attentively assesses the local consistency of the text-erased regions. Both qualitative and quantitative sensitivity experiments on synthetic images and the ICDAR 2013 dataset demonstrate that each component of EnsNet is essential to achieving good performance. Moreover, EnsNet significantly outperforms previous state-of-the-art methods in terms of all metrics. In addition, a qualitative experiment conducted on the SBMNet dataset further demonstrates that the proposed method also performs well on general object removal tasks (such as pedestrians). EnsNet is extremely fast, running at 333 fps on an i5-8600 CPU device.
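As a rough illustration of how the generator losses and the local adversarial term described above can be combined, the sketch below implements plausible versions of the regression, content, texture (Gram-matrix), total-variation, and adversarial terms. The exact formulations and weights are assumptions, not the values published for EnsNet; the feature lists stand in for multi-level feature maps from whatever backbone is used.

```python
# Illustrative composite objective in the spirit of EnsNet's loss design.
# Weights and exact terms are assumptions, not the paper's published formulation.
import torch
import torch.nn.functional as F

def total_variation_loss(img):
    """Encourages a smooth, plausible background in the erased regions."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def gram_matrix(feat):
    """Gram matrix of a feature map, used for the texture (style) term."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def composite_erasure_loss(pred, target, pred_feats, target_feats, d_fake_score,
                           w_rec=1.0, w_content=0.05, w_texture=120.0, w_tv=0.1, w_adv=0.01):
    # pred/target: erased-output and ground-truth images; *_feats: lists of feature maps
    # at several levels; d_fake_score: local discriminator output on the prediction.
    rec = F.l1_loss(pred, target)                                    # pixel regression
    content = sum(F.l1_loss(p, t) for p, t in zip(pred_feats, target_feats))
    texture = sum(F.l1_loss(gram_matrix(p), gram_matrix(t))
                  for p, t in zip(pred_feats, target_feats))
    tv = total_variation_loss(pred)
    adv = -d_fake_score.mean()                                       # fool the local discriminator
    return (w_rec * rec + w_content * content + w_texture * texture
            + w_tv * tv + w_adv * adv)
```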


2020 · Vol 29 (1), pp. 6-17
Author(s): Frank Iglehart

Purpose: The classroom acoustic standard ANSI/ASA S12.60-2010/Part 1 requires a reverberation time (RT) of 0.3 s for children with hearing impairment, shorter than its requirement of 0.6 s for children with typical hearing. While preliminary data from conference proceedings support this new 0.3-s RT requirement, peer-reviewed data supporting it are not available for children wearing hearing aids. To help address this, this article compares speech perception performance by children with hearing aids across RTs, including those specified in the ANSI/ASA-2010 standard. A related clinical issue is whether assessments of speech perception conducted in near-anechoic sound booths, which may overestimate performance in reverberant classrooms, provide a more reliable estimate when the child is in a classroom with a short RT of 0.3 s. To address this, this study compared speech perception by children with hearing aids in a sound booth with that in a 0.3-s RT.

Method: Participants listened in classroom RTs of 0.3, 0.6, and 0.9 s and in a near-anechoic sound booth. All conditions also included a 21-dB range of speech-to-noise ratios (SNRs) to further represent classroom listening environments. The performance measure, using the Bamford–Kowal–Bench Speech-in-Noise (BKB-SIN) test, was the SNR required for 50% correct word recognition in each acoustic condition, with supplementary analyses of percent correct.

Results: Each reduction in RT from 0.9 to 0.6 to 0.3 s significantly benefited the children's perception of speech. Scores obtained in a sound booth were significantly better than those measured in 0.3-s RT.

Conclusion: These results support the acoustic standard of 0.3-s RT for children with hearing impairment in learning spaces ≤ 283 m³, as specified in ANSI/ASA S12.60-2010/Part 1. Additionally, speech perception testing in a sound booth did not accurately predict listening ability in a classroom with 0.3-s RT.

Supplemental Material: https://doi.org/10.23641/asha.11356487
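As a back-of-the-envelope companion to the RT and room-volume figures above (and not part of the study's methodology), Sabine's formula RT60 = 0.161·V/A relates reverberation time to room volume V (m³) and total absorption A (m² sabins), and shows roughly how much extra absorption the 0.3-s requirement implies for the largest learning space covered by the standard.

```python
# Illustrative use of Sabine's formula; the 283 m^3 figure is the volume limit cited
# in the abstract, the rest is a generic acoustics calculation, not study data.
def sabine_absorption(volume_m3: float, rt60_s: float) -> float:
    """Total absorption (m^2 sabins) needed for a target RT60 in a room of given volume."""
    return 0.161 * volume_m3 / rt60_s

volume = 283.0  # upper room-volume limit for these requirements in ANSI/ASA S12.60-2010/Part 1
for rt in (0.9, 0.6, 0.3):
    print(f"RT60 = {rt:.1f} s -> absorption ≈ {sabine_absorption(volume, rt):.0f} m² sabins")
# RT60 = 0.9 s -> absorption ≈ 51 m² sabins
# RT60 = 0.6 s -> absorption ≈ 76 m² sabins
# RT60 = 0.3 s -> absorption ≈ 152 m² sabins
```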


2020 · Vol 34 (04), pp. 4140-4149
Author(s): Zhiwei Hong, Xiaocheng Fan, Tao Jiang, Jianxing Feng

Image denoising is a classic low-level vision problem that attempts to recover a noise-free image from a noisy observation. Recent deep neural network approaches have outperformed traditional prior-based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise learning module based on a conditional generative adversarial network (cGAN). The model learns the noise distribution from the input noisy images and uses it to transform the input clean images into noisy ones without any assumption on the noise distribution and data types. This process results in pairs of clean and pseudo-noisy images. Such pairs are then used to train another denoising network, similar to existing denoising methods based on paired images. The noise learning and denoising components are integrated so that they can be trained end-to-end. Extensive experimental evaluation has been performed on both synthetic and real data, including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms previous models trained on unpaired images as well as state-of-the-art methods based on paired training data when proper training pairs are unavailable.
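To make the two-stage training idea concrete, the sketch below wires a residual noise-generating cGAN to a denoiser that is supervised with the resulting (clean, pseudo-noisy) pairs. The architectures, losses, and hyperparameters are illustrative assumptions rather than the published UIDNet design.

```python
# Conceptual sketch of an unpaired-denoising pipeline in the spirit of the abstract:
# a noise learner turns clean images into pseudo-noisy ones, which then supervise a denoiser.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

noise_generator = nn.Sequential(conv_block(3, 32), conv_block(32, 32), nn.Conv2d(32, 3, 3, padding=1))
discriminator  = nn.Sequential(conv_block(3, 32), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
denoiser       = nn.Sequential(conv_block(3, 32), conv_block(32, 32), nn.Conv2d(32, 3, 3, padding=1))

opt_g = torch.optim.Adam(noise_generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_n = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def training_step(clean_batch, noisy_batch):
    # 1) Noise learning: make clean images look like samples from the real noisy distribution.
    pseudo_noisy = clean_batch + noise_generator(clean_batch)   # residual noise prediction
    d_real = discriminator(noisy_batch)
    d_fake = discriminator(pseudo_noisy.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    d_fake_for_g = discriminator(pseudo_noisy)
    loss_g = F.binary_cross_entropy_with_logits(d_fake_for_g, torch.ones_like(d_fake_for_g))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 2) Denoising: supervise the denoiser with the generated (clean, pseudo-noisy) pairs.
    loss_n = F.l1_loss(denoiser(pseudo_noisy.detach()), clean_batch)
    opt_n.zero_grad(); loss_n.backward(); opt_n.step()
    return loss_d.item(), loss_g.item(), loss_n.item()

# Dummy batches stand in for unpaired clean and noisy training images.
print(training_step(torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)))
```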

