A Teacher-Student Learning Approach for Unsupervised Domain Adaptation of Sequence-Trained ASR Models

Author(s):  
Vimal Manohar ◽  
Pegah Ghahremani ◽  
Daniel Povey ◽  
Sanjeev Khudanpur
2022 ◽  
Vol 8 ◽  
Author(s):  
Hongyu Wang ◽  
Hong Gu ◽  
Pan Qin ◽  
Jia Wang

Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes of different datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for chest radiograph datasets with limited annotations. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled in a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net that predicts a label for each pixel. The proposed U-shaped net is designed for high-resolution radiographs (1,024 × 1,024) to segment effectively while keeping the computational burden in check. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, decreasing the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as the encoder to avoid the computational burden of training an encoder from scratch. A semi-supervised learning approach is proposed that learns from limited annotated data while exploiting additional unannotated data through a pixel-level loss. U-shaped GAN is extended to UDA by treating the source-domain and target-domain data as the annotated and unannotated data, respectively, in the semi-supervised learning approach. Compared with previous models that handle the aforementioned problems separately, U-shaped GAN accommodates the varying data distributions of multiple medical centers, with efficient training and optimized performance. U-shaped GAN can therefore be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets and show that it significantly outperforms state-of-the-art models.
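The two architectural ideas named in the abstract, a pointwise (1 × 1) convolution for channel reduction and a U-shaped pixel-wise discriminator built on a pretrained ResNet-50 encoder, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration under assumed layer sizes and names; it is not the authors' released implementation.

```python
# Minimal sketch (PyTorch) of two components described in the abstract:
# a pointwise (1x1) convolution for channel reduction and a U-shaped
# per-pixel discriminator with a pretrained ResNet-50 encoder.
# Layer sizes and wiring are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
from torchvision import models

class PointwiseReduce(nn.Module):
    """1x1 convolution: reduces feature maps while keeping spatial size."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # pointwise
    def forward(self, x):
        return self.conv(x)

class UShapedDiscriminator(nn.Module):
    """Encoder-decoder that outputs a prediction map with a label per pixel."""
    def __init__(self, num_classes=2):
        super().__init__()
        resnet = models.resnet50(weights="IMAGENET1K_V1")   # pretrained encoder
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])  # 2048-ch maps
        self.reduce = PointwiseReduce(2048, 256)             # pointwise reduction
        self.decoder = nn.Sequential(                        # simple upsampling head
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 1),                   # per-pixel logits
        )
    def forward(self, x):
        return self.decoder(self.reduce(self.encoder(x)))

# Example: a 1,024 x 1,024 radiograph in, a per-pixel label map out.
logits = UShapedDiscriminator()(torch.randn(1, 3, 1024, 1024))  # (1, 2, 1024, 1024)
```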


Author(s):  
Manish Sahu ◽  
Anirban Mukhopadhyay ◽  
Stefan Zachow

Abstract Purpose Segmentation of surgical instruments in endoscopic video streams is essential for automated surgical scene understanding and process modeling. However, relying on fully supervised deep learning for this task is challenging because manual annotation occupies valuable time of clinical experts. Methods We introduce a teacher–student learning approach that learns jointly from annotated simulation data and unlabeled real data to tackle the challenges of simulation-to-real unsupervised domain adaptation for endoscopic image segmentation. Results Empirical results on three datasets highlight the effectiveness of the proposed framework over current approaches for the endoscopic instrument segmentation task. Additionally, we provide an analysis of the major factors affecting performance on all datasets to highlight the strengths and failure modes of our approach. Conclusions We show that the proposed approach can successfully exploit unlabeled real endoscopic video frames and improve generalization performance over pure simulation-based training and the previous state of the art. This takes us one step closer to effective segmentation of surgical instruments in the annotation-scarce setting.
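One simple way to learn jointly from annotated simulation frames and unlabeled real frames is to supervise the student on simulation labels while a teacher network provides pseudo-labels on real frames. The sketch below is a generic illustration of that idea under assumed names (`student`, `teacher`, `sim_batch`, `real_batch`); the authors' exact losses and training schedule may differ.

```python
# Generic teacher-student sketch (PyTorch) for joint training on labeled
# simulation data and unlabeled real data. Names and the pseudo-label
# scheme are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def joint_step(student, teacher, optimizer, sim_batch, real_batch, lam=1.0):
    sim_images, sim_masks = sim_batch        # annotated simulation frames
    real_images = real_batch                 # unlabeled real endoscopic frames

    # Supervised segmentation loss on the simulation domain.
    sup_loss = F.cross_entropy(student(sim_images), sim_masks)

    # Teacher produces pseudo-labels on real frames; student is trained on them.
    with torch.no_grad():
        pseudo_masks = teacher(real_images).argmax(dim=1)
    unsup_loss = F.cross_entropy(student(real_images), pseudo_masks)

    loss = sup_loss + lam * unsup_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```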


Author(s):  
Jinyu Li ◽  
Michael L. Seltzer ◽  
Xi Wang ◽  
Rui Zhao ◽  
Yifan Gong

Author(s):  
J. Hu ◽  
L. Mou ◽  
X. X. Zhu

Abstract. A machine learning algorithm in remote sensing often fails when making inferences on a data set whose geographic location differs from that of the training data. This is because data from different locations have different underlying distributions, caused by complicated factors such as climate and culture. For a large-scale or global-scale task, this issue becomes relevant, since it is extremely expensive to collect training data over all regions of interest. Unsupervised domain adaptation is a potential solution: its goal is to train an algorithm in a source domain and generalize it to a target domain without using any labels from the target domain. In remote sensing, those domains can be associated with geographic locations. In this paper, we adapt an unsupervised domain adaptation strategy based on a teacher-student network, the mean teacher model, to investigate a cross-city classification problem in remote sensing. The mean teacher model consists of two networks with identical architecture, a teacher network and a student network. The objective function is a combination of a classification loss and a consistency loss. The classification loss works within the source domain (one city) and aims at accomplishing the classification goal. The consistency loss works within the target domain (another city) and aims at transferring the knowledge learned from the source to the target. Two cross-city scenarios are set up. In the first, we train the model on data from Munich, Germany, and test it on data from Moscow, Russia. The second is carried out by switching the training and testing data. For comparison, the baseline algorithm is a ResNet-18, which is also chosen as the backbone for the teacher and student networks in the mean teacher model. Over 10 independent runs, in the first scenario the mean teacher model achieves a mean overall accuracy of 53.38%, slightly higher than the baseline's 52.21%. In the second scenario, however, the mean teacher model achieves a mean overall accuracy of 62.71%, about 5% higher than the baseline's 57.76%. This work demonstrates that it is worthwhile to explore the potential of the mean teacher model for solving domain adaptation issues in remote sensing.
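The objective described above, a classification loss on the source city plus a consistency loss on the target city, combined with the exponential-moving-average teacher update that is standard in the mean teacher model, might look roughly like the following sketch. Variable names, the consistency weight, and the EMA decay are assumptions for illustration.

```python
# Sketch (PyTorch) of a mean teacher training step: classification loss on
# the labeled source city, consistency loss on the unlabeled target city,
# and an EMA update of the teacher from the student (standard mean teacher
# practice). Names and hyperparameter values are illustrative assumptions.
import torch
import torch.nn.functional as F

def mean_teacher_step(student, teacher, optimizer,
                      src_x, src_y, tgt_x, cons_weight=1.0, ema_decay=0.99):
    # Classification loss on the labeled source domain (e.g., Munich).
    cls_loss = F.cross_entropy(student(src_x), src_y)

    # Consistency loss on the unlabeled target domain (e.g., Moscow):
    # the student's predictions should agree with the teacher's.
    with torch.no_grad():
        teacher_prob = F.softmax(teacher(tgt_x), dim=1)
    student_prob = F.softmax(student(tgt_x), dim=1)
    cons_loss = F.mse_loss(student_prob, teacher_prob)

    loss = cls_loss + cons_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Teacher weights follow an exponential moving average of the student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
    return loss.item()
```

Both the teacher and student would share the ResNet-18 backbone mentioned in the abstract; only the student receives gradient updates, while the teacher tracks it through the EMA.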


2020 ◽  
Vol 155 ◽  
pp. 113404 ◽  
Author(s):  
Peng Liu ◽  
Ting Xiao ◽  
Cangning Fan ◽  
Wei Zhao ◽  
Xianglong Tang ◽  
...  
