A Practical Approach towards Causality Mining in Clinical Text using Active Transfer Learning

2021 ◽  
pp. 103932
Author(s):  
Musarrat Hussain ◽  
Fahad Ahmed Satti ◽  
Jamil Hussain ◽  
Taqdir Ali ◽  
Syed Imran Ali ◽  
...  
2015 ◽  
Vol 9 (4) ◽  
pp. 595-607 ◽  
Author(s):  
Jie Xin ◽  
Zhiming Cui ◽  
Pengpeng Zhao ◽  
Tianxu He

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models to conduct research in the field of human activity recognition (HAR). However, the development of such models has lagged severely because training deep learning models requires large amounts of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high costs and effort. Existing methods rely heavily on manual data collection and proper labeling of the data by human administrators, which often makes the data-gathering process slow and prone to human-biased labeling. To address these problems, we propose a new solution that reduces the labeling tasks conducted on new data by using previously learned data through a semi-supervised active transfer learning method. This method achieved 95.9% performance while also reducing labeling effort compared with random sampling or active transfer learning methods.
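The core idea described in this abstract is to let a pretrained model pseudo-label confident new samples and route only uncertain ones to human annotators. Below is a minimal sketch of that selection step, not the paper's implementation: the classifier choice, confidence threshold, and function names are all illustrative assumptions.

```python
# Minimal sketch of semi-supervised active labeling reduction:
# a model trained on existing (source) data pseudo-labels confident
# target samples and flags only uncertain ones for manual annotation.
# Threshold and names are illustrative, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_for_labeling(model, X_new, confidence=0.9):
    """Split unlabeled samples into confidently pseudo-labeled and
    uncertain (sent to human annotators) subsets."""
    proba = model.predict_proba(X_new)
    top = proba.max(axis=1)
    confident = top >= confidence
    pseudo_labels = proba.argmax(axis=1)[confident]
    return X_new[confident], pseudo_labels, X_new[~confident]

# Usage: pretrain on labeled sensor windows, then transfer to new data.
rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(500, 16)), rng.integers(0, 4, 500)
X_tgt = rng.normal(size=(200, 16))
model = RandomForestClassifier(random_state=0).fit(X_src, y_src)
X_auto, y_auto, X_to_annotate = select_for_labeling(model, X_tgt)
print(f"auto-labeled {len(X_auto)}, need manual labels for {len(X_to_annotate)}")
```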


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Yongtae Kim ◽  
Youngsoo Kim ◽  
Charles Yang ◽  
Kundo Park ◽  
Grace X. Gu ◽  
...  

Abstract
Neural network-based generative models have been actively investigated as an inverse design method for finding novel materials in a vast design space. However, the applicability of conventional generative models is limited because they cannot access data outside the range of their training sets. Advanced generative models devised to overcome this limitation also suffer from weak predictive power on unseen domains. In this study, we propose a deep neural network-based forward design approach that enables an efficient search for superior materials far beyond the domain of the initial training set. This approach compensates for the weak predictive power of neural networks on an unseen domain through gradual updates of the neural network with active transfer learning and data augmentation methods. We demonstrate the potential of our framework with a grid composite optimization problem that has an astronomical number of possible design configurations. Results show that our proposed framework can provide excellent designs close to the global optima, even with the addition of a very small dataset corresponding to less than 0.5% of the initial training dataset size.
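The forward-design loop the abstract describes can be sketched as follows: a surrogate network scores a large candidate space, the top candidates are evaluated with the expensive ground-truth objective, and the network is gradually updated on the small augmented set. This is an assumed reconstruction; `true_objective` is a hypothetical stand-in for the paper's physics evaluation, and all hyperparameters are illustrative.

```python
# Hedged sketch of forward design via active transfer learning:
# train a surrogate, use it to screen a huge design space, evaluate
# only the top few candidates exactly, and fine-tune on the new data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def true_objective(designs):
    # Hypothetical stand-in for the expensive simulation of a grid
    # composite; here just a toy nonlinear function.
    return np.sin(designs.sum(axis=1)) + 0.1 * designs.prod(axis=1)

rng = np.random.default_rng(1)
X_train = rng.integers(0, 2, size=(1000, 10)).astype(float)  # binary grid designs
y_train = true_objective(X_train)

# warm_start=True lets successive fit() calls continue from current weights,
# mimicking the "gradual updates" of active transfer learning.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         warm_start=True, random_state=1)
surrogate.fit(X_train, y_train)

for round_ in range(5):
    candidates = rng.integers(0, 2, size=(100_000, 10)).astype(float)
    scores = surrogate.predict(candidates)
    top = candidates[np.argsort(scores)[-5:]]    # tiny batch per round
    X_train = np.vstack([X_train, top])
    y_train = np.concatenate([y_train, true_objective(top)])
    surrogate.fit(X_train, y_train)              # fine-tune on augmented set

best = X_train[np.argmax(y_train)]
print("best design found:", best, "value:", y_train.max())
```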


2020 ◽  
Vol 27 (4) ◽  
pp. 584-591 ◽  
Author(s):  
Chen Lin ◽  
Steven Bethard ◽  
Dmitriy Dligach ◽  
Farig Sadeque ◽  
Guergana Savova ◽  
...  

Abstract
Introduction: Classifying whether concepts in an unstructured clinical text are negated is an important unsolved task. New domain adaptation and transfer learning methods can potentially address this issue.
Objective: We examine neural unsupervised domain adaptation methods, introducing a novel combination of domain adaptation with transformer-based transfer learning methods to improve negation detection. We also want to better understand the interaction between the widely used bidirectional encoder representations from transformers (BERT) system and domain adaptation methods.
Materials and Methods: We use 4 clinical text datasets that are annotated with negation status. We evaluate a neural unsupervised domain adaptation algorithm and BERT, a transformer-based model that is pretrained on massive general text datasets. We develop an extension to BERT that uses domain adversarial training, a neural domain adaptation method that adds an objective to the negation task: the classifier should not be able to distinguish between instances from 2 different domains.
Results: The domain adaptation methods we describe show positive results, but, on average, the best performance is obtained by plain BERT (without the extension). We provide evidence that the gains from BERT are likely not additive with the gains from domain adaptation.
Discussion: Our results suggest that, at least for the task of clinical negation detection, BERT subsumes domain adaptation, implying that BERT is already learning very general representations of negation phenomena such that fine-tuning even on a specific corpus does not lead to much overfitting.
Conclusion: Despite being trained on nonclinical text, the large training sets of models like BERT lead to large gains in performance for the clinical negation detection task.
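The domain adversarial extension described in Materials and Methods is conventionally implemented with a gradient-reversal layer between a shared encoder and a domain classifier. The sketch below shows that pattern in PyTorch; it is not the authors' code, and a small MLP stands in for BERT so the example stays self-contained.

```python
# Hedged sketch of domain adversarial training: the encoder is trained
# to predict negation while being unable to predict the source domain,
# via a gradient-reversal layer feeding the domain classifier.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the encoder.
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=768, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.negation_head = nn.Linear(hidden, 2)  # negated vs. not negated
        self.domain_head = nn.Linear(hidden, 2)    # which corpus the span came from

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        return self.negation_head(h), self.domain_head(GradReverse.apply(h, lambd))

# Usage: x could be BERT [CLS] embeddings of concept spans (assumed here).
model = DANN()
x = torch.randn(8, 768)
neg_labels, dom_labels = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
neg_logits, dom_logits = model(x)
loss = nn.functional.cross_entropy(neg_logits, neg_labels) \
     + nn.functional.cross_entropy(dom_logits, dom_labels)
loss.backward()  # encoder gradients from the domain head arrive reversed
```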


Author(s):  
David Kale ◽  
Marjan Ghazvininejad ◽  
Anil Ramakrishna ◽  
Jingrui He ◽  
Yan Liu

2020 ◽  
Vol 24 (2) ◽  
pp. 363-383 ◽  
Author(s):  
Jingmei Li ◽  
Weifei Wu ◽  
Di Xue
