BERTtoCNN: Similarity-preserving enhanced knowledge distillation for stance detection

PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257130
Author(s):  
Yang Li ◽  
Yuqing Sun ◽  
Nana Zhu

In recent years, text sentiment analysis has attracted wide attention and promoted the rise and development of stance detection research. The purpose of stance detection is to determine the author’s stance (favor or against) towards a specific target or proposition in a text. Pre-trained language models like BERT have been proven to perform well on this task. However, in many real-world settings they are computationally expensive, because such heavy models are difficult to deploy with limited resources. To improve efficiency while preserving performance, we propose a knowledge distillation model, BERTtoCNN, which combines the classic distillation loss and a similarity-preserving loss in a joint knowledge distillation framework. On the one hand, BERTtoCNN provides an efficient distillation process to train a novel ‘student’ CNN structure from a much larger ‘teacher’ language model, BERT. On the other hand, based on the similarity-preserving loss function, BERTtoCNN guides the training of the student network so that input pairs with similar (dissimilar) activations in the teacher network have similar (dissimilar) activations in the student network. We conduct experiments and test the proposed model on open Chinese and English stance detection datasets. The experimental results show that our model clearly outperforms the competitive baseline methods.
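As an illustration of how the two losses can be combined, here is a minimal PyTorch sketch of a joint objective with a classic soft-logit distillation term and a similarity-preserving term. The weighting coefficients and the batch-similarity formulation follow the common similarity-preserving KD recipe and are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def similarity_preserving_loss(teacher_feats, student_feats):
    """Penalize differences between batch-wise pairwise similarity matrices.

    teacher_feats, student_feats: (batch, dim) activations from teacher/student.
    """
    # Row-normalized Gram matrices encode which inputs activate similarly.
    g_t = F.normalize(teacher_feats @ teacher_feats.t(), p=2, dim=1)
    g_s = F.normalize(student_feats @ student_feats.t(), p=2, dim=1)
    b = teacher_feats.size(0)
    return ((g_t - g_s) ** 2).sum() / (b * b)

def joint_distillation_loss(student_logits, teacher_logits, labels,
                            teacher_feats, student_feats,
                            temperature=2.0, alpha=0.5, gamma=10.0):
    """Joint objective: cross-entropy + classic KD + similarity preservation."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    sp = similarity_preserving_loss(teacher_feats, student_feats)
    return (1 - alpha) * ce + alpha * kd + gamma * sp
```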

Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 533
Author(s):  
Qin Zhao ◽  
Chenguang Hou ◽  
Changjian Liu ◽  
Peng Zhang ◽  
Ruifeng Xu

Quantum-inspired language models have been introduced to Information Retrieval due to their transparency and interpretability. While exciting progress has been made, current studies mainly investigate the relationship between density matrices of different sentence subspaces of a semantic Hilbert space. The Hilbert space as a whole, which has a unique density matrix, remains largely unexplored. In this paper, we propose a novel Quantum Expectation Value based Language Model (QEV-LM). A unique shared density matrix is constructed for the semantic Hilbert space, and words and sentences are viewed as different observables in this quantum model. In this framework, a matching score describing the similarity between a question-answer pair is naturally explained as the quantum expectation value of a joint question-answer observable. In addition to its theoretical soundness, experimental results on the TREC-QA and WIKIQA datasets demonstrate the computational efficiency of the proposed model, with excellent performance and low time consumption.
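For intuition, the NumPy sketch below shows how a quantum expectation value tr(ρO) can serve as a matching score: a shared density matrix ρ is built as a mixture of word projectors, and a joint question-answer observable is built from sentence vectors. The particular constructions of ρ and the observable here are simplified illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def density_matrix(word_vecs, weights):
    """Shared density matrix: a weighted mixture of rank-1 word projectors."""
    vecs = word_vecs / np.linalg.norm(word_vecs, axis=1, keepdims=True)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # probabilities sum to 1
    rho = sum(w * np.outer(v, v) for w, v in zip(weights, vecs))
    return rho                                   # Hermitian, PSD, trace 1

def joint_observable(q_vec, a_vec):
    """A joint question-answer observable built from normalized sentence vectors."""
    q = q_vec / np.linalg.norm(q_vec)
    a = a_vec / np.linalg.norm(a_vec)
    return (np.outer(q, a) + np.outer(a, q)) / 2  # symmetrize -> Hermitian

def matching_score(rho, observable):
    """Quantum expectation value <O> = tr(rho @ O), used as the QA matching score."""
    return float(np.trace(rho @ observable))
```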


2021 ◽  
Author(s):  
Yoojoong Kim ◽  
Jeong Moon Lee ◽  
Moon Joung Jang ◽  
Yun Jin Yum ◽  
Jong-Ho Kim ◽  
...  

BACKGROUND With advances in deep learning and natural language processing, analyzing medical texts is becoming increasingly important. Nonetheless, despite the importance of medical texts, no study on medical-specific language models has yet been conducted. OBJECTIVE Korean medical text is highly difficult to analyze because of the agglutinative characteristics of the language and the complex terminology of the medical domain. To address this problem, we collected a Korean medical corpus and used it to train language models. METHODS In this paper, we present a Korean medical language model based on deep learning natural language processing. The proposed model was trained with the pre-training framework of BERT for the medical context, starting from a state-of-the-art Korean language model. RESULTS After pre-training, the proposed method showed accuracy increases of 0.147 and 0.148 for the masked language model with next sentence prediction. In the intrinsic evaluation, the next sentence prediction accuracy improved by 0.258, which is a remarkable enhancement. In addition, the extrinsic evaluation on Korean medical semantic textual similarity data showed a 0.046 increase in the Pearson correlation. CONCLUSIONS The results demonstrate the superiority of the proposed model for Korean medical natural language processing. We expect that our proposed model can be extended for application to various languages and domains.
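As a rough illustration of continued domain pre-training (masked language modeling on a medical corpus starting from an existing Korean checkpoint), here is a Hugging Face sketch. The checkpoint name, corpus path, and hyperparameters are placeholders, and next sentence prediction is omitted for brevity, so this is a simplified stand-in for the paper's full pre-training setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "klue/bert-base"  # placeholder Korean BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Placeholder corpus: one Korean medical sentence per line.
corpus = load_dataset("text", data_files={"train": "korean_medical_corpus.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"],
)

# Randomly mask 15% of tokens for the masked language modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="kr-medical-bert", num_train_epochs=3,
                         per_device_train_batch_size=32)
trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"],
                  data_collator=collator)
trainer.train()
```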


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Wen-Ting Li ◽  
Shang-Bing Gao ◽  
Jun-Qiang Zhang ◽  
Shu-Xing Guo

Recent advances in pretrained language models have achieved state-of-the-art results on various natural language processing tasks. However, these huge pretrained language models are difficult to use in practical applications, such as on mobile and embedded devices. Moreover, there is no pretrained language model for the chemical industry. In this work, we propose a method to pretrain a smaller language representation model for the chemical industry domain. First, a large collection of chemical industry texts is used as the pretraining corpus, and a nontraditional knowledge distillation technique is used to build a simplified model that learns the knowledge in the BERT model. By learning the embedding layer, the intermediate layer, and the prediction layer at different stages, the simplified model learns not only the probability distribution of the prediction layer but also the embedding and intermediate representations, thereby acquiring the learning ability of the BERT model. Finally, the model is applied to downstream tasks. Experiments show that, compared with current BERT distillation methods, our method makes full use of the rich feature knowledge in the intermediate layers of the teacher model while building a student model based on the BiLSTM architecture, which effectively addresses the excessive size of traditional Transformer-based student models and improves the accuracy of the language model in the chemical domain.
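To make the staged objective concrete, below is a schematic PyTorch sketch of a BiLSTM student distilled from a BERT teacher with three stage-specific losses (embedding layer, intermediate layer, prediction layer). The projection layers, pooling, and temperature are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMStudent(nn.Module):
    """Small BiLSTM student exposing embedding, middle-layer, and prediction outputs."""
    def __init__(self, vocab_size, emb_dim=128, hidden=256, num_labels=2, teacher_dim=768):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)
        # Projections so student representations can be compared with the teacher's.
        self.emb_proj = nn.Linear(emb_dim, teacher_dim)
        self.hid_proj = nn.Linear(2 * hidden, teacher_dim)

    def forward(self, input_ids):
        e = self.emb(input_ids)                  # (B, T, emb_dim)
        h, _ = self.lstm(e)                      # (B, T, 2*hidden)
        logits = self.classifier(h.mean(dim=1))  # simple mean pooling
        return self.emb_proj(e), self.hid_proj(h), logits

def staged_distillation_loss(stage, student_out, teacher_emb, teacher_hid,
                             teacher_logits, temperature=2.0):
    """Stage 1: match embeddings; stage 2: match middle layer; stage 3: match predictions."""
    s_emb, s_hid, s_logits = student_out
    if stage == 1:
        return F.mse_loss(s_emb, teacher_emb)
    if stage == 2:
        return F.mse_loss(s_hid, teacher_hid)
    return F.kl_div(F.log_softmax(s_logits / temperature, dim=-1),
                    F.softmax(teacher_logits / temperature, dim=-1),
                    reduction="batchmean") * temperature ** 2
```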


2014 ◽  
Vol 6 (1) ◽  
pp. 1032-1035 ◽  
Author(s):  
Ramzi Suleiman

The research on quasi-luminal neutrinos has sparked several experimental studies testing the "speed of light limit" hypothesis. To date, the overall evidence favors the "null" hypothesis, stating that there is no significant difference between the observed velocities of light and neutrinos. Despite numerous theoretical models proposed to explain the neutrinos' behavior, no attempt has been undertaken to predict the experimentally produced results. This paper presents a simple novel extension of Newton's mechanics to the domain of relativistic velocities. For a typical neutrino-velocity experiment, the proposed model is utilized to derive a general expression for . Comparison of the model's predictions with the results of six neutrino-velocity experiments, conducted by five collaborations, reveals that the model predicts all the reported results with striking accuracy. Because the direction of the neutrino flight matters in the proposed model, its success in accounting for all the tested data indicates a complete collapse of the Lorentz symmetry principle in situations involving quasi-luminal particles moving in two opposite directions. This conclusion is supported by previous findings showing that a Sagnac effect identical to the one documented for radial motion also occurs in linear motion.


2020 ◽  
Vol 23 (4) ◽  
pp. 274-284 ◽  
Author(s):  
Jingang Che ◽  
Lei Chen ◽  
Zi-Han Guo ◽  
Shuaiqun Wang ◽  
Aorigele

Background: Identification of drug-target interactions is essential in drug discovery and is beneficial for predicting unexpected therapeutic or adverse side effects of drugs. To date, several computational methods have been proposed to predict drug-target interactions because they are fast and low-cost compared with traditional wet-lab experiments. Methods: In this study, we investigated this problem in a different way. According to KEGG, drugs were classified into several groups based on their target proteins. A multi-label classification model was presented to assign drugs to the correct target groups. To make full use of the known drug properties, five networks were constructed, each of which represented drug associations for one property. A powerful network embedding method, Mashup, was adopted to extract drug features from the above-mentioned networks, based on which several machine learning algorithms, including the RAndom k-labELsets (RAKEL) algorithm, the Label Powerset (LP) algorithm, and the Support Vector Machine (SVM), were used to build the classification model. Results and Conclusion: Tenfold cross-validation yielded an accuracy of 0.839, an exact match ratio of 0.816, and a Hamming loss of 0.037, indicating good performance of the model. The contribution of each network was also analyzed. Furthermore, the model built on multiple networks was found to be superior to the one built on a single network and to the classic model, indicating the superiority of the proposed approach.
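The multi-label evaluation setup can be sketched with scikit-learn as below. This uses simple one-vs-rest SVMs over precomputed network-embedding features as a stand-in for the RAKEL/Label Powerset pipeline described in the paper, and the feature and label files are hypothetical placeholders for a Mashup-style embedding step and the KEGG target-group labels.

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss
from sklearn.model_selection import KFold
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# X: (n_drugs, n_features) Mashup-style embeddings fused from the five drug networks.
# Y: (n_drugs, n_groups) binary indicators of KEGG target groups (multi-label).
X = np.load("drug_mashup_features.npy")       # placeholder paths
Y = np.load("drug_target_group_labels.npy")

clf = OneVsRestClassifier(SVC(kernel="rbf"))  # one binary SVM per target group

ham, exact = [], []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    clf.fit(X[train_idx], Y[train_idx])
    pred = clf.predict(X[test_idx])
    ham.append(hamming_loss(Y[test_idx], pred))
    exact.append(accuracy_score(Y[test_idx], pred))  # subset accuracy = exact match

print(f"Hamming loss: {np.mean(ham):.3f}, exact match: {np.mean(exact):.3f}")
```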


Author(s):  
Xuhui Hu

This chapter summarizes the major points developed throughout the book. The theoretical points of the syntax of events proposed in Chapter 2 are listed. The conclusions on the syntax of English and Chinese resultatives, applicative constructions in various languages, and Chinese non-canonical object and motion event constructions are presented, together with the implications for the verb-framed/satellite-framed typology. The explanation of diachronic change and cross-linguistic variation is summarized, including the historical development of Chinese resultatives and the variation in resultatives between Chinese and English on the one hand, and between English and Romance on the other. The Synchronic Grammaticalisation Hypothesis is also summarized.


Author(s):  
Zihang Wei ◽  
Yunlong Zhang ◽  
Xiaoyu Guo ◽  
Xin Zhang

Through movement capacity is an essential factor used to reflect intersection performance, especially for signalized intersections, where a large proportion of vehicle demand is making through movements. Generally, left-turn spillback is considered a key factor affecting through movement capacity, and blockage of the left-turn bay is known to decrease left-turn capacity. Previous studies have focused primarily on estimating through movement capacity under a lagging protected-only left-turn (lagging POLT) signal setting, as left-turn spillback is more likely to happen under such a condition. However, previous studies contained simplifying assumptions (e.g., omitting spillback) or were dedicated to one specific signal setting. Therefore, in this study, through movement capacity models based on probabilistic modeling of spillback and blockage scenarios are established under four different signal settings (i.e., leading protected-only left-turn [leading POLT], lagging POLT, protected plus permitted left-turn, and permitted plus protected left-turn). Through microscopic simulations, the proposed models are validated and compared with existing capacity models and the one in the Highway Capacity Manual (HCM). The results of the comparisons demonstrate that the proposed models achieve significant advantages over all the other models and obtain high accuracy in all signal settings. Each proposed model for a given signal setting maintains consistent accuracy across various left-turn bay lengths. The proposed models have the potential to serve as useful tools for practicing transportation engineers when determining the appropriate length of a left-turn bay with consideration of spillback and blockage, and an adequate cycle length for a given bay length.
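In spirit, such models express the expected through capacity as a probability-weighted sum over blockage and spillback scenarios; a generic, hypothetical form (not the paper's exact derivation) is

$$
c_{\mathrm{TH}} \;=\; \sum_{s} P(s)\, c_{\mathrm{TH} \mid s},
\qquad
P(\text{spillback}) \;=\; P\!\left(N_{\mathrm{LT}} > n_{\mathrm{bay}}\right),
$$

where $s$ ranges over the blockage/spillback scenarios in a signal cycle, $c_{\mathrm{TH}\mid s}$ is the through capacity under scenario $s$, $N_{\mathrm{LT}}$ is the number of left-turn arrivals per cycle, and $n_{\mathrm{bay}}$ is the number of vehicles the left-turn bay can store.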


2021 ◽  
Vol 15 (6) ◽  
pp. 1-22
Author(s):  
Yashen Wang ◽  
Huanhuan Zhang ◽  
Zhirun Liu ◽  
Qiang Zhou

For guiding natural language generation, many semantic-driven methods have been proposed. While clearly improving the performance of the end-to-end training task, these existing semantic-driven methods still have clear limitations: (i) they only utilize shallow semantic signals (e.g., from topic models) with only a single stochastic hidden layer in their data generation process, which suffer easily from noise (especially for short texts) and lack interpretability; (ii) they ignore sentence order and document context, as they treat each document as a bag of sentences, and fail to capture the long-distance dependencies and global semantic meaning of a document. To overcome these problems, we propose a novel semantic-driven language modeling framework, which learns a Hierarchical Language Model and a Recurrent Conceptualization-enhanced Gamma Belief Network simultaneously. For scalable inference, we develop auto-encoding Variational Recurrent Inference, allowing efficient end-to-end training while simultaneously capturing global semantics from a text corpus. In particular, this article introduces concept information derived from the high-quality lexical knowledge graph Probase, which lends strong interpretability and anti-noise capability to the proposed model. Moreover, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence concept dependencies. Experiments conducted on several NLP tasks validate the superiority of the proposed approach, which can effectively infer meaningful hierarchical concept structures of documents and hierarchical multi-scale structures of sequences, even compared with the latest state-of-the-art Transformer-based models.
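As a schematic stand-in (not the authors' full recurrent Gamma Belief Network), the PyTorch sketch below shows the general idea of conditioning a recurrent language model on a document-level concept vector, so that each generation step is informed by both local word dependencies and global semantics. All layer sizes and the concatenation scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConceptConditionedLM(nn.Module):
    """GRU language model conditioned on a document-level concept vector."""
    def __init__(self, vocab_size, emb_dim=256, concept_dim=64, hidden=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # The concept vector (e.g., inferred topic/concept weights) is appended
        # to every word embedding so global semantics influence each step.
        self.rnn = nn.GRU(emb_dim + concept_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, input_ids, concept_vec):
        e = self.emb(input_ids)                                  # (B, T, emb_dim)
        c = concept_vec.unsqueeze(1).expand(-1, e.size(1), -1)   # (B, T, concept_dim)
        h, _ = self.rnn(torch.cat([e, c], dim=-1))
        return self.out(h)                                       # next-word logits
```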


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1815
Author(s):  
Diego I. Gallardo ◽  
Mário de Castro ◽  
Héctor W. Gómez

A cure rate model under the competing risks setup is proposed. For the number of competing causes related to the occurrence of the event of interest, we posit the one-parameter Bell distribution, which accommodates overdispersed counts. The model is parameterized in the cure rate, which is linked to covariates. Parameter estimation is based on the maximum likelihood method. Estimates are computed via the EM algorithm. In order to compare different models, a selection criterion for non-nested models is implemented. Results from simulation studies indicate that the estimation method and the model selection criterion have a good performance. A dataset on melanoma is analyzed using the proposed model as well as some models from the literature.
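For orientation, here is a sketch of the standard promotion-time cure rate construction with a Bell(θ)-distributed number of competing causes; the paper's exact parameterization may differ. With $S(t)$ the latent survival function of each cause, the population survival function is the probability generating function of the Bell distribution evaluated at $S(t)$:

$$
S_{\mathrm{pop}}(t) \;=\; \exp\!\left\{ e^{\theta S(t)} - e^{\theta} \right\},
\qquad
p_0 \;=\; \lim_{t \to \infty} S_{\mathrm{pop}}(t) \;=\; \exp\!\left\{ 1 - e^{\theta} \right\},
$$

so the model can be parameterized directly in the cure rate via $\theta = \log\{1 - \log p_0\}$, with $p_0$ linked to covariates through, e.g., a logistic link.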


Author(s):  
Junshu Wang ◽  
Guoming Zhang ◽  
Wei Wang ◽  
Ka Zhang ◽  
Yehua Sheng

With the rapid development of hospital informatization and Internet medical services in recent years, most hospitals have launched online appointment registration systems to remove patient queues and improve the efficiency of medical services. However, most patients lack professional medical knowledge and have no idea how to choose a department when registering. To guide patients in seeking medical care and registering effectively, we propose CIDRS, an intelligent self-diagnosis and department recommendation framework based on Chinese medical Bidirectional Encoder Representations from Transformers (BERT) in a cloud computing environment. We also established a Chinese BERT model (CHMBERT) trained on a large-scale Chinese medical text corpus. This model was used to optimize the self-diagnosis and department recommendation tasks. To address the limited computing power of terminals, we deployed the proposed framework in a cloud computing environment based on container and micro-service technologies. Real-world medical datasets from hospitals were used in the experiments, and the results showed that the proposed model was superior to traditional deep learning models and other pre-trained language models in terms of performance.
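As an illustration of the downstream department recommendation step, here is a minimal Hugging Face fine-tuning sketch that treats department choice as text classification over patient-described symptoms. The checkpoint name, dataset files, column names, and label count are placeholders rather than the CHMBERT setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "bert-base-chinese"   # placeholder for the medical-domain checkpoint
num_departments = 30         # placeholder label count

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=num_departments)

# Placeholder CSV files with columns: "symptom_text", "label" (department id).
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
data = data.map(
    lambda b: tokenizer(b["symptom_text"], truncation=True,
                        padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dept-recommender", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()
```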

