Keyphrase Generation with Word Attention

Author(s):  
Hai Huang ◽  
Tianshuo Huang ◽  
Longxuan Ma ◽  
Lei Zhang
Animals ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 1009
Author(s):  
Javiera Lagos ◽  
Manuel Rojas ◽  
Joao B. Rodrigues ◽  
Tamara Tadich

Mules are essential for pack work in mountainous areas, yet research on this species is scarce. This study aims to assess soldiers' perceptions, attitudes, empathy, and pain perception regarding mules in order to understand the nature of the human–mule relationship. A survey with closed-ended questions, incorporating empathy and pain perception instruments, was administered and analyzed through correlations, while open-ended questions were analyzed through text mining. A total of 73 soldiers, spanning a wide range of ages and years of experience working with equids, were surveyed. Significant positive correlations were found between human empathy, animal empathy, and pain perception. Soldiers showed a preference for working with mules over donkeys and horses. Text mining revealed three clusters associated with mules' nutritional, environmental, and health needs; in the same vein, relevant relations were found between the word "attention" and "load", "food", and "harness". When asked what mules signify to them, two clusters emerged, associated with mules' working capacity and their role in the army, with relevant relations between the terms "mountain", "support", and "logistics", and between "intelligent" and "noble". To meet mules' behavioral and emotional needs, future training strategies should include behavior and welfare concepts.


2020 ◽  
Vol 34 (05) ◽  
pp. 8504-8511
Author(s):  
Arindam Mitra ◽  
Ishan Shrivastava ◽  
Chitta Baral

Natural Language Inference (NLI) plays an important role in many natural language processing tasks such as question answering. However, existing NLI modules trained on existing NLI datasets have several drawbacks. For example, they do not capture the notions of entity and role well and often end up making mistakes such as inferring "Peter signed a deal" from "John signed a deal". As part of this work, we have developed two datasets that help mitigate such issues and make systems better at understanding the notions of "entities" and "roles". After training the existing models on the new datasets, we observe that they still do not perform well on one of the new benchmarks. We then propose a modification to the "word-to-word" attention function, which has been uniformly reused across several popular NLI architectures. The resulting models perform as well as their unmodified counterparts on the existing benchmarks and perform significantly better on the new benchmarks that emphasize "roles" and "entities".
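The abstract does not spell out the proposed modification, but the "word-to-word" attention it refers to is the standard cross-attention between premise and hypothesis tokens reused in popular NLI architectures such as ESIM. The sketch below (PyTorch; tensor names and shapes are assumptions, not the authors' code) shows only that baseline attention, which the paper's change would replace.

```python
# Minimal sketch of standard word-to-word (soft-alignment) attention in NLI.
import torch
import torch.nn.functional as F

def word_to_word_attention(premise, hypothesis):
    """premise: (B, Lp, d); hypothesis: (B, Lh, d) contextual embeddings."""
    # Unnormalized alignment scores e_ij = p_i . h_j -> (B, Lp, Lh)
    scores = torch.bmm(premise, hypothesis.transpose(1, 2))
    # Each premise token re-expressed as a softmax-weighted mix of hypothesis tokens
    p_aligned = torch.bmm(F.softmax(scores, dim=-1), hypothesis)              # (B, Lp, d)
    # Each hypothesis token re-expressed as a mix of premise tokens
    h_aligned = torch.bmm(F.softmax(scores, dim=1).transpose(1, 2), premise)  # (B, Lh, d)
    return p_aligned, h_aligned

# Toy usage with random contextual embeddings
p = torch.randn(2, 7, 64)   # 2 premises, 7 tokens, 64-dim
h = torch.randn(2, 5, 64)   # 2 hypotheses, 5 tokens
ap, ah = word_to_word_attention(p, h)
print(ap.shape, ah.shape)   # torch.Size([2, 7, 64]) torch.Size([2, 5, 64])
```

Because the scores depend only on pairwise token similarity, nothing in this formulation ties an attended token to its role or identity, which is the weakness the new benchmarks are designed to expose.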


2021 ◽  
Author(s):  
Kaixin Ma ◽  
Meiling Liu ◽  
Tiejun Zhao ◽  
Jiyun Zhou ◽  
Yang Yu

2019 ◽  
Vol 23 (1) ◽  
pp. 267-287
Author(s):  
Chengzhe Yuan ◽  
Zhifeng Bao ◽  
Mark Sanderson ◽  
Yong Tang

2020 ◽  
Vol 18 (1) ◽  
pp. 51
Author(s):  
Xu Zhang ◽  
Gang Liu ◽  
Yuanfeng Yang ◽  
Zhaobin Liu ◽  
Jinxiang Li ◽  
...  

Author(s):  
Thanh Thi Ha ◽  
Atsuhiro Takasu ◽  
Thanh Chinh Nguyen ◽  
Kiem Hieu Nguyen ◽  
Van Nha Nguyen ◽  
...  

Answer selection is an important task in Community Question Answering (CQA). In recent years, attention-based neural networks have been extensively studied for various natural language processing problems, including question answering. This paper explores matchLSTM for answer selection in CQA. The lexical gap in CQA is particularly challenging, as questions and answers typically contain multiple sentences, irrelevant information, and noisy expressions. In our investigation, the word-by-word attention of the original model does not work well on social question–answer pairs. We propose integrating supervised attention into matchLSTM. Specifically, we leverage lexical-semantic information from external resources to guide the learning of attention weights for question–answer pairs. The proposed model learns more meaningful attention, allowing it to outperform the basic model. Our performance is among the top on the SemEval datasets.
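As a rough illustration of the idea, the sketch below (PyTorch) shows word-by-word attention whose weights are additionally pulled toward an externally derived lexical-semantic alignment. The guidance matrix `guide`, the KL-divergence term, and the weight `lambda_att` are assumptions; the abstract does not specify the exact guidance signal or loss.

```python
# Hedged sketch of "supervised attention": the attention distribution is
# regularized toward an external lexical-semantic alignment via an auxiliary loss.
import torch
import torch.nn.functional as F

def supervised_attention(question, answer, guide, lambda_att=0.5):
    """question: (B, Lq, d); answer: (B, La, d);
    guide: (B, La, Lq) external alignment, rows summing to 1 (assumed)."""
    # Word-by-word attention of each answer token over the question tokens
    scores = torch.bmm(answer, question.transpose(1, 2))    # (B, La, Lq)
    attn = F.softmax(scores, dim=-1)                         # model attention
    context = torch.bmm(attn, question)                      # (B, La, d) attended question
    # Auxiliary supervision: push model attention toward the external alignment
    att_loss = F.kl_div(attn.clamp_min(1e-9).log(), guide, reduction="batchmean")
    return context, lambda_att * att_loss                    # add att_loss to the task loss

# Toy usage with random embeddings and a stand-in guidance matrix
q = torch.randn(2, 6, 64)
a = torch.randn(2, 9, 64)
guide = F.softmax(torch.randn(2, 9, 6), dim=-1)
ctx, aux = supervised_attention(q, a, guide)
print(ctx.shape, float(aux))
```

In training, the auxiliary term would simply be added to the answer-selection loss, so noisy social-media pairs still receive an alignment signal even when the task loss alone is uninformative.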


2020 ◽  
Vol 18 (1) ◽  
pp. 51
Author(s):  
Liang Zhang ◽  
Zhaobin Liu ◽  
Jinxiang Li ◽  
Gang Liu ◽  
Yuanfeng Yang ◽  
...  

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Guofeng Ren ◽  
Guicheng Shao ◽  
Jianmei Fu

In recent years, with the development of artificial intelligence (AI) and human–machine interaction technology, speech recognition and production have had to keep pace, which requires improving recognition accuracy by adding novel features, fusing features, and refining recognition methods. Aiming to develop a novel recognition feature and apply it to speech recognition, this paper presents a new method for articulatory-to-acoustic conversion. In this study, we converted articulatory features (i.e., velocities of the tongue and motion of the lips) into acoustic features (i.e., the second formant and Mel-cepstra). Considering the graphical representation of the articulators' motion, the study combined Bidirectional Long Short-Term Memory (BiLSTM) with a convolutional neural network (CNN) and adopted the idea of word attention in Mandarin to extract semantic features. We used the electromagnetic articulography (EMA) database designed by Taiyuan University of Technology, which contains 299 disyllables and sentences of Mandarin from ten speakers, and extracted 8-dimensional articulatory features and a 1-dimensional semantic feature from the word-attention layer; we then trained on 200 samples and tested on 99 samples for articulatory-to-acoustic conversion. Finally, Root Mean Square Error (RMSE), Mean Mel-Cepstral Distortion (MMCD), and the correlation coefficient were used to evaluate the conversion and to compare it with a Gaussian Mixture Model (GMM) and a BiLSTM recurrent neural network (BiLSTM-RNN). The results show that the MMCD of the Mel-Frequency Cepstral Coefficients (MFCC) was 1.467 dB and the RMSE of F2 was 22.10 Hz. These results can be used in feature fusion and speech recognition to improve recognition accuracy.
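For concreteness, the sketch below (PyTorch) outlines this kind of BiLSTM + CNN regression model, with a simple frame-level attention layer standing in for the paper's word-attention mechanism; layer sizes, kernel width, and the 13-dimensional acoustic target (F2 plus Mel-cepstra) are assumptions, not the authors' configuration.

```python
# Hedged sketch of a BiLSTM + CNN articulatory-to-acoustic regression model.
import torch
import torch.nn as nn

class ArticulatoryToAcoustic(nn.Module):
    """Maps 8-dim frame-level articulatory (EMA) features to acoustic features."""
    def __init__(self, in_dim=8, hidden=128, out_dim=13):
        super().__init__()
        # 1-D convolution over time captures local articulator dynamics
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=5, padding=2)
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Scalar weight per frame: a stand-in for the word-attention layer
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):                        # x: (B, T, 8)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.bilstm(h)                    # (B, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # (B, T, 1) frame weights
        return self.out(h * w)                   # frame-wise acoustic prediction

# Toy usage: 2 utterances, 100 frames, 8 articulatory channels
model = ArticulatoryToAcoustic()
pred = model(torch.randn(2, 100, 8))
print(pred.shape)                                    # torch.Size([2, 100, 13])
loss = nn.MSELoss()(pred, torch.randn(2, 100, 13))   # RMSE/MMCD computed for evaluation
```

The model is trained with a frame-wise regression loss; RMSE, MMCD, and the correlation coefficient reported in the abstract would be computed offline on the predicted trajectories.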

