Evaluating Commonsense in Pre-Trained Language Models

2020 ◽  
Vol 34 (05) ◽  
pp. 9733-9740 ◽  
Author(s):  
Xuhui Zhou ◽  
Yue Zhang ◽  
Leyang Cui ◽  
Dandan Huang

Contextualized representations trained over large raw text data have yielded remarkable improvements on NLP tasks including question answering and reading comprehension. Prior work has shown that such representations contain syntactic, semantic, and word-sense knowledge, which helps explain why they benefit these tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bi-directional context and larger training sets are bonuses. We additionally find that current models do poorly on tasks that require more inference steps. Finally, we test the robustness of the models by constructing dual test cases, which are correlated so that a correct prediction on one sample should lead to a correct prediction on the other. Interestingly, the models show confusion on these test cases, suggesting that they learn commonsense at the surface rather than at a deep level. We publicly release a test set, named CATs, for future research.
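
The dual-test-case robustness check described above can be sketched in a few lines; the pairing and prediction values here are illustrative stand-ins, not data from the paper. A robust model should get both members of a dual pair right together, so pair-level consistency is a stricter metric than per-sample accuracy.

```python
def pair_consistency(preds, golds, pairs):
    """Fraction of dual pairs where the model is correct on BOTH members.

    Each pair (i, j) links a test case to its dual: both answers hinge on
    the same piece of commonsense, so a model that truly holds that
    knowledge should not get one right and the other wrong.
    """
    consistent = 0
    for i, j in pairs:
        if preds[i] == golds[i] and preds[j] == golds[j]:
            consistent += 1
    return consistent / len(pairs)

preds = ["A", "B", "A", "A"]
golds = ["A", "B", "B", "A"]
pairs = [(0, 1), (2, 3)]  # (sample index, index of its dual)
print(pair_consistency(preds, golds, pairs))  # 0.5: only the first pair is fully correct
```

Per-sample accuracy here is 3/4, but consistency is only 1/2, which is exactly the kind of gap the paper reports as evidence of surface-level commonsense.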

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Changchang Zeng ◽  
Shaobo Li

Machine reading comprehension (MRC) is a challenging natural language processing (NLP) task with wide application potential in fields such as question answering robots and human-computer interaction in mobile virtual reality systems. Recently, the emergence of pretrained models (PTMs) has brought this research field into a new era, in which the training objective plays a key role. The masked language model (MLM) is a self-supervised training objective widely used in various PTMs. As training objectives have developed, many variants of the MLM have been proposed, such as whole word masking, entity masking, phrase masking, and span masking. Different MLMs mask tokens of different lengths. Similarly, different machine reading comprehension tasks have answers of different lengths: an answer is often a word, a phrase, or a sentence. Thus, in MRC tasks with different answer lengths, whether the masking length of the MLM is related to performance is a question worth studying. If this hypothesis is true, it can guide how to pretrain an MLM with a masking length distribution suited to a given MRC task. In this paper, we try to uncover how much of the MLM's success on machine reading comprehension tasks comes from the correlation between the masking length distribution and the answer lengths in the MRC dataset. To address this question, (1) we propose four MRC tasks with different answer length distributions, namely the short span extraction, long span extraction, short multiple-choice cloze, and long multiple-choice cloze tasks; (2) we create four Chinese MRC datasets for these tasks; (3) we pretrain four masked language models according to the answer length distributions of these datasets; and (4) we conduct ablation experiments on the datasets to verify our hypothesis. The experimental results support the hypothesis: on all four machine reading comprehension datasets, the model whose masking length distribution correlates with the answer length distribution outperforms the model without this correlation.
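
The masking variants the abstract lists differ mainly in the length of the contiguous span that gets masked. A minimal sketch (not the authors' pretraining code) shows how a single helper covers the whole family: length 1 recovers ordinary token masking, and choosing the length to match a word, entity, or phrase boundary gives whole-word, entity, or phrase masking.

```python
def mask_span(tokens, start, length, mask_token="[MASK]"):
    """Replace a contiguous span of tokens with mask tokens (span masking).

    length=1 reduces to ordinary single-token masking; aligning `start` and
    `length` with word/entity/phrase boundaries yields the other variants.
    """
    out = list(tokens)
    for k in range(start, min(start + length, len(out))):
        out[k] = mask_token
    return out

tokens = "machine reading comprehension is challenging".split()
print(mask_span(tokens, 1, 2))
# ['machine', '[MASK]', '[MASK]', 'is', 'challenging']
```

The paper's hypothesis then amounts to matching the distribution of `length` used in pretraining to the answer-length distribution of the downstream MRC dataset.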


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2656
Author(s):  
Ayato Kuwana ◽  
Atsushi Oba ◽  
Ranto Sawai ◽  
Incheon Paik

In recent years, automatic ontology generation has received significant attention in information science as a means of systemizing vast amounts of online data. In our initial attempt at ontology generation with a neural network, we proposed a recurrent neural network-based method. However, developments in natural language processing (NLP) make it possible to update this architecture; in particular, transfer learning from language models trained on large, unlabeled corpora has yielded a breakthrough in NLP. Inspired by these achievements, we propose a novel workflow for ontology generation comprising two-stage learning. Our results show that our best method improved accuracy by over 12.5%. As an application example, we applied our model to the Stanford Question Answering Dataset to demonstrate ontology generation on real-world data. The results show that our model can generate a good ontology, with some exceptions in this setting, indicating directions for future research to improve its quality.


2019 ◽  
Author(s):  
Amanda Goodwin ◽  
Yaacov Petscher ◽  
Jamie Tock

Various models have highlighted the complexity of language. Building on foundational ideas regarding three key aspects of language, our study contributes to the literature by 1) exploring broader conceptions of morphology, vocabulary, and syntax, 2) operationalizing this theoretical model into a gamified, standardized, computer-adaptive assessment of language for fifth to eighth grade students entitled Monster, PI, and 3) uncovering further evidence regarding the relationship between language and standardized reading comprehension via this assessment. Multiple-group item response theory (IRT) analyses across grades show that morphology was best fit by a bifactor model of task-specific factors along with a global factor related to each skill. Vocabulary was best fit by a bifactor model that identifies performance overall and on specific words. Syntax, though, was best fit by a unidimensional model. Next, Monster, PI produced reliable scores, suggesting language can be assessed efficiently and precisely for students via this model. Lastly, performance on Monster, PI explained more than 50% of the variance in standardized reading, suggesting that operationalizing language via Monster, PI can provide meaningful understandings of the relationship between language and reading comprehension. Specifically, considering just a subset of a construct, like identification of units of meaning, explained significantly less variance in reading comprehension. This highlights the importance of considering these broader constructs. Implications indicate that future work should consider a model of language where component areas are considered broadly and contributions to reading comprehension are explored via general performance on components as well as skill-level performance.


Author(s):  
Zahra Mousavi ◽  
Heshaam Faili

Nowadays, wordnets are extensively used as a major resource in natural language processing and information retrieval tasks. Therefore, the accuracy of a wordnet has a direct influence on the performance of the applications that use it. This paper presents a fully automated method for extending a previously developed Persian wordnet to cover more comprehensive and accurate verbal entries. First, using a bilingual dictionary, some Persian verbs are linked to Princeton WordNet (PWN) synsets. We propose a feature set related to the semantic behavior of compound verbs, which constitute the majority of Persian verbs. This feature set is employed in a supervised classification system to select the proper links for inclusion in the wordnet. We also benefit from a pre-existing Persian wordnet, FarsNet, and a similarity-based method to produce a training set. The result is the largest automatically developed Persian wordnet, with more than 27,000 words, 28,000 PWN synsets, and 67,000 word-sense pairs, substantially outperforming the previous Persian wordnet with about 16,000 words, 22,000 PWN synsets, and 38,000 word-sense pairs.
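
The similarity-based labelling step can be illustrated with a toy sketch; the paper does not specify its similarity measure, so the token-level Jaccard score, the threshold, and the English example glosses below are all assumptions for illustration only.

```python
def jaccard(a, b):
    """Token-level Jaccard similarity between two glosses (a stand-in
    for whatever similarity measure the real pipeline uses)."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def label_links(candidates, reference, threshold=0.5):
    """Hypothetical similarity-based training-set creation: a candidate
    (verb, gloss) link counts as a positive example when its gloss is
    close enough to that verb's gloss in the pre-existing wordnet."""
    labels = {}
    for verb, gloss in candidates:
        ref_gloss = reference.get(verb, "")
        labels[(verb, gloss)] = jaccard(gloss, ref_gloss) >= threshold
    return labels

reference = {"run": "move fast on foot"}          # e.g. from FarsNet
candidates = [("run", "move fast on foot quickly"),
              ("run", "manage a business")]        # from the bilingual dictionary
print(label_links(candidates, reference))
```

Positives and negatives produced this way can then train the supervised classifier that decides which dictionary links enter the wordnet.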


2020 ◽  
Vol 34 (05) ◽  
pp. 7700-7707
Author(s):  
G P Shrivatsa Bhargav ◽  
Michael Glass ◽  
Dinesh Garg ◽  
Shirish Shevade ◽  
Saswati Dana ◽  
...  

Research on the task of Reading Comprehension style Question Answering (RCQA) has gained momentum in recent years due to the emergence of human-annotated datasets and associated leaderboards, for example CoQA, HotpotQA, SQuAD, TriviaQA, etc. While the state of the art has advanced considerably, there is still ample opportunity to advance it further on some important variants of the RCQA task. In this paper, we propose a novel deep neural architecture, called TAP (Translucent Answer Prediction), to identify answers and evidence (in the form of supporting facts) in an RCQA task requiring multi-hop reasoning. TAP comprises two loosely coupled networks: the Local and Global Interaction eXtractor (LoGIX) and the Answer Predictor (AP). LoGIX predicts supporting facts, whereas the AP consumes these predicted supporting facts to predict the answer span. The novel design of LoGIX is inspired by two key design desiderata, local context and global interaction, that we identified by analyzing examples of the multi-hop RCQA task. The loose coupling between LoGIX and the AP reveals the set of sentences used by the AP in predicting an answer. Therefore, answer predictions of TAP can be interpreted in a translucent manner. TAP offers state-of-the-art performance on the HotpotQA (Yang et al. 2018) dataset, an apt dataset for the multi-hop RCQA task, as it occupied Rank-1 on its leaderboard (https://hotpotqa.github.io/) at the time of submission.
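
The "loose coupling" idea is easy to see as a two-stage pipeline; the keyword-overlap scoring and stub answer predictor below are simplistic stand-ins for the neural LoGIX and AP networks, sketched only to show why the design is translucent.

```python
def select_supporting_facts(question, sentences, top_k=2):
    """Stand-in for LoGIX: score each sentence by word overlap with the
    question and keep the top_k as predicted supporting facts."""
    q = set(question.lower().split())
    scored = sorted(sentences, key=lambda s: -len(q & set(s.lower().split())))
    return scored[:top_k]

def predict_answer(question, facts):
    """Stand-in for the AP: it only ever sees the selected facts."""
    return " ".join(facts)

sentences = [
    "Paris is the capital of France.",
    "France is in Europe.",
    "Cats sleep a lot.",
]
facts = select_supporting_facts("What is the capital of France?", sentences)
answer = predict_answer("What is the capital of France?", facts)
# `facts` is exactly the evidence the answer was based on, so the
# prediction can be inspected ("translucent") even though each stage
# is a black box internally.
```

Because the AP consumes only the explicit `facts` list, a wrong answer can be traced to either bad evidence selection or bad answer extraction, which is the interpretability benefit the abstract describes.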


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Haosen Liu ◽  
Youwei Wang ◽  
Xiabing Zhou ◽  
Zhengzheng Lou ◽  
Yangdong Ye

Purpose The failure diagnosis of railway signal equipment is a vital element in keeping the railway system operating safely. One of the main difficulties in signal equipment failure diagnosis is the uncertainty of the causality between the consequence and the cause of an accident. The traditional approach to this problem is based on Bayesian networks, which require rigid independence assumptions and prior probability knowledge while ignoring the semantic relationships involved in causality analysis. This paper aims to handle the uncertainty of causality in signal equipment failure diagnosis in a new way that emphasizes mining semantic relationships. Design/methodology/approach This study proposes a deterministic failure diagnosis (DFD) model based on a question answering system to implement railway signal equipment failure diagnosis. It includes a failure diagnosis module and a deterministic diagnosis module. In the failure diagnosis module, the question answering system is exploited to recognise the causes of failure consequences. The question answering system is composed of multi-layer neural networks: lower layers extract position and part-of-speech features from the text data, while higher layers acquire contextual and interactive features via Bi-LSTM and Match-LSTM, respectively, and subsequently generate the candidate failure cause set via the proposed enhanced boundary unit. In the second module, the study ranks the candidate failure cause set with a semantic matching mechanism (SMM), choosing the candidate with the highest semantic matching degree as the deterministic failure causative factor. Findings Experiments on a real dataset of railway signal equipment maintenance records show that the proposed DFD model can implement deterministic diagnosis of railway signal equipment failures. Compared with many existing methods, the model achieves the state of the art in natural language semantic understanding for the railway signal equipment diagnosis domain.
Originality/value This is the first time a question answering system has been used to perform signal equipment failure diagnosis, which makes failure diagnosis more intelligent than before. The EMU enables the DFD model to understand natural semantics in long-sequence context, and the SMM allows the DFD model to determine the definite failure cause in the failure diagnosis of railway signal equipment.
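
The deterministic diagnosis step (rank the candidate causes, keep the top-1 match) can be sketched as follows; the word-overlap scorer and the example failure strings are hypothetical stand-ins for the learned semantic matching mechanism.

```python
def rank_causes(consequence, candidates, score):
    """Score every candidate failure cause against the failure consequence
    and return the top-1 match as the deterministic causative factor."""
    return max(candidates, key=lambda c: score(consequence, c))

def overlap_score(a, b):
    """Stand-in for the learned semantic matcher: shared-word count."""
    return len(set(a.split()) & set(b.split()))

cause = rank_causes(
    "track circuit shows red band",                 # observed consequence
    ["track circuit relay failure",                 # candidate causes
     "signal lamp burned out"],
    overlap_score,
)
print(cause)  # track circuit relay failure
```

Swapping `overlap_score` for a neural matching degree changes the ranking quality but not the structure: the module always commits to a single, inspectable top-ranked cause.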


2020 ◽  
Author(s):  
Marie-Anne Xu ◽  
Rahul Khanna

Recent progress in machine reading comprehension and question answering has allowed machines to reach and even surpass human performance. However, the majority of benchmark questions have only one answer, and substantial testing on questions with multiple answers, or multi-span questions, has not yet been carried out. Thus, we introduce a newly compiled dataset consisting of questions with multiple answers that originate from previously existing datasets. In addition, we run BERT-based models pre-trained for question answering on our constructed dataset to evaluate their reading comprehension abilities. Among the three BERT-based models we ran, RoBERTa exhibits the highest consistent performance, regardless of size. We find that all our models perform similarly on this new multi-span dataset (21.492% F1) compared to the single-span source datasets (~33.36% F1). While the models tested on the source datasets were slightly fine-tuned, performance is similar enough to judge that task formulation does not drastically affect question-answering ability. Our evaluations indicate that these models are indeed capable of adjusting to answer questions that require multiple answers. We hope that our findings will assist future development in question answering and improve existing question-answering products and methods.
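
The F1 figures quoted above are token-level scores; a common way to extend the single-span SQuAD-style metric to multi-span answers is to pool the tokens of every span before computing precision and recall. This sketch assumes that pooling convention (the abstract does not spell out its exact metric) and skips the usual normalization of case and punctuation.

```python
from collections import Counter

def multi_span_f1(predicted, gold):
    """Token-level F1 over multiple answer spans, pooling the tokens of
    every span on each side into a single bag of tokens."""
    pred_tokens = Counter(t for span in predicted for t in span.split())
    gold_tokens = Counter(t for span in gold for t in span.split())
    common = sum((pred_tokens & gold_tokens).values())
    if common == 0:
        return 0.0
    precision = common / sum(pred_tokens.values())
    recall = common / sum(gold_tokens.values())
    return 2 * precision * recall / (precision + recall)

print(multi_span_f1(["new york", "boston"], ["new york city", "boston"]))
# precision 1.0, recall 0.75  ->  F1 ≈ 0.857
```

Pooling makes the metric insensitive to how tokens are split across spans, which is convenient when models merge or fragment the gold spans.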


Author(s):  
Hakan Cangır

The chapter starts with a definition and models of the mental dictionary. It then builds on bilingual lexical activation models and goes on to discuss formulaic language (collocations in particular). After explaining the basics of formulaic language processing, the author addresses Hoey's theory of lexical and collocational priming, which has its roots in cognitive linguistics and usage-based language models. Last but not least, some suggestions for future research are provided in an attempt to address the needs of the lexical research literature in the Turkish setting.


2020 ◽  
Vol 12 (18) ◽  
pp. 7783
Author(s):  
Yilong Hao ◽  
Kai Ding ◽  
Yaoyang Xu ◽  
Yuting Tang ◽  
Dong Liu ◽  
...  

As an environmentally friendly aquaculture and planting system, aquaponics has attracted attention in various fields, such as fisheries, agriculture, and ecology. Existing reviews have qualitatively described the development and challenges of aquaponics but lack supporting data. This study selected 513 related documents (2000–2019) from the Web of Science (WOS) database to mine and quantitatively analyze their text data. The keyword co-occurrence network shows that current aquaponics research mainly focuses on system components, wastewater treatment, nutrient management, and system production. Research areas show clear regional characteristics: China, the United States, and Europe are dedicated to the application of new technologies, the optimization of system production, and the exploration of multiple roles. At present, aquaponics development faces many pressures from management and markets. Future work requires more in-depth research into system construction, nutrient management, and microbial community structure to provide a theoretical basis. Moreover, identity construction within the conceptual framework of green infrastructure is a research direction worth exploring to address the low social recognition of aquaponics.
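
A keyword co-occurrence network of the kind used in this bibliometric study is built by counting how often two keywords are attached to the same document; the edge weights then drive the network layout and clustering. A minimal sketch with made-up keyword lists:

```python
from itertools import combinations
from collections import Counter

def cooccurrence(keyword_lists):
    """Count how often each pair of keywords appears on the same document.
    Sorting the pair makes (a, b) and (b, a) the same undirected edge."""
    counts = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

docs = [  # hypothetical per-document keyword lists, not the WOS data
    ["aquaponics", "wastewater treatment", "nutrient management"],
    ["aquaponics", "nutrient management"],
]
edges = cooccurrence(docs)
print(edges[("aquaponics", "nutrient management")])  # 2
```

Edges with high counts (here, aquaponics with nutrient management) mark the research foci that the study reports, such as wastewater treatment and nutrient management.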

