The Go Transformer: Natural Language Modeling for Game Play

Author(s):  
Matthew Ciolino ◽  
Josh Kalin ◽  
David Noever


2021 ◽
Vol 11 (7) ◽  
pp. 3095
Author(s):  
Suhyune Son ◽  
Seonjeong Hwang ◽  
Sohyeun Bae ◽  
Soo Jun Park ◽  
Jang-Hwan Choi

Multi-task learning (MTL) approaches are actively used for various natural language processing (NLP) tasks. The Multi-Task Deep Neural Network (MT-DNN) has contributed significantly to improving the performance of natural language understanding (NLU) tasks. However, one drawback is that confusion among the language representations of the various tasks arises during the training of the MT-DNN model. Inspired by the internal-transfer weighting of MTL in medical imaging, we introduce a Sequential and Intensive Weighted Language Modeling (SIWLM) scheme. SIWLM consists of two stages: (1) sequential weighted learning (SWL), which trains the model on all tasks sequentially and concentrically, and (2) intensive weighted learning (IWL), which lets the model focus on the central task. We apply this scheme to the MT-DNN model and call the resulting model MTDNN-SIWLM. Our model achieves higher performance than the existing reference algorithms on six of the eight GLUE benchmark tasks, and it outperforms MT-DNN by 0.77 points on average across all tasks. Finally, we conduct a thorough empirical investigation to determine the optimal weight for each GLUE task.
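As a rough illustration of the two-stage idea, the sketch below combines per-task losses under an emphasis schedule. It is a minimal sketch assuming hypothetical task names and weight values; the paper's actual weights are determined empirically per GLUE task.

```python
# Hypothetical sketch of a two-stage weighted multi-task loss in the
# spirit of SIWLM; the task names, weight values, and schedule are
# illustrative assumptions, not the paper's empirically chosen weights.
import torch

def weighted_mtl_loss(task_losses, weights):
    """Combine per-task losses using per-task weights."""
    return sum(weights[t] * loss for t, loss in task_losses.items())

def emphasis_weights(tasks, focus, focus_w=1.0, other_w=0.1):
    """Give one task a large weight and the rest a small one."""
    return {t: (focus_w if t == focus else other_w) for t in tasks}

tasks = ["mnli", "sst2", "qnli"]  # assumed task set, not the full GLUE suite
task_losses = {t: torch.rand(1, requires_grad=True) for t in tasks}  # stand-ins

# Stage 1 (SWL): rotate the emphasized task so the model learns every
# task sequentially while still training on all of them.
for focus in tasks:
    swl_loss = weighted_mtl_loss(task_losses, emphasis_weights(tasks, focus))

# Stage 2 (IWL): keep the emphasis fixed on one central task.
iwl_loss = weighted_mtl_loss(task_losses, emphasis_weights(tasks, "mnli"))
```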


Author(s):  
Nia Shafira ◽  
Etin Martiana ◽  
Rengga Asmara

As the main train service provider in Indonesia, PT Kereta Api Indonesia (PT KAI) has many customers who need information. To maintain customer loyalty, PT KAI must respond quickly and adopt technology that provides the best service to customers. Limited human resources prevent PT KAI from serving all customers simultaneously, so customers often have to wait for a response. Automatic messaging can help customer service respond quickly and simultaneously, at no cost and with access anytime and anywhere. This study proposes a new approach that uses a chatbot as a medium for conveying information automatically, quickly, and simultaneously. The chatbot is built on natural language modeling and uses cosine similarity to measure how closely a customer's input matches entries in the database. This work can help PT KAI's customer service staff answer customer needs automatically.
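A minimal sketch of the matching step described above, assuming a TF-IDF text representation (the abstract does not specify one) and invented placeholder questions rather than PT KAI data:

```python
# Match a customer question against a database of known questions by
# cosine similarity over TF-IDF vectors. The questions below are
# invented placeholders, not PT KAI data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

database_questions = [
    "What time does the train to Bandung depart?",
    "How do I refund a ticket?",
    "Where can I check my booking code?",
]

vectorizer = TfidfVectorizer()
db_vectors = vectorizer.fit_transform(database_questions)

def best_match(user_input):
    """Return the index and similarity score of the closest question."""
    query = vectorizer.transform([user_input])
    scores = cosine_similarity(query, db_vectors)[0]
    idx = scores.argmax()
    return int(idx), float(scores[idx])

idx, score = best_match("how to refund my train ticket")
print(database_questions[idx], score)
```

The chatbot would then return the answer stored alongside the best-matching question, possibly falling back to a human agent when the similarity score is below a threshold.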


2021 ◽  
Author(s):  
Abdul Wahab ◽  
Rafet Sifa

In this paper, we propose a new model named DIBERT, which stands for Dependency Injected Bidirectional Encoder Representations from Transformers. DIBERT is a variation of BERT that adds a third pre-training objective, Parent Prediction (PP), alongside Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). PP injects the syntactic structure of a dependency tree during pre-training, so DIBERT produces syntax-aware generic representations. We use the WikiText-103 benchmark dataset to pre-train both BERT-Base and DIBERT. After fine-tuning, we observe that DIBERT performs better than BERT-Base on various downstream tasks, including semantic similarity, natural language inference, and sentiment analysis.
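To make the Parent Prediction objective concrete, here is a minimal sketch of how per-token parent labels could be derived from a dependency parse and scored. The spaCy parser, the toy prediction head, and the loss below are assumptions for illustration, not the exact DIBERT architecture.

```python
# Derive parent labels from a dependency tree and score them with a
# toy head; an illustrative assumption, not the DIBERT implementation.
import spacy
import torch
import torch.nn as nn

nlp = spacy.load("en_core_web_sm")  # assumes the model is installed

def parent_indices(sentence):
    """For each token, return the index of its dependency-tree head."""
    doc = nlp(sentence)
    return [tok.head.i for tok in doc]

labels = torch.tensor(parent_indices("The cat sat on the mat"))

# Toy head: predict each token's parent position from its hidden state.
hidden_size, seq_len = 768, labels.numel()
hidden_states = torch.randn(seq_len, hidden_size)  # stand-in for encoder output
pp_head = nn.Linear(hidden_size, seq_len)          # scores over parent positions
pp_loss = nn.CrossEntropyLoss()(pp_head(hidden_states), labels)
```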


2021 ◽  
Vol 12 (5-2021) ◽  
pp. 57-66
Author(s):  
Dzavdet Sh. Suleimanov ◽  
Alexander Ya. Fridman ◽  
Rinat A. Gilmullin ◽  
Boris A. Kulik ◽  
...  

A system analysis of the problem of modeling natural language (NL) made it possible to identify the root cause of the low efficiency of modern tools for accumulating and processing knowledge in such languages: the difficulty of intellectualizing tools that are built on primitive artificial programming languages, which in practice represent a subset of inflectional analytical languages or artificial constructions based on them. To reduce the severity of this problem, we propose building NL modeling systems on top of technological tools for the verbalization and recognition of sense. These tools consist of semiotic models of NL lexical and grammatical means. This approach seems especially promising for agglutinative languages and is planned to be implemented using the Tatar language as an example.


2021 ◽  
Author(s):  
Sanjar Adilov

Generative neural networks have shown promising results in <i>de novo</i> drug design. Recent studies suggest that an efficient way to produce novel molecules matching target properties is to model SMILES sequences with deep learning, much as language modeling is done in natural language processing. In this paper, we present a survey of machine learning methods for SMILES-based language modeling and report benchmarking results on a standardized subset of the ChEMBL database.
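As an illustration of SMILES-based language modeling, here is a minimal character-level sketch; the toy molecules, vocabulary, and recurrent model are assumptions and do not reflect the paper's benchmark setup.

```python
# Character-level SMILES language model: predict the next SMILES
# character from the previous ones. Toy molecules, not ChEMBL data.
import torch
import torch.nn as nn

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]  # toy molecules for illustration
chars = sorted({ch for s in smiles for ch in s}) + ["^", "$"]  # start/end
stoi = {ch: i for i, ch in enumerate(chars)}

class SmilesLM(nn.Module):
    """A small recurrent next-character model over SMILES strings."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

model = SmilesLM(len(chars))
seq = torch.tensor([[stoi[c] for c in "^" + smiles[0] + "$"]])
logits = model(seq[:, :-1])  # predict characters 1..n from 0..n-1
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), seq[:, 1:])
```

After training on a real corpus, sampling characters from the model until the end token yields candidate molecules, which is the generation setting the survey examines.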

