Stockgram: Deep Learning Model for Digitizing Financial Communications via Natural Language Generation

2020, Vol. 9 (4), pp. 1-10
Author(s):
Purva Singh


Author(s):
Nilesh Ade,
Noor Quddus,
Trent Parker,
S. Camille Peres

One of the major implications of Industry 4.0 will be the application of digital procedures in the process industries. Digital procedures are procedures accessed through a smart device such as a tablet or phone. However, like paper-based procedures, their usability is limited by their accessibility. This issue is magnified in tasks such as loading a hopper car with plastic pellets, where operators typically place the procedure at a safe distance from the worksite. For digital procedures, this drawback can be addressed with an artificial-intelligence-based, voice-enabled conversational agent (chatbot). As part of this study, we developed a chatbot to assist with digital procedure adherence. The chatbot is trained, via deep learning, on a set of likely operator queries and the text of the digital procedures, and it produces responses using natural language generation. The chatbot is tested in a simulated conversation with an operator performing the task of loading a hopper car.
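The abstract does not specify the model architecture, so the sketch below reduces the query-to-response step to a toy intent classifier in PyTorch: operator queries are mapped to canned procedure-step responses. All data, names, and hyperparameters here are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch: the chatbot reduced to an intent classifier that maps
# operator queries to canned procedure-step responses.
import torch
import torch.nn as nn

queries = [
    "what is the next step",
    "how do i connect the loading arm",
    "is the hopper car vented",
    "repeat the last instruction",
]
responses = [
    "Step 4: verify the hopper car brakes are set.",
    "Align the loading arm with the hatch and lock the clamp.",
    "Open the vent valve before starting the pellet flow.",
    "Repeating: confirm the outlet caps are secured.",
]

# Tiny whitespace vocabulary; index 0 is reserved for unknown words.
vocab = {w: i + 1 for i, w in enumerate(sorted({w for q in queries for w in q.split()}))}

def encode(text, max_len=8):
    ids = [vocab.get(w, 0) for w in text.lower().split()][:max_len]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

X = torch.stack([encode(q) for q in queries])
y = torch.arange(len(queries))  # one intent class per canned response

model = nn.Sequential(
    nn.Embedding(len(vocab) + 1, 32),
    nn.Flatten(),
    nn.Linear(8 * 32, len(responses)),
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):  # deliberately overfit the toy query/intent pairs
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

query = "what's the next step"
print(responses[model(encode(query).unsqueeze(0)).argmax().item()])
```

In a deployed voice-enabled agent, the encoded query would come from speech recognition and the response templates from the digital procedure text; the classifier here stands in for that trained deep learning component.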


Author(s):
Soo-Han Kang,
Ji-Hyeong Han

Robot vision provides the most important information to robots, allowing them to read context and interact successfully with human partners. Moreover, the best way to let humans recognize a robot's visual understanding during human-robot interaction (HRI) is for the robot to explain its understanding in natural language. In this paper, we propose a new approach to interpreting robot vision from an egocentric standpoint and generating descriptions that explain egocentric videos, particularly for HRI. Because robot vision is, in effect, egocentric video on the robot's side, it contains exocentric-view information as well as egocentric-view information. We therefore propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips paired with natural-language GAI descriptions representing both egocentric and exocentric information. An encoder-decoder deep learning model is trained on the GAI dataset, and its performance on description-generation assessments is evaluated. We also conduct experiments in real environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
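The abstract names an encoder-decoder model but gives no architectural details. Below is a minimal PyTorch sketch of one plausible shape: a GRU encoder summarizes per-frame visual features and a GRU decoder emits description tokens under teacher forcing. The feature dimension, vocabulary size, and the `VideoCaptioner` name are assumptions for illustration, not the paper's actual design.

```python
# Minimal sketch of an encoder-decoder video-description model in the spirit
# of the GAI setup. All sizes and names are hypothetical.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab_size=1000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T_frames, feat_dim), e.g. CNN features per frame
        _, h = self.encoder(frame_feats)   # summarize the clip into one state
        emb = self.embed(captions)         # (B, T_tokens, hidden)
        dec_out, _ = self.decoder(emb, h)  # decode conditioned on the clip
        return self.out(dec_out)           # per-token vocabulary logits

model = VideoCaptioner()
feats = torch.randn(2, 16, 512)          # two clips, 16 frames of features each
caps = torch.randint(0, 1000, (2, 12))   # teacher-forced token ids
logits = model(feats, caps)
print(logits.shape)                      # torch.Size([2, 12, 1000])
```

At inference time the decoder would instead be run step by step, feeding back its own predictions to generate a description of the egocentric clip.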


2021, Vol. 2021, pp. 1-16
Author(s):
Chinh Trong Nguyen,
Dang Tuan Nguyen

Recently, many deep learning models have achieved strong results on the question-answering task, with overall F1 scores above 0.88 on the SQuAD datasets. However, many of these models have quite low F1 scores on why-questions, ranging from 0.57 to 0.7 on the SQuAD v1.1 development set. This means these models are better suited to extracting answers for factoid questions than for why-questions. Why-questions are asked when explanations are needed, and these explanations may be arguments or simply subjective opinions. We therefore propose an approach to answering why-questions using discourse analysis and natural language inference. In our approach, natural language inference is applied to identify implicit arguments at the sentence level; it is also used to compute sentence similarity. Discourse analysis is applied to identify explicit arguments and opinions at the sentence level in documents. The results of these two methods form the pool of answer candidates from which the final answer to each why-question is selected. We also implement a system based on our approach, which, given a why-question and a document, provides an answer, as in a reading-comprehension test. We test our system on a Vietnamese-translated test set containing all why-questions from the SQuAD v1.1 development set. The results show that our system cannot beat a deep learning model in F1 score; however, it can answer more questions (answer rate of 77.0%) than the deep learning model (answer rate of 61.0%).
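The abstract outlines the pipeline (discourse analysis and NLI each propose sentence-level answer candidates, and one candidate is selected as the final answer) but does not give the selection rule. The skeleton below is a hypothetical rendering of that final step; `nli_entailment_score` is a stand-in for a trained NLI model, and the score-combination rule is an assumption, not the authors' method.

```python
# Hypothetical skeleton of the candidate-selection step: discourse analysis
# and NLI each propose answer sentences; the highest combined score wins.
from dataclasses import dataclass

@dataclass
class Candidate:
    sentence: str
    source: str   # "discourse" or "nli", per the two analysis methods
    score: float  # proposing method's confidence in [0, 1]

def nli_entailment_score(premise: str, hypothesis: str) -> float:
    """Stand-in for a trained NLI model's entailment probability.
    Toy proxy: lexical overlap between premise and hypothesis."""
    overlap = set(premise.lower().split()) & set(hypothesis.lower().split())
    return len(overlap) / max(len(hypothesis.split()), 1)

def select_answer(why_question: str, candidates: list[Candidate]) -> str:
    """Pick the candidate that best relates to the question, weighted by
    the confidence of the method that proposed it."""
    best = max(
        candidates,
        key=lambda c: c.score * nli_entailment_score(c.sentence, why_question),
    )
    return best.sentence

candidates = [
    Candidate("The plant closed because demand collapsed.", "discourse", 0.8),
    Candidate("The plant opened in 1962.", "nli", 0.4),
]
print(select_answer("Why did the plant close?", candidates))
```

Swapping the lexical-overlap proxy for a real NLI entailment probability would turn this skeleton into the kind of re-ranking step the abstract describes.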

