An End-to-End Algorithm for Solving Circuit Problems

Author(s):  
Pengpeng Jian ◽  
Chao Sun ◽  
Xinguo Yu ◽  
Bin He ◽  
Meng Xia

This paper presents an end-to-end algorithm for solving circuit problems in secondary-school physics. A key challenge in solving circuit problems is automatically understanding them across both modalities: the problem text and the circuit schematic. Existing methods have limited understanding capacity because they cannot handle the many ways problems are expressed in natural language or the variety of circuit diagrams. This paper therefore proposes a set of methods to address this challenge. Problem understanding is modeled as relation extraction, and a scheme is proposed to extract relations from both text and schematic. A syntax–semantics model is adopted to extract explicit relations from text, a unit-theorem-based method is proposed to extract implicit relations, and a mesh search method is proposed to extract relations from the schematic. Based on the result of problem understanding, an algorithm is proposed to produce solutions to circuit problems and present them in a readable form. The experimental results demonstrate the effectiveness of the proposed algorithm in solving circuit problems. To the best of our knowledge, this paper is the first to report quantitative results on understanding and solving circuit problems.
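To illustrate the flow from extracted relations to a readable solution, here is a minimal sketch (not the authors' code): explicit and implicit relations for a simple series circuit are collected as equations and solved with SymPy. The entity names, values, and relation set are hypothetical.

```python
# A minimal sketch of turning extracted relations into a solvable equation system.
import sympy as sp

# Relations extracted from text/schematic, e.g. "R1 = 4 ohm", "U = 12 V",
# "R1 and R2 are connected in series"; target quantity: current I.
R1, R2, U, I = sp.symbols("R1 R2 U I", positive=True)

relations = [
    sp.Eq(R1, 4),             # explicit relation from text
    sp.Eq(R2, 2),             # explicit relation from text
    sp.Eq(U, 12),             # explicit relation from text
    sp.Eq(U, I * (R1 + R2)),  # implicit relation: Ohm's law over the series mesh
]

solution = sp.solve(relations, [R1, R2, U, I], dict=True)[0]
print(f"I = {solution[I]} A")  # -> I = 2 A
```

A real solver would generate the relation list automatically and trace each equation back to its source, so the final answer can be presented step by step.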

2020 ◽  
Vol 34 (10) ◽  
pp. 13969-13970
Author(s):  
Atsuki Yamaguchi ◽  
Katsuhide Fujita

In human-human negotiation, reaching a rational agreement can be difficult, and negotiations sometimes break down because of conflicts of interest. If artificial intelligence can assist human-human negotiation, it can help avoid breakdowns and guide the parties toward a rational agreement. This study therefore focuses on end-to-end tasks for predicting the outcome of a negotiation dialogue in natural language. The task is modeled using a gated recurrent unit and a pre-trained language model (BERT) as baselines. Experimental results demonstrate that the proposed tasks are feasible on two negotiation dialogue datasets, and that signs of a breakdown can be detected in the early stages using the baselines, even when the models are given only a partial dialogue history.
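A minimal sketch of the recurrent baseline, assuming a GRU over per-utterance embeddings; the hidden size, pooling, and input dimensions are illustrative, not the paper's exact configuration.

```python
# Sketch: GRU baseline for predicting negotiation outcome from (partial) dialogue history.
import torch
import torch.nn as nn

class BreakdownPredictor(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, utterance_embs):        # (batch, turns, emb_dim)
        _, last_hidden = self.gru(utterance_embs)
        return self.head(last_hidden[-1])     # logits: agreement vs. breakdown

# Early prediction from a partial dialogue: feed only the first k turns.
model = BreakdownPredictor()
partial_dialogue = torch.randn(1, 5, 768)     # e.g. embeddings of the first 5 utterances
logits = model(partial_dialogue)
```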


2020 ◽  
Vol 34 (05) ◽  
pp. 7375-7382
Author(s):  
Prithviraj Ammanabrolu ◽  
Ethan Tien ◽  
Wesley Cheung ◽  
Zhaochen Luo ◽  
William Ma ◽  
...  

Neural network based approaches to automated story plot generation attempt to learn how to generate novel plots from a corpus of natural language plot summaries. Prior work has shown that a semantic abstraction of sentences called events improves neural plot generation and allows one to decompose the problem into: (1) the generation of a sequence of events (event-to-event) and (2) the transformation of these events into natural language sentences (event-to-sentence). However, typical neural language generation approaches to event-to-sentence can ignore the event details and produce grammatically correct but semantically unrelated sentences. We present an ensemble-based model that generates natural language guided by events. We provide results, including a human subjects study, for a full end-to-end automated story generation system, showing that our method generates more coherent and plausible stories than baseline approaches.
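A minimal sketch of the two-stage decomposition described above; the event tuple fields and the stubbed models are illustrative assumptions, not the authors' code.

```python
# Sketch: event-to-event then event-to-sentence, with placeholder models.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    subject: str
    verb: str
    obj: str
    modifier: str

def event_to_event(history: List[Event]) -> Event:
    """Stage 1: predict the next abstract event from the event history (stub)."""
    return Event("hero", "travel", "forest", "night")

def event_to_sentence(event: Event) -> str:
    """Stage 2: realize an abstract event as a natural language sentence (stub)."""
    return f"The {event.subject} decided to {event.verb} to the {event.obj} at {event.modifier}."

story_events = [Event("hero", "receive", "quest", "village")]
next_event = event_to_event(story_events)
print(event_to_sentence(next_event))
```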


Author(s):  
Anton Dries ◽  
Angelika Kimmig ◽  
Jesse Davis ◽  
Vaishak Belle ◽  
Luc de Raedt

The ability to solve probability word problems, such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step, fully automated end-to-end approach that can answer exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a high-level model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. On a dataset of 2160 probability problems, our solver correctly answers 97.5% of the questions given a correct model. On the end-to-end evaluation, it answers 12.5% of the questions (or 31.1% if we exclude examples not supported by design).
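To make the two-step idea concrete, here is a minimal sketch: a hand-written stand-in for the declarative model of "draw 2 balls from an urn with 3 red and 2 blue; what is the probability both are red?", answered by exhaustive enumeration rather than a full probabilistic programming system. The model and query are assumptions for illustration.

```python
# Sketch: declarative-style model of a drawing experiment, solved by enumeration.
from itertools import combinations
from fractions import Fraction

# Step 1 (assumed output of the language analysis): a model of the experiment.
urn = ["red"] * 3 + ["blue"] * 2
draw_size = 2
event = lambda draw: all(ball == "red" for ball in draw)

# Step 2: compute the query probability over the model.
draws = list(combinations(range(len(urn)), draw_size))
favorable = sum(event([urn[i] for i in d]) for d in draws)
print(Fraction(favorable, len(draws)))  # -> 3/10
```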


Author(s):  
Fangrui Wu ◽  
Menglong Yang

Recent end-to-end CNN-based stereo matching algorithms obtain disparities through regression from a cost volume, which is formed by concatenating the features of stereo pairs. Downsampling steps are often embedded in the construction of the cost volume for global information aggregation and computational efficiency. However, many edge details are hard to recover due to the imprudent upsampling process and ambiguous boundary predictions. To tackle this problem without training another edge prediction sub-network, we develop a novel tightly coupled edge refinement pipeline composed of two modules. The first module implements a gentle upsampling process through a cascaded cost volume filtering method, aggregating global information without losing many details. On this basis, the second module concentrates on generating a disparity residual map for boundary pixels via a sub-pixel disparity consistency check, to further recover the edge details. The experimental results on public datasets demonstrate the effectiveness of the proposed method.
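For context, a minimal sketch (assumed, not the authors' implementation) of the standard step the refinement builds on: regressing a continuous, sub-pixel disparity map from a cost volume via soft-argmin.

```python
# Sketch: soft-argmin disparity regression from a cost volume.
import torch
import torch.nn.functional as F

def soft_argmin_disparity(cost_volume):
    """cost_volume: (batch, max_disp, H, W) matching costs (lower = better match)."""
    prob = F.softmax(-cost_volume, dim=1)  # convert costs to a distribution over disparities
    disparities = torch.arange(cost_volume.size(1), dtype=prob.dtype, device=prob.device)
    return (prob * disparities.view(1, -1, 1, 1)).sum(dim=1)  # expected (sub-pixel) disparity

cost = torch.randn(1, 64, 128, 256)   # toy cost volume with 64 disparity levels
disp = soft_argmin_disparity(cost)    # (1, 128, 256) continuous disparity map
```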


Author(s):  
Nicolas Bougie ◽  
Ryutaro Ichise

Deep reinforcement learning (DRL) methods traditionally struggle with tasks where environment rewards are sparse or delayed, which means that exploration remains one of the key challenges of DRL. Instead of relying solely on extrinsic rewards, many state-of-the-art methods use intrinsic curiosity as an exploration signal. While they hold the promise of better local exploration, discovering global exploration strategies is beyond the reach of current methods. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our curiosity signal is driven by a fast reward that deals with local exploration and a slow reward that incentivizes long-horizon exploration strategies. We formulate curiosity as the error in an agent's ability to reconstruct the observations given their contexts. Experimental results show that this high-level exploration enables our agents to outperform prior work in several Atari games.
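A minimal sketch of combining a fast and a slow reconstruction-error signal into one intrinsic reward; the network shapes, weighting, and update schedule are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: intrinsic reward = fast + weighted slow reconstruction error.
import torch
import torch.nn as nn

class CuriosityReward(nn.Module):
    def __init__(self, obs_dim=64, ctx_dim=64, beta=0.5):
        super().__init__()
        # Two reconstruction models: "fast" adapts quickly (local novelty),
        # "slow" is updated less often (long-horizon novelty).
        self.fast = nn.Sequential(nn.Linear(ctx_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim))
        self.slow = nn.Sequential(nn.Linear(ctx_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim))
        self.beta = beta

    def forward(self, context, observation):
        err_fast = (self.fast(context) - observation).pow(2).mean(dim=-1)
        err_slow = (self.slow(context) - observation).pow(2).mean(dim=-1)
        return err_fast + self.beta * err_slow  # added to the extrinsic reward during training

reward_fn = CuriosityReward()
r_int = reward_fn(torch.randn(8, 64), torch.randn(8, 64))  # per-sample intrinsic reward
```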


Author(s):  
Shuming Ma ◽  
Xu Sun ◽  
Junyang Lin ◽  
Xuancheng Ren

Text summarization and sentiment classification both aim to capture the main ideas of a text, but at different levels. Text summarization describes the text in a few sentences, while sentiment classification can be regarded as a special type of summarization which ``summarizes'' the text in an even more abstract fashion, i.e., into a sentiment class. Based on this idea, we propose a hierarchical end-to-end model for joint learning of text summarization and sentiment classification, where the sentiment classification label is treated as a further ``summarization'' of the text summarization output. Hence, the sentiment classification layer is put on top of the text summarization layer, and a hierarchical structure is derived. Experimental results on Amazon online review datasets show that our model achieves better performance than strong baseline systems on both abstractive summarization and sentiment classification.
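A minimal sketch of the hierarchical arrangement, with the sentiment classifier sitting on top of the summarization layer's states; the encoder/decoder internals and dimensions are simplified assumptions, not the paper's architecture.

```python
# Sketch: joint summarization + sentiment model with a hierarchical head.
import torch
import torch.nn as nn

class JointSummarySentiment(nn.Module):
    def __init__(self, vocab=30000, emb=128, hidden=256, classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.summarizer = nn.GRU(emb, hidden, batch_first=True)  # summarization layer
        self.gen_head = nn.Linear(hidden, vocab)                  # predicts summary tokens
        self.cls_head = nn.Linear(hidden, classes)                # sentiment on top of summary states

    def forward(self, src_ids, summary_ids):
        _, enc_state = self.encoder(self.embed(src_ids))
        dec_out, dec_state = self.summarizer(self.embed(summary_ids), enc_state)
        return self.gen_head(dec_out), self.cls_head(dec_state[-1])

model = JointSummarySentiment()
gen_logits, sent_logits = model(torch.randint(0, 30000, (2, 50)),
                                torch.randint(0, 30000, (2, 12)))
```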


Author(s):  
Parisa Kordjamshidi ◽  
Paolo Frasconi ◽  
Martijn Van Otterlo ◽  
Marie-Francine Moens ◽  
Luc De Raedt

2021 ◽  
pp. 100686
Author(s):  
Wenfei Hu ◽  
Lu Liu ◽  
Yupeng Sun ◽  
Yu Wu ◽  
Zhicheng Liu ◽  
...  
