inductive bias
Recently Published Documents


TOTAL DOCUMENTS

82
(FIVE YEARS 45)

H-INDEX

9
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Alexander Telfar

Successful reinforcement learning requires large amounts of data, compute, and some luck. We explore the ability of abstractions to reduce these dependencies. Abstractions for reinforcement learning share the goals of this abstract: to capture the essential details while leaving out the unimportant. By throwing away inessential details, there is less to compute, less to explore, and less variance in observations. But does this always aid reinforcement learning? More specifically, we start by looking for abstractions that are easily solvable. This leads us to a type of linear abstraction. We show that, while it does allow efficient solutions, it can also give erroneous solutions in the general case. We then attempt to improve the sample efficiency of a reinforcement learner by constructing a measure of symmetry and using it as an inductive bias. We design and run experiments to test the advantage provided by this inductive bias, but must leave conclusions to future work.
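The abstract does not include an implementation, but as a hedged sketch of what "using a measure of symmetry as an inductive bias" can look like in practice, the snippet below penalizes a Q-network for disagreeing on state-action pairs related by a known reflection symmetry. The 4-dimensional state, the `mirror` transform, and the action swap are illustrative assumptions (loosely modelled on a CartPole-style environment), not the thesis's construction.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a symmetry-consistency penalty used as an inductive
# bias for a Q-network. Assumes a 4-D state whose reflection is a symmetry
# of the MDP and that the two actions swap roles under that reflection.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def mirror(states):
    # Illustrative reflection: negate the whole state vector.
    return -states

def symmetry_penalty(states):
    q = q_net(states)                   # Q-values for the original states
    q_mirrored = q_net(mirror(states))  # Q-values for the reflected states
    # Compare Q(s, a) with Q(mirror(s), swap(a)): flipping the action axis
    # implements the assumed action swap.
    return ((q - q_mirrored.flip(dims=[1])) ** 2).mean()

states = torch.randn(32, 4)
loss = symmetry_penalty(states)  # added to the usual TD loss with a small weight
```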


2021 ◽  
Author(s):  
Shikha Dubey ◽  
Farrukh Olimov ◽  
Muhammad Aasim Rafique ◽  
Moongu Jeon

General artificial intelligence involves a trade-off between an algorithm's inductive bias and its out-of-distribution generalization performance. The conspicuous impact of inductive bias is an unceasing trend of improved predictions across computer vision problems such as object detection. Although the recently introduced transformer-based object detector (DETR) shows results competitive with conventional and modern object detection models, its accuracy deteriorates when detecting small-sized objects (in perspective). This study examines the inductive bias of DETR and proposes a normalized inductive bias for object detection using a transformer (SOF-DETR). It uses a lazy fusion of features to retain deep contextual information about the objects present in the image. Features from multiple subsequent deep layers are fused by element-wise summation and fed to a transformer network, whose object queries learn long- and short-distance spatial associations in the image through the attention mechanism. SOF-DETR uses a global set-based prediction for object detection, directly producing a set of bounding boxes. Experimental results on the MS COCO dataset show the effectiveness of the added normalized inductive bias and feature-fusion techniques: SOF-DETR detects more small-sized objects than DETR.
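As a rough illustration of the fusion step described above (not the authors' exact configuration), the sketch below projects two subsequent backbone feature maps to a common width, fuses them by element-wise summation, and flattens the result into tokens for a transformer encoder. The channel counts, spatial sizes, and single encoder layer are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative lazy fusion by element-wise summation before a transformer.
# Shapes and backbone stages are assumed for the example only.
c4 = torch.randn(1, 1024, 32, 32)  # feature map from an earlier deep stage
c5 = torch.randn(1, 2048, 16, 16)  # feature map from the last backbone stage

proj4 = nn.Conv2d(1024, 256, kernel_size=1)
proj5 = nn.Conv2d(2048, 256, kernel_size=1)

# Bring both maps to a common channel width and resolution, then sum.
f4 = proj4(c4)
f5 = nn.functional.interpolate(proj5(c5), size=f4.shape[-2:], mode="nearest")
fused = f4 + f5                               # (1, 256, 32, 32)

# Flatten into a token sequence (tokens, batch, channels) for the encoder.
tokens = fused.flatten(2).permute(2, 0, 1)    # (1024, 1, 256)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8), num_layers=1)
memory = encoder(tokens)                      # contextualized tokens for object queries
```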


2021 ◽  
Vol 3 (1) ◽  
pp. 2
Author(s):  
Marnix Van Soom ◽  
Bart de Boer

We derive a weakly informative prior for a set of ordered resonance frequencies from Jaynes’ principle of maximum entropy. The prior facilitates model selection problems in which both the number and the values of the resonance frequencies are unknown. It encodes a weak inductive bias, provides a reasonable density everywhere, is easily parametrizable, and is easy to sample from. We hope that this prior can enable the use of robust evidence-based methods for a new class of problems, even in the presence of multiplets of arbitrary order.
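The paper's derived prior is not reproduced in this abstract; purely as a hedged illustration of what an easy-to-sample prior over ordered frequencies can look like, the snippet below draws K frequencies uniformly over an assumed band and sorts them (uniform order statistics). The band limits and K are invented for the example and are not the paper's parametrization.

```python
import numpy as np

# Illustrative only (not the paper's maximum-entropy prior): ordered
# resonance frequencies sampled as order statistics of uniform draws
# on an assumed band [f_lo, f_hi].
rng = np.random.default_rng(0)
f_lo, f_hi, K = 200.0, 5000.0, 4            # band in Hz and number of resonances

samples = np.sort(rng.uniform(f_lo, f_hi, size=(1000, K)), axis=1)
print(samples[:3])                          # each row satisfies f_1 < f_2 < ... < f_K
```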


2021 ◽  
pp. 1-16
Author(s):  
Hiromi Nakagawa ◽  
Yusuke Iwasawa ◽  
Yutaka Matsuo

Recent advancements in computer-assisted learning systems have spurred research on knowledge tracing, in which student performance is predicted over time. Student coursework can potentially be structured as a graph, and incorporating this graph-structured nature into a knowledge tracing model as a relational inductive bias can improve its performance; however, previous methods, such as deep knowledge tracing, did not consider such a latent graph structure. Inspired by the recent successes of graph neural networks (GNNs), we herein propose a GNN-based knowledge tracing method: graph-based knowledge tracing. Casting the knowledge structure as a graph enables us to reformulate the knowledge tracing task as a time-series node-level classification problem in the GNN. As the knowledge graph structure is not explicitly provided in most cases, we propose various implementations of the graph structure. Empirical validations on two open datasets indicate that our method can potentially improve the prediction of student performance and yields more interpretable predictions than previous methods, without requiring any additional information.
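As a hedged sketch of the reformulation described above, the snippet below keeps one hidden state per skill node, propagates states over a learnable dense adjacency after each interaction, and reads out a per-node probability of answering correctly (time-series node-level classification). The dimensions, the adjacency parameterization, and the GRU update are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative GNN-style knowledge tracing step: skills are graph nodes,
# knowledge states are node features, and a learnable adjacency carries
# messages between skills after each student interaction.
num_skills, hidden = 10, 16
adj = nn.Parameter(torch.rand(num_skills, num_skills))  # latent graph structure
embed = nn.Embedding(num_skills * 2, hidden)            # (skill, correct/incorrect) pairs
cell = nn.GRUCell(hidden, hidden)
readout = nn.Linear(hidden, 1)

h = torch.zeros(num_skills, hidden)                     # knowledge state per skill node

def step(h, skill_id, correct):
    x = embed(torch.tensor(skill_id + num_skills * int(correct)))  # interaction embedding
    msg = torch.softmax(adj, dim=1) @ h        # message passing over the latent graph
    h = cell(msg + x, h)                       # update every node's knowledge state
    p = torch.sigmoid(readout(h)).squeeze(-1)  # P(correct) for each skill node
    return h, p

h, p = step(h, skill_id=3, correct=True)
```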


2021 ◽  
Author(s):  
Martin Ringsquandl ◽  
Houssem Sellami ◽  
Marcel Hildebrandt ◽  
Dagmar Beyer ◽  
Sylwia Henselmeyer ◽  
...  
