Efficient Black-Box Planning Using Macro-Actions with Focused Effects

Author(s):  
Cameron Allen ◽  
Michael Katz ◽  
Tim Klinger ◽  
George Konidaris ◽  
Matthew Riemer ◽  
...  

The difficulty of deterministic planning increases exponentially with search-tree depth. Black-box planning presents an even greater challenge, since planners must operate without an explicit model of the domain. Heuristics can make search more efficient, but goal-aware heuristics for black-box planning usually rely on goal counting, which is often quite uninformative. In this work, we show how to overcome this limitation by discovering macro-actions that make the goal-count heuristic more accurate. Our approach searches for macro-actions with focused effects (i.e. macros that modify only a small number of state variables), which align well with the assumptions made by the goal-count heuristic. Focused macros dramatically improve black-box planning efficiency across a wide range of planning domains, sometimes beating even state-of-the-art planners with access to a full domain model.
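To make the mechanism concrete, here is a minimal sketch of the goal-count heuristic and of why a macro with focused effects keeps it informative. The dict-based state representation and all names are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch: states and goals as dicts from variable to value.

def goal_count(state, goal):
    """Goal-count heuristic: number of goal variables still violated."""
    return sum(1 for var, val in goal.items() if state.get(var) != val)

def apply_macro(state, macro_effects):
    """Apply a macro-action given as a dict of variable -> new value."""
    new_state = dict(state)
    new_state.update(macro_effects)
    return new_state

state = {"a": 0, "b": 0, "c": 0}
goal = {"a": 1, "b": 1}
focused_macro = {"a": 1}  # focused: modifies a single state variable

print(goal_count(state, goal))                              # 2
print(goal_count(apply_macro(state, focused_macro), goal))  # 1
```

Because a focused macro touches only a few variables, each application changes the goal count by a small, predictable amount, which is exactly the regime in which goal counting tracks true distance-to-goal well.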

2021 ◽  
pp. 1-11
Author(s):  
Tianshi Mu ◽  
Kequan Lin ◽  
Huabing Zhang ◽  
Jian Wang

Deep learning is gaining significant traction in a wide range of areas. However, recent studies have demonstrated that deep learning exhibits a fatal weakness to adversarial examples. Because deep learning models are black boxes that lack transparency, it is difficult to explain why adversarial examples exist, and hard to defend against them. This study focuses on improving the adversarial robustness of convolutional neural networks. We first explore how adversarial examples behave inside the network through visualization. We find that adversarial examples produce perturbations in hidden activations, which form an amplification effect that fools the network. Motivated by this observation, we propose an approach, termed sanitizing hidden activations, that helps the network correctly recognize adversarial examples by eliminating or reducing the perturbations in hidden activations. To demonstrate the effectiveness of our approach, we conduct experiments on three widely used datasets (MNIST, CIFAR-10, and ImageNet) and compare with state-of-the-art defense techniques. The experimental results show that our sanitizing approach generalizes better across different kinds of attacks and can effectively improve the adversarial robustness of convolutional neural networks.
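As a toy illustration of the idea, one simple way to reduce perturbations in hidden activations is to clamp each hidden unit to a range estimated from clean data, so adversarial noise cannot be amplified downstream. This is our own simplification for illustration, not the authors' implementation:

```python
import numpy as np

def fit_bounds(clean_activations, k=3.0):
    """Per-unit clean range: mean +/- k standard deviations."""
    mu = clean_activations.mean(axis=0)
    sigma = clean_activations.std(axis=0)
    return mu - k * sigma, mu + k * sigma

def sanitize(activations, lo, hi):
    """Clip hidden activations back into the clean range."""
    return np.clip(activations, lo, hi)

rng = np.random.default_rng(0)
clean = rng.standard_normal((1000, 64))      # activations on clean inputs
lo, hi = fit_bounds(clean)
adversarial = clean[:1] + 5.0 * rng.standard_normal((1, 64))
sanitized = sanitize(adversarial, lo, hi)
print(((lo <= sanitized) & (sanitized <= hi)).all())  # True
```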


Information ◽  
2019 ◽  
Vol 10 (1) ◽  
pp. 30 ◽  
Author(s):  
Rosa Senatore ◽  
Antonio Della Cioppa ◽  
Angelo Marcelli

Background: The use of Artificial Intelligence (AI) systems for automatic diagnosis is increasingly common in the clinical field, providing useful support for the identification of several diseases. Nonetheless, physicians' acceptance of AI-based diagnoses is hampered by the black-box approach implemented by most high-performing systems, which do not clearly state the classification rules they adopt. Methods: In this framework, we propose a classification method based on a Cartesian Genetic Programming (CGP) approach, which allows for the automatic identification of the presence of the disease and, concurrently, provides the explicit classification model used by the system. Results: The proposed approach has been evaluated on the publicly available HandPD dataset, which contains handwriting samples drawn by Parkinson’s disease patients and healthy controls. We show that our approach compares favorably with state-of-the-art methods and, more importantly, allows the physician to identify an explicit model relevant for the diagnosis based on the most informative subset of features. Conclusion: The obtained results suggest that the proposed approach is particularly appealing in that, starting from the explicit model, it allows physicians to derive a set of guidelines for defining novel testing protocols and intervention strategies.
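The appeal of CGP here is that an evolved individual is an explicit expression graph that can be printed as a classification rule. Below is a toy sketch of how such a genome can be evaluated and rendered; the genome layout and function set are our own illustration, not the paper's encoding:

```python
import operator

FUNCS = {0: operator.add, 1: operator.sub, 2: operator.mul, 3: max}
NAMES = {0: "+", 1: "-", 2: "*", 3: "max"}

def evaluate(genome, inputs):
    """Each gene is (func_id, src_a, src_b); sources index the inputs
    or earlier nodes. The last node is the classifier output."""
    values = list(inputs)
    for func_id, a, b in genome:
        values.append(FUNCS[func_id](values[a], values[b]))
    return values[-1]

def describe(genome, n_inputs):
    """Render the evolved model as a readable expression."""
    names = [f"x{i}" for i in range(n_inputs)]
    for func_id, a, b in genome:
        names.append(f"({names[a]} {NAMES[func_id]} {names[b]})")
    return names[-1]

genome = [(2, 0, 1), (1, 2, 0)]  # node2 = x0 * x1, node3 = node2 - x0
features = [0.8, 0.3]            # e.g., two handwriting-derived features
print(describe(genome, 2), "=", evaluate(genome, features))
# prints ((x0 * x1) - x0) = -0.56; classify by thresholding the value
```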


Author(s):  
Florent Teichteil-Königsbuch ◽  
Miquel Ramirez ◽  
Nir Lipovetzky

Width-based planning algorithms have been demonstrated to be competitive with state-of-the-art heuristic search and SAT-based approaches, without requiring access to a model of action effects and preconditions, just access to a black-box simulator. Search in width-based planners is guided by a measure of state novelty, which requires observations of simulator states to be given as a set of features. This paper proposes agnostic feature mapping mechanisms that define the features online, as exploration progresses and the domain of continuous state variables is revealed. We demonstrate the effectiveness of these features on the OpenAI Gym "classical control" suite of benchmarks. We compare our online planners with state-of-the-art deep reinforcement learning algorithms, and show that width-based planners using our features can find policies of the same quality with significantly fewer computational resources.
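At the core of width-based planning is a novelty test: a state is novel (has width 1) if it exhibits at least one feature value never seen in any previously generated state. The sketch below pairs that test with a simple online discretization of continuous variables; the exact feature mapping is our assumption, not the paper's mechanism:

```python
def make_features(state, bin_width=0.1):
    """Map continuous state variables to discrete (index, bin) features."""
    return {(i, round(x / bin_width)) for i, x in enumerate(state)}

seen = set()

def is_novel(state):
    """Width-1 novelty: does the state contain any unseen feature?"""
    feats = make_features(state)
    new = feats - seen
    seen.update(feats)
    return bool(new)

print(is_novel([0.02, 1.31]))  # True: first state, all features new
print(is_novel([0.03, 1.33]))  # False: both variables fall in seen bins
print(is_novel([0.25, 1.31]))  # True: first variable enters a new bin
```

A width-based planner prunes non-novel states, so the granularity of the feature mapping directly controls how much of a continuous state space gets explored.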


2021 ◽  
Vol 15 (8) ◽  
pp. 898-911
Author(s):  
Yongqing Zhang ◽  
Jianrong Yan ◽  
Siyu Chen ◽  
Meiqin Gong ◽  
Dongrui Gao ◽  
...  

Rapid advances in biological research over recent years have significantly enriched biological and medical data resources. Deep learning-based techniques have been successfully utilized to process data in this field, and they have exhibited state-of-the-art performance even on high-dimensional, unstructured, and black-box biological data. The aim of the current study is to provide an overview of the deep learning-based techniques used in biology and medicine and their state-of-the-art applications. In particular, we introduce the fundamentals of deep learning and then review the success of applying such methods to bioinformatics, biomedical imaging, biomedicine, and drug discovery. We also discuss the challenges and limitations of this field, and outline possible directions for further research.


2020 ◽  
Vol 12 ◽  
Author(s):  
Francisco Basílio ◽  
Ricardo Jorge Dinis-Oliveira

Background: Pharmacobezoars are a specific type of bezoar formed when medicines, such as tablets, suspensions, and/or drug delivery systems, aggregate; they may cause death by occluding the airways with tenacious material or by eluting drugs, resulting in toxic or lethal blood concentrations. Objective: This work aims to fully review the state of the art regarding the pathophysiology, diagnosis, treatment, and other relevant clinical and forensic features of pharmacobezoars. Results: Patients of a wide range of ages and of both sexes present with signs and symptoms of intoxication or, more commonly, gastrointestinal obstruction. The exact mechanism of pharmacobezoar formation is unknown but is likely multifactorial. Diagnosis and treatment depend on the gastrointestinal segment affected and should be personalized to the medication and the underlying factor. A complete history, physical examination, imaging tests, upper endoscopy, and, for the lower tract, surgery through laparotomy are useful for diagnosis and treatment. Conclusion: Pharmacobezoars are rarely seen in clinical and forensic practice. They are related to controlled- or immediate-release formulations, liquid or non-digestible substances, normal or altered digestive motility/anatomy, and overdoses or therapeutic doses, and should be suspected in the presence of risk factors or in patients taking drugs that may form pharmacobezoars.


This volume vividly demonstrates the importance and increasing breadth of quantitative methods in the earth sciences. With contributions from an international cast of leading practitioners, chapters cover a wide range of state-of-the-art methods and applications, including computer modeling and mapping techniques. Many chapters also contain reviews and extensive bibliographies which serve to make this an invaluable introduction to the entire field. In addition to its detailed presentations, the book includes chapters on the history of geomathematics and on R.G.V. Eigen, the "father" of mathematical geology. Written to commemorate the 25th anniversary of the International Association for Mathematical Geology, the book will be sought after by both practitioners and researchers in all branches of geology.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph & subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection, and can perform well in many applications. However, most existing works rely on the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. Some methods can, on the other hand, estimate multiple subtensors, but they can give a density guarantee with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing a solution, both theoretical and practical, for estimating multiple dense subtensors in tensor data with a higher lower bound on the density. In particular, we prove and guarantee a higher lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors whose density guarantee is greater than the lower bound used in state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, demonstrating its efficiency and feasibility.
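For reference, the greedy 2-approximation the abstract refers to is, in the graph case, Charikar-style peeling: repeatedly remove a minimum-degree vertex and keep the densest intermediate subgraph, where density is |E|/|V|. This is the standard textbook baseline, not the authors' improved method:

```python
from collections import defaultdict

def densest_subgraph_greedy(edges):
    """Greedy peeling; returns a subgraph with >= half the optimal density."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes, m = set(adj), len(edges)
    best_density, best_nodes = 0.0, set(nodes)
    while nodes:
        density = m / len(nodes)
        if density > best_density:
            best_density, best_nodes = density, set(nodes)
        u = min(nodes, key=lambda x: len(adj[x]))  # peel a min-degree vertex
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        nodes.remove(u)
        del adj[u]
    return best_nodes, best_density

edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]  # K4 + pendant
print(densest_subgraph_greedy(edges))  # ({1, 2, 3, 4}, 1.5)
```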


2021 ◽  
Vol 50 (1) ◽  
pp. 33-40
Author(s):  
Chenhao Ma ◽  
Yixiang Fang ◽  
Reynold Cheng ◽  
Laks V.S. Lakshmanan ◽  
Wenjie Zhang ◽  
...  

Given a directed graph G, the directed densest subgraph (DDS) problem is to find the subgraph of G whose density is the highest among all subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a three-thousand-edge graph, it takes three days for one of the best exact algorithms to complete. In this paper, we develop an efficient and scalable DDS solution. We introduce the notion of the [x, y]-core, which is a dense subgraph of G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop both exact and approximation algorithms. We have performed an extensive evaluation of our approaches on eight large real-world datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state of the art.
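For directed graphs, the density the DDS literature typically maximizes is defined over a pair of vertex sets: ρ(S, T) = |E(S, T)| / sqrt(|S| · |T|), where edges run from the source set S to the target set T. Brute-force search over all (S, T) pairs is exponential, which is what the [x, y]-core pruning is designed to avoid; the sketch below only evaluates the objective (our illustration of the standard definition, not the paper's algorithm):

```python
import math

def directed_density(edges, S, T):
    """rho(S, T) = |E(S, T)| / sqrt(|S| * |T|)."""
    e = sum(1 for u, v in edges if u in S and v in T)
    return e / math.sqrt(len(S) * len(T))

edges = [(1, 2), (1, 3), (4, 2), (4, 3), (4, 5)]
print(directed_density(edges, S={1, 4}, T={2, 3}))  # 4 / sqrt(4) = 2.0
```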


2021 ◽  
Author(s):  
Danila Piatov ◽  
Sven Helmer ◽  
Anton Dignös ◽  
Fabio Persia

We develop a family of efficient plane-sweeping interval join algorithms for evaluating a wide range of interval predicates, such as Allen’s relationships and parameterized relationships. Our technique is based on a framework whose components can be flexibly combined to support the required interval relation. In temporal databases, our algorithms can exploit a well-known and flexible access method, the Timeline Index, thus expanding the set of operations it supports even further. Additionally, by employing a compact data structure, the gapless hash map, we utilize the CPU cache efficiently. In an experimental evaluation, we show that our approach is several times faster and scales better than state-of-the-art techniques, while being much better suited for real-time event processing.
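A minimal plane-sweep sketch for the simplest predicate, interval overlap: sort endpoint events, sweep once, and maintain an active set per input. The framework in the paper generalizes this pattern to Allen's and parameterized relationships via the Timeline Index and a gapless hash map; the plain Python sets below only illustrate the sweep itself:

```python
def overlap_join(r, s):
    """All (i, j) such that r[i] and s[j] overlap.
    Intervals are (start, end) tuples with start < end."""
    events = []  # (time, is_end, side, index)
    for i, (a, b) in enumerate(r):
        events += [(a, 0, "r", i), (b, 1, "r", i)]
    for j, (a, b) in enumerate(s):
        events += [(a, 0, "s", j), (b, 1, "s", j)]
    events.sort()  # at equal times, starts (0) come before ends (1),
                   # so touching intervals count as overlapping
    active = {"r": set(), "s": set()}
    out = []
    for _, is_end, side, idx in events:
        if is_end:
            active[side].discard(idx)
        else:
            other = "s" if side == "r" else "r"
            for k in active[other]:
                out.append((idx, k) if side == "r" else (k, idx))
            active[side].add(idx)
    return out

print(overlap_join([(1, 5), (6, 9)], [(4, 7)]))  # [(0, 0), (1, 0)]
```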

