An Information-Motivated Exploration Agent to Locate Stationary Persons with Wireless Transmitters in Unknown Environments

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7695
Author(s):  
Daniel Barry ◽  
Andreas Willig ◽  
Graeme Woodward

Unmanned Aerial Vehicles (UAVs) show promise in a variety of applications and have recently been explored in the area of Search and Rescue (SAR) for finding victims. In this paper we consider the problem of finding multiple unknown stationary transmitters in a discrete, simulated, unknown environment, where the goal is to locate all transmitters in as short a time as possible. Existing solutions in the UAV search space typically search for a single target, assume a simple environment, assume target properties are known, or rely on other unrealistic assumptions. We simulate large, complex environments with limited a priori information about the environment and transmitter properties. We propose a Bayesian search algorithm, Information Exploration Behaviour (IEB), that maximizes predicted information gain at each search step, incorporating information from multiple sensors whilst making minimal assumptions about the scenario. This search method is inspired by the information-theoretic concept of empowerment. Our algorithm shows significant speed-up compared to baseline algorithms, being orders of magnitude faster than a random agent and 10 times faster than a lawnmower strategy, even in complex scenarios. The IEB agent is able to make use of received transmitter signals from unknown sources and to incorporate both an exploration and a search strategy.
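As a rough sketch of the greedy information-gain step described above (not the paper's IEB implementation): a single-cell Bernoulli belief is updated by an idealized detector, and the agent visits the cell whose observation is expected to reduce entropy the most. All names, the zero false-alarm assumption, and the detection probability are hypothetical.

```python
import math

def entropy(p):
    """Shannon entropy of a Bernoulli(p) belief, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(prior, p_detect):
    """Expected entropy reduction of a single-cell belief `prior` after
    one observation from a detector with hit probability `p_detect`
    when the transmitter is present (false-alarm rate assumed zero)."""
    h_before = entropy(prior)
    p_pos = prior * p_detect          # probability of a positive observation
    h_pos = 0.0                       # positive => certainty (no false alarms)
    post_neg = prior * (1 - p_detect) / (1 - p_pos)  # Bayes update on "no hit"
    h_neg = entropy(post_neg)
    return h_before - (p_pos * h_pos + (1 - p_pos) * h_neg)

def choose_next_cell(beliefs, p_detect=0.8):
    """Greedy step: visit the cell with maximal expected information gain."""
    return max(range(len(beliefs)),
               key=lambda i: expected_info_gain(beliefs[i], p_detect))
```

Cells with near-certain beliefs (close to 0 or 1) yield little expected gain, so the greedy rule naturally alternates between exploring uncertain regions and confirming likely transmitter locations.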

Author(s):  
Michael Oeljeklaus ◽  
H. Günther Natke

Abstract An interval analytical approach to parameter identification in the frequency domain of mathematical models for linear elasto-mechanical systems is described. A priori information, measurement errors and — if possible — unmeasurable degrees of freedom are modelled in terms of intervals. A parallel iterative update method — based on the interval analytical Gauss-Seidel method — is used to reduce the volume of the initially given parameter search space. The search for the global minimum of the WLS objective function using output residuals is then performed on the reduced parameter space in a final step. Subsystem identification and sub-model synthesis are used in the case of realistic models with a large number of degrees of freedom. Parallelization of the algorithm with respect to subsystems is applied in the case of large structures to reduce the memory requirements and to speed up the computation. Test results from simulations of a test structure are given in order to illustrate the method.
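A minimal sketch of one interval Gauss-Seidel sweep, the contraction step the abstract builds on: each parameter interval is intersected with the range implied by the other intervals, shrinking the search box without discarding any consistent solution. The toy 2x2 system and all identifiers are illustrative, not from the paper.

```python
def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    ps = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(ps), max(ps))
def idiv(a, b):
    assert b[0] > 0 or b[1] < 0, "divisor interval must not contain zero"
    ps = [a[0]/b[0], a[0]/b[1], a[1]/b[0], a[1]/b[1]]
    return (min(ps), max(ps))
def iintersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    assert lo <= hi, "empty intersection: no consistent parameter value"
    return (lo, hi)

def gauss_seidel_step(A, b, x):
    """One interval Gauss-Seidel sweep for A x = b.
    A: n*n matrix of intervals, b: n intervals, x: current parameter box.
    Returns a (possibly) contracted box still enclosing every solution."""
    n = len(x)
    x = list(x)
    for i in range(n):
        s = b[i]
        for j in range(n):
            if j != i:
                s = isub(s, imul(A[i][j], x[j]))
        x[i] = iintersect(x[i], idiv(s, A[i][i]))
    return x
```

Repeating the sweep until the box stops shrinking gives the reduced parameter space on which the WLS minimization is then performed.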


2010 ◽  
Vol 10 (1) ◽  
pp. 183-211 ◽  
Author(s):  
S. Ceccherini ◽  
U. Cortesi ◽  
S. Del Bianco ◽  
P. Raspollini ◽  
B. Carli

Abstract. The combination of data obtained with different sensors (data fusion) is a powerful technique that can provide target products of the best quality in terms of precision and accuracy, as well as spatial and temporal coverage and resolution. In this paper we present the results of the data fusion of ozone vertical-profile measurements performed by two space-borne interferometers (IASI on METOP and MIPAS on ENVISAT), using the new measurement-space-solution method. With this method both the loss of information due to interpolation and the propagation of possible biases (caused by a priori information) are avoided. The data fusion products are characterized by means of retrieval errors, information gain, averaging kernels and number of degrees of freedom. The analysis is performed on both simulated and real measurements, and the results demonstrate and quantify the improvement of data fusion products with respect to measurements of a single instrument.
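The simplest illustration of why fusion beats a single instrument is the inverse-variance combination of two independent, unbiased measurements of the same quantity: the fused estimate always has a smaller variance than either input. This is a generic textbook sketch, not the measurement-space-solution method itself.

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted combination of two independent,
    unbiased measurements of the same quantity.
    Returns the fused value and its (reduced) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # precisions
    x = (w1 * x1 + w2 * x2) / (w1 + w2)      # precision-weighted mean
    var = 1.0 / (w1 + w2)                    # fused variance: 1/var = sum of precisions
    return x, var
```

For equal variances the fused variance is halved; the actual method generalizes this idea to full profiles while avoiding interpolation losses and a priori bias propagation.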


Author(s):  
Karem A. Sakallah

Symmetry is at once a familiar concept (we recognize it when we see it!) and a profoundly deep mathematical subject. At its most basic, a symmetry is some transformation of an object that leaves the object (or some aspect of the object) unchanged. For example, a square can be transformed in eight different ways that leave it looking exactly the same: the identity “do-nothing” transformation, 3 rotations, and 4 mirror images (or reflections). In the context of decision problems, the presence of symmetries in a problem’s search space can frustrate the hunt for a solution by forcing a search algorithm to fruitlessly explore symmetric subspaces that do not contain solutions. Recognizing that such symmetries exist, we can direct a search algorithm to look for solutions only in non-symmetric parts of the search space. In many cases, this can lead to significant pruning of the search space and yield solutions to problems which are otherwise intractable. This chapter explores the symmetries of Boolean functions, particularly the symmetries of their conjunctive normal form (CNF) representations. Specifically, it examines what those symmetries are, how to model them using the mathematical language of group theory, how to derive them from a CNF formula, and how to utilize them to speed up CNF SAT solvers.
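A toy illustration of the pruning idea (not the chapter's CNF/group-theory machinery): enumerate two-colorings of a square's four corners, but accept only the canonical (lexicographically minimal) representative of each symmetry class, so symmetric duplicates are pruned from the search. The encoding of the eight symmetries as corner permutations is this sketch's own.

```python
from itertools import product

# The 8 symmetries of a square, as permutations of its corner
# indices 0..3 listed in cyclic order: 4 rotations and 4 reflections.
SYMMETRIES = [
    (0, 1, 2, 3),  # identity
    (3, 0, 1, 2),  # rotate 90
    (2, 3, 0, 1),  # rotate 180
    (1, 2, 3, 0),  # rotate 270
    (1, 0, 3, 2),  # reflect across one edge axis
    (3, 2, 1, 0),  # reflect across the other edge axis
    (0, 3, 2, 1),  # reflect across one diagonal
    (2, 1, 0, 3),  # reflect across the other diagonal
]

def canonical(coloring):
    """Lexicographically minimal image of `coloring` under the group."""
    return min(tuple(coloring[p[i]] for i in range(4)) for p in SYMMETRIES)

def count_up_to_symmetry():
    """Keep only canonical 2-colorings; symmetric duplicates are pruned."""
    explored = 0
    solutions = []
    for coloring in product((0, 1), repeat=4):
        explored += 1
        if coloring == canonical(coloring):
            solutions.append(coloring)
    return solutions, explored
```

Of the 16 raw colorings only 6 symmetry classes remain, matching Burnside's lemma; in SAT solving the same principle lets symmetry-breaking predicates exclude all but one assignment per symmetric orbit.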


Author(s):  
Nacéra Bennacer ◽  
Guy Vidal-Naquet

This paper proposes an Ontology-driven and Community-based Web Services (OCWS) framework which aims at automating the discovery, composition and execution of web services. The purpose is to validate and execute a user’s request built from the composition of a set of OCWS descriptions and a set of user constraints. The framework clearly separates the external OCWS descriptions from the internal implementations of e-services. It identifies three levels: the knowledge level, the community level and the e-services level, and uses different participant agents deployed in a distributed architecture. First, the reasoner agent uses a description logic extended with actions to reason about: (i) the consistency of the pre-conditions and post-conditions of the OCWS descriptions and of the user constraints with the ontology semantics, and (ii) the consistency of the workflow matching assertions and the execution dependency graph. Then the execution plan model is generated automatically and run by the composer agents using the dynamic execution plan algorithm (DEPA), according to the workflow matching and the established execution order. The community composer agents invoke the appropriate e-services and ensure that the non-functional constraints are satisfied. The DEPA algorithm works dynamically, without a priori information about e-service states, and has interesting properties such as taking into account the non-determinism of e-services and reducing the search space.


2020 ◽  
Author(s):  
Fulei Ji ◽  
Wentao Zhang ◽  
Tianyou Ding

Abstract Automatic search methods have been widely used for the cryptanalysis of block ciphers, especially for the two most classic techniques—differential and linear cryptanalysis. However, automatic search methods, whether based on MILP, SMT/SAT or CP techniques, can be inefficient when the search space is too large. In this paper, we propose three new methods to improve Matsui’s branch-and-bound search algorithm, which is known as the first generic algorithm for finding the best differential and linear trails. The three methods, namely reconstructing the DDT and LAT according to weight, executing linear layer operations at minimal cost, and merging two 4-bit S-boxes into one 8-bit S-box, can efficiently speed up the search by reducing the search space as much as possible and by reducing the cost of linear layer operations. We apply our improved algorithm to DESL and GIFT, which remain hard instances for automatic search methods. As a result, we find the best differential trails for DESL (up to 14 rounds) and GIFT-128 (up to 19 rounds). The best linear trails for DESL (up to 16 rounds), GIFT-128 (up to 10 rounds) and GIFT-64 (up to 15 rounds) are also found. To the best of our knowledge, these security bounds for DESL and GIFT in the single-key scenario are given for the first time, and these are the longest exploitable (differential or linear) trails for DESL and GIFT. Furthermore, benefiting from the efficiency of the improved algorithm, our experiments demonstrate that the clustering effect of differential trails is weak for both 13-round DES and DESL.
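The S-box-merging trick can be sketched in a few lines: precompute one 256-entry table that applies a 4-bit S-box to both nibbles of a byte, so a single lookup replaces two lookups plus shift/mask glue. The 4-bit S-box values below are placeholder examples, not the actual DESL or GIFT S-boxes.

```python
# A 4-bit S-box (hypothetical example values, not GIFT's or DESL's).
SBOX4 = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
         0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]

# Merge two 4-bit S-box applications into one 8-bit table:
# entry x maps the high and low nibble of x through SBOX4 simultaneously.
SBOX8 = [(SBOX4[x >> 4] << 4) | SBOX4[x & 0xF] for x in range(256)]

def sub_bytes(state):
    """Apply the nibble S-box to every byte of `state` via the merged table."""
    return [SBOX8[b] for b in state]
```

In a branch-and-bound trail search the substitution layer is evaluated in every explored node, so halving the per-byte lookup cost directly shortens the search.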


2014 ◽  
Vol 24 (4) ◽  
pp. 901-916
Author(s):  
Zoltán Ádám Mann ◽  
Tamás Szép

Abstract Backtrack-style exhaustive search algorithms for NP-hard problems tend to have a large variance in their runtime. This is because “fortunate” branching decisions can lead to finding a solution quickly, whereas “unfortunate” decisions in another run can lead the algorithm to a region of the search space with no solutions. In the literature, frequent restarting has been suggested as a means to overcome this problem. In this paper, we propose a more sophisticated approach: a best-first search heuristic to quickly move between parts of the search space, always concentrating on the most promising region. We describe how this idea can be efficiently incorporated into a backtrack search algorithm without sacrificing optimality. Moreover, we demonstrate empirically that, for hard solvable problem instances, the new approach provides a significantly higher speed-up than frequent restarting.
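The core idea can be sketched with a priority queue over open backtrack nodes: instead of depth-first order, always expand the node that currently looks most promising, while keeping every unexpanded branch on the queue so completeness is preserved. Subset-sum is used here as a stand-in problem, and the "promise" heuristic (distance of the partial sum from the target) is this sketch's own choice.

```python
import heapq

def best_first_subset_sum(values, target):
    """Best-first exploration of the subset-sum search tree: always expand
    the open node whose partial sum is closest to the target, rather than
    backtracking depth-first."""
    # Heap entries: (|target - partial_sum|, next_index, chosen_items, partial_sum)
    heap = [(abs(target), 0, (), 0)]
    expanded = 0
    while heap:
        gap, i, chosen, s = heapq.heappop(heap)
        expanded += 1
        if s == target:
            return list(chosen), expanded
        if i == len(values):
            continue
        # Branch on values[i]: both children stay on the heap, so no part
        # of the search space is ever discarded (completeness is kept).
        heapq.heappush(heap, (abs(target - s - values[i]),
                              i + 1, chosen + (values[i],), s + values[i]))
        heapq.heappush(heap, (abs(target - s), i + 1, chosen, s))
    return None, expanded
```

Because nothing is pruned, the method remains exact; the heuristic only reorders the work, which is what lets it outperform restarts on instances where a good region exists but depth-first order reaches it late.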


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. WB19-WB35 ◽  
Author(s):  
Cyril Schamper ◽  
Fayçal Rejiba ◽  
Roger Guérin

Electromagnetic induction (EMI) methods are widely used to determine the distribution of electrical conductivity and are well adapted to the delimitation of aquifers and clayey layers, because the electromagnetic field is strongly perturbed by conductive media. The multicomponent EMI device that was used allowed the three components of the secondary magnetic field (the radial Hx, the tangential Hy, and the vertical Hz) to be measured at 10 frequencies ranging from 110 Hz to 56 kHz in a single sounding, with offsets ranging from 20 to 400 m. In a continuing endeavor to improve the reliability with which the thickness and conductivity are inverted, we focused our research on the use of components other than the vertical magnetic field Hz. Because a separate sensitivity analysis of Hx and Hz suggests that Hx is more sensitive to variations in the thickness of a near-surface conductive layer, we developed an inversion tool able to perform single-sounding and laterally constrained 1D interpretation of both components jointly, associated with an adapted random search algorithm for single-sounding processing, for which almost no a priori information is available. Considering the complementarity of the Hx and Hz components, inversion tests on clean and noisy synthetic data showed an improvement in the definition of the thickness of a near-surface conductive layer. This inversion code was applied to the karst site of the basin of Fontaine-Sous-Préaux, near Rouen (northwest France). Comparison with an electrical resistivity tomography tends to confirm the reliability of the interpretation from the EMI data with the developed inversion tool.
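A random-search inversion of the kind mentioned above can be sketched as follows. The forward model here is a purely illustrative stand-in (it is not a Maxwell-equation solver and does not compute real Hx/Hz responses); the parameter bounds and names are hypothetical. The point is the structure: with no a priori starting model, candidate models are drawn at random from a box and ranked by data misfit.

```python
import random

def forward(thickness, conductivity, offsets):
    """Stand-in forward model (illustrative only, not physics-based):
    returns one synthetic response per offset for a layer-over-halfspace
    model. A real EMI inversion would model the Hx and Hz fields."""
    return [conductivity * thickness / (1.0 + r / 100.0) for r in offsets]

def random_search(data, offsets, n_iter=2000, seed=0):
    """Pure random search over the (thickness, conductivity) box,
    usable when no a priori starting model is available."""
    rng = random.Random(seed)
    best, best_misfit = None, float("inf")
    for _ in range(n_iter):
        t = rng.uniform(1.0, 50.0)       # thickness, m (hypothetical bounds)
        c = rng.uniform(0.001, 1.0)      # conductivity, S/m (hypothetical bounds)
        pred = forward(t, c, offsets)
        misfit = sum((p - d) ** 2 for p, d in zip(pred, data))
        if misfit < best_misfit:
            best, best_misfit = (t, c), misfit
    return best, best_misfit
```

Joint inversion of two components amounts to summing the misfits of both data sets, which is what tightens the constraint on layer thickness.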


1995 ◽  
Vol 117 (1) ◽  
pp. 108-115 ◽  
Author(s):  
Yubao Chen

The problem of the high levels of uncertainty in machine diagnosis is addressed by an approach based on fuzzy logic. In this approach, multiple sensors/channels are used, and the uncertainty is treated by membership functions at different stages of the signal processing. The concepts of fuzziness, fuzzy sets, and fuzzy inference are described, particularly for the development of a practical procedure for machine diagnosis. The membership functions are established through a learning process based on test data, rather than being selected a priori. Information-gain weighting functions are also introduced in order to improve the robustness and reliability of the method. As a result, a framework for a Fuzzy Decision System (FDS) is proposed and applied to a machining process. Experimental verification achieved an optimistic success rate of 97.5 percent.
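A generic sketch of the two ingredients above, membership functions and weighted fuzzy inference, not the paper's FDS itself: each fault class aggregates the membership degrees of several sensor features, with weights standing in for the information-gain weighting functions. All feature names, fault labels, and membership parameters are made up.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(features, rules, weights):
    """Weighted fuzzy inference: each rule maps a sensor feature to a
    membership degree for a fault class; the weights emphasize the more
    discriminative sensors/channels. Returns the top fault and all scores."""
    scores = {}
    for fault, memberships in rules.items():
        num = sum(w * mf(features[name])
                  for (name, mf), w in zip(memberships, weights))
        scores[fault] = num / sum(weights)
    return max(scores, key=scores.get), scores
```

In the learned setting described by the abstract, the (a, b, c) breakpoints would be fitted from test data rather than chosen by hand.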


2005 ◽  
Vol 129 (3) ◽  
pp. 255-265 ◽  
Author(s):  
Chandankumar Aladahalli ◽  
Jonathan Cagan ◽  
Kenji Shimada

Generalized pattern search (GPS) algorithms have been used successfully to solve three-dimensional (3D) component layout problems. These algorithms use a set of patterns and successively decreasing step sizes of these patterns to explore the search space before converging to good local minima. A shortcoming of conventional GPS algorithms is their lack of recognition of the fact that patterns affect the objective function by different amounts, and hence it might be more efficient to introduce them into the search in a certain order rather than all at the beginning of the search. To address this shortcoming, the authors showed in previous work that it is more efficient to schedule patterns in decreasing order of their effect on the objective function, where the effect of a pattern was estimated by the a priori expectation of the objective function change due to that pattern. However, computing the a priori expectation is expensive, and to practically implement the scheduling of patterns, an inexpensive estimate of the effect on the objective function is necessary. This paper introduces a computationally inexpensive metric for geometric layout, called the sensitivity metric, to estimate the effect of pattern moves on the objective function. A new pattern search algorithm that uses the sensitivity metric to schedule patterns is shown to perform as well as the pattern search algorithm that used the a priori expectation of the objective function change. Though the sensitivity metric applies to the class of geometric layout or placement problems, the foundation and approach are useful for developing metrics for other optimization problems.
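A minimal sketch of pattern search with sensitivity-ordered polling: patterns are ranked by a cheap estimate of how much they change the objective at the current point (a stand-in for the paper's sensitivity metric), the step size shrinks whenever no pattern improves, and the first improving move is taken. The objective, patterns, and schedule below are all hypothetical.

```python
def move(x, p, step):
    """Apply pattern p to point x with the given step size."""
    return [xi + step * pi for xi, pi in zip(x, p)]

def pattern_search(f, x0, patterns, step=1.0, tol=1e-3):
    """Pattern search that polls patterns in decreasing order of their
    estimated effect on the objective (a cheap sensitivity ordering)."""
    x = list(x0)
    while step > tol:
        # Rank patterns by |change in f| at the current point: a crude,
        # inexpensive stand-in for the paper's sensitivity metric.
        ranked = sorted(patterns,
                        key=lambda p: -abs(f(move(x, p, step)) - f(x)))
        improved = False
        for p in ranked:
            cand = move(x, p, step)
            if f(cand) < f(x):      # accept the first improving move
                x = cand
                improved = True
                break
        if not improved:
            step *= 0.5             # no pattern improves: refine the step
    return x
```

The ranking only reorders polling, so convergence behaves like ordinary GPS; the payoff is that high-impact patterns are tried first, which is the scheduling idea the sensitivity metric makes affordable.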

