probabilistic search
Recently Published Documents

TOTAL DOCUMENTS: 77 (five years: 9)
H-INDEX: 14 (five years: 3)

2021 · Vol ahead-of-print (ahead-of-print)
Author(s): Rabin K. Jana, Dinesh K. Sharma, Subrata Kumar Mitra

Purpose
The purpose of this paper is to offer improvements in routing and collection-load decisions for a green logistics system that delivers lunch boxes.

Design/methodology/approach
A mathematical model is introduced into the literature for the 130-year-old logistics system, whose delivery accuracy exceeds the Six Sigma standard without the use of sophisticated tools. A simulated annealing (SA) approach is then used to find routing and collection-load decisions for the lunch box carriers.

Findings
The findings establish that this world-class lunch box delivery (LBD) system can be improved. The suggested improvement, in terms of reduction in distance travelled, is nearly 6%. This could be a huge relief for thousands of lunch box carriers. The uniformity in collection-load decisions suggested by the proposed approach can be especially effective for elderly lunch box carriers.

Research limitations/implications
The research provides a mathematical framework for studying an important logistics system that runs with a supreme level of service accuracy. Collecting primary data was challenging, as there is no scope for recording and maintaining data in the present logistics system. Whether the system could be replicated in another city remains a challenging question to answer.

Practical implications
Better routing and collection-load decisions can help many lunch box carriers save time and bring homogeneity of workload into the system.

Social implications
An efficient routing decision can support smoother traffic movements, and uniformity in collection load can help avoid unwanted injuries to about 5,000 lunch box carriers.

Originality/value
The originality of this paper lies in the proposed mathematical model and in finding routing and collection-load decisions using a nature-inspired probabilistic search technique. The LBD system of Mumbai has never before been studied mathematically; the study is the first of its kind.
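The paper's routing model is not reproduced in the abstract, but the accept/reject mechanics of simulated annealing on a routing problem can be sketched as follows. This is a minimal illustration rather than the authors' formulation: the 2-opt move, geometric cooling schedule, and all parameter values are assumptions.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of the closed tour through all collection points."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, n_iter=20000, t0=10.0, cooling=0.9995, seed=0):
    """Minimise route length with 2-opt moves accepted by the Metropolis rule:
    better tours are always kept; worse tours are kept with probability
    exp(-delta / temperature), which shrinks as the temperature cools."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(n_iter):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse one segment
        cand_len = tour_length(cand, dist)
        if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / t):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling  # geometric cooling
    return best, best_len
```

A collection-load constraint would enter through the objective (e.g. a penalty for unbalanced loads), which is where the uniformity result in the Findings would come from.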


2021 · Vol 31 (3) · pp. 1-22
Author(s): Gidon Ernst, Sean Sedwards, Zhenya Zhang, Ichiro Hasuo

We present and analyse an algorithm that quickly finds falsifying inputs for hybrid systems. Our method is based on a probabilistically directed tree search whose distribution adapts to consider an increasingly fine-grained discretization of the input space. In experiments with standard benchmarks, our algorithm shows performance comparable to or better than existing techniques, yet it does not build an explicit model of the system. Instead, at each decision point within a single trial, it makes an uninformed probabilistic choice between simple strategies for extending the input signal by means of exploration or exploitation. Key to our approach is the way the input signal space is decomposed into levels, such that coarse segments are more probable than fine segments. We perform experiments to demonstrate how and why our approach works, finding that a fully randomized exploration strategy performs as well as our original algorithm that exploits robustness. We propose this strategy as a new baseline for falsification and conclude that more discriminative benchmarks are required.
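The key idea of biasing toward coarse segments can be sketched with a geometric weighting over discretization levels. This is an illustrative reading of the abstract, not the authors' algorithm: the decay factor, the number of levels, and the halving of segment length per level are assumptions.

```python
import random

def sample_level(n_levels, rng, decay=0.5):
    """Draw a discretization level; coarse levels (small k) get geometrically
    higher probability, so long input segments are favoured over short ones."""
    weights = [decay ** k for k in range(n_levels)]
    r = rng.random() * sum(weights)
    for k, w in enumerate(weights):
        r -= w
        if r <= 0:
            return k
    return n_levels - 1

def extend_signal(signal, horizon, u_min, u_max, rng, n_levels=4):
    """Append one piecewise-constant segment to the input signal.
    Level k halves the segment duration k times; the segment value is drawn
    uniformly (a pure-exploration choice, per the randomized baseline)."""
    k = sample_level(n_levels, rng)
    duration = horizon / (2 ** k)
    value = rng.uniform(u_min, u_max)
    signal.append((duration, value))
    return signal
```

An exploitation variant would instead pick the value that minimised robustness on earlier trials; the abstract's finding is that the uniform choice above already performs comparably.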


2020 · Vol 2020 · pp. 1-15
Author(s): Liang Yu, Da Lin

In this paper, a sequential decision framework based on Bayesian search is proposed to solve the problem of using an autonomous system to search for a missing target in an unknown environment. In this task, search cost and search efficiency are two competing requirements because both are closely tied to the search task. In particular, in a real search task the sensor carried by the searcher is imperfect, so an effective search strategy is needed to guide the search agent, and the decision-making method is crucial. If the search agent fully trusts the sensor's feedback, the search ends the first time the target is "detected", which means the agent must accept the risk of finding a wrong target. Conversely, if the search agent does not trust the sensor's feedback, it will most likely miss the real target, wasting considerable search resources and time. Building on existing work, this paper proposes two search strategies and an improved algorithm. Compared with other search methods, the proposed strategies greatly improve the efficiency of unmanned search. Finally, numerical simulations demonstrate the effectiveness of the search strategies.
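The trade-off the abstract describes, trusting versus distrusting an imperfect sensor, is exactly what a Bayesian belief update resolves: instead of stopping at the first "detection", the agent accumulates evidence until the posterior is high enough. A minimal single-cell update, assuming a simple detection/false-alarm sensor model (the function and parameter names are illustrative, not from the paper):

```python
def update_belief(belief, cell, detected, p_d, p_f):
    """Bayes update of the target-location belief after sensing one cell.

    belief   -- prior probability that the target is in each cell
    cell     -- index of the cell just sensed
    detected -- whether the sensor reported a detection
    p_d      -- P(detection | target is in the sensed cell)
    p_f      -- P(detection | target is elsewhere), i.e. false-alarm rate
    """
    new = []
    for i, b in enumerate(belief):
        if i == cell:
            like = p_d if detected else (1.0 - p_d)
        else:
            like = p_f if detected else (1.0 - p_f)
        new.append(b * like)
    total = sum(new)
    return [b / total for b in new]
```

A search loop would repeatedly sense the most probable cell and declare the target found only once the posterior for some cell exceeds a threshold, so a single false alarm cannot end the search.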


2020 · Vol 69 (5) · pp. 897-912
Author(s): Joseph N Keating, Robert S Sansom, Mark D Sutton, Christopher G Knight, Russell J Garwood

Abstract Evolutionary inferences require reliable phylogenies. Morphological data have traditionally been analyzed using maximum parsimony, but recent simulation studies have suggested that Bayesian analyses yield more accurate trees. This debate is ongoing, in part, because of ambiguity over modes of morphological evolution and a lack of appropriate models. Here, we investigate phylogenetic methods using two novel simulation models—one in which morphological characters evolve stochastically along lineages and another in which individuals undergo selection. Both models generate character data and lineage splitting simultaneously: the resulting trees are an emergent property, rather than a fixed parameter. Standard consensus methods for Bayesian searches (Mki) yield fewer incorrect nodes and quartets than the standard consensus trees recovered using equal weighting and implied weighting parsimony searches. Distances between the pool of derived trees (most parsimonious or posterior distribution) and the true trees—measured using Robinson-Foulds (RF), subtree prune and regraft (SPR), and tree bisection reconnection (TBR) metrics—demonstrate that this is related to the search strategy and consensus method of each technique. The amount and structure of homoplasy in character data differ between models. Morphological coherence, which has previously not been considered in this context, proves to be a more important factor for phylogenetic accuracy than homoplasy. Selection-based models exhibit relatively lower homoplasy, lower morphological coherence, and higher inaccuracy in inferred trees. Selection is a dominant driver of morphological evolution, but we demonstrate that it has a confounding effect on numerous character properties which are fundamental to phylogenetic inference. We suggest that the current debate should move beyond considerations of parsimony versus Bayesian, toward identifying modes of morphological evolution and using these to build models for probabilistic search methods. [Bayesian; evolution; morphology; parsimony; phylogenetics; selection; simulation.]


2019 · Vol 51 (1) · pp. 90-104
Author(s): Hamdy A. El-Ghandour, Emad Elbeltagi

Abstract The increased pumping of freshwater from coastal aquifers, to meet growing demands, causes an environmental problem called saltwater intrusion. Consequently, proper management schemes are necessary to tackle this situation and permit the optimal development of coastal groundwater basins. In this research, a probabilistic search algorithm, namely Probabilistic Global Search Lausanne (PGSL), is used to calculate optimal pumping rates for an unconfined coastal aquifer. The results of using PGSL are compared with a stochastic search optimization technique, the Shuffled Frog Leaping Algorithm (SFLA). The finite element method is applied to simulate the hydraulic response of the steady-state homogeneous aquifer. The lower-upper (LU) decomposition method is adopted to invert the conductance matrix, which noticeably decreases the computation time. The results of both PGSL and SFLA are verified through application to the aquifer system underlying the City of Miami Beach in the north of Spain. Multiple independent optimization runs are carried out to provide more insightful comparison outcomes, and a statistical analysis is performed to assess the performance of each algorithm. The two optimization algorithms are then applied to the Quaternary aquifer of the El-Arish Rafah area, Egypt. The results show that both algorithms can effectively obtain nearly global solutions compared with previously published results.
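PGSL works by maintaining probability density functions over the search space and progressively concentrating sampling probability around promising regions. The published algorithm is considerably more elaborate; as a much-simplified, one-dimensional illustration of that focusing idea only (all names and parameters here are assumptions, not PGSL itself):

```python
import random

def focused_search(f, lo, hi, n_cycles=30, n_samples=20, shrink=0.7, seed=0):
    """Minimise f on [lo, hi] by repeated uniform sampling, shrinking the
    sampling interval around the incumbent best point each cycle, a crude
    stand-in for PGSL's probability-density focusing step."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    width = hi - lo
    for _ in range(n_cycles):
        for _ in range(n_samples):
            x = min(hi, max(lo, best_x + rng.uniform(-width / 2, width / 2)))
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        width *= shrink  # concentrate sampling probability near the best point
    return best_x, best_y
```

In the pumping-rate application, f would be the (negated) benefit of a set of pumping rates with penalties for saltwater-intrusion constraints, evaluated by the finite element simulation.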


Mathematics · 2019 · Vol 7 (11) · pp. 1051
Author(s): Valentino Santucci, Alfredo Milani, Fabio Caraffini

This article presents a novel hybrid classification paradigm for predicting medical diagnoses and prognoses. The core mechanism of the proposed method relies on a centroid classification algorithm whose logic is exploited to formulate the classification task as a real-valued optimisation problem. A novel metaheuristic combining the algorithmic structure of Swarm Intelligence optimisers with the probabilistic search models of Estimation of Distribution Algorithms is designed to optimise this problem, leading to high-accuracy predictions. The method is tested on 11 medical datasets and compared against 14 selected classification algorithms. Results show that the proposed approach is competitive with, and on several occasions superior to, the state of the art.
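The "probabilistic search model" half of that hybrid can be illustrated with a plain univariate Gaussian Estimation of Distribution Algorithm: sample a population from per-dimension Gaussians, keep the elite, and refit the Gaussians to the elite. This is a generic EDA sketch under assumed parameters, not the authors' metaheuristic (which additionally incorporates swarm-style structure).

```python
import random
import statistics

def gaussian_eda(f, dim, n_pop=40, n_elite=10, n_gen=50, seed=0):
    """Minimise f over R^dim with a univariate Gaussian EDA:
    the search distribution (one Gaussian per dimension) is repeatedly
    re-estimated from the best fraction of each sampled population."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sigma = [2.0] * dim
    best = None
    for _ in range(n_gen):
        pop = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)] for _ in range(n_pop)]
        pop.sort(key=f)
        if best is None or f(pop[0]) < f(best):
            best = pop[0]
        elite = pop[:n_elite]
        for d in range(dim):
            col = [x[d] for x in elite]
            mu[d] = statistics.fmean(col)
            sigma[d] = max(statistics.pstdev(col), 1e-3)  # floor avoids premature collapse
    return best
```

In the paper's setting, the decision vector would encode the class centroids and f would be the (negated) classification accuracy of the resulting centroid classifier on the training data.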


Author(s): Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou, Marios M. Polycarpou
