Optimal search for a randomly moving object

1986 ◽  
Vol 23 (3) ◽  
pp. 708-717 ◽  
Author(s):  
R. R. Weber

It is desired to minimize the expected cost of finding an object which moves back and forth between two locations according to an unobservable Markov process. When the object is in location i (i = 1, 2) it resides there for a time which is exponentially distributed with parameter λi and then moves to the other location. The location of the object is not known, and at each instant until it is found exactly one of the two locations must be searched. Searching location i for time δ costs ciδ and, conditional on the object being in location i, there is a probability αiδ + o(δ) that this search will find it. The probability that the object starts in location 1 is known to be p1(0). The location to be searched at time t is to be chosen on the basis of the value of p1(t), the probability that the object is in location 1 given that it has not yet been discovered. We prove that there exists a threshold Π such that the optimal policy may be described as: search location 1 if and only if the probability that the object is in location 1 is greater than Π. Expressions for the threshold Π are given in terms of the parameters of the model.
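
As a rough illustration of the threshold rule, here is a simulation sketch (not taken from the paper): the posterior p1(t) is integrated by Euler steps, location 1 is searched whenever p1(t) exceeds a trial threshold Π, and the expected cost is estimated by averaging many runs. All parameter values, and the threshold itself, are illustrative assumptions rather than the paper's closed-form expressions.

```python
import random

# Illustrative parameters (not from the paper): rates, costs, and a trial threshold.
lam = {1: 0.3, 2: 0.5}     # switch rates lambda_i out of location i
alpha = {1: 1.0, 2: 0.8}   # detection rates alpha_i
cost = {1: 1.0, 2: 1.2}    # search costs c_i per unit time
PI = 0.45                  # assumed threshold (the paper derives closed-form expressions)
dt = 1e-3

def run_once(p1=0.5):
    """Simulate the hidden object and the posterior p1(t); return the cost paid until detection."""
    loc = 1 if random.random() < p1 else 2          # true (hidden) location
    total = 0.0
    while True:
        s = 1 if p1 > PI else 2                     # threshold policy
        total += cost[s] * dt
        if loc == s and random.random() < alpha[s] * dt:
            return total                            # object found
        if random.random() < lam[loc] * dt:         # hidden object switches location
            loc = 3 - loc
        # Bayes update of p1, conditional on no detection in this step
        drift = lam[2] * (1 - p1) - lam[1] * p1
        drift += -alpha[1] * p1 * (1 - p1) if s == 1 else alpha[2] * p1 * (1 - p1)
        p1 = min(1.0, max(0.0, p1 + drift * dt))

runs = 1000
print(sum(run_once() for _ in range(runs)) / runs)  # estimated expected search cost
```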


1994 ◽  
Vol 31 (2) ◽  
pp. 438-457 ◽  
Author(s):  
David Assaf ◽  
Ariela Sharlin-Bilitzky

An object is hidden in one of two boxes and occasionally moves between the boxes in accordance with some specified continuous-time Markov process. The objective is to find the object with a minimal expected cost. In this paper it is assumed that search efforts are unlimited. In addition to the search costs, the 'real time' until the object is found is also taken into account in the cost structure. Our main results are that the optimal policy may consist of five regions and that the controls applied should be of the extreme 0 or ∞ type. The resulting expected cost compares favorably with the expected cost under bounded controls studied previously in the search literature.


This paper considers production and the availability of the machinery used for production. A machine with two components is considered: production is at full capacity when both components function well. There is a chance that the whole machine breaks down because both components fail, in which case production comes to a standstill; this is the worst crisis. Alternatively, one component may fail while the machine continues to function at reduced efficiency; production may continue, but if the other component also fails production stops completely and the situation is critical. When the machine is in the one-component-failure state, the failed part may be repaired and the machine restored to full efficiency; when both components fail, the machine must be renewed as a package before production restarts. Under these conditions the steady-state probabilities, the rate of crisis, and the expected cost of production are found.
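
As a hedged illustration of how such steady-state quantities can be computed, the sketch below sets up the generator of a three-state continuous-time Markov chain (both components up, one failed, both failed) with hypothetical failure, repair and renewal rates, solves for the stationary distribution, and reports the rate of entering the crisis state. The rates and state labels are assumptions for illustration, not the paper's model data.

```python
import numpy as np

# Hypothetical rates (the paper's actual parameters are not given here):
lam = 0.1    # failure rate of each component
mu = 0.5     # repair rate of a single failed component
beta = 0.25  # renewal rate when both components have failed

# States: 0 = both up (full production), 1 = one failed (reduced), 2 = both failed (crisis)
Q = np.array([
    [-2 * lam,      2 * lam,   0.0],
    [      mu, -(mu + lam),    lam],
    [    beta,          0.0, -beta],
])

# Stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi.round(4))
# Crisis state is entered only from the one-failed state, at rate lam.
print("long-run rate of crisis:", (pi[1] * lam).round(4))
```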


1995 ◽  
Vol 9 (2) ◽  
pp. 159-182 ◽  
Author(s):  
I. M. MacPhee ◽  
B. P. Jordan

Consider the problem of searching for a leprechaun that moves randomly between two sites. The movement is modelled with a two-state Markov chain. One of the sites is searched at each time t = 1, 2, …, until the leprechaun is found. Associated with each search of site i is an overlook probability αi and a cost Ci. Our aim is to determine the policy that will find the leprechaun with the minimal average cost. Let p denote the probability that the leprechaun is at site 1. Ross conjectured that an optimal policy can be defined in terms of a threshold probability P* such that site 1 is searched if and only if p ≥ P*. We show this conjecture to be correct (i) when α1 = α2 and C1 = C2, (ii) for general Ci when the overlook probabilities αi are small, and (iii) for general αi and Ci for a large range of transition laws for the movement. We also derive some properties of the optimal policy for the problem on n sites in the no-overlook case and for the case where each site has the same αi and Ci.
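
A minimal numerical sketch of this threshold structure, assuming illustrative overlook probabilities, costs and a movement law (the matrix M below is an assumption): value iteration on a discretized belief grid approximates the minimal expected cost and reads off the smallest belief at which searching site 1 becomes optimal.

```python
import numpy as np

# Hypothetical parameters: overlook probabilities, search costs, movement law.
alpha = np.array([0.3, 0.4])       # overlook probabilities alpha_1, alpha_2
C = np.array([1.0, 1.5])           # search costs C_1, C_2
M = np.array([[0.8, 0.2],          # M[i, j] = P(object moves to site j+1 | at site i+1)
              [0.3, 0.7]])

p = np.linspace(0.0, 1.0, 1001)    # belief grid for P(object at site 1)

# Unsuccessful search of site 1: miss probability, then Bayes update, then the object moves.
miss1 = p * alpha[0] + (1 - p)
q1 = p * alpha[0] / miss1
next1 = q1 * M[0, 0] + (1 - q1) * M[1, 0]

# Unsuccessful search of site 2.
miss2 = p + (1 - p) * alpha[1]
q2 = p / miss2
next2 = q2 * M[0, 0] + (1 - q2) * M[1, 0]

V = np.zeros_like(p)
for _ in range(5000):              # value iteration for the expected total cost
    v1 = C[0] + miss1 * np.interp(next1, p, V)
    v2 = C[1] + miss2 * np.interp(next2, p, V)
    V_new = np.minimum(v1, v2)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Approximate threshold: smallest belief at which searching site 1 is (weakly) optimal.
v1 = C[0] + miss1 * np.interp(next1, p, V)
v2 = C[1] + miss2 * np.interp(next2, p, V)
print(f"approximate threshold P* ≈ {p[np.argmax(v1 <= v2)]:.3f}")
```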


2021 ◽  
Author(s):  
Rafael P. Greminger

This paper studies a search problem in which a consumer is initially aware of only a few products. At every point in time, the consumer then decides between searching among alternatives of which he is already aware and discovering more products. I show that the optimal policy for this search and discovery problem is fully characterized by tractable reservation values. Moreover, I prove that a predetermined index fully specifies the purchase decision of a consumer following the optimal search policy. Finally, a comparison highlights differences to classical random and directed search. This paper was accepted by Dmitri Kuksov, marketing.
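
The paper's reservation values are not reproduced here; as a loosely related, purely illustrative calculation, the sketch below computes the classical Pandora's-box reservation value, i.e. the z solving E[(X − z)+] = c, assuming a Uniform(0, 1) prize distribution and a search cost c, by bisection.

```python
# Illustrative only: classical Pandora's-box reservation value for an assumed
# Uniform(0, 1) prize distribution; this is not the paper's own construction.

def expected_gain(z):
    # E[(X - z)^+] for X ~ Uniform(0, 1), in closed form: (1 - z)^2 / 2 for z in [0, 1].
    return 0.0 if z >= 1 else (1 - z) ** 2 / 2

def reservation_value(c, lo=0.0, hi=1.0, tol=1e-10):
    """Bisection for the z with expected_gain(z) == c (search cost c in (0, 0.5))."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_gain(mid) > c:
            lo = mid          # gain still exceeds the cost: raise z
        else:
            hi = mid
    return (lo + hi) / 2

print(reservation_value(0.02))   # closed form: 1 - sqrt(2 * 0.02) = 0.8
```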


2015 ◽  
Vol 25 (1) ◽  
Author(s):  
Petr V. Shnurkov ◽  
Alexey V. Ivanov

We consider a discrete stochastic model of inventory control based on a controlled semi-Markov process. Probabilistic characteristics of the semi-Markov process are found, along with characteristics of a stationary cost functional connected with this process. It is proved that an optimal policy of inventory control is a deterministic one. An explicit analytical representation of the stationary functional characterising the control quality is obtained. The optimal control problem is reduced to the solution of an extremal problem for a multivariate function.
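
As an illustration of a stationary cost functional of the ratio type that such models lead to, the sketch below evaluates the standard semi-Markov (renewal-reward) average cost: the stationary distribution of the embedded chain weighted by expected per-visit costs, divided by the same distribution weighted by expected sojourn times. All numerical data are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical data for a three-state semi-Markov model (not from the paper):
P = np.array([[0.0, 0.7, 0.3],   # embedded-chain transition probabilities
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
tau = np.array([2.0, 1.0, 3.0])  # expected sojourn time in each state
c = np.array([5.0, 1.0, 8.0])    # expected cost accumulated per visit to each state

# Stationary distribution of the embedded chain: pi = pi P, sum(pi) = 1.
A = np.vstack([(P - np.eye(3)).T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Long-run average cost per unit time (renewal-reward ratio for a semi-Markov process).
print(f"stationary average cost: {pi @ c / (pi @ tau):.4f}")
```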


1965 ◽  
Vol 17 (2) ◽  
pp. 97-109 ◽  
Author(s):  
G. C. Grindley ◽  
Valerie Townsend

Movement in a part of either of two binocular fields can, under some conditions, produce temporary obliteration of the corresponding part of the other field. This paper is a mainly qualitative study of this rather surprising phenomenon. The effect is found to increase from the fovea to the periphery, to be greatest at a velocity of about 20° visual angle per sec. and to vary with the orientation of the fixation point in the visual field. Some further lines of research designed to elucidate the relation of the effect described here to certain other visual phenomena are suggested.


2003 ◽  
Vol 13 (06) ◽  
pp. 875-892 ◽  
Author(s):  
Jacek Banasiak

The paper deals with an analysis of well-posedness of the Boltzmann-like semiconductor equation with unbounded collision frequency, introduced recently by Majorana and Milazzo (Ref. 17). The equation is derived by writing the balance of the electrons lost and gained at each energy level due to scattering on the crystalline lattice of the semiconductor. As the total amount of electrons is expected to be constant, the process can be viewed as a Markov process, and from the functional analytic point of view it fits into the general theory of substochastic semigroups (Refs. 5, 26). In this paper we present two methods of solving the evolution equation describing this process: one is a generalization of the approach of Reuter and Ledermann (Ref. 23) to solving differential equations governing Markov processes with denumerably many states, while the other is based on the Kato–Voigt perturbation technique for substochastic semigroups (Refs. 15, 26, 2, 3, 5). The combination of these two techniques is a powerful tool yielding strong results on the existence and uniqueness of conservative solutions. It is also shown how the solution method employed in Ref. 17 fits into the theory developed in this paper.


2011 ◽  
Vol 48 (02) ◽  
pp. 322-332 ◽  
Author(s):  
Amine Asselah ◽  
Pablo A. Ferrari ◽  
Pablo Groisman

Consider a continuous-time Markov process with transition rates matrix Q in the state space Λ ∪ {0}. In the associated Fleming-Viot process, N particles evolve independently in Λ with transition rates matrix Q until one of them attempts to jump to state 0. At this moment the particle jumps to one of the positions of the other particles, chosen uniformly at random. When Λ is finite, we show that the empirical distribution of the particles at a fixed time converges as N → ∞ to the distribution of a single particle at the same time conditioned on not touching {0}. Furthermore, the empirical profile of the unique invariant measure for the Fleming-Viot process with N particles converges as N → ∞ to the unique quasistationary distribution of the one-particle motion. A key element of the approach is to show that the two-particle correlations are of order 1/N.
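
A Monte Carlo sketch of the particle system just described, for an illustrative birth-death chain on Λ = {1, …, 5} with killing (a jump to 0) from state 1: whenever a particle attempts to jump to 0 it is relocated to the position of a uniformly chosen other particle, and the empirical distribution at a large time approximates the quasistationary distribution. The chain, its rates, N and T are assumptions, not values from the paper.

```python
import random
from collections import Counter

L = 5  # Λ = {1, ..., 5}; state 0 is absorbing and reachable only from state 1

def moves(x):
    """(target, rate) pairs from state x; a target of 0 is an attempted absorption."""
    out = [(x - 1, 1.0)]             # x - 1 == 0 is the killing move from state 1
    if x < L:
        out.append((x + 1, 1.0))
    return out

def fleming_viot(N=300, T=50.0):
    pos = [random.randint(1, L) for _ in range(N)]
    rate = [sum(r for _, r in moves(x)) for x in pos]
    t = 0.0
    while True:
        t += random.expovariate(sum(rate))
        if t > T:
            return Counter(pos)
        i = random.choices(range(N), weights=rate)[0]   # particle that jumps next
        targets = moves(pos[i])
        new = random.choices([s for s, _ in targets], weights=[r for _, r in targets])[0]
        if new == 0:
            # attempted jump to 0: relocate onto a uniformly chosen other particle
            new = pos[random.choice([k for k in range(N) if k != i])]
        pos[i] = new
        rate[i] = sum(r for _, r in moves(new))

emp = fleming_viot()
n = sum(emp.values())
print({x: round(emp[x] / n, 3) for x in range(1, L + 1)})  # ≈ quasistationary distribution
```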

