Bifurcation and Sunspots in Continuous Time Optimal Model with Externalities

Author(s): Beatrice Venturi ◽ Alessandro Pirisinu
2008 ◽ Vol 37 (2) ◽ pp. 321-333
Author(s): Hippolyte d’Albis ◽ Pascal Gourdel ◽ Cuong Le Van

Automatica ◽ 2017 ◽ Vol 81 ◽ pp. 297-304
Author(s): Timm Faulwasser ◽ Milan Korda ◽ Colin N. Jones ◽ Dominique Bonvin

2014 ◽ Vol 21 (3) ◽ pp. 237-261
Author(s): Hisashi Nakamura ◽ Koichiro Takaoka

2021
Author(s): Bin Hu ◽ Zhankun Sun

Inspired by self-replicating three-dimensional printers and innovative agricultural and husbandry goods, we study optimal production and sales policies for a manufacturer of self-replicating innovative goods, with a focus on the unique “keep-or-sell” trade-off: whether a newly produced unit should be sold to satisfy demand and stimulate future demand, or added to inventory to increase production capacity. We adopt a continuous-time optimal control framework and marry a self-replication model on the production side to the canonical innovation-diffusion model on the demand side. By analyzing the model, we identify a condition that separates Strong and Weak Replicability regimes, in which production and sales, respectively, take priority, and we fully characterize the distinct optimal policies of each regime. These insights prove robust and useful in several extensions, including backlogged demand, liquidity constraints, stochastic innovation diffusion, the launch-inventory decision, and exogenous demand. We also find that social marketing strategies are particularly well suited to self-replicating innovative goods under Strong Replicability. This paper was accepted by Victor Martínez de Albéniz, operations management.
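As a rough illustration of the keep-or-sell trade-off described above, the following sketch couples a Bass-type diffusion demand process with self-replicating production and compares two heuristic policies. The parameter values, the replication rate r, and the policy rules are hypothetical choices for illustration only; the paper itself characterizes the optimal policies analytically within each regime.

```python
# Toy simulation of the keep-or-sell trade-off (all parameters hypothetical).
# Production side: every retained unit replicates at rate r (self-replication).
# Demand side: Bass-type innovation diffusion with innovation rate p,
# imitation rate q, and market size M.

def simulate(keep_fraction, T=10.0, dt=0.01, r=0.5, p=0.03, q=0.4,
             M=1000.0, price=1.0, K0=1.0):
    """Euler simulation; keep_fraction(t, K, N) returns the share of newly
    produced units retained as capacity rather than offered for sale."""
    K, N, revenue, t = K0, 0.0, 0.0, 0.0   # capacity, adopters, revenue, time
    while t < T:
        demand = (p + q * N / M) * (M - N) * dt      # Bass demand this step
        produced = r * K * dt                        # new units this step
        offered = (1.0 - keep_fraction(t, K, N)) * produced
        sold = min(offered, demand, M - N)           # cannot sell beyond demand
        K += produced - sold                         # unsold units add capacity
        N += sold
        revenue += price * sold
        t += dt
    return revenue

# Two heuristic policies: sell whenever demand allows, or build capacity first.
sell_first = lambda t, K, N: 0.0
keep_then_sell = lambda t, K, N: 1.0 if t < 4.0 else 0.0

print("sell-first revenue:    ", round(simulate(sell_first), 1))
print("keep-then-sell revenue:", round(simulate(keep_then_sell), 1))
```

Depending on how the replication rate compares with the diffusion parameters, one or the other heuristic dominates, which qualitatively mirrors the Strong versus Weak Replicability distinction.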


1989 ◽ Vol 26 (04) ◽ pp. 695-706
Author(s): Gerold Alsmeyer ◽ Albrecht Irle

Consider a population of distinct species $S_j$, $j \in J$, members of which are selected at different time points $T_1, T_2, \dots$, one at each time point. Assume linear costs per unit of time and that a reward is earned at each discovery epoch of a new species. We treat the problem of finding a selection rule which maximizes the expected payoff. As the times between successive selections are supposed to be continuous random variables, we are dealing with a continuous-time optimal stopping problem which is the natural generalization of the one Rasmussen and Starr (1979) have investigated, namely the corresponding problem with fixed times between successive selections. However, in contrast to their discrete-time setting, the derivation of an optimal strategy appears to be much harder in our model, as generally we are no longer in the monotone case. This note gives a general point process formulation for this problem, leading in particular to an equivalent stopping problem via stochastic intensities which is easier to handle. Then we present a formal derivation of the optimal stopping time under the stronger assumption of i.i.d. pairs $(X_1, A_1), (X_2, A_2), \dots$, where $X_n$ gives the label ($j$ for $S_j$) of the species selected at $T_n$ and $A_n$ denotes the time between the $n$th and $(n-1)$th selections, i.e. $A_n = T_n - T_{n-1}$. In the case where even $X_n$ and $A_n$ are independent and $A_n$ has an IFR (increasing failure rate) distribution, an explicit solution for the optimal strategy is derived as a simple consequence.
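To make the one-step look-ahead logic concrete, here is a small Monte Carlo sketch of the simplest symmetric special case: finitely many equally likely species, i.i.d. exponential inter-selection times (a boundary case of the IFR class), unit reward per new species, and a constant cost rate. The value J = 50, the rates, and the myopic stopping rule are illustrative assumptions, not the paper's general construction.

```python
import random

def average_payoff(J=50, reward=1.0, cost_rate=0.1, mean_gap=1.0,
                   n_runs=2000, seed=0):
    """Monte Carlo payoff of the myopic (one-step look-ahead) rule in a toy
    symmetric setting: J equally likely species, exponential gaps between
    selections. Stop once the expected reward of one more selection,
    reward * (J - k) / J with k species already seen, no longer covers the
    expected waiting cost cost_rate * mean_gap."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        seen, payoff = set(), 0.0
        while reward * (J - len(seen)) / J > cost_rate * mean_gap:
            payoff -= cost_rate * rng.expovariate(1.0 / mean_gap)  # waiting cost
            label = rng.randrange(J)                               # species drawn
            if label not in seen:
                seen.add(label)
                payoff += reward                                   # new discovery
        total += payoff
    return total / n_runs

print("average payoff under the myopic rule:", round(average_payoff(), 2))
```

The simulation is only a heuristic check of the threshold idea; the paper's point-process formulation handles the general dependent case, where the myopic rule need not be optimal.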


1987 ◽ Vol 24 (03) ◽ pp. 644-656
Author(s): Frederick J. Beutler ◽ Keith W. Ross

Uniformization permits the replacement of a semi-Markov decision process (SMDP) by a Markov chain exhibiting the same average rewards for simple (non-randomized) policies. It is shown that various anomalies may occur, especially for stationary (randomized) policies; uniformization introduces virtual jumps, with concomitant action changes, that are not present in the original process. Since these lead to discrepancies in the average rewards for stationary policies, uniformization can be accepted as valid only for simple policies. We generalize uniformization to yield consistent results for stationary policies as well. These results are applied to constrained optimization of SMDPs, in which stationary (randomized) policies arise naturally. The structure of optimal constrained SMDP policies can then be elucidated by studying the corresponding controlled Markov chains. Moreover, constrained SMDP optimal-policy computations can be more easily implemented in discrete time, the generalized uniformization being employed to relate discrete- and continuous-time optimal constrained policies.
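A minimal numerical sketch of the basic uniformization step for a single fixed simple policy, under which the controlled process reduces to a continuous-time Markov chain: choose a rate Λ at least as large as every exit rate, form P = I + Q/Λ, and check that the long-run average reward per unit time is unchanged. The 3-state generator and reward rates below are hypothetical, and the added self-loops of P correspond to the "virtual jumps" mentioned in the abstract.

```python
import numpy as np

# Hypothetical 3-state CTMC: generator Q (rows sum to zero) and reward rate
# rho[i] earned per unit of time spent in state i.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])
rho = np.array([4.0, 1.0, 2.5])

Lam = 1.1 * np.max(-np.diag(Q))     # uniformization rate >= every exit rate
P = np.eye(3) + Q / Lam             # uniformized DTMC; diagonal entries are
                                    # the probabilities of "virtual jumps"

def stationary_dtmc(P):
    """Stationary distribution of P via the left eigenvector for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def stationary_ctmc(Q):
    """Stationary distribution of Q: solve pi Q = 0 with entries summing to 1."""
    A = np.vstack([Q.T, np.ones(Q.shape[0])])
    b = np.append(np.zeros(Q.shape[0]), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# The two chains share the same stationary distribution, so the average reward
# per unit time coincides, which is the property that holds for simple policies.
print("uniformized chain:", stationary_dtmc(P) @ rho)
print("original CTMC:    ", stationary_ctmc(Q) @ rho)
```

For a stationary randomized policy the action would be re-drawn at every jump of the uniformized chain, including the virtual self-loops, which is the source of the anomalies the paper resolves.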

