base stock
Recently Published Documents

TOTAL DOCUMENTS: 277 (five years: 44)
H-INDEX: 28 (five years: 3)

2021 · Vol 16 (4) · pp. 473-484
Author(s): A.S. Xanthopoulos, D.E. Koulouriotis

Pull production control strategies coordinate manufacturing operations based on actual demand. To date, relevant publications have mostly examined manufacturing systems that produce a single type of product. In this research, we examine the CONWIP, Base Stock, and CONWIP/Kanban Hybrid pull strategies in multi-product manufacturing systems, in which several types of products are manufactured using the same resources. We develop queueing network models of multi-stage, multi-product manufacturing systems operating under the three aforementioned pull control strategies. Simulation models of the alternative production systems are implemented using open-source software. A comparative evaluation of CONWIP, Base Stock, and CONWIP/Kanban Hybrid in multi-product manufacturing is carried out in a series of simulation experiments with varying demand arrival rates, setup times, and control parameters. The control strategies are compared on the average wait time of backordered demand, the average finished-products inventory, and the average length of the backorder queues. The Base Stock strategy excels when the manufacturing system is subjected to high demand arrival rates. The CONWIP strategy consistently produced the highest levels of finished-goods inventory. The CONWIP/Kanban Hybrid strategy is significantly affected by the workload imposed on the system.
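As a rough illustration of how the three pull strategies differ, the sketch below shows the job-release authorization logic for each on a serial line. This is illustrative Python, not the authors' queueing network model; all names, containers, and parameters are assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Params:
    base_stock: List[int]  # target output-store level S_i per stage (Base Stock)
    kanban: List[int]      # per-stage card limits (Hybrid)
    conwip_cap: int        # line-wide card count (CONWIP / Hybrid)

@dataclass
class State:
    inventory_position: List[int]  # on hand + on order - backorders, per stage
    wip: List[int]                 # jobs in process at each stage
    total_wip: int                 # jobs anywhere on the line

def can_release(strategy: str, stage: int, s: State, p: Params) -> bool:
    """May `stage` start a new job right now under `strategy`?"""
    if strategy == "base_stock":
        # Demand is broadcast to every stage; each stage replenishes its
        # output store up to its own target S_i.
        return s.inventory_position[stage] < p.base_stock[stage]
    if strategy == "conwip":
        # A single card loop caps total WIP; only line entry (stage 0) is gated.
        return stage > 0 or s.total_wip < p.conwip_cap
    if strategy == "hybrid":
        # CONWIP cap on the whole line plus a kanban ceiling per stage.
        entry_ok = stage > 0 or s.total_wip < p.conwip_cap
        return entry_ok and s.wip[stage] < p.kanban[stage]
    raise ValueError(f"unknown strategy: {strategy}")
```

Under Base Stock every stage reacts to customer demand directly, while CONWIP gates only the entry of the line, which is consistent with the higher finished-goods inventories reported above.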


Algorithms · 2021 · Vol 14 (8) · pp. 240
Author(s): Zhandos Kegenbekov, Ilya Jackson

Adaptive and highly synchronized supply chains can avoid a cascading rise-and-fall inventory dynamic and mitigate ripple effects caused by operational failures. This paper aims to demonstrate how a deep reinforcement learning agent can synchronize inbound and outbound flows and support business continuity in a stochastic and nonstationary environment, provided that end-to-end visibility is available. The agent is built upon the Proximal Policy Optimization (PPO) algorithm, which requires neither a hardcoded action space nor exhaustive hyperparameter tuning. These features, complemented with a straightforward supply chain environment, give rise to a general, task-unspecific approach to adaptive control in multi-echelon supply chains. The proposed approach is compared with the base-stock policy, a well-known method in classical operations research and inventory control theory that is prevalent in continuous-review inventory systems. The paper concludes that the proposed solution can perform adaptive control in complex supply chains, and postulates fully fledged supply chain digital twins as a necessary infrastructural condition for scalable real-world applications.
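For readers who want to experiment with the idea, here is a minimal sketch in the same spirit: a toy single-echelon inventory environment trained with PPO from stable-baselines3. The environment, demand process, and cost parameters are all illustrative assumptions, not the paper's setup.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # pip install stable-baselines3

class InventoryEnv(gym.Env):
    """Toy env: order up to `max_order` units per period, Poisson demand,
    linear holding and backorder costs (all values are assumptions)."""
    def __init__(self, lam=5.0, h=1.0, b=5.0, max_order=20, horizon=100):
        super().__init__()
        self.lam, self.h, self.b, self.horizon = lam, h, b, horizon
        self.action_space = spaces.Discrete(max_order + 1)        # order quantity
        self.observation_space = spaces.Box(-np.inf, np.inf, (1,), np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.inv = 0, 0.0   # net inventory; negative = backorders
        return np.array([self.inv], np.float32), {}

    def step(self, action):
        self.inv += action - self.np_random.poisson(self.lam)
        cost = self.h * max(self.inv, 0) + self.b * max(-self.inv, 0)
        self.t += 1
        obs = np.array([self.inv], np.float32)
        return obs, -cost, False, self.t >= self.horizon, {}

model = PPO("MlpPolicy", InventoryEnv(), verbose=0)
model.learn(total_timesteps=50_000)  # training is the expensive part; acting is fast
```

A natural baseline for such an agent is the base-stock rule, order = max(0, S - inventory position), which matches the benchmark used in the paper.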


2021 · pp. 163-188
Author(s): Homender Kumar, A.P. Harsha
Keyword(s): Group IV

2021 · Vol 20 · pp. 108-123
Author(s): Samuel Chiabom Zelibe, Unanaowo Nyong Bassey

This paper considers a two-echelon inventory system with service considerations and lateral transshipment. So far, researchers have not extensively considered the use of lateral transshipment for such systems. Demand arrivals at both echelons follow a Poisson process. We introduce a continuous-review base stock policy for the system in steady state, which determines the expected on-hand inventory level, the expected lateral transshipment level, and the expected backorder level. We show that the model is convex with respect to the base stock level. Computational experiments show that the model with lateral transshipment performs better than the model without it.
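A quick Monte Carlo sketch of the transshipment mechanism follows. It is illustrative only: the paper's model is analytical and continuous-review, whereas this sketch uses a simplified period-by-period order-up-to timing, and the demand rates and base-stock levels are assumptions.

```python
import numpy as np

def simulate(S1=10, S2=10, lam1=8.0, lam2=8.0, periods=100_000, seed=0):
    """Two retailers, Poisson demand, base-stock levels S1/S2; a surplus
    retailer covers the other's shortage via lateral transshipment."""
    rng = np.random.default_rng(seed)
    hold = np.zeros(2)
    transshipped = backordered = 0.0
    for _ in range(periods):
        inv = np.array([S1, S2], dtype=float)   # replenished up to base stock
        inv -= rng.poisson([lam1, lam2])        # demand at each retailer
        short, surplus = inv.argmin(), inv.argmax()
        if inv[short] < 0 and inv[surplus] > 0:
            moved = min(-inv[short], inv[surplus])  # lateral transshipment
            inv[short] += moved
            inv[surplus] -= moved
            transshipped += moved
        backordered += -inv.clip(max=0).sum()   # unmet demand this period
        hold += inv.clip(min=0)                 # leftover on-hand stock
    return hold / periods, transshipped / periods, backordered / periods

print(simulate())  # per-period averages: on-hand, transshipped, backordered
```

Raising S1 and S2 drives backorders down while holding stock rises, the trade-off underlying the convexity result above.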


Author(s): Afshin Oroojlooyjadid, MohammadReza Nazari, Lawrence V. Snyder, Martin Takáč

Problem definition: The beer game is widely used in supply chain management classes to demonstrate the bullwhip effect and the importance of supply chain coordination. The game is a decentralized, multiagent, cooperative problem that can be modeled as a serial supply chain network in which agents choose order quantities while cooperatively attempting to minimize the network’s total cost, although each agent only observes local information. Academic/practical relevance: Under some conditions, a base-stock replenishment policy is optimal. However, in a decentralized supply chain in which some agents act irrationally, there is no known optimal policy for an agent wishing to act optimally. Methodology: We propose a deep reinforcement learning (RL) algorithm to play the beer game. Our algorithm makes no assumptions about costs or other settings. As with any deep RL algorithm, training is computationally intensive, but once trained, the algorithm executes in real time. We propose a transfer-learning approach so that training performed for one agent can be adapted quickly for other agents and settings. Results: When playing with teammates who follow a base-stock policy, our algorithm obtains near-optimal order quantities. More important, it performs significantly better than a base-stock policy when other agents use a more realistic model of human ordering behavior. We observe similar results using a real-world data set. Sensitivity analysis shows that a trained model is robust to changes in the cost coefficients. Finally, applying transfer learning reduces the training time by one order of magnitude. Managerial implications: This paper shows how artificial intelligence can be applied to inventory optimization. Our approach can be extended to other supply chain optimization problems, especially those in which supply chain partners act in irrational or unpredictable ways. Our RL agent has been integrated into a new online beer game, which has been played more than 17,000 times by more than 4,000 people.
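The base-stock teammate policy referred to above reduces to a one-line order-up-to rule. A minimal sketch (parameter names are illustrative, not the paper's notation):

```python
def base_stock_order(on_hand, in_transit, backorders, S):
    """Order up to target level S, based on inventory position
    (on hand + in the pipeline - backorders)."""
    inventory_position = on_hand + sum(in_transit) - backorders
    return max(0, S - inventory_position)

# e.g. a retailer with S = 12, 4 units on hand, 6 in the pipeline, 0 backorders
print(base_stock_order(4, [3, 3], 0, 12))  # orders 2
```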

