Optimizing Quality of Experience of Free-Viewpoint Video Streaming with Markov Decision Process

Author(s):  
Liang Zhao ◽  
Zhe Chen


Author(s):  
Maryam Eghbali-Zarch ◽  
Reza Tavakkoli-Moghaddam ◽  
Fatemeh Esfahanian ◽  
Amir Azaron ◽  
Mohammad Mehdi Sepehri

Type 2 diabetes has an increasing prevalence and a high cost of treatment. The goal of type 2 diabetes treatment is to control patients’ blood glucose level through pharmacological interventions and to prevent adverse disease-related complications. It is therefore important to optimize the medication treatment plans of type 2 diabetes patients, both to enhance their quality of life and to decrease the economic burden of this chronic disease. Since the treatment of type 2 diabetes relies on medication, it is vital to consider adverse drug reactions: undesired, harmful reactions that may result from certain medications. Accordingly, a Markov decision process is developed in this article to model the medication treatment of type 2 diabetes, taking into account the possibility of adverse drug reactions occurring. The optimal policy of the proposed Markov decision process model is compared with clinical guidelines and existing models in the literature. Moreover, a sensitivity analysis is conducted to address how model behavior depends on model parameterization, and therapeutic insights are obtained from the results. The results show that the model can offer an optimal treatment policy with an acceptable expected quality of life while utilizing fewer medications, and it provides significant implications for endocrinology and metabolism applications.
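The kind of model this abstract describes can be illustrated with a toy example. Below is a minimal policy-evaluation sketch for a hypothetical three-state treatment MDP with an absorbing adverse-drug-reaction state; the states, transition probabilities, and quality-of-life weights are illustrative assumptions, not the paper’s data.

```python
import numpy as np

# Hypothetical states under a fixed medication policy:
#   0 = controlled blood glucose, 1 = uncontrolled, 2 = adverse drug reaction (absorbing).
# Expected discounted quality-of-life V solves the linear system (I - gamma * P) V = r.
P = np.array([
    [0.85, 0.10, 0.05],   # controlled: small chance of losing control or an ADR
    [0.40, 0.52, 0.08],   # uncontrolled: medication may restore control; higher ADR risk
    [0.00, 0.00, 1.00],   # ADR: absorbing state
])
r = np.array([1.0, 0.6, 0.1])  # assumed per-period quality-of-life weights
gamma = 0.95

V = np.linalg.solve(np.eye(3) - gamma * P, r)
```

Comparing `V` across candidate policies (one transition matrix per medication plan) is the evaluation step that an optimal-policy search, such as the one the article performs, builds on.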


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1385
Author(s):  
Irais Mora-Ochomogo ◽  
Marco Serrato ◽  
Jaime Mora-Vargas ◽  
Raha Akhavan-Tabatabaei

Natural disasters represent a latent threat for every country in the world. Due to climate change and other factors, statistics show that they continue to be on the rise. This situation challenges communities and humanitarian organizations to be better prepared and to react faster to natural disasters. In some countries, in-kind donations represent a high percentage of the supply for relief operations, which presents additional challenges. This research proposes a Markov Decision Process (MDP) model that resembles operations in collection centers, where in-kind donations are received, sorted, packed, and sent to the affected areas. The decision addressed is when to send a shipment, considering the uncertainty of the donations’ supply and the demand, as well as the logistics costs and the penalty of unsatisfied demand. As a result of the MDP, a Monotone Optimal Non-Decreasing Policy (MONDP) is proposed, which provides valuable insights for decision-makers within this field. Moreover, the necessary conditions to prove the existence of such a MONDP are presented.
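The monotone structure the abstract refers to can be seen in a toy version of the problem. The sketch below solves a hypothetical ship-or-hold MDP over collection-center inventory by value iteration; the capacity, donation distribution, and cost parameters are illustrative assumptions, and under them the computed optimal policy turns out to be a threshold (monotone non-decreasing) rule in the inventory level.

```python
import numpy as np

S = 10                     # collection-center capacity (assumed)
donations = [0, 1, 2]      # possible per-period donation arrivals, uniform (assumed)
K, r, h = 5.0, 2.0, 0.5    # fixed shipping cost, per-unit shipped value, per-unit holding cost
gamma = 0.95

V = np.zeros(S + 1)
policy = np.zeros(S + 1, dtype=int)
for _ in range(5000):
    V_new = np.empty_like(V)
    for s in range(S + 1):
        # hold: pay holding cost, inventory grows with new donations (capped at S)
        hold = -h * s + gamma * np.mean([V[min(s + d, S)] for d in donations])
        # ship: pay fixed cost, earn value for the s units sent, inventory resets
        ship = -K + r * s + gamma * np.mean([V[min(d, S)] for d in donations])
        V_new[s] = max(hold, ship)
        policy[s] = int(ship > hold)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new
```

With these numbers the policy is 0 (hold) below some inventory threshold and 1 (ship) above it, which is the shape a monotone optimal policy guarantees in general, and why such policies are easy for decision-makers to apply.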


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 190
Author(s):  
Wu Ouyang ◽  
Zhigang Chen ◽  
Jia Wu ◽  
Genghua Yu ◽  
Heng Zhang

As transportation becomes more convenient and efficient, users move faster and faster. When a user leaves the service range of the original edge server, that server needs to migrate the tasks offloaded by the user to other edge servers. An effective task migration strategy must fully consider the locations of users, the load status of edge servers, and energy consumption, which makes designing such a strategy a challenge. In this paper, we propose a mobile edge computing (MEC) system architecture consisting of multiple smart mobile devices (SMDs), multiple unmanned aerial vehicles (UAVs), and a base station (BS). Moreover, we establish a Markov decision process with unknown rewards (MDPUR) model, based on the traditional Markov decision process (MDP), which comprehensively considers three aspects: the migration distance, the residual energy status of the UAVs, and the load status of the UAVs. Based on the MDPUR model, we propose an advantage-based value iteration (ABVI) algorithm to obtain an effective task migration strategy, which helps the UAV group achieve load balancing and reduce its total energy consumption while ensuring user service quality. Finally, simulation results show that the ABVI algorithm is effective; in particular, it outperforms the traditional value iteration algorithm and remains robust in dynamic environments.
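The traditional value-iteration baseline that ABVI is compared against can be sketched generically. The code below implements standard value iteration for a finite MDP and also computes the advantage function Q(s, a) − V(s); how the paper’s ABVI algorithm actually uses advantages is not detailed in the abstract, so that connection is only an assumption suggested by the name.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration.

    P: transition probabilities, shape (A, S, S) with P[a, s, t] = Pr(t | s, a).
    R: immediate rewards, shape (S, A).
    Returns the optimal values, a greedy policy, and the advantage function.
    """
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=1)
    advantage = Q - V_new[:, None]  # <= 0, with 0 at each state's greedy action
    return V_new, policy, advantage

# Toy 2-state, 2-action MDP: action 0 stays, action 1 switches states.
P = np.array([[[1., 0.], [0., 1.]],
              [[0., 1.], [1., 0.]]])
R = np.array([[0., 1.],    # state 0: staying pays 0, switching pays 1
              [2., 0.]])   # state 1: staying pays 2, switching pays 0
V, pol, adv = value_iteration(P, R, gamma=0.9)
```

For this toy instance the optimal policy switches out of state 0 and stays in state 1, where the per-period reward is highest.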

