Service Migration Algorithm Based On Markov Decision Process With Multiple QoS Attributes

Author(s):  
Anhua Ma ◽  
Su Pan ◽  
Shuai Tao ◽  
Weiwei Zhou

With the rapid development of the mobile internet and cloud computing, the traditional network structure is no longer suitable for advanced network traffic requirements. A service migration decision algorithm is proposed in the Software Defined Network (SDN) to satisfy differentiated Quality of Service (QoS) requirements. We divide services into real-time and non-real-time ones according to their different requirements on time delay and transmission rate, and construct a revenue function on two QoS attributes, i.e., time delay and available transmission rate. We use the Markov decision process to maximize the overall benefits of users and the network system and achieve the best user experience. The simulation results show that our proposed algorithm achieves better performance in terms of overall benefits than existing algorithms that consider only a single service type and a single QoS attribute.
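As a rough illustration of the kind of decision model this abstract describes, the sketch below runs textbook value iteration over a toy two-state migration MDP whose reward combines the two QoS attributes (time delay and available transmission rate). All state names, weights, and normalization constants here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: value iteration for a tiny service-migration MDP whose
# reward combines two QoS attributes. States, weights, and the normalization
# constants (100 ms, 100 Mbps) are illustrative, not taken from the paper.

def reward(delay_ms, rate_mbps, w_delay=0.5, w_rate=0.5):
    """Revenue over two QoS attributes: lower delay and higher available
    rate both increase the reward (each score normalized to [0, 1])."""
    delay_score = max(0.0, 1.0 - delay_ms / 100.0)   # 0 ms -> 1, 100 ms -> 0
    rate_score = min(1.0, rate_mbps / 100.0)         # 100 Mbps caps at 1
    return w_delay * delay_score + w_rate * rate_score

def value_iteration(states, actions, transition, qos, gamma=0.9, eps=1e-6):
    """Standard value iteration. `transition[s][a]` maps next states to
    probabilities; `qos[s]` gives (delay_ms, rate_mbps) for state s."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(*qos[s2]) + gamma * V[s2])
                    for s2, p in transition[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

With a hypothetical "edge" location at (10 ms, 80 Mbps) and a "cloud" location at (60 ms, 90 Mbps), the resulting policy migrates toward the state with the higher combined QoS reward.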

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878706 ◽  
Author(s):  
Yunyun Zhao ◽  
Xiangke Wang ◽  
Yirui Cong ◽  
Lincheng Shen

In this article, we study the ground moving target tracking problem for a fixed-wing unmanned aerial vehicle equipped with a radar. This problem is formulated in a partially observable Markov decision process framework, which contains the following two parts: in the first part, the unmanned aerial vehicle utilizes the measurements from its radar and employs a Kalman filter to estimate the target’s real-time location; in the second part, the unmanned aerial vehicle optimizes its trajectory in real time so that the radar’s measurements include more useful information. To solve the trajectory optimization problem, we propose an information geometry-based partially observable Markov decision process method. Specifically, the cumulative amount of information in the observation is represented by the Fisher information of information geometry and acts as the criterion of the partially observable Markov decision process problem. Furthermore, to guarantee real-time performance, an important trade-off between optimality and computation cost is made by an approximate receding horizon approach. Finally, simulation results corroborate the accuracy and time efficiency of our proposed method and show its advantage in computation time over existing methods.
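The first part of the pipeline, state estimation with a Kalman filter, can be sketched as follows for a constant-velocity target whose 2-D position is observed by the radar. The motion model, noise covariances, and time step below are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

# Illustrative sketch only: a constant-velocity Kalman filter estimating a
# ground target's 2-D position from noisy radar position measurements.
# Matrices and noise levels are assumptions, not the paper's values.

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # radar observes position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise covariance
R = 0.5 * np.eye(2)              # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle given state x, covariance P, measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The Fisher-information trajectory criterion and the receding-horizon optimization described in the abstract would sit on top of this estimator, steering the vehicle so that future measurements are more informative.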


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1309 ◽  
Author(s):  
Yi Zou ◽  
Weiwei Zhang ◽  
Wendi Weng ◽  
Zhengyun Meng

Online multi-object tracking (MOT) has broad applications in time-critical video analysis scenarios such as advanced driver-assistance systems (ADASs) and autonomous driving. In this paper, the proposed system aims at tracking multiple vehicles in the front view of an onboard monocular camera. The vehicle detection probes are customized to generate high-precision detections, which play a basic role in the subsequent tracking-by-detection method. A novel Siamese network with a spatial pyramid pooling (SPP) layer is applied to calculate pairwise appearance similarity. The motion model captured from the refined bounding boxes provides the relative movements and aspects. The online-learned policy treats each tracking period as a Markov decision process (MDP) to maintain long-term, robust tracking. The proposed method is validated in a moving vehicle with an onboard NVIDIA Jetson TX2 and runs at real-time speeds. Compared with other methods on the KITTI and self-collected datasets, our method achieves strong performance in terms of the “Mostly-tracked”, “Fragmentation”, and “ID switch” metrics.
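One common way to read the MDP formulation in such trackers is as a per-track lifecycle policy. The sketch below is a hypothetical, hand-coded transition function, not the paper's online-learned policy: appearance similarity from a Siamese-style matcher keeps a track alive, while prolonged misses terminate it.

```python
# Hypothetical sketch of the tracking-by-detection lifecycle as a small MDP:
# each track moves among tracked, lost, and inactive states depending on
# whether the appearance model matches a detection. State names and the
# thresholds are illustrative, not the paper's learned policy.

TRACKED, LOST, INACTIVE = "tracked", "lost", "inactive"

def step_track(state, similarity, missed_frames, sim_thresh=0.5, max_missed=10):
    """Transition function: high appearance similarity keeps a track alive;
    prolonged misses terminate it."""
    if state == TRACKED:
        return TRACKED if similarity >= sim_thresh else LOST
    if state == LOST:
        if similarity >= sim_thresh:
            return TRACKED            # re-identified after occlusion
        return INACTIVE if missed_frames >= max_missed else LOST
    return INACTIVE                   # inactive tracks never revive
```

A learned policy would replace the fixed `sim_thresh`/`max_missed` rules with decisions trained from tracking outcomes, which is what drives the "Mostly-tracked", "Fragmentation", and "ID switch" improvements.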


2013 ◽  
Vol 785-786 ◽  
pp. 1403-1407
Author(s):  
Qing Yang Song ◽  
Xun Li ◽  
Shu Yu Ding ◽  
Zhao Long Ning

Many vertical handoff decision algorithms do not consider the impact of call dropping during the vertical handoff decision process. Besides, most current multi-attribute vertical handoff algorithms cannot dynamically predict users’ specific circumstances. In this paper, we formulate the vertical handoff decision problem as a Markov decision process, with the objective of maximizing the expected total reward during the handoff procedure. A reward function is formulated to assess the service quality during each connection. The G1 and entropy methods are applied in an iterative way, by which we work out a stationary deterministic policy. Numerical results demonstrate the superiority of our proposed algorithm over existing methods.
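Of the two weighting methods mentioned, the entropy method has a standard closed form; a minimal sketch is below, with illustrative attribute values. The G1 step (subjective, ranking-based weights) and the iterative combination of the two methods are omitted here.

```python
import math

# Sketch of the entropy weighting method for deriving objective attribute
# weights in a multi-attribute handoff reward. The input values used in any
# example are illustrative, not measurements from the paper.

def entropy_weights(matrix):
    """matrix[i][j]: value of attribute j for candidate network i.
    Returns objective weights summing to 1; attributes with more
    dispersion across candidates (lower entropy) get more weight."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized Shannon entropy of the attribute column, in [0, 1].
        e = -sum(v * math.log(v) for v in p if v > 0) / math.log(m)
        raw.append(1.0 - e)
    s = sum(raw)
    return [w / s for w in raw]
```

An attribute that takes the same value for every candidate network carries no discriminating information, so the entropy method assigns it zero weight.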


2019 ◽  
Vol 27 (3) ◽  
pp. 1272-1288 ◽  
Author(s):  
Shiqiang Wang ◽  
Rahul Urgaonkar ◽  
Murtaza Zafer ◽  
Ting He ◽  
Kevin Chan ◽  
...  
