Performance Analysis of D2D Communication with Retransmission Mechanism in Cellular Networks

2020 ◽  
Vol 10 (3) ◽  
pp. 1097 ◽  
Author(s):  
Jianfang Xin ◽  
Qi Zhu ◽  
Guangjun Liang ◽  
Tianjiao Zhang

In this paper, we focus on the performance analysis of device-to-device (D2D) underlay communication in cellular networks. First, we develop a spatiotemporal traffic model that captures a retransmission mechanism for D2D underlay communication. D2D users in the backlogged state are modeled as a thinned Poisson point process (PPP). To capture the sporadic nature of wireless data generation and the limited buffer size, we adopt queuing theory to analyze the performance under dynamic traffic, and a feedback queuing model to analyze the performance of the retransmission strategy. Accounting for interference and channel fading, the service probability of the queue departure process is determined by the received signal-to-interference-plus-noise ratio (SINR). An embedded Markov chain is then employed to describe the queuing status of the D2D user buffer. We compute its steady-state distribution and derive closed-form expressions for the performance metrics, namely the average queue length, average throughput, average delay, and dropping probability. Simulation results confirm the validity and rationality of the theoretical analysis under different channel parameters and D2D densities. In addition, the simulations explore the dropping probability of a D2D user with and without the retransmission strategy for different numbers of D2D links in the system. When the arrival rate is comparatively high, the optimal throughput is reached after fewer retransmission attempts as a result of the limited buffer.
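To give a flavour of the embedded-Markov-chain analysis this abstract describes, the sketch below solves a discrete-time finite-buffer queue in which a packet arrives in a slot with probability a and the head-of-line transmission succeeds with probability p (a stand-in for P[SINR > threshold]). This is a simplified illustration under our own assumptions, not the authors' model; all parameter names and values are invented.

```python
import numpy as np

def steady_state_metrics(a, p, N):
    """Discrete-time single-server queue: Bernoulli arrivals (prob a),
    per-slot transmission success probability p, finite buffer of N packets.
    Illustrative stand-in for the paper's embedded Markov chain."""
    P = np.zeros((N + 1, N + 1))
    for q in range(N + 1):
        arr = a if q < N else 0.0      # arrivals are dropped when the buffer is full
        dep = p if q > 0 else 0.0      # head-of-line packet departs w.p. p
        P[q, min(q + 1, N)] += arr * (1 - dep)
        P[q, max(q - 1, 0)] += (1 - arr) * dep
        P[q, q] += arr * dep + (1 - arr) * (1 - dep)
    # Solve pi = pi P subject to sum(pi) = 1.
    A = np.vstack([P.T - np.eye(N + 1), np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    L = pi @ np.arange(N + 1)          # average queue length
    drop = a * pi[N]                   # offered packets lost to a full buffer
    thr = a - drop                     # accepted = delivered in steady state
    W = L / thr                        # average delay via Little's law (slots)
    return L, thr, W, pi[N]            # length, throughput, delay, dropping prob.

print(steady_state_metrics(a=0.3, p=0.6, N=10))
```

A failed transmission simply leaves the packet at the head of the queue, so retransmission is implicit in the geometric service time; the closed-form metrics then follow from the stationary distribution exactly as outlined above.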


2021 ◽  
Vol 2091 (1) ◽  
pp. 012003 ◽  
Author(s):  
Rakesh Kumar ◽  
Bhavneet Singh Soodan ◽  
Godlove Suila Kuaban ◽  
Piotr Czekalski ◽  
Sapana Sharma

Abstract Queuing theory has been used extensively in the modelling and performance analysis of cloud computing systems. Task (or request) reneging, that is, the dropping of requests from the request queue, often occurs in cloud computing systems, and it is important to consider it when developing performance evaluation models for cloud computing infrastructures. The majority of queuing-theoretic studies of cloud computing data centres do not consider the fact that tasks can be removed from the queue without being serviced. Tasks may be removed from the queue due to user impatience, execution deadline expiration, security reasons, or an active queue management strategy. The reneging can be correlated in nature: if a request is dropped (or reneged) at a given time epoch, then there is a probability that a request may or may not be dropped at the next time epoch. This kind of dropping is referred to as correlated request reneging. In this paper we model a cloud computing infrastructure with correlated request reneging using queuing theory. An M/M/1/N queuing model with correlated reneging is used to analyse the performance of the load-balancing server of a cloud computing system. Both steady-state and transient performance analyses are carried out. Important performance measures such as the average queue size, average delay, probability of task blocking, and probability of no waiting in the queue are studied. Finally, comparisons are performed that describe the effect of correlated task reneging relative to simple exponential reneging.
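The correlated-reneging chain itself needs extra state and is not reproduced here, but the simple exponential-reneging baseline the paper compares against can be sketched directly: an M/M/1/N continuous-time Markov chain in which each waiting request independently abandons at rate xi. Parameter names and values below are ours, for illustration only.

```python
import numpy as np

def mm1n_reneging(lam, mu, xi, N):
    """Steady state of an M/M/1/N queue where each *waiting* request reneges
    independently at rate xi (the 'simple exponential reneging' baseline)."""
    Q = np.zeros((N + 1, N + 1))               # CTMC generator; state = queue length
    for n in range(N + 1):
        if n < N:
            Q[n, n + 1] = lam                  # arrival
        if n > 0:
            Q[n, n - 1] = mu + (n - 1) * xi    # service + reneging of the waiters
        Q[n, n] = -Q[n].sum()
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    L = pi @ np.arange(N + 1)                  # average queue size
    blocking = pi[N]                           # probability a task is blocked
    no_wait = pi[0]                            # probability of no waiting
    eff = lam * (1 - blocking)                 # effective admitted arrival rate
    return L, L / eff, blocking, no_wait       # size, delay (Little), blocking, no-wait

print(mm1n_reneging(lam=5.0, mu=6.0, xi=0.5, N=20))
```

Making the abandonment rate depend on whether a drop occurred at the previous epoch, rather than constant, is what turns this baseline into the correlated model studied in the paper.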



2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Felix Blank

Purpose – Refugee camps can be severely struck by pandemics, such as potential COVID-19 outbreaks, due to high population densities and often only base-level medical infrastructure. Fast-responding medical systems can help to avoid spikes in infections and death rates, as they allow the prompt isolation and treatment of patients. At the same time, the normal demand for emergency medical services has to be met as well. The overall goal of this study is the design of an emergency service system that is appropriate for both types of demand.

Design/methodology/approach – A spatial hypercube queuing model (HQM) is developed that uses queuing-theory methods to determine locations for emergency medical vehicles (also called servers). A general optimization approach is applied, and subsequently virus outbreaks at various locations of the study area are simulated to analyze and evaluate the proposed solution. The derived performance metrics offer insights into the behavior of the proposed emergency service system during pandemic outbreaks. The Za'atari refugee camp in Jordan is used as a case study.

Findings – The derived locations of the emergency medical system (EMS) can handle all non-virus-related emergency demand. If additional demand due to virus outbreaks is considered, the system becomes largely congested. The HQM shows that the actual congestion depends strongly on the overall number of outbreaks and the corresponding case numbers per outbreak. Multiple outbreaks are much harder to handle, even if their cumulative average case number is lower than that of a single outbreak. Additional servers can mitigate these effects and lead to enhanced resilience in the case of virus outbreaks and better values in all considered performance metrics.

Research limitations/implications – Some parameters that were assumed for simplification purposes, as well as the overall model, should be verified in future studies with the relevant designers of EMSs in refugee camps. Moreover, from a practitioner's perspective, applying the model requires at least some training and knowledge in optimization and queuing theory.

Practical implications – The model can be applied to different data sets, e.g. refugee camps or temporary shelters. The optimization model and the subsequent simulation can be used together or independently. They can support decision-makers in the general location decision as well as in simulating stress tests, such as virus outbreaks in the camp area.

Originality/value – The study addresses the research gap in optimization-based design of emergency service systems for refugee camps. The queuing-theory-based approach allows the calculation of precise (expected) performance metrics for both the optimization process and the subsequent analysis of the system. Applied to pandemic outbreaks, it allows simulating the behavior of the system during stress tests and adds a further tool for designing resilient emergency service systems.
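For readers unfamiliar with hypercube queuing models, the following is a minimal Larson-style sketch: server states form the vertices of a hypercube (busy/idle per server), calls arrive from demand atoms with fixed dispatch preference lists, and the stationary distribution yields metrics such as the all-busy probability and per-server workloads. All numbers (3 servers, 4 atoms, rates, preferences) are invented and unrelated to the Za'atari case study.

```python
import numpy as np
from itertools import product

lam = np.array([0.4, 0.3, 0.2, 0.3])       # arrival rate per demand atom (assumed)
mu = 1.0                                    # service rate per server (assumed)
pref = [[0, 1, 2], [1, 0, 2], [2, 1, 0], [1, 2, 0]]  # atom -> server preference list

states = list(product([0, 1], repeat=3))    # busy(1)/idle(0) vector per server
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((8, 8))                        # CTMC generator over the hypercube
for s in states:
    for j, rate in enumerate(lam):          # call arrives in atom j ...
        free = [k for k in pref[j] if s[k] == 0]
        if free:                            # ... dispatch most-preferred free server
            t = list(s); t[free[0]] = 1     # (call is lost if all servers are busy)
            Q[idx[s], idx[tuple(t)]] += rate
    for k in range(3):                      # service completions
        if s[k] == 1:
            t = list(s); t[k] = 0
            Q[idx[s], idx[tuple(t)]] += mu
np.fill_diagonal(Q, -Q.sum(axis=1))
A = np.vstack([Q.T, np.ones(8)])
b = np.zeros(9); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("P(all servers busy) =", pi[idx[(1, 1, 1)]])
print("server workloads    =",
      [sum(pi[idx[s]] for s in states if s[k]) for k in range(3)])
```

Stress-testing a virus outbreak in this toy setting amounts to raising the arrival rate of the affected atoms and re-solving, which is conceptually what the paper's simulation phase does at full scale.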



Author(s):  
Adel Agamy ◽  
Ahmed M. Mohamed

Modern mobile internet networks are becoming denser, more heterogeneous, and less regularly planned. The explosive growth in smartphone usage poses numerous challenges for the design and implementation of LTE cellular networks, and the performance of LTE networks under bursty, self-similar traffic has become a major concern. Accurate modeling of the data generated by each connected wireless device is important for properly investigating the performance of LTE networks. This paper presents a mathematical model for LTE networks using queuing theory that considers the influence of various application types. Feeding sporadic source traffic into the queue of the evolved Node B and assuming exponential service times, we construct a queuing model to estimate the performance of LTE networks. We use this performance model to study the influence of various application categories on the performance of LTE cellular networks, and we validate the model against simulations in the NS3 simulator under different scenarios.
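As a back-of-the-envelope illustration of this kind of eNodeB queue analysis, the snippet below treats the downlink buffer as an M/M/1 queue fed by several application classes and reports utilisation and mean delay. The class mix and rates are invented for illustration and are not the traffic model used in the paper.

```python
# Aggregate M/M/1 view of an eNodeB downlink queue fed by application classes.
classes = {"video": 120.0, "web": 60.0, "voip": 20.0}   # packets/s per class (assumed)
lam = sum(classes.values())                             # aggregate arrival rate
mu = 250.0                                              # service rate, packets/s (assumed)
rho = lam / mu
assert rho < 1, "queue is unstable"
L = rho / (1 - rho)                 # mean number of packets in the system
W = 1 / (mu - lam)                  # mean sojourn time (Little's law)
print(f"utilisation={rho:.2f}  mean queue={L:.2f}  mean delay={W*1e3:.1f} ms")
for name, rate in classes.items():  # every class sees the same FIFO delay here
    print(f"{name}: traffic share={rate/lam:.0%}, offered load={rate/mu:.2f}")
```

Distinguishing application categories beyond their traffic share, e.g. by burstiness or packet-size distribution, is exactly where the paper's model goes further than this sketch.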



2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Hongsheng Yin ◽  
Honggang Qi ◽  
Jingwen Xu ◽  
Xin Huang ◽  
Anping He

The sensor nodes of a multitask wireless network are constrained in computational performance. Theoretical studies of the data processing model of wireless sensor nodes suggest ways to satisfy the quality-of-service (QoS) requirements of multiple application networks and thus improve network efficiency. In this paper, we present a priority-based data processing model for multitask sensor nodes in a multitask wireless sensor network architecture. The proposed model is derived from the M/M/1 queuing model, with which the average delay of data packets passing through a sensor node is estimated. The model is validated with real data from the Huoerxinhe Coal Mine. By applying the proposed priority-based data processing model in a multitask wireless sensor network, the average delay of data packets at a sensor node is reduced by nearly 50%. The simulation results show that the proposed model can efficiently improve the network throughput.
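A standard way to quantify such a priority scheme on top of an M/M/1 server is Cobham's formula for non-preemptive priorities, sketched below. The arrival and service rates are illustrative, not the Huoerxinhe Coal Mine measurements.

```python
# Mean waiting times in a non-preemptive priority M/M/1 queue (Cobham's formula).
lams = [2.0, 3.0, 4.0]        # arrival rates per class; class 0 = highest priority
mu = 12.0                     # exponential service rate shared by all classes
rhos = [l / mu for l in lams]
assert sum(rhos) < 1, "node is overloaded"
R = sum(l / mu**2 for l in lams)          # mean residual service work E[R]
W = []
for k in range(len(lams)):
    hi = sum(rhos[:k])        # load of strictly higher-priority classes
    hi_eq = sum(rhos[:k + 1]) # ... including class k itself
    W.append(R / ((1 - hi) * (1 - hi_eq)))
for k, w in enumerate(W):
    print(f"class {k}: mean wait = {w*1e3:.1f} ms, "
          f"mean delay = {(w + 1/mu)*1e3:.1f} ms")
```

The formula makes the headline result plausible: moving delay-critical packets into a high-priority class cuts their waiting time sharply while low-priority traffic absorbs the difference.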



IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 27479-27489 ◽  
Author(s):  
Jianfang Xin ◽  
Qi Zhu ◽  
Guangjun Liang ◽  
Tianjiao Zhang


2021 ◽  
Vol 11 (1) ◽  
pp. 93-111
Author(s):  
Deepak Kapgate

The quality of cloud computing services is evaluated based on various performance metrics, of which response time (RT) is the most important. Nearly all cloud users demand that their applications' RT be as low as possible, so to minimize overall system RT, the authors propose a request-response-time-prediction-based data center (DC) selection algorithm in this work. The proposed DC selection algorithm uses the results of an optimization function formulated on M/M/m queuing theory, as the present cloud scenario roughly obeys the M/M/m queuing model. In a cloud environment, DC selection algorithms are assessed based on their performance in practice, rather than on how they are supposed to be used. Hence, the described DC selection algorithm is evaluated with various forecasting models for minimum user application RT and RT prediction accuracy across various job arrival rates, real parallel workload types, and forecasting-model training-set lengths. Finally, the performance of the proposed DC selection algorithm with the optimal forecasting model is compared with other DC selection algorithms on various cloud configurations.
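The M/M/m ingredient of such an algorithm is standard: the Erlang C formula gives the expected response time of a DC from its (forecast) arrival rate, service rate, and server count, and requests are routed to the DC with the smallest predicted RT. The sketch below shows this under invented DC parameters; it is not the paper's full algorithm, which couples the queuing term with a forecasting model.

```python
import math

def mmm_response_time(lam, mu, m):
    """Expected response time of an M/M/m queue via the Erlang C formula."""
    rho = lam / (m * mu)
    if rho >= 1:
        return math.inf                     # overloaded DC, never selected
    a = lam / mu
    p0 = 1 / (sum(a**k / math.factorial(k) for k in range(m))
              + a**m / (math.factorial(m) * (1 - rho)))
    erlang_c = a**m / (math.factorial(m) * (1 - rho)) * p0   # P(wait > 0)
    return erlang_c / (m * mu - lam) + 1 / mu                # E[wait] + E[service]

# Hypothetical candidate DCs: (predicted arrival rate, service rate, servers).
dcs = {"dc-east": (80.0, 10.0, 10), "dc-west": (50.0, 8.0, 8)}
for d, cfg in dcs.items():
    print(d, f"E[RT] = {mmm_response_time(*cfg)*1e3:.1f} ms")
print("route next request to:", min(dcs, key=lambda d: mmm_response_time(*dcs[d])))
```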



Author(s):  
Zhuofan Liao ◽  
Jingsheng Peng ◽  
Bing Xiong ◽  
Jiawei Huang

Abstract With the combination of Mobile Edge Computing (MEC) and next-generation cellular networks, computation requests from end devices can be offloaded promptly and accurately to edge servers equipped on Base Stations (BSs). However, due to the densified heterogeneous deployment of BSs, an end device may be covered by more than one BS, which brings new challenges for the offloading decision, namely whether and where to offload computing tasks for low latency and energy cost. This paper formulates a multi-user-to-multi-server (MUMS) edge computing problem in ultra-dense cellular networks. The MUMS problem is divided and conquered in two phases: server selection and offloading decision. In the server selection phase, mobile users are grouped to one BS considering both physical distance and workload. After the grouping, the original problem is divided into parallel multi-user-to-one-server offloading decision subproblems. To obtain fast and near-optimal solutions to these subproblems, a distributed offloading strategy based on a binary-coded genetic algorithm is designed to produce an adaptive offloading decision. A convergence analysis of the genetic algorithm is given, and extensive simulations show that the proposed strategy significantly reduces the average latency and energy consumption of mobile devices. Compared with state-of-the-art offloading approaches, our strategy reduces the average delay by 56% and total energy consumption by 14% in ultra-dense cellular networks.
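A toy version of the binary-coded GA for one per-BS subproblem is sketched below: bit i of a chromosome means user i offloads to the edge server (1) or executes locally (0), and fitness is a combined latency/energy cost. The cost model and all constants are invented for illustration; the paper's actual fitness function and operators may differ.

```python
import random

random.seed(1)
N_USERS, POP, GENS = 12, 30, 60
local = [random.uniform(2.0, 5.0) for _ in range(N_USERS)]   # local cost per user

def cost(bits):
    k = sum(bits)                       # offloading users share the edge server
    server = 1.0 + 0.6 * k              # per-user server cost grows with its load
    return sum(server if b else local[i] for i, b in enumerate(bits))

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_USERS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=cost)
        elite = pop[:POP // 2]          # elitist selection keeps the best half
        children = []
        while len(children) < POP - len(elite):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, N_USERS)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[random.randrange(N_USERS)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

best = evolve()
print("offload mask:", best, "cost:", round(cost(best), 2))
```

Because each BS solves its own subproblem over only its grouped users, these GA instances run in parallel, which is what makes the two-phase decomposition scale in ultra-dense deployments.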


