Quality of service parameters
Recently Published Documents

TOTAL DOCUMENTS: 108 (five years: 40)
H-INDEX: 5 (five years: 2)

2022 ◽  
Vol 54 (9) ◽  
pp. 1-33
Author(s):  
Josef Schmid ◽  
Alfred Höss ◽  
Björn W. Schuller

Network communication has become a part of everyday life, and the interconnection among devices and people will increase even more in the future. Nevertheless, the prediction of Quality of Service parameters, particularly throughput, remains a challenging task. In this survey, we provide extensive insight into the literature on Transmission Control Protocol throughput prediction. The goal is to give an overview of the techniques in use and to elaborate on open aspects and gaps in this area. We assessed more than 35 approaches, ranging from equation-based methods over various time-smoothing techniques to modern learning and location-smoothing methods. In addition, different error functions for evaluating the approaches, as well as publicly available recording tools and datasets, are discussed. To conclude, we point out open challenges, especially in the area of moving mobile network clients. Throughput prediction not only enables more efficient use of the available bandwidth; the techniques shown in this work also result in more robust and stable communication.
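To make the survey's notion of a time-smoothing predictor concrete, the sketch below implements one of the simplest representatives of that family, an exponentially weighted moving average (EWMA); it is not taken from any of the surveyed papers, and the sample throughput values and smoothing factor are illustrative assumptions.

```python
# Minimal sketch of a time-smoothing throughput predictor (EWMA).
# The sample values and the smoothing factor alpha are illustrative only.

def ewma_predict(samples, alpha=0.3):
    """Predict the next throughput from past measurements (oldest first).

    alpha is the smoothing factor in (0, 1]; larger values react faster
    to recent changes, smaller values smooth more aggressively.
    """
    if not samples:
        raise ValueError("at least one measurement is required")
    estimate = samples[0]
    for value in samples[1:]:
        estimate = alpha * value + (1 - alpha) * estimate
    return estimate

if __name__ == "__main__":
    measured = [12.4, 11.8, 13.1, 9.7, 10.5]  # hypothetical throughputs in Mbit/s
    print(f"Predicted next throughput: {ewma_predict(measured):.2f} Mbit/s")
```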


Author(s):  
Neeraj Arora ◽  
Rohitash Kumar Banyal

Cloud computing is one of the emerging fields in computer science owing to advances such as on-demand processing, resource sharing, and pay-per-use billing. It nevertheless faces several issues, including security, quality of service (QoS) management, data center energy consumption, and scaling. Scheduling is one of the challenging problems in cloud computing: tasks must be assigned to resources so that the quality of service parameters are optimized. It is a well-known NP-hard problem and therefore requires a suitable scheduling algorithm. Several heuristic and meta-heuristic algorithms have been proposed for scheduling users' tasks onto the available cloud resources in an optimal way, and hybrid scheduling algorithms have become popular. In this paper, we review the hybrid algorithms, i.e., combinations of two or more algorithms, used for scheduling in cloud computing. The basic idea behind hybridization is to combine the useful features of the constituent algorithms. The article also classifies the hybrid algorithms and analyzes their objectives, quality of service (QoS) parameters, and future directions for hybrid scheduling algorithms.
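As a concrete illustration of the kind of building block that hybrid schedulers combine, the sketch below shows a simple greedy heuristic that assigns each task to the virtual machine that can finish it earliest; the task lengths and VM speeds are hypothetical, and this is not any specific algorithm from the reviewed literature.

```python
# Illustrative greedy scheduling heuristic: map each task to the VM with the
# earliest finish time. Task lengths (million instructions) and VM speeds
# (MIPS) are hypothetical example values.

def greedy_min_completion(tasks, vm_speeds):
    ready_time = [0.0] * len(vm_speeds)   # time at which each VM becomes free
    assignment = {}
    for i, length in enumerate(tasks):
        finish = [ready_time[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=lambda v: finish[v])
        assignment[i] = best
        ready_time[best] = finish[best]
    return assignment, max(ready_time)    # task-to-VM mapping and resulting makespan

if __name__ == "__main__":
    mapping, makespan = greedy_min_completion(tasks=[400, 250, 900, 120],
                                              vm_speeds=[100, 250])
    print(mapping, round(makespan, 2))
```

A meta-heuristic such as a genetic algorithm can then start from, or refine, assignments produced by a heuristic like this, which is the typical motivation for hybridization.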


2021 ◽  
Vol 7 (4) ◽  
pp. 10-17
Author(s):  
M. Buranova ◽  
I. Kartashevskiy

An accurate assessment of the quality of service parameters in modern information communication networks is a very important task. This paper proposes the use of hyperexponential distributions to approximate an arbitrary probability density in the G/G/1 system for the case when an approximation by a system of the type H2/H2/1 is assumed. To determine the parameters of the probability density of the hyperexponential distribution, it is proposed to use the EM algorithm, which is fairly simple to apply to uncorrelated flows. In this paper, we propose a variant of the EM algorithm implementation for determining the parameters of the hyperexponential distribution when the analyzed flow has correlation properties.
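The sketch below illustrates the basic EM iteration for fitting a two-phase hyperexponential density f(x) = p*l1*exp(-l1*x) + (1-p)*l2*exp(-l2*x) to an uncorrelated sample; it is a generic mixture-of-exponentials EM, not the authors' variant for correlated flows, and the initial values and synthetic data are assumptions.

```python
# Minimal EM sketch for a two-phase hyperexponential (H2) density fitted to an
# uncorrelated sample. Initial parameters and synthetic data are illustrative.
import math
import random

def em_h2(samples, p=0.5, l1=1.0, l2=2.0, iterations=200):
    for _ in range(iterations):
        # E-step: posterior probability that each observation came from phase 1
        r = []
        for x in samples:
            a = p * l1 * math.exp(-l1 * x)
            b = (1 - p) * l2 * math.exp(-l2 * x)
            r.append(a / (a + b))
        # M-step: re-estimate the mixing probability and the two rates
        s = sum(r)
        p = s / len(samples)
        l1 = s / sum(ri * x for ri, x in zip(r, samples))
        l2 = (len(samples) - s) / sum((1 - ri) * x for ri, x in zip(r, samples))
    return p, l1, l2

if __name__ == "__main__":
    random.seed(1)
    data = [random.expovariate(0.5) if random.random() < 0.3 else random.expovariate(3.0)
            for _ in range(5000)]
    print(em_h2(data))  # should roughly recover p=0.3, l1=0.5, l2=3.0 (up to label swap)
```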


2021 ◽  
Vol 29 (04) ◽  
pp. 56-75
Author(s):  
Narpat Asia ◽  
Pramod Paliwal

Purpose: Using the SERVQUAL model in the context of the natural gas distribution business, this paper examines and compares the quality of service parameters of two City Gas Distribution (CGD) companies engaged in Piped Natural Gas (PNG) distribution, one from the public sector and the other from the private sector. Research Design/Approach: The various activities pertaining to domestic (household) PNG service were mapped onto the SERVQUAL dimensions. Based on this mapping, a data collection tool was deployed to gather data on PNG service quality parameters from respondents who were current consumers of these companies. Hypotheses regarding the various components of the SERVQUAL model were tested to compare the service quality of the two companies, and the data were analyzed with an appropriate statistical tool. Findings: The statistical results reveal a significant difference between the companies in terms of the quality of the services they offer. The interpretation of the results, managerial implications, and suggestions are discussed in the paper. Practical Implications: The study will help in designing and implementing quality of service parameters and subsequently devising or revising Service Level Agreements (SLAs) for the domestic PNG customers of CGD companies. Originality/Value: Little relevant research on service quality issues has been undertaken in the CGD sector in general and in the domestic Piped Natural Gas (PNG) sub-sector in particular. One of the outcomes of the study is the mapping of the various activities pertaining to domestic PNG service onto the SERVQUAL dimensions.
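The paper does not name the statistical tool it employs; as one plausible reading, a comparison of consumer ratings on a single SERVQUAL dimension could be carried out with an independent-samples t-test, as in the sketch below, where the two rating samples are entirely hypothetical.

```python
# Hypothetical comparison of two providers on one SERVQUAL dimension
# (reliability) using an independent-samples t-test; the ratings are invented
# 5-point Likert scores, not data from the study.
from scipy import stats

public_reliability = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]
private_reliability = [5, 4, 5, 4, 5, 5, 4, 5, 4, 5]

t_stat, p_value = stats.ttest_ind(public_reliability, private_reliability)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference on the reliability dimension")
```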


2021 ◽  
Author(s):  
Pallavi Shelke ◽  
Rekha Shahapurkar

In today's growing cloud world, where users continuously demand a large number of services and resources at the same time, cloud providers aim to meet their needs while maintaining service quality; this calls for ideal QoS-based resource provisioning. Among the quality of service parameters, it is essential to place greater emphasis on the scalability attribute, which aids in the design of complex resource provisioning frameworks. This study aims to determine how much work has been done with scalability as the most important QoS attribute. We first present a detailed survey of related QoS-based resource provisioning frameworks and techniques, discussing the QoS parameters against steadily growing cloud usage expectations. Second, the paper focuses on scalability as the main QoS characteristic; its types, issues, research questions, and research gaps are discussed in detail, revealing that relatively little work has been done so far. We then address scalability and resource provisioning problems with our proposed advanced scalable QoS-based resource provisioning framework, which integrates new modules (a resource scheduler, a load balancer, a resource tracker, and a cloud user budget tracker) into the resource provisioning process. Cloud providers can achieve scalability of resources during resource provisioning by combining the capabilities of these sub-modules.
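The abstract names the new modules but does not specify their internal logic; purely as an illustration of the kind of rule a resource scheduler or load balancer might apply when scaling, the sketch below shows a threshold-based scaling decision with assumed utilization thresholds.

```python
# Hypothetical threshold-based scaling rule; the thresholds and the metric are
# assumptions for illustration, not the framework's actual specification.

def scaling_decision(cpu_utilization, upper=0.8, lower=0.3):
    """Return +1 to scale out, -1 to scale in, 0 to keep current capacity."""
    if cpu_utilization > upper:
        return +1
    if cpu_utilization < lower:
        return -1
    return 0

if __name__ == "__main__":
    for load in (0.95, 0.55, 0.20):
        print(f"utilization {load:.2f} -> decision {scaling_decision(load):+d}")
```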


2021 ◽  
Vol 7 ◽  
pp. e588
Author(s):  
Olena Skarlat ◽  
Stefan Schulte

Recently, a multitude of conceptual architectures and theoretical foundations for fog computing have been proposed. Despite this, there is still a lack of concrete frameworks for setting up real-world fog landscapes. In this work, we design and implement the fog computing framework FogFrame, a system able to manage and monitor edge and cloud resources in fog landscapes and to execute Internet of Things (IoT) applications. FogFrame provides communication and interaction as well as application management within a fog landscape, namely decentralized service placement, deployment, and execution. For service placement, we formalize a system model, define an objective function and constraints, and solve the problem by implementing a greedy algorithm and a genetic algorithm. The framework is evaluated with regard to the Quality of Service parameters of IoT applications and the utilization of fog resources using a real-world operational testbed. The evaluation shows that the service placement is adapted according to the demand and the available resources in the fog landscape. The greedy placement leads to the maximum utilization of edge devices, keeping as many services as possible at the edge, while the placement based on the genetic algorithm prevents device overloads by balancing services between the cloud and the edge. When comparing edge and cloud deployment, service deployment at the edge takes 14% of the deployment time in the cloud. If fog resources are utilized at maximum capacity and a new application request arrives that requires certain sensor equipment, service deployment becomes impossible and the application needs to be delegated to other fog resources. The genetic algorithm accommodates new applications better and keeps the utilization of edge devices at about 50% CPU. During the experiments, the framework successfully reacts to runtime events: (i) services are recovered when devices disappear from the fog landscape; (ii) cloud resources and highly utilized devices are released by migrating services to new devices; and (iii) in case of overloads, services are migrated in order to release resources.
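To illustrate the edge-first flavor of the greedy placement described above (the actual FogFrame system model, objective function, and constraints are more detailed), the sketch below places each service on the first edge device with sufficient free capacity and falls back to the cloud otherwise; device capacities and service demands are hypothetical.

```python
# Illustrative greedy, edge-first service placement; capacities and demands are
# hypothetical and the model is much simpler than FogFrame's.

def greedy_place(services, edge_devices, cloud_capacity):
    placement = {}
    free = dict(edge_devices)          # device name -> free CPU share
    cloud_free = cloud_capacity
    for name, demand in services.items():
        target = next((d for d, cap in free.items() if cap >= demand), None)
        if target is not None:
            free[target] -= demand
            placement[name] = target
        elif cloud_free >= demand:
            cloud_free -= demand
            placement[name] = "cloud"
        else:
            placement[name] = "rejected"   # would be delegated to another fog colony
    return placement

if __name__ == "__main__":
    services = {"sense": 0.2, "filter": 0.4, "aggregate": 0.5, "train": 0.9}
    print(greedy_place(services, {"edge-1": 0.6, "edge-2": 0.5}, cloud_capacity=2.0))
```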


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Prabhdeep Singh ◽  
Rajbir Kaur

Purpose: The purpose of this paper is to provide a more accurate structure that allows the estimation of coronavirus (COVID-19) cases at a very early stage with ultra-low latency. Machine learning algorithms are used to evaluate the past medical details of patients and forecast COVID-19-positive cases, which can aid in lowering costs and distinctly enhance the standard of treatment at hospitals. Design/methodology/approach: In this paper, artificial intelligence (AI) and cloud/fog computing are integrated to strengthen COVID-19 patient prediction. A delay-sensitive, efficient framework for the prediction of COVID-19 at an early stage is proposed. A novel similarity measure-based random forest classifier is proposed to increase the efficiency of the framework. Findings: The performance of the framework is checked against various quality of service parameters such as delay, network usage, RAM usage, and energy consumption, whereas classification accuracy, recall, precision, kappa statistic, and root mean square error are used for the proposed classifier. Results show the effectiveness of the proposed framework. Originality/value: AI and cloud/fog computing are integrated to strengthen COVID-19 patient prediction. A novel similarity measure-based random forest classifier with more than 80% accuracy is proposed to increase the efficiency of the framework.
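For readers unfamiliar with the evaluation metrics listed above, the sketch below trains a standard scikit-learn random forest (not the authors' similarity measure-based variant) on synthetic data and reports accuracy, recall, precision, the kappa statistic, and RMSE.

```python
# Evaluation sketch with a plain random forest and synthetic data; the authors'
# similarity measure-based classifier and real patient records are not used here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             mean_squared_error, precision_score, recall_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("kappa    :", cohen_kappa_score(y_test, pred))
print("rmse     :", mean_squared_error(y_test, pred) ** 0.5)
```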


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2829
Author(s):  
Rezoan Ahmed Nazib ◽  
Sangman Moh

Owing to automation trends, research on wireless sensor networks (WSNs) has become prevalent. In addition to static sinks, ground and aerial mobile sinks have become popular for data gathering because WSNs are deployed in hard-to-reach or infrastructure-less areas. Consequently, several data-gathering mechanisms in WSNs have been investigated, and the sink type plays a major role in energy consumption and other quality of service parameters such as packet delivery ratio, delay, and throughput. However, data-gathering schemes based on different sink types in WSNs have not been comparatively reviewed before. This paper reviews such data-gathering frameworks based on three different types of sinks (static, ground mobile, and aerial mobile sinks), analyzing them both qualitatively and quantitatively. First, we examine the frameworks by discussing their working principles, advantages, and limitations, followed by a qualitative comparative study based on their main ideas, optimization criteria, and performance evaluation parameters. Next, we present a simulation-based quantitative comparison of three representative data-gathering schemes, one from each category. Simulation results are shown in terms of energy efficiency, number of dead nodes, number of exchanged control packets, and packet drop ratio. Finally, lessons learned from the investigation are summarized and recommendations are made.
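Two of the quantitative metrics compared in the simulations, packet delivery (or drop) ratio and the number of dead nodes, can be stated very simply; the sketch below uses hypothetical per-network counts and residual energies.

```python
# Hypothetical illustration of two WSN evaluation metrics used above.

def packet_delivery_ratio(packets_sent, packets_received):
    """Fraction of generated packets that reached the sink."""
    return packets_received / packets_sent if packets_sent else 0.0

def dead_nodes(residual_energy, threshold=0.0):
    """Number of nodes whose remaining energy (in J) is at or below threshold."""
    return sum(1 for e in residual_energy if e <= threshold)

if __name__ == "__main__":
    pdr = packet_delivery_ratio(packets_sent=5000, packets_received=4620)
    print(f"PDR: {pdr:.3f}, packet drop ratio: {1 - pdr:.3f}")
    print("dead nodes:", dead_nodes([0.0, 1.2, 0.0, 3.4, 0.5]))
```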

