An Adaptive Cloud Monitoring Framework Based on Sampling Frequency Adjusting

2020 ◽  
Vol 16 (2) ◽  
pp. 12-26
Author(s):  
Dongbo Liu ◽  
Zhichao Liu

In a cloud platform, the monitoring service has become a necessary piece of infrastructure for managing resources and delivering the desired quality of service (QoS). Although many monitoring solutions have been proposed in recent years, mitigating the overhead of the monitoring service remains an open issue. This article presents an adaptive monitoring framework in which a traffic prediction model estimates short-term traffic overhead. Based on this prediction model, a novel algorithm is proposed to dynamically change the sampling frequency of sensors, achieving a better tradeoff between monitoring accuracy and overhead. In addition, a monitoring topology optimization mechanism is incorporated, enabling more cost-effective monitoring-management decisions. The proposed framework is tested in a realistic cloud, and the results indicate that it can significantly reduce communication overhead when performing monitoring tasks for multiple tenants.
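The core control loop the abstract describes can be sketched as follows. This is a minimal illustration only, not the authors' algorithm: the function name, the doubling/halving policy, and the overhead budget are all assumptions standing in for the paper's prediction-driven adjustment.

```python
def adjust_sampling_interval(predicted_traffic, budget, interval,
                             min_interval=1.0, max_interval=60.0):
    """Lengthen the sensor sampling interval when predicted monitoring
    traffic exceeds the overhead budget, and shorten it when there is
    ample headroom, trading accuracy against overhead."""
    if predicted_traffic > budget:
        # Back off: halve the sampling frequency (double the interval).
        interval = min(interval * 2.0, max_interval)
    elif predicted_traffic < 0.5 * budget:
        # Plenty of headroom: sample more often for better accuracy.
        interval = max(interval / 2.0, min_interval)
    return interval
```

A real implementation would feed this from the short-term traffic prediction model rather than a single scalar, but the accuracy/overhead tradeoff is the same.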

2019 ◽  
Vol 15 (4) ◽  
pp. 31-45
Author(s):  
Peng Xiao ◽  
Dongbo Liu

In cloud environments, the goal of a performance monitoring service is to obtain full knowledge of the underlying resources. However, managing information about heterogeneous resources efficiently remains a challenging task, especially when user-specific quality-of-service (QoS) requirements must be considered. Although many cloud monitoring solutions have been proposed in recent years, most of them only passively raise an alert when a QoS violation occurs. In this article, the authors present a novel cloud monitoring framework in which enhanced QoS is supported through three mechanisms: proactive service-level agreement (SLA) violation prediction, an SLA ranking service, and a multi-tenant resource monitoring mechanism. Extensive experiments are conducted on a realistic cloud platform, and the results indicate that the proposed framework provides better QoS support than existing monitoring solutions. It also exhibits desirable scalability and adaptiveness across a wide range of experimental scenarios.
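Proactive SLA violation prediction, as opposed to passive alerting, means raising the alarm before the threshold is crossed. A minimal sketch of that idea, using plain linear-trend extrapolation over recent metric samples (the paper's actual predictor is not specified here; the function and horizon parameter are illustrative assumptions):

```python
def predict_violation(samples, threshold, horizon):
    """Fit a least-squares linear trend to recent metric samples and
    flag a proactive SLA alert if the extrapolated value crosses the
    threshold within `horizon` future steps."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples)) / denom
    # Extrapolate `horizon` steps past the last observed sample.
    forecast = mean_y + slope * (n - 1 + horizon - mean_x)
    return forecast > threshold
```

A rising latency series would trip the alert before any sample actually exceeds the threshold, which is precisely the difference from the passive schemes the abstract criticizes.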


2021 ◽  
Vol 251 ◽  
pp. 02052
Author(s):  
Robert Currie ◽  
Wenlong Yuan

To optimise the performance of distributed compute, smaller lightweight storage caches are needed that integrate with existing grid computing workflows. A good solution for providing lightweight storage caches is an XRootD proxy cache. To support distributed lightweight XRootD proxy services across GridPP, we have developed a centralised monitoring framework for XRootD storage instances: with the v5 release of XRootD, it is possible to collect distributed caching metadata broadcast from multiple sites. This monitoring solution builds upon the experience reported by CMS in setting up a similar service as part of their AAA system. The new framework is designed to provide remote monitoring of the behaviour, performance, and reliability of distributed XRootD services across the UK, and effort has been made to simplify deployment by remote site administrators. The result of this work is an interactive dashboard that enables administrators to access real-time metrics on the performance of their lightweight storage systems. This monitoring framework is intended to supplement existing functionality and availability-testing metrics by providing detailed information and logging from a site perspective.
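The central collector's job, reduced to its essentials, is rolling per-site cache reports into dashboard summaries. The sketch below is a hypothetical illustration: the report field names (`site`, `bytes_served`, `cache_hits`, `cache_misses`) are invented for the example and are not the actual XRootD monitoring schema.

```python
def aggregate_site_reports(reports):
    """Roll per-site cache reports (hypothetical fields) into per-site
    totals and a hit rate, as a central dashboard might display them."""
    summary = {}
    for r in reports:
        s = summary.setdefault(r["site"],
                               {"bytes_served": 0, "hits": 0, "misses": 0})
        s["bytes_served"] += r["bytes_served"]
        s["hits"] += r["cache_hits"]
        s["misses"] += r["cache_misses"]
    for s in summary.values():
        total = s["hits"] + s["misses"]
        # Guard against sites that have reported no lookups yet.
        s["hit_rate"] = s["hits"] / total if total else 0.0
    return summary
```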


2020 ◽  
pp. 263-285
Author(s):  
Badia Bouhdid ◽  
Wafa Akkari ◽  
Sofien Gannouni

While existing localization approaches mainly focus on enhancing accuracy, particular attention has recently been given to reducing the implementation cost of localization algorithms. To strike a tradeoff between location accuracy and implementation cost, recursive localization approaches are being pursued as a cost-effective alternative to more expensive localization schemes. In the recursive approach, localization information increases progressively as new nodes compute their positions and themselves become reference nodes. A strategy is then required to control and maintain the distribution of these new reference nodes; the lack of such a strategy leads, especially in high-density networks, to wasted energy, significant communication overhead, and even degraded localization accuracy. In this paper, the authors propose an efficient recursive localization approach that reduces energy consumption, execution time, and communication overhead while increasing localization accuracy through an adequate distribution of reference nodes within the network.
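One simple way to control the distribution of newly promoted reference nodes is a spacing rule: a freshly localized node only becomes a reference if no existing reference is already nearby. This sketch is an illustrative density-control strategy under that assumption, not the specific mechanism proposed in the paper:

```python
import math

def promote_if_sparse(candidate, references, min_spacing):
    """Promote a newly localized node (an (x, y) position) to reference
    status only if no existing reference lies within `min_spacing`,
    avoiding redundant reference broadcasts in dense regions."""
    for ref in references:
        if math.dist(candidate, ref) < min_spacing:
            return False  # region already covered; stay a plain node
    references.append(candidate)
    return True
```

Under such a rule the reference set grows roughly uniformly across the deployment area, which is what limits the energy and communication cost in high-density networks.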


Author(s):  
Oliver Faust ◽  
Ningrong Lei ◽  
Eng Chew ◽  
Edward J. Ciaccio ◽  
U Rajendra Acharya

Aim: In this study we investigate the problem of cost-effective wireless heart health monitoring from a service design perspective. Subject and Methods: There is a great medical and economic need to support the diagnosis of a wide range of debilitating, and indeed fatal, non-communicable diseases, such as Cardiovascular Disease (CVD), Atrial Fibrillation (AF), diabetes, and sleep disorders. To address this need, we put forward the idea that the combination of Heart Rate (HR) measurements, the Internet of Things (IoT), and advanced Artificial Intelligence (AI) forms a Heart Health Monitoring Service Platform (HHMSP). This service platform can be used for multi-disease monitoring, where a distinct service meets the needs of patients with a specific disease. The service functionality is realized by combining common and distinct modules, forming the technological basis for a hybrid diagnosis process in which machines and practitioners work cooperatively to improve outcomes for patients. Results: Human checks and balances on independent machine decisions maintain the safety and reliability of the diagnosis. Cost efficiency comes from efficient signal processing and from replacing manual analysis with AI-based machine classification. To show the practicality of the proposed service platform, we implemented an AF monitoring service. Conclusion: Having common modules allows us to harvest economies of scale; that is an advantage because the fixed infrastructure cost is shared among a large group of customers. Distinct modules define which AI models are used and how communication with practitioners, caregivers, and patients is handled. This makes the proposed HHMSP agile enough to address the safety, reliability, and functionality needs of healthcare providers.
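To make the AF monitoring service concrete: AF screening from HR data commonly rests on detecting irregular RR intervals. The check below is a deliberately simple illustrative stand-in (an RMSSD threshold, with an invented cutoff value) for the AI classifier module the platform would actually deploy:

```python
def rr_irregularity(rr_intervals_ms, rmssd_threshold=100.0):
    """Flag possible atrial fibrillation from a sequence of RR
    intervals (milliseconds) using the RMSSD of successive
    differences -- high beat-to-beat variability suggests an
    irregular rhythm. Threshold is illustrative, not clinical."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return rmssd > rmssd_threshold
```

In the platform's hybrid process, a positive flag like this would be routed to a practitioner for confirmation rather than acted on autonomously, which is the human check the Results section describes.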


Author(s):  
Yijun Lu ◽  
Hong Jiang ◽  
Ying Lu

Consistency control is important in replication-based Grid systems because it provides QoS guarantees. However, conventional consistency control mechanisms incur high communication overhead and are ill-suited to large-scale dynamic Grid systems. In this chapter, the authors propose CVRetrieval (Consistency View Retrieval) to provide a quantitative scalability improvement in consistency control for large-scale, replication-based Grid systems. Based on the observation that not all participants are equally active or engaged in distributed online collaboration, CVRetrieval differentiates the notions of consistency maintenance and consistency retrieval. Here, consistency maintenance implies a protocol that periodically communicates with all participants to maintain a certain consistency level, whereas consistency retrieval means that passive participants explicitly request consistent views from the system when the need arises, instead of joining the expensive consistency maintenance protocol all the time. The rationale is that it is much more cost-effective to satisfy a passive participant's needs on demand. The evaluation of CVRetrieval is done in two parts. First, a scalability analysis shows that CVRetrieval can greatly reduce communication cost and hence make consistency control more scalable. Second, a prototype of CVRetrieval deployed on the PlanetLab test-bed shows that active participants experience a short response time at the expense of passive participants, who may encounter a longer response time.
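The maintenance/retrieval split can be pictured with a toy service: maintenance rounds push each new view version only to the active set, while passive participants pull on demand. This is a structural sketch of the idea only; class and method names are invented, and the real protocol's versioning and messaging are far richer.

```python
class ConsistencyService:
    """Toy split between consistency maintenance (periodic pushes to
    active participants) and consistency retrieval (on-demand pulls
    by passive participants)."""

    def __init__(self):
        self.version = 0      # monotonically increasing view version
        self.active = set()   # participants in the maintenance protocol

    def update(self):
        """One maintenance round: advance the view and push it to the
        active participants only; returns what was sent to whom."""
        self.version += 1
        return {peer: self.version for peer in self.active}

    def retrieve(self, peer):
        """A passive participant explicitly requests the latest
        consistent view, paying the cost only when it needs one."""
        return self.version
```

Because maintenance traffic scales with the (small) active set rather than all participants, communication cost drops, while passive readers trade that saving for a potentially slower on-demand retrieval, matching the PlanetLab observation above.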


Floods are among the most damaging natural disasters. A flood can result in a huge loss of human lives and property, affect agricultural lands, and destroy cultivated crops and trees. Floods can occur as a result of surface runoff formed from melting snow, prolonged rains, inadequate drainage of rainwater, or the collapse of dams. Today, rivers and lakes have been degraded and natural water-storage areas turned into buildings and construction land. Flash floods can develop within a few hours, far more quickly than a regular flood. Research in flood prediction has advanced to reduce the loss of human life, property damage, and the various other problems a flood brings. Machine learning methods are widely used to build efficient prediction models for weather forecasting, providing cost-effective solutions and better performance. In this paper, a prediction model is constructed using rainfall data to predict the occurrence of floods due to rainfall. The model predicts whether a flood may happen or not based on the rainfall range for particular locations. Indian district rainfall data is used to build the prediction model. The dataset is trained with algorithms such as Linear Regression, K-Nearest Neighbor, Support Vector Machine, and Multilayer Perceptron (MLP). Among these, the MLP algorithm performed best, with the highest accuracy of 97.40%. The MLP flash flood prediction model can be useful for climate scientists to predict floods during heavy downpours.
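The classification task itself is easy to picture: map a rainfall amount to a flood/no-flood label from historical examples. The sketch below uses a pure-Python k-nearest-neighbour classifier, one of the baselines the paper compares, on synthetic numbers; the training values, function name, and `k` are illustrative, and the paper's best model was the MLP rather than k-NN.

```python
def knn_flood_predict(train_rainfall, labels, rainfall_mm, k=3):
    """Predict 1 (flood) or 0 (no flood) for a rainfall reading by
    majority vote among the k closest historical rainfall values."""
    ranked = sorted(zip(train_rainfall, labels),
                    key=lambda p: abs(p[0] - rainfall_mm))
    votes = [label for _, label in ranked[:k]]
    # Majority vote over the k nearest neighbours.
    return 1 if sum(votes) * 2 > len(votes) else 0
```

The real model is trained on Indian district rainfall data with many more features and examples, but the input/output contract, a rainfall range in and a flood flag out, is the same.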

