Efficient resource provisioning for elastic Cloud services based on machine learning techniques

Author(s):  
Rafael Moreno-Vozmediano ◽  
Rubén S. Montero ◽  
Eduardo Huedo ◽  
Ignacio M. Llorente
2020 ◽  
Vol 8 (6) ◽  
pp. 4367-4374

Ultra-elasticity is an emerging resource provisioning approach for meeting clients' requirements in a dynamic way. However, performance improvement requires scaling additional components, such as CPU and storage, and it is challenging to determine a suitable threshold for efficiently scaling resources up or down. In this paper, we propose efficient resource provisioning using hybrid machine learning techniques (ERP-HML), which focuses on jointly optimizing the energy consumption of servers and the network. The proposed resource provisioning targets ultra-elastic cloud services in a hyper-converged cloud infrastructure, in which resources such as CPU, storage, and network are virtualized and software-defined as pools to meet the current demand. The first contribution is an artificial plant optimization algorithm that improves service latency and reduces over-provisioning of elastic cloud services. The second contribution is a deep Q neural network (DQNN) for predicting a server's processing load. An improved hunting search (IHS) algorithm is then used to compute the number of resources that must be provisioned based on the predicted load. The main objective of the proposed ERP-HML method is to accurately predict the processing load of a distributed server and estimate the appropriate number of resources to provision in order to reduce energy consumption. Finally, the performance of the proposed ERP-HML method is compared with existing state-of-the-art methods in terms of energy consumption, infrastructure costs, and QoS.
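A minimal sketch may make the predict-then-provision loop concrete. The paper's DQNN predictor and IHS estimator are stood in for here by a plain MLP regressor and a ceiling rule; the synthetic workload, the window size, and the per-server capacity are illustrative assumptions, not the paper's method.

```python
# Sketch of a predict-then-provision loop. The DQNN and IHS of the paper are
# replaced by a simple MLP regressor and a ceiling rule for illustration.
import math
import numpy as np
from sklearn.neural_network import MLPRegressor

SERVER_CAPACITY = 100.0   # assumed requests/s one server can absorb
WINDOW = 6                # predict the next load from the last 6 samples

# Synthetic historical load trace (requests/s) standing in for real telemetry.
rng = np.random.default_rng(0)
load = 80 + 40 * np.sin(np.linspace(0, 12, 500)) + rng.normal(0, 5, 500)

# Build (sliding window -> next value) training pairs.
X = np.array([load[i:i + WINDOW] for i in range(len(load) - WINDOW)])
y = load[WINDOW:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Predict the next load and derive how many servers to provision.
predicted = model.predict(load[-WINDOW:].reshape(1, -1))[0]
servers = math.ceil(predicted / SERVER_CAPACITY)
print(f"predicted load: {predicted:.1f} req/s -> provision {servers} server(s)")
```

In the actual ERP-HML pipeline, the regressor would be replaced by the DQNN and the ceiling rule by the IHS search over the provisioning space.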


2021 ◽  
Vol 3 ◽  
Author(s):  
Alberto Martinetti ◽  
Peter K. Chemweno ◽  
Kostas Nizamis ◽  
Eduard Fosch-Villaronga

Policymakers need to consider the impacts that robots and artificial intelligence (AI) technologies have on humans beyond physical safety. Traditionally, the definition of safety has been interpreted to apply exclusively to risks with a physical impact on persons' safety, such as, among others, mechanical or chemical risks. However, the current understanding is that the integration of AI into cyber-physical systems such as robots, which increases interconnectivity with other devices and cloud services and shapes the growing human-robot interaction, challenges this rather narrow conceptualisation of safety. Thus, to address safety comprehensively, AI demands a broader understanding of safety that extends beyond physical interaction and covers aspects such as cybersecurity and mental health. Moreover, the expanding use of machine learning techniques will more frequently demand evolving safety mechanisms that keep pace with the substantial modifications robots undergo over time as they embed more AI features. In this sense, our contribution brings forward the different dimensions of the concept of safety, including interaction (physical and social), psychosocial, cybersecurity, temporal, and societal ones. These dimensions aim to help policy and standard makers redefine the concept of safety in light of robots and AI's increasing capabilities, including human-robot interactions, cybersecurity, and machine learning.


2018 ◽  
Vol 7 (4.1) ◽  
pp. 47
Author(s):  
Zarina Kazhmaganbetova ◽  
Shnar Imangaliyev ◽  
Altynbek Sharipbay

The objective of the work presented in this paper was the optimization of communication and the detection of computing resource performance degradation [1, 2] using machine learning techniques. Computer networks transmit payload data and meta-data from numerous sources to a vast number of destinations, especially in multi-tenant environments [3, 4]. Meta-data describes the payload and can be analyzed to detect anomalies in communication patterns. Communication patterns depend on the payload itself and on the technical protocol used. The technical patterns are the research target, as their analysis can spotlight vulnerable behavior, for example unusual traffic or extra transported load.

A large dataset was used to train a model with supervised machine learning. The dataset was collected from the network interfaces of a distributed application infrastructure. The machine learning tools were obtained from a cloud services provider, Amazon Web Services. The stochastic gradient descent technique was used for model training so that the model could represent the communication patterns in the system. The learning target parameter was packet length; regression was performed to understand the relationship between packet meta-data (timestamp, protocol, source server) and its length. The root mean square error was calculated to evaluate learning efficiency. After the model was prepared on the training dataset, it was tested with the test dataset and then applied to the target dataset (the dataset for prediction) to check whether it was capable of detecting anomalies.

The experimental part showed the applicability of machine learning to communication optimization in a distributed application environment. By means of the trained model, it was possible to predict target parameters of traffic and computing resource usage in order to avoid service degradation. Additionally, it was possible to reveal anomalies in the traffic transferred between application components. The techniques are envisioned for application in the information security field and in efficient network resource planning.

Further research could apply machine learning techniques to more complicated distributed environments and enlarge the number of protocols used to prepare communication patterns.
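A hedged sketch of the regression step described above, using scikit-learn's SGDRegressor in place of the AWS-hosted model; the column names (timestamp, protocol, source) and the toy data are illustrative assumptions.

```python
# Regress packet length on packet meta-data with stochastic gradient descent,
# then evaluate with RMSE, mirroring the workflow described in the abstract.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy stand-in for captured packet meta-data.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "timestamp": np.arange(1000, dtype=float),
    "protocol": rng.choice(["TCP", "UDP", "ICMP"], 1000),
    "source": rng.choice(["app-1", "app-2", "db-1"], 1000),
    "length": rng.integers(64, 1500, 1000).astype(float),
})

X, y = df.drop(columns="length"), df["length"]
pre = ColumnTransformer([
    ("num", StandardScaler(), ["timestamp"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["protocol", "source"]),
])
model = Pipeline([("pre", pre), ("sgd", SGDRegressor(max_iter=1000, random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"RMSE on held-out packets: {rmse:.1f} bytes")
# Packets whose actual length deviates far from the prediction are anomaly candidates.
```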


Author(s):  
Rajesh Keshavrao Sadavarte ◽  
Dr. G. D. Kurundkar

Cloud computing is gaining a lot of attention; however, security is a major obstacle to its widespread adoption. Users of cloud services are always afraid of data loss, security threats, and availability problems. Recently, with the advent of machine learning techniques, machine learning-based methods of threat detection have been gaining popularity in the literature. Therefore, the study and analysis of threat detection and prevention strategies is a necessity for cloud protection. Threat detection makes it possible to identify and report both normal and inappropriate user activities, so there is a need to develop an effective threat detection system using machine learning techniques in the cloud computing environment. In this paper, we present a survey and comparative analysis of the effectiveness of machine learning-based methods for detecting threats in a cloud computing environment. The performance assessment of these methods is carried out using tests performed on the UNSW-NB15 dataset. In this work, we analyse machine learning models that include Support Vector Machine (SVM), Decision Tree (DT), Naive Bayes (NB), Random Forest (RF), and K-Nearest Neighbour (KNN). Additionally, we use the most important performance indicators, namely accuracy, precision, recall, and F1 score, to test the effectiveness of these methods.
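A sketch of how such a comparison could be run with scikit-learn, assuming a local copy of the UNSW-NB15 training CSV and using only its numeric columns; the file name, subsampling, and preprocessing are assumptions, while the five models and four metrics follow the paper.

```python
# Compare SVM, DT, NB, RF, and KNN on UNSW-NB15 with the four metrics named above.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("UNSW_NB15_training-set.csv")   # assumed local copy
df = df.sample(20_000, random_state=0)           # subsample to keep SVM training quick
X = df.select_dtypes("number").drop(columns=["id", "label"], errors="ignore")
y = df["label"]                                  # 0 = normal, 1 = attack
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}
for name, clf in models.items():
    pred = clf.fit(X_train, y_train).predict(X_test)
    print(name,
          f"acc={accuracy_score(y_test, pred):.3f}",
          f"prec={precision_score(y_test, pred):.3f}",
          f"rec={recall_score(y_test, pred):.3f}",
          f"f1={f1_score(y_test, pred):.3f}")
```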


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, it can be shown that prediction accuracy is improved.
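As an illustration, the SVR variant of this setup might look as follows in scikit-learn; the CSV file name (a Yahoo Finance historical export), the 80/20 chronological split, and next-day close as the target are assumptions, and the LSTM and CNN models would consume the same six predictors.

```python
# Predict tomorrow's close from today's six predictors with SVR.
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("AAPL.csv")  # columns: Date, Open, High, Low, Close, Adj Close, Volume
features = ["Open", "High", "Low", "Close", "Adj Close", "Volume"]
X = df[features].iloc[:-1]           # today's predictors ...
y = df["Close"].shift(-1).iloc[:-1]  # ... paired with tomorrow's close

split = int(len(X) * 0.8)            # keep time order: no shuffling
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X.iloc[:split], y.iloc[:split])

pred = model.predict(X.iloc[split:])
print(f"test MAE: {mean_absolute_error(y.iloc[split:], pred):.2f}")
```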


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 389-P
Author(s):  
SATORU KODAMA ◽  
MAYUKO H. YAMADA ◽  
YUTA YAGUCHI ◽  
MASARU KITAZAWA ◽  
MASANORI KANEKO ◽  
...  

Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical and may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the most suitable method to obtain the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 Decision Tree, and neural network classifiers on breast cancer and diabetes datasets.
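A sketch of one such comparison, run here on scikit-learn's built-in breast cancer data with 10-fold cross-validated accuracy; the entropy-based decision tree is only an approximation of Weka's J48 (C4.5), and the diabetes dataset would be handled the same way.

```python
# Compare the four classifier families named above on the breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "J48-style tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=2000, random_state=0)),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```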

