A Review of Machine Learning Algorithms for Cloud Computing Security

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1379
Author(s):  
Umer Ahmed Butt ◽  
Muhammad Mehmood ◽  
Syed Bilal Hussain Shah ◽  
Rashid Amin ◽  
M. Waqas Shaukat ◽  
...  

Cloud computing (CC) is the on-demand availability of network resources, especially data storage and processing power, without direct, active management by the user. CC has recently emerged as a set of public and private datacenters that offer clients a single platform across the Internet. Edge computing is an evolving computing paradigm that brings computation and data storage closer to end users to improve response times and save bandwidth. Mobile CC (MCC) uses distributed computing to deliver applications to mobile devices. However, CC and edge computing face security challenges, including vulnerabilities for clients and organizations, that delay the rapid adoption of these computing models. Machine learning (ML) is the study of computer algorithms that improve automatically through experience. In this review paper, we present an analysis of CC security threats, issues, and solutions that utilize one or several ML algorithms. We review the different ML algorithms used to address cloud security issues, spanning supervised, unsupervised, semi-supervised, and reinforcement learning. We then compare the techniques in terms of their features, advantages, and disadvantages. Finally, we outline future research directions for securing CC models.
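
The review above spans supervised, unsupervised, semi-supervised, and reinforcement learning. As a minimal sketch of the supervised case, the following trains a random-forest classifier to flag malicious network flows; the feature names, distributions, and synthetic data are hypothetical illustrations, not taken from the paper.

```python
# Minimal sketch: supervised detection of malicious "network flows" on
# synthetic data. Features and distributions are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic flows: [bytes_sent, duration_s, failed_logins]; label 1 = malicious.
normal = rng.normal([5_000, 2.0, 0.1], [1_500, 0.5, 0.3], size=(500, 3))
attack = rng.normal([50_000, 0.2, 4.0], [10_000, 0.1, 1.0], size=(50, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```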

Recent years have seen the explosive emergence of cloud computing in industry, and it is now the need of the hour. Combining 5G wireless technology with artificial intelligence promises faster response times, lower latency, and better maintainability in computing; the cloud has never been more essential to the enterprise. This is where edge computing comes into the picture: seen as an extension of the cloud, yet distinct in a few crucial ways. By enabling data to be handled, analyzed, and moved at the edge of the network, edge computing lets enterprises collect and analyze data closer to where it is generated, in near real time and with minimal latency. It thus allows for the rapid content delivery and data processing that should be the future of computing. In this paper we extensively study the need for an edge cloud simulation environment and simulate one through EdgeCloudSim. We find that the utilization-based, fuzzy competitor-based, and hybrid approaches tend to offload tasks to the edge and therefore give better results, while the fuzzy-based approach has the lowest average service time of the three.
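
EdgeCloudSim itself is a Java framework, and its orchestrator API is not reproduced here. As a language-agnostic sketch of the fuzzy-style offloading decision evaluated above, the following scores edge versus cloud using triangular membership functions; the breakpoints and the single rule are hypothetical choices made only to show the shape of the approach.

```python
# Illustrative sketch of a fuzzy offloading decision (not EdgeCloudSim's API;
# EdgeCloudSim itself is a Java framework). Membership breakpoints are
# hypothetical, chosen only to show the shape of the rule.
def tri(x, lo, peak, hi):
    """Triangular membership: 0 at lo/hi, rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def offload_target(edge_util_pct, wan_delay_ms, task_mi):
    # Rule: offload to the edge when it has headroom AND either the task is
    # light or the WAN link to the cloud is slow; otherwise use the cloud.
    edge_free   = tri(edge_util_pct, -1, 0, 80)      # high when edge is idle
    wan_is_slow = tri(wan_delay_ms, 50, 300, 10_000) # high when WAN lags
    task_small  = tri(task_mi, -1, 0, 20_000)        # high for light tasks
    edge_score  = min(edge_free, max(task_small, wan_is_slow))
    return "edge" if edge_score >= 0.5 else "cloud"

print(offload_target(edge_util_pct=30, wan_delay_ms=250, task_mi=5_000))  # edge
print(offload_target(edge_util_pct=95, wan_delay_ms=40,  task_mi=30_000)) # cloud
```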


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To provide a review of the challenges and barriers experienced in diagnostic radiology, on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results The specificity for all the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Machine learning programs can extract radiomic information from images that is not discernible through visual examination, and may thus improve the prognostic and diagnostic value of data sets.
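
The pooled figures above are derived from 2×2 contingency tables. The sketch below shows the standard sensitivity/specificity calculation on one hypothetical table; the counts are invented for illustration and merely chosen to echo the pooled deep-learning values.

```python
# Sensitivity/specificity from a single 2x2 contingency table, as used in the
# meta-analysis above. Counts are hypothetical, purely for illustration.
def sens_spec(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)  # true-positive rate: abnormalities caught
    specificity = tn / (tn + fp)  # true-negative rate: normals correctly cleared
    return sensitivity, specificity

# Hypothetical reads: 89 of 100 abnormal and 85 of 100 normal studies correct.
sens, spec = sens_spec(tp=89, fn=11, fp=15, tn=85)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 89%, 85%
```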


Author(s):  
Joel Weijia Lai ◽  
Candice Ke En Ang ◽  
U. Rajendra Acharya ◽  
Kang Hao Cheong

Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence draws on the ability of computer algorithms and software, within allowable thresholds, to reach deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairment in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been utilized to analyze the different components of schizophrenia, such as prediction of disease and assessment of current prevention methods. These efforts are carried out in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review the progress of the use of artificial intelligence in schizophrenia.


2014 ◽  
Vol 513-517 ◽  
pp. 2107-2110 ◽  
Author(s):  
Zhi Jian Diao ◽  
Song Guo

Cloud computing is a novel network-based computing model in which the cloud infrastructure is constructed at the bottom level and provided as the supporting environment for the applications at the upper cloud level. Combining cloud computing with GIS can improve the performance of GIS, and it also opens new prospects for GIS information storage, processing, and utilization. By integrating cloud computing and GIS, this paper presents a cloud-based GIS model built around two features of cloud computing: data storage and transparent custom services. The model contains two layers, a service layer and an application layer. With this two-layer model, GIS can provide stable and efficient services to end users through optimized network allocation of the underlying data and service resources in the cloud.
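
As an illustration of the two-layer structure described above, the sketch below separates a service layer, which hides where spatial data physically lives, from an application layer that consumes it. All class and method names are hypothetical, not from the paper.

```python
# Minimal sketch of the two-layer model: a service layer abstracting cloud data
# storage, and an application layer built on top. Names are hypothetical.
class GISServiceLayer:
    """Service layer: hides where spatial data physically lives in the cloud."""
    def __init__(self):
        self._store = {}  # stand-in for distributed cloud storage

    def put_layer(self, name, features):
        self._store[name] = features

    def query(self, name, bbox):
        xmin, ymin, xmax, ymax = bbox
        return [f for f in self._store.get(name, [])
                if xmin <= f["x"] <= xmax and ymin <= f["y"] <= ymax]

class GISApplicationLayer:
    """Application layer: end-user operations composed from service-layer calls."""
    def __init__(self, service):
        self.service = service

    def points_in_view(self, layer, bbox):
        return self.service.query(layer, bbox)

svc = GISServiceLayer()
svc.put_layer("wells", [{"x": 1.0, "y": 2.0}, {"x": 9.0, "y": 9.0}])
app = GISApplicationLayer(svc)
print(app.points_in_view("wells", bbox=(0, 0, 5, 5)))  # [{'x': 1.0, 'y': 2.0}]
```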


Author(s):  
Kai-Chao Yao ◽  
Shih-Feng Fu ◽  
Wei-Tzer Huang ◽  
Cheng-Chun Wu

This article uses LabVIEW, a graphical development environment, to develop whitefly feature-identification and counting technology, together with machine learning algorithms, for whitefly monitoring, identification, and counting applications. A high-magnification CCD camera is used for on-demand image capture, and the functional programs of the LabVIEW NI-DAQ VI library and the LabVIEW NI Vision Development Module are used to develop the image recognition functions. A grayscale-value pyramid-matching algorithm performs image conversion and recognition in the machine learning mode. The graphical user interface and device hardware built here provide convenient and effective whitefly feature identification and sample counting. The monitoring technology offers remote monitoring, counting, data storage, and statistical analysis.
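
The paper's implementation lives in LabVIEW's NI Vision modules, which are not reproduced here. As a rough analogue of grayscale pyramid matching, the following OpenCV/Python sketch matches a bright template coarse-to-fine across a synthetic scene; the scene, template, and scores are illustrative only.

```python
# Coarse-to-fine grayscale template matching over an image pyramid (OpenCV),
# as a stand-in for the LabVIEW NI Vision workflow. Data is synthetic.
import cv2
import numpy as np

rng = np.random.default_rng(1)
scene = rng.integers(0, 60, size=(512, 512), dtype=np.uint8)  # noisy background
scene[100:132, 300:332] = 200                                  # bright 32x32 target
template = np.full((32, 32), 200, dtype=np.uint8)

# Coarse pass: match the downscaled pair to localize a candidate cheaply.
img_small, tpl_small = cv2.pyrDown(scene), cv2.pyrDown(template)
coarse = cv2.matchTemplate(img_small, tpl_small, cv2.TM_CCOEFF_NORMED)
_, score, _, loc = cv2.minMaxLoc(coarse)

# Fine pass: refine at full resolution in a small neighborhood of the hit.
x, y = loc[0] * 2, loc[1] * 2
roi = scene[max(y - 8, 0):y + 40, max(x - 8, 0):x + 40]
fine = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, fine_score, _, _ = cv2.minMaxLoc(fine)
print(f"coarse score {score:.2f}, refined score {fine_score:.2f}")
```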


Large volumes of data are generated and stored in many fields; such collections are called big data. In healthcare, big data includes enormous clinical datasets covering every patient record, maintained as Electronic Health Records (EHRs). More than 80% of clinical data is in unstructured format, deposited in hundreds of forms. The challenge for data storage and analysis is handling such large datasets with efficiency and scalability. The Hadoop MapReduce framework stores and operates on any kind of data at speed; it is not solely a storage system but also a platform for data processing, and it is scalable and fault-tolerant. Prediction over these datasets is handled by machine learning algorithms. This work focuses on the Extreme Learning Machine (ELM) algorithm, combined with a Cuckoo Search optimization-based Support Vector Machine (CS-SVM), as an optimized way of predicting disease risk. The proposed work also considers the scalability and accuracy of big data models; the proposed algorithm handles the computing workload well and achieves good performance in both veracity and efficiency.
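
ELM itself is simple enough to sketch: a random, fixed hidden layer plus output weights solved in closed form. The toy data below is hypothetical, and the paper's Cuckoo Search-optimized SVM component is not reproduced.

```python
# Minimal Extreme Learning Machine (ELM) sketch: random hidden layer, output
# weights solved by pseudoinverse. Toy "disease risk" data is hypothetical.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))                    # 200 patients, 5 features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic risk label

n_hidden = 50
W = rng.normal(size=(5, n_hidden))               # random input weights (fixed)
b = rng.normal(size=n_hidden)                    # random biases (fixed)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta = np.linalg.pinv(H) @ y                     # closed-form output weights

preds = (np.tanh(X @ W + b) @ beta > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```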


2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, edge computing, and machine learning/AI-at-the-edge technologies drive approaches and techniques for processing and analysing onboard instrument data in near real time. The author has used edge computing and neural networks combined with high-performance heterogeneous computing platforms to accelerate AI workloads. The heterogeneous computing hardware used is readily available and low cost, delivers impressive AI performance, and can run multiple neural networks in parallel. Collecting, processing, and learning from onboard instrument data in near real time is not a trivial problem, owing to data volumes and the complexities of data filtering, data storage, and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by giving artificial agents the ability to learn from a non-stationary, never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made with static velocity prediction program (VPP) data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from actual data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches he has researched for incremental training to realise continual learning.
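
A minimal sketch of the incremental-training idea, assuming a scikit-learn model with partial_fit as a stand-in for the author's neural networks: pretrain on VPP-like data, then keep refining from a stream of "actual" observations. The features and the data-generating function are hypothetical.

```python
# Sketch of continual refinement: pretrain on static "VPP" output, then keep
# updating from streamed observations. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

def vpp_like(n):
    """Stand-in for VPP output: wind speed (kn), wind angle (deg) -> boat speed."""
    wind = rng.uniform([4, 30], [25, 150], size=(n, 2))
    boat = 0.4 * wind[:, 0] + 0.02 * wind[:, 1]
    return wind, boat

X0, y0 = vpp_like(1000)
scaler = StandardScaler().fit(X0)             # fixed scaling, learned offline
model = SGDRegressor(random_state=0)
model.partial_fit(scaler.transform(X0), y0)   # initial training from "VPP" data

for _ in range(100):                          # continual phase: onboard stream
    X, y_vpp = vpp_like(10)
    y_real = y_vpp + rng.normal(0, 0.3, 10)   # real performance deviates from VPP
    model.partial_fit(scaler.transform(X), y_real)

print(model.predict(scaler.transform([[15.0, 90.0]])))  # updated estimate
```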


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Fathima Aliyar Vellameeran ◽  
Thomas Brindha

Abstract Objectives To provide a clear literature review of state-of-the-art heart disease prediction models. Methods The review covers 61 research papers and presents a significant analysis. Initially, the analysis addresses the contributions of each work and observes the simulation environment, noting the different types of machine learning algorithms deployed in each contribution. In addition, the datasets used by existing heart disease prediction models are examined. Results The performance measures computed across the papers, such as prediction accuracy, prediction error, specificity, sensitivity, f-measure, etc., are studied. Further, the best performance is checked to confirm the effectiveness of each contribution. Conclusions The comprehensive research challenges and gaps are portrayed, based on the development of intelligent methods and the unresolved challenges in heart disease prediction using data mining techniques.
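
All of the measures listed above can be derived from a single confusion matrix; a minimal sketch with hypothetical prediction counts follows.

```python
# The review's comparison metrics, computed from one confusion matrix.
# Counts are hypothetical, purely for illustration.
def metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy    = (tp + tn) / total
    error       = 1 - accuracy               # prediction error
    sensitivity = tp / (tp + fn)             # recall on diseased patients
    specificity = tn / (tn + fp)             # recall on healthy patients
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, error, sensitivity, specificity, f_measure

print(metrics(tp=80, tn=90, fp=10, fn=20))   # hypothetical heart-disease counts
```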


2012 ◽  
pp. 13-22 ◽  
Author(s):  
João Gama ◽  
André C.P.L.F. de Carvalho

Machine learning techniques have been successfully applied to several real-world problems in areas as diverse as image analysis, Semantic Web, bioinformatics, text processing, natural language processing, telecommunications, finance, medical diagnosis, and so forth. A particular application where machine learning plays a key role is data mining, where machine learning techniques have been extensively used for the extraction of association, clustering, prediction, diagnosis, and regression models. This text presents our personal view of the main aspects, major tasks, frequently used algorithms, current research, and future directions of machine learning research. To that end, it is organized as follows: background information concerning machine learning is presented in the second section. The third section discusses different definitions of machine learning. Common tasks faced by machine learning systems are described in the fourth section. Popular machine learning algorithms and the importance of the loss function are commented on in the fifth section. The sixth and seventh sections present current trends and future research directions, respectively.

