Efficient Cognitive Fog Computing for Classification of Network Cyberattacks Using Machine Learning

Author(s):  
A. V. Deorankar ◽  
Shiwani S. Thakare

IoT is a network that connects and communicates with billions of devices over the internet. Because of the massive use of IoT devices, data shared between devices or over the network is no longer confidential, owing to the growing number of cyberattacks. Network traffic through IoT systems is growing rapidly and introducing new cybersecurity challenges, since these IoT devices are connected to sensors that are in turn connected to large-scale cloud servers. To reduce these cyberattacks, developers need to devise new techniques for detecting infected IoT devices. In this work, to counter such cyberattacks, a fog layer is introduced to maintain the security of data on the cloud. The working of the fog layer and different anomaly detection techniques for preventing cyberattacks are also studied. The proposed AD-IoT can detect malicious behaviour using machine-learning-based anomaly classification before data is distributed to the cloud layer. This work discusses the role of machine learning techniques in identifying the type of cyberattack. Two ML techniques, Random Forest (RF) and Multi-Layer Perceptron (MLP), are evaluated on the UNSW-NB15 dataset. The accuracy and false alarm rate of the techniques are assessed, and the results reveal the superiority of RF over MLP. The accuracies achieved by the classifiers are 98% for RF and 53% for MLP, a large difference that establishes RF as the more efficient algorithm for both binary and multi-class classification.
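A minimal sketch of the kind of comparison the abstract describes: a Random Forest and an MLP evaluated on accuracy and false alarm rate for a binary attack/benign label. The synthetic feature matrix stands in for preprocessed UNSW-NB15 features (an assumption; the real dataset and its preprocessing are not reproduced here).

```python
# Sketch only: RF vs. MLP on a binary intrusion-detection-style task.
# X, y are synthetic stand-ins for numeric UNSW-NB15 features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=40, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    far = fp / (fp + tn)  # false alarm rate: benign samples flagged as attacks
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, false_alarm_rate={far:.3f}")
```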

2020 ◽  
Author(s):  
Riya Tapwal ◽  
Nitin Gupta ◽  
Qin Xin

IoT devices (wireless sensors, actuators, computing devices) produce a large volume and variety of data, and the data they produce are transient. To overcome the problem of the traditional IoT architecture, in which data is sent to the cloud for processing, an emerging technology known as fog computing has recently been proposed. Fog computing brings storage, computing and control near to the end devices; it complements the cloud and provides services to the IoT devices. Hence, data used by the IoT devices should be cached at the fog nodes in order to reduce bandwidth utilization and latency. This chapter discusses the utility of data caching at the fog nodes. Further, various machine learning techniques can be used to reduce latency by caching data near to the IoT devices based on predictions of their future demands. Therefore, this chapter also discusses various machine learning techniques that can be used to extract accurate data and predict future requests of IoT devices.
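An illustrative sketch (not taken from the chapter) of prediction-driven caching at a fog node: a simple regressor forecasts each item's demand in the next time window, and the items with the highest predicted demand are cached. The item names, window counts and cache size below are assumptions made for the example.

```python
# Sketch: cache the items whose next-window demand a per-item regressor predicts highest.
import numpy as np
from sklearn.linear_model import LinearRegression

# Past request counts per item over 6 time windows (rows: items, cols: windows).
history = np.array([
    [12, 15, 14, 18, 20, 22],   # temperature readings
    [30,  8,  5,  4,  3,  2],   # one-off firmware blob
    [ 9, 10, 11, 12, 13, 15],   # humidity readings
])
item_names = ["temperature", "firmware", "humidity"]
cache_size = 2

windows = np.arange(history.shape[1]).reshape(-1, 1)
predicted = []
for counts in history:
    model = LinearRegression().fit(windows, counts)           # fit demand trend per item
    predicted.append(model.predict([[history.shape[1]]])[0])  # forecast the next window

# Keep the highest-demand items at the fog node.
cached = [item_names[i] for i in np.argsort(predicted)[::-1][:cache_size]]
print("cache at fog node:", cached)
```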


2021 ◽  
Vol 309 ◽  
pp. 01024
Author(s):  
M. Sri Vidya ◽  
G. R. Sakthidharan

The Internet of Things connects various physical objects into a network that senses the physical world without human intervention. These networks compute and retrieve data through connections made by IoT device components such as sensors, protocols and addresses. The Global Positioning System (GPS) is used for localization in outdoor areas such as roads and open ground, but it cannot be used in indoor environments, so locating an object indoors is not possible with GPS. Instead, IoT devices such as Wi-Fi routers can localize objects in an indoor environment using the Received Signal Strengths (RSSs) measured from the router. RSS measurements over Wi-Fi, however, suffer from disturbances, reflections and interference. Applying outlier detection techniques to localization makes it possible to identify objects clearly despite interruptions, noise and irregular signal strengths. This paper surveys the indoor positioning setting and the techniques already used for localization, working toward an effective solution. The methods in use are compared, and the results inform further work in the indoor environment. The comparison aims to identify the most effective and accurate machine learning algorithms for indoor localization.
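A small sketch of one common pattern the paper surveys: RSS fingerprinting with an outlier filter applied before matching. The access-point layout, room labels, noise levels and contamination rate are assumptions for the example, and IsolationForest is used here only as one possible outlier detector.

```python
# Sketch: filter anomalous RSS readings, then localize by nearest fingerprint match.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Synthetic RSS fingerprints (dBm) from 4 access points, labelled by room.
rooms = np.array(["A", "A", "B", "B", "C", "C"] * 20)
base = {"A": [-40, -60, -75, -80], "B": [-70, -45, -60, -78], "C": [-80, -72, -50, -55]}
X = np.array([base[r] for r in rooms]) + rng.normal(0, 3, size=(len(rooms), 4))

# Drop measurements the outlier detector flags (reflections, interference spikes).
mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
X_clean, rooms_clean = X[mask], rooms[mask]

# Locate a new reading by matching it against the cleaned fingerprint database.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_clean, rooms_clean)
new_reading = np.array([[-42, -61, -74, -79]])
print("estimated room:", knn.predict(new_reading)[0])
```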


2019 ◽  
Vol 20 (3) ◽  
pp. 185-193 ◽  
Author(s):  
Natalie Stephenson ◽  
Emily Shane ◽  
Jessica Chase ◽  
Jason Rowland ◽  
David Ries ◽  
...  

Background: Drug discovery, the process of discovering new candidate medications, is very important for pharmaceutical industries. At its current stage, discovering new drugs is still a very expensive and time-consuming process, requiring Phase I, II and III clinical trials. Recently, machine learning techniques in Artificial Intelligence (AI), especially deep learning techniques in which a computational model is built from multiple processing layers, have been widely applied and have achieved state-of-the-art performance in fields such as speech recognition, image classification and bioinformatics. One very important application of these AI techniques is drug discovery.
Methods: We performed a large-scale literature search on existing scientific websites (e.g., ScienceDirect, arXiv) and of startup companies to understand the current status of machine learning techniques in drug discovery.
Results: Our analysis showed that the machine learning and drug discovery fields exhibit different keyword patterns. For example, keywords such as prediction, brain, discovery and treatment usually appear in drug discovery work. Also, the total number of papers applying machine learning techniques to drug discovery is increasing every year.
Conclusion: The main focus of this survey is to understand the current status of machine learning techniques in the drug discovery field within both academic and industrial settings, and to discuss their potential future applications. Several interesting patterns for machine learning techniques in drug discovery are discussed.
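For a sense of the keyword-pattern analysis the survey describes, here is a toy sketch that counts survey keywords per publication year. The titles and years below are invented placeholders, not the survey's data.

```python
# Sketch: per-year counts of the keywords the survey highlights, over a toy corpus.
from collections import Counter

papers = [
    (2016, "deep learning for drug target prediction"),          # placeholder title
    (2017, "brain imaging guided treatment discovery"),          # placeholder title
    (2018, "machine learning prediction of treatment response"), # placeholder title
]
keywords = {"prediction", "brain", "discovery", "treatment"}

per_year = {}
for year, title in papers:
    hits = Counter(w for w in title.split() if w in keywords)
    per_year[year] = per_year.get(year, Counter()) + hits

for year in sorted(per_year):
    print(year, dict(per_year[year]))
```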


2016 ◽  
Vol 27 (8) ◽  
pp. 857-870 ◽  
Author(s):  
Golrokh Mirzaei ◽  
Anahita Adeli ◽  
Hojjat Adeli

Alzheimer’s disease (AD) is a common health problem in elderly people, and there has been considerable research toward its diagnosis and early detection in the past decade. The sensitivity of the biomarkers and the accuracy of the detection techniques have been identified as the keys to an accurate diagnosis. This paper presents a state-of-the-art review of research on the diagnosis of AD based on imaging and machine learning techniques. Different segmentation and machine learning techniques used for the diagnosis of AD are reviewed, including thresholding, supervised and unsupervised learning, probabilistic techniques, atlas-based approaches, and fusion of different image modalities. More recent and powerful classification techniques, such as the enhanced probabilistic neural network of Ahmadlou and Adeli, should be investigated with the goal of improving diagnostic accuracy. A combination of different image modalities can help improve the diagnostic accuracy rate, and research is needed on combining modalities to discover multi-modal biomarkers.
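As a rough illustration of the multi-modal fusion idea the review closes with, the sketch below concatenates features from two imaging modalities before classification. All arrays are synthetic placeholders, and the SVM is a generic stand-in rather than any specific classifier reviewed in the paper.

```python
# Sketch: feature-level fusion of two modalities, then AD-vs-control classification.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
mri_features = rng.normal(size=(n, 20))   # e.g. regional volumes from segmentation (synthetic)
pet_features = rng.normal(size=(n, 15))   # e.g. regional uptake values (synthetic)
labels = rng.integers(0, 2, size=n)       # 1 = AD, 0 = control (synthetic)

# Simple fusion: concatenate modality features, then classify.
fused = np.hstack([mri_features, pet_features])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```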


2017 ◽  
Vol 3 (1) ◽  
Author(s):  
Giorgos Borboudakis ◽  
Taxiarchis Stergiannakos ◽  
Maria Frysali ◽  
Emmanuel Klontzas ◽  
Ioannis Tsamardinos ◽  
...  

2012 ◽  
Vol 10 (10) ◽  
pp. 547
Author(s):  
Mei Zhang ◽  
Gregory Johnson ◽  
Jia Wang

A takeover success prediction model aims at predicting the probability that a takeover attempt will succeed, using publicly available information at the time of the announcement. We perform a thorough study using machine learning techniques to predict takeover success. Specifically, we model takeover success prediction as a binary classification problem, which has been widely studied in the machine learning community. Motivated by recent advances in machine learning, we empirically evaluate and analyze many state-of-the-art classifiers, including logistic regression, artificial neural networks, support vector machines with different kernels, decision trees, random forests, and AdaBoost. The experiments validate the effectiveness of applying machine learning to takeover success prediction, and we find that the support vector machine with a linear kernel and AdaBoost with stump weak classifiers perform best for the task. This result is consistent with general observations about these two approaches.
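A minimal sketch of the comparison the abstract reports: a linear-kernel SVM against AdaBoost with decision-stump weak learners on a binary "takeover succeeds" label. The feature matrix is synthetic; the paper's announcement-time deal characteristics are not reproduced here.

```python
# Sketch: linear SVM vs. AdaBoost (default weak learner is a depth-1 stump) on a binary task.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=12, random_state=1)

models = {
    "linear SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "AdaBoost (stumps)": AdaBoostClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: accuracy={acc:.3f}")
```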


Author(s):  
Prince Golden ◽  
Kasturi Mojesh ◽  
Lakshmi Madhavi Devarapalli ◽  
Pabbidi Naga Suba Reddy ◽  
Srigiri Rajesh ◽  
...  

In this era of cloud computing and machine learning, where every kind of work is being automated by machine learning techniques running on cloud servers to complete it more efficiently and quickly, what needs to be addressed is how we are changing our education systems and minimizing their problems with all these advancements in technology. One of the prominent issues facing students has always been graduate admissions and the colleges they should apply to. It has always been difficult to decide which university or college to apply to on the basis of marks obtained during undergraduate study, since applying to many universities at once is not only tedious and time-consuming but also expensive. Many machine learning solutions have therefore emerged in recent years to tackle this problem and provide predictions, estimates and consultancy so that students can easily decide to apply to the universities offering the highest chances of admission. In this paper, we review the machine learning techniques that are prevalent and provide accurate predictions regarding university admissions. We compare the different regression models and machine learning methodologies used by other authors, such as Random Forest, Linear Regression, Stacked Ensemble Learning, Support Vector Regression, Decision Trees and KNN (K-Nearest Neighbor), and try to reach a conclusion as to which technique provides better accuracy.
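A small sketch of the kind of comparison reviewed in the paper: several regressors predicting an admission-chance score. The features (GRE, TOEFL, CGPA) and the synthetic target are assumptions for the example; the reviewed works use real applicant records.

```python
# Sketch: comparing regressors on a synthetic "chance of admit" prediction task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 400
gre = rng.uniform(290, 340, n)
toefl = rng.uniform(90, 120, n)
cgpa = rng.uniform(6.5, 10.0, n)
X = np.column_stack([gre, toefl, cgpa])
# Synthetic admission chance in [0, 1], loosely driven by the three scores.
y = np.clip(0.01 * (gre - 290) + 0.005 * (toefl - 90) + 0.08 * (cgpa - 6.5)
            + rng.normal(0, 0.03, n), 0, 1)

models = {
    "Linear Regression": LinearRegression(),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVR": SVR(),
    "Decision Tree": DecisionTreeRegressor(random_state=0),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {r2:.3f}")
```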

