Data Caching at Fog Nodes Under IoT Networks: Review of Machine Learning Approaches

2020 ◽
Author(s):  
Riya Tapwal ◽  
Nitin Gupta ◽  
Qin Xin

IoT devices (wireless sensors, actuators, computing devices) produce a large volume and variety of data, and the data they produce are transient. To overcome the limitation of the traditional IoT architecture, in which data are sent to the cloud for processing, an emerging technology known as fog computing has recently been proposed. Fog computing brings storage, computation, and control close to the end devices; it complements the cloud and provides services to IoT devices. Hence, data used by the IoT devices should be cached at the fog nodes in order to reduce bandwidth utilization and latency. This chapter discusses the utility of data caching at fog nodes. Further, various machine learning techniques can reduce latency by caching data near the IoT devices, based on predictions of their future demands. Therefore, this chapter also discusses machine learning techniques that can be used to extract the relevant data and predict future requests of IoT devices.
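
As an illustration of the caching idea summarized above, the following minimal Python sketch (not taken from the chapter; the feature choice and the synthetic request log are assumptions) trains a logistic-regression model to predict which data items will be requested in the next time window and caches the top-k items at the fog node.

```python
# Minimal sketch of demand-driven caching at a fog node: predict next-window
# requests from recent request counts, then cache the most likely items.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_items, n_windows, cache_size = 50, 200, 8

# Synthetic, Zipf-like request log: a few items are requested far more often.
popularity = 1.0 / np.arange(1, n_items + 1)
popularity /= popularity.sum()
requests = rng.choice(n_items, size=(n_windows, 30), p=popularity)

def window_counts(w):
    # Per-item request counts observed in window w.
    return np.bincount(requests[w], minlength=n_items)

# Features per item: request counts in the last three windows.
# Label: whether the item is requested in the next window.
X, y = [], []
for w in range(3, n_windows - 1):
    recent = np.stack([window_counts(w - d) for d in (1, 2, 3)], axis=1)
    X.append(recent)
    y.append(window_counts(w + 1) > 0)
X, y = np.vstack(X), np.concatenate(y)

model = LogisticRegression().fit(X, y)

# Cache decision for the latest window: keep the items most likely to be requested.
latest = np.stack([window_counts(n_windows - 1 - d) for d in (0, 1, 2)], axis=1)
scores = model.predict_proba(latest)[:, 1]
cached_items = np.argsort(scores)[::-1][:cache_size]
print("items cached at the fog node:", sorted(cached_items.tolist()))
```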


Author(s):  
Omar Farooq ◽  
Parminder Singh

Introduction: The emergence of concepts such as Big Data, Data Science, Machine Learning (ML), and the Internet of Things (IoT) has expanded the potential of research in today's world. The continuous use of IoT devices and sensors that collect data around the clock puts tremendous pressure on the existing IoT network. Materials and Methods: This resource-constrained IoT environment is flooded with data acquired from millions of IoT nodes deployed at the device level, and the limited resources of the IoT network have driven researchers towards data management. This paper focuses on data classification at the device, edge/fog, and cloud levels using machine learning techniques. Results: The data coming from different devices is vast and varied; therefore, it is essential to choose the right approach for classification and analysis. Doing so helps optimize data at the device and edge/fog levels and improves the network's future performance. Conclusion: This paper presents data classification, machine learning approaches, and a proposed mathematical model for the IoT environment.
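
The tiered classification idea can be made concrete with the following minimal sketch (an assumption of one possible scheme, not the paper's proposed model): a small decision tree at the edge/fog level classifies readings locally and forwards only low-confidence samples to a larger cloud-side model, reducing the volume of data sent upstream. The dataset and the confidence threshold are illustrative.

```python
# Minimal sketch of edge/cloud tiered classification with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Lightweight model for the edge/fog node, heavier model for the cloud.
edge_model = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_train, y_train)
cloud_model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# The edge decides locally when confident; otherwise the sample goes to the cloud.
edge_conf = edge_model.predict_proba(X_test).max(axis=1)
to_cloud = edge_conf < 0.9

pred = edge_model.predict(X_test)
if to_cloud.any():
    pred[to_cloud] = cloud_model.predict(X_test[to_cloud])

print(f"forwarded to cloud: {to_cloud.mean():.0%} of samples")
print(f"tiered accuracy:    {(pred == y_test).mean():.2f}")
```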


One of the most dynamic and invigorating advancements in information technology is the advent of the Internet of Things (IoT). The IoT is a network of interrelated computational and digital devices with the intelligence to transfer data. Despite the swift worldwide expansion of IoT devices, the security of these things has not reached the expected level. Because of the ubiquitous nature of the IoT environment, most users lack the expertise or willingness to secure their devices themselves. Machine learning approaches can be very effective in addressing security challenges in the IoT environment, and in recent related papers researchers have applied machine learning techniques, approaches, or methods to secure things in IoT environments. This paper reviews the related research on machine learning approaches to securing IoT devices.


Author(s):  
A. V. Deorankar ◽  
Shiwani S. Thakare

The IoT is a network that connects and communicates with billions of devices through the internet, and because of the massive use of IoT devices, data shared between devices or over the network is not confidential given the growing number of cyberattacks. Network traffic through IoT systems is growing rapidly and introducing new cybersecurity challenges, since these IoT devices are connected to sensors that are in turn directly connected to large-scale cloud servers. To reduce these cyberattacks, developers need new techniques for detecting infected IoT devices. In this work, a fog layer is introduced to control these cyberattacks and maintain the security of data on the cloud; the working of the fog layer and different anomaly detection techniques to prevent cyberattacks are also studied. The proposed AD-IoT can significantly detect malicious behavior using machine-learning-based anomaly classification before data is distributed to the cloud layer. This work discusses the role of machine learning techniques in identifying the type of cyberattack. Two ML techniques, RF and MLP, are evaluated on the UNSW-NB15 dataset, and their accuracy and false alarm rate are assessed. The results reveal the superiority of RF over MLP: the accuracies achieved by RF and MLP are 98% and 53%, respectively, a large difference that establishes RF as the more effective algorithm for both binary and multi-class classification.
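
A minimal sketch of the RF vs. MLP comparison described above is shown below. It is not the authors' exact pipeline: the CSV path and the 'attack_cat'/'label' column names follow the public UNSW-NB15 training split, but should be treated as assumptions about the local copy of the data, and the feature engineering is deliberately simplified.

```python
# Minimal sketch: Random Forest vs. MLP for binary intrusion detection on UNSW-NB15.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("UNSW_NB15_training-set.csv")   # hypothetical local path to the dataset

# Binary task: normal (0) vs. attack (1). Categorical fields are dropped for brevity.
y = df["label"]
X = df.drop(columns=["label", "attack_cat"]).select_dtypes("number")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# MLP benefits from feature scaling; trees do not need it.
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

rf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=42).fit(X_train_s, y_train)

print("RF  accuracy:", accuracy_score(y_test, rf.predict(X_test)))
print("MLP accuracy:", accuracy_score(y_test, mlp.predict(X_test_s)))
```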


2021 ◽  
Vol 297 ◽  
pp. 01073
Author(s):  
Sabyasachi Pramanik ◽  
K. Martin Sagayam ◽  
Om Prakash Jena

Cancer has been described as a diverse illness with several distinct subtypes that may occur simultaneously. As a result, early detection and prediction of cancer type have become essential in cancer research, since they can help improve the clinical management of cancer patients. The significance of classifying cancer patients into high- or low-risk groups has prompted many research teams from the bioscience and genomics fields to investigate the use of machine learning (ML) algorithms in cancer diagnosis and treatment. These methods have therefore been applied with the goal of modelling the development and treatment of malignant disease in humans. Furthermore, the capacity of machine learning techniques to identify important characteristics in complicated datasets demonstrates the significance of these technologies, which include Bayesian networks and artificial neural networks, along with a number of other approaches. Decision Trees and Support Vector Machines, which have already been used extensively in cancer research to build predictive models, also support accurate decision making. The application of machine learning techniques can undoubtedly enhance our knowledge of cancer development; nevertheless, a sufficient degree of validation is required before these approaches can be considered for use in daily clinical practice. This paper presents an overview of current machine learning approaches used in modelling cancer development. All of the supervised machine learning approaches described here, along with a variety of input characteristics and data samples, are used to build the prediction models. In light of the increasing use of machine learning methods in biomedical research, we survey the most recent papers that have used these approaches to predict cancer risk or patient outcomes, in order to better understand cancer.
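
To make the kind of supervised models discussed above concrete, the following minimal sketch trains a Decision Tree and an SVM on the public scikit-learn breast cancer dataset. It is purely an illustration under these assumptions, not a reproduction of any study covered by the review.

```python
# Minimal sketch: Decision Tree and SVM for a binary cancer classification task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}

# 5-fold cross-validation gives a first, rough estimate of predictive accuracy;
# as the review stresses, far more validation is needed before clinical use.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```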


Metagenomics ◽  
2017 ◽  
Vol 1 (1) ◽  
Author(s):  
Hayssam Soueidan ◽  
Macha Nikolski

Owing to the complexity and variability of metagenomic studies, modern machine learning approaches have seen increased usage to answer a variety of questions spanning the full range of metagenomic NGS data analysis. We review here the contribution of machine learning techniques to the field of metagenomics by presenting known successful approaches in a unified framework. This review focuses on five important metagenomic problems: OTU clustering, binning, taxonomic profiling and assignment, comparative metagenomics, and gene prediction. For each of these problems, we identify the most prominent methods, summarize the machine learning approaches used, and put them into perspective with respect to similar methods. We conclude our review by looking further ahead at the challenge posed by the analysis of interactions within microbial communities and across different environments, in a field one could call “integrative metagenomics”.
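
One of the problems named above, binning, is often approached with unsupervised learning over sequence composition. The following minimal sketch clusters contigs by their tetranucleotide (4-mer) frequencies with k-means; the contig sequences are synthetic placeholders, and real binning tools additionally use coverage information and much longer contigs.

```python
# Minimal sketch: composition-based binning of contigs via k-mer profiles + k-means.
from collections import Counter
from itertools import product
import random

import numpy as np
from sklearn.cluster import KMeans

random.seed(0)
KMERS = ["".join(p) for p in product("ACGT", repeat=4)]   # all tetranucleotides

def kmer_profile(seq, k=4):
    # Normalized k-mer frequency vector for one contig.
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vec = np.array([counts[m] for m in KMERS], dtype=float)
    return vec / vec.sum()

def synth_contig(gc):
    # Synthetic contig with a given GC content (stand-in for real assembly output).
    probs = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]   # A, C, G, T
    return "".join(random.choices("ACGT", weights=probs, k=1000))

# Two synthetic "genomes" with different GC content, fragmented into contigs.
contigs = [synth_contig(0.30) for _ in range(20)] + [synth_contig(0.65) for _ in range(20)]
X = np.array([kmer_profile(c) for c in contigs])

bins = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("bin assignments:", bins.tolist())
```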


Author(s):  
S. Prasanthi ◽  
S. Durga Bhavani ◽  
T. Sobha Rani ◽  
Raju S. Bapi

The vast majority of successful drugs or inhibitors achieve their activity by binding to, and modifying the activity of, a protein, which leads to the concept of druggability. A target protein is druggable if it has the potential to bind drug-like molecules. Hence, kinase inhibitors need to be studied to understand what determines the specificity of a kinase inhibitor for a particular kinase target. In this paper we focus on human kinase drug-target sequences, since kinases are known to be potential drug targets; we also carry out a preliminary analysis of kinase inhibitors in order to study the problem in the protein-ligand space in future work. The identification of druggable kinases is treated as a classification problem in which druggable kinases form the positive dataset and non-druggable kinases form the negative dataset. The classification problem is addressed using machine learning techniques such as the support vector machine (SVM) and decision tree (DT) with sequence-specific features. One of the challenges of this classification problem is the unbalanced data: only 48 druggable kinases are available against 509 non-druggable kinases in UniProt. The accuracy obtained with the decision tree classifier is 57.65%, which is not satisfactory, so a two-tier architecture of decision trees is carefully designed such that recognition on the non-druggable dataset is also improved. The overall model is thus shown to achieve a final accuracy of 88.37%. To the best of our knowledge, kinase druggability prediction using machine learning approaches has not previously been reported in the literature.
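
The class imbalance described above (roughly 48 positives against 509 negatives) is central to this problem. The following minimal sketch shows one standard way to handle it, class-weighted SVM and decision tree classifiers; it does not reproduce the paper's two-tier architecture, and the features are synthetic stand-ins for the sequence-specific features used by the authors.

```python
# Minimal sketch: imbalanced binary classification (druggable vs. non-druggable)
# with class-weighted SVM and decision tree models.
from sklearn.datasets import make_classification
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Roughly 48 positive (druggable) vs. 509 negative (non-druggable) examples.
X, y = make_classification(n_samples=557, n_features=20,
                           weights=[509 / 557], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

classifiers = [
    ("SVM", SVC(class_weight="balanced")),
    ("Decision Tree", DecisionTreeClassifier(class_weight="balanced",
                                             max_depth=4, random_state=0)),
]
for name, clf in classifiers:
    clf.fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: balanced accuracy {score:.2f}")
```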


Author(s):  
Tolga Ensari ◽  
Melike Günay ◽  
Yağız Nalçakan ◽  
Eyyüp Yildiz

Machine learning is one of the most popular research areas, and it is commonly used in wireless communications and networks. Security and fast communication are among the key requirements for next-generation wireless networks. Machine learning techniques are becoming more important day by day, since the types, amount, and structure of data are continuously changing. Recent developments in smartphones and other devices such as drones, wearables, and sensor-equipped machines require reliable communication within Internet of Things (IoT) systems. For this purpose, artificial intelligence can increase security and reliability and manage the data generated by wireless systems. In this chapter, the authors investigate several machine learning techniques for wireless communications, including deep learning, which represents a branch of artificial neural networks.


Algorithms ◽  
2018 ◽  
Vol 11 (11) ◽  
pp. 170 ◽  
Author(s):  
Zhixi Li ◽  
Vincent Tam

Momentum and reversal effects are important phenomena in stock markets. In academia, relevant studies have been conducted for years. Researchers have attempted to analyze these phenomena using statistical methods and to give some plausible explanations. However, those explanations are sometimes unconvincing. Furthermore, it is very difficult to transfer the findings of these studies to real-world investment trading strategies due to the lack of predictive ability. This paper represents the first attempt to adopt machine learning techniques for investigating the momentum and reversal effects occurring in any stock market. In the study, various machine learning techniques, including the Decision Tree (DT), Support Vector Machine (SVM), Multilayer Perceptron Neural Network (MLP), and Long Short-Term Memory Neural Network (LSTM) were explored and compared carefully. Several models built on these machine learning approaches were used to predict the momentum or reversal effect on the stock market of mainland China, thus allowing investors to build corresponding trading strategies. The experimental results demonstrated that these machine learning approaches, especially the SVM, are beneficial for capturing the relevant momentum and reversal effects, and possibly building profitable trading strategies. Moreover, we propose the corresponding trading strategies in terms of market states to acquire the best investment returns.
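
The general approach described above can be sketched as follows: predict whether recent momentum continues or reverses from past-return features. In this minimal illustration the prices are a synthetic random walk, only three of the four model families (DT, SVM, MLP) are shown, and the lookback window is an assumption; it is not the authors' setup or data.

```python
# Minimal sketch: predicting next-period direction from past returns with DT, SVM, and MLP.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=3000)))   # synthetic price path
returns = np.diff(np.log(prices))

lookback = 20
# Features: the last `lookback` returns; label: direction of the next return.
X = np.array([returns[t - lookback:t] for t in range(lookback, len(returns) - 1)])
y = (returns[lookback:-1] > 0).astype(int)

# No shuffling: preserve time order to avoid look-ahead bias.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.3)

models = {
    "DT":  DecisionTreeClassifier(max_depth=3, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name} directional accuracy: {model.score(X_test, y_test):.2f}")
```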


2020 ◽  
Vol 63 (1) ◽  
pp. 47-56
Author(s):  
Himanshu Khandelwal ◽  
Shweta Shrivastava ◽  
Adity Ganguly ◽  
Abhijit Roy
