Multipath TCP-Based IoT Communication Evaluation: From the Perspective of Multipath Management with Machine Learning

Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6573
Author(s):  
Ruiwen Ji ◽  
Yuanlong Cao ◽  
Xiaotian Fan ◽  
Yirui Jiang ◽  
Gang Lei ◽  
...  

With the development of wireless networking technology, current Internet-of-Things (IoT) devices are equipped with multiple network access interfaces. Multipath TCP (MPTCP) technology can improve the throughput of data transmission. However, traditional MPTCP path management may cause problems such as packet disorder and even buffer blocking, which severely reduce transmission performance. This research introduces machine learning algorithms into MPTCP path management and proposes an automatic learning path selection mechanism based on MPTCP (ALPS-MPTCP), which can adaptively select a subset of high-quality paths and transmit data over them simultaneously. This paper designs a simulation experiment comparing the performance of four machine learning algorithms in judging path quality. The experimental results show that, considering both running time and accuracy, the random forest algorithm performs best in judging path quality.
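A minimal sketch of the path-quality classification idea, using scikit-learn's random forest on synthetic subflow features; the feature set (RTT, loss rate, throughput) and the quality labeling rule are assumptions for illustration, not the paper's exact inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
rtt = rng.uniform(10, 300, n)       # round-trip time (ms)
loss = rng.uniform(0.0, 0.1, n)     # packet loss rate
tput = rng.uniform(1, 100, n)       # throughput (Mbit/s)
X = np.column_stack([rtt, loss, tput])
# Synthetic "high-quality path" label: low RTT, low loss, high throughput
y = ((rtt < 150) & (loss < 0.05) & (tput > 20)).astype(int)

# Train on 800 paths, hold out 200 for evaluation
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:800], y[:800])
acc = clf.score(X[800:], y[800:])
print(f"hold-out accuracy: {acc:.2f}")
```

In an ALPS-MPTCP-style scheduler, the paths predicted as high quality would then be the only subflows used for simultaneous transmission.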

Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 444 ◽  
Author(s):  
Valerio Morfino ◽  
Salvatore Rampone

In the field of Internet of Things (IoT) infrastructures, attack and anomaly detection are rising concerns. With the increased use of IoT infrastructure in every domain, threats and attacks against these infrastructures are growing proportionally. In this paper, the performance of several machine learning algorithms in identifying cyber-attacks (namely SYN-DOS attacks) against IoT systems is compared, both in terms of detection performance and in training/application times. We use supervised machine learning algorithms included in the MLlib library of Apache Spark, a fast and general engine for big-data processing. We show the implementation details and the performance of those algorithms on public datasets using a training set of up to 2 million instances. We adopt a Cloud environment, emphasizing the importance of scalability and elasticity of use. Results show that all the Spark algorithms used achieve very good identification accuracy (>99%). Overall, one of them, Random Forest, achieves an accuracy of 1. We also report a very short training time (23.22 sec for Decision Tree with 2 million rows). The experiments also show a very low application time (0.13 sec for more than 600,000 instances with Random Forest) using Apache Spark in the Cloud. Furthermore, the explicit model generated by Random Forest is very easy to implement using high- or low-level programming languages. In light of the results obtained, both in terms of computation times and identification performance, a hybrid approach for the detection of SYN-DOS cyber-attacks on IoT devices is proposed: the application of an explicit Random Forest model, implemented directly on the IoT device, along with second-level analysis (training) performed in the Cloud.
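The device-side half of the hybrid approach can be sketched as an explicit random forest exported as plain if/else rules, cheap enough to run on the IoT device itself. The flow features (`syn_rate`, `half_open_conns`, `syn_ack_ratio`) and thresholds below are invented for illustration; the paper's actual model is trained on its public datasets in Spark:

```python
def tree_1(flow):
    # Many SYNs per second from one source suggests SYN-DOS
    return 1 if flow["syn_rate"] > 100 else 0

def tree_2(flow):
    # A large backlog of half-open connections is another symptom
    return 1 if flow["half_open_conns"] > 50 else 0

def tree_3(flow):
    # SYNs far outnumbering SYN-ACK completions
    return 1 if flow["syn_ack_ratio"] > 5.0 else 0

def forest_predict(flow):
    """Majority vote over the exported trees: 1 = SYN-DOS suspected."""
    votes = tree_1(flow) + tree_2(flow) + tree_3(flow)
    return 1 if votes >= 2 else 0

normal = {"syn_rate": 3, "half_open_conns": 2, "syn_ack_ratio": 1.1}
attack = {"syn_rate": 500, "half_open_conns": 300, "syn_ack_ratio": 40.0}
print(forest_predict(normal), forest_predict(attack))  # prints: 0 1
```

Periodic retraining in the Cloud would then ship updated thresholds back down to the device.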


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1677
Author(s):  
Ersin Elbasi ◽  
Ahmet E. Topcu ◽  
Shinu Mathew

COVID-19 is a community-acquired infection with symptoms that resemble those of influenza and bacterial pneumonia. Creating an infection control policy involving isolation, disinfection of surfaces, and identification of contagions is crucial in eradicating such pandemics. Incorporating social distancing could also help stop the spread of community-acquired infections like COVID-19. Social distancing entails maintaining certain distances between people and reducing the frequency of physical contact. Meanwhile, a significant increase in the development of different Internet of Things (IoT) devices has been seen, together with cyber-physical systems that connect with physical environments. Machine learning is strengthening current technologies by adding new approaches that quickly and correctly solve problems, utilizing this surge of available IoT devices. We propose a new approach using machine learning algorithms for monitoring the risk of COVID-19 in public areas. Features extracted from IoT sensors are used as input for several machine learning algorithms, such as decision tree, neural network, naïve Bayes classifier, support vector machine, and random forest, to predict the risks of the COVID-19 pandemic and calculate the risk probability of public places. This research aims to find vulnerable populations and reduce the impact of the disease on certain groups using machine learning models. We build a model to calculate and predict the risk factors of populated areas. This model generates automated alerts for security authorities in the case of any abnormal detection. Experimental results show high accuracy: 97.32% with random forest, 94.50% with decision tree, and 99.37% with the naïve Bayes classifier. These algorithms indicate great potential for crowd risk prediction in public areas.
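A hedged sketch of such a multi-classifier comparison; the sensor-derived features (crowd count, mean inter-person distance) and the risk labels are synthetic placeholders, not the paper's dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1200
crowd = rng.integers(0, 200, n)        # people counted by IoT sensors
distance = rng.uniform(0.2, 5.0, n)    # mean inter-person distance (m)
X = np.column_stack([crowd, distance])
# Synthetic "high risk" label: dense crowd AND close contact
y = ((crowd > 80) & (distance < 1.5)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
scores = {}
for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("random forest", RandomForestClassifier(random_state=0)),
                    ("naive Bayes", GaussianNB())]:
    scores[name] = model.fit(Xtr, ytr).score(Xte, yte)
print(scores)
```

In the described system, a high-risk prediction for a monitored area would trigger the automated alert to security authorities.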


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mona Bokharaei Nia ◽  
Mohammadali Afshar Kazemi ◽  
Changiz Valmohammadi ◽  
Ghanbar Abbaspour

Purpose
The increase in the number of healthcare wearable Internet of Things (IoT) options is making it difficult for individuals, healthcare experts and physicians to find the right smart device that best matches their requirements or treatments. The purpose of this research is to propose a framework for a recommender system that advises on the best device for the patient using machine learning algorithms and social media sentiment analysis. This approach provides great value for patients, doctors, medical centers, and hospitals, enabling them to give the best advice and guidance in allocating the device for that particular stage of the treatment process.
Design/methodology/approach
This data-driven approach comprises multiple stages that lead to classifying the diseases that a patient is currently facing, or is at risk of facing, by using and comparing the results of various machine learning algorithms. The proposed recommender framework then aggregates the specifications of wearable IoT devices with the public image of each product, i.e., user perceptions extracted from social media through sentiment analysis. Lastly, a genetic algorithm is used to combine all the collected data and recommend a wearable IoT device for the patient.
Findings
The proposed conceptual framework illustrates how health record data, diseases, wearable devices, social media sentiment analysis and machine learning algorithms are interrelated to recommend the relevant wearable IoT devices for each patient. With the consultation of 15 physicians, each a specialist in their area, the proof-of-concept implementation shows an accuracy rate of up to 95% using 17 settings of machine learning algorithms over multiple disease-detection stages. Social media sentiment analysis achieved 76% accuracy. To reach the final optimized result for each patient, the proposed genetic-algorithm-based formula has been tested and its results presented.
Research limitations/implications
The research data were limited to recommendations of the best wearable devices for five types of patient diseases. The authors could not compare the results of this research with other studies because of the novelty of the proposed framework and, as such, the lack of available relevant research.
Practical implications
The emerging trend of wearable IoT devices is having a significant impact on people's lifestyles, with interest in healthcare and well-being a major driver of this growth. This framework can help accelerate the transformation of smart hospitals and can assist doctors in finding and suggesting the right wearable IoT device for their patients smartly and efficiently during treatment for various diseases. Furthermore, wearable device manufacturers can use the outcome of the proposed platform to develop personalized wearable devices for patients in the future.
Originality/value
By considering patient health, a disease-detection algorithm, wearable and IoT social media sentiment analysis, and a healthcare wearable device dataset, this study proposes and tests a framework for the intelligent recommendation of wearable IoT devices, helping healthcare professionals and patients find wearable devices with a better understanding of their demands and experiences.
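The final genetic-algorithm ranking step can be sketched as follows. Everything here is invented for illustration: the devices with their spec-match and sentiment scores, the synthetic target weighting standing in for physician feedback, and the fitness form are not the paper's actual data or formula:

```python
import random
random.seed(0)

# Candidate wearables: (specification-match score, social-media sentiment score)
devices = {
    "device_A": (0.9, 0.6),
    "device_B": (0.6, 0.95),
    "device_C": (0.5, 0.5),
}

def fitness(w):
    # Placeholder objective: how closely the weighting (spec vs. sentiment)
    # matches a synthetic physician-preferred target of (0.7, 0.3).
    return -((w[0] - 0.7) ** 2 + (w[1] - 0.3) ** 2)

def mutate(w):
    # Small Gaussian perturbation, clipped to [0, 1]
    return tuple(min(1.0, max(0.0, x + random.gauss(0, 0.05))) for x in w)

population = [(random.random(), random.random()) for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                 # elitist selection
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
ranking = sorted(devices,
                 key=lambda d: best[0] * devices[d][0] + best[1] * devices[d][1],
                 reverse=True)
print("recommended:", ranking[0])
```

The evolved weight vector then scores each device, and the top-ranked one is recommended to the patient.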


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With cyber-attacks on millions of Internet of Things (IoT) devices increasing, the poor network security measures on those devices are a main source of the problem. This article studies the effectiveness of several available machine learning algorithms in detecting malware on consumer IoT devices. In particular, the Support Vector Machine (SVM), Random Forest, and Deep Neural Network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT security deployments. Test results on a set of four IoT devices show that all three tested algorithms detect the network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R², and hence is identified as the most precise of the tested algorithms with respect to IoT device security on the data sets examined.
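The ranking criterion, the coefficient of determination R², can be illustrated on toy anomaly scores; the values below are invented, and the point is simply that a tighter fit to the ground truth yields a higher R²:

```python
from sklearn.metrics import r2_score

y_true   = [0.0, 0.1, 0.9, 1.0, 0.2, 0.8]       # toy anomaly ground truth
pred_dnn = [0.05, 0.12, 0.85, 0.97, 0.25, 0.78]  # tight fit -> high R^2
pred_svm = [0.2, 0.3, 0.6, 0.7, 0.4, 0.6]        # looser fit -> lower R^2

print(f"DNN R^2: {r2_score(y_true, pred_dnn):.3f}")
print(f"SVM R^2: {r2_score(y_true, pred_svm):.3f}")
```

Comparing the metric across models, as the article does, identifies which algorithm's anomaly scores track the ground truth most precisely.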


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either have embedded GPUs or can be connected to external Machine Learning (ML) hardware accelerators. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw data analysis and processing from the cloud or edge to the device itself. In such a context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer merely be data collectors and forwarders, but truly “smart” devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those that are most suitable to run on resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and its performance stressed on Raspberry Pi 3 and 4 embedded computers.
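In the spirit of lightweight ML on resource-constrained devices, here is a dependency-free 1-nearest-neighbour classifier in pure Python; the authors' custom multi-dimensional array library is represented here by plain lists, purely as a stand-in:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_predict(train, sample):
    """train: list of (feature_vector, label); returns the nearest label."""
    return min(train, key=lambda t: euclidean(t[0], sample))[1]

# Tiny toy training set: two well-separated clusters
train = [([1.0, 1.0], "low"), ([1.2, 0.9], "low"),
         ([8.0, 9.0], "high"), ([9.1, 8.5], "high")]
print(knn_predict(train, [1.1, 1.0]))   # prints: low
print(knn_predict(train, [8.5, 9.2]))   # prints: high
```

With no training phase and only arithmetic at inference time, this kind of algorithm is a natural fit for the constrained devices the review focuses on.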


Due to increasing digitalization and the development of new technologies such as the Internet of Things (IoT), the application of machine learning (ML) algorithms is rapidly expanding. ML algorithms are being used in healthcare, IoT, engineering, finance, and other fields in today's digital age. However, in order to predict or solve a specific problem, all of these algorithms must first be trained. There is a real risk that the training datasets have been tampered with, resulting in skewed findings. We therefore propose in this paper a blockchain-based approach to protect datasets produced by IoT devices for e-health applications. To address the aforementioned problem, the proposed blockchain-based system makes use of a private cloud. For assessment, we created a mechanism that dataset owners may use to protect their data.
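The core integrity idea can be sketched as a simple hash chain over dataset records, so that tampering with any stored record invalidates every subsequent link; the record format below is an illustration, not the paper's actual scheme:

```python
import hashlib
import json

def block_hash(record, prev_hash):
    # Hash the record together with the previous block's hash
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64          # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

chain = build_chain([{"hr": 72}, {"hr": 75}, {"hr": 71}])  # heart-rate readings
print(verify_chain(chain))              # prints: True
chain[1]["record"]["hr"] = 40           # tamper with a stored reading
print(verify_chain(chain))              # prints: False
```

A real blockchain adds consensus and distribution on top, but this hash-linking is what makes dataset tampering detectable before an ML model is trained on the data.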


Author(s):  
Manu C. ◽  
Vijaya Kumar B. P. ◽  
Naresh E.

In everyday activities, security is one of the main concerns for machines such as IoT devices and networks, and anomaly detection is one of the key issues in these systems. Anomaly detection based on user behavior is essential to secure such machines from unauthorized activities by anomalous users. Anomaly detection techniques learn a user's daily routine activities and then proactively detect anomalous situations and unusual activities. In IoT-related systems, the detection of such anomalous situations can be fine-tuned by exposing the machine learning algorithms that learn a user's activities to minor and major erroneous conditions. This chapter proposes neural networks with multiple hidden layers that detect different situations, trained in an environment where random anomalous activities are presented to the machine. Using deep learning for anomaly detection helps enhance both accuracy and speed.
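A minimal sketch of the proposed setup: a multi-layer neural network trained on a user's normal activity plus injected anomalies. The two behaviour features (login hour, megabytes transferred) and their distributions are invented for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
normal = np.column_stack([rng.normal(9, 1, 300),        # usual login hour
                          rng.normal(50, 10, 300)])     # usual MB transferred
anomalous = np.column_stack([rng.normal(3, 1, 300),     # 3 a.m. activity
                             rng.normal(500, 50, 300)]) # bulk transfers
X = np.vstack([normal, anomalous])
X = (X - X.mean(axis=0)) / X.std(axis=0)                # standardize features
y = np.array([0] * 300 + [1] * 300)                     # 1 = anomalous

# Two hidden layers, as in the chapter's multi-hidden-layer proposal
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                    random_state=0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Injecting synthetic anomalies into an otherwise normal activity log, as done here, is one common way to create the training environment the chapter describes.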

