Automated identification of vulnerable devices in networks using traffic data and deep learning

Author(s):  
Jakob Greis ◽  
Artem Yushchenko ◽  
Daniel Vogel ◽  
Michael Meier ◽  
Volker Steinhage

Agronomy ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2388
Author(s):  
Sk Mahmudul Hassan ◽  
Michal Jasinski ◽  
Zbigniew Leonowicz ◽  
Elzbieta Jasinska ◽  
Arnab Kumar Maji

Various plant diseases are major threats to agriculture. For timely and effective control of plant diseases, automated disease identification is highly beneficial. So far, different techniques have been used to identify plant diseases; deep learning has become the most widely used in recent times owing to its impressive results. In this work, we propose two methods, shallow VGG with RF and shallow VGG with Xgboost, to identify the diseases. The proposed models are compared with other hand-crafted and deep learning-based approaches. The experiments are carried out on three plants: corn, potato, and tomato. The diseases considered are Blight, Common rust, and Gray leaf spot in corn; early blight and late blight in potato; and bacterial spot, early blight, and late blight in tomato. The results show that our shallow VGG with Xgboost model outperforms different deep learning models in terms of accuracy, precision, recall, F1-score, and specificity. Shallow Visual Geometry Group (VGG) with Xgboost gives the highest accuracy rates of 94.47% on corn, 98.74% on potato, and 93.91% on the tomato dataset. The models are also tested on field images of potato, corn, and tomato; even on field images, the average accuracies obtained using shallow VGG with Xgboost are 94.22%, 97.36%, and 93.14%, respectively.
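The five metrics this abstract reports (accuracy, precision, recall, F1-score, specificity) all follow directly from per-class confusion counts. A minimal sketch with illustrative counts (not the paper's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute the reported metrics from one class's confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                        # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)                   # true-negative rate
    return accuracy, precision, recall, f1, specificity

# Illustrative counts for one disease class (hypothetical, not from the paper)
acc, prec, rec, f1, spec = binary_metrics(tp=90, fp=5, fn=10, tn=95)
```

For a multi-class setting such as the three tomato diseases, these per-class values would typically be macro-averaged across classes.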


2021 ◽  
Author(s):  
Ming Ji ◽  
Chuanxia Sun ◽  
Yinglei Hu

Abstract In order to solve the increasingly serious problem of traffic congestion, intelligent transportation systems are widely used in dynamic traffic management, effectively alleviating congestion and improving road traffic efficiency. With the continuous development of traffic data acquisition technology, it has become possible to obtain real-time traffic data across the road network. This wealth of traffic information provides a data foundation for analyzing and predicting the road network traffic state. Based on a deep learning framework, this paper studies a vehicle recognition algorithm and a road environment discrimination algorithm, which greatly improve the accuracy of highway vehicle recognition. Highway video surveillance images are collected in different environments to establish a complete original database; a deep learning model for environment discrimination is built, and the classification model is trained to realize real-time environment recognition on the highway, which serves as the basic condition for vehicle recognition and traffic event discrimination and provides basic information for vehicle detection model selection. To improve the accuracy of road vehicle detection, vehicle target labeling and sample preprocessing are carried out on samples from different environments. On this basis, the vehicle recognition algorithm is studied, and a vehicle detection algorithm based on weather environment recognition and the Fast R-CNN model is proposed. Finally, the performance of the proposed vehicle detection algorithm is verified by comparing detection accuracy between models trained on individual environment datasets and on the overall dataset, between different network structures and deep learning methods, and against other methods.
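The pipeline described here, recognizing the environment first and then applying an environment-matched detector, is essentially a dispatch step. A minimal sketch with hypothetical stand-ins for the paper's per-environment Fast R-CNN detectors and CNN environment classifier:

```python
# Hypothetical detectors, one per environment-specific training set,
# standing in for the paper's environment-matched Fast R-CNN models.
def detect_sunny(frame):
    return [("car", 0.97)]

def detect_rainy(frame):
    return [("car", 0.88)]

def detect_foggy(frame):
    return [("car", 0.71)]

DETECTORS = {"sunny": detect_sunny, "rainy": detect_rainy, "foggy": detect_foggy}

def classify_environment(frame):
    # Placeholder for the CNN environment classifier; here it simply
    # reads a label carried in the frame metadata.
    return frame["weather"]

def detect_vehicles(frame):
    env = classify_environment(frame)   # step 1: environment recognition
    return DETECTORS[env](frame)        # step 2: environment-matched detection

detections = detect_vehicles({"weather": "rainy", "pixels": None})
```

The dictionary dispatch makes the "environment recognition as a precondition for detector selection" idea explicit without committing to any particular network architecture.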


2020 ◽  
Vol 30 (12) ◽  
pp. 6902-6912
Author(s):  
Eui Jin Hwang ◽  
Hyungjin Kim ◽  
Jong Hyuk Lee ◽  
Jin Mo Goo ◽  
Chang Min Park

2020 ◽  
Vol 12 (12) ◽  
pp. 1924 ◽  
Author(s):  
Hiroyuki Miura ◽  
Tomohiro Aridome ◽  
Masashi Matsuoka

A methodology for the automated identification of building damage from post-disaster aerial images was developed based on a convolutional neural network (CNN) and building damage inventories. The aerial images and building damage data obtained in the 2016 Kumamoto and 1995 Kobe, Japan, earthquakes were analyzed. Since the roofs of many moderately damaged houses are covered with blue tarps immediately after disasters, the proposed method identifies not only collapsed and non-collapsed buildings but also buildings covered with blue tarps. The CNN architecture developed in this study correctly classifies the building damage with an accuracy of approximately 95% on both earthquake datasets. We applied the developed CNN model to aerial images of Chiba, Japan, damaged by the typhoon in September 2019. The results show that more than 90% of the building damage is correctly classified by the CNN model.
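One common way to turn per-patch CNN predictions into a single label for a building footprint is a majority vote over the three damage classes named in the abstract. The aggregation scheme below is an illustrative assumption, not the paper's exact procedure:

```python
from collections import Counter

# Damage classes from the paper
CLASSES = ("collapsed", "blue-tarp", "non-collapsed")

def label_building(patch_predictions):
    """Majority vote over per-patch CNN class predictions for one
    building footprint (illustrative aggregation, not the paper's)."""
    return Counter(patch_predictions).most_common(1)[0][0]

label = label_building(["blue-tarp", "blue-tarp", "non-collapsed"])
```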


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Bandar Alotaibi ◽  
Munif Alotaibi

Internet of things (IoT) devices and applications are dramatically increasing worldwide, resulting in more cybersecurity challenges. Among these challenges are malicious activities that target IoT devices and cause serious damage, such as data leakage, phishing and spamming campaigns, distributed denial-of-service (DDoS) attacks, and security breaches. In this paper, a stacked deep learning method is proposed to detect malicious traffic data, particularly malicious attacks targeting IoT devices. The proposed stacked deep learning method is bundled with five pretrained residual networks (ResNets) to deeply learn the characteristics of the suspicious activities and distinguish them from normal traffic. Each pretrained ResNet model consists of 10 residual blocks. We used two large datasets to evaluate the performance of our detection method. We investigated two heterogeneous IoT environments to make our approach deployable in any IoT setting. Our proposed method has the ability to distinguish between benign and malicious traffic data and detect most IoT attacks. The experimental results show that our proposed stacked deep learning method can provide a higher detection rate in real time compared with existing classification techniques.
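A stack of five ResNets has to combine its members' outputs into one verdict. How the paper's stack combines them is not stated in the abstract; a minimal sketch assuming simple probability averaging over a (benign, malicious) output:

```python
def average_ensemble(per_model_probs):
    """Average class-probability vectors from the five ensemble members
    and return the winning class index (0 = benign, 1 = malicious).

    The averaging rule is an illustrative assumption about how the
    stacked model combines its members.
    """
    n = len(per_model_probs)
    dims = len(per_model_probs[0])
    avg = [sum(p[i] for p in per_model_probs) / n for i in range(dims)]
    return avg.index(max(avg))

# Hypothetical (benign, malicious) scores from five ResNet members
probs = [[0.2, 0.8], [0.4, 0.6], [0.1, 0.9], [0.3, 0.7], [0.35, 0.65]]
verdict = average_ensemble(probs)  # index 1: flagged as malicious
```

In a stacked setup proper, a small meta-classifier trained on these per-member scores would replace the plain average.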


The concept of big data for intelligent transportation systems has been employed in traffic management to deal with dynamic traffic environments. Big data analytics helps cope with the large amount of storage and computing resources required to use mass traffic data effectively. While these traditional solutions bring unprecedented opportunities to manage transportation data, they are inefficient for building next-generation intelligent transportation systems, as traffic data keep growing in velocity and volume with varied characteristics. In this article, a new deep intelligent prediction network is introduced that is hierarchical and operates on spatiotemporal characteristics and location-based services, utilizing the vehicle's sensor and GPS data in real time. The proposed model employs a deep learning architecture to predict potential road clusters for passengers. It is deployed as a recommendation system for passengers, through mobile apps and hardware equipment on the vehicle, incorporating location-based service models to find available parking slots, traffic-free roads, the shortest path to a destination, and other services along the specified path. The underlying traffic data are classified into clusters by extracting a set of features from them. The deep behavioural network processes the traffic data in terms of spatiotemporal characteristics to generate traffic forecasting information, vehicle detection, autonomous driving, and driving behaviours. In addition, a Markov model is embedded to discover hidden features. The experimental results demonstrate that the proposed approach achieves better results than state-of-the-art approaches on performance measures including precision, execution time, feasibility, and efficiency.
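The embedded Markov model suggests predicting the next road cluster from transition statistics between clusters. A minimal sketch with hypothetical cluster names and transition counts (the paper does not publish its transition structure):

```python
# Hypothetical transition counts between road clusters; a Markov model
# over clustered traffic states plays a similar role in the paper.
TRANSITIONS = {
    "cluster_A": {"cluster_B": 12, "cluster_C": 3},
    "cluster_B": {"cluster_A": 5, "cluster_C": 9},
}

def next_cluster(current):
    """Predict the most likely successor cluster from transition counts
    (maximum-likelihood next state of a first-order Markov chain)."""
    counts = TRANSITIONS[current]
    return max(counts, key=counts.get)

pred = next_cluster("cluster_A")  # most frequent successor of cluster_A
```

Normalizing each row of counts would give the usual Markov transition probabilities; for a single argmax prediction, the raw counts suffice.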


2021 ◽  
Author(s):  
Hamidullah Binol ◽  
M. Khalid Khan Niazi ◽  
Charles Elmaraghy ◽  
Aaron C Moberly ◽  
Metin N Gurcan

Background: The lack of an objective method to evaluate the eardrum is a critical barrier to an accurate diagnosis. Eardrum images are classified into normal or abnormal categories with machine learning techniques. If the input is an otoscopy video, a traditional approach requires great effort and expertise to manually determine the representative frame(s). Methods: In this paper, we propose a novel deep learning-based method, called OtoXNet, which automatically learns features for eardrum classification from otoscope video clips. We utilized multiple composite image generation methods to construct a highly representative version of otoscopy videos to diagnose three major eardrum diseases, i.e., otitis media with effusion, eardrum perforation, and tympanosclerosis, versus normal (healthy). We compared the performance of OtoXNet against methods that use either a single composite image or a keyframe selected by an experienced human. Our dataset consists of 394 otoscopy videos from 312 patients and 765 composite images before augmentation. Results: OtoXNet with multiple composite images achieved 84.8% class-weighted accuracy with 3.8% standard deviation, whereas with the human-selected keyframes and single composite images, the accuracies were, respectively, 81.8% ± 5.0% and 80.1% ± 4.8% on the multi-class eardrum video classification task using an 8-fold cross-validation scheme. A paired t-test shows a statistically significant difference (p-value of 1.3 × 10^-2) between the performance of OtoXNet (multiple composite images) and the human-selected keyframes. By contrast, the difference in means between keyframes and single composites was not significant (p = 5.49 × 10^-1). OtoXNet surpasses the baseline approaches in qualitative results. Conclusion: The use of multiple composite images in analyzing eardrum abnormalities is advantageous compared to using single composite images or manual keyframe selection.
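The paired t-test used here compares matched per-fold scores from the 8-fold cross-validation. A minimal sketch of the t statistic computation; the per-fold accuracies below are illustrative, not the paper's:

```python
import math

def paired_t(a, b):
    """Paired t statistic over matched per-fold scores:
    t = mean(d) / sqrt(var(d) / n), with d the per-fold differences."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative per-fold accuracies for the two methods (8 folds)
otoxnet = [0.86, 0.84, 0.83, 0.88, 0.82, 0.85, 0.84, 0.86]
keyframes = [0.82, 0.80, 0.81, 0.84, 0.79, 0.83, 0.80, 0.85]
t_stat = paired_t(otoxnet, keyframes)
```

The p-value then comes from a t distribution with n − 1 = 7 degrees of freedom; in practice `scipy.stats.ttest_rel` does both steps at once.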


2018 ◽  
Author(s):  
Sebastien Villon ◽  
David Mouillot ◽  
Marc Chaumont ◽  
Emily S Darling ◽  
Gérard Subsol ◽  
...  

Identifying and counting individual fish on videos is crucial to cost-effectively monitor marine biodiversity, but it remains a difficult and time-consuming task. In this paper, we present a method to assist the automated identification of fish species on underwater images, and we compare our algorithm's performance to human ability in terms of speed and accuracy. We first tested the performance of a convolutional neural network trained on different photographic databases, while accounting for different post-processing decision rules, to identify 20 fish species. We then compared the species-identification performance of our best model with human performance on a test database of 1197 pictures representing nine species. The best network was the one trained on 900,000 pictures of whole fish and of their parts and environment (e.g. reef bottom or water). Its rate of correct fish identification was 94.9%, greater than the rate of correct identification by humans (89.3%). The network was also able to identify fish partially hidden behind corals or behind other fish, and was more effective than humans on the smallest or blurriest pictures, while humans were better at recognizing fish in unusual positions (e.g. twisted body). On average, each identification by our best algorithm on common hardware took 0.06 seconds. Deep learning methods can thus perform efficient fish identification on underwater pictures, paving the way to new video-based protocols for monitoring fish biodiversity cheaply and effectively.
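A typical post-processing decision rule of the kind tested here accepts the network's top species only when its confidence clears a threshold, deferring the rest to a human. A minimal sketch; the threshold value and species scores are illustrative, not the paper's:

```python
def decide(species_probs, threshold=0.6):
    """Post-processing decision rule: accept the top-scoring species
    only when its score clears a confidence threshold, otherwise
    flag the picture for human review."""
    species, score = max(species_probs.items(), key=lambda kv: kv[1])
    return species if score >= threshold else "unsure"

# Hypothetical softmax scores over two reef species
call = decide({"Chaetodon trifascialis": 0.72, "Chaetodon vagabundus": 0.28})
```

Raising the threshold trades coverage for accuracy: fewer pictures get an automatic label, but the labels that remain are more reliable.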

