Detection of a Moving UAV Based on Deep Learning-Based Distance Estimation

2020, Vol 12 (18), pp. 3035
Author(s): Ying-Chih Lai, Zong-Ying Huang

Distance information about an obstacle is important for obstacle avoidance in many applications and can be used to assess the risk of collision. This study proposes the detection of a moving fixed-wing unmanned aerial vehicle (UAV) with deep learning-based distance estimation, using a monocular camera to detect and track an incoming UAV, as a feasibility study of sense and avoid (SAA) and mid-air collision avoidance for UAVs. A quadrotor serves as the ownship UAV and estimates the distance of an incoming fixed-wing intruder. The adopted object detection method is based on the you only look once (YOLO) object detector. Deep neural network (DNN) and convolutional neural network (CNN) methods are applied to examine their performance in the distance estimation of moving objects. Feature extraction for the fixed-wing UAV is based on the VGG-16 model, and the resulting features are fed to a distance network that estimates the object distance. The proposed model is trained on synthetic images generated with animation software and validated on both synthetic and real flight videos. The results show that the proposed active vision-based scheme detects and tracks a moving UAV with high detection accuracy and low distance errors.
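A minimal PyTorch sketch of the kind of regression head this abstract describes: VGG-16 features extracted from the YOLO-detected UAV crop feed a small distance network. The layer sizes, the use of torchvision's pretrained VGG-16, and the 224x224 crop size are assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn
from torchvision import models

class DistanceNet(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features             # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.regressor = nn.Sequential(           # distance network (assumed sizes)
            nn.Flatten(),
            nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # predicted distance
        )

    def forward(self, crop):                      # crop: detected UAV region resized to 224x224
        return self.regressor(self.pool(self.features(crop)))

model = DistanceNet()
dummy = torch.randn(1, 3, 224, 224)               # one cropped detection
print(model(dummy).shape)                         # torch.Size([1, 1])
```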

Information, 2021, Vol 12 (2), pp. 80
Author(s): Kyung-Eun Park, Jeong-Pyo Lee, Youngok Kim

In distance estimation with Frequency-Modulated Continuous-Wave (FMCW) radar, the frequency difference caused by the time delay of the signal reflected from the target is calculated to estimate the target's distance. In this paper, we propose a distance estimation scheme that exploits deep learning with an artificial neural network to improve accuracy over the conventional scheme based on the index of the maximum Fast Fourier Transform (FFT) value. The performance of the proposed scheme is compared with that of the conventional scheme through experiments evaluating distance estimation accuracy. The average distance estimation error of the proposed scheme was 0.069 m, while that of the conventional scheme was 1.9 m.
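For reference, a minimal sketch of the conventional FFT max-value-index range estimate the proposed network is compared against; the radar parameters (bandwidth, sweep time, sampling rate) are illustrative assumptions, not the authors' hardware settings.

```python
import numpy as np

def fft_range_estimate(beat_signal, fs, bandwidth, sweep_time):
    """Estimate target range from one FMCW chirp's beat signal."""
    c = 3e8
    n = len(beat_signal)
    spectrum = np.abs(np.fft.rfft(beat_signal))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f_beat = freqs[np.argmax(spectrum)]           # index of the FFT peak
    return c * f_beat * sweep_time / (2.0 * bandwidth)

# Synthetic example: 5 m target, 250 MHz sweep over 1 ms, sampled at 1 MHz.
fs, B, T = 1e6, 250e6, 1e-3
true_range = 5.0
f_b = 2 * B * true_range / (3e8 * T)              # expected beat frequency
t = np.arange(0, T, 1 / fs)
signal = np.cos(2 * np.pi * f_b * t)
print(fft_range_estimate(signal, fs, B, T))       # ~4.8 m, quantized to the 0.6 m FFT bin width
```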


2021, Vol 11 (15), pp. 7050
Author(s): Zeeshan Ahmad, Adnan Shahid Khan, Kashif Nisar, Iram Haider, Rosilah Hassan, ...

The revolutionary idea of the Internet of Things (IoT) architecture has gained enormous popularity over the last decade, resulting in exponential growth in IoT networks, connected devices, and the data processed therein. Since IoT devices generate and exchange sensitive data over the traditional internet, security has become a prime concern due to the rise of zero-day cyberattacks. A network-based intrusion detection system (NIDS) can provide the much-needed efficient security solution for the IoT network by protecting the network entry points through constant network traffic monitoring. However, recent NIDSs have a high false alarm rate (FAR) in detecting anomalies, including novel and zero-day ones. This paper proposes an efficient anomaly detection mechanism using mutual information (MI) together with a deep neural network (DNN) for an IoT network. A comparative analysis of different deep learning models, such as the DNN, the Convolutional Neural Network, the Recurrent Neural Network, and the latter's variants, the Gated Recurrent Unit and Long Short-Term Memory, is performed on the IoT-Botnet 2020 dataset. Experimental results show an improvement of 0.57–2.6% in accuracy and a reduction of 0.23–7.98% in FAR, demonstrating the effectiveness of the DNN-based NIDS compared to the well-known deep learning models. It was also observed that using only the 16–35 best numerical features selected with MI, instead of all 80 features of the dataset, results in almost negligible degradation of the model's performance while reducing the overall model complexity. In addition, the detection accuracy of the deep learning models improves by a further 0.99–3.45% when only the top five categorical and numerical features are considered.
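A minimal scikit-learn sketch of the MI-based feature-selection step followed by a small fully connected classifier; the random placeholder data, the choice of 16 features, and the MLP layer sizes are assumptions standing in for the IoT-Botnet 2020 pipeline.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier

X = np.random.rand(1000, 80)                      # placeholder for the dataset's 80 features
y = np.random.randint(0, 2, 1000)                 # 0 = normal traffic, 1 = anomaly

mi = mutual_info_classif(X, y, random_state=0)    # mutual information of each feature with the label
top_k = np.argsort(mi)[::-1][:16]                 # keep the 16 best-ranked features
X_sel = X[:, top_k]

dnn = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200, random_state=0)
dnn.fit(X_sel, y)
print(dnn.score(X_sel, y))                        # training accuracy on the reduced feature set
```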


Over recent years, deep learning has been considered one of the primary choices for handling huge amounts of data. With its deeper hidden layers, it surpasses classical methods for outlier detection in wireless sensor networks. The Convolutional Neural Network (CNN), a biologically inspired computational model, is one of the most popular deep learning approaches; it comprises neurons that self-optimize through learning. Electroencephalography (EEG) is a tool for investigating brain function, and an EEG signal yields time-series data as output. In this paper, we propose a technique in which the time-series data generated by the sensor nodes and stored in a large dataset are split into discrete one-second frames, and these frames are projected onto 2D map images. A convolutional neural network (CNN) is then trained to classify these frames. The results show improved detection accuracy and are encouraging.
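A minimal sketch of the frame-and-classify idea: one-second windows are cut from multi-channel time-series data (EEG-like) and treated as 2D inputs to a small CNN. The sampling rate, channel count, and network are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

fs, n_channels = 256, 16
series = torch.randn(n_channels, 60 * fs)          # one minute of synthetic multi-channel data

frames = series.unfold(dimension=1, size=fs, step=fs)    # (channels, 60, fs): one-second frames
frames = frames.permute(1, 0, 2).unsqueeze(1)            # (60, 1, channels, fs): 2D "images"

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                              # normal vs outlier frame
)
print(cnn(frames).shape)                           # torch.Size([60, 2])
```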


2019, Vol 27, pp. 04002
Author(s): Diego Herrera, Hiroki Imamura

In the new technological era, facial recognition has become a central issue for a great number of engineers. There are currently many techniques for facial recognition, but in this research we focus on the use of deep learning. A problem with current conventional facial recognition systems is that they are developed for non-mobile devices. This research aims to develop a facial recognition system implemented on an unmanned aerial vehicle of the quadcopter type. It is true that there are quadcopters capable of detecting faces and/or shapes and following them, but most are intended for fun and entertainment. This research focuses on the facial recognition of people with criminal records, for which a neural network is trained. The Caffe framework is used to train a convolutional neural network, and the system is developed on the NVIDIA Jetson TX2 board. The design and construction of the quadcopter are done from scratch because the UAV needs to be adapted to our requirements. This research aims to reduce violence and crime in Latin America.
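A minimal sketch of running a Caffe-trained face detector on board via OpenCV's DNN module, roughly the kind of inference described for the Jetson TX2; the deploy.prototxt / res10_300x300_ssd.caffemodel files refer to OpenCV's sample SSD face detector and are an assumed stand-in, not the network trained in the paper.

```python
import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd.caffemodel")
frame = cv2.imread("frame.jpg")                    # one frame from the quadcopter camera
h, w = frame.shape[:2]

blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()                         # shape: (1, 1, N, 7)

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:                           # keep confident face boxes
        box = detections[0, 0, i, 3:7] * [w, h, w, h]
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
```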


Author(s): Dima M. Alalharith, Hajar M. Alharthi, Wejdan M. Alghamdi, Yasmine M. Alsenbel, Nida Aslam, ...

Computer-based technologies play a central role in dentistry, as they provide many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (Faster R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed model. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrated the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in dentistry and in reducing the severity of periodontal disease globally through preemptive, non-invasive diagnosis.
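A minimal sketch of a Faster R-CNN with a ResNet-50 backbone as provided by torchvision, with its box predictor replaced for a small class count; the two-class setup and the placeholder image are assumptions, and the training loop on the intraoral dataset is omitted.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
num_classes = 2                                    # background + one target class (teeth or inflammation)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
image = torch.rand(3, 480, 640)                    # placeholder intraoral image
with torch.no_grad():
    output = model([image])[0]                     # dict with boxes, labels, scores
print(output["boxes"].shape)                       # may be empty until the new head is trained
```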


Author(s): Muhammad Efan Abdulfattah, Ledya Novamizanti, Syamsul Rizal

Disasters in Indonesia are dominated by hydrometeorological disasters, which cause large-scale damage. Through mapping, comprehensive handling can be carried out to support analysis and subsequent action. An Unmanned Aerial Vehicle (UAV) can be used as an aerial mapping tool. However, because the camera and image-processing devices often do not meet specifications, the results can be less informative. This research proposes super resolution for aerial imagery based on a Convolutional Neural Network (CNN) with the DCSCN model. The model consists of a Feature Extraction Network for extracting image features and a Reconstruction Network for reconstructing the image. DCSCN's performance is compared with the Super Resolution CNN (SRCNN). Experiments were carried out on the Set5 dataset with scale factors of 2, 3, and 4. SRCNN produced PSNR/SSIM values of 36.66 dB / 0.9542, 32.75 dB / 0.9090, and 30.49 dB / 0.8628, respectively. DCSCN improved these to 37.614 dB / 0.9588, 33.86 dB / 0.9225, and 31.48 dB / 0.8851. Keywords: aerial imagery, deep learning, super resolution
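A minimal sketch of the PSNR/SSIM evaluation used to compare SRCNN and DCSCN, using scikit-image; the images here are random placeholders, not the Set5 data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.rand(256, 256)               # ground-truth high-resolution image
upscaled = reference + 0.01 * np.random.randn(256, 256)   # super-resolved output to be scored

psnr = peak_signal_noise_ratio(reference, upscaled, data_range=1.0)
ssim = structural_similarity(reference, upscaled, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```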


Author(s): Mohamed Esmail Karar, Ezz El-Din Hemdan, Marwa A. Shouman

Computer-aided diagnosis (CAD) systems are considered a powerful tool for physicians to support the identification of the novel Coronavirus Disease 2019 (COVID-19) using medical imaging modalities. Therefore, this article proposes a new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia cases in X-ray images. Our proposed deep learning framework constitutes two major advancements. First, the complicated multi-label classification of X-ray images has been simplified into a series of binary classifiers, one for each tested health status, which mimics the clinical situation of diagnosing potential diseases for a patient. Second, the cascaded architecture of the COVID-19 and pneumonia classifiers is flexible enough to use different fine-tuned deep learning models simultaneously, achieving the best performance in confirming infected cases. This study includes eleven pre-trained convolutional neural network models, such as the Visual Geometry Group Network (VGG) and the Residual Neural Network (ResNet). They have been successfully tested and evaluated on a public X-ray image dataset covering normal cases and three diseases. The results of the proposed cascaded classifiers showed that the VGG16, ResNet50V2, and Densely Connected Convolutional Network (DenseNet169) models achieved the best detection accuracy for COVID-19, viral (non-COVID-19) pneumonia, and bacterial pneumonia images, respectively. Furthermore, the performance of our cascaded deep learning classifiers is superior to the multi-label classification methods for COVID-19 and pneumonia reported in previous studies. Therefore, the proposed deep learning framework is a good option for the clinical routine to assist the diagnostic procedures for COVID-19 infection.
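A minimal sketch of the cascaded binary-classification idea: each stage answers one yes/no question about the X-ray, and only negative cases fall through to the next stage. The classifier arguments are stubs standing in for the fine-tuned CNNs (e.g. VGG16, ResNet50V2, DenseNet169); their loading and the 0.5 threshold are assumptions.

```python
def cascaded_diagnosis(xray, covid_clf, viral_clf, bacterial_clf, threshold=0.5):
    """Return a single label for one pre-processed chest X-ray."""
    if covid_clf(xray) > threshold:                # stage 1: COVID-19 vs. not
        return "COVID-19"
    if viral_clf(xray) > threshold:                # stage 2: viral (non-COVID-19) pneumonia
        return "viral pneumonia"
    if bacterial_clf(xray) > threshold:            # stage 3: bacterial pneumonia
        return "bacterial pneumonia"
    return "normal"

# Usage with stub classifiers (each returns a probability of its positive class):
label = cascaded_diagnosis(None, lambda x: 0.1, lambda x: 0.8, lambda x: 0.3)
print(label)                                       # "viral pneumonia"
```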


Sensors, 2020, Vol 20 (6), pp. 1698
Author(s): Jia Yin, Koppaka Ganesh Sai Apuroop, Yokhesh Krishnasamy Tamilselvam, Rajesh Elara Mohan, Balakrishnan Ramalingam, ...

This work presents a table cleaning and inspection method using a Human Support Robot (HSR) that can operate in a typical food court setting. The HSR performs a cleanliness inspection and cleans food litter on the table using a deep learning technique and a planner framework. A lightweight Deep Convolutional Neural Network (DCNN) is proposed to recognize food litter on top of the table. In addition, a planner framework is proposed for the HSR to accomplish the table cleaning task: it generates a cleaning path according to the detected food litter, and the cleaning action is then carried out. The effectiveness of the food litter detection module is verified through a cleanliness inspection task on a Toyota HSR, and its detection results are assessed with standard quality metrics. The experimental results show that the food litter detection module achieves an average detection accuracy of 96%, which makes it suitable for deploying HSR robots for cleanliness inspection and helps in selecting among the different cleaning modes. Further, the planner was tested through table cleaning tasks. The experimental results show that the planner generates the cleaning path in real time and that the generated path is optimal, reducing the cleaning time by grouping the cleaning actions for removing food litter from the table.
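A minimal sketch of the group-then-visit idea behind such a cleaning planner: detected litter positions on the table are clustered and the cluster centres are visited in a greedy nearest-neighbour order. The DBSCAN clustering, the coordinates, and the home position are illustrative assumptions, not the paper's planner.

```python
import numpy as np
from sklearn.cluster import DBSCAN

litter = np.array([[0.10, 0.12], [0.12, 0.15], [0.80, 0.70],
                   [0.82, 0.68], [0.45, 0.40]])    # detected litter (x, y) in metres

labels = DBSCAN(eps=0.1, min_samples=1).fit_predict(litter)
centres = np.array([litter[labels == k].mean(axis=0) for k in np.unique(labels)])

path, current = [], np.array([0.0, 0.0])           # start at the arm's assumed home position
remaining = list(range(len(centres)))
while remaining:                                   # greedy nearest-neighbour ordering
    nearest = min(remaining, key=lambda k: np.linalg.norm(centres[k] - current))
    path.append(centres[nearest])
    current = centres[nearest]
    remaining.remove(nearest)
print(np.round(path, 2))                           # ordered cleaning waypoints
```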


Mathematics, 2020, Vol 8 (12), pp. 2140
Author(s): Oleg Kupervasser, Hennadii Kutomanov, Ori Levi, Vladislav Pukshansky, Roman Yavich

In this paper, visual navigation of a drone is considered. The drone navigation problem consists of two parts. The first part is finding the actual position and orientation of the drone. The second part is computing the difference between the desired and actual position and orientation and creating the corresponding control signal to decrease that difference. For the first part, the paper presents a method for determining the coordinates of the drone camera with respect to known three-dimensional (3D) ground objects using deep learning. The algorithm has two stages, which makes the task easier for the artificial neural networks (ANNs) to learn and consequently increases accuracy. In the first stage, the first ANN finds the image coordinates of the projection of the object's origin. In the second stage, the second ANN finds the drone camera's position and orientation. The algorithm has high accuracy (errors were measured on a validation set of images as the differences between the positions and orientations obtained from the pretrained ANNs and the known ground-truth positions and orientations), and it is not sensitive to interference associated with changes in lighting, the appearance of external moving objects, or other phenomena for which other methods of visual navigation are not effective. For the second part of the drone navigation problem, the paper presents a method for stabilizing drone flight controlled by an autopilot with time delay. Indeed, image processing for navigation demands considerable time and introduces a time delay; the proposed method nevertheless achieves stable control in the presence of this delay.
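A minimal sketch of the two-stage structure: a first network regresses the image projection of the known object's origin, and a second network maps the image features plus that projection to the camera position and orientation. The layer sizes and the 1024-dimensional feature vector are illustrative assumptions, not the paper's networks.

```python
import torch
import torch.nn as nn

stage1 = nn.Sequential(                            # image features -> (u, v) projection of the object origin
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 2),
)
stage2 = nn.Sequential(                            # features + projection -> x, y, z, roll, pitch, yaw
    nn.Linear(1024 + 2, 256), nn.ReLU(),
    nn.Linear(256, 6),
)

features = torch.randn(1, 1024)                    # placeholder features for one camera frame
origin_uv = stage1(features)                       # stage 1: landmark origin in image coordinates
pose = stage2(torch.cat([features, origin_uv], dim=1))   # stage 2: camera position and orientation
print(origin_uv.shape, pose.shape)                 # torch.Size([1, 2]) torch.Size([1, 6])
```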


2021, Vol 13 (21), pp. 4377
Author(s): Long Sun, Jie Chen, Dazheng Feng, Mengdao Xing

Unmanned aerial vehicles (UAVs) are one of the main means of information warfare, used in battlefield cruising, reconnaissance, and military strikes. Rapid detection and accurate recognition of key targets in UAV images are the basis of subsequent military tasks. UAV images are characterized by high resolution and small target size, and in practical applications the detection speed is often required to be high. Existing algorithms cannot achieve an effective trade-off between detection accuracy and speed. Therefore, this paper proposes a parallel ensemble deep learning framework for multi-target detection in UAV video, based on a joint global and local detection strategy. It combines a deep learning target detection algorithm with template matching to make full use of the image information, and it integrates multi-process and multi-threading mechanisms to speed up processing. Experiments show that the system achieves high detection accuracy for targets at focal lengths varying from one to ten times, while providing real-time, stable display of detection results on moving UAV video.
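A minimal sketch of the local half of such a global/local strategy: after a global deep-learning detection, the target is re-localised in the next frame by template matching inside a search window around its previous box, using OpenCV. The window margin, matching method, and placeholder frame are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def local_track(frame_gray, template, last_box, margin=40):
    """Search for the template near its previous box and return the new box and score."""
    x, y, w, h = last_box
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1 = min(x + w + margin, frame_gray.shape[1])
    y1 = min(y + h + margin, frame_gray.shape[0])
    window = frame_gray[y0:y1, x0:x1]              # local search window

    result = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)  # best match location in the window
    return (x0 + top_left[0], y0 + top_left[1], w, h), score

frame = np.random.randint(0, 255, (720, 1280), dtype=np.uint8)   # placeholder grayscale UAV frame
template = frame[100:140, 200:260].copy()                        # crop of the last detection
box, score = local_track(frame, template, (200, 100, 60, 40))
print(box, round(float(score), 3))                               # re-finds the crop, score ~1.0
```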

