Air-to-Air Visual Detection of Micro-UAVs: An Experimental Evaluation of Deep Learning

2021, Vol 6 (2), pp. 1020-1027
Author(s): Ye Zheng, Zhang Chen, Dailin Lv, Zhixing Li, Zhenzhong Lan, ...

2021, Vol 297, pp. 01030
Author(s): Issam Elmagrouni, Abdelaziz Ettaoufik, Siham Aouad, Abderrahim Maizate

Gesture recognition technology based on visual detection acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing methods of visual gesture recognition, grouped into the following families: static, dynamic, based on supporting hardware (Kinect, Leap Motion, etc.), works focusing on the application of gesture recognition to robots, and works dealing with gesture recognition at the browser level. Following that, we examine the most common JavaScript-based deep learning frameworks. Finally, we present the idea of defining a process for improving user interface control based on gesture recognition, to streamline the implementation of this mechanism.
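As a concrete illustration of the isolated case, the sketch below classifies a whole skeleton sequence into a single gesture label. It is a minimal, hypothetical PyTorch example (the survey itself targets JavaScript-based frameworks in the browser; Python is used here purely for illustration), and the joint count, class count, and model size are assumptions, not values from any surveyed work.

```python
# Minimal sketch of isolated gesture classification on skeleton sequences.
# All shapes, the class count, and the model size are illustrative assumptions.
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    def __init__(self, n_joints=21, n_classes=10, hidden=128):
        super().__init__()
        # Each frame is a flattened set of 2-D joint coordinates.
        self.gru = nn.GRU(input_size=n_joints * 2, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, frames, n_joints * 2)
        _, h = self.gru(x)              # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))  # (batch, n_classes) logits

model = GestureClassifier()
clip = torch.randn(4, 30, 42)           # 4 clips, 30 frames, 21 joints x (x, y)
pred = model(clip).argmax(dim=1)        # one isolated gesture label per clip
```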


2021, Vol 1976 (1), pp. 012013
Author(s): Zhihong Xu, Mingyang Fan, Yapeng Zhang

2021, Vol 11 (4), pp. 1754
Author(s): Jooyoung Kim, Sojung Go, Kyoungjin Noh, Sangjun Park, Soochahn Lee

Retinal photomontages, which are constructed by aligning and integrating multiple fundus images, are useful in diagnosing retinal diseases that affect the peripheral retina. We present a novel framework for constructing retinal photomontages that fully leverages recent deep learning methods. Deep-learning-based object detection is used to define the order of image registration and blending. Deep-learning-based vessel segmentation is used to enhance image texture and thereby improve registration performance within a two-step image registration framework comprising rigid and non-rigid registration. Experimental evaluation demonstrates the robustness of our montage construction method, with an increased number of successfully integrated images and a reduction in image artifacts.
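A minimal sketch of what the rigid half of such a two-step registration might look like, assuming classical OpenCV feature matching in place of the paper's learned components: `segment_vessels` is a hypothetical stand-in for the deep vessel segmentation network, returning a soft vessel-probability map in [0, 1] that is used to boost texture before matching.

```python
# Hedged sketch of the rigid step of a two-step registration, using a
# vessel map to enhance texture before feature matching.
import cv2
import numpy as np

def rigid_register(fixed_gray, moving_gray, segment_vessels):
    # Emphasize vasculature so keypoints land on stable retinal structure.
    f = (fixed_gray * (0.5 + 0.5 * segment_vessels(fixed_gray))).astype(np.uint8)
    m = (moving_gray * (0.5 + 0.5 * segment_vessels(moving_gray))).astype(np.uint8)

    orb = cv2.ORB_create(nfeatures=4000)
    kf, df = orb.detectAndCompute(f, None)
    km, dm = orb.detectAndCompute(m, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dm, df)
    matches = sorted(matches, key=lambda x: x.distance)[:200]

    src = np.float32([km[m_.queryIdx].pt for m_ in matches])
    dst = np.float32([kf[m_.trainIdx].pt for m_ in matches])

    # Rigid (similarity) estimate with RANSAC; a non-rigid refinement
    # (e.g. a deformable registration step) would follow as step two.
    A, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = fixed_gray.shape
    return cv2.warpAffine(moving_gray, A, (w, h))
```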


2021, Vol 2021, pp. 1-11
Author(s): Xieyi Chen, Dongyun Wang, Jinjun Shao, Jun Fan

To automatically detect plastic gasket defects, a visual detection system based on GoogLeNet Inception-V2 transfer learning was designed and built in this study. The GoogLeNet Inception-V2 deep convolutional neural network (DCNN) was adopted to extract and classify the defect features of plastic gaskets, addressing the difficulty of extracting and classifying features from their numerous surface defects. Deep learning applications require a large amount of training data to avoid model overfitting, but datasets of plastic gasket defects are scarce; to address this, data augmentation was applied to our dataset. Finally, the performance of three convolutional neural networks was comprehensively compared. The results showed that the GoogLeNet Inception-V2 transfer learning model performed better in less time, achieving higher accuracy, reliability, and efficiency on the dataset used in this paper.
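As a hedged illustration of this kind of transfer learning with augmentation, the PyTorch sketch below freezes a pretrained backbone and retrains only the classification head. torchvision does not ship an Inception-V2 model, so GoogLeNet (Inception-V1) is substituted as a stand-in; the defect class count and the augmentation choices are assumptions, not values from the paper.

```python
# Illustrative transfer-learning setup. GoogLeNet (Inception-V1) stands in
# for the paper's GoogLeNet Inception-V2, which torchvision does not provide.
import torch.nn as nn
from torchvision import models, transforms

NUM_DEFECT_CLASSES = 4  # hypothetical number of gasket defect categories

# Data augmentation to compensate for the small defect dataset.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Start from ImageNet weights and replace only the classification head.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, NUM_DEFECT_CLASSES)  # trainable head
```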


2020
Author(s): J. G. Fennell, L. Talas, R. J. Baddeley, I. C. Cuthill, N. E. Scott-Samuel

The essential problem in visual detection is separating an object from its background. Whether in nature or human conflict, camouflage aims to make the problem harder, while conspicuous signals (e.g. for warning or mate attraction) require the opposite. Our goal is to provide a reliable method for identifying the hardest- and easiest-to-find patterns for any given environment. The problem is challenging because the parameter space spanned by varying natural scenes and potential patterns is vast. Here we successfully solve the problem using deep learning combined with genetic algorithms, and illustrate our solution by identifying appropriate patterns in two environments. To show the generality of our approach, we do so for both trichromatic and dichromatic visual systems. Patterns were validated using human participants; those identified as the best camouflage were significantly harder to find than a widely adopted military camouflage pattern, while those identified as most conspicuous were significantly easier to find than other patterns. Our method, dubbed the 'Camouflage Machine', will be a useful tool for those interested in identifying the most effective patterns in a given context.
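A schematic of how a genetic algorithm can be driven by a learned fitness model, in the spirit of (but not identical to) the 'Camouflage Machine': `detectability` is a placeholder for a deep network scoring how findable a rendered pattern would be in the target environment, and the vector encoding of patterns is entirely hypothetical.

```python
# Schematic search loop: a genetic algorithm over pattern parameters,
# scored by a (placeholder) learned detectability model.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, GENS = 64, 16, 100   # population size, pattern params, generations

def detectability(params):
    # Stand-in for a deep network predicting detection difficulty for a
    # pattern rendered into the environment; placeholder fitness only.
    return float(np.sum(params ** 2))

pop = rng.normal(size=(POP, DIM))
for gen in range(GENS):
    scores = np.array([detectability(p) for p in pop])
    order = np.argsort(scores)           # low score = hard to find (camouflage)
    parents = pop[order[:POP // 2]]      # truncation selection
    # Uniform crossover between random parent pairs, plus Gaussian mutation.
    pairs = rng.integers(0, len(parents), size=(POP, 2))
    mask = rng.random((POP, DIM)) < 0.5
    children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    pop = children + rng.normal(scale=0.05, size=children.shape)

best = pop[np.argmin([detectability(p) for p in pop])]
# Maximizing the same score instead would search for conspicuous patterns.
```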

