Self-Adaptive Feature Transformation Networks for Object Detection in Low-Luminance Images

2022 ◽  
Vol 13 (1) ◽  
pp. 1-11
Author(s):  
Shih-Chia Huang ◽  
Quoc-Viet Hoang ◽  
Da-Wei Jaw

Despite recent improvements in object detection techniques, many of them fail to detect objects in low-luminance images. The blurry and dimmed nature of low-luminance images results in the extraction of vague features and failure to detect objects. In addition, many existing object detection methods are based on models trained on both sufficient- and low-luminance images, which also negatively affects the feature extraction process and detection results. In this article, we propose a framework called Self-adaptive Feature Transformation Network (SFT-Net) to effectively detect objects under low-luminance conditions. The proposed SFT-Net consists of three modules: (1) a feature transformation module, (2) a self-adaptive module, and (3) an object detection module. The feature transformation module enhances the extracted features by learning a feature-domain projection in an unsupervised manner. The self-adaptive module acts as a probabilistic selector that produces appropriate features from either the transformed or the original features, further boosting the performance and generalization ability of the framework. Finally, the object detection module accurately detects objects in both low- and sufficient-luminance images using the features produced by the self-adaptive module. The experimental results demonstrate that the proposed SFT-Net framework significantly outperforms state-of-the-art object detection techniques, achieving average precision (AP) gains of up to 6.35 and 11.89 on the sufficient- and low-luminance domains, respectively.
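
A minimal sketch of how the three-module design above could be wired together, assuming a PyTorch-style backbone and detection head; the module internals and the names FeatureTransformation, SelfAdaptiveModule, and SFTNet are illustrative placeholders, not the authors' implementation.

```python
# Sketch of the SFT-Net data flow: backbone -> feature transformation ->
# self-adaptive selection -> detection head. Internals are assumptions.
import torch
import torch.nn as nn

class FeatureTransformation(nn.Module):
    """Projects low-luminance features toward the sufficient-luminance domain."""
    def __init__(self, channels: int):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.project(feats)

class SelfAdaptiveModule(nn.Module):
    """Predicts a per-image probability of using transformed vs. original features."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, original, transformed):
        p = self.gate(original)                 # shape (N, 1, 1, 1), in [0, 1]
        return p * transformed + (1 - p) * original

class SFTNet(nn.Module):
    def __init__(self, backbone: nn.Module, detector_head: nn.Module, channels: int):
        super().__init__()
        self.backbone = backbone
        self.transform = FeatureTransformation(channels)
        self.adapt = SelfAdaptiveModule(channels)
        self.detector_head = detector_head

    def forward(self, images):
        feats = self.backbone(images)
        fused = self.adapt(feats, self.transform(feats))
        return self.detector_head(fused)
```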

2019 ◽  
Vol 2019 ◽  
pp. 1-15 ◽  
Author(s):  
Qimei Wang ◽  
Feng Qi ◽  
Minghe Sun ◽  
Jianhua Qu ◽  
Jie Xue

This study develops tomato disease detection methods based on deep convolutional neural networks and object detection models. Two different models, Faster R-CNN and Mask R-CNN, are used in these methods, where Faster R-CNN is used to identify the types of tomato diseases and Mask R-CNN is used to detect and segment the locations and shapes of the infected areas. To select the model that best fits the tomato disease detection task, four different deep convolutional neural networks are combined with the two object detection models. Data are collected from the Internet and the dataset is divided into a training set, a validation set, and a test set used in the experiments. The experimental results show that the proposed models can accurately and quickly identify the eleven tomato disease types and segment the locations and shapes of the infected areas.
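
A hedged sketch of how Faster R-CNN and Mask R-CNN could be paired with different backbones using torchvision (version 0.13+ assumed); the backbone choices, class count, and input size are illustrative and not necessarily those used in the study.

```python
# Pairing detection heads with interchangeable ResNet-FPN backbones.
import torch
from torchvision.models.detection import FasterRCNN, MaskRCNN
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

NUM_CLASSES = 12  # 11 tomato disease types + background (assumed from the abstract)

def build_detector(arch: str, backbone_name: str = "resnet50"):
    backbone = resnet_fpn_backbone(backbone_name=backbone_name, weights=None)
    if arch == "faster_rcnn":      # identifies the disease type via bounding boxes
        return FasterRCNN(backbone, num_classes=NUM_CLASSES)
    if arch == "mask_rcnn":        # also segments the infected regions
        return MaskRCNN(backbone, num_classes=NUM_CLASSES)
    raise ValueError(f"unknown architecture: {arch}")

model = build_detector("mask_rcnn", "resnet101")
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 512, 512)])   # one dummy RGB image
```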


2020 ◽  
pp. 123-145
Author(s):  
Sushma Jaiswal ◽  
Tarun Jaiswal

In computer vision, object detection is an important and rapidly evolving area of study. It is applied in numerous fields such as security surveillance, autonomous driving, and more. Deep-learning-based object detection techniques have developed at a very fast pace and have attracted the attention of many researchers. A major focus of the 21st century has been the comprehensive and rigorous development of object-detection frameworks. In this investigation, we first examine and evaluate the various object detection approaches and describe the benchmark datasets. We also provide a wide-ranging overview of object detection approaches in an organized way, covering both first- and second-stage detectors. Finally, we consider the construction of these object detection approaches to suggest directions for further research.


2020 ◽  
Vol 10 (9) ◽  
pp. 3280 ◽  
Author(s):  
Chinthakindi Balaram Murthy ◽  
Mohammad Farukh Hashmi ◽  
Neeraj Dhanraj Bokde ◽  
Zong Woo Geem

In recent years there has been remarkable progress in one computer vision application area: object detection. One of the most challenging and fundamental problems in object detection is locating a specific object among the multiple objects present in a scene. Earlier, traditional detection methods were used to detect objects; with the introduction of convolutional neural networks from 2012 onward, deep learning-based techniques were used for feature extraction, leading to remarkable breakthroughs in this area. This paper presents a detailed survey of recent advancements and achievements in object detection using various deep learning techniques. Several topics are covered, such as Viola–Jones (VJ), histogram of oriented gradients (HOG), one-shot and two-shot detectors, benchmark datasets, evaluation metrics, speed-up techniques, and current state-of-the-art object detectors. Detailed discussions of some important applications of object detection, including pedestrian detection, crowd detection, and real-time object detection on GPU-based embedded systems, are also presented. Finally, we conclude by identifying promising future directions.


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3595 ◽  
Author(s):  
Anderson Aparecido dos Santos ◽  
José Marcato Junior ◽  
Márcio Santos Araújo ◽  
David Robledo Di Martini ◽  
Everton Castelão Tetila ◽  
...  

Detection and classification of tree species from remote sensing data have mainly been performed using multispectral and hyperspectral images and Light Detection And Ranging (LiDAR) data. Despite their comparatively lower cost and higher spatial resolution, few studies have focused on images captured by Red-Green-Blue (RGB) sensors. In addition, recent years have witnessed impressive progress in deep learning methods for object detection. Motivated by this scenario, we proposed and evaluated the use of Convolutional Neural Network (CNN)-based methods combined with Unmanned Aerial Vehicle (UAV) high-spatial-resolution RGB imagery for the detection of law-protected tree species. Three state-of-the-art object detection methods were evaluated: Faster Region-based Convolutional Neural Network (Faster R-CNN), YOLOv3, and RetinaNet. A dataset was built to assess the selected methods, comprising 392 RGB images captured from August 2018 to February 2019 over a forested urban area in midwest Brazil. The target object is an important tree species threatened by extinction, known as Dipteryx alata Vogel (Fabaceae). The experimental analysis delivered an average precision of around 92% with associated processing times below 30 milliseconds.
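
A hedged sketch of running one of the evaluated detectors (RetinaNet) on a UAV RGB tile with torchvision; the COCO weights, file name, and score threshold are illustrative, and fine-tuning on the Dipteryx alata dataset is assumed but not shown.

```python
# Inference with a torchvision RetinaNet on a single UAV image tile.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = retinanet_resnet50_fpn(weights="DEFAULT")   # COCO weights; fine-tuning on the
model.eval()                                        # target tree species is assumed

image = to_tensor(Image.open("uav_tile.jpg").convert("RGB"))  # illustrative path
with torch.no_grad():
    out = model([image])[0]

keep = out["scores"] > 0.5          # score threshold chosen for illustration
print(out["boxes"][keep], out["labels"][keep])
```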


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3374
Author(s):  
Hansen Liu ◽  
Kuangang Fan ◽  
Qinghua Ouyang ◽  
Na Li

To address the threat of drones intruding into high-security areas, real-time detection of drones is urgently required to protect these areas. There are two main difficulties in the real-time detection of drones. One is that drones move quickly, which requires faster detectors. The other is that small drones are difficult to detect. In this paper, we first achieve high detection accuracy by evaluating four state-of-the-art object detection methods: RetinaNet, FCOS, YOLOv3, and YOLOv4. Then, to address the first problem, we prune the convolutional channels and shortcut layers of YOLOv4 to develop thinner and shallower models. Furthermore, to improve the accuracy of small drone detection, we implement a special augmentation for small object detection by copying and pasting small drones. Experimental results verify that, compared to YOLOv4, our pruned-YOLOv4 model, with a 0.8 channel prune rate and 24 layers pruned, achieves 90.5% mAP and its processing speed is increased by 60.4%. Additionally, after small object augmentation, the precision and recall of the pruned-YOLOv4 increase by almost 22.8% and 12.7%, respectively. The experimental results verify that our pruned-YOLOv4 is an effective and accurate approach for drone detection.
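
A minimal sketch of the copy-and-paste augmentation for small objects described above; the area threshold, number of copies, and omission of overlap checks are simplifying assumptions rather than the paper's exact settings.

```python
# Copy small drone crops to random free locations and append the new boxes.
import random
import numpy as np

def copy_paste_small_objects(image: np.ndarray, boxes: list, max_copies: int = 3,
                             small_area: int = 32 * 32):
    """image: HxWx3 uint8 array; boxes: list of integer (x1, y1, x2, y2) drone boxes."""
    h, w = image.shape[:2]
    out_boxes = list(boxes)
    small = [b for b in boxes if (b[2] - b[0]) * (b[3] - b[1]) <= small_area]
    for x1, y1, x2, y2 in small:
        patch = image[y1:y2, x1:x2].copy()
        ph, pw = patch.shape[:2]
        for _ in range(random.randint(1, max_copies)):
            nx = random.randint(0, w - pw)
            ny = random.randint(0, h - ph)
            image[ny:ny + ph, nx:nx + pw] = patch          # paste the copy
            out_boxes.append((nx, ny, nx + pw, ny + ph))   # add the new label
            # Note: overlap checks against existing boxes are omitted for brevity.
    return image, out_boxes
```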


Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 195
Author(s):  
Yundong Wu ◽  
Jiajia Liao ◽  
Yujun Liu ◽  
Kaiming Ding ◽  
Shimin Li ◽  
...  

Object detection is a challenging computer vision task with numerous real-world applications. In recent years, the concept of the object relationship model has become helpful for object detection and has been verified and realized in deep learning. Nonetheless, most approaches to modeling object relations are limited to anchor-based algorithms; they cannot be directly migrated to anchor-free frameworks. The reason is that anchor-free algorithms eliminate the complex design of anchors and predict heatmaps to represent the locations of keypoints of different object categories, without considering the relationships between keypoints. Therefore, to better fuse the information between heatmap channels, it is important to model the visual relationships between keypoints. In this paper, we present a knowledge-driven network (KDNet), a new architecture that can aggregate and model keypoint relations to augment object features for detection. Specifically, it processes a set of keypoints simultaneously through interactions between their local and geometric features, thereby allowing their relationships to be modeled. Finally, the updated heatmaps are used to obtain the corners of the objects and determine their positions. The experimental results on the RIDER dataset confirm the effectiveness of the proposed KDNet, which significantly outperforms other state-of-the-art object detection methods.
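
A hedged sketch of one way keypoint relations could be modeled by combining appearance and geometric terms in an attention weight, in the spirit of the description above; the dimensions and exact formulation are assumptions, not KDNet's actual design.

```python
# Relation-style attention over keypoint features using local + geometric cues.
import torch
import torch.nn as nn

class KeypointRelation(nn.Module):
    def __init__(self, feat_dim: int = 256, geo_dim: int = 4):
        super().__init__()
        self.q = nn.Linear(feat_dim, feat_dim)
        self.k = nn.Linear(feat_dim, feat_dim)
        self.v = nn.Linear(feat_dim, feat_dim)
        self.geo = nn.Linear(geo_dim, 1)   # scores pairwise geometric offsets

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        """feats: (K, D) local keypoint features; coords: (K, 2) keypoint positions."""
        app = self.q(feats) @ self.k(feats).T / feats.shape[1] ** 0.5   # (K, K) appearance term
        offsets = coords[:, None, :] - coords[None, :, :]               # (K, K, 2) offsets
        geo = self.geo(torch.cat([offsets, offsets.abs()], dim=-1)).squeeze(-1)
        attn = torch.softmax(app + geo, dim=-1)
        return feats + attn @ self.v(feats)   # relation-augmented keypoint features
```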


2021 ◽  
Vol 13 (18) ◽  
pp. 3776
Author(s):  
Linlin Zhu ◽  
Xun Geng ◽  
Zheng Li ◽  
Chun Liu

It is of great significance to apply object detection methods to automatically detect boulders in planetary images and analyze their distribution. This contributes to the selection of candidate landing sites and the understanding of geological processes. This paper improves the state-of-the-art YOLOv5 object detection method with an attention mechanism and designs a pyramid-based approach to detect boulders in planetary images. A new feature fusion layer is designed to capture more shallow features of small boulders. Attention modules implemented by combining the convolutional block attention module (CBAM) and the efficient channel attention network (ECA-Net) are also added to YOLOv5 to highlight the information that contributes to boulder detection. Based on the Pascal Visual Object Classes 2007 (VOC2007) dataset, which is widely used for object detection evaluations, and the boulder dataset that we constructed from images of the Bennu asteroid, the evaluation results show that the improvements increase the precision of YOLOv5 by 3.4%. With the improved YOLOv5 detection method, the pyramid-based approach extracts several layers of images at different resolutions from the large planetary images and detects boulders of different scales in different layers. We have also applied the proposed approach to detect boulders on the Bennu asteroid. The distribution of the boulders on the Bennu asteroid is analyzed and presented.
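
A hedged sketch of the kind of attention module described above: an ECA-Net-style channel attention followed by CBAM-style spatial attention, as it might be inserted into a YOLOv5 neck; the kernel sizes and composition order are assumptions.

```python
# Channel attention (ECA-Net style) followed by spatial attention (CBAM style).
import torch
import torch.nn as nn

class ECAChannelAttention(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        w = x.mean(dim=(2, 3))                      # (N, C) global average pool
        w = self.conv(w.unsqueeze(1)).squeeze(1)    # 1-D conv across channels
        return x * torch.sigmoid(w)[:, :, None, None]

class SpatialAttention(nn.Module):
    def __init__(self, k: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class BoulderAttention(nn.Module):
    """Applies channel attention, then spatial attention, to a feature map."""
    def __init__(self):
        super().__init__()
        self.channel = ECAChannelAttention()
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))
```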


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Manhuai Lu ◽  
Liqin Chen

The accuracy of object detection based on kitchen appliance scene images can suffer severely from external disturbances such as varying levels of specular reflection, uneven lighting, and spurious lighting, as well as from internal scene-related disturbances such as invalid edges and pattern information unrelated to the object of interest. The present study addresses these unique challenges by proposing an object detection method based on an improved Faster R-CNN algorithm. The improved method can quickly and automatically identify object regions scattered across various areas of complex appliance scenes. In this paper, we put forward a feature enhancement framework named the deeper region proposal network (D-RPN). In the D-RPN, a feature enhancement module is designed to more effectively extract feature information of an object in a kitchen appliance scene. We then construct a U-shaped network structure from a series of feature enhancement modules. We evaluated the proposed D-RPN on a dataset we created, which includes all kinds of kitchen appliance control panels captured in natural scenes by an image collector. In our experiments, the best-performing object detection method obtained a mean average precision (mAP) of 89.84% on the test dataset. The test results show that the proposed improved algorithm achieves higher detection accuracy than state-of-the-art object detection methods. Finally, our proposed detection method can further be used for text recognition.
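
A hedged sketch of stacking feature-enhancement blocks into a U-shaped structure, as described for the D-RPN; the block internals, depth, and channel width are illustrative assumptions rather than the paper's architecture.

```python
# U-shaped arrangement of residual enhancement blocks with skip connections.
import torch
import torch.nn as nn

class EnhanceBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)   # residual enhancement of the input feature map

class DRPNEnhancer(nn.Module):
    """Encoder-decoder (U-shaped) stack of enhancement blocks with skip links.
    Input spatial dimensions are assumed divisible by 2 ** depth."""
    def __init__(self, channels: int = 256, depth: int = 3):
        super().__init__()
        self.down = nn.ModuleList([EnhanceBlock(channels) for _ in range(depth)])
        self.up = nn.ModuleList([EnhanceBlock(channels) for _ in range(depth)])
        self.pool = nn.MaxPool2d(2)
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, x):
        skips = []
        for block in self.down:
            x = block(x)
            skips.append(x)
            x = self.pool(x)
        for block, skip in zip(self.up, reversed(skips)):
            x = self.upsample(x) + skip   # skip connection across the "U"
            x = block(x)
        return x
```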


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jie Shen ◽  
Zhenxin Xu ◽  
Zhe Chen ◽  
Huibin Wang ◽  
Xiaotao Shi

Underwater object detection plays an important role in research and practice, as it provides condensed and informative content that represents underwater objects. However, detecting objects in underwater images is challenging because underwater environments significantly degrade image quality and distort the contrast between objects and the background. To address this problem, this paper proposes an optical prior-based underwater object detection approach that takes advantage of optical principles to identify optical collimation in underwater images, providing valuable guidance for extracting object features. Unlike data-driven knowledge, the prior in our method is independent of the training samples. The fundamental novelty of our approach lies in the integration of an image prior with the object detection task. This novelty is essential to the satisfying performance of our approach in underwater environments, which is demonstrated through comparisons with state-of-the-art object detection methods.

