Automated Detection of Animals in Low-Resolution Airborne Thermal Imagery

2021, Vol 13 (16), pp. 3276
Author(s): Anwaar Ulhaq, Peter Adams, Tarnya E. Cox, Asim Khan, Tom Low, ...

Detecting animals to estimate abundance can be difficult, particularly when the habitat is dense or the target animals are fossorial. The recent surge in the use of thermal imagers for animal detection in ecology can increase the accuracy of population estimates and improve the subsequent implementation of management programs. However, the use of thermal imagers produces many hours of captured flight video that require manual review to confirm species detection and identification. The perceived cost and efficiency trade-off therefore often restricts the use of these systems. Additionally, for many off-the-shelf systems, the exported imagery can be quite low resolution (<9 Hz), increasing the difficulty of using automated detection algorithms to streamline the review process. This paper presents an animal species detection system that exploits the cost-effectiveness of these lower-resolution thermal imagers while harnessing the power of transfer learning and an enhanced small object detection algorithm. We propose a distant object detection algorithm named Distant-YOLO (D-YOLO) that builds on YOLO (You Only Look Once) and improves its training and structure for the automated detection of target objects in thermal imagery. We trained our system on thermal imaging data of rabbits, their active warrens, feral pigs, and kangaroos collected by thermal imaging researchers in New South Wales and Western Australia. This work will enhance the visual analysis of animal species while performing well on low-, medium- and high-resolution thermal imagery.
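
As an illustration of the small-object challenge in low-resolution thermal frames, the sketch below shows one common workaround: tiling each frame into overlapping crops before running a YOLO-style detector. This is a minimal, hypothetical example (frame size, tile size, and overlap are assumptions), not the authors' D-YOLO implementation.

```python
# Minimal sketch (not the authors' D-YOLO): tile a low-resolution thermal
# frame into overlapping crops so small, distant animals occupy more pixels
# relative to the detector input. Tile size and overlap are assumptions;
# edge tiles may be smaller than the nominal size.
import numpy as np

def tile_frame(frame: np.ndarray, tile: int = 160, overlap: int = 32):
    """Yield (y, x, crop) tuples covering the frame with overlapping tiles."""
    h, w = frame.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield y, x, frame[y:y + tile, x:x + tile]

# Example: a 240x320 thermal frame (typical of low-cost imagers) becomes a
# handful of crops that can each be passed to a YOLO-style detector; per-crop
# boxes are then shifted back to frame coordinates by adding (x, y).
if __name__ == "__main__":
    frame = np.random.rand(240, 320).astype(np.float32)
    tiles = list(tile_frame(frame))
    print(f"{len(tiles)} tiles generated from one frame")
```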


Author(s): Anwaar Ulhaq, Asim Khan

Invasive species are significant threats to global agriculture and food security and are among the major causes of crop loss. An effective biosecurity policy requires fully automated detection and habitat identification of potential pests and pathogens. Thermal imaging cameras mounted on Unmanned Aerial Vehicles (UAVs) can observe and detect pest animals and their habitats and estimate their population size around the clock. However, their effectiveness is limited by the manual detection of cryptic species in hours of captured flight video, failure to reveal habitats, and the requirement for expensive high-resolution cameras. The cost and efficiency trade-off therefore often restricts the use of these systems. In this paper, we present an invasive animal species detection system that exploits the cost-effectiveness of consumer-level cameras while harnessing the power of transfer learning and an optimised small object detection algorithm. Our proposed object detection algorithm, named Optimised YOLO (OYOLO), enhances YOLO (You Only Look Once) by improving its training and structure for remote detection of elusive targets. Our system, trained on large datasets collected in New South Wales and Western Australia, can detect invasive species (rabbits, kangaroos, and pigs) in real time with a higher probability of detection (85–100%) than manual detection. This work will enhance the visual analysis of pest species while performing well on low-, medium- and high-resolution thermal imagery, and it is equally accessible to all stakeholders and end-users in Australia via a public cloud.
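
The reported 85–100% figures are probabilities of detection relative to manual review of the same footage. A minimal sketch of how such a figure is computed from matched detections is shown below; the counts are invented for illustration and are not the study's data.

```python
# Minimal sketch: probability of detection (POD) relative to manual review.
# The counts below are invented for illustration, not the study's data.
def probability_of_detection(true_positives: int, ground_truth: int) -> float:
    """Fraction of manually confirmed animals that the detector also found."""
    if ground_truth == 0:
        return float("nan")
    return true_positives / ground_truth

# Example: 17 of 20 manually confirmed rabbits also detected automatically
# gives POD = 0.85, i.e. 85%.
print(f"POD = {probability_of_detection(17, 20):.0%}")
```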


AI, 2021, Vol 2 (4), pp. 552-577
Author(s): Mai Ibraheam, Kin Fun Li, Fayez Gebali, Leonard E. Sielecki

Object detection is one of the vital and challenging tasks of computer vision. It supports a wide range of real-life applications, such as surveillance, shipping, and medical diagnostics. Object detection techniques aim to detect objects of certain target classes in a given image and assign each object to a corresponding class label. These techniques differ in network architecture, training strategy, and optimization function. In this paper, we focus on animal species detection as an initial step to mitigate the negative impacts of wildlife–human and wildlife–vehicle encounters in remote wilderness regions and on highways. Our goal is to provide a summary of object detection techniques based on R-CNN models and to enhance the accuracy and speed of animal species detection using four different R-CNN models and a deformable convolutional neural network. Each model is applied to three wildlife datasets, and the results are compared and analyzed using four evaluation metrics. Based on the evaluation, an animal species detection system is proposed.
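
For readers who want a concrete starting point, the sketch below adapts a pretrained Faster R-CNN (one member of the R-CNN family surveyed here) from torchvision to a small set of animal classes by swapping its box-prediction head. The class names are assumptions, the deformable-convolution variant is omitted, and this is not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact setup): adapt a COCO-pretrained
# Faster R-CNN from torchvision to a small set of animal classes by replacing
# its box-prediction head. Class names are hypothetical placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

CLASSES = ["__background__", "deer", "bear", "moose"]  # hypothetical labels

def build_animal_detector(num_classes: int = len(CLASSES)):
    # Downloads COCO-pretrained weights; the backbone is reused via transfer learning.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_animal_detector().eval()
    with torch.no_grad():
        dummy = [torch.rand(3, 480, 640)]   # one RGB camera frame
        outputs = model(dummy)              # list of dicts: boxes, labels, scores
    print(outputs[0]["boxes"].shape)
```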


Plant Methods, 2021, Vol 17 (1)
Author(s): Xuewei Wang, Jun Liu, Xiaoning Zhu

Abstract
Background: Early detection of crop diseases and pests in the natural environment has been an important research direction in computer vision, complex image processing, and machine learning. Because early-stage images of tomato diseases and pests captured in the natural environment are complex, traditional methods cannot achieve real-time, accurate detection.
Results: To handle the complex backgrounds of early-stage tomato disease and pest images in the natural environment, an improved object detection algorithm based on YOLOv3 for early, real-time detection of tomato diseases and pests is proposed. First, dilated convolution layers replace convolution layers in the backbone network to maintain high resolution and a large receptive field and to improve small object detection. Second, in the detection network, occluded disease and pest objects are retained according to the intersection over union (IoU) of candidate boxes and a linearly attenuated confidence score predicted by multiple grids, which solves the problem of mutually occluded objects. Third, the network is made lightweight through convolution factorization to reduce model size and the number of parameters. Finally, a balance factor is introduced to optimize the weight of small objects in the loss function. Test results for nine common tomato diseases and pests under six background conditions are statistically analyzed. The proposed method achieves an F1 value of 94.77%, an AP value of 91.81%, a false detection rate of only 2.1%, and a detection time of only 55 ms. These results indicate that the method is suitable for early detection of tomato diseases and pests using large-scale video imagery collected by the agricultural Internet of Things.
Conclusions: At present, most computer-vision-based detection of diseases and pests must be carried out in a controlled environment (for example, picking affected leaves and imaging them under supplementary lighting). Images taken by Internet of Things monitoring cameras in the field vary greatly with light intensity, weather, and other factors, so existing methods cannot work reliably on them. The proposed method has been applied to actual tomato production scenarios and shows good detection performance. The experimental results show that it improves the detection of small objects and of leaf-occluded objects, and that its recognition under different background conditions is better than that of existing object detection algorithms. The method is therefore feasible for detecting tomato diseases and pests in the natural environment.
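
The first modification described above, replacing a standard convolution with a dilated one so the feature map keeps its resolution while the receptive field grows, can be sketched in a few lines of PyTorch; the channel counts and feature-map size below are placeholders, not the paper's actual backbone.

```python
# Minimal sketch of the dilated-convolution substitution described above:
# a dilation-2 3x3 convolution covers a 5x5 receptive field while keeping
# the spatial resolution of the feature map. Channel sizes are placeholders.
import torch
import torch.nn as nn

x = torch.randn(1, 256, 52, 52)  # placeholder backbone feature map

standard = nn.Conv2d(256, 256, kernel_size=3, padding=1)              # 3x3 receptive field
dilated  = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)  # 5x5 receptive field

print(standard(x).shape)  # torch.Size([1, 256, 52, 52]) -- resolution preserved
print(dilated(x).shape)   # torch.Size([1, 256, 52, 52]) -- resolution preserved, larger field
```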


2022, Vol 14 (2), pp. 255
Author(s): Xin Gao, Sundaresh Ram, Rohit C. Philip, Jeffrey J. Rodríguez, Jeno Szep, ...

In low-resolution wide-area aerial imagery, object detection algorithms fall into feature extraction and machine learning approaches, where the former often requires a post-processing scheme to reduce false detections and the latter demands multi-stage learning followed by post-processing. In this paper, we present an approach for selecting post-processing schemes for aerial object detection. We evaluated combinations of ten vehicle detection algorithms with seven post-processing schemes, determining the best three schemes for each algorithm using the average F-score metric. The performance improvement is quantified using basic information retrieval metrics as well as the classification of events, activities and relationships (CLEAR) metrics. We also implemented a two-stage learning algorithm using a hundred-layer densely connected convolutional neural network for small object detection and evaluated its degree of improvement when combined with the various post-processing schemes. The highest average F-scores after post-processing are 0.902, 0.704 and 0.891 for the Tucson, Phoenix and online VEDAI datasets, respectively. The combined results show that our enhanced three-stage post-processing scheme achieves a mean average precision (mAP) of 63.9% for feature extraction methods and 82.8% for the machine learning approach.
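
The selection criterion used here is the average F-score after post-processing. A minimal sketch of that ranking step is shown below; the scheme names and per-frame precision/recall values are invented for illustration.

```python
# Minimal sketch: rank post-processing schemes for one detector by average
# F-score across evaluation frames. Scheme names and scores are invented.
def f_score(precision: float, recall: float) -> float:
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

# Hypothetical per-frame (precision, recall) pairs for each scheme.
results = {
    "morphological_filter": [(0.91, 0.84), (0.88, 0.80)],
    "nms":                  [(0.91, 0.90), (0.90, 0.86)],
    "area_threshold":       [(0.91, 0.70), (0.93, 0.72)],
}

avg_f = {name: sum(f_score(p, r) for p, r in prs) / len(prs)
         for name, prs in results.items()}
best_three = sorted(avg_f, key=avg_f.get, reverse=True)[:3]
print(best_three, {name: round(score, 3) for name, score in avg_f.items()})
```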


Author(s): Jakaria Rabbi, Nilanjan Ray, Matthias Schubert, Subir Chowdhury, Dennis Chao

The detection performance of small objects in remote sensing images is not satisfactory compared to large objects, especially in low-resolution and noisy images. A generative adversarial network (GAN)-based model called enhanced super-resolution GAN (ESRGAN) shows remarkable image enhancement performance, but reconstructed images miss high-frequency edge information. Therefore, object detection performance degrades for small objects in recovered noisy and low-resolution remote sensing images. Inspired by the success of edge-enhanced GAN (EEGAN) and ESRGAN, we apply a new edge-enhanced super-resolution GAN (EESRGAN) to improve the image quality of remote sensing images and use different detector networks in an end-to-end manner, where the detector loss is backpropagated into the EESRGAN to improve detection performance. We propose an architecture with three components: ESRGAN, an Edge Enhancement Network (EEN), and a detection network. We use residual-in-residual dense blocks (RRDB) for both the ESRGAN and the EEN, and for the detector network we use the faster region-based convolutional network (FRCNN) (a two-stage detector) and the single-shot multi-box detector (SSD) (a one-stage detector). Extensive experiments on a public (car overhead with context) and a self-assembled (oil and gas storage tank) satellite dataset show the superior performance of our method compared to standalone state-of-the-art object detectors.
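
The end-to-end coupling described above, where the detector loss is backpropagated into the super-resolution network, amounts to optimizing one combined objective over a shared computation graph. The sketch below uses toy stand-in modules and an assumed weighting of 0.1, not the paper's actual EESRGAN, EEN, or FRCNN implementation.

```python
# Minimal sketch of the end-to-end idea: gradients from the detector's loss
# flow back into the super-resolution generator because both loss terms share
# one computation graph. The tiny modules and the 0.1 weight are stand-ins.
import torch
import torch.nn as nn

sr_net = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Upsample(scale_factor=4))
detector = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                         nn.Flatten(), nn.Linear(8, 4))   # toy box regressor

lr_img = torch.rand(1, 3, 64, 64)       # low-resolution input patch
hr_img = torch.rand(1, 3, 256, 256)     # matching high-resolution target
gt_box = torch.tensor([[0.2, 0.3, 0.5, 0.6]])

sr_img = sr_net(lr_img)
sr_loss = nn.functional.l1_loss(sr_img, hr_img)                   # reconstruction term
det_loss = nn.functional.smooth_l1_loss(detector(sr_img), gt_box)  # detection term

total_loss = sr_loss + 0.1 * det_loss   # detector loss also updates sr_net
total_loss.backward()
print(sr_net[0].weight.grad.abs().sum() > 0)  # gradient reached the SR network
```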

