Pest Animal's Detection, and Habitat Identification in Low-resolution Airborne Thermal Imagery

Author(s):  
Anwaar Ulhaq ◽  
Asim Khan

Invasive species are significant threats to global agriculture and food security, being among the major causes of crop loss. An effective biosecurity policy requires fully automated detection and habitat identification of potential pests and pathogens. Thermal imaging cameras mounted on Unmanned Aerial Vehicles (UAVs) can observe and detect pest animals and their habitats, and estimate their population size, around the clock. However, their effectiveness is limited by the manual detection of cryptic species in hours of captured flight video, the failure to reveal habitats, and the requirement for expensive high-resolution cameras. Therefore, the cost and efficiency trade-off often restricts the use of these systems. In this paper, we present an invasive animal species detection system that exploits the cost-effectiveness of consumer-level cameras while harnessing the power of transfer learning and an optimised small object detection algorithm. Our proposed optimised object detection algorithm, named Optimised YOLO (OYOLO), enhances YOLO (You Only Look Once) by improving its training and structure for the remote detection of elusive targets. Our system, trained on extensive data collected in New South Wales and Western Australia, can detect invasive species (rabbits, kangaroos and pigs) in real time with a higher probability of detection (85–100%) than manual detection. This work will enhance the visual analysis of pest species while performing well on low-, medium- and high-resolution thermal imagery, and will be equally accessible to all stakeholders and end-users in Australia via a public cloud.
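The abstract above gives no implementation details for the transfer-learning step it describes, so the following is only a rough sketch of how a pretrained detector could be fine-tuned on annotated thermal frames. It uses the off-the-shelf ultralytics YOLO API as a stand-in for the authors' OYOLO; every path, class name and hyperparameter is a placeholder.

```python
# Hypothetical sketch: transfer learning for thermal pest-animal detection.
# The ultralytics API stands in for the authors' OYOLO; dataset paths,
# class names and hyperparameters are illustrative only.
from ultralytics import YOLO

def train_thermal_detector():
    # Start from COCO-pretrained weights and fine-tune on thermal frames
    # annotated with the target classes (rabbit, kangaroo, pig).
    model = YOLO("yolov8n.pt")          # any small pretrained variant
    model.train(
        data="thermal_pests.yaml",      # hypothetical dataset config
        epochs=100,
        imgsz=640,                      # up-scaled low-resolution frames
        lr0=0.001,
    )
    return model

if __name__ == "__main__":
    detector = train_thermal_detector()
    # Run inference on a held-out thermal frame (path is illustrative).
    results = detector.predict("flight_0001/frame_0420.png", conf=0.25)
    for r in results:
        print(r.boxes.xyxy, r.boxes.cls)
```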


2021 ◽  
Vol 13 (16) ◽  
pp. 3276
Author(s):  
Anwaar Ulhaq ◽  
Peter Adams ◽  
Tarnya E. Cox ◽  
Asim Khan ◽  
Tom Low ◽  
...  

Detecting animals to estimate abundance can be difficult, particularly when the habitat is dense or the target animals are fossorial. The recent surge in the use of thermal imagers in ecology, and their use in animal detection, can increase the accuracy of population estimates and improve the subsequent implementation of management programs. However, the use of thermal imagers results in many hours of captured flight video, which requires manual review to confirm species detection and identification. Therefore, the perceived cost and efficiency trade-off often restricts the use of these systems. Additionally, for many off-the-shelf systems, the exported imagery can be quite low resolution (<9 Hz), increasing the difficulty of using automated detection algorithms to streamline the review process. This paper presents an animal species detection system that utilises the cost-effectiveness of these lower-resolution thermal imagers while harnessing the power of transfer learning and an enhanced small object detection algorithm. We propose a distant object detection algorithm named Distant-YOLO (D-YOLO), which builds on YOLO (You Only Look Once) and improves its training and structure for the automated detection of target objects in thermal imagery. We trained our system on thermal imaging data of rabbits, their active warrens, feral pigs, and kangaroos collected by thermal imaging researchers in New South Wales and Western Australia. This work will enhance the visual analysis of animal species while performing well on low-, medium- and high-resolution thermal imagery.
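The abstract does not describe D-YOLO's internals. As a hedged illustration of one common strategy for small targets in low-resolution imagery (not necessarily the authors' method), the sketch below runs an arbitrary detector over overlapping tiles of a thermal frame and merges the tile-level boxes with non-maximum suppression; the tile size, overlap and detector interface are assumptions.

```python
# Illustrative helper, not the authors' D-YOLO: improve small-object recall
# on low-resolution thermal frames by detecting on overlapping tiles and
# merging the results with non-maximum suppression.
import torch
from torchvision.ops import nms

def detect_on_tiles(frame, detector, tile=320, overlap=64, iou_thr=0.5):
    """frame: (C, H, W) float tensor; detector: callable returning (boxes, scores)."""
    _, H, W = frame.shape
    step = tile - overlap
    all_boxes, all_scores = [], []
    # Tile origins are chosen so each tile fits inside the frame.
    for y in range(0, max(H - tile, 0) + 1, step):
        for x in range(0, max(W - tile, 0) + 1, step):
            patch = frame[:, y:y + tile, x:x + tile]
            boxes, scores = detector(patch)          # boxes in patch coordinates
            if boxes.numel() == 0:
                continue
            # Shift boxes back into full-frame coordinates.
            boxes = boxes + torch.tensor([x, y, x, y], dtype=boxes.dtype)
            all_boxes.append(boxes)
            all_scores.append(scores)
    if not all_boxes:
        return torch.empty(0, 4), torch.empty(0)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_thr)               # drop duplicate detections
    return boxes[keep], scores[keep]
```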


2021 ◽  
Vol 11 (24) ◽  
pp. 11868
Author(s):  
José Naranjo-Torres ◽  
Marco Mora ◽  
Claudio Fredes ◽  
Andres Valenzuela

Raspberries are a fruit of great importance for human beings, and their products are segmented by quality. However, estimating raspberry quality is a manual process carried out at the reception area of the fruit processing plant, and it is thus exposed to factors that could distort the measurement. The agriculture industry has increased the use of deep learning (DL) in computer vision systems. To solve the problem of estimating the quality of raspberries in a picking tray, non-destructive computer vision equipment and methods are proposed: prototype equipment is developed to determine the quality of raspberry trays using computer vision techniques and convolutional neural networks on images captured in the visible RGB spectrum. The Faster R-CNN object detection algorithm is used, and different pretrained CNN networks are evaluated as a backbone to develop the software for the equipment. To avoid imbalance in the dataset, an individual object detection model is trained and optimised for each detection class. Finally, hardware and software are effectively integrated, and a conceptual test is performed in a real industrial scenario, achieving an automatic evaluation of the quality of the raspberry tray, eliminating the intervention of the human expert and the errors involved in visual analysis. Excellent results were obtained in the conceptual test, in some cases reaching a precision of 100% and reducing the evaluation time per raspberry tray image to 30 s on average, which allows the evaluation of a larger and more representative sample of the raspberry batch arriving at the processing plant.
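As a minimal sketch of the per-class strategy the abstract describes (one detection model trained and optimised per class to avoid dataset imbalance), the snippet below builds one binary Faster R-CNN detector per quality class with torchvision; the class names and pretrained weights are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the per-class strategy: one Faster R-CNN detector per
# quality class, each trained as a binary (background vs. class) problem.
# Assumes a recent torchvision; class names are placeholders.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

QUALITY_CLASSES = ["export", "fresh", "industrial"]   # assumed labels

def build_single_class_detector():
    # Start from a COCO-pretrained backbone and replace the box predictor
    # with a 2-class head (background + one raspberry quality class).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
    return model

# One independent model (and training run) per class keeps each dataset balanced.
detectors = {name: build_single_class_detector() for name in QUALITY_CLASSES}
```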


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Kaliappan Madasamy ◽  
Vimal Shanmuganathan ◽  
Vijayalakshmi Kandasamy ◽  
Mi Young Lee ◽  
Manikandan Thangadurai

Computer vision is an interdisciplinary domain for object detection. Object detection plays a vital part in assisting surveillance, vehicle detection and pose estimation. In this work, we propose a novel deep You Only Look Once (deep YOLO V3) approach to detect multiple objects. This approach looks at the entire frame during the training and test phases. It follows a regression-based technique that uses a probabilistic model to locate objects. We construct 106 convolution layers followed by 2 fully connected layers, with an input size of 812 × 812 × 3, to detect small drones. We pre-train the convolution layers for classification at half the resolution and then double the resolution for detection. The number of filters in each layer is set to 16, while the last scale layer uses more than 16 filters to improve small object detection. This construction uses up-sampling techniques to fold coarser feature maps back into the existing signal and rescale the features at specific locations; the up-sampling increases the effective sampling rate and helps detect small objects. This YOLO architecture is preferred because it requires less memory and computation than architectures with a larger number of filters. The proposed system is designed and trained for a single class, drone, and object detection and tracking are performed with an embedded-system-based deep YOLO. The proposed YOLO approach predicts multiple bounding boxes per grid cell with better accuracy. The proposed model has been trained on a large number of small drones under different conditions, such as open fields and marine environments with complex backgrounds.
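The up-sampling idea can be illustrated with a short PyTorch module, assuming a YOLO V3-style fusion in which a coarse feature map is up-sampled and concatenated with a finer one before the final detection scale; the channel counts and shapes below are illustrative, not the authors' exact deep YOLO V3 configuration.

```python
# Minimal sketch of the up-sampling step (not the authors' exact network):
# a coarse feature map is up-sampled and concatenated with a finer one so
# the final detection scale keeps spatial detail for small drones.
import torch
import torch.nn as nn

class UpsampleFuse(nn.Module):
    def __init__(self, coarse_ch, fine_ch, out_ch=16):  # 16 filters per layer, as in the abstract
        super().__init__()
        self.reduce = nn.Conv2d(coarse_ch, out_ch, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = nn.Conv2d(out_ch + fine_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, coarse, fine):
        x = self.up(self.reduce(coarse))        # double the spatial resolution
        x = torch.cat([x, fine], dim=1)         # merge with finer features
        return self.fuse(x)

# Illustrative shapes for a coarse and a finer feature map.
coarse = torch.randn(1, 256, 26, 26)
fine = torch.randn(1, 128, 52, 52)
print(UpsampleFuse(256, 128)(coarse, fine).shape)   # torch.Size([1, 16, 52, 52])
```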


2020 ◽  
Vol 28 (S2) ◽  
Author(s):  
Asmida Ismail ◽  
Siti Anom Ahmad ◽  
Azura Che Soh ◽  
Mohd Khair Hassan ◽  
Hazreen Haizi Harith

An object detection system is a computer technology, related to image processing and computer vision, that detects instances of semantic objects of a certain class in digital images and videos. The system consists of two main processes: classification and detection. Once an object instance has been classified and detected, it is possible to obtain further information, including recognising the specific instance, tracking the object over an image sequence and extracting further information about the object and the scene. This paper presents a performance analysis of a deep learning object detector devised by combining a deep learning Convolutional Neural Network (CNN) for object classification with classic object detection algorithms. MiniVGGNet is the network architecture used to train the object classifier, and the data used for this purpose were collected from a specific indoor building environment. For object detection, sliding windows and image pyramids were used to localise and detect objects at different locations and scales, and non-maxima suppression (NMS) was used to obtain the final bounding box for each object location. Based on the experimental results, the classification accuracy of the network is 80% to 90% and the time for the system to detect objects is less than 15 s/frame. The experimental results show that it is reasonable and efficient to combine a classic object detection method with a deep learning classification approach. This method works well in some specific use cases and effectively addresses the problem of inaccurate classification and detection of typical features.
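The pipeline described above (sliding windows over an image pyramid, a CNN classifier score per window, and NMS for the final boxes) can be sketched as follows; the classifier interface, window size, stride and thresholds are assumptions rather than the paper's MiniVGGNet settings.

```python
# Hedged sketch of the classic pipeline: slide a fixed-size window over an
# image pyramid, score each crop with a CNN classifier, and keep the best
# boxes with non-maximum suppression.
import torch
from torchvision.ops import nms

def pyramid(image, scale=1.5, min_size=64):
    """Yield (factor, image) pairs, shrinking the image by `scale` each step."""
    factor = 1.0
    while min(image.shape[-2:]) >= min_size:
        yield factor, image
        new_h = int(image.shape[-2] / scale)
        new_w = int(image.shape[-1] / scale)
        image = torch.nn.functional.interpolate(
            image.unsqueeze(0), size=(new_h, new_w),
            mode="bilinear", align_corners=False).squeeze(0)
        factor *= scale

def sliding_window_detect(image, classifier, win=64, step=16,
                          score_thr=0.9, iou_thr=0.3):
    """image: (C, H, W) float tensor; classifier: CNN returning one logit per crop."""
    boxes, scores = [], []
    for factor, scaled in pyramid(image):
        _, H, W = scaled.shape
        for y in range(0, H - win + 1, step):
            for x in range(0, W - win + 1, step):
                crop = scaled[:, y:y + win, x:x + win].unsqueeze(0)
                p = torch.sigmoid(classifier(crop)).item()   # object probability
                if p >= score_thr:
                    # Map the window back to original-image coordinates.
                    boxes.append([x * factor, y * factor,
                                  (x + win) * factor, (y + win) * factor])
                    scores.append(p)
    if not boxes:
        return torch.empty(0, 4), torch.empty(0)
    boxes = torch.tensor(boxes)
    scores = torch.tensor(scores)
    keep = nms(boxes, scores, iou_thr)   # suppress overlapping windows
    return boxes[keep], scores[keep]
```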

