An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge

Computers ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 13
Imran Zualkernan ◽  
Salam Dhou ◽  
Jacky Judas ◽  
Ali Reza Sajun ◽  
Brylle Ryan Gomez ◽  

Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they get to the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution for both these problems, as the images can be classified automatically, and the results immediately made available to ecologists. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app using the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists, and to provide real-time animal sightings data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87; F1-score = 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7, and adversely affected the minority classes, reducing their F1-score to as low as 0.18.
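The oversampling mentioned above can be illustrated with a minimal sketch of naive random oversampling. The paper does not publish its exact resampling procedure, so the function below is an assumption for illustration, not the authors' code:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples (with replacement) until every
    class matches the count of the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    out_samples, out_labels = [], []
    for y, items in by_class.items():
        out_samples.extend(items)
        out_labels.extend([y] * len(items))
        for _ in range(target - len(items)):  # top up the minority class
            out_samples.append(rng.choice(items))
            out_labels.append(y)
    return out_samples, out_labels
```

After resampling, every class contributes equally to training batches, which is one simple way to lift the F1-score of rare classes.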
Upon stress testing by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times worse latency at 2.83 s/image (s.d. = 0.036). The Jetson Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds are below 80 km/h, including goats, lions, and ostriches. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
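The per-image latency statistics reported above (mean and standard deviation over consecutive inferences) can be gathered with a small benchmarking harness like the sketch below; `infer` is a stand-in for any model call, not the paper's actual TensorRT pipeline:

```python
import time
import statistics

def benchmark(infer, images, warmup=5):
    """Measure per-image inference latency over a batch of images,
    returning (mean_seconds, stdev_seconds)."""
    for img in images[:warmup]:      # warm up caches / lazy initialization
        infer(img)
    latencies = []
    for img in images:
        t0 = time.perf_counter()
        infer(img)
        latencies.append(time.perf_counter() - t0)
    return statistics.mean(latencies), statistics.stdev(latencies)
```

A warmup pass is included because the first inferences on an edge accelerator typically pay one-off model-loading costs that would skew the mean.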

2019 ◽  
Hayder Yousif

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Camera traps are a popular tool to sample animal populations because they are noninvasive, detect a variety of species, and can record many thousands of animal detections per deployment. Cameras are typically set to take bursts of multiple images for each detection, and are deployed in arrays of dozens or hundreds of sites, often resulting in millions of images per study. The task of converting images to animal detection records from such large image collections is daunting, and made worse by situations that generate copious empty pictures from false triggers (e.g. camera malfunction or moving vegetation) or pictures of humans. We offer the first widely available computer vision tool for processing camera trap images. Our results show that the tool is accurate and results in substantial time savings for processing large image datasets, thus improving our ability to monitor wildlife across large scales with camera traps. In this dissertation, we have developed new image/video processing and computer vision algorithms for efficient and accurate object detection and sequence-level classification from natural scene camera-trap images. This work addresses the following five major tasks: (1) Human-animal detection. We develop a fast and accurate scheme for human-animal detection from highly cluttered camera-trap images using joint background modeling and deep learning classification. Specifically, first, we develop an effective background modeling and subtraction scheme to generate region proposals for the foreground objects. We then develop a cross-frame image patch verification to reduce the number of foreground object proposals. Finally, we perform a complexity-accuracy analysis of deep convolutional neural networks (DCNNs) to develop a fast deep learning classification scheme to classify these region proposals into three categories: human, animal, and background patches.
The optimized DCNN is able to maintain a high level of accuracy while reducing the computational complexity by 14 times. Our experimental results demonstrate that the proposed method outperforms existing methods on the camera-trap dataset. (2) Object segmentation from natural scenes. We first design and train a fast DCNN for animal-human-background object classification, which is used to analyze the input image to generate multi-layer feature maps, representing the responses of different image regions to the animal-human-background classifier. From these feature maps, we construct the so-called deep objectness graph for accurate animal-human object segmentation with graph cut. The segmented object regions from each image in the sequence are then verified and fused in the temporal domain using background modeling. Our experimental results demonstrate that our proposed method outperforms existing state-of-the-art methods on the camera-trap dataset with highly cluttered natural scenes. (3) DCNN-domain background modeling. We replace the background model with a new, more efficient deep learning based model. The input frames are segmented into regions through the deep objectness graph, and the region boundaries of the input frames are then multiplied by each other to obtain the regions of movement patches. We construct the background representation using the temporal information of the co-located patches. We propose to fuse the subtraction and foreground/background pixel classification of two representations: (a) chromaticity and (b) deep pixel information. (4) Sequence-level object classification. We propose a new method for sequence-level video recognition with application to animal species recognition from camera trap images. First, using background modeling and cross-frame patch verification, we develop a scheme to generate candidate object regions or object proposals in the spatiotemporal domain.
Second, we develop a dynamic programming optimization approach to identify the best temporal subset of object proposals. Third, we aggregate and fuse the features of these selected object proposals for efficient sequence-level animal species classification.
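The background-subtraction stage that generates region proposals can be sketched in a few lines of numpy. This is a deliberately simplified illustration under assumed defaults (static background frame, fixed threshold, naive flood-fill labeling); the dissertation's actual scheme adds cross-frame patch verification and DCNN classification on top:

```python
import numpy as np

def region_proposals(frame, background, thresh=25, min_area=50):
    """Propose foreground bounding boxes by thresholding the absolute
    difference between a frame and a background estimate, then grouping
    changed pixels into 4-connected components via flood fill."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > thresh
    labels = np.zeros(mask.shape, dtype=np.int32)
    current, boxes = 0, []
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack, pixels = [(i, j)], []
                while stack:  # iterative flood fill of one component
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = current
                        pixels.append((a, b))
                        stack += [(a+1, b), (a-1, b), (a, b+1), (a, b-1)]
                if len(pixels) >= min_area:  # drop tiny noise blobs
                    rs = [p[0] for p in pixels]
                    cs = [p[1] for p in pixels]
                    boxes.append((min(rs), min(cs), max(rs), max(cs)))
    return boxes
```

Each returned box (row_min, col_min, row_max, col_max) would then be cropped and passed to the classifier as a human/animal/background candidate.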

Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4651 ◽  
Shadia Awadallah ◽  
David Moure ◽  
Pedro Torres-González

In the last few years, there has been a huge interest in the Internet of Things (hereinafter IoT) field. Among the large number of IoT technologies, the low-power wide-area network (hereinafter LPWAN) has emerged providing low-power, low-data-rate communication over long distances, enabling battery-operated devices to operate for long time periods. This paper introduces an application of long-range (hereinafter LoRa) technology, one of the most popular LPWANs, to volcanic surveillance. The first low-power and low-cost wireless network based on LoRa to monitor the soil temperature in thermal anomaly zones in volcanic areas has been developed. A total of eight thermometers (end devices) have been deployed on the Teide volcano in Tenerife (Canary Islands). In addition, a repeater device was developed to extend the network range when the gateway did not have a line-of-sight connection with the thermometers. Combining LoRa communication capabilities with Microchip microcontrollers (end devices and repeater) and a Raspberry Pi board (gateway), three main milestones have been achieved: (i) extremely low power consumption, (ii) real-time and proper temperature acquisition, and (iii) reliable network operation. The first results are shown and are of sufficient quality for proper volcanic surveillance.
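Because LoRa uplinks are limited to a handful of bytes, end devices typically pack readings into a compact binary payload. The sketch below shows one plausible 5-byte layout for a soil-temperature reading; the field layout is a hypothetical assumption for illustration, as the paper does not publish its payload format:

```python
import struct

def encode_reading(node_id, temp_c, battery_mv):
    """Pack one reading into 5 bytes (big-endian): 1-byte node id,
    int16 temperature in units of 0.01 degC, uint16 battery in mV.
    Hypothetical layout, not the authors' published format."""
    return struct.pack(">BhH", node_id, round(temp_c * 100), battery_mv)

def decode_reading(payload):
    """Inverse of encode_reading, as a gateway would run it."""
    node_id, temp_raw, battery_mv = struct.unpack(">BhH", payload)
    return node_id, temp_raw / 100.0, battery_mv
```

Scaling the temperature to hundredths of a degree keeps 0.01 degC resolution in two bytes, which matters at LoRa's low data rates and duty-cycle limits.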

2018 ◽  
Vol 115 (25) ◽  
pp. E5716-E5725 ◽  
Mohammad Sadegh Norouzzadeh ◽  
Anh Nguyen ◽  
Margaret Kosmala ◽  
Alexandra Swanson ◽  
Meredith S. Palmer ◽  

Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into “big data” sciences. Motion-sensor “camera traps” enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild.
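The confidence-gated automation described above (classify automatically only when the network is sufficiently sure, otherwise route to human volunteers) reduces to a simple triage rule. A minimal sketch, with the threshold value assumed for illustration:

```python
def triage(predictions, threshold=0.95):
    """Split (image_id, class->probability) predictions into an
    auto-labeled queue and a human-review queue by top-class confidence."""
    auto, manual = [], []
    for image_id, probs in predictions:
        top = max(probs, key=probs.get)
        (auto if probs[top] >= threshold else manual).append((image_id, top))
    return auto, manual
```

Raising the threshold trades coverage (fraction of images auto-labeled) for accuracy on the auto-labeled subset, which is exactly the 99.3%-coverage / 96.6%-accuracy operating point the abstract reports.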

Forest fires, wildfires and bushfires are a global environmental problem that causes serious damage each year. The most significant factors in the fight against forest fires are the earliest possible detection of the fire, flame or smoke event, proper classification of the fire, and a rapid response from the fire departments. In this paper, we developed an automatic early warning system that incorporates multiple sensors and a state-of-the-art deep learning algorithm, which minimizes false positives and gives good accuracy on real-time data at the lowest possible cost, enabling our drone to monitor forest fires as early as possible and report them to the concerned authority. The drones will be equipped with sensors, a Raspberry Pi 3, a neural stick, an APM 2.5, GPS, and Wi-Fi. The neural stick will be used for real-time image processing with our state-of-the-art deep learning model. As soon as a forest fire is detected, the UAV will send an alert message to the concerned authority on the mobile app via mesh messaging, along with the location coordinates of the fire, an image, and the extent of the affected forest area, so that immediate action can be taken to stop the fire from spreading and causing massive loss of life and property. Using both deep learning and infrared cameras to monitor the forest and surrounding area, we take advantage of recent advances in multi-sensor surveillance technologies. This innovative technique helps the forest department detect a fire within the first 12 hours of its start, which is the most effective time to control the fire.
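The alert message a drone publishes over mesh messaging could look like the sketch below. The schema and field names are assumptions for illustration; the paper does not specify its message format:

```python
import json

def fire_alert(lat, lon, confidence, area_ha, image_path):
    """Build a JSON alert a UAV might publish when the onboard model
    flags a fire (hypothetical schema, not the paper's actual format)."""
    return json.dumps({
        "event": "forest_fire",
        "location": {"lat": lat, "lon": lon},   # GPS fix at detection time
        "confidence": round(confidence, 3),     # model's fire probability
        "area_ha": area_ha,                     # estimated affected area
        "image": image_path,                    # evidence frame reference
    })
```

Keeping the payload as a small self-describing JSON object lets the mobile app render the alert without a schema update on every field change.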

2021 ◽  
Vol 13 (6) ◽  
pp. 18651-18654
Lukman Ismail ◽  
Syafiq Sulaiman ◽  
Muhammad Izzat Hakimi Mat Nafi ◽  
Muhammad Syafiq Mohmad Nor ◽  
Nur Izyan Fathiah Saimeh ◽  

The Asiatic Golden Cat Catopuma temminckii is poorly studied in Peninsular Malaysia.  We deployed 12 camera traps to assess the wildlife diversity in the unprotected State Land Forest of Merapoh, Pahang State.  During the period from August to October 2019, one Asiatic Golden Cat was photographed at a single camera trap station.  This record outside the protected area network emphasizes the importance of wildlife corridors.  This State Land Forest is located between Forest Reserve and Taman Negara National Park.  Therefore, appropriate conservation measures must be taken in order to maintain this site as a wildlife corridor.

Matthew Kutugata ◽  
Jeremy Baumgardt ◽  
John A. Goolsby ◽  
Alexis E. Racelis

Abstract Camera traps provide a low-cost approach to collect data and monitor wildlife across large scales, but hand-labeling images at a rate that outpaces accumulation is difficult. Deep learning, a subdiscipline of machine learning and computer science, can address the issue of automatically classifying camera-trap images with a high degree of accuracy. This technique, however, may be less accessible to ecologists or small-scale conservation projects, and has serious limitations. In this study, we trained a simple deep learning model using a dataset of 120,000 images to identify the presence of nilgai Boselaphus tragocamelus, a regionally specific nonnative game animal, in camera-trap images with an overall accuracy of 97%. We trained a second model to identify 20 groups of animals and one group of images without any animals present, labeled as “none,” with an accuracy of 89%. Lastly, we tested the multigroup model on images of similar species collected in the southwestern United States, resulting in significantly lower precision and recall for each group. This study highlights the potential of deep learning for automating camera-trap image processing workflows, provides a brief overview of image-based deep learning, and discusses the often-understated limitations and methodological considerations in the context of wildlife conservation and species monitoring.
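The per-group precision and recall that degrade on out-of-domain images are computed from per-class true/false positive and false negative counts. A minimal sketch (the class names in the example are illustrative):

```python
def per_class_metrics(y_true, y_pred, classes):
    """Compute per-class (precision, recall) from parallel label lists."""
    metrics = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        metrics[c] = (precision, recall)
    return metrics
```

Reporting these per class, rather than a single overall accuracy, is what exposes the transfer failure: overall accuracy can stay respectable while individual groups collapse.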

Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1439 ◽  
Nora El-Rashidy ◽  
Shaker El-Sappagh ◽  
S. M. Riazul Islam ◽  
Hazem M. El-Bakry ◽  
Samir Abdelrazek

Coronavirus disease (COVID-19) is a new disease that causes viral pneumonia and can spread worldwide through person-to-person transmission. Although several medical companies provide cooperative healthcare monitoring systems, these solutions lack end-to-end management of the disease. The main objective of the proposed framework is to bridge the gap between current technologies and healthcare systems. The wireless body area network, cloud computing, fog computing, and a clinical decision support system are integrated to provide a comprehensive and complete model for disease detection and monitoring. By monitoring a person with COVID-19 in real time, physicians can guide patients with the right decisions. The proposed framework has three main layers (i.e., a patient layer, cloud layer, and hospital layer). In the patient layer, the patient is tracked through a set of wearable sensors and a mobile app. In the cloud layer, a fog network architecture is proposed to solve the issues of storage and data transmission. In the hospital layer, we propose a convolutional neural network-based deep learning model for COVID-19 detection based on patients’ X-ray scan images and transfer learning. The proposed model achieved promising results compared to the state of the art (i.e., accuracy of 97.95% and specificity of 98.85%). Our framework is a useful application, through which we expect significant effects on COVID-19 proliferation and a considerable lowering of healthcare expenses.

Bhakti J. Soochik

Abstract: This paper simulates IoT-based smart companies and makes our networking infrastructure effective, efficient and, most importantly, accurate and secure. The simulator used is Cisco Packet Tracer, a tool that has been used for many years in networking. The main strength of the tool is its offering of a variety of network components that simulate a real network; devices then need to be interconnected and configured in order to create a network. Technology plays a critical role in all daily activities of the present day. One of these needs is to create a smart office that controls operation and turns off electronic devices via a smartphone. This can be implemented effectively using Packet Tracer software that includes IoT functions to control and simulate a smart office. The latest version of the tool introduced IoT functionalities, and it is now possible to add to the network smart devices, components, sensors, and actuators, as well as devices that simulate microcontrollers such as Arduino or Raspberry Pi. All the IoT devices can run standard programs or can be customized by programming them in Java, Python, or Blockly. This makes Cisco Packet Tracer an ideal tool for building practical IoT simulations. The smart-industrial smart-company office offers a simulation of a power plant that produces and stores electricity via solar panels and wind turbines. All the electricity is produced by smart devices, then stored and used to power a production chain filled with smart sensors and actuators. IoT security features are also introduced in the simulations. Keywords: Internet of Things (IoT), Campus Network (CN), networking, wide area network (WAN).
