Deep Learning Methods for Monitoring, Detecting and Measuring Deer Movements for Wildlife Conservation

2019 ◽  
Vol 8 (4) ◽  
pp. 3303-3308

Wildlife researchers examine and mine video corpora for behavioural studies of free-ranging animals, covering monitoring, analysing, classifying, detecting, managing, counting and related tasks. Unfortunately, automated visual analysis of challenging real-time wildlife scenarios is not an easy task, especially for the classification and recognition of wild animals and the estimation of wildlife population sizes. The aim of this paper is to apply state-of-the-art feature learning on raw sensor data to advance the automatic detection and interpretation of animal movements from different perspectives, and to produce an objectness score for each object proposal generated by a Region Proposal Network (RPN). The imagery data are captured by motion-sensor cameras and then automatically segmented and recognised, with objectness scores, using RCNN, Fast RCNN and Faster RCNN; a ConvNet automatically processes these images and correctly recognises the objects. Experimental results demonstrate recognition of deer images with 96% accuracy while identifying three basic activities: sleeping, grazing and resting. In addition, a quantitative comparison is presented among CNN, RCNN, Fast RCNN and Faster RCNN.
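
As an illustration of the detection stage described above (not the authors' exact pipeline), the sketch below runs a pretrained Faster R-CNN from torchvision on a single frame and prints each detection's bounding box and confidence score; the generic COCO weights, the synthetic input frame, and the 0.8 score threshold are assumptions made for the example.

```python
# Minimal sketch: object detection with a pretrained Faster R-CNN (torchvision),
# illustrating boxes plus confidence scores in the spirit of the RPN objectness
# scores discussed above. Model weights, input, and threshold are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # generic COCO weights, not deer-specific
model.eval()

image = torch.rand(3, 480, 640)  # stand-in for a camera-trap frame read from disk

with torch.no_grad():
    prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score.item() > 0.8:  # arbitrary confidence threshold
        print(f"label={label.item()} score={score.item():.2f} box={box.tolist()}")
```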

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
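
As a simplified illustration of the idea of projecting a latent embedding space so that changes can be communicated more effectively (a sketch under assumed synthetic data, not the methods reviewed in the paper), the example below projects high-dimensional embeddings onto principal components and reports the displacement of each new observation from a reference set of past observations.

```python
# Minimal sketch: projecting latent embeddings to a low-dimensional space and
# measuring displacement of new observations from a reference set.
# The embeddings here are random stand-ins for real representation-learning outputs.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 128))        # embeddings of past observations (assumed)
new_obs = rng.normal(loc=0.5, size=(10, 128))  # embeddings of new observations (assumed drift)

pca = PCA(n_components=2).fit(reference)        # axes of the projected space
ref_centre = pca.transform(reference).mean(axis=0)

for i, z in enumerate(pca.transform(new_obs)):
    displacement = np.linalg.norm(z - ref_centre)  # distance from past behaviour in projected space
    print(f"observation {i}: displacement from reference = {displacement:.3f}")
```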


Author(s):  
M A Isayev ◽  
D A Savelyev

This paper compares different convolutional neural networks that form the core of the most relevant current solutions in the computer vision area. The study benchmarks these state-of-the-art solutions against criteria such as mAP (mean average precision) and FPS (frames per second) to assess their suitability for real-time use, and concludes with the best-performing convolutional neural network model and the deep learning methods used in each particular solution.
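
A minimal sketch of the FPS side of such a benchmark is shown below; the detection model and input resolution are placeholders, and mAP evaluation is omitted since it additionally requires a labelled validation set.

```python
# Minimal sketch: measuring inference throughput (FPS) of a detection model.
# The model and input size are placeholders, not the networks compared in the paper.
import time
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(weights="DEFAULT").eval()
dummy = [torch.rand(3, 320, 320)]  # one synthetic frame

with torch.no_grad():
    model(dummy)  # warm-up run

n_runs = 20
start = time.perf_counter()
with torch.no_grad():
    for _ in range(n_runs):
        model(dummy)
fps = n_runs / (time.perf_counter() - start)
print(f"approximate FPS: {fps:.1f}")
```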


2021 ◽  
Vol 21 (3) ◽  
pp. 93-104
Author(s):  
Yoseob Heo ◽  
Seongho Seo ◽  
We Shim ◽  
Jongseok Kang

Several researchers have been drawn to the development of fire detectors in recent years to protect people and property from the catastrophic disaster of fire. However, studies related to fire monitoring are affected by some unique characteristics of fire sensor signals, such as time dependence and the complexity of the signal pattern arising from the variety of fire types. In this study, a new deep learning-based approach is proposed that accurately classifies various types of fire situations in real time using data obtained from multidimensional channel fire sensor signals. The contribution of this study is a stacked-LSTM model that accounts for the time-series characteristics of the sensor data and the complexity of the multidimensional channel sensing data, forming a new fire monitoring framework for fire identification that improves on existing fire detectors.
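
A minimal sketch of a stacked-LSTM classifier over multichannel sensor sequences, in the spirit of the model described above, is given below; the channel count, hidden size, and number of fire classes are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: a stacked (two-layer) LSTM that classifies multichannel
# fire-sensor time series. Channel count, hidden size, and number of fire
# classes are illustrative assumptions only.
import torch
import torch.nn as nn

class StackedLSTMClassifier(nn.Module):
    def __init__(self, n_channels=8, hidden_size=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)   # "stacked" = num_layers > 1
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

model = StackedLSTMClassifier()
batch = torch.randn(16, 100, 8)           # 16 sequences, 100 time steps, 8 sensor channels
logits = model(batch)                     # (16, 4) class scores
print(logits.shape)
```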


2021 ◽  
pp. 503-514
Author(s):  
Luis-Roberto Jácome-Galarza ◽  
Miguel-Andrés Realpe-Robalino ◽  
Jonathan Paillacho-Corredores ◽  
José-Leonardo Benavides-Maldonado

1997 ◽  
Author(s):  
Jeffrey J. Carlson ◽  
Sharon A. Stansfield ◽  
Dan Shawver ◽  
Gerald M. Flachs ◽  
Jay B. Jordan ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Nadim Arubai ◽  
Omar Hamdoun ◽  
Assef Jafar

Applying deep learning methods, this paper addresses the problem of predicting depth from single monocular images. A vector of distances is predicted instead of a whole image matrix. A vector-only prediction decreases training overhead and prediction time and requires fewer resources (memory, CPU). We propose a module that is more time-efficient than the state-of-the-art modules ResNet, VGG, FCRN, and DORN. We further enhanced the network's results by training it on depth vectors from other levels (a new level is obtained by changing the Lidar tilt angle). The predicted output is a vector of distances around the robot, which is sufficient for the obstacle avoidance problem and many other applications.
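
A minimal sketch of the idea of regressing a fixed-length vector of distances from a single image, rather than a dense depth map, follows; the architecture and dimensions are assumptions and do not reproduce the authors' module.

```python
# Minimal sketch: a small CNN that maps a single RGB image to a vector of
# N distances (one per horizontal direction), rather than a dense depth map.
# Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class DepthVectorNet(nn.Module):
    def __init__(self, n_directions=360):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(128, n_directions)  # one distance per direction

    def forward(self, x):                               # x: (batch, 3, H, W)
        return self.regressor(self.features(x).flatten(1))

model = DepthVectorNet()
distances = model(torch.randn(1, 3, 192, 640))          # (1, 360) predicted distances
loss = nn.functional.l1_loss(distances, torch.rand(1, 360))  # e.g. against Lidar-derived targets
print(distances.shape, loss.item())
```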


2021 ◽  
Author(s):  
Hongming Li ◽  
Zhi-De Deng ◽  
Desmond Oathes ◽  
Yong Fan

Background: Electric fields (E-fields) induced by transcranial magnetic stimulation (TMS) can be modeled using partial differential equations (PDEs) with boundary conditions. However, existing numerical methods to solve PDEs for computing E-fields are usually computationally expensive; it often takes minutes to compute a high-resolution E-field using state-of-the-art finite-element methods (FEM). Methods: We developed a self-supervised deep learning (DL) method to compute precise TMS E-fields in real time. Given a head model and the primary E-field generated by TMS coils, a self-supervised DL model was built to generate an E-field by minimizing a loss function that measures how well the generated E-field fits the governing PDE and the Neumann boundary condition. The DL model was trained in a self-supervised manner that does not require any external supervision. We evaluated the DL model using both a simulated sphere head model and realistic head models of 125 individuals, and compared its accuracy and computational efficiency with a state-of-the-art FEM. Results: In realistic head models, the DL model obtained accurate E-fields with significantly smaller PDE and boundary condition residuals than the FEM (p<0.002, Wilcoxon signed-rank test). The DL model was computationally efficient, taking about 0.30 seconds on average to compute the E-field for one testing individual. For the simulated sphere head model, the DL model also obtained an accurate E-field whose difference from the analytical E-field was 0.004, more accurate than the solution obtained using the FEM. Conclusions: We have developed a self-supervised DL model that directly learns a mapping from the magnetic vector potential of a TMS coil and a realistic head model to the TMS-induced E-fields, facilitating real-time, precise TMS E-field modeling.
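
The self-supervised objective can be pictured as a physics-informed training loop in which the loss is the sum of a PDE residual term and a boundary-condition residual term. The sketch below is a schematic skeleton only: the network, the random point sampling, the simplified divergence-based PDE residual, and the n·E boundary term are illustrative assumptions and do not reproduce the authors' model or the full conductivity-weighted formulation.

```python
# Schematic sketch of self-supervised training driven by physics residuals.
# The network, sampling, and residual terms are simplified stand-ins for the
# actual TMS E-field formulation; no labels or external supervision are used.
import torch
import torch.nn as nn

# maps (spatial coordinates, primary E-field at that point) -> predicted E-field (3 components)
net = nn.Sequential(nn.Linear(6, 128), nn.Tanh(), nn.Linear(128, 128), nn.Tanh(), nn.Linear(128, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def divergence(e_field, coords):
    """Autograd divergence of the predicted field: a simplified stand-in for the
    full current-continuity PDE residual with tissue conductivity."""
    div = torch.zeros(coords.shape[0])
    for i in range(3):
        grad_i = torch.autograd.grad(e_field[:, i].sum(), coords, create_graph=True)[0][:, i]
        div = div + grad_i
    return div

for step in range(200):
    # interior samples for the PDE residual (head-model sampler assumed)
    coords = torch.rand(1024, 3, requires_grad=True)
    primary = torch.rand(1024, 3)                      # primary E-field from the coil (assumed given)
    e_field = net(torch.cat([coords, primary], dim=1))
    pde_loss = (divergence(e_field, coords) ** 2).mean()

    # boundary samples for the Neumann condition (simplified here to n·E = 0)
    b_coords = torch.rand(256, 3)
    b_primary = torch.rand(256, 3)
    b_normals = torch.rand(256, 3)                     # outward surface normals (placeholder)
    b_field = net(torch.cat([b_coords, b_primary], dim=1))
    bc_loss = ((b_field * b_normals).sum(dim=1) ** 2).mean()

    loss = pde_loss + bc_loss                          # purely physics-driven objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```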


Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3371 ◽  
Author(s):  
Hossain ◽  
Lee

In recent years, demand has been increasing for target detection and tracking from aerial imagery via drones using onboard powered sensors and devices. We propose a very effective method for this application based on a deep learning framework. A state-of-the-art embedded hardware system empowers small flying robots to carry out the real-time onboard computation necessary for object tracking. Two types of embedded modules were developed: one designed around a Jetson TX or AGX Xavier, and the other based on an Intel Neural Compute Stick. Both provide real-time onboard computing power for small flying drones with limited space. A comparative analysis of current state-of-the-art deep learning-based multi-object detection algorithms was carried out on the designated GPU-based embedded computing modules to obtain detailed metrics on frame rates as well as computational power. We also introduce an effective tracking approach for moving objects, based on an extension of simple online and real-time tracking known as Deep SORT, which combines a hypothesis tracking methodology and Kalman filtering with a deep learning-based association metric. In addition, a guidance system that tracks the target position using a GPU-based algorithm is introduced. Finally, we demonstrate the effectiveness of the proposed algorithms through real-time experiments with a small multi-rotor drone.
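
As a simplified illustration of the association step at the heart of SORT-style trackers (not the Deep SORT implementation itself), the sketch below matches detections to existing tracks by solving an assignment problem over an IoU cost matrix; in Deep SORT this cost would additionally blend a learned appearance metric and Kalman-filter motion gating. The boxes and the rejection threshold are assumed values.

```python
# Minimal sketch: SORT-style association of detections to tracks via IoU cost
# and the Hungarian algorithm. Deep SORT additionally blends a learned
# appearance metric and Kalman-filter gating into this cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

tracks = np.array([[10, 10, 50, 50], [100, 100, 150, 160]], dtype=float)       # predicted track boxes
detections = np.array([[12, 11, 52, 49], [200, 200, 240, 240]], dtype=float)   # new detections

cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(cost)        # optimal track-to-detection assignment
for r, c in zip(rows, cols):
    matched = cost[r, c] < 0.7                  # reject weak matches (threshold assumed)
    print(f"track {r} -> detection {c}: {'matched' if matched else 'unmatched'}")
```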


Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 157 ◽  
Author(s):  
Daniel S. Berman

Domain generation algorithms (DGAs) represent a class of malware used to generate large numbers of new domain names to achieve command-and-control (C2) communication between the malware program and its C2 server to avoid detection by cybersecurity measures. Deep learning has proven successful in serving as a mechanism to implement real-time DGA detection, specifically through the use of recurrent neural networks (RNNs) and convolutional neural networks (CNNs). This paper compares several state-of-the-art deep-learning implementations of DGA detection found in the literature with two novel models: a deeper CNN model and a one-dimensional (1D) Capsule Networks (CapsNet) model. The comparison shows that the 1D CapsNet model performs as well as the best-performing model from the literature.
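
A minimal sketch of the character-level 1D-CNN style of DGA classifier discussed above follows; the character vocabulary, embedding size, and filter counts are assumptions rather than the paper's architecture, and the 1D CapsNet variant is not shown.

```python
# Minimal sketch: a character-level 1D CNN that scores a domain name as
# DGA-generated or benign. Vocabulary, embedding size, and filter counts are
# illustrative assumptions; the 1D CapsNet variant is not shown.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."
MAX_LEN = 63

class DGAConvNet(nn.Module):
    def __init__(self, vocab_size=len(CHARS) + 1, embed_dim=32, n_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=3, padding=1)
        self.head = nn.Linear(n_filters, 1)

    def forward(self, x):                               # x: (batch, MAX_LEN) of char indices
        h = self.embed(x).transpose(1, 2)               # (batch, embed_dim, MAX_LEN)
        h = torch.relu(self.conv(h)).max(dim=2).values  # global max pool over positions
        return self.head(h).squeeze(1)                  # logit: >0 suggests DGA after training

def encode(domain):
    idx = [CHARS.index(c) + 1 for c in domain.lower() if c in CHARS][:MAX_LEN]
    return torch.tensor(idx + [0] * (MAX_LEN - len(idx))).unsqueeze(0)

model = DGAConvNet()
print(model(encode("xjq3kzt0pd.com")))                  # untrained, so the score is meaningless here
```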

