Deep-Learning-Based Automatic Monitoring of Pigs’ Physico-Temporal Activities at Different Greenhouse Gas Concentrations

Animals ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 3089
Author(s):  
Anil Bhujel ◽  
Elanchezhian Arulmozhi ◽  
Byeong-Eun Moon ◽  
Hyeon-Tae Kim

Pig behavior is an integral part of health and welfare management, as pigs usually reflect their inner emotions through behavior change. The livestock environment plays a key role in pigs’ health and wellbeing. A poor farm environment increases toxic GHGs, which might deteriorate pigs’ health and welfare. In this study, a computer-vision-based automatic monitoring and tracking model was proposed to detect pigs’ short-term physical activities in a compromised environment. The ventilators of the livestock barn were closed for an hour, three times a day (07:00–08:00, 13:00–14:00, and 20:00–21:00) to create a compromised environment, which increases the GHG level significantly. The corresponding pig activities were observed before, during, and after the hour of treatment. Two widely used object detection models (YOLOv4 and Faster R-CNN) were trained, and their performances were compared in terms of pig localization and posture detection. YOLOv4, which outperformed the Faster R-CNN model, was coupled with a Deep-SORT tracking algorithm to detect and track the pig activities. The results revealed that the pigs became more inactive with the increase in GHG concentration, reducing their standing and walking activities. Moreover, the pigs shortened their sternal-lying posture, increasing the lateral-lying posture duration at higher GHG concentrations. The high detection accuracy (mAP: 98.67%) and tracking accuracy (MOTA: 93.86% and MOTP: 82.41%) signify the models’ efficacy in monitoring and tracking pigs’ physical activities non-invasively.
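As a rough illustration of the tracking stage, the frame-to-frame data association at the heart of SORT-style trackers can be sketched as a greedy IoU matcher. This is a simplified sketch: Deep-SORT additionally uses a Kalman motion model and appearance embeddings, which are omitted here, and all box coordinates below are made-up values.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing track boxes to new detections by IoU.

    Returns (track_idx, detection_idx) matches and the indices of
    unmatched detections (candidates for new tracks).
    """
    matches, unmatched = [], set(range(len(detections)))
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, iou_threshold
        for d_idx in unmatched:
            score = iou(t_box, detections[d_idx])
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_idx, best))
            unmatched.discard(best)
    return matches, sorted(unmatched)

# Two existing pig tracks, three detections in the next frame.
tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]
dets = [[21, 21, 31, 31], [1, 1, 11, 11], [50, 50, 60, 60]]
matches, new_ids = associate(tracks, dets)
# matches -> [(0, 1), (1, 0)]; detection 2 starts a new track
```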



Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
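The reported F1-scores combine precision and recall computed from true positive, false positive, and false negative counts. A minimal sketch of that metric (the counts below are illustrative, not the paper's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 93 correctly identified moths, 7 false detections, 7 missed moths
score = f1_score(93, 7, 7)   # 0.93
```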


Mekatronika ◽  
2020 ◽  
Vol 2 (2) ◽  
pp. 49-54
Author(s):  
Arzielah Ashiqin Alwi ◽  
Ahmad Najmuddin Ibrahim ◽  
Muhammad Nur Aiman Shapiee ◽  
Muhammad Ar Rahim Ibrahim ◽  
Mohd Azraai Mohd Razman ◽  
...  

Dynamic, fast-paced, and fast-changing gameplay, in which angle shooting (at the top and bottom corners) has the best chance of producing a good goal, is a defining aspect of handball. When it comes to the narrow-angle area, the goalkeeper has trouble blocking the goal. Therefore, this research applies image processing to analyze shooting-precision performance and detect the ball accurately at high speed. The participants had to complete 50 successful shots at each of the four target locations in the handball goal. Computer vision was then implemented through a camera to identify the ball, followed by determining the accuracy of the ball position (floating, net tangle, and farthest or smallest) using object detection as the accuracy marker. The model was trained using the Deep Learning (DL) models YOLOv2, YOLOv3, and Faster R-CNN, and the best-precision models for ball-detection accuracy were compared. It was found that the best-performing classifier, Faster R-CNN, produces 99% accuracy for all ball positions.
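One way the four corner-target locations could be scored from a detected ball centre is a simple zone classifier. This is a hedged sketch, assuming a standard 3 m x 2 m handball goal with the origin at the bottom-left post; the paper's exact scoring rule is not specified, and the coordinates below are illustrative.

```python
def classify_target(cx, cy, goal_w=3.0, goal_h=2.0):
    """Map a detected ball centre (metres, origin at the bottom-left
    post) to one of the four corner target zones of the goal."""
    horiz = "left" if cx < goal_w / 2 else "right"
    vert = "top" if cy > goal_h / 2 else "bottom"
    return f"{vert}-{horiz}"

# A shot detected high on the left side of the goal:
zone = classify_target(0.2, 1.8)   # "top-left"
```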


Author(s):  
Hao Han ◽  
Jingming Hou ◽  
Ganggang Bai ◽  
Bingyao Li ◽  
Tian Wang ◽  
...  

Reports indicate that high cost, insecurity, and difficulty in complex environments hinder the traditional urban road inundation monitoring approach. This work proposed an automatic monitoring method for experimental urban road inundation based on the YOLOv2 deep learning framework. The proposed method is affordable and secure, with high accuracy rates in urban road inundation evaluation. The automatic detection of experimental urban road inundation was carried out under both dry and wet conditions on roads in the study area at a scale of a few m². The validation average accuracy rate of the model was high, at 90.1% for inundation detection, while its training average accuracy rate was 96.1%. This indicated that the model has effective performance with high detection accuracy and recognition ability. Besides, the inundated water area of the experimental inundation region and of the real road inundation region in the images was computed, showing that the relative errors between the measured and computed areas were less than 20%. The results indicated that the proposed method can provide reliable inundation area evaluation. Therefore, our findings provide an effective guide for the management of urban floods and urban flood warning, as well as systematic validation data for hydrologic and hydrodynamic models.
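The area computation behind the under-20% relative-error claim can be sketched as pixel counting on a binary water mask. The mask, ground-sampling resolution, and measured value below are illustrative assumptions, not the study's data.

```python
import numpy as np

def inundated_area(mask, metres_per_pixel):
    """Water area in m² from a binary segmentation mask."""
    return float(mask.sum()) * metres_per_pixel ** 2

def relative_error(computed, measured):
    """Relative error of the computed area against a field measurement."""
    return abs(computed - measured) / measured

# Hypothetical 100 x 100 px image with a 40 x 50 px inundated patch,
# at an assumed 0.05 m/pixel resolution.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:80] = 1
area = inundated_area(mask, metres_per_pixel=0.05)   # 2000 px * 0.0025 m² = 5.0 m²
err = relative_error(area, measured=6.0)             # ~0.17, i.e. under 20%
```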


Agriculture ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 843
Author(s):  
Chengqi Liu ◽  
Han Zhou ◽  
Jing Cao ◽  
Xuchao Guo ◽  
Jie Su ◽  
...  

Tracking the behavior trajectories of pigs in a group is becoming increasingly important for welfare feeding. A novel method was proposed in this study to accurately track the individual trajectories of pigs in a group and analyze their behavior characteristics. First, a multi-pig trajectory tracking model was established based on DeepLabCut (DLC) to realize daily trajectory tracking of piglets. Second, a high-dimensional spatiotemporal feature model was established based on kernel principal component analysis (KPCA) to achieve optimal nonlinear trajectory clustering. At the same time, an abnormal trajectory correction model was established across five dimensions (semantic, space, angle, time, and velocity) to avoid trajectory loss and drift. Finally, a thermal map of the track distribution was established to analyze the four activity areas of the piggery (resting, drinking, excretion, and feeding areas). Experimental results show that the trajectory tracking accuracy of our method reaches 96.88%, the tracking speed is 350 fps, and the loss value is 0.002. Thus, the method based on DLC–KPCA can meet the requirements for identifying piggery areas and tracking piglet behavior. This study is helpful for automatic monitoring of animal behavior and provides data support for breeding.
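The KPCA-based trajectory clustering step can be sketched with scikit-learn. The synthetic "trajectory features" below are stand-ins for real DLC keypoint tracks, and the kernel choice, gamma, and cluster count are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical features: 30 flattened (x, y) tracks of 20 frames each,
# drawn from two distinct activity areas of the pen.
rest = rng.normal(loc=0.0, scale=0.1, size=(15, 40))
feed = rng.normal(loc=2.0, scale=0.1, size=(15, 40))
features = np.vstack([rest, feed])

# Nonlinear dimensionality reduction with an RBF kernel, then clustering
# of the embedded trajectories.
embedded = KernelPCA(n_components=2, kernel="rbf", gamma=0.05).fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedded)
```

With well-separated activity areas, the RBF kernel matrix is nearly block-diagonal, so the first kernel principal component alone already separates the two groups before clustering.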


2019 ◽  
Vol 9 (14) ◽  
pp. 2862 ◽  
Author(s):  
Byoungjun Kim ◽  
Joonwhoan Lee

Fire is an abnormal event which can cause significant damage to lives and property. In this paper, we propose a deep learning-based fire detection method using a video sequence, which imitates the human fire detection process. The proposed method uses Faster Region-based Convolutional Neural Network (R-CNN) to detect the suspected regions of fire (SRoFs) and of non-fire based on their spatial features. Then, the summarized features within the bounding boxes in successive frames are accumulated by Long Short-Term Memory (LSTM) to classify whether there is a fire or not in a short-term period. The decisions for successive short-term periods are then combined in the majority voting for the final decision in a long-term period. In addition, the areas of both flame and smoke are calculated and their temporal changes are reported to interpret the dynamic fire behavior with the final fire decision. Experiments show that the proposed long-term video-based method can successfully improve the fire detection accuracy compared with the still image-based or short-term video-based method by reducing both the false detections and the misdetections.
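The two-level decision scheme (short-term classification, then long-term majority voting) can be sketched as follows. The per-frame scores stand in for the LSTM head's output and the thresholding rule is an assumption; only the voting structure follows the description above.

```python
from collections import Counter

def short_term_decision(frame_scores, threshold=0.5):
    """Flag a short-term period as 'fire' if the mean per-frame fire
    probability (e.g. from the LSTM head) exceeds the threshold."""
    return sum(frame_scores) / len(frame_scores) > threshold

def long_term_decision(period_decisions):
    """Majority vote over successive short-term decisions."""
    votes = Counter(period_decisions)
    return votes[True] > votes[False]

# Three successive short-term periods of illustrative frame scores.
periods = [[0.9, 0.8, 0.7], [0.2, 0.1, 0.3], [0.6, 0.9, 0.8]]
decisions = [short_term_decision(p) for p in periods]   # [True, False, True]
fire = long_term_decision(decisions)                    # True: 2 of 3 periods vote fire
```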




Author(s):  
Vinod Kumar Yadav ◽  
Dr. Pritaj Yadav ◽  
Dr. Shailja Sharma

With the number of motor vehicles increasing day by day, traffic regulation faces many challenges in intelligent road surveillance and governance; this is an important research area in artificial intelligence and deep learning. Among various technologies, computer vision and machine learning algorithms are the most efficient, as a huge amount of vehicle video and image data from roads is available for study. In this paper, we propose an efficient computer-vision-based approach to vehicle detection, recognition, and tracking. We merge one-stage (YOLOv4) and two-stage (R-FCN) detection methods to improve vehicle detection accuracy and speed. Two-stage object detection methods provide high localization and object recognition precision, whereas one-stage detectors achieve high inference and test speed. The Deep-SORT tracker is applied to the detected bounding boxes to estimate trajectories. We analyze the performance of the Mask R-CNN benchmark, YOLOv3, and the proposed YOLOv4 + R-FCN on the UA-DETRAC dataset, evaluating metrics such as mean average precision (mAP) and precision-recall.
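The abstract does not spell out how the one-stage and two-stage outputs are merged; one plausible sketch is IoU-based replacement, where fast one-stage boxes are kept but swapped for an overlapping two-stage box when the latter gives a more precise localization. All box coordinates below are made-up values and the rule itself is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def fuse(fast_dets, accurate_dets, iou_thr=0.5):
    """Keep the fast detector's boxes; where a two-stage box overlaps
    strongly, substitute its more precise coordinates."""
    fused = []
    for box in fast_dets:
        refined = max(accurate_dets, key=lambda b: iou(box, b), default=None)
        if refined is not None and iou(box, refined) >= iou_thr:
            fused.append(refined)
        else:
            fused.append(box)
    return fused

fast = [[0, 0, 10, 10], [30, 30, 40, 40]]     # one-stage output
accurate = [[1, 0, 11, 10]]                   # two-stage output
result = fuse(fast, accurate)
# first box is refined, second is kept as-is
```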


2020 ◽  
Vol 9 (12) ◽  
pp. 758
Author(s):  
Frederik Seerup Hass ◽  
Jamal Jokar Arsanjani

Synthetic aperture radar (SAR) plays a remarkable role in ocean surveillance, with capabilities of detecting oil spills, icebergs, and marine traffic both at daytime and at night, regardless of clouds and extreme weather conditions. The detection of ocean objects using SAR relies on well-established methods, mostly adaptive thresholding algorithms. In most waters, the dominant ocean objects are ships, whereas in Arctic waters the vast majority of objects are icebergs drifting in the ocean, which can be mistaken for ships in terms of navigation and ocean surveillance. Since these objects can look very much alike in SAR images, the determination of what the objects actually are still relies on manual detection and human interpretation. With the increasing interest in the Arctic regions for marine transportation, it is crucial to develop novel approaches for automatic monitoring of the traffic in these waters with satellite data. Hence, this study aims at proposing a deep learning model based on YOLOv3 for discriminating icebergs and ships, which could be used for mapping ocean objects ahead of a journey. Using dual-polarization Sentinel-1 data, we pilot-tested our approach on a case study in Greenland. Our findings reveal that our approach is capable of training a deep learning model with reliable detection accuracy. Our methodical approach, along with the choice of data and classifiers, can be of great importance to climate change researchers, shipping industries, and biodiversity analysts. The main difficulties were faced in the creation of training data in the Arctic waters, and we concluded that future work must focus on issues regarding training data.


2019 ◽  
Vol 31 (6) ◽  
pp. 844-850 ◽  
Author(s):  
Kevin T. Huang ◽  
Michael A. Silva ◽  
Alfred P. See ◽  
Kyle C. Wu ◽  
Troy Gallerani ◽  
...  

OBJECTIVE Recent advances in computer vision have revolutionized many aspects of society but have yet to find significant penetrance in neurosurgery. One proposed use for this technology is to aid in the identification of implanted spinal hardware. In revision operations, knowing the manufacturer and model of previously implanted fusion systems upfront can facilitate a faster and safer procedure, but this information is frequently unavailable or incomplete. The authors present one approach for the automated, high-accuracy classification of anterior cervical hardware fusion systems using computer vision. METHODS Patient records were searched for those who underwent anterior-posterior (AP) cervical radiography following anterior cervical discectomy and fusion (ACDF) at the authors’ institution over a 10-year period (2008–2018). These images were then cropped and windowed to include just the cervical plating system. Images were then labeled with the appropriate manufacturer and system according to the operative record. A computer vision classifier was then constructed using the bag-of-visual-words technique and KAZE feature detection. Accuracy and validity were tested using an 80%/20% training/testing pseudorandom split over 100 iterations. RESULTS A total of 321 images were isolated, containing 9 different ACDF systems from 5 different companies. The correct system was identified as the top choice in 91.5% ± 3.8% of the cases, and as one of the top 2 or top 3 choices in 97.1% ± 2.0% and 98.4% ± 1.3% of the cases, respectively. Performance persisted despite the inclusion of variable sizes of hardware (i.e., 1-level, 2-level, and 3-level plates). Stratification by the size of hardware did not improve performance. CONCLUSIONS A computer vision algorithm was trained to classify at least 9 different types of anterior cervical fusion systems using relatively sparse data sets and was demonstrated to perform with high accuracy. This represents one of many potential clinical applications of machine learning and computer vision in neurosurgical practice.
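The bag-of-visual-words pipeline can be sketched as follows: cluster local descriptors into a visual vocabulary, then represent each image as a histogram of visual-word frequencies. The random vectors below stand in for KAZE descriptors (which a real pipeline would extract from the cropped radiographs, e.g. with OpenCV's `cv2.KAZE_create()`), and the vocabulary size is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Stand-in for KAZE descriptors pooled from the training radiographs
# (KAZE descriptors are 64-dimensional).
train_descriptors = rng.normal(size=(500, 64))

# Build a small visual vocabulary by clustering the descriptors.
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train_descriptors)

def bovw_histogram(descriptors, codebook):
    """Quantize descriptors against the codebook and return a normalized
    word-frequency histogram (the image's feature vector for the
    downstream classifier)."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Descriptors from one new (hypothetical) cropped plate image.
image_descriptors = rng.normal(size=(120, 64))
feature = bovw_histogram(image_descriptors, codebook)
```

The resulting fixed-length histogram is what a conventional classifier (e.g. an SVM) would consume, regardless of how many keypoints each radiograph yields.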

