Image-Based Animal Detection and Breed Identification Using Neural Networks

Having accurate, detailed, and up-to-date information about the behaviour of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data from image sources, which could help catalyse the transformation of ecology, wildlife biology, zoology, conservation biology, animal behaviour, and related fields into “big data” sciences. However, extracting information from wildlife pictures remains an expensive, time-consuming, and manual task. We demonstrate that such information can be extracted automatically by deep learning with convolutional neural networks. Leveraging recent advances in deep learning for computer vision, we propose in this project a framework for automated animal recognition in the wild, aiming at an automated wildlife monitoring system. In particular, we use a single-label dataset annotated by citizen scientists, together with state-of-the-art deep convolutional neural network architectures and face-biometric techniques, to train a computational system capable of filtering animal images, identifying species automatically, and counting the number of species. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. This, in turn, can speed up research findings, support more efficient citizen-science-based monitoring systems and subsequent management decisions, and has the potential to make a significant impact on ecology and camera-trap image analysis.
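
To make the kind of pipeline described above concrete, the sketch below fine-tunes an ImageNet-pretrained CNN on a folder of species-labelled images. It is a minimal illustration, not the authors' system: the directory name wildlife_images/train, the ResNet-18 backbone, and the hyperparameters are assumptions, and a recent torchvision is assumed for the weights API.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Basic preprocessing: resize and normalize with ImageNet statistics.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: one sub-directory per species label.
dataset = datasets.ImageFolder("wildlife_images/train", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one training pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```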

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4065
Author(s):  
Piotr Bojarczak ◽  
Waldemar Nowakowski

The article presents a vision system for detecting elements of a railway track. Four types of fasteners, wooden and concrete sleepers, rails, and turnouts can be recognized by our system. In addition, it is possible to determine the degree of sleeper ballast coverage. Our system is also able to work when the track is moderately covered by snow. We used a Fully Convolutional Neural Network with 8-times upsampling (FCN-8) to detect railway track elements. In order to speed up training and improve the performance of the model, a pre-trained deep convolutional neural network developed by Oxford’s Visual Geometry Group (VGG16) was used as the backbone of our system. We also verified the invariance of our system to changes in brightness by artificially varying the brightness of the images. We performed two types of tests. In the first test, we changed the brightness by a constant value over the whole analyzed image. In the second test, we changed the brightness according to a predefined distribution corresponding to a Gaussian function.
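
The two brightness tests lend themselves to a short sketch. The snippet below is one plausible reading of the abstract, not the authors' exact procedure: the constant-offset test adds the same value to every pixel, while the Gaussian test modulates brightness with a 2-D Gaussian centred on the image; the centring and the amplitude/sigma parameters are assumptions.

```python
import numpy as np

def shift_brightness_constant(image, delta):
    """Test 1: add the same offset to every pixel (image as uint8 array)."""
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(np.uint8)

def shift_brightness_gaussian(image, amplitude, sigma):
    """Test 2: modulate brightness with a 2-D Gaussian centred on the image."""
    h, w = image.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    gauss = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    delta = amplitude * gauss
    if image.ndim == 3:                      # broadcast over colour channels
        delta = delta[..., None]
    return np.clip(image.astype(np.float32) + delta, 0, 255).astype(np.uint8)
```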


2018 ◽  
Vol 115 (25) ◽  
pp. E5716-E5725 ◽  
Author(s):  
Mohammad Sadegh Norouzzadeh ◽  
Anh Nguyen ◽  
Margaret Kosmala ◽  
Alexandra Swanson ◽  
Meredith S. Palmer ◽  
...  

Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into “big data” sciences. Motion-sensor “camera traps” enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild.
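
The key efficiency claim, that the system classifies only images it is confident about and defers the rest to human volunteers, can be illustrated with a simple confidence-thresholding sketch. The 0.95 threshold below is illustrative; the paper tunes its thresholds so that automated labels match the accuracy of human labelers.

```python
import numpy as np

def automate_with_confidence(probabilities, threshold=0.95):
    """Split predictions into 'automated' (confident) and 'deferred to humans'.

    probabilities: (N, num_classes) softmax outputs of the trained network.
    Returns automated labels, a boolean mask of confident images, and the
    indices of images deferred to human volunteers.
    """
    confidence = probabilities.max(axis=1)
    confident = confidence >= threshold
    automated_labels = probabilities[confident].argmax(axis=1)
    deferred_indices = np.where(~confident)[0]
    return automated_labels, confident, deferred_indices
```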


2021 ◽  
Vol 3 (1) ◽  
pp. 1-8
Author(s):  
Aji Prasetya Wibawa ◽  
Wahyu Arbianda Yudha Pratama ◽  
Anik Nur Handayani ◽  
Anusua Ghosh

Indonesia is a country with diverse cultures. One of them is Wayang Kulit (shadow puppetry), which has been recognized by UNESCO. Wayang Kulit features a variety of characters with distinct names and personalities; however, most of the younger generation is not familiar with these shadow-puppet characters. With today's rapid technological advancements, cameras and object-recognition technology can help address this. The Convolutional Neural Network (CNN) is one method that can be used. CNN is a deep learning technique that learns the best representation of image features and is commonly used for object detection; here it is used to classify good and bad characters. The data consist of 100 black-and-white puppet images that were downloaded individually. The model was trained using the CNN method, with Google Colab used to speed up the training process. A new model was then created to test the puppet images. The resulting model achieved a 92 percent accuracy rate, which means that the CNN can differentiate the Wayang Kulit characters.
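
A minimal sketch of the kind of small CNN classifier described here is shown below, assuming a hypothetical wayang/{good,bad} folder of grayscale puppet images and a recent TensorFlow/Keras; the architecture and training settings are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf

# Hypothetical directory layout: wayang/good/*.png and wayang/bad/*.png.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "wayang", image_size=(128, 128), color_mode="grayscale",
    label_mode="binary", batch_size=8)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # good vs. bad character
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```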


2020 ◽  
Vol 39 (6) ◽  
pp. 8521-8528
Author(s):  
Manikandan Nagarajan ◽  
A. Sasikumar ◽  
D. Muralidharan ◽  
Muthaiah Rajappa

Approximate computing is a rapidly growing technique for speeding up applications with less computational effort while maintaining acceptable accuracy in error-resilient applications such as machine learning and deep learning. The inherent error resilience of machine and deep learning processes gives the designer freedom to simplify the circuitry and speed up the computation process at the expense of accuracy of the computational results. Adders are the fundamental blocks of any computation. In order to optimize them for better performance, 2-bit multi-bit approximate adders (MAPX) are proposed in this work, which break the lengthy carry chain. In contrast with other approximate larger-width adders, which use accurate adders for the most significant part, the proposed 2-bit MAPX-1 and MAPX-2 adders are arranged in various ways to compose both the most and least significant parts. The designed 8-bit and 16-bit adders are evaluated for their performance and error characteristics. The proposed 2-bit MAPX-2 shows better error characteristics, with an MED of 0.250, while occupying less area, and MAPX-1 consumes less power and has lower delay at the cost of accuracy. Among the extended adders, the MAPX 8-bit adder Design1 outperforms the best-performing APX-based 8-bit adder Design1: its error performance is improved by 14%, 42.1%, and 50.4% compared to the existing well-performing APX 8-bit Design1, Design2, and Design3, respectively. Similarly, the proposed MAPX 16-bit Design1 substantially outperforms the best-performing APX 16-bit Design1, and its error performance is improved by 24.3%, 34.9%, and 50.3% compared to APX 16-bit Design1, Design2, and Design3, respectively. In order to evaluate the proposed adder in a real application, the extended MAPX 16-bit Design1 is fitted into the convolution layer of the Low Weights Digit Detector (LWDD) convolutional neural network-based digit classification system. Our modified system accelerates the computation process by a factor of 1.25 while exhibiting an accuracy of 91%, and it best fits error-tolerant real applications. All the adders are synthesized and implemented on the Intel Cyclone IV EP4CE22F17C6N FPGA.
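
To make the error metric concrete, the Python sketch below evaluates the mean error distance (MED) of a small approximate adder by exhaustive enumeration. The toy carry-dropping adder is a stand-in, not the published MAPX-1/MAPX-2 gate-level designs, so its MED will differ from the 0.250 reported above.

```python
def approx_add_2bit(a, b):
    """Hypothetical 2-bit approximate adder: drops the carry-out and
    saturates at 3, so the result always fits in 2 bits."""
    return min(a + b, 3)

def mean_error_distance(adder, width=2):
    """MED = average |exact - approximate| over all input combinations."""
    n = 1 << width
    errors = [abs((a + b) - adder(a, b)) for a in range(n) for b in range(n)]
    return sum(errors) / len(errors)

print(mean_error_distance(approx_add_2bit))   # MED of this toy design
```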


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.
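
As a hedged illustration of Voronoi polyhedra-based neighbor information, the sketch below uses SciPy to find which atoms share a Voronoi facet and derives a simple coordination-number descriptor; the actual featurization used in this work is more elaborate and is not reproduced here.

```python
import numpy as np
from scipy.spatial import Voronoi

# Toy atomic positions (Cartesian coordinates); a real workflow would use
# relaxed surface slabs rather than random points.
positions = np.random.rand(20, 3) * 10.0

vor = Voronoi(positions)

# Two atoms are Voronoi neighbours if their cells share a facet (a "ridge").
neighbors = {i: set() for i in range(len(positions))}
for i, j in vor.ridge_points:
    neighbors[i].add(j)
    neighbors[j].add(i)

# A simple per-atom descriptor is the coordination number, i.e. the number
# of Voronoi neighbours, which could be fed to a network with atomic features.
coordination = {i: len(n) for i, n in neighbors.items()}
```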


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of the convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios at 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than those of the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than those of the scratch- and ImageNet-based models. Model performance for slide-level classification showed a similar trend, and the feasibility of transfer learning to enhance model performance was validated for frozen-section datasets with limited numbers of samples.
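
The comparison of initial weights can be sketched as follows, assuming a PyTorch workflow with a hypothetical ResNet-50 backbone and a hypothetical CAMELYON16 checkpoint path; the study's actual architecture and checkpoints may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(init="camelyon16", num_classes=2,
                camelyon_ckpt="camelyon16_pretrained.pth"):
    """Build the same CNN under the three initialisations compared in the study."""
    if init == "scratch":
        model = models.resnet50(weights=None)                  # random initialisation
    elif init == "imagenet":
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    elif init == "camelyon16":
        model = models.resnet50(weights=None)
        state = torch.load(camelyon_ckpt, map_location="cpu")  # CAMELYON16-pretrained weights
        model.load_state_dict(state, strict=False)
    else:
        raise ValueError(f"unknown initialisation: {init}")
    # Replace the head for the binary task (metastasis vs. benign patch).
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```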


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

The available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with a large baseline. This undesired result happens due to the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions using a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected using K nearest neighbors, and then generate deep learning-based descriptors with high discrimination, realized with our multiple hard network structure (MTHardNets). Following this step, the conjugate features are produced by using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired by ground close-range photography and an unmanned aerial vehicle (UAV) verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) our proposed MTHardNets can generate high-quality descriptors; and (ii) the IHesAffNet can produce substantial affine-invariant corresponding features with reliable transform parameters.
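
The Euclidean distance ratio matching step can be sketched independently of the learned descriptors. The snippet below is a generic nearest/second-nearest ratio test over descriptor arrays; the 0.8 threshold is illustrative and not the value tuned in the paper.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.8):
    """Match descriptors by Euclidean distance ratio (nearest vs. second nearest).

    desc_a: (N, D) descriptors from image A; desc_b: (M, D) from image B.
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches
```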


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
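
The reported statistics can be reproduced on hypothetical data with standard Python tooling, as sketched below; the scores array is a placeholder for the AI output probabilities, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-patient labels: 1 = COVID-19 (n = 34), 0 = ICI pneumonitis (n = 21).
y_true = np.array([1] * 34 + [0] * 21)
scores = np.random.rand(55)          # placeholder for the AI output probabilities

auc = roc_auc_score(y_true, scores)                 # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Mann-Whitney U test comparing score distributions between the two groups.
u_stat, p_value = mannwhitneyu(scores[y_true == 1], scores[y_true == 0])
print(f"AUC = {auc:.2f}, Mann-Whitney p = {p_value:.3f}")
```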

