Augmentation Methods for Biodiversity Training Data

Author(s):  
Mario Lasseck

The detection and identification of individual species based on images or audio recordings has shown significant performance increases over the last few years, thanks to recent advances in deep learning. Reliable automatic species recognition provides a promising tool for biodiversity monitoring, research and education. Image-based plant identification, for example, now comes close to the most advanced human expertise (Bonnet et al. 2018, Lasseck 2018a). Besides improved machine learning algorithms, neural network architectures, deep learning frameworks and computer hardware, a major reason for the gain in performance is the increasing abundance of biodiversity training data, either from observational networks and data providers like GBIF, Xeno-canto and iNaturalist, or from natural history museum collections like the Animal Sound Archive of the Museum für Naturkunde. However, in many cases this occurrence data is still insufficient for data-intensive deep learning approaches and is often unbalanced, with only a few examples for very rare species. To overcome these limitations, data augmentation can be used. This technique synthetically creates more training samples by applying various subtle random manipulations to the original data in a label-preserving way, i.e. without changing the content. In the talk, we will present augmentation methods for images and audio data. The positive effect on identification performance will be evaluated on different large-scale data sets from recent plant and bird identification (LifeCLEF 2017, 2018) and detection (DCASE 2018) challenges (Lasseck 2017, Lasseck 2018b, Lasseck 2018c).
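Such label-preserving manipulations can be sketched in a few lines of NumPy; the specific operations and parameter ranges below (flip probability, jitter and noise amplitudes, shift length) are illustrative assumptions, not the settings used in the cited challenge systems:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment_image(img):
    """Subtle, label-preserving manipulations of an image array (H, W, C) in [0, 1]."""
    if rng.random() < 0.5:                            # random horizontal flip
        img = img[:, ::-1, :]
    img = img * rng.uniform(0.9, 1.1)                 # slight brightness jitter
    img = img + rng.normal(0.0, 0.01, img.shape)      # low-amplitude Gaussian noise
    return np.clip(img, 0.0, 1.0)

def augment_audio(wave, max_shift=400):
    """Time shift and noise injection for a mono waveform (1-D array)."""
    shift = rng.integers(-max_shift, max_shift + 1)
    wave = np.roll(wave, shift)                       # circular time shift
    return wave + rng.normal(0.0, 0.005, wave.shape)  # faint background noise
```

Because the perturbations are small, the species label of each augmented sample remains valid, which is exactly what makes the technique usable for rare classes.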

Database ◽  
2019 ◽  
Vol 2019 ◽  
Author(s):  
Tao Chen ◽  
Mingfen Wu ◽  
Hexi Li

Abstract: The automatic extraction of meaningful relations from biomedical literature or clinical records is crucial in various biomedical applications. Most current deep learning approaches for medical relation extraction require large-scale training data to prevent overfitting of the training model. We propose using a pre-trained model and a fine-tuning technique to improve these approaches without additional time-consuming human labeling. First, we show the architecture of Bidirectional Encoder Representations from Transformers (BERT), an approach for pre-training a model on large-scale unstructured text. We then combine BERT with a one-dimensional convolutional neural network (1d-CNN) to fine-tune the pre-trained model for relation extraction. Extensive experiments on three datasets, namely the BioCreative V chemical disease relation corpus, the traditional Chinese medicine literature corpus and the i2b2 2012 temporal relation challenge corpus, show that the proposed approach achieves state-of-the-art results (relative F1-score improvements of 22.2%, 7.77%, and 38.5%, respectively, over a traditional 1d-CNN classifier). The source code is available at https://github.com/chentao1999/MedicalRelationExtraction.
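The 1d-CNN fine-tuning head can be pictured as a convolution over the encoder's token embeddings followed by max-over-time pooling and a relation classifier. The NumPy forward pass below is an illustrative sketch under assumed shapes, not the authors' implementation (see their repository for that):

```python
import numpy as np

def conv1d_head(embeddings, kernels, w_out, b_out):
    """1d-CNN relation classifier over contextual token embeddings.

    embeddings: (T, d) token vectors from a pre-trained encoder such as BERT
    kernels:    (n_filters, k, d) convolution filters of width k
    w_out/b_out: linear layer mapping pooled features to relation labels
    """
    T, d = embeddings.shape
    n_filters, k, _ = kernels.shape
    # slide each filter across the token axis
    feats = np.array([
        [np.sum(embeddings[t:t + k] * kernels[f]) for t in range(T - k + 1)]
        for f in range(n_filters)
    ])
    feats = np.maximum(feats, 0.0)        # ReLU
    pooled = feats.max(axis=1)            # max-over-time pooling -> (n_filters,)
    logits = pooled @ w_out + b_out       # linear relation classifier
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over relation labels
```

During fine-tuning, gradients would flow through this head into the pre-trained encoder, which is what lets the approach work with limited labeled relation data.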


2020 ◽  
Vol 10 (11) ◽  
pp. 3755
Author(s):  
Eun Kyeong Kim ◽  
Hansoo Lee ◽  
Jin Yong Kim ◽  
Sungshin Kim

Deep learning is applied in various manufacturing domains. To train a deep learning network, we must collect a sufficient amount of training data. However, it is difficult to collect the image datasets required to train networks for object recognition, especially because the target items to be classified are generally excluded from existing databases and the manual collection of images poses certain limitations. Therefore, to overcome the data deficiency present in many domains, including manufacturing, we propose a method of generating new training images through a sequence of pre-processing steps: background elimination; target extraction that preserves the ratio of the object size in the original image; color perturbation constrained by a predefined similarity between the original and generated images; geometric transformations; and transfer learning. To examine color perturbation and geometric transformations specifically, we compare and analyze experiments for each color space and each geometric transformation. The experimental results show that the proposed method can effectively augment the original data, correctly classify similar items, and improve image classification accuracy. They also demonstrate that an effective data augmentation method is crucial when the amount of training data is small.
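Two of the steps above, similarity-constrained color perturbation and geometric transformation, can be sketched as follows; the cosine-similarity bound and retry loop are assumptions for illustration, not the paper's exact similarity criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def color_perturb(img, max_scale=0.2, min_similarity=0.9):
    """Per-channel color perturbation kept within a similarity bound
    (here: cosine similarity between the flattened original and result)."""
    for _ in range(10):                               # retry until similar enough
        scale = 1.0 + rng.uniform(-max_scale, max_scale, size=3)
        out = np.clip(img * scale, 0.0, 1.0)
        a, b = img.ravel(), out.ravel()
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if sim >= min_similarity:
            return out
    return img                                        # fall back to the original

def geometric_variants(img):
    """Simple geometric transformations: 90-degree rotations and flips."""
    return [np.rot90(img, k) for k in range(4)] + [img[:, ::-1], img[::-1, :]]
```

Each original image thus yields several plausible variants while the similarity bound keeps the perturbed colors close enough that the class label is preserved.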


2021 ◽  
Author(s):  
Ricardo Peres ◽  
Magno Guedes ◽  
Fábio Miranda ◽  
José Barata

<div>The advent of Industry 4.0 has shown the tremendous transformative potential of combining artificial intelligence, cyber-physical systems and Internet of Things concepts in industrial settings. Despite this, data availability is still a major roadblock for the successful adoption of data-driven solutions, particularly concerning deep learning approaches in manufacturing. Specifically in the quality control domain, annotated defect data can often be costly, time-consuming and inefficient to obtain, potentially compromising the viability of deep learning approaches due to data scarcity. In this context, we propose a novel method for generating annotated synthetic training data for automated quality inspections of structural adhesive applications, validated in an industrial cell for automotive parts. Our approach greatly reduces the cost of training deep learning models for this task, while simultaneously improving their performance in a scarce manufacturing data context with imbalanced training sets by 3.1% ([email protected]). Additional results can be seen at https://git.io/Jtc4b.</div>
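One common way to generate annotated synthetic samples of this kind is to composite defect patches onto clean backgrounds, so the segmentation label is known by construction. The NumPy sketch below is a simplified illustration of that idea (no blending or domain randomization), not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_sample(background, defect_patch):
    """Paste a defect patch at a random position on a clean background image.
    The segmentation mask (annotation) is generated automatically."""
    img = background.copy()
    mask = np.zeros(background.shape[:2], dtype=np.uint8)
    ph, pw = defect_patch.shape[:2]
    y = rng.integers(0, background.shape[0] - ph + 1)
    x = rng.integers(0, background.shape[1] - pw + 1)
    img[y:y + ph, x:x + pw] = defect_patch
    mask[y:y + ph, x:x + pw] = 1          # pixel-accurate label for free
    return img, mask
```

Because the annotation falls out of the generation process itself, the cost of labeling, one of the main roadblocks named above, is removed entirely for the synthetic portion of the training set.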


2020 ◽  
Vol 10 (10) ◽  
pp. 2446-2451
Author(s):  
Hussain Ahmad ◽  
Muhammad Zubair Asghar ◽  
Fahad M. Alotaibi ◽  
Ibrahim A. Hameed

In social media, depression identification can be regarded as a complex task because of the complicated nature of mental disorders. This research area has evolved in recent times with the growing popularity of social media platforms, which have become a fundamental part of people's day-to-day lives. Social media platforms and their users share a close relationship, through which users' personal lives are reflected on these platforms on several levels. Beyond the inherent complexity of recognising mental illness via social media, supervised machine learning approaches such as deep neural networks have yet to be adopted at large scale because of the difficulty of procuring sufficient quantities of annotated training data. For these reasons, we set out to identify the most effective deep learning model from among selected architectures with a previous record of success in supervised learning. The selected model is then employed to recognise online users displaying depression, given the limited unstructured text data that can be extracted from Twitter.


Author(s):  
Nukabathini Mary Saroj Sahithya ◽  
Manda Prathyusha ◽  
Nakkala Rachana ◽  
Perikala Priyanka ◽  
P. J. Jyothi

Product reviews are valuable for upcoming buyers in helping them make decisions. To this end, different opinion mining techniques have been proposed, where judging a review sentence's orientation (e.g. positive or negative) is one of their key challenges. Recently, deep learning has emerged as an effective means of solving sentiment classification problems. Deep learning is a class of machine learning algorithms that learn in supervised and unsupervised manners. A neural network intrinsically learns a useful representation automatically, without human effort. However, the success of deep learning relies heavily on large-scale training data. We propose a novel deep learning framework for product review sentiment classification which employs prevalently available ratings as supervision signals. The framework consists of two steps: (1) learning a high-level representation (an embedding space) that captures the general sentiment distribution of sentences through rating information; (2) adding a category layer on top of the embedding layer and using labelled sentences for supervised fine-tuning. We explore two kinds of low-level network structure for modelling review sentences, namely convolutional feature extractors and long short-term memory (LSTM) networks. The convolutional layer is the core building block of a CNN and consists of learnable kernels; applications include image and video recognition, natural language processing, and image classification.
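The two-step framework can be pictured as an embedding function pre-trained with rating signals (step 1) and a category layer stacked on top for supervised fine-tuning (step 2). The NumPy sketch below is a minimal illustration under assumed shapes and a tanh embedding; it is not the paper's network:

```python
import numpy as np

def embed(sentence_vec, w_embed):
    """Step 1 output: map a sentence vector into the embedding space
    whose weights were pre-trained with rating supervision."""
    return np.tanh(sentence_vec @ w_embed)

def sentiment_classify(sentence_vec, w_embed, w_cat, b_cat):
    """Step 2: a category layer stacked on the pre-trained embedding,
    fine-tuned with labelled sentences."""
    h = embed(sentence_vec, w_embed)
    logits = h @ w_cat + b_cat
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # probabilities over sentiment classes
```

In the full framework, `w_embed` would come from the rating-supervised pre-training stage, so the scarce labelled sentences only have to fit the small category layer.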


2021 ◽  
Author(s):  
Mustafa Çelik ◽  
Ahmet Haydar Örnek

Deep learning methods, especially convolutional neural networks (CNNs), have made a major contribution to computer vision. However, deep learning classifiers need large-scale annotated datasets to be trained without over-fitting, and models trained on highly diverse data generalize better. Collecting such a large-scale dataset remains challenging. Furthermore, it is essential for researchers to protect subjects' confidentiality when using personal data such as face images. In this paper, we propose a deep learning approach based on Generative Adversarial Networks (GANs) that generates synthetic samples for our mask classification model. Our contributions in this work are two-fold. First, GAN models can be used as an anonymization tool when subjects' confidentiality matters. Second, the generated masked/unmasked face images boost the performance of the mask classification model when the synthetic images are used as a form of data augmentation. In our work, the classification accuracy using only traditional data augmentation is 93.71%. Using both synthetic data and original data with traditional data augmentation, the result is 95.50%. This shows that GAN-generated synthetic data boosts the performance of deep learning classifiers.
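The augmentation step, mixing GAN-generated faces with the original training set, can be sketched as follows; the single-layer tanh "generator" is a stand-in for a trained GAN, and its weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_synthetic(generator_w, n):
    """Sample n synthetic images from a (trained) generator: z -> G(z).
    generator_w stands in for the generator's learned weights."""
    latent_dim = generator_w.shape[0]
    z = rng.normal(size=(n, latent_dim))   # latent noise vectors
    return np.tanh(z @ generator_w)        # image vectors in [-1, 1]

def augmented_training_set(real_x, real_y, synth_x, synth_y):
    """Mix GAN-generated samples with the original data and shuffle."""
    x = np.concatenate([real_x, synth_x])
    y = np.concatenate([real_y, synth_y])
    idx = rng.permutation(len(x))          # interleave real and synthetic
    return x[idx], y[idx]
```

The classifier never needs to distinguish the two sources, which is also what makes the synthetic images usable as anonymized stand-ins for real faces.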


2021 ◽  
Author(s):  
Alexander Zizka ◽  
Tobias Andermann ◽  
Daniele Silvestro

Aim: The global Red List (RL) from the International Union for Conservation of Nature is the most comprehensive global quantification of extinction risk, and is widely used in applied conservation as well as in biogeographic and ecological research. Yet, due to the time-consuming assessment process, the RL is biased taxonomically and geographically, which limits its application at large scales, in particular for understudied areas, such as the tropics, or understudied taxa, such as most plants and invertebrates. Here we present IUCNN, an R package implementing deep learning models to predict species RL status from publicly available geographic occurrence records (and other traits if available). Innovation: We implement a user-friendly workflow to train and validate neural network models, and subsequently use them to predict species RL status. IUCNN contains functions to address specific issues related to the RL framework, including a regression-based approach to account for the ordinal nature of RL categories and for class imbalance in the training data, a Bayesian approach for improved uncertainty quantification, and a target-accuracy threshold approach that limits predictions to those species whose RL status can be predicted with high confidence. Most analyses can be run with a few lines of code, without prior knowledge of neural network models. We demonstrate the use of IUCNN on an empirical dataset of ~14,000 orchid species, for which IUCNN models can predict extinction risk within minutes while outperforming comparable methods. Main conclusions: IUCNN harnesses innovative methodology to estimate the RL status of large numbers of species. By providing estimates of the number and identity of threatened species in custom geographic or taxonomic datasets, IUCNN enables large-scale analyses of the extinction risk of species so far not well represented on the official RL.
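The regression-style handling of ordinal RL categories can be illustrated with a cumulative ("thresholds passed") encoding, shown here in Python for consistency with the other sketches (IUCNN itself is an R package, and this generic scheme is not necessarily its exact implementation):

```python
import numpy as np

RL_ORDER = ["LC", "NT", "VU", "EN", "CR"]   # least to most threatened

def ordinal_encode(category):
    """Cumulative encoding that preserves category ordering,
    e.g. "VU" -> [1, 1, 0, 0]: it exceeds the LC and NT thresholds."""
    k = RL_ORDER.index(category)
    return np.array([1.0 if i < k else 0.0 for i in range(len(RL_ORDER) - 1)])

def decode(threshold_probs, cutoff=0.5):
    """Predicted category = number of thresholds passed with prob > cutoff."""
    return RL_ORDER[int(np.sum(np.asarray(threshold_probs) > cutoff))]
```

Unlike plain one-hot classification, this encoding makes a prediction one category off (say EN instead of CR) a smaller error than one three categories off, which matches how RL categories are actually ordered.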


2020 ◽  
Vol 12 (14) ◽  
pp. 2274
Author(s):  
Christopher Stewart ◽  
Michele Lazzarini ◽  
Adrian Luna ◽  
Sergio Albani

The availability of free and open data from Earth observation programmes such as Copernicus, and from collaborative projects such as OpenStreetMap (OSM), enables low-cost artificial intelligence (AI)-based monitoring applications. This creates opportunities, particularly in developing countries with scarce economic resources, for large-scale monitoring in remote regions. A significant portion of Earth's surface comprises desert dune fields, where shifting sand affects infrastructure and hinders movement. A robust, cost-effective and scalable methodology is proposed for road detection and monitoring in regions covered by desert sand. The technique uses Copernicus Sentinel-1 synthetic aperture radar (SAR) satellite data as an input to a deep learning model based on the U-Net architecture for image segmentation. OSM data is used for model training. The method comprises two steps: the first involves processing time series of Sentinel-1 SAR interferometric wide swath (IW) acquisitions in the same geometry to produce multitemporal backscatter and coherence averages. These are divided into patches and matched with masks of OSM roads to form the training data, the quantity of which is increased through data augmentation. The second step comprises the U-Net deep learning workflow. The methodology has been applied to three different dune fields in Africa and Asia. A performance evaluation through the calculation of the Jaccard similarity coefficient was carried out for each area; it ranges from 84% to 89% for the best available input. The rank distance, calculated from the completeness and correctness percentages, ranged from 75% to 80%. Over all areas there are more missed detections than false positives; in some cases, this was due to mixed infrastructure in the same resolution cell of the input SAR data. Drift sand and dune migration covering infrastructure are a concern in many desert regions, and broken segments in the resulting road detections are sometimes due to sand burial. The results also show that, in most cases, the Sentinel-1 vertical transmit-vertical receive (VV) backscatter averages alone constitute the best input to the U-Net model. The detection and monitoring of roads in desert areas are key concerns, particularly given a growing population increasingly on the move.
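The two evaluation measures can be computed as below. The Jaccard coefficient follows the standard intersection-over-union definition; the rank-distance formula shown (root mean square of completeness and correctness) is the common definition from the road-extraction literature and is an assumption here, as the abstract does not spell it out:

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard similarity coefficient (IoU) between two binary road masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0                          # both masks empty: perfect agreement
    return np.logical_and(pred, truth).sum() / union

def rank_distance(completeness, correctness):
    """Combine completeness and correctness (both in [0, 1]) into one score."""
    return np.sqrt((completeness ** 2 + correctness ** 2) / 2.0)
```

Note that the Jaccard coefficient penalizes both missed detections and false positives symmetrically, which is why the abstract reports the completeness/correctness split separately to show which error type dominates.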


Author(s):  
Yi-Quan Li ◽  
Hao-Sen Chang ◽  
Daw-Tung Lin

In the field of computer vision, large-scale image classification tasks are both important and highly challenging. With the ongoing advances in deep learning and optical character recognition (OCR) technologies, neural networks designed to perform large-scale classification play an essential role in facilitating OCR systems. In this study, we developed an automatic OCR system designed to identify up to 13,070 large-scale printed Chinese characters by using deep learning neural networks and fine-tuning techniques. The proposed framework comprises four components: training dataset synthesis and background simulation, image preprocessing and data augmentation, model training, and transfer learning. The training data synthesis procedure is composed of a character font generation step and a background simulation process. Three background models are proposed to simulate the background noise and anti-counterfeiting patterns on ID cards. To expand the diversity of the synthesized training dataset, rotation and zooming data augmentation are applied. A massive dataset comprising more than 19.6 million images was thus created to accommodate the variations in the input images and improve the learning capacity of the CNN model. Subsequently, we modified the GoogLeNet neural architecture by replacing the fully connected (FC) layer with a global average pooling layer to avoid overfitting caused by the massive amount of training data; consequently, the number of model parameters was reduced. Finally, we employed the transfer learning technique to further refine the CNN model using a small number of real data samples. Experimental results show that the overall recognition performance of the proposed approach is significantly better than that of prior methods, demonstrating the effectiveness of the proposed framework, which exhibited a recognition accuracy as high as 99.39% on the constructed real ID card dataset.
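The FC-to-global-average-pooling swap can be sketched as follows: each feature map is averaged to a single value, so the classifier head needs only one weight vector per class instead of a dense layer over the flattened maps. The toy shapes below are illustrative, not GoogLeNet's actual dimensions:

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Average each (H, W) feature map to a single value: (C, H, W) -> (C,).
    Unlike flatten + FC, this step adds no trainable parameters."""
    return feature_maps.mean(axis=(1, 2))

def gap_classify(feature_maps, w, b):
    """GAP followed by a single linear layer over the character classes
    (13,070 in the paper; a small toy output size here)."""
    pooled = global_average_pooling(feature_maps)
    logits = pooled @ w + b
    e = np.exp(logits - logits.max())
    return e / e.sum()                     # softmax over character classes
```

With C channels and K classes the head holds only C×K weights, versus C×H×W×K for flatten-plus-FC, which is the parameter reduction and overfitting protection the abstract describes.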

