Learning from Synthetic Dataset for Crop Seed Instance Segmentation

2019 ◽  
Author(s):  
Yosuke Toda ◽  
Fumio Okura ◽  
Jun Ito ◽  
Satoshi Okada ◽  
Toshinori Kinoshita ◽  
...  

Incorporating deep learning in the image analysis pipeline has opened the possibility of introducing precision phenotyping in the field of agriculture. However, to train the neural network, a sufficient amount of training data must be prepared, which requires a time-consuming manual annotation process that often becomes the limiting step. Here, we show that an instance segmentation neural network (Mask R-CNN) aimed at phenotyping the seed morphology of various barley cultivars can be sufficiently trained purely on a synthetically generated dataset. Our approach is based on the concept of domain randomization, in which a large number of images are generated by randomly orienting and placing seed objects on a virtual canvas. After training with such a dataset, recall and average precision on a real-world test dataset reached 96% and 95%, respectively. Applying our pipeline enables large-scale extraction of morphological parameters, allowing precise characterization of the natural variation of barley from a multivariate perspective. Importantly, we show that our approach is effective not only for barley seeds but also for various crops including rice, lettuce, oat, and wheat, supporting that the performance benefits of this technique are generic. We propose that constructing and utilizing such synthetic data can be a powerful way to reduce the human labor required to prepare training datasets for deep learning in the agricultural domain.
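A minimal sketch of the domain-randomization idea described above: paste randomly rotated seed cut-outs onto a virtual canvas and record one binary mask per instance, which could then feed a Mask R-CNN training pipeline. The canvas size, seed count, and the helper name paste_random_seeds are illustrative assumptions, not the authors' implementation.

```python
# Illustrative domain-randomized synthetic image generation for instance
# segmentation. Assumes seed_images is a list of RGBA PIL cut-outs of seeds.
import random
import numpy as np
from PIL import Image

def paste_random_seeds(seed_images, canvas_size=(1024, 1024), n_seeds=50):
    """Paste randomly rotated/positioned seed cut-outs onto a blank canvas
    and return the composite image plus one boolean mask per instance."""
    canvas = Image.new("RGB", canvas_size, color=(30, 30, 30))
    masks = []
    for _ in range(n_seeds):
        seed = random.choice(seed_images).rotate(
            random.uniform(0, 360), expand=True)
        x = random.randint(0, canvas_size[0] - seed.width)
        y = random.randint(0, canvas_size[1] - seed.height)
        alpha = seed.split()[-1]              # use the cut-out's alpha as its mask
        canvas.paste(seed, (x, y), alpha)
        mask = Image.new("L", canvas_size, 0)
        mask.paste(alpha, (x, y))
        masks.append(np.array(mask) > 0)      # boolean instance mask
    return np.array(canvas), masks

# Usage sketch:
# image, masks = paste_random_seeds(seed_images)
```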

2017 ◽  
Vol 26 (1) ◽  
Author(s):  
Thomas M. Boudreaux

With several new large-scale surveys on the horizon, including LSST, TESS, ZTF, and Evryscope, faster and more accurate analysis methods will be required to adequately process the enormous amount of data produced. Deep learning, used in industry for years now, allows for advanced feature detection in minimally prepared datasets at very high speeds; however, despite the advantages of this method, its application to astrophysics has not yet been extensively explored. This dearth may be due to a lack of training data available to researchers. Here we generate synthetic data loosely mimicking the properties of acoustic mode pulsating stars and show that two separate paradigms of deep learning - the artificial neural network and the convolutional neural network - can both be used to classify this synthetic data effectively. Additionally, this classification can be performed at relatively high accuracy with minimal time spent adjusting network hyperparameters.
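As an illustration of this kind of synthetic classification experiment, the sketch below generates toy "pulsator vs. non-pulsator" light curves and trains a small 1-D convolutional network on them. The signal frequencies, noise level, and architecture are assumptions for illustration, not the paper's setup.

```python
# Toy synthetic light curves and a small 1-D CNN classifier (illustrative only).
import numpy as np
import tensorflow as tf

def make_light_curves(n=2000, length=512):
    t = np.linspace(0, 1, length)
    X, y = [], []
    for _ in range(n):
        pulsator = np.random.rand() < 0.5
        flux = np.random.normal(0, 0.1, length)          # white-noise baseline
        if pulsator:                                      # add a coherent oscillation
            freq = np.random.uniform(20, 80)
            amp = np.random.uniform(0.2, 1.0)
            flux += amp * np.sin(2 * np.pi * freq * t + np.random.uniform(0, 2 * np.pi))
        X.append(flux)
        y.append(int(pulsator))
    return np.array(X)[..., None], np.array(y)

X, y = make_light_curves()
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 7, activation="relu", input_shape=(512, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2)
```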


Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, i.e., parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle the training data bottleneck, in this work we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly pre-train a neural network, which is subsequently fine-tuned using human-annotated disfluency detection training data. The self-supervised learning method can capture task-specific knowledge for disfluency detection and achieves better performance than other supervised methods when fine-tuning on a small annotated dataset. However, because the pseudo training data are generated with simple heuristics and cannot fully cover all disfluency patterns, there is still a performance gap compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning during the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label, and can address the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model matches state-of-the-art performance with only about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
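The pseudo-data construction could look roughly like the sketch below, which randomly inserts or deletes words and emits both the per-token tags for the tagging task and a sentence-level perturbed/original label for the classification task; the noising probabilities and toy vocabulary are assumptions.

```python
# Sketch of pseudo-training-data generation for the two self-supervised tasks:
# tag 1 marks an added noisy word; the final flag says whether the sentence
# was perturbed at all.
import random

def make_pseudo_example(tokens, vocab, p_add=0.15, p_del=0.15):
    """Return (noised_tokens, tags, perturbed_flag)."""
    noised, tags, perturbed = [], [], False
    for tok in tokens:
        if random.random() < p_add:                 # insert a random noisy word
            noised.append(random.choice(vocab))
            tags.append(1)
            perturbed = True
        if random.random() < p_del:                 # drop the original word
            perturbed = True
            continue
        noised.append(tok)
        tags.append(0)
    return noised, tags, int(perturbed)

# Usage with a toy vocabulary:
# make_pseudo_example("i want a flight to boston".split(), vocab=["uh", "um", "well"])
```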


2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Jordan Ott ◽  
Mike Pritchard ◽  
Natalie Best ◽  
Erik Linstead ◽  
Milan Curcic ◽  
...  

Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search over more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the emergent behavior of these models to be assessed, i.e., how fit imperfections behave when coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset.
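The Keras-side portion of such a search might look like the hedged sketch below: build a family of small dense emulators with varying depth, width, and optimizer, and save each as an HDF5 file that can later be converted for use in Fortran through FKB (conversion tooling not shown; the hyperparameter ranges and input/output sizes are assumptions).

```python
# Hedged sketch of a Keras-side candidate search for emulator models.
import itertools
import tensorflow as tf

def build_emulator(n_layers, width, optimizer, n_in=64, n_out=64):
    layers = [tf.keras.layers.Dense(width, activation="relu",
                                    input_shape=(n_in,))]
    layers += [tf.keras.layers.Dense(width, activation="relu")
               for _ in range(n_layers - 1)]
    layers += [tf.keras.layers.Dense(n_out)]                # linear output for regression
    model = tf.keras.Sequential(layers)
    model.compile(optimizer=optimizer, loss="mse")
    return model

candidates = itertools.product([4, 8], [128, 256], ["adam", "sgd"])
for i, (n_layers, width, opt) in enumerate(candidates):
    model = build_emulator(n_layers, width, opt)
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10)
    model.save(f"candidate_{i:03d}.h5")                     # HDF5 file consumed downstream
```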


2020 ◽  
Vol 29 (01) ◽  
pp. 129-138 ◽  
Author(s):  
Anirudh Choudhary ◽  
Li Tong ◽  
Yuanda Zhu ◽  
May D. Wang

Introduction: There has been rapid development of deep learning (DL) models for medical imaging. However, DL requires large labeled datasets for training, and obtaining large-scale labeled data remains a challenge; multi-center datasets also suffer from heterogeneity due to patient diversity and varying imaging protocols. Domain adaptation (DA) has been developed to transfer knowledge from a labeled data domain to a related but unlabeled domain, in either image space or feature space. DA is a type of transfer learning (TL) that can improve the performance of models applied to multiple different datasets. Objective: In this survey, we review the state-of-the-art DL-based DA methods for medical imaging. We aim to summarize recent advances, highlighting the motivation, challenges, and opportunities, and to discuss promising directions for future work in DA for medical imaging. Methods: We surveyed peer-reviewed publications from leading biomedical journals and conferences between 2017 and 2020 that reported the use of DA in medical imaging applications, grouping them by methodology, image modality, and learning scenario. Results: We focused mainly on pathology and radiology as application areas. Among the various DA approaches, we discussed domain transformation (DT) and latent feature-space transformation (LFST). We highlighted the role of unsupervised DA in image segmentation and described opportunities for future development. Conclusion: DA has emerged as a promising solution to the lack of annotated training data. Using adversarial techniques, unsupervised DA has achieved good performance, especially for segmentation tasks. Opportunities include domain transferability, multi-modal DA, and applications that benefit from synthetic data.
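One widely used adversarial building block in unsupervised DA of the kind this survey covers is a gradient-reversal layer (as in DANN); the PyTorch sketch below is illustrative only, and the surrounding feature extractor, domain classifier, and lambda schedule are left out.

```python
# Gradient-reversal layer: identity in the forward pass, reversed (and scaled)
# gradients in the backward pass, so the feature extractor is pushed toward
# domain-invariant features while the domain classifier improves.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: domain_logits = domain_classifier(grad_reverse(features, lam=0.5))
```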


2021 ◽  
Vol 5 (1) ◽  
pp. 9
Author(s):  
Qiang Fang ◽  
Clemente Ibarra-Castanedo ◽  
Xavier Maldague

In quality evaluation (QE) for industrial production, infrared thermography (IRT) is one of the most important techniques for evaluating composite materials owing to its low cost, fast inspection of large surfaces, and safety. The application of deep neural networks is a prominent direction in IRT non-destructive testing (NDT). The Achilles' heel of training such networks is the need for a large database, and collecting large amounts of training data is expensive. In deep-learning-based NDT, the use of synthetic data for training in infrared thermography remains relatively unexplored. In this paper, synthetic data generated from standard finite element models are combined with experimental data to build training repositories for a Mask Region-based Convolutional Neural Network (Mask R-CNN), strengthening the network so that it learns the essential features of the objects of interest and performs defect segmentation automatically. The results indicate that inexpensive synthetic data can be merged with a modest amount of experimental data to train the neural networks and achieve compelling performance from a limited collection of annotated experimental data in a real-world thermography experiment.
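A hedged sketch of setting up a defect-segmentation Mask R-CNN in a comparable way with torchvision; the two-class setup, pretrained backbone, and hidden-layer size are assumptions about a similar pipeline, not the paper's exact model.

```python
# Instantiate a torchvision Mask R-CNN (recent torchvision) and swap its box
# and mask heads for a background/defect two-class task.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_defect_maskrcnn(num_classes=2):
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

model = build_defect_maskrcnn()
# The training loop would then iterate over a dataset mixing synthetic
# (FEM-generated) and experimental thermograms with instance masks.
```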


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3336 ◽  
Author(s):  
Ta-Wei Tang ◽  
Wei-Han Kuo ◽  
Jauh-Hsiang Lan ◽  
Chien-Fang Ding ◽  
Hakiem Hsu ◽  
...  

Recently, researchers have been studying methods to introduce deep learning into automated optical inspection (AOI) systems to reduce labor costs. However, the integration of deep learning in industry may encounter major challenges such as sample imbalance (defective products account for only a small proportion of samples). Therefore, in this study, an anomaly detection neural network, the dual auto-encoder generative adversarial network (DAGAN), was developed to solve the problem of sample imbalance. With skip connections and a dual auto-encoder architecture, the proposed method exhibited excellent image reconstruction ability and training stability. Three datasets, namely the public industrial inspection dataset MVTec AD together with mobile phone screen glass and wood defect detection datasets, were used to verify the inspection ability of DAGAN. In addition, training with a limited amount of data was performed to verify its detection ability. The results demonstrated that the areas under the curve (AUCs) of DAGAN were better than those of previous generative adversarial network-based anomaly detection models in 13 out of 17 categories in these datasets, especially in categories with high variability or noise. The maximum AUC improvement was 0.250 (toothbrush). Moreover, the proposed method exhibited better detection ability than the U-Net auto-encoder, which indicates the contribution of the discriminator in this application. Furthermore, the proposed method maintained high AUCs when using only a small amount of training data. DAGAN can significantly reduce the time and cost of collecting and labeling data when applied to industrial inspection.
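The evaluation idea (score each test image by how poorly it is reconstructed and measure the AUC against defect labels) can be sketched as below; the plain reconstruction-error score is a generic stand-in rather than DAGAN's exact combination of reconstruction and discriminator terms.

```python
# Generic anomaly scoring by reconstruction error, plus AUC evaluation.
import numpy as np
from sklearn.metrics import roc_auc_score

def anomaly_scores(model, images):
    """Mean squared reconstruction error per image; higher = more anomalous.
    Assumes a Keras-style model.predict and NHWC float image arrays."""
    recon = model.predict(images)
    return np.mean((images - recon) ** 2, axis=(1, 2, 3))

# labels: 1 for defective, 0 for normal test images
# auc = roc_auc_score(labels, anomaly_scores(trained_model, test_images))
```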


Author(s):  
Tianle Ma ◽  
Aidong Zhang

While deep learning has achieved great success in computer vision and many other fields, it currently does not work very well on patient genomic data with the "big p, small N" problem (i.e., a relatively small number of samples with high-dimensional features). In order to make deep learning work with a small amount of training data, we have to design new models that facilitate few-shot learning. Here we present the Affinity Network Model (AffinityNet), a data-efficient deep learning model that can learn from a limited number of training examples and generalize well. The backbone of the AffinityNet model consists of stacked k-nearest-neighbor (kNN) attention pooling layers. The kNN attention pooling layer is a generalization of the Graph Attention Model (GAM) and can be applied not only to graphs but also to any set of objects, regardless of whether a graph is given. As a new deep learning module, kNN attention pooling layers can be plugged into any neural network model just like convolutional layers. As a simple special case of the kNN attention pooling layer, the feature attention layer can directly select important features that are useful for classification tasks. Experiments on both synthetic data and cancer genomic data from TCGA projects show that our AffinityNet model has better generalization power than conventional neural network models when little training data is available.
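A hedged PyTorch sketch of a kNN attention pooling layer in the spirit described above: each sample's new representation is an attention-weighted average of its k nearest neighbors in feature space. The dot-product similarity and single linear projection are simplifying assumptions, not AffinityNet's exact parameterization.

```python
# Simplified kNN attention pooling layer: project features, find each sample's
# k most similar samples in the batch, and pool them with softmax attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KNNAttentionPooling(nn.Module):
    def __init__(self, dim, k=5):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(dim, dim)     # transform before pooling

    def forward(self, x):                   # x: (N, dim) batch of samples
        h = self.proj(x)
        sim = h @ h.t()                     # pairwise similarity (N, N)
        topk_val, topk_idx = sim.topk(self.k, dim=1)
        attn = F.softmax(topk_val, dim=1)   # attention over the k neighbours
        neighbours = h[topk_idx]            # (N, k, dim)
        return (attn.unsqueeze(-1) * neighbours).sum(dim=1)

# Usage: layer = KNNAttentionPooling(dim=64, k=5); pooled = layer(torch.randn(32, 64))
```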


2021 ◽  
Author(s):  
Alexander Zizka ◽  
Tobias Andermann ◽  
Daniele Silvestro

Aim: The global Red List (RL) from the International Union for Conservation of Nature is the most comprehensive global quantification of extinction risk, and is widely used in applied conservation as well as in biogeographic and ecological research. Yet, due to the time-consuming assessment process, the RL is biased taxonomically and geographically, which limits its application at large scales, in particular for understudied areas such as the tropics, or understudied taxa, such as most plants and invertebrates. Here we present IUCNN, an R package implementing deep learning models to predict species RL status from publicly available geographic occurrence records (and other traits if available). Innovation: We implement a user-friendly workflow to train and validate neural network models, and subsequently use them to predict species RL status. IUCNN contains functions to address specific issues related to the RL framework, including a regression-based approach to account for the ordinal nature of RL categories and class imbalance in the training data, a Bayesian approach for improved uncertainty quantification, and a target accuracy threshold approach that limits predictions to only those species whose RL status can be predicted with high confidence. Most analyses can be run with few lines of code, without prior knowledge of neural network models. We demonstrate the use of IUCNN on an empirical dataset of ~14,000 orchid species, for which IUCNN models can predict extinction risk within minutes while outperforming comparable methods. Main conclusions: IUCNN harnesses innovative methodology to estimate the RL status of large numbers of species. By providing estimates of the number and identity of threatened species in custom geographic or taxonomic datasets, IUCNN enables large-scale analyses of the extinction risk of species so far not well represented on the official RL.
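As a generic illustration of the ordinal-regression idea mentioned above (not the IUCNN R API), one can encode the RL categories on an ordered scale, regress on that scale, and round predictions back to a category; the feature size, network, and loss are placeholders.

```python
# Ordinal-style handling of Red List categories via regression on an ordered scale.
import torch
import torch.nn as nn

CATEGORIES = ["LC", "NT", "VU", "EN", "CR"]          # ordered by increasing risk

def encode(labels):
    return torch.tensor([CATEGORIES.index(l) for l in labels], dtype=torch.float32)

def decode(preds):
    idx = preds.round().clamp(0, len(CATEGORIES) - 1).long()
    return [CATEGORIES[i] for i in idx]

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()                               # regression respects the ordering

# Toy usage: x = torch.randn(4, 10); y = encode(["LC", "VU", "EN", "CR"])
# loss = loss_fn(model(x).squeeze(1), y)
```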


Geophysics ◽  
2021 ◽  
pp. 1-63
Author(s):  
Wenqian Fang ◽  
Lihua Fu ◽  
Shaoyong Liu ◽  
Hongwei Li

Deep learning (DL) technology has emerged as a new approach for seismic data interpolation. DL-based methods can automatically learn the mapping between regularly subsampled and complete data from a large training dataset, and the trained network can then be used to interpolate new data directly. Therefore, compared with traditional methods, DL-based methods reduce the manual workload and make the interpolation process efficient and automatic by avoiding the selection of hyperparameters. However, DL-based approaches have two limitations. First, the generalization performance of the neural network is inadequate when processing new data whose structure differs from the training data. Second, the trained networks are difficult to interpret. To overcome these limitations, we combine deep neural networks with classic prediction-error filter methods, proposing a novel de-aliased seismic data interpolation framework termed PEFNet (Prediction-Error Filters Network). PEFNet uses convolutional neural networks to learn the relationship between the subsampled data and the prediction-error filters, and the filters estimated by the trained network are then used to recover the missing traces. Learning filters enables the network to better extract the local dip of seismic data and gives it good generalization ability. In addition, PEFNet has the same interpretability as traditional prediction-error filter-based methods. The applicability and effectiveness of the proposed method are demonstrated here by synthetic and field data examples.
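A toy 1-D illustration of the prediction-error-filter idea that the network learns: with a filter a = [1, a1, ..., aL] whose output on complete data is approximately zero, a missing sample can be predicted as the negative weighted sum of its neighbors. The 1-D setting and fixed filter are simplifications of the 2-D seismic case.

```python
# Fill missing samples using a given prediction-error filter (PEF).
import numpy as np

def predict_missing(trace, missing_idx, pef):
    """Fill trace[missing_idx] with the prediction implied by the PEF
    (pef[0] is assumed to be 1; missing indices are assumed not to fall
    within the first len(pef)-1 samples)."""
    filled = trace.copy()
    L = len(pef) - 1
    for i in sorted(missing_idx):
        past = filled[i - L:i][::-1]                   # most recent sample first
        filled[i] = -np.dot(pef[1:], past)             # prediction from neighbours
    return filled

# Example: an AR(2)-like signal x[n] = 1.5 x[n-1] - 0.7 x[n-2] has
# PEF [1, -1.5, 0.7]; samples removed from such a signal are recovered exactly.
```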


Author(s):  
Yi-Quan Li ◽  
Hao-Sen Chang ◽  
Daw-Tung Lin

In the field of computer vision, large-scale image classification tasks are both important and highly challenging. With the ongoing advances in deep learning and optical character recognition (OCR) technologies, neural networks designed to perform large-scale classification play an essential role in facilitating OCR systems. In this study, we developed an automatic OCR system designed to identify up to 13,070 large-scale printed Chinese characters using deep learning neural networks and fine-tuning techniques. The proposed framework comprises four components: training dataset synthesis and background simulation, image preprocessing and data augmentation, model training, and transfer learning. The training data synthesis procedure consists of a character font generation step and a background simulation process. Three background models are proposed to simulate background noise and the anti-counterfeiting patterns on ID cards. To expand the diversity of the synthesized training dataset, rotation and zooming data augmentation are applied. A massive dataset comprising more than 19.6 million images was thus created to accommodate the variations in the input images and improve the learning capacity of the CNN model. Subsequently, we modified the GoogLeNet architecture by replacing the FC layer with a global average pooling layer to avoid overfitting caused by the massive amount of training data, thereby reducing the number of model parameters. Finally, we employed the transfer learning technique to further refine the CNN model using a small number of real data samples. Experimental results show that the overall recognition performance of the proposed approach is significantly better than that of prior methods, demonstrating the effectiveness of the proposed framework, which exhibited a recognition accuracy as high as 99.39% on the constructed real ID card dataset.
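A hedged Keras sketch of the head modification described above, using InceptionV3 as a stand-in for the modified GoogLeNet: the large fully connected layer is replaced with global average pooling followed by the 13,070-way softmax, and the backbone is frozen for the transfer-learning stage (input size and training settings are assumptions).

```python
# Inception-style backbone with a GAP head instead of a large FC layer.
import tensorflow as tf

NUM_CLASSES = 13070

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                          input_shape=(96, 96, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)   # replaces the FC layer
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, outputs)

# Transfer-learning stage: freeze the backbone and train only the new head on a
# small set of real ID-card samples, then optionally unfreeze for fine-tuning.
base.trainable = False
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```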

