Robust seed germination prediction using deep learning and RGB image data

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yuval Nehoshtan ◽  
Elad Carmon ◽  
Omer Yaniv ◽  
Sharon Ayal ◽  
Or Rotem

Abstract: Achieving seed germination quality standards poses a real challenge to seed companies, as they are compelled to abide by strict certification rules while having only partial seed separation solutions at their disposal. This discrepancy results in the wasteful disqualification of seed lots holding considerable amounts of good seeds, and further translates into financial losses and supply chain insecurity. Here, we present the first-ever generic germination prediction technology that is based on deep learning and RGB image data and facilitates seed classification by seed germinability and usability, two facets of germination fate. We show that the technology can render dozens of disqualified seed lots of seven vegetable crops, representing different genetics and production pipelines, industrially appropriate, and can adequately classify lots by utilizing available crop-level image data instead of lot-specific data. These achievements constitute a major milestone in the deployment of this technology for industrial seed sorting by germination fate for multiple crops.

2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches applied common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image data set and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting is of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differences in cell size and shape, the presence of incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large number of images of differing quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which allows reducing the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis by estimating image similarity metrics and processing times, which are compared to the results achieved by two well-known techniques: Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
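The abstract's fusion is learned with a U-Net, which is not reproduced here; as a point of reference, a minimal classical multi-focus fusion baseline (not the authors' method) can be sketched in a few lines: at each pixel, keep the focal slice with the strongest local Laplacian response. The function names and the toy focal stack below are illustrative assumptions.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel focus measure: absolute response of a discrete Laplacian."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def fuse_focal_stack(stack):
    """Fuse a focal stack (n_slices, H, W) by picking, at each pixel,
    the slice with the highest local sharpness."""
    sharpness = np.stack([laplacian_sharpness(s) for s in stack])
    best = np.argmax(sharpness, axis=0)            # (H, W) index map
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy focal stack: each slice is sharp in a different half of the image.
rng = np.random.default_rng(0)
sharp = rng.random((8, 8))
blurry = np.full((8, 8), sharp.mean())
slice_a = np.where(np.arange(8)[:, None] < 4, sharp, blurry)
slice_b = np.where(np.arange(8)[:, None] >= 4, sharp, blurry)
fused = fuse_focal_stack(np.stack([slice_a, slice_b]))
```

Such sharpness-based selection is the kind of hand-crafted rule the learned U-Net fusion is meant to replace.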


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is expensive in time and resources, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application allowing non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation editing functionalities minimize the constraints of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets.
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Dominik Jens Elias Waibel ◽  
Sayedali Shetab Boushehri ◽  
Carsten Marr

Abstract: Background: Deep learning contributes to uncovering molecular and cellular processes with highly performant algorithms. Convolutional neural networks have become the state-of-the-art tool for accurate and fast image data processing. However, published algorithms mostly solve only one specific problem, and they typically require a considerable coding effort and machine learning background for their application. Results: We have thus developed InstantDL, a deep learning pipeline for four common image processing tasks: semantic segmentation, instance segmentation, pixel-wise regression, and classification. InstantDL enables researchers with a basic computational background to apply debugged and benchmarked state-of-the-art deep learning algorithms to their own data with minimal effort. To make the pipeline robust, we have automated and standardized workflows and extensively tested it in different scenarios. Moreover, it allows assessing the uncertainty of predictions. We have benchmarked InstantDL on seven publicly available datasets, achieving competitive performance without any parameter tuning. For customization of the pipeline to specific tasks, all code is easily accessible and well documented. Conclusions: With InstantDL, we hope to empower biomedical researchers to conduct reproducible image processing with a convenient and easy-to-use pipeline.
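The abstract mentions that InstantDL can assess the uncertainty of predictions, but does not show how; a generic sketch of the common underlying idea (Monte-Carlo sampling of a stochastic forward pass, e.g. with dropout kept active at test time) might look as follows. The `noisy_predict` model below is a hypothetical stand-in for illustration, not part of InstantDL's API.

```python
import numpy as np

def mc_uncertainty(predict, image, n_samples=20, seed=0):
    """Estimate per-pixel prediction uncertainty by repeating a stochastic
    forward pass and taking the pixel-wise mean and standard deviation
    across the sampled predictions."""
    rng = np.random.default_rng(seed)
    samples = np.stack([predict(image, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Stand-in for a stochastic segmentation model: a fixed probability map
# perturbed by dropout-like noise (hypothetical, for illustration only).
def noisy_predict(image, rng):
    base = 1.0 / (1.0 + np.exp(-image))       # sigmoid of the input
    mask = rng.random(image.shape) > 0.1      # ~10% dropout
    return base * mask / 0.9                  # inverted-dropout rescaling

image = np.linspace(-3, 3, 64).reshape(8, 8)
mean_map, std_map = mc_uncertainty(noisy_predict, image)
```

Pixels where `std_map` is large are those the stochastic model disagrees with itself about, which is the signal such uncertainty maps expose to the user.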


2002 ◽  
Vol 82 (3) ◽  
pp. 273-275 ◽  
Author(s):  
S Ramana ◽  
A.K Biswas ◽  
S Kundu ◽  
J.K Saha ◽  
R.B.R Yadava

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Katsumi Hagita ◽  
Takeshi Aoyagi ◽  
Yuto Abe ◽  
Shinya Genda ◽  
Takashi Honda

Abstract: In this study, deep learning (DL)-based estimation of the Flory–Huggins χ parameter of A-B diblock copolymers from two-dimensional cross-sectional images of three-dimensional (3D) phase-separated structures was investigated. 3D structures with random networks of phase-separated domains were generated from real-space self-consistent field simulations in the 25–40 χN range for chain lengths (N) of 20 and 40. To confirm that the prepared data can be discriminated using DL, image classification was performed using the VGG-16 network. We comprehensively investigated the performance of the learned networks in the regression problem. The generalization ability was evaluated on independent images with unlearned χN values. We found that, except for large χN values, the standard deviation values were approximately 0.1 and 0.5 for A-component fractions of 0.2 and 0.35, respectively. Images for larger χN values were more difficult to distinguish. In addition, learning performance on the 4-class problem was comparable to that on the 8-class problem, except when the χN values were large. This information is useful for the analysis of real experimental image data, where the variation of samples is limited.
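The 4-class and 8-class problems presumably discretize the continuous 25–40 χN range into equal-width bins; the helper below sketches that recasting of a regression target as a classification label. It is an assumption about the setup for illustration, not code from the paper.

```python
import numpy as np

def chi_n_to_class(chi_n, n_classes, lo=25.0, hi=40.0):
    """Map a continuous chi*N value in [lo, hi] to one of n_classes
    equal-width bins, turning the regression target into a class label."""
    edges = np.linspace(lo, hi, n_classes + 1)
    # digitize returns 1..n_classes for in-range values; clip the endpoints
    return int(np.clip(np.digitize(chi_n, edges) - 1, 0, n_classes - 1))
```

For example, with 4 classes the bins are 3.75 χN wide, so a finer 8-class labelling halves the bin width; the abstract's finding that both settings perform comparably suggests the images carry enough signal to support the finer discretization except at large χN.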

