imageseg: an R package for deep learning-based image segmentation

2021
Author(s): Jürgen Niedballa, Jan Axtner, Timm Fabian Döbert, Andrew Tilker, An Nguyen, ...

Convolutional neural networks (CNNs) and deep learning are powerful and robust tools for ecological applications. CNNs perform very well on a variety of tasks, especially visual tasks involving image data. Image segmentation (the classification of all pixels in an image) is one such task and can, for example, be used to assess forest vertical and horizontal structure. While such methods have been suggested, widespread adoption in ecological research has been slow, likely due to technical difficulties in implementing CNNs and a lack of toolboxes for ecologists. Here, we present the R package imageseg, which implements a workflow for general-purpose image segmentation using CNNs and the U-Net architecture in R. The workflow covers data (pre)processing, model training, and prediction. We illustrate the utility of the package with two models for forest structural metrics: tree canopy density and understory vegetation density. We trained the models on large and diverse training datasets from a variety of forest types and biomes, consisting of 3288 canopy images (both canopy cover and hemispherical canopy closure photographs) and 1468 understory vegetation images. Overall classification accuracy of the models was high, with a Dice score of 0.91 for the canopy model and 0.89 for the understory vegetation model (assessed with 821 and 367 images, respectively), indicating robustness to variation in input images and good generalization across forest types and biomes. The package and its workflow allow simple yet powerful assessment of forest structural metrics using pre-trained models. Furthermore, the package facilitates custom image segmentation with multiple classes, based on color or grayscale images, e.g., in cell biology or for medical images. Our package is free, open source, and available from CRAN. It will enable easier and faster implementation of deep learning-based image segmentation within R for ecological applications and beyond.
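
The workflow the abstract describes (preprocess, train a U-Net, predict) is easy to see in miniature. The sketch below is illustrative only: imageseg itself is an R package, while this toy version uses Python and Keras, and the tiny two-level U-Net, the random data, and all names are stand-ins rather than the package's actual implementation.

```python
# Minimal sketch of a generic U-Net segmentation workflow
# (preprocess -> train -> predict); illustrative, not imageseg's code.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 3)):
    inputs = keras.Input(shape=input_shape)
    # Encoder: two downsampling stages
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p2)
    # Decoder with skip connections (the defining U-Net feature)
    u1 = layers.concatenate([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(u1)
    u2 = layers.concatenate([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(16, 3, activation="relu", padding="same")(u2)
    # One sigmoid channel -> binary mask (e.g. canopy vs. sky)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# X: images scaled to [0, 1]; Y: binary masks of the same spatial size
X = np.random.rand(8, 256, 256, 3).astype("float32")
Y = (np.random.rand(8, 256, 256, 1) > 0.5).astype("float32")
model.fit(X, Y, epochs=1, batch_size=4)
masks = model.predict(X) > 0.5  # predicted segmentation masks
```

The skip connections that concatenate encoder features into the decoder are what preserve the spatial detail needed for per-pixel classification.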

Author(s): Lucas von Chamier, Romain F. Laine, Ricardo Henriques

Artificial Intelligence based on Deep Learning is opening new horizons in biomedical research and promises to revolutionize the microscopy field. It is now slowly transitioning from the hands of experts in computer science to researchers in cell biology. Here, we introduce recent developments in Deep Learning applied to microscopy in a manner accessible to non-experts. We give an overview of its concepts, capabilities and limitations, presenting applications in image segmentation, classification and restoration. We discuss how Deep Learning shows outstanding potential to push the limits of microscopy, enhancing resolution, signal and information content in acquired data. Its pitfalls are carefully discussed, as well as the future directions expected in this field.


2020
Vol 36 (12)
pp. 3863-3870
Author(s): Mischa Schwendy, Ronald E Unger, Sapun H Parekh

Abstract
Motivation: The use of deep learning for quantitative image analysis is increasing exponentially. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must contain not only thousands of images to provide sufficient example objects (i.e., cells), but also an adequate degree of image heterogeneity.
Results: We present a new dataset, EVICAN (Expert visual cell annotation), comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications, readily usable as training data for computer vision applications. With 4600 images and ∼26,000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development.
Availability and implementation: The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). Using a Mask R-CNN implementation, we demonstrate automated segmentation of cells and nuclei from brightfield images with a mean average precision of 61.6% at a Jaccard index above 0.5.
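
The matching criterion quoted above, a Jaccard index (intersection over union) above 0.5, is straightforward to compute for a single predicted mask. A minimal sketch with illustrative masks:

```python
# Jaccard index (IoU) of two boolean segmentation masks; a predicted
# cell counts as correct when IoU exceeds 0.5. Data is illustrative.
import numpy as np

def jaccard(pred, truth):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # two empty masks match

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True
iou = jaccard(pred, truth)
print(iou, iou > 0.5)  # ~0.53 -> counted as a correct detection
```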


2020
Vol 6 (3)
pp. 398-401
Author(s): Roman Bruch, Rüdiger Rudolf, Ralf Mikut, Markus Reischl

Abstract
The analysis of microscopic images from cell cultures plays an important role in drug development. The segmentation of such images is a basic step for extracting the viable information on which further evaluation steps are built. Classical image processing pipelines often fail under heterogeneous conditions. In recent years, deep neural networks have gained attention due to their great potential in image segmentation. One main pitfall of deep learning is the amount of labeled data required to train such models. Especially for 3D images, the process of generating such data is tedious and time-consuming, and is thus seen as a possible reason why deep learning models have not become established for 3D data. Efforts have been made to minimize the time needed to create labeled training data or to reduce the number of labels needed for training. In this paper we present a new semi-supervised training method for the image segmentation of microscopic cell recordings, based on an iterative approach that utilizes unlabeled data during training. This method helps to further reduce the number of labels required to effectively train deep learning models for image segmentation. By labeling less than one percent of the training data, 90% of the performance of a full annotation with 342 nuclei can be achieved.
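
A generic version of such an iterative semi-supervised loop can be sketched as follows; train_fn and predict_fn are hypothetical hooks standing in for a real training step and per-pixel prediction, and the confidence rule is illustrative, not the authors' exact method.

```python
# Generic pseudo-labeling loop: train on the few labelled images,
# pseudo-label the unlabelled pool where the model is confident, retrain.
import numpy as np

def iterative_semi_supervised(train_fn, predict_fn, labelled, unlabelled,
                              rounds=3, thresh=0.9):
    """labelled: list of (image, mask); unlabelled: list of images."""
    data = list(labelled)
    model = None
    for _ in range(rounds):
        model = train_fn(model, data)              # supervised step
        pseudo = []
        for image in unlabelled:
            probs = predict_fn(model, image)       # per-pixel probabilities
            sure = np.mean((probs > thresh) | (probs < 1 - thresh))
            if sure > 0.95:                        # keep near-unambiguous maps
                pseudo.append((image, probs > 0.5))
        data = list(labelled) + pseudo             # true labels always kept
    return model
```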


2021
Author(s): Dejin Xun, Deheng Chen, Yitian Zhou, Volker M. Lauschke, Rui Wang, ...

Deep learning-based cell segmentation is increasingly used in cell biology and molecular pathology, owing to the massive accumulation of diverse large-scale datasets and its excellent performance in cell representation. However, the development of specialized algorithms has long been hampered by a paucity of annotated training data, whereas the performance of generalist algorithms is limited without experiment-specific calibration. Here, we present a deep learning-based tool called Scellseg, consisting of a novel pre-trained network architecture and a contrastive fine-tuning strategy. In comparison to four commonly used algorithms, Scellseg achieved higher average precision on three diverse datasets, with no need for dataset-specific configuration. Interestingly, a data-scale experiment showed that eight images are sufficient for model tuning to achieve satisfactory performance. We also developed a graphical user interface that integrates annotation, fine-tuning and inference, allowing biologists to easily specialize their own segmentation models and analyze data at the single-cell level.
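
The contrastive ingredient of such a fine-tuning strategy can be illustrated with a generic margin-based pair loss on toy embeddings; this is a textbook contrastive objective, not Scellseg's actual loss.

```python
# Margin-based contrastive pair loss: same-class embeddings are pulled
# together, different-class embeddings pushed past a margin. Illustrative.
import torch
import torch.nn.functional as F

def contrastive_pair_loss(z1, z2, same, margin=1.0):
    """z1, z2: (B, D) embeddings; same: (B,) 1 if same class else 0."""
    d = F.pairwise_distance(z1, z2)
    pos = same * d.pow(2)                          # pull matching pairs in
    neg = (1 - same) * F.relu(margin - d).pow(2)   # push others past margin
    return (pos + neg).mean()

z1 = torch.randn(8, 32, requires_grad=True)  # toy embeddings
z2 = torch.randn(8, 32, requires_grad=True)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_pair_loss(z1, z2, same)
loss.backward()  # would drive a fine-tuning step on the embedding layers
```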


F1000Research
2021
Vol 10
pp. 142
Author(s): Wei Ouyang, Trang Le, Hao Xu, Emma Lundberg

Deep learning-based methods play an increasingly important role in bioimage analysis. User-friendly tools are crucial for increasing the adoption of deep learning models, and efforts have been made to support them in existing image analysis platforms. Due to hardware and software complexities, many of these platforms have struggled to support the re-training and fine-tuning of models, which is essential to avoid overfitting and hallucination issues when working with limited training data. Meanwhile, interactive machine learning provides an efficient way to train models on limited training data: new annotations, obtained by correcting the model's predictions, are gradually added while the model trains in the background. In this work, we developed an ImJoy plugin for interactive training and an annotation tool for image segmentation. With a small example dataset obtained from the Human Protein Atlas, we demonstrate that CellPose-based segmentation models can be trained interactively from scratch within 10-40 minutes, which is at least 6x faster than the conventional annotation workflow and less labor-intensive. We envision that the developed tool can make deep learning segmentation methods incrementally adoptable for new users and be used in a wide range of applications for biomedical image segmentation.
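
The interactive pattern described here, corrections flowing into the training set while the model trains in the background, can be sketched with a queue and a training thread; train_one_epoch and the UI hook are hypothetical placeholders, not the plugin's actual API.

```python
# Sketch of interactive training: a background thread keeps training
# while user corrections arrive via a queue and grow the dataset.
import threading
import queue

corrections = queue.Queue()          # (image, corrected_mask) from the UI
dataset, stop = [], threading.Event()

def background_trainer(model):
    while not stop.is_set():
        while not corrections.empty():
            dataset.append(corrections.get())  # fold in new annotations
        if dataset:
            train_one_epoch(model, dataset)    # improves as data grows

# threading.Thread(target=background_trainer, args=(model,),
#                  daemon=True).start()
# UI side: predict, let the user fix the mask, enqueue the fixed pair:
# corrections.put((image, get_user_correction(model.predict(image))))
```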


2019
Vol 9 (22)
pp. 4749
Author(s): Lingyun Jiang, Kai Qiao, Linyuan Wang, Chi Zhang, Jian Chen, ...

Decoding human brain activity, especially reconstructing human visual stimuli via functional magnetic resonance imaging (fMRI), has gained increasing attention in recent years. However, the high dimensionality and small quantity of fMRI data impose restrictions on satisfactory reconstruction, especially for deep learning-based reconstruction methods, which require huge numbers of labelled samples. Unlike deep learning methods, humans can recognize a new image because the human visual system naturally extracts features from any object and compares them. Inspired by this visual mechanism, we introduced the mechanism of comparison into a deep learning method to achieve better visual reconstruction, making full use of each sample and of the relationships between sample pairs by learning to compare. On this basis, we propose a Siamese reconstruction network (SRN) method. Using the SRN, we obtained improved results on two fMRI recording datasets, with 72.5% accuracy on the digit dataset and 44.6% accuracy on the character dataset. Essentially, this approach increases the training data from about n samples to 2n sample pairs, taking full advantage of the limited number of training samples. The SRN learns to bring sample pairs of the same class together in feature space and to disperse sample pairs of different classes.
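
The pairing step is easy to illustrate. The toy sketch below enumerates all same/different-class pairs from six labelled samples; note that exhaustive pairing yields n(n-1)/2 pairs, whereas the abstract reports using about 2n pairs, i.e., a subset, so treat the enumeration and the data as illustrative.

```python
# Building sample pairs with same/different-class targets, multiplying
# the effective training data for a Siamese network. Toy data only.
import itertools
import numpy as np

X = np.random.rand(6, 28, 28)        # 6 feature maps (stand-ins for fMRI)
y = np.array([0, 0, 1, 1, 2, 2])     # class labels

pairs, targets = [], []
for i, j in itertools.combinations(range(len(X)), 2):
    pairs.append((X[i], X[j]))
    targets.append(int(y[i] == y[j]))  # 1: same class (pull together),
                                       # 0: different (push apart)
print(len(X), "samples ->", len(pairs), "training pairs")  # 6 -> 15
```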


Author(s): Saber Mirzaee Bafti, Chee Siang Ang, Md. Moinul Hossain, Gianluca Marcelli, Marc Alemany-Fornes, ...

2021
Vol 11 (1)
Author(s): Xin Mao, Jun Kang Chow, Pin Siang Tan, Kuan-fu Liu, Jimmy Wu, ...

Abstract
Automatic bird detection in ornithological analyses is limited by the accuracy of existing models, due to a lack of training data and the difficulty of extracting the fine-grained features required to distinguish bird species. Here we apply a domain randomization strategy to enhance the accuracy of deep learning models for bird detection. Trained on virtual birds with sufficient variation in different environments, the model learns to focus on the fine-grained features of birds and achieves higher accuracy. Based on 100 terabytes of two-month continuous monitoring data of egrets, our results reproduce findings made with conventional manual observation, e.g., the vertical stratification of egrets according to body size, and also open up opportunities for long-term bird surveys requiring intensive monitoring that would be impractical with conventional methods, e.g., weather influences on egrets and the relationship between the migration schedules of great egrets and little egrets.
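
The domain randomization step, compositing rendered birds onto varied scenes with randomized pose, scale and lighting, can be sketched with PIL; file names, parameter ranges and the assumption that the background is larger than the sprite are all illustrative.

```python
# Domain randomization sketch: paste a "virtual bird" sprite onto a
# background with random rotation, scale and brightness; the bounding
# box comes for free as ground truth. Illustrative parameters.
import random
from PIL import Image, ImageEnhance

def randomized_sample(bird_sprite, background):
    bg = background.copy()
    bird = bird_sprite.rotate(random.uniform(-30, 30), expand=True)
    s = random.uniform(0.3, 1.0)                     # random scale
    bird = bird.resize((int(bird.width * s), int(bird.height * s)))
    bird = ImageEnhance.Brightness(bird).enhance(random.uniform(0.6, 1.4))
    # assumes the background is larger than the scaled sprite
    x = random.randint(0, bg.width - bird.width)
    y = random.randint(0, bg.height - bird.height)
    bg.paste(bird, (x, y), bird)                     # alpha-composited paste
    box = (x, y, x + bird.width, y + bird.height)    # ground-truth bbox
    return bg, box

# sprite = Image.open("egret.png").convert("RGBA")   # hypothetical files
# scene  = Image.open("wetland.jpg").convert("RGB")
# image, bbox = randomized_sample(sprite, scene)
```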


Sensors
2021
Vol 21 (8)
pp. 2611
Author(s): Andrew Shepley, Greg Falzon, Christopher Lawson, Paul Meek, Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used for this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to do so are a key barrier to the accessibility of this technology for ecologists. There is thus a strong need to democratize access to deep learning technologies through easy-to-use software that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by enabling ecologists to train customised models using publicly available images and/or their own images, without specific technical expertise. Auto-annotation and annotation-editing functionalities minimize the burden of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating species distribution reports and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the ability to edit annotations to ensure quality datasets. Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and updated frequently without computer science expertise or data sharing, protecting intellectual property and privacy.
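
The transfer-learning step such tools automate is well captured by the standard torchvision recipe: load a detector pre-trained on COCO and replace its classification head with one sized for the user's species classes. The class count below is illustrative, and this is a generic sketch, not U-Infuse's internals.

```python
# Transfer learning for object detection: reuse a COCO-pretrained
# backbone and swap in a fresh head for the target species classes.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # e.g. background + 2 target species (illustrative)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
# Fine-tuning then only needs the user's annotated camera-trap images;
# the pre-trained backbone supplies general visual features.
```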

