AnnotatorJ: an ImageJ plugin to ease hand annotation of cellular compartments

2020 ◽  
Vol 31 (20) ◽  
pp. 2179-2186 ◽  
Author(s):  
Réka Hollandi ◽  
Ákos Diósdi ◽  
Gábor Hollandi ◽  
Nikita Moshkov ◽  
Péter Horváth

To find objects in images automatically, we must first teach the computer to recognize them by showing it examples. The most robust such methods use deep learning, which needs a large annotated dataset to be efficient. We propose AnnotatorJ, an ImageJ plugin: a fast and easy-to-use annotation tool that aids manual hand-labeling with deep learning.


Abstract
AnnotatorJ combines single-cell identification with deep learning and manual annotation. The quality of cellular analysis depends on accurate and reliable detection and segmentation of cells, so that subsequent analysis steps, e.g. expression measurements, can be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing far better than conventional methods. However, such deep learning applications must be trained on a large amount of annotated data to match the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or, generally, objects of interest) on (not only) microscopy images in 2D that helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated with our tool. It thus enables users to create datasets that can increase the accuracy of state-of-the-art solutions, deep learning or otherwise, when used as training data.
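The U-Net-based pre-segmentation described above could, in rough outline, be post-processed by thresholding the network's per-pixel probability map and proposing the resulting boundary pixels as an editable contour. A minimal, purely illustrative sketch in plain Python; the toy probability map and function names are assumptions, not AnnotatorJ's actual API:

```python
# Sketch of turning a (hypothetical) U-Net probability map into a
# contour suggestion: threshold to a binary mask, then report the
# foreground pixels that touch the background as the proposed contour.

def propose_contour(prob_map, threshold=0.5):
    """Return (mask, contour): contour is the set of foreground pixels
    with at least one 4-connected background (or out-of-image) neighbour."""
    h, w = len(prob_map), len(prob_map[0])
    mask = [[prob_map[y][x] >= threshold for x in range(w)] for y in range(h)]
    contour = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.add((y, x))
                    break
    return mask, contour

# Toy 5x5 map with a bright 3x3 "cell" in the centre.
prob = [[0.9 if 1 <= y <= 3 and 1 <= x <= 3 else 0.1 for x in range(5)]
        for y in range(5)]
mask, contour = propose_contour(prob)
# The 3x3 blob yields 8 boundary pixels (every blob pixel except its centre).
```

In an annotation tool, the user would then drag such a suggested contour into place rather than drawing it from scratch.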


2019 ◽  
Vol 32 (4) ◽  
pp. 571-581 ◽  
Author(s):  
Kenneth A. Philbrick ◽  
Alexander D. Weston ◽  
Zeynettin Akkus ◽  
Timothy L. Kline ◽  
Panagiotis Korfiatis ◽  
...  

2018 ◽  
Vol 34 (22) ◽  
pp. 3825-3834 ◽  
Author(s):  
Cheng Yang ◽  
Longshu Yang ◽  
Man Zhou ◽  
Haoling Xie ◽  
Chengjiu Zhang ◽  
...  

2020 ◽  
Author(s):  
Tim Henning ◽  
Benjamin Bergner ◽  
Christoph Lippert

Instance segmentation is a common task in quantitative cell analysis. While many approaches tackle it with machine learning, the training process typically requires a large amount of manually annotated data. We present HistoFlow, a software for annotation-efficient training of deep learning models for cell segmentation and analysis with an interactive user interface. It provides an assisted annotation tool to quickly draw and correct cell boundaries and to use biomarkers as weak annotations. It also enables the user to create artificial training data to lower the labeling effort. We employ a universal U-Net neural network architecture that allows accurate instance segmentation and the classification of phenotypes in a single pass of the network. Transfer learning is available through the user interface to adapt trained models to new tissue types. We demonstrate HistoFlow on fluorescence breast cancer images. Models trained using only artificial data perform comparably to those trained with time-consuming manual annotations; they outperform traditional cell segmentation algorithms and match state-of-the-art machine learning approaches. A user test shows that cells can be annotated six times faster than without the assistance of our annotation tool. Extending a segmentation model for classification of epithelial cells can be done using only 50 to 1500 annotations. Our results show that, contrary to previous assumptions, it is possible to interactively train a deep learning model in a matter of minutes with few manual annotations.
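The "artificial training data" idea above can be illustrated by pasting a few annotated cell patches onto empty backgrounds at random positions, so a model sees many layouts generated from few labelled cells. This is a toy sketch under that assumption; the patch, canvas sizes, and function names are illustrative, not HistoFlow's implementation:

```python
# Sketch: synthesize (image, mask) training pairs by pasting a small
# labelled "cell" patch onto a blank canvas at random positions.
import random

def synthesize(canvas_size, cell, n_cells, seed=0):
    """Paste a 2D 'cell' patch n_cells times onto a zero canvas.
    Returns (image, mask); mask marks pixels covered by any paste."""
    rng = random.Random(seed)
    H, W = canvas_size
    ch, cw = len(cell), len(cell[0])
    image = [[0.0] * W for _ in range(H)]
    mask = [[0] * W for _ in range(H)]
    for _ in range(n_cells):
        y0 = rng.randrange(H - ch + 1)
        x0 = rng.randrange(W - cw + 1)
        for dy in range(ch):
            for dx in range(cw):
                image[y0 + dy][x0 + dx] = max(image[y0 + dy][x0 + dx],
                                              cell[dy][dx])
                if cell[dy][dx] > 0:
                    mask[y0 + dy][x0 + dx] = 1
    return image, mask

# A 3x3 cross-shaped "cell" with a bright centre.
cell = [[0.0, 0.8, 0.0], [0.8, 1.0, 0.8], [0.0, 0.8, 0.0]]
image, mask = synthesize((16, 16), cell, n_cells=5)
```

A real pipeline would add rotation, intensity jitter, and realistic backgrounds, but the principle is the same: the mask comes for free because the paste positions are known.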


Author(s):  
Fei Teng ◽  
Minbo Ma ◽  
Zheng Ma ◽  
Lufei Huang ◽  
Ming Xiao ◽  
...  

2019 ◽  
Author(s):  
Eliot T McKinley ◽  
Joseph T Roland ◽  
Jeffrey L Franklin ◽  
Mary Catherine Macedonia ◽  
Paige N Vega ◽  
...  

Abstract
Increasingly, highly multiplexed in situ tissue imaging methods are used to profile protein expression at the single-cell level. However, a critical limitation is the lack of robust cell segmentation tools applicable to sections of tissues with a complex architecture and multiple cell types. Using human colorectal adenomas, we present a pipeline for cell segmentation and quantification that utilizes machine learning-based pixel classification to define cellular compartments, a novel method for extending incomplete cell membranes, quantification of antibody staining, and a deep learning-based cell shape descriptor. We envision that this method can be broadly applied to different imaging platforms and tissue types.


2021 ◽  
Author(s):  
Mustafa I. Jaber ◽  
Bing Song ◽  
Liudmila Beziaeva ◽  
Christopher W. Szeto ◽  
Patricia Spilman ◽  
...  

ABSTRACT
Well-annotated exemplars are an important prerequisite for supervised deep learning schemes. Unfortunately, generating these annotations is a cumbersome and laborious process due to the large amount of time and effort needed. Here we present a deep-learning-based iterative digital pathology annotation tool that is both easy for pathologists to use and easy to integrate into machine vision systems. Our pathology image annotation tool greatly reduces annotation time from hours to a few minutes, while maintaining high fidelity with human-expert manual annotations. We demonstrate that our active learning tool can be used for a variety of pathology annotation tasks, including masking tumor, stroma, and lymphocyte-rich regions, among others. The annotation automation system was validated on 90 unseen digital pathology images with tumor content from the CAMELYON16 database, and pathologists’ gold standard masks were reproduced successfully using our tool: an average of 2.7 positive selections (mouse clicks) and 8.0 negative selections (mouse clicks) were sufficient to generate tumor masks similar to the pathologists’ gold standard in CAMELYON16 test WSIs. Furthermore, the developed image annotation tool has been used to build gold standard masks for hundreds of TCGA digital pathology images. This set was used to train a convolutional neural network for identification of tumor epithelium. The developed pan-cancer deep neural network was then tested on TCGA and internal data with comparable performance. The validated pathology image annotation tool described herein has the potential to be of great value in facilitating accurate, rapid pathological analysis of tumor biopsies.
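The click-driven loop described above, where a pathologist's positive and negative selections steer the mask, can be sketched by treating each click as a labelled seed and assigning every image tile the label of its nearest seed. This nearest-neighbour rule is a deliberately simple stand-in for the paper's deep model; all names and coordinates are illustrative:

```python
# Sketch: label image tiles from a handful of positive/negative clicks
# using nearest-neighbour assignment (a toy proxy for the real model).

def assign_labels(tiles, clicks):
    """tiles: list of (x, y) tile centres.
    clicks: list of (x, y, label) seeds from the annotator.
    Returns {tile: label} by nearest seed (squared Euclidean distance)."""
    out = {}
    for tx, ty in tiles:
        nearest = min(clicks, key=lambda c: (c[0] - tx) ** 2 + (c[1] - ty) ** 2)
        out[(tx, ty)] = nearest[2]
    return out

# A 4x4 grid of tiles; one positive and one negative selection.
tiles = [(x, y) for x in range(4) for y in range(4)]
clicks = [(0, 0, "tumor"), (3, 3, "stroma")]
labels = assign_labels(tiles, clicks)
```

Each additional click refines the partition locally, which mirrors why only a few selections per image can suffice once the underlying model is strong.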


Agriculture ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 131
Author(s):  
André Silva Aguiar ◽  
Nuno Namora Monteiro ◽  
Filipe Neves dos Santos ◽  
Eduardo J. Solteiro Pires ◽  
Daniel Silva ◽  
...  

The development of robotic solutions in unstructured environments brings several challenges, mainly in developing safe and reliable navigation solutions. Agricultural environments are particularly unstructured and, therefore, challenging for the implementation of robotics. An example is mountain vineyards, built on steep-slope hills, which are characterized by satellite signal blockage, terrain irregularities, harsh ground inclinations, and more. All of these factors demand precise and reliable navigation algorithms so that robots can operate safely. This work proposes the detection of semantic natural landmarks to be used in Simultaneous Localization and Mapping algorithms. Thus, deep learning models were trained and deployed to detect vine trunks. As a significant contribution, we make available a novel vine trunk dataset, called VineSet, comprising more than 9000 images and the respective annotations for each trunk. VineSet was used to train state-of-the-art Single Shot Multibox Detector models. Additionally, we deployed these models in an Edge-AI fashion and achieved high-frame-rate execution. Finally, an assisted annotation tool was proposed to ease dataset building and to improve models incrementally. The experiments show that our trained models can detect trunks with an Average Precision of up to 84.16%, and that our assisted annotation tool facilitates the annotation process, even in other areas of agriculture, such as orchards and forests. Additional experiments evaluated the impact of the amount of training data and compared Transfer Learning against training from scratch; in these cases, some theoretical assumptions were verified.
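The Average Precision figure reported above hinges on box overlap: a predicted trunk box is counted as a true positive only if its intersection-over-union (IoU) with a ground-truth box clears a threshold (commonly 0.5). A minimal IoU sketch for axis-aligned `(x1, y1, x2, y2)` boxes; the coordinates are illustrative, not from VineSet:

```python
# Sketch: intersection-over-union for two axis-aligned boxes, the core
# matching criterion behind Average Precision in object detection.

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping by half: intersection 50, union 150.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))  # 50/150, i.e. 1/3
```

AP then summarizes precision over recall as detections (ranked by confidence) are matched to ground truth under this IoU test.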


2021 ◽  
Author(s):  
Adrian Krenzer ◽  
Kevin Makowski ◽  
Amar Hekalo ◽  
Daniel Fitting ◽  
Joel Troya ◽  
...  

Abstract Background: Machine learning, especially deep learning, is becoming more and more relevant in research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate, and domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework in which, instead of annotating every frame in a video sequence, experts perform only key annotations at the beginning and the end of sequences with pathologies, e.g. visible polyps. Subsequently, non-expert annotators supported by machine learning add the missing annotations for the frames in between. Results: Using this framework, we were able to reduce the workload of domain experts on average by a factor of 20, primarily due to the structure of the framework, which is designed to minimize the expert's workload. Pairing the framework with a state-of-the-art semi-automated pre-annotation model speeds up annotation further. In a study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared to a well-known state-of-the-art annotation tool. Conclusion: In summary, we introduce a framework for fast expert annotation for gastroenterologists, which reduces the workload of the domain expert considerably while maintaining very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open-source.
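The key-annotation workflow above, expert boxes at the start and end of a sequence with in-between frames filled automatically, can be sketched as linear interpolation of a bounding box across the frame range. The `(x, y, w, h)` box format and frame numbers are illustrative assumptions, not the paper's actual data model:

```python
# Sketch: fill in-between frames by linearly interpolating a bounding
# box between an expert's key annotations at the sequence boundaries.

def interpolate_boxes(start_frame, start_box, end_frame, end_box):
    """Linearly interpolate (x, y, w, h) boxes for every frame from
    start_frame to end_frame inclusive. Returns {frame: box}."""
    n = end_frame - start_frame
    boxes = {}
    for f in range(start_frame, end_frame + 1):
        t = (f - start_frame) / n
        boxes[f] = tuple(round(a + t * (b - a), 2)
                         for a, b in zip(start_box, end_box))
    return boxes

# Expert annotates a polyp at frames 0 and 4; frames 1-3 are proposed
# automatically and only corrected, not drawn, by non-expert annotators.
boxes = interpolate_boxes(0, (10, 10, 20, 20), 4, (30, 30, 24, 20))
```

Real polyp motion is not linear, which is exactly where the semi-automated pre-annotation model improves on this naive fill while keeping the expert's workload at two annotations per sequence.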


Author(s):  
Stellan Ohlsson
