HistoFlow: Label-Efficient and Interactive Deep Learning Cell Analysis

2020 ◽  
Author(s):  
Tim Henning ◽  
Benjamin Bergner ◽  
Christoph Lippert

Instance segmentation is a common task in quantitative cell analysis. While many machine learning approaches exist for this task, the training process typically requires a large amount of manually annotated data. We present HistoFlow, software for annotation-efficient training of deep learning models for cell segmentation and analysis with an interactive user interface. It provides an assisted annotation tool to quickly draw and correct cell boundaries and to use biomarkers as weak annotations. It also enables the user to create artificial training data to lower the labeling effort. We employ a universal U-Net neural network architecture that allows accurate instance segmentation and the classification of phenotypes in only a single pass of the network. Transfer learning is available through the user interface to adapt trained models to new tissue types. We demonstrate HistoFlow on fluorescence breast cancer images. Models trained using only artificial data perform comparably to those trained with time-consuming manual annotations. They outperform traditional cell segmentation algorithms and match state-of-the-art machine learning approaches. A user test shows that cells can be annotated six times faster than without the assistance of our annotation tool. Extending a segmentation model to the classification of epithelial cells can be done using only 50 to 1500 annotations. Our results show that, contrary to previous assumptions, it is possible to interactively train a deep learning model in a matter of minutes with few manual annotations.
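The "artificial training data" idea above can be illustrated with a minimal sketch: random elliptical blobs stand in for cells in a fluorescence-like image, paired with an instance mask. The function name, blob model, and intensity ranges are invented for illustration, not HistoFlow's actual generator.

```python
import numpy as np

def synthetic_cell_image(size=128, n_cells=10, rng=None):
    """Generate a toy fluorescence-like image plus an instance mask.

    Hypothetical stand-in for artificial training data: each cell is a
    random ellipse with a random intensity; 0 in the mask = background.
    """
    rng = np.random.default_rng(rng)
    image = np.zeros((size, size), dtype=np.float32)
    mask = np.zeros((size, size), dtype=np.int32)
    yy, xx = np.mgrid[0:size, 0:size]
    for label in range(1, n_cells + 1):
        cy, cx = rng.integers(10, size - 10, 2)      # random cell centre
        ry, rx = rng.integers(4, 10, 2)              # random radii
        blob = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        mask[blob] = label                           # later cells overwrite overlaps
        image[blob] += rng.uniform(0.5, 1.0)         # fake fluorescence intensity
    image += rng.normal(0, 0.05, image.shape).astype(np.float32)  # sensor noise
    return image, mask

img, msk = synthetic_cell_image(rng=0)
```

Pairs like `(img, msk)` can then be fed to a segmentation network in place of manually annotated data.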

Author(s):  
Tobias M. Rasse ◽  
Réka Hollandi ◽  
Péter Horváth

Abstract Various pre-trained deep learning models for the segmentation of bioimages have been made available as 'developer-to-end-user' solutions. They usually require neither knowledge of machine learning nor coding skills, are optimized for ease of use, and are deployable on laptops. However, testing these tools individually is tedious and success is uncertain. Here, we present the 'Op'en 'Se'gmentation 'F'ramework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge of Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. All analyst tasks are optimized for deployment on Linux workstations or GPU clusters, while all user tasks may be performed on any laptop in ImageJ. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and post-processing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of pre- and post-processing parameters such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts' selection of the most promising CNN architecture, in which the biomedical user might then invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections.
Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently. The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods have been integrated within OpSeF: the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose. The addition of new networks requires little coding skill, and the addition of new models requires none. Thus, OpSeF might soon become an interactive model repository in which pre-trained models can be shared, evaluated, and reused with ease.
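OpSeF's "standard inputs and outputs" design can be sketched as a chain of pluggable stages, each mapping an array to an array. The stage implementations below (`normalize`, `threshold_segment`, `label_instances`) are toy stand-ins for illustration only; OpSeF itself wires CNN segmenters such as U-Net, StarDist, or Cellpose into the middle stage.

```python
import numpy as np
from scipy import ndimage

def run_pipeline(image, preprocess, segment, postprocess):
    """Chain the stages; because each stage maps array -> array,
    any stage can be swapped without touching the others."""
    return postprocess(segment(preprocess(image)))

def normalize(img):                       # toy preprocessing
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def threshold_segment(img):               # toy stand-in for a CNN segmenter
    return img > 0.5

def label_instances(binary):              # post-processing: instance labels
    labels, _ = ndimage.label(binary)
    return labels

img = np.zeros((32, 32))
img[5:10, 5:10] = 1.0                     # two fake "cells"
img[20:25, 20:25] = 1.0
labels = run_pipeline(img, normalize, threshold_segment, label_instances)
```

Swapping `threshold_segment` for a different model leaves the rest of the workflow, and any benchmarking built on it, unchanged.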


2018 ◽  
Vol 3 ◽  
pp. 19
Author(s):  
Hiroaki Mano ◽  
Gopal Kotecha ◽  
Kenji Leibnitz ◽  
Takashi Matsubara ◽  
Aya Nakae ◽  
...  

Background. Chronic pain is a common, often disabling condition thought to involve a combination of peripheral and central neurobiological factors. However, the extent and nature of the associated changes in the brain are poorly understood. Methods. We investigated brain network architecture using resting-state fMRI data from chronic back pain patients in the UK and Japan (41 patients, 56 controls), as well as open data from the USA. We applied machine learning and deep learning (conditional variational autoencoder architecture) methods to explore classification of patients/controls based on network connectivity. We then studied the network topology of the data and developed a multislice modularity method to look for consensus evidence of modular reorganisation in chronic back pain. Results. Machine learning and deep learning allowed reliable classification of patients in a third, independent open data set with an accuracy of 63%, and 68% in cross-validation of all data. We identified robust evidence of network hub disruption in chronic pain, most consistently with respect to clustering coefficient and betweenness centrality. We found a consensus pattern of modular reorganisation involving extensive, bilateral regions of sensorimotor cortex, characterised primarily by negative reorganisation - a tendency for sensorimotor cortex nodes to be less inclined to form pairwise modular links with other brain nodes. In contrast, the intraparietal sulcus displayed a propensity towards positive modular reorganisation, suggesting that it might have a role in forming modules associated with the chronic pain state. Conclusion. The results provide evidence of consistent and characteristic brain network changes in chronic pain, characterised primarily by extensive reorganisation of the network architecture of the sensorimotor cortex.


2021 ◽  
Author(s):  
Dejin Xun ◽  
Deheng Chen ◽  
Yitian Zhou ◽  
Volker M. Lauschke ◽  
Rui Wang ◽  
...  

Deep learning-based cell segmentation is increasingly utilized in cell biology and molecular pathology, owing to the massive accumulation of diverse large-scale datasets and its excellent performance in cell representation. However, the development of specialized algorithms has long been hampered by a paucity of annotated training data, whereas the performance of generalist algorithms is limited without experiment-specific calibration. Here, we present a deep learning-based tool called Scellseg, consisting of a novel pre-trained network architecture and a contrastive fine-tuning strategy. In comparison to four commonly used algorithms, Scellseg achieved superior average precision on three diverse datasets with no need for dataset-specific configuration. Interestingly, based on a shot-data-scale experiment, we found that eight images are sufficient for model tuning to achieve satisfactory performance. We also developed a graphical user interface that integrates functions for annotation, fine-tuning, and inference, allowing biologists to easily specialize their own segmentation models and analyze data at the single-cell level.
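Average precision, the metric named above, is commonly computed in cell segmentation as TP / (TP + FP + FN) after one-to-one matching of predicted and ground-truth instances at an IoU threshold. The greedy-matching sketch below is a generic illustration of that metric, not Scellseg's own evaluation code.

```python
import numpy as np

def average_precision(iou_matrix, thr=0.5):
    """AP = TP / (TP + FP + FN) at one IoU threshold.

    iou_matrix[i, j] is the IoU between true instance i and predicted
    instance j; matches are assigned greedily, best IoU first.
    """
    iou = iou_matrix.astype(float).copy()
    tp = 0
    while iou.size and iou.max() >= thr:
        i, j = np.unravel_index(np.argmax(iou), iou.shape)
        tp += 1
        iou[i, :] = 0.0   # each true instance matches at most once
        iou[:, j] = 0.0   # each predicted instance matches at most once
    n_true, n_pred = iou_matrix.shape
    return tp / (tp + (n_pred - tp) + (n_true - tp))

ap_perfect = average_precision(np.eye(3))                # 3 exact matches
ap_partial = average_precision(np.array([[0.6, 0.0],
                                         [0.0, 0.3]]))   # 1 TP, 1 FP, 1 FN
```

With perfect matches AP is 1.0; with one of two instances matched it drops to 1/3, since the missed instance is counted both as a false negative and as a false positive of the unmatched prediction.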


2018 ◽  
Vol 3 ◽  
pp. 19 ◽  
Author(s):  
Hiroaki Mano ◽  
Gopal Kotecha ◽  
Kenji Leibnitz ◽  
Takashi Matsubara ◽  
Christian Sprenger ◽  
...  

Background. Chronic pain is a common, often disabling condition thought to involve a combination of peripheral and central neurobiological factors. However, the extent and nature of the associated changes in the brain are poorly understood. Methods. We investigated brain network architecture using resting-state fMRI data from chronic back pain patients in the UK and Japan (41 patients, 56 controls), as well as open data from the USA. We applied machine learning and deep learning (conditional variational autoencoder architecture) methods to explore classification of patients/controls based on network connectivity. We then studied the network topology of the data and developed a multislice modularity method to look for consensus evidence of modular reorganisation in chronic back pain. Results. Machine learning and deep learning allowed reliable classification of patients in a third, independent open data set with an accuracy of 63%, and 68% in cross-validation of all data. We identified robust evidence of network hub disruption in chronic pain, most consistently with respect to clustering coefficient and betweenness centrality. We found a consensus pattern of modular reorganisation involving extensive, bilateral regions of sensorimotor cortex, characterised primarily by negative reorganisation - a tendency for sensorimotor cortex nodes to be less inclined to form pairwise modular links with other brain nodes. Furthermore, these regions were found to display increased connectivity with the pregenual anterior cingulate cortex, a region known to be involved in endogenous pain control. In contrast, the intraparietal sulcus displayed a propensity towards positive modular reorganisation, suggesting that it might have a role in forming modules associated with the chronic pain state. Conclusion. The results provide evidence of consistent and characteristic brain network changes in chronic pain, characterised primarily by extensive reorganisation of the network architecture of the sensorimotor cortex.
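The hub metrics named in the abstract (clustering coefficient, betweenness centrality) can be computed with networkx; the toy graph below merely stands in for a resting-state fMRI connectivity network.

```python
import networkx as nx

G = nx.karate_club_graph()                      # toy stand-in network
clustering = nx.clustering(G)                   # per-node clustering coefficient
betweenness = nx.betweenness_centrality(G)      # per-node "hub-ness"
top_hubs = sorted(G.nodes, key=betweenness.get, reverse=True)[:3]
```

Comparing such per-node values between patient and control networks is one standard way to quantify hub disruption.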


2021 ◽  
Vol 1 ◽  
pp. 1183-1192
Author(s):  
Sebastian Bickel ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract Data-driven methods from the fields of artificial intelligence and machine learning are increasingly applied in mechanical engineering. This reflects the development of digital engineering in recent years, which aims to bring these methods into practice in order to realize cost and time savings. A necessary step towards the implementation of such methods, however, is the utilization of existing data: the mere availability of data does not automatically imply data usability. Therefore, this paper presents a method to automatically recognize symbols in principle sketches, which allows the generation of training data for machine learning algorithms. In this approach, the symbols are created randomly and their illustration varies with each generation. A deep learning network from the field of computer vision is used to test the generated data set and thus to recognize symbols on principle sketches. This type of drawing is especially interesting because the cost-saving potential is very high due to its application in the early phases of the product development process.
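The randomized-symbol idea can be sketched as follows: a toy zig-zag "resistor" polyline is rasterized with a random rotation and scale, so every generated sample looks slightly different. The symbol shape, parameter ranges, and rasterizer are illustrative assumptions, not the paper's generator.

```python
import numpy as np

def random_symbol_variant(size=64, rng=None):
    """Rasterize a toy zig-zag symbol with random rotation and scale."""
    rng = np.random.default_rng(rng)
    t = np.linspace(-1, 1, 400)
    pts = np.stack([t, 0.2 * np.sign(np.sin(8 * np.pi * t))], axis=1)  # zig-zag
    angle, scale = rng.uniform(0, 2 * np.pi), rng.uniform(0.6, 1.0)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    pts = scale * pts @ rot.T                       # varies with each generation
    img = np.zeros((size, size), dtype=np.uint8)
    ij = np.clip(((pts + 1) / 2 * (size - 1)).round().astype(int), 0, size - 1)
    img[ij[:, 1], ij[:, 0]] = 1                     # rasterize the polyline
    return img

sample = random_symbol_variant(rng=0)
```

Generating labeled images this way sidesteps manual annotation entirely: the class label is known by construction.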


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2503
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano

This paper proposes a method for detecting non-line-of-sight (NLOS) multipath, which causes large positioning errors in a global navigation satellite system (GNSS). We use the GNSS signal correlation output, the most primitive GNSS signal processing output, to detect NLOS multipath based on machine learning. The shape of the multi-correlator output is distorted by NLOS multipath, and features of this shape are used to discriminate NLOS signals. We implement two supervised learning methods, a support vector machine (SVM) and a neural network (NN), and compare their performance. In addition, we propose an automated method of collecting training data for LOS and NLOS signals for machine learning. Evaluation of the proposed NLOS detection method in an urban environment confirmed that the NN performed better than the SVM and that 97.7% of NLOS signals were correctly discriminated.
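The SVM-vs-NN comparison can be sketched with scikit-learn on synthetic correlator shapes: LOS signals as clean, symmetric triangle correlation functions, NLOS as delay-shifted ones. The feature model and all parameters are invented for illustration and are far simpler than real multi-correlator outputs.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
lags = np.linspace(-1, 1, 11)            # multi-correlator lag grid (invented)

def corr_shape(nlos):
    """Triangle correlation function; NLOS adds a random extra delay."""
    delay = rng.uniform(0.1, 0.4) if nlos else 0.0
    return np.maximum(0, 1 - np.abs(lags - delay)) + rng.normal(0, 0.02, lags.size)

X = np.array([corr_shape(nlos) for nlos in [False] * 200 + [True] * 200])
y = np.array([0] * 200 + [1] * 200)      # 0 = LOS, 1 = NLOS
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

svm_acc = SVC().fit(Xtr, ytr).score(Xte, yte)
nn_acc = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(Xtr, ytr).score(Xte, yte)
```

Both classifiers see the same correlator-shape features, so the comparison isolates the effect of the model choice, mirroring the paper's experimental design.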


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions demand that UAVs acquire perceptual knowledge of the environments they are navigating. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a huge cost in time and computational resources. Collecting large amounts of input data, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input-data augmentation techniques and the design of a light-weight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 different images each were used as input data, with 80% used for training the network and the remaining 20% for network validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented. This algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
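Sequential (per-sample) gradient descent, mentioned above, can be illustrated on a least-squares toy problem; updating on one sample at a time is precisely what lets it exploit redundancy in the data. The linear model below is a stand-in, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # redundant, noisy observations
true_w = np.array([1.5, -2.0, 0.5])      # hypothetical ground-truth weights
y = X @ true_w + rng.normal(0, 0.01, 500)

w = np.zeros(3)
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):             # sequential: one sample per update
        grad = 2 * (xi @ w - yi) * xi    # gradient of (xi . w - yi)^2
        w -= lr * grad
```

Because each update uses a single sample, near-duplicate samples cost almost nothing extra, whereas batch gradient descent reprocesses all of them every step.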


2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

Effective productivity estimates of freshly produced crops are essential for efficient farming, commercial planning, and logistical support. In the past ten years, machine learning (ML) algorithms have been widely used for the grading and classification of agricultural products. However, precise and accurate assessment of the maturity level of tomatoes using ML algorithms remains quite challenging, because these algorithms rely on hand-crafted features. Hence, in this paper we propose a deep learning-based tomato maturity grading system that increases the accuracy and adaptability of maturity grading tasks with a smaller amount of training data. The performance of the proposed system is assessed on real tomato datasets collected in open fields using a Nikon D3500 CCD camera. The proposed approach achieved an average maturity classification accuracy of 99.8%, which is quite promising in comparison to other state-of-the-art methods.


2020 ◽  
Author(s):  
Charles Murphy ◽  
Edward Laurence ◽  
Antoine Allard

Abstract Forecasting the evolution of contagion dynamics is still an open problem to which mechanistic models offer only a partial answer. To remain mathematically and/or computationally tractable, these models must rely on simplifying assumptions, thereby limiting the quantitative accuracy of their predictions and the complexity of the dynamics they can model. Here, we propose a complementary approach based on deep learning in which the effective local mechanisms governing a dynamical process are learned automatically from time series data. Our graph neural network architecture makes very few assumptions about the dynamics, and we demonstrate its accuracy using stochastic contagion dynamics of increasing complexity on static and temporal networks. By allowing simulations on arbitrary network structures, our approach makes it possible to explore the properties of the learned dynamics beyond the training data. Our results demonstrate how deep learning offers a new and complementary perspective for building effective models of contagion dynamics on networks.
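The idea of learning effective local mechanisms from time series can be illustrated without a GNN: simulate an SIS-style contagion on a random graph, then estimate the infection probability as a function of the number of infected neighbours directly from the observed transitions. All parameters below are invented for illustration, and the tabular estimator is a crude stand-in for the paper's graph neural network.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, gamma = 200, 0.1, 0.2           # invented network size and rates

A = np.triu(rng.random((n, n)) < 0.05, 1)
A = (A | A.T).astype(int)                # symmetric adjacency, no self-loops

def step(state):
    """One SIS update: infection depends only on infected neighbours."""
    k_inf = A @ state
    p_inf = 1 - (1 - beta) ** k_inf      # local contagion mechanism
    new = state.copy()
    new[(state == 0) & (rng.random(n) < p_inf)] = 1
    new[(state == 1) & (rng.random(n) < gamma)] = 0
    return new

# Estimate P(infection | k infected neighbours) from observed transitions.
max_k = 10
counts = np.zeros((max_k, 2))
state = (rng.random(n) < 0.1).astype(int)
for _ in range(300):
    k = np.minimum(A @ state, max_k - 1)
    nxt = step(state)
    for ki, got in zip(k[state == 0], nxt[state == 0]):
        counts[ki, got] += 1
    state = nxt

est = counts[:, 1] / counts.sum(axis=1).clip(min=1)
```

The recovered curve `est` approximates the hidden rule `1 - (1 - beta)**k`; once such an effective mechanism is learned, it can be replayed on any other network structure, which is the key point of the approach.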

