High precision automated detection of labeled nuclei in Gigapixel resolution image data of Mouse Brain

2018 ◽  
Author(s):  
Sukhendu Das ◽  
Jaikishan Jayakumar ◽  
Samik Banerjee ◽  
Janani Ramaswamy ◽  
Venu Vangala ◽  
...  

Abstract
There is a need in modern neuroscience for accurate and automated image processing techniques for analyzing the large volume of neuroanatomical imaging data. Even at light-microscopic levels, imaging mouse brains produces individual data volumes in the terabyte range. A fundamental task involves the detection and quantification of objects of a given type, e.g. neuronal nuclei or somata, in a brain-scan dataset. Traditionally this quantification has been performed by human visual inspection with high accuracy, but that approach does not scale. When modern automated CNN- and SVM-based methods are used to solve this classification problem, they achieve accuracy levels between 85% and 92%. However, rates of precision and recall closer to human performance are necessary. In this paper, we describe an unsupervised, iterative algorithm which provides high performance for the specific problem of detecting Green Fluorescent Protein (GFP) labeled nuclei in 2D scans of mouse brains. The algorithm judiciously combines classical computer vision techniques and is focused on the complex problem of decomposing strongly overlapping objects of interest. Our proposed technique uses feature detection on ridge lines over the distance transform of the image and an arc-based iterative space-filling method to solve the problem. We demonstrate our results on a gigapixel-resolution mouse brain dataset and compare them with manual annotation of the brains. Our results show that an aptly designed computer vision algorithm with classical feature extractors, when tailored to this problem of interest, achieves near-ideal, human-like performance. Quantitative comparative analysis, using manually annotated ground truth, reveals that our approach performs better on mouse brain scans than general-purpose machine learning (including deep CNN) methods.
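The decomposition idea described above, finding seed points on ridge lines of the distance transform, can be sketched with generic tools. The snippet below uses a plain local-maximum search in SciPy as a stand-in; the function name and parameters are illustrative, and the paper's arc-based space-filling refinement is not reproduced:

```python
import numpy as np
from scipy import ndimage

def split_touching_nuclei(mask, min_distance=3):
    """Seed overlapping objects via peaks of the distance transform.
    A generic stand-in for the paper's ridge-line analysis; the
    arc-based space-filling refinement is not reproduced here."""
    dist = ndimage.distance_transform_edt(mask)
    # a pixel is a seed if it is the maximum of its neighbourhood
    footprint = np.ones((2 * min_distance + 1,) * 2)
    peaks = (dist == ndimage.maximum_filter(dist, footprint=footprint)) & mask
    markers, _ = ndimage.label(peaks)
    return markers

# two discs overlapping by several pixels still yield two seeds
yy, xx = np.mgrid[0:40, 0:40]
mask = ((xx - 14) ** 2 + (yy - 20) ** 2 < 81) | \
       ((xx - 26) ** 2 + (yy - 20) ** 2 < 81)
```

Each labeled seed can then be grown back out (e.g. by watershed) to recover the individual nuclei.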

2018 ◽  
Author(s):  
Andrea Giovannucci ◽  
Johannes Friedrich ◽  
Pat Gunn ◽  
Jérémie Kalfon ◽  
Sue Ann Koay ◽  
...  

Abstract
Advances in fluorescence microscopy enable the monitoring of larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. Here we present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good performance on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected a corpus of ground truth annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
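Rigid motion correction, the first pre-processing step mentioned above, reduces to estimating a per-frame shift against a template. The following is a minimal FFT cross-correlation sketch, a toy version only: CaImAn's actual motion correction (based on NoRMCorre) additionally handles subpixel and piecewise-rigid motion.

```python
import numpy as np

def rigid_shift_estimate(frame, template):
    """Estimate an integer (dy, dx) rigid shift via circular FFT
    cross-correlation. Toy illustration of the rigid registration
    idea, not CaImAn's implementation."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame) *
                         np.conj(np.fft.fft2(template))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    h, w = frame.shape
    # map circular lags into the signed range [-n/2, n/2)
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# a frame that is the template shifted down 3 px and left 2 px
template = np.zeros((32, 32))
template[10:14, 10:14] = 1.0
frame = np.roll(template, (3, -2), axis=(0, 1))
```

Applying the negated shift to each frame aligns the movie to the template.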


2019 ◽  
Vol 36 (5) ◽  
pp. 1599-1606 ◽  
Author(s):  
Yizhi Wang ◽  
Congchao Wang ◽  
Petter Ranefall ◽  
Gerard Joey Broussard ◽  
Yinxue Wang ◽  
...  

Abstract
Motivation
Synapses are essential to neural signal transmission. Therefore, quantification of synapses and related neurites from images is vital to gain insights into the underlying pathways of brain functionality and disease. Despite the wide availability of synaptic punctum imaging data, several issues impede satisfactory quantification of these structures by current tools. First, the antibodies used for labeling synapses are not perfectly specific to synapses; these antibodies may also bind in neurites or other cell compartments. Second, the brightness of different neurites and synaptic puncta is heterogeneous, due to variation in antibody concentration and synapse-intrinsic differences. Third, images often have a low signal-to-noise ratio due to constraints of the experimental facilities and the availability of sensitive antibodies. These issues make the detection of synapses challenging and necessitate a new tool to easily and accurately quantify them.
Results
We present an automatic, probability-principled synapse detection algorithm and integrate it into our synapse quantification tool SynQuant. Derived from the theory of order statistics, our method controls the false discovery rate and improves the power of detecting synapses. SynQuant is unsupervised, works for both 2D and 3D data, and can handle multiple staining channels. Through extensive experiments on one synthetic and three real datasets with ground-truth annotation or manual labeling, SynQuant was demonstrated to outperform peer specialized unsupervised synapse detection tools as well as generic spot detection methods.
Availability and implementation
Java source code, a Fiji plug-in, and test data are available at https://github.com/yu-lab-vt/SynQuant.
Supplementary information
Supplementary data are available at Bioinformatics online.
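As a point of reference for the FDR-control component, the classical Benjamini-Hochberg step-up procedure can be sketched as follows. This is a generic illustration only; SynQuant derives its detection scores from order statistics rather than from per-punctum p-values computed this way.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    discoveries while controlling the false discovery rate at `alpha`.
    Generic sketch of FDR control, not SynQuant's exact statistic."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # compare the i-th smallest p-value against alpha * i / m
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()   # largest index meeting the bound
        keep[order[:k + 1]] = True
    return keep
```

All candidates up to the largest index whose ordered p-value clears its bound are declared discoveries, which is what gives the procedure more power than a fixed per-test threshold.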


2021 ◽  
Vol 22 (4) ◽  
pp. 413-424
Author(s):  
Siddheshwar Vilas Patil ◽  
Dinesh B. Kulkarni

In modern computing, high-performance computing (HPC) and parallel computing require most of the decision-making to concern distributing the payload (input) uniformly across the available resources, chiefly processors; HPC itself deals with the hardware and its efficient utilization. In parallel computing, a large, complex problem is broken down into multiple smaller calculations that are executed simultaneously on several processors. Efficient use of the processors plays a vital role in achieving maximum throughput, which necessitates uniform load distribution across the available processors, i.e. load balancing. Load balancing in parallel computing is modeled as a graph partitioning problem, in which weighted nodes represent the computing cost at each node and weighted edges represent the communication cost between connected nodes. The goal is to partition the graph G into k partitions such that (i) the sum of the node weights is approximately equal for each partition, and (ii) the sum of the weights of edges crossing between partitions is minimized. In this paper, a novel node-weighted and edge-weighted k-way balanced graph partitioning (NWEWBGP) algorithm of complexity O(n × n) is proposed. The algorithm works for all relevant values of k and matches or improves on earlier algorithms in terms of balanced partitioning and lowest edge-cut. For evaluation and validation, the outcome is compared with ground-truth benchmarks.
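For intuition, the balance half of the objective, item (i), can be met by a simple greedy heuristic. The sketch below assigns the heaviest nodes first to the currently lightest part and ignores edge weights entirely, so it is only a baseline illustration, not the proposed NWEWBGP algorithm:

```python
def greedy_balanced_partition(node_weights, k):
    """Greedy longest-processing-time style assignment: each node, in
    decreasing weight order, goes to the currently lightest part.
    Balances node weight only; the edge-cut objective (ii) is
    ignored in this baseline sketch."""
    parts = [[] for _ in range(k)]
    loads = [0] * k
    for node in sorted(range(len(node_weights)),
                       key=lambda i: node_weights[i], reverse=True):
        j = loads.index(min(loads))        # lightest part so far
        parts[j].append(node)
        loads[j] += node_weights[node]
    return parts, loads
```

A real partitioner must trade this balance criterion off against the edge-cut, which is what makes the problem NP-hard and motivates heuristics such as NWEWBGP.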


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Andrea Giovannucci ◽  
Johannes Friedrich ◽  
Pat Gunn ◽  
Jérémie Kalfon ◽  
Brandon L Brown ◽  
...  

Advances in fluorescence microscopy enable the monitoring of larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.


2019 ◽  
Author(s):  
Julien Denis ◽  
Robin F. Dard ◽  
Eleonora Quiroli ◽  
Rosa Cossart ◽  
Michel A. Picardo

Abstract
Two-photon calcium imaging is now widely used to infer neuronal dynamics from changes in the fluorescence of an indicator. However, state-of-the-art computational tools are not optimized for the reliable detection of fluorescence transients from highly synchronous neurons located in densely packed regions, such as the CA1 pyramidal layer of the hippocampus during early postnatal stages of development. Indeed, the latest analytical tools often lack proper benchmark measurements. To meet this challenge, we first developed a graphical user interface allowing precise manual detection of all calcium transients from imaged neurons based on visualization of the calcium imaging movie. Then, we analyzed the movies using a convolutional neural network with an attention process and a bidirectional long short-term memory network. This method reaches human performance and offers a better F1 score (the harmonic mean of sensitivity and precision) than CaImAn for inferring neural activity in the developing CA1 without any user intervention. It also enables automatic identification of activity originating from GABAergic neurons. Overall, DeepCINAC offers a simple, fast and flexible open-source toolbox for processing a wide variety of calcium imaging datasets while providing the tools to evaluate its performance.
Significance statement
Inferring neuronal activity from calcium imaging data remains a challenge due to the difficulty of obtaining ground truth with patch-clamp recordings and the problem of finding optimal tuning parameters for inference algorithms. DeepCINAC offers a flexible, fast and easy-to-use toolbox to infer neuronal activity from any kind of calcium imaging dataset through visual inspection.
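The benchmark metric above is simple to state precisely. The following sketch computes F1 from raw detection counts; the counts in the usage line are hypothetical, purely for illustration:

```python
def f1_score(tp, fp, fn):
    """F1: harmonic mean of precision and sensitivity (recall),
    computed from true-positive, false-positive and false-negative
    transient counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)      # a.k.a. sensitivity
    return 2 * precision * recall / (precision + recall)

# e.g. 8 correctly detected transients, 2 spurious, 2 missed
score = f1_score(8, 2, 2)
```

Because F1 penalizes both missed and spurious transients, it is a stricter comparison than accuracy alone.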


Processes ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1071
Author(s):  
Lucia Billeci ◽  
Asia Badolato ◽  
Lorenzo Bachi ◽  
Alessandro Tonacci

Alzheimer’s disease is notoriously the most common cause of dementia in the elderly, affecting an increasing number of people. Although widespread, its causes and modes of progression are complex and still not fully understood. Neuroimaging techniques, such as diffusion magnetic resonance (MR) imaging, allow more sophisticated and specific studies of the disease, offering a valuable tool for both its diagnosis and its early detection. However, processing large quantities of medical images is not an easy task, and researchers have turned their attention towards machine learning, a set of computer algorithms that automatically adapt their output towards the intended goal. In this paper, a systematic review of recent machine learning applications to diffusion tensor imaging studies of Alzheimer’s disease is presented, highlighting the fundamental aspects of each work and reporting its performance scores. A few of the examined studies also include mild cognitive impairment in the classification problem, while others combine diffusion data with other sources, such as structural magnetic resonance imaging (MRI), in multimodal analyses. The findings of the retrieved works suggest a promising role for machine learning in evaluating effective classification features, such as fractional anisotropy, and in possibly achieving higher accuracy across different imaging modalities.
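Fractional anisotropy, the feature singled out above, has a closed form in terms of the diffusion tensor's eigenvalues. A minimal sketch using the standard DTI formula, not tied to any specific reviewed study:

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """Fractional anisotropy (FA) of a diffusion tensor from its three
    eigenvalues: 0 for isotropic diffusion, approaching 1 for strongly
    directional diffusion along a single axis.
    FA = sqrt(3/2) * ||lam - mean(lam)|| / ||lam||."""
    lam = np.asarray(eigvals, dtype=float)
    return np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
```

In white-matter tracts diffusion is constrained along the fibres, so FA is high; its decline is what makes it a useful classification feature.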


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sakthi Kumar Arul Prakash ◽  
Conrad Tucker

Abstract
This work investigates the ability to classify misinformation in online social media networks in a manner that avoids the need for ground-truth labels. Rather than approaching the classification problem as a task for humans or machine learning algorithms, this work leverages user–user and user–media (i.e., media likes) interactions to infer the type of information (fake vs. authentic) being spread, without needing to know the actual details of the information itself. To study the inception and evolution of user–user and user–media interactions over time, we create an experimental platform that mimics the functionality of real-world social media networks. We develop a graphical model that considers the evolution of this network topology to model the propagation of uncertainty (entropy) as fake and authentic media disseminate across the network. The creation of a real-world social media network enables a wide range of hypotheses to be tested pertaining to users, their interactions with other users, and their interactions with media content. The discovery that the entropy of user–user and user–media interactions approximates fake and authentic media likes enables us to classify fake media in an unsupervised learning manner.
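The central quantity, the Shannon entropy of an interaction-count distribution, can be sketched in a few lines. This is a generic illustration; the paper's graphical model propagates this uncertainty over the evolving network topology rather than computing it on a single static distribution:

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a distribution given as raw
    interaction counts; zero-count bins contribute nothing."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

A user whose likes are spread evenly across media items yields high entropy, while one concentrated on a single item yields zero, which is the kind of contrast the unsupervised classifier exploits.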


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Xiang Li ◽  
Jianzheng Liu ◽  
Jessica Baron ◽  
Khoa Luu ◽  
Eric Patterson

Abstract
Recent attention to facial alignment and landmark detection methods, particularly with the application of deep convolutional neural networks, has yielded notable improvements. Neither these neural-network methods nor more traditional ones, though, have been tested directly for performance differences due to camera-lens focal length or camera viewing angle of subjects, systematically across the viewing hemisphere. This work uses photo-realistic, synthesized facial images with varying parameters and corresponding ground-truth landmarks to enable comparison of alignment and landmark detection techniques with respect to general performance, performance across focal length, and performance across viewing angle. Recently published high-performing methods, along with traditional techniques, are compared with regard to these aspects.


2021 ◽  
Vol 11 (13) ◽  
pp. 6006
Author(s):  
Huy Le ◽  
Minh Nguyen ◽  
Wei Qi Yan ◽  
Hoa Nguyen

Augmented reality is one of the fastest growing fields, receiving increased funding over the last few years as people realise the potential benefits of rendering virtual information in the real world. Most of today’s marker-based augmented reality applications use local feature detection and tracking techniques. The disadvantage of these techniques is that the markers must be modified to suit the particular classification algorithm, or else they suffer from low detection accuracy. Machine learning is an ideal solution for overcoming the current drawbacks of image processing in augmented reality applications. However, traditional data annotation requires extensive time and labour, as it is usually done manually. This study incorporates machine learning to detect and track augmented reality marker targets in an application using deep neural networks. We first implement an auto-generated dataset tool, which is used for machine learning dataset preparation. The final iOS prototype application incorporates object detection, object tracking and augmented reality. The machine learning model is trained to recognise the differences between targets using YOLO, one of the most well-known object detection methods. The final product makes use of ARKit, a valuable toolkit for developing augmented reality applications.

