A deep learning algorithm for 3D cell detection in whole mouse brain image datasets

2021 ◽  
Vol 17 (5) ◽  
pp. e1009074
Author(s):  
Adam L. Tyson ◽  
Charly V. Rousseau ◽  
Christian J. Niedworok ◽  
Sepiedeh Keshavarzi ◽  
Chryssanthi Tsitoura ◽  
...  

Understanding the function of the nervous system necessitates mapping the spatial distributions of its constituent cells defined by function, anatomy or gene expression. Recently, developments in tissue preparation and microscopy allow cellular populations to be imaged throughout the entire rodent brain. However, mapping these neurons manually is prone to bias and is often impractically time consuming. Here we present an open-source algorithm for fully automated 3D detection of neuronal somata in mouse whole-brain microscopy images using standard desktop computer hardware. We demonstrate the applicability and power of our approach by mapping the brain-wide locations of large populations of cells labeled with cytoplasmic fluorescent proteins expressed via retrograde trans-synaptic viral infection.
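As an illustrative sketch only (not the paper's actual detection pipeline), candidate somata in a 3D image volume can be found by Gaussian smoothing followed by local-maximum detection above an intensity threshold; the function name, sigma, and threshold below are hypothetical choices:

```python
import numpy as np
from scipy import ndimage

def detect_somata_candidates(volume, sigma=1.0, intensity_thresh=0.5):
    """Toy 3D soma-candidate detector: smooth the volume with a Gaussian,
    then keep voxels that are local maxima of the smoothed image and
    exceed an intensity threshold. Returns candidate (z, y, x) coords.
    Illustrative only; not the published two-stage algorithm."""
    smoothed = ndimage.gaussian_filter(volume.astype(float), sigma)
    # A voxel is a local maximum if it equals the max over its 3x3x3 patch.
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=3)
    return np.argwhere(local_max & (smoothed > intensity_thresh))
```

In a real pipeline these candidates would then be filtered (e.g., by a classifier) to reject artifacts; the thresholding here only removes the flat background.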



A brain tumor is one of the most dangerous, common, and aggressive diseases, leading to a very short life expectancy at the highest grade. Early recognition and fast treatment are therefore essential. In this approach, MRI images are used to analyze brain abnormalities. Manual investigation of brain tumor classification is time-consuming and subject to human error, so accurate analysis in a short span of time is an essential requirement. Here, an automatic brain tumor classification algorithm using a highly accurate convolutional neural network (CNN) is presented. Initially, the brain region is segmented by a thresholding approach followed by a morphological operation. The AlexNet transfer learning network is used because of the limited size of the brain MRI dataset. The classification layer of AlexNet is replaced by a softmax layer for benign and malignant classes and fine-tuned on the training images with small weight updates. The experimental analysis demonstrates that the proposed system achieves an F-measure of 98.44% with lower complexity than state-of-the-art methods.
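A minimal sketch of the segmentation preprocessing step, assuming a simple global threshold and a 3×3 morphological opening implemented in plain NumPy (the paper's exact threshold and structuring element are not specified here):

```python
import numpy as np

def threshold_segment(img, thresh):
    """Binary mask of pixels above a global intensity threshold."""
    return img > thresh

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole
    3x3 neighbourhood is foreground."""
    m = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.ones_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel turns on if any neighbour is on."""
    m = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= m[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def morphological_open(mask):
    """Opening = erosion then dilation; removes small bright specks
    (e.g., noise outside the brain region) while keeping large blobs."""
    return dilate(erode(mask))
```

Opening a thresholded MRI slice this way suppresses isolated noise pixels while preserving the large connected brain region that is passed on to the CNN.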


2019 ◽  
Author(s):  
Mark Allen Thornton ◽  
Diana Tamir

The social world buzzes with action. People constantly walk, talk, eat, work, play, snooze, and so on. To interact with others successfully, we need to both recognize their current actions and predict their future actions. Here we used open fMRI data to test the hypothesis that people do both at the same time: when the brain perceives an action, it simultaneously encodes likely future actions. In the scanner, participants watched a naturalistic video. We automatically annotated the actions in that video using a deep learning algorithm, and then constructed a model that could decode participants’ action representations from multivoxel neural activity. Action representations here are defined as locations within a 6-dimensional action space identified by previous work. We hypothesized that within this space, actions are located close to other actions that they are likely to precede or follow. Using this proximity principle, we tested whether a participant’s representation of current actions predicted which actions actually occurred later in the video. Results indicated that neural representations correctly presaged actions later in the video, as yet unseen by the participant. This finding suggests that the way the brain represents others’ current behavior gives people an automatic glimpse into others’ future behavior.
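The proximity principle can be sketched as a nearest-neighbor lookup in the action space; the coordinates and action names below are hypothetical placeholders, not the actual 6-dimensional space identified by the prior work:

```python
import numpy as np

def predict_next_actions(current_vec, action_space, k=3):
    """Proximity-principle sketch: rank candidate actions by Euclidean
    distance to the current action's location in a 6-D action space,
    on the assumption that nearby actions are likely to follow.
    action_space maps action name -> 6-D coordinate (hypothetical)."""
    names = list(action_space)
    coords = np.array([action_space[n] for n in names])
    dists = np.linalg.norm(coords - np.asarray(current_vec, dtype=float), axis=1)
    order = np.argsort(dists)  # nearest first
    return [names[i] for i in order[:k]]
```

Under this sketch, decoding a participant's current location in the space immediately yields a ranked forecast of upcoming actions.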


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Asif Mansoor ◽  
Muhammad Waleed Usman ◽  
Noreen Jamil ◽  
M. Asif Naeem

Electroencephalography- (EEG-) based control is a noninvasive technique which employs brain signals to control electrical devices/circuits. Currently, brain-computer interface (BCI) systems provide two types of signals, raw signals and logic state signals; the latter are used to turn devices on/off. In this paper, the capabilities of BCI systems are explored, and a survey is conducted on how to extend and enhance the reliability and accuracy of BCI systems. A structured overview is provided of the data acquisition, feature extraction, and classification methods used by different researchers in recent years. Classification algorithms for EEG-based BCI systems include adaptive classifiers, tensor classifiers, transfer learning approaches, and deep learning, as well as some miscellaneous techniques. Based on our assessment, we generally concluded that adaptive classifiers yield more accurate results than static classification techniques. Deep learning techniques show promise for achieving the desired objectives and for real-time implementation compared with other algorithms.
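A toy illustration of the adaptive-classifier idea the survey highlights: a nearest-class-mean rule whose class means are updated online with exponential forgetting, so the decision boundary can track slow drift in EEG feature distributions. This is a generic sketch, not an algorithm from any specific paper surveyed:

```python
import numpy as np

class AdaptiveMeanClassifier:
    """Adaptive classifier sketch: keeps a running mean feature vector
    per class, updated with an exponential moving average so that recent
    EEG epochs count more than old ones, and predicts the class whose
    mean is nearest to a new feature vector."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # forgetting factor: higher = faster adaptation
        self.means = {}      # class label -> running mean feature vector

    def update(self, x, label):
        x = np.asarray(x, dtype=float)
        if label not in self.means:
            self.means[label] = x.copy()
        else:
            # exponential moving average pulls the mean toward new data
            self.means[label] += self.alpha * (x - self.means[label])

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))
```

A static classifier would freeze the means after calibration; the online `update` call is what makes this rule adaptive.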


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 112
Author(s):  
Fangzhou Xu ◽  
Fenqi Rong ◽  
Yunjing Miao ◽  
Yanan Sun ◽  
Gege Dong ◽  
...  

This study describes a method for classifying electrocorticograms (ECoGs) based on motor imagery (MI) for brain–computer interface (BCI) systems. This method differs from the traditional feature extraction and classification pipeline: the proposed method employs a deep learning algorithm for feature extraction and a traditional algorithm for classification. Specifically, a convolutional neural network (CNN) extracts features from the training data, which are then classified by combining them with the gradient boosting (GB) algorithm. The combined study of CNN and GB algorithms helps to obtain richer feature information from brain activity, enabling classification of the imagined body actions. The performance of the proposed framework has been evaluated on dataset I of BCI Competition III. Furthermore, the combination of deep learning and traditional algorithms provides some ideas for future research on BCI systems.
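The second stage of such a hybrid pipeline can be sketched with scikit-learn's GradientBoostingClassifier operating on feature vectors that a CNN would produce; here the CNN stage is stubbed out, so the features and hyperparameters are placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def classify_cnn_features(train_feats, train_labels, test_feats):
    """Hybrid-pipeline sketch, stage 2: fit a gradient boosting
    classifier on (CNN-derived) feature vectors and predict labels
    for unseen trials. The CNN feature extractor is assumed to have
    already mapped raw ECoG epochs to these vectors."""
    gb = GradientBoostingClassifier(n_estimators=50, random_state=0)
    gb.fit(train_feats, train_labels)
    return gb.predict(test_feats)
```

The appeal of this split is that the CNN learns the representation while the tree ensemble supplies a robust, low-tuning decision rule on top of it.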


2020 ◽  
Author(s):  
Lukas M. Simon ◽  
Yin-Ying Wang ◽  
Zhongming Zhao

Efficient integration of heterogeneous and increasingly large single cell RNA sequencing (scRNA-seq) data poses a major challenge for analysis and, in particular, comprehensive atlasing efforts. Here, we developed a novel deep learning algorithm to overcome batch effects using batch-aware triplet neural networks, called INSCT (“Insight”). Using simulated and real data, we demonstrate that INSCT generates an embedding space which accurately integrates cells across experiments, platforms and species. Our benchmark comparisons with current state-of-the-art scRNA-seq integration methods revealed that INSCT outperforms competing methods in scalability while achieving comparable accuracies. Moreover, using INSCT in semi-supervised mode enables users to classify unlabeled cells by projecting them into a reference collection of annotated cells. To demonstrate scalability, we applied INSCT to integrate more than 2.6 million transcriptomes from four independent studies of mouse brains in less than 1.5 hours using less than 25 gigabytes of memory. This feature empowers researchers to perform atlasing-scale data integration in a typical desktop computer environment. INSCT is freely available at https://github.com/lkmklsmn/insct.

Highlights:
INSCT accurately integrates multiple scRNA-seq datasets.
INSCT accurately predicts cell types for an independent scRNA-seq dataset.
An efficient deep learning framework enables integration of millions of cells on a personal computer.
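The core training signal of a triplet network can be sketched as follows; the batch-aware part of INSCT lies in how triplets are sampled across batches, which is omitted here, and the margin value is an assumed default:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss sketch: pull the anchor cell's embedding toward a
    same-type cell (positive, ideally drawn from a different batch so
    the network learns to ignore batch effects) and push it away from a
    different-type cell (negative), by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

When the negative is already farther than the positive by more than the margin, the loss is zero and that triplet contributes no gradient; training focuses on the triplets that still violate the margin.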


2021 ◽  
Vol 11 ◽  
Author(s):  
Sarah Rudigkeit ◽  
Julian B. Reindl ◽  
Nicole Matejka ◽  
Rika Ramson ◽  
Matthias Sammer ◽  
...  

The fundamental basis for the development of novel radiotherapy methods is in-vitro cellular studies. To assess different endpoints of cellular reactions to irradiation, such as proliferation, cell cycle arrest, and cell death, several assays are used as standard methods in radiobiological research. For example, the colony forming assay investigates cell survival, and the Caspase3/7-Sytox assay investigates cell death. The major limitation of these assays is that the analysis occurs at a fixed timepoint after irradiation; thus, little is known about the reactions before or after the assay is performed. Additionally, these assays need special treatments, which influence cell behavior and health. In this study, a completely new method is proposed to tackle these challenges: a deep-learning algorithm called CeCILE (Cell Classification and In-vitro Lifecycle Evaluation), which detects and analyzes cells in videos obtained from phase-contrast microscopy. With this method, we can observe and analyze the behavior and health conditions of single cells over several days after treatment, up to a sample size of 100 cells per image frame. To train CeCILE, we built a dataset by labeling cells on microscopic images and assigning each cell a class label defining its state in the cell cycle. After successfully training CeCILE, we irradiated CHO-K1 cells with 4 Gy protons, imaged them for 2 days with a microscope equipped with a live-cell-imaging setup, and analyzed the videos both by CeCILE and by hand. From this analysis, we gained information about cell numbers, cell divisions, and cell deaths over time. In this first proof of principle, CeCILE achieved results similar to those of the colony forming and Caspase3/7-Sytox assays. Therefore, CeCILE has the potential to assess the same endpoints as state-of-the-art assays while giving extra information about the evolution of cell numbers, cell states, and the cell cycle.
Additionally, CeCILE will be extended to track individual cells and their descendants throughout the whole video, following the behavior of each cell and its progeny after irradiation. This tracking capability could take radiobiological research to the next level by providing a better understanding of cellular reactions to radiation.
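A toy sketch of the kind of readout such per-frame classifications enable: counting cells per state in each video frame to build the cell-number-over-time curves described above (the state names are hypothetical placeholders, not CeCILE's actual classes):

```python
from collections import Counter

def lifecycle_summary(frame_labels):
    """For each video frame, given the list of class labels a detector
    assigns to the cells it finds, return the total visible cells and a
    per-state count, so cell numbers, divisions, and deaths can be
    plotted over time."""
    summary = []
    for labels in frame_labels:
        counts = Counter(labels)
        summary.append({"total": len(labels), "per_state": dict(counts)})
    return summary
```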

