Frontal Cortex Neuron Type Classification with Deep Learning and Recurrence Plot

2021 ◽  
Vol 38 (3) ◽  
pp. 807-819
Author(s):  
Fatma Özcan ◽  
Ahmet Alkan

One of the goals of neural decoding in neuroscience is to create Brain-Computer Interfaces (BCI) that use nerve signals. In this context, we are interested in the activity of nerve cells. It is possible to classify nerve cells as excitatory or inhibitory by evaluating individual extracellular measurements taken from the frontal cortex of rats. Classifying neurons from spike timing values alone, using deep learning and without knowledge of the full waveform properties or the intercellular interactions, has not been studied before. In this study, the inter-spike interval (ISI) values of individual neuronal spike sequences were treated as a point process and converted into recurrence plot images, image features were extracted with the pre-trained AlexNet CNN, and frontal cortex nerve cell types were classified. Kernel, SVM, Naive Bayes, Ensemble, and decision tree classifiers were used, and the proposed methods were evaluated by accuracy, sensitivity, and specificity. An accuracy of more than 81% was achieved, so the cell type is determined automatically. These results indicate that the ISI properties of spike trains carry information about cell type, and thus about neural network activity, which makes these values significant and important for neuroscientists.
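As a rough illustration of the pipeline described above, the following Python sketch converts a spike train into an ISI-based recurrence plot image that could then be resized and passed to a pretrained CNN. The thresholding rule, the toy spike train, and the `isi_recurrence_plot` helper are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def isi_recurrence_plot(spike_times, eps=None):
    """Binary recurrence plot built from the inter-spike intervals (ISIs)
    of a single spike train; eps defaults to 10% of the ISI range (assumed)."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if eps is None:
        eps = 0.1 * (isi.max() - isi.min() + 1e-12)
    dist = np.abs(isi[:, None] - isi[None, :])   # pairwise ISI distances
    return (dist <= eps).astype(np.uint8)        # recurrence where distance <= eps

# Toy spike train (seconds); in practice these are recorded spike timestamps.
rng = np.random.default_rng(0)
spikes = np.cumsum(rng.exponential(scale=0.05, size=200))
rp = isi_recurrence_plot(spikes)
print(rp.shape)  # (199, 199) image that could be fed to a pretrained CNN such as AlexNet
```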

2020 ◽  
Author(s):  
Christopher M. Wilson ◽  
Brooke L. Fridley ◽  
José Conejo-Garcia ◽  
Xuefeng Wang ◽  
Xiaoqing Yu

Abstract: Cell type classification is an important problem in cancer research, especially with the advent of single cell technologies. Correctly identifying cells within the tumor microenvironment can provide oncologists with a snapshot of how a patient's immune system is reacting to the tumor. Wide deep learning (WDL) is an approach to construct a cell-classification prediction model that can learn patterns within high-dimensional data (deep) and ensure that biologically relevant features (wide) remain in the final model. In this paper, we demonstrate that the use of regularization can prevent overfitting and that adding a wide component to a neural network can result in a model with better predictive performance. In particular, we observed that a combination of dropout and ℓ2 regularization can lead to a validation loss that does not depend on the number of training iterations and does not experience a significant decrease in prediction accuracy compared to models with ℓ1, dropout, or no regularization. Additionally, we show that WDL can have superior classification accuracy when a model is trained and tested on data that arise from the same cancer type but from different platforms. More specifically, compared to traditional deep learning models, WDL can substantially increase the overall cell type prediction accuracy (41 to 90%) and T-cell sub-type accuracy (CD4: 0 to 76%, and CD8: 61 to 96%) when the models were trained using melanoma data obtained from the 10X platform and tested on basal cell carcinoma data obtained using SMART-seq.
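To make the wide-plus-deep idea concrete, here is a minimal PyTorch sketch (not the authors' implementation): a deep branch with dropout processes all expression features, a wide branch of hand-picked marker genes bypasses it and joins just before the output layer, and ℓ2 regularization is applied through the optimizer's weight decay. Layer sizes, feature counts, and names such as `WideDeepClassifier` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WideDeepClassifier(nn.Module):
    """Wide & deep sketch: a deep branch over all expression features plus a
    wide (linear) branch over hand-picked, biologically relevant features."""
    def __init__(self, n_genes=2000, n_markers=50, n_cell_types=10, p_drop=0.5):
        super().__init__()
        self.deep = nn.Sequential(
            nn.Linear(n_genes, 512), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p_drop),
        )
        # The wide features are concatenated with the deep output before the final layer.
        self.head = nn.Linear(128 + n_markers, n_cell_types)

    def forward(self, all_genes, marker_genes):
        return self.head(torch.cat([self.deep(all_genes), marker_genes], dim=1))

model = WideDeepClassifier()
# L2 regularization via weight decay; dropout is already inside the deep branch.
optim = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(8, 2000), torch.randn(8, 50))
print(logits.shape)  # torch.Size([8, 10])
```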


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5312
Author(s):  
Yanni Zhang ◽  
Yiming Liu ◽  
Qiang Li ◽  
Jianzhong Wang ◽  
Miao Qi ◽  
...  

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably burdens the network with a high computational cost. To solve these problems, we propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining. The proposed LFDN is designed as an encoder-decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. Then, a feature distillation normalization block is designed at the beginning of the decoding stage, which enables the network to continuously distill and screen valuable channel information from the feature maps. Besides, an information fusion strategy between the distillation modules and the feature channels is carried out by an attention mechanism. By fusing different information in the proposed approach, our network achieves state-of-the-art image deblurring and deraining results with fewer parameters and outperforms existing methods in model complexity.
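The abstract does not specify the internals of the feature distillation normalization block; as a hedged stand-in, the sketch below shows a squeeze-and-excitation-style channel attention module in PyTorch that re-weights (screens) feature-map channels, which is the general mechanism the description points to. The class name `ChannelAttention` and the reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention, used here only as a
    stand-in for the paper's distillation/attention fusion (exact block unspecified)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)  # per-channel weights in [0, 1]
        return x * w                               # re-weight (screen) channels

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```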


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that can easily be confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC, which depends heavily on personal diagnostic experience, is subjective, and makes consistency among observers difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception-Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, in order to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining an Inception network and a Residual network to extract image features. Finally, the SVM was trained on the image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in classifying PTC histological images.
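A minimal sketch of the final stage, CNN feature extraction followed by an SVM: the paper's IRCNN is a custom Inception/Residual hybrid, so a pretrained torchvision ResNet-18 is substituted here purely to illustrate the pipeline, and the toy images and labels are placeholders.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained backbone used as a fixed feature extractor (stand-in for the IRCNN).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep 512-d features
backbone.eval()

def extract_features(images):
    """images: float tensor of shape (N, 3, 224, 224), already normalized."""
    with torch.no_grad():
        return backbone(images).numpy()

# Toy stand-in data; real use would load preprocessed histology patches and labels.
images, labels = torch.randn(16, 3, 224, 224), [0, 1] * 8
svm = SVC(kernel="rbf").fit(extract_features(images), labels)
print(svm.predict(extract_features(images[:4])))
```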


2021 ◽  
Vol 7 (3) ◽  
pp. 51
Author(s):  
Emanuela Paladini ◽  
Edoardo Vantaggiato ◽  
Fares Bougourzi ◽  
Cosimo Distante ◽  
Abdenour Hadid ◽  
...  

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.
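A plausible reading of the Mean-Ensemble-CNN step, sketched in PyTorch under the assumption that the ensemble averages per-class softmax probabilities across the trained CNNs; the toy linear models below stand in for the four architectures listed above, and the NN-Ensemble variant would instead feed the stacked probabilities into a small neural network.

```python
import torch
import torch.nn.functional as F

def mean_ensemble_predict(models, images):
    """Average the softmax outputs of several CNNs and take the argmax per image."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(images), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)

# Toy stand-ins for ResNet-101 / ResNeXt-50 / Inception-v3 / DenseNet-161 outputs (9 classes).
toy_models = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 9))
              for _ in range(4)]
for m in toy_models:
    m.eval()
print(mean_ensemble_predict(toy_models, torch.randn(5, 3, 64, 64)))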


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

Abstract: In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image by incorporating meteorological domain knowledge. Using the Visual Geometry Group model VGG-16 with images preprocessed with a fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than a previous study, even with sequential-split validation. By comparing t-distributed stochastic neighbor embedding (t-SNE) plots of the VGG feature maps with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model successfully extracts image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distributions surrounding the eye, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that a purely data-driven approach using only deep learning has limitations, and that the integration of domain knowledge could bring new breakthroughs.
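The fisheye preprocessing is described only qualitatively, so the following NumPy sketch shows one simple radial remapping that magnifies the image centre (where the eye and eyewall sit) before the image is passed to VGG-16; the exponent-based mapping and the `strength` parameter are assumptions, not the authors' transform.

```python
import numpy as np

def fisheye_magnify(img, strength=1.6):
    """Radial remapping that enlarges the image centre (rough fisheye stand-in)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized coordinates in [-1, 1] with origin at the image centre.
    x = (xx - w / 2) / (w / 2)
    y = (yy - h / 2) / (h / 2)
    r = np.sqrt(x**2 + y**2) + 1e-9
    scale = r**strength / r              # >1 exponent samples closer to the centre for r < 1
    src_x = np.clip((x * scale * w / 2 + w / 2).astype(int), 0, w - 1)
    src_y = np.clip((y * scale * h / 2 + h / 2).astype(int), 0, h - 1)
    return img[src_y, src_x]

img = np.random.rand(256, 256, 3)        # stand-in for a satellite image
print(fisheye_magnify(img).shape)        # (256, 256, 3)
```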


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Yinghao Chu ◽  
Chen Huang ◽  
Xiaodan Xie ◽  
Bohai Tan ◽  
Shyam Kamal ◽  
...  

This study proposes a multilayer hybrid deep-learning system (MHS) to automatically sort waste disposed of by individuals in urban public areas. The system deploys a high-resolution camera to capture waste images and sensors to detect other useful feature information. The MHS uses a CNN-based algorithm to extract image features and a multilayer perceptron (MLP) to consolidate the image features with the other feature information and classify waste as recyclable or other. The MHS is trained and validated against manually labelled items, achieving an overall classification accuracy higher than 90% under two different testing scenarios, which significantly outperforms a reference CNN-based method relying on image-only inputs.
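A minimal sketch of the hybrid CNN + MLP idea: image features from a small CNN are concatenated with sensor-derived features and fed to an MLP that outputs recyclable-vs-other scores. The tiny CNN, the layer sizes, and the assumed eight sensor features are illustrative, not the actual MHS configuration.

```python
import torch
import torch.nn as nn

class HybridWasteClassifier(nn.Module):
    """CNN image features concatenated with sensor features, classified by an MLP."""
    def __init__(self, n_sensor_feats=8, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                      # tiny CNN stand-in
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> 16-d image feature
        )
        self.mlp = nn.Sequential(
            nn.Linear(16 + n_sensor_feats, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, image, sensor_feats):
        return self.mlp(torch.cat([self.cnn(image), sensor_feats], dim=1))

model = HybridWasteClassifier()
out = model(torch.randn(4, 3, 128, 128), torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 2]) -> recyclable vs. other
```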


2021 ◽  
Vol 9 ◽  
Author(s):  
Joshua J. Levy ◽  
Rebecca M. Lebeaux ◽  
Anne G. Hoen ◽  
Brock C. Christensen ◽  
Louis J. Vaickus ◽  
...  

What is the relationship between mortality and satellite images as elucidated through the use of Convolutional Neural Networks?
Background: Following a century of increase, life expectancy in the United States has stagnated and begun to decline in recent decades. Using satellite images and street view images, prior work has demonstrated associations of the built environment with income, education, access to care, and health factors such as obesity. However, assessment of learned image feature relationships with variation in crude mortality rate across the United States has been lacking.
Objective: We sought to investigate if county-level mortality rates in the U.S. could be predicted from satellite images.
Methods: Satellite images of neighborhoods surrounding schools were extracted with the Google Static Maps application programming interface for 430 counties representing ~68.9% of the US population. A convolutional neural network was trained using crude mortality rates for each county in 2015 to predict mortality. Learned image features were interpreted using Shapley Additive Feature Explanations, clustered, and compared to mortality and its associated covariate predictors.
Results: Predicted mortality from satellite images in a held-out test set of counties was strongly correlated to the true crude mortality rate (Pearson r = 0.72). Direct prediction of mortality using a deep learning model across a cross-section of 430 U.S. counties identified key features in the environment (e.g., sidewalks, driveways, and hiking trails) associated with lower mortality. Learned image features were clustered, and we identified 10 clusters that were associated with education, income, geographical region, race, and age.
Conclusions: The application of deep learning techniques to remotely-sensed features of the built environment can serve as a useful predictor of mortality in the United States. Although we identified features that were largely associated with demographic information, future modeling approaches that directly identify image features associated with health-related outcomes have the potential to inform targeted public health interventions.
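As a hedged illustration of the Methods/Results shape, the sketch below shows a small regression CNN mapping a satellite tile to a predicted crude mortality rate and the Pearson-r comparison against the true rates on held-out counties. The architecture and the random tiles and rates are placeholders; only the reported r = 0.72 figure comes from the abstract.

```python
import numpy as np
import torch
import torch.nn as nn

class MortalityCNN(nn.Module):
    """Toy regression CNN: satellite tile -> predicted crude mortality rate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)

model = MortalityCNN().eval()
tiles = torch.randn(32, 3, 128, 128)        # stand-in satellite tiles
true_rates = np.random.rand(32) * 1200      # stand-in crude mortality rates
with torch.no_grad():
    pred = model(tiles).numpy()
r = np.corrcoef(pred, true_rates)[0, 1]     # the paper reports r = 0.72 on its test counties
print(f"Pearson r on held-out counties: {r:.2f}")
```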


2021 ◽  
Author(s):  
Alexei M. Bygrave ◽  
Ayesha Sengupta ◽  
Ella P. Jackert ◽  
Mehroz Ahmed ◽  
Beloved Adenuga ◽  
...  

Synapses in the brain exhibit cell-type-specific differences in basal synaptic transmission and plasticity. Here, we evaluated cell-type-specific differences in the composition of glutamatergic synapses, identifying Btbd11 as an inhibitory interneuron-specific, synapse-enriched protein. Btbd11 is highly conserved across species and binds to core postsynaptic proteins including Psd-95. Intriguingly, we show that Btbd11 can undergo liquid-liquid phase separation when expressed with Psd-95, supporting the idea that the glutamatergic postsynaptic density at synapses in inhibitory and excitatory neurons exists in a phase-separated state. Knockout of Btbd11 from inhibitory interneurons decreased glutamatergic signaling onto parvalbumin-positive interneurons. Further, both in vitro and in vivo, we find that Btbd11 knockout disrupts network activity. At the behavioral level, Btbd11 knockout from interneurons sensitizes mice to pharmacologically induced hyperactivity following NMDA receptor antagonist challenge. Our findings identify a cell-type-specific protein that supports glutamatergic synapse function in inhibitory interneurons, with implications for circuit function and animal behavior.

