A General LSTM-based Deep Learning Method for Estimating Neuronal Models and Inferring Neural Circuitry

2021 ◽  
Author(s):  
Kaiwen Sheng ◽  
Peng Qu ◽  
Le Yang ◽  
Xiaofei Liu ◽  
Liuyuan He ◽  
...  

Abstract Computational neural models are essential tools for neuroscientists to study the functional roles of single neurons or neural circuits. With recent advances in experimental techniques, there is a growing demand to build neural models at the single-neuron or large-scale circuit level. A long-standing challenge in building such models lies in tuning the free parameters of the models to closely reproduce experimental recordings. Many advanced machine-learning-based methods have been developed recently for parameter tuning, but most of them are task-specific or require onerous manual intervention; a general and fully automated method has so far been lacking. Here, we present a Long Short-Term Memory (LSTM)-based deep learning method, the General Neural Estimator (GNE), which fully automates the parameter tuning procedure and can be directly applied to both single neuronal models and large-scale neural circuits. We made comprehensive comparisons with many advanced methods, and GNE showed outstanding performance on both synthesized and experimental data. Finally, we propose a roadmap centered on GNE to guide neuroscientists in computationally reconstructing single neurons and neural circuits, which might inspire future brain reconstruction techniques and corresponding experimental design. The code of our work will be publicly available upon acceptance of this paper.
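The abstract describes GNE only at a high level; the paper's architecture and code are not reproduced here. As an illustrative sketch of the LSTM building block such an estimator relies on (all weights and sizes below are made up for demonstration, not taken from the paper), a single scalar LSTM cell step can be written as:

```python
# Minimal sketch of one LSTM cell time step (scalar input and hidden state).
# Illustrative only: GNE's real architecture, dimensions, and training loop
# are not specified in the abstract.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W):
    """One LSTM step. W maps each gate name to (input, hidden, bias) weights:
    input gate i, forget gate f, candidate g, output gate o."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])
    c = f * c_prev + i * g      # new cell state
    h = o * math.tanh(c)        # new hidden state
    return h, c

# Run a short voltage-trace-like sequence through the cell with toy weights.
W = {k: (0.5, 0.5, 0.0) for k in ("i", "f", "g", "o")}
h, c = 0.0, 0.0
for x in [0.1, 0.8, -0.3, 0.5]:
    h, c = lstm_cell_step(x, h, c, W)
```

In a parameter-estimation setting, the hidden state after consuming a recording would feed a regression head that outputs the model's free parameters.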

2021 ◽  
Vol 11 (5) ◽  
pp. 615
Author(s):  
Yiping Wang ◽  
Yang Dai ◽  
Zimo Liu ◽  
Jinjie Guo ◽  
Gongpeng Cao ◽  
...  

Surgical intervention for, or control of, drug-refractory epilepsy requires accurate analysis of invasive intracranial EEG (iEEG) data. A multi-branch deep learning fusion model is proposed to identify epileptogenic signals from the epileptogenic area of the brain. The classical branch extracts multi-domain signal features to construct a time-series feature sequence and then abstracts it through a bi-directional long short-term memory attention machine (Bi-LSTM-AM) classifier. The deep learning branch uses raw time-series signals to build a one-dimensional convolutional neural network (1D-CNN) for end-to-end deep feature extraction and signal detection. These two branches are integrated to obtain deep fusion features and results. Resampling is employed to split the imbalanced epileptogenic and non-epileptogenic samples into balanced subsets for clinical validation. The model is validated on two publicly available benchmark iEEG databases, and its effectiveness is then verified on a private, large-scale clinical stereo-EEG database. The model achieves high sensitivity (97.78%), accuracy (97.60%), and specificity (97.42%) on the Bern–Barcelona database, surpassing existing state-of-the-art techniques. It is then demonstrated on a clinical dataset with an average intra-subject accuracy of 92.53% and cross-subject accuracy of 88.03%. The results suggest that the proposed method is a valuable and robust approach that can help researchers and clinicians develop automated methods to identify the source of iEEG signals.
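The reported sensitivity, specificity, and accuracy follow directly from a binary confusion matrix. As a quick reference (the counts below are hypothetical, not taken from the paper's results), the three metrics can be computed as:

```python
# Sensitivity, specificity, and accuracy from binary confusion-matrix counts.
# The example counts are invented for illustration.
def confusion_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

sens, spec, acc = confusion_metrics(tp=88, fp=3, tn=97, fn=2)
```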


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that help in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through a deep-learning-based MobileNet V2 and Long Short-Term Memory (LSTM) model. The MobileNet V2 model proved efficient, achieving good accuracy while remaining able to run on lightweight computational devices, and the proposed model maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), a Convolutional Neural Network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region considerably faster, with almost half the computations of the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action: it helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
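The abstract mentions a grey-level co-occurrence matrix (GLCM) for assessing diseased growth. A minimal sketch of a normalized GLCM for a small quantized image follows (illustrative only; the paper's pixel offsets and quantization levels are not specified, and the toy image below is invented):

```python
# Normalized grey-level co-occurrence matrix for one (dx, dy) offset.
# Counts how often grey level a appears next to grey level b, then normalizes.
def glcm(image, dx=1, dy=0, levels=3):
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
    total = sum(sum(row) for row in counts)
    return [[v / total for v in row] for row in counts]

# Toy 3x3 image quantized to 3 grey levels.
image = [
    [0, 0, 1],
    [0, 0, 1],
    [0, 2, 2],
]
m = glcm(image, levels=3)
```

Texture statistics such as contrast or homogeneity are then derived from the entries of `m`.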


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to distill information from noisy social media data streams and deliver it to community members in a timely and accurate manner. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents' information seeking of hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F1 scores.
Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored application of computing—how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues, such as topical, geographical, and social proximity, to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods that they live in, which in turn may boost their community engagement.
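The neural pairwise ranking algorithm is not detailed in the abstract. A common formulation of a pairwise ranking objective, shown here as a hedged sketch rather than SHEDR's actual loss, scores an event the user is interested in above one they are not:

```python
# Bayesian-personalized-ranking-style pairwise loss: penalize the model when
# the score of a positive (interesting) event does not exceed that of a
# negative event. Illustrative only; SHEDR's actual objective may differ.
import math

def bpr_loss(score_pos, score_neg):
    # negative log-sigmoid of the score margin
    margin = score_pos - score_neg
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A larger positive margin drives the loss toward zero; a zero margin gives ln 2, so training pushes interesting events above uninteresting ones in the ranking.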


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1010
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Abdulaziz Saleh Ba Wazir ◽  
Myles Joshua Toledo Tan ◽  
Mohammad Faizal Ahmad Fauzi

Background: Laparoscopy is a surgery performed in the abdomen without making large incisions in the skin, with the aid of a video camera, resulting in laparoscopic videos. The laparoscopic video is prone to various distortions such as noise, smoke, uneven illumination, defocus blur, and motion blur. One of the main components in the feedback loop of video enhancement systems is distortion identification, which automatically classifies the distortions affecting the videos and selects the video enhancement algorithm accordingly. This paper addresses the laparoscopic video distortion identification problem by developing fast and accurate multi-label distortion classification using a deep learning model. Current deep learning solutions based on convolutional neural networks (CNNs) can address laparoscopic video distortion classification, but they learn only spatial information. Methods: In this paper, utilization of both spatial and temporal features in a CNN-long short-term memory (CNN-LSTM) model is proposed as a novel solution to enhance the classification. First, a pre-trained ResNet50 CNN was used to extract spatial features from each video frame by transferring representations from large-scale natural images to laparoscopic images. Next, an LSTM was utilized to consider the temporal relations between the features extracted from the laparoscopic video frames and produce the multi-label categories. A novel laparoscopic video dataset proposed in the ICIP2020 challenge was used for training and evaluation of the proposed method. Results: The experiments conducted show that the proposed CNN-LSTM outperforms the existing solutions in terms of accuracy (85%) and F1-score (94.2%). Additionally, the proposed distortion identification model is able to run in real time with a low inference time (0.15 s). Conclusions: The proposed CNN-LSTM model is a feasible solution for distortion identification in laparoscopic videos.
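Multi-label metrics such as the reported F1-score are typically computed by thresholding per-label probabilities and micro-averaging counts over all labels. A minimal sketch (the example labels and threshold are made up, not from the laparoscopic dataset):

```python
# Threshold per-label probabilities to binary predictions, then compute
# micro-averaged F1 over all (sample, label) pairs. Toy data for illustration.
def binarize(probs, threshold=0.5):
    return [[1 if p >= threshold else 0 for p in row] for row in probs]

def micro_f1(y_true, y_pred):
    tp = fp = fn = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            if p and t:
                tp += 1
            elif p and not t:
                fp += 1
            elif t and not p:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

y_true = [[1, 0, 1], [0, 1, 0]]           # per-video distortion labels
y_pred = binarize([[0.9, 0.2, 0.4], [0.1, 0.8, 0.3]])
f1 = micro_f1(y_true, y_pred)
```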


Sensor Review ◽  
2019 ◽  
Vol 39 (2) ◽  
pp. 208-217 ◽  
Author(s):  
Jinghan Du ◽  
Haiyan Chen ◽  
Weining Zhang

Purpose In large-scale monitoring systems, sensors deployed at different locations collect massive amounts of useful time-series data, which can support real-time data analytics and related applications. However, affected by the hardware devices themselves, sensor nodes often fail to work, so the collected data are commonly incomplete. The purpose of this study is to predict and recover the missing data in sensor networks. Design/methodology/approach Considering the spatio-temporal correlation of large-scale sensor data, this paper proposes a data recovery model for sensor networks based on a deep learning method, i.e. the deep belief network (DBN). Specifically, when one sensor fails, the historical time-series data of that sensor and the real-time data from surrounding sensor nodes, which the proposed similarity filter identifies as highly similar to the failed node, are collected first. Then, the high-level feature representation of these spatio-temporally correlated data is extracted by the DBN. Moreover, to determine the structure of the DBN model, a reconstruction error-based algorithm is proposed. Finally, the missing data are predicted from these features by a single-layer neural network. Findings This paper collects a noise data set from an airport monitoring system for experiments. Various comparative experiments show that the proposed algorithms are effective. The proposed data recovery model is compared with several other classical models, and the experimental results show that the deep learning-based model achieves not only better prediction accuracy but also better training time and model robustness. Originality/value A deep learning method is investigated for the data recovery task and proved effective compared with previous methods, which may provide practical experience in the application of deep learning methods.
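The similarity filter that selects correlated neighbor sensors is not specified in detail in the abstract. One plausible sketch, assuming Pearson correlation as the similarity measure and a hypothetical threshold of 0.8 (both assumptions, not the paper's actual filter):

```python
# Pick neighbor sensors whose historical series correlate strongly with the
# failed sensor's history. Correlation measure and threshold are assumptions.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def similar_neighbors(failed_hist, neighbors, threshold=0.8):
    """neighbors maps a sensor name to its historical series."""
    return [name for name, series in neighbors.items()
            if pearson(failed_hist, series) >= threshold]

failed = [1.0, 2.0, 3.0, 4.0]
neighbors = {"a": [2.0, 4.0, 6.0, 8.0],   # strongly correlated
             "b": [4.0, 3.0, 2.0, 1.0]}   # anti-correlated
result = similar_neighbors(failed, neighbors)
```

The selected neighbors' real-time readings would then join the failed sensor's history as input to the DBN.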


2020 ◽  
Vol 36 (19) ◽  
pp. 4935-4941 ◽  
Author(s):  
Yao Yao ◽  
Ihor Smal ◽  
Ilya Grigoriev ◽  
Anna Akhmanova ◽  
Erik Meijering

Abstract Motivation Biological studies of dynamic processes in living cells often require accurate particle tracking as a first step toward quantitative analysis. Although many particle tracking methods have been developed for this purpose, they are typically based on prior assumptions about the particle dynamics, and/or they involve careful tuning of various algorithm parameters by the user for each application. This may make existing methods difficult to apply for non-expert users and to a broader range of tracking problems. Recent advances in deep-learning techniques hold great promise in eliminating these disadvantages, as they can learn how to optimally track particles from example data. Results Here, we present a deep-learning-based method for the data association stage of particle tracking. The proposed method uses convolutional neural networks and long short-term memory networks to extract relevant dynamics features and predict the motion of a particle and the cost of linking detected particles from one time point to the next. Comprehensive evaluations on datasets from the particle tracking challenge demonstrate the competitiveness of the proposed deep-learning method compared to the state of the art. Additional tests on real time-lapse fluorescence microscopy images of various types of intracellular particles show that the method performs comparably with human experts. Availability and implementation The software code implementing the proposed method, as well as a description of how to obtain the test data used in the presented experiments, will be available for non-commercial purposes from https://github.com/yoyohoho0221/pt_linking. Supplementary information Supplementary data are available at Bioinformatics online.
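In the data association stage, the networks predict linking costs between detections at consecutive time points, and an assignment procedure then turns costs into links. A simplified greedy sketch follows (the authors' actual assignment method may differ; the gating cost below is a made-up parameter):

```python
# Greedy data association: link detections at time t to detections at t+1
# by ascending predicted cost, skipping pairs above a gating cost.
# Illustrative stand-in for the assignment step; not the paper's algorithm.
def link_particles(costs, max_cost=5.0):
    """costs[i][j] is the predicted cost of linking detection i at t
    to detection j at t+1. Returns a dict of accepted links i -> j."""
    pairs = sorted(((c, i, j)
                    for i, row in enumerate(costs)
                    for j, c in enumerate(row)),
                   key=lambda t: t[0])
    used_i, used_j, links = set(), set(), {}
    for c, i, j in pairs:
        if c <= max_cost and i not in used_i and j not in used_j:
            links[i] = j
            used_i.add(i)
            used_j.add(j)
    return links
```

In practice an optimal assignment (e.g. the Hungarian algorithm) is often preferred over greedy matching, at higher computational cost.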


Author(s):  
Hao Lv ◽  
Fu-Ying Dao ◽  
Zheng-Xing Guan ◽  
Hui Yang ◽  
Yan-Wen Li ◽  
...  

Abstract As a newly discovered protein posttranslational modification, histone lysine crotonylation (Kcr) is involved in cellular regulation and human diseases. Various proteomics technologies have been developed to detect Kcr sites. However, experimental approaches for identifying Kcr sites are often time-consuming and labor-intensive, making them difficult to apply widely across species at large scale. Computational approaches are cost-effective and can be used in a high-throughput manner to generate relatively precise identifications. In this study, we develop a deep learning-based method termed Deep-Kcr for Kcr site prediction by combining sequence-based features, physicochemical property-based features and numerical space-derived information with information gain feature selection. We investigate the performance of a convolutional neural network (CNN) and five commonly used classifiers (long short-term memory network, random forest, LogitBoost, naive Bayes and logistic regression) using 10-fold cross-validation and an independent test set. Results show that the CNN consistently displayed the best performance, with high computational efficiency on large datasets. We also compare Deep-Kcr with other existing tools to demonstrate the excellent predictive power and robustness of our method. Based on the proposed model, a webserver called Deep-Kcr was established and is freely accessible at http://lin-group.cn/server/Deep-Kcr.
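Deep-Kcr applies information gain feature selection. A minimal sketch of information gain for discrete features follows (toy data, not the actual Kcr feature sets):

```python
# Information gain of a discrete feature with respect to class labels:
# entropy of the labels minus the weighted entropy within each feature value.
# Toy example; Deep-Kcr's real features are sequence- and property-based.
import math

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                for v in set(labels))

def information_gain(feature, labels):
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain
```

Features are then ranked by gain, and the top-ranked ones are kept for the classifier.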


2021 ◽  
Vol 11 (5) ◽  
pp. 2149
Author(s):  
Moumita Sen Sarma ◽  
Kaushik Deb ◽  
Pranab Kumar Dhar ◽  
Takeshi Koshiba

Sports activities play a crucial role in preserving our health and mind. Due to the rapid growth of sports video repositories, automated classification has become essential for easy access and retrieval, content-based recommendations, contextual advertising, etc. Traditional Bangladeshi sport is a genre of sports that bears the cultural significance of Bangladesh, and classifying this genre can act as a catalyst in reviving its lost prominence. In this paper, deep learning methods are utilized to classify traditional Bangladeshi sports videos by extracting both spatial and temporal features from the videos. In this regard, a new Traditional Bangladeshi Sports Video (TBSV) dataset is constructed containing five classes: Boli Khela, Kabaddi, Lathi Khela, Kho Kho, and Nouka Baich. A key contribution of this paper is the development of a from-scratch model incorporating the two most prominent deep learning algorithms: the convolutional neural network (CNN) and long short-term memory (LSTM). Moreover, a transfer learning approach with fine-tuned VGG19 and LSTM is used for TBSV classification. Furthermore, the proposed model is assessed over four challenging datasets: KTH, UCF-11, UCF-101, and UCF Sports. This model outperforms some recent works on these datasets while showing 99% average accuracy on the TBSV dataset.
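Video-level classification from per-frame or per-clip predictions is often finished with a simple aggregation step. A minimal sketch, assuming majority voting (the paper's aggregation scheme is not stated in the abstract, and the class names below merely reuse the dataset's five categories):

```python
# Aggregate per-frame class predictions into one video-level label by
# majority vote. An assumption for illustration; the paper may aggregate
# differently (e.g. inside the LSTM itself).
from collections import Counter

def video_label(frame_preds):
    """frame_preds is a list of per-frame predicted class names."""
    return Counter(frame_preds).most_common(1)[0][0]
```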

