Automated sleep state classification of wide-field calcium imaging data via multiplex visibility graphs and deep learning

Author(s):  
Xiaohui Zhang ◽  
Eric C. Landsness ◽  
Wei Chen ◽  
Hanyang Miao ◽  
Michelle Tang ◽  
...  
2020 ◽  


2021 ◽  
Author(s):  
Martin Žofka ◽  
Linh Thuy Nguyen ◽  
Eva Mašátová ◽  
Petra Matoušková

Poor efficacy of some anthelmintics and rising concerns about widespread drug resistance have highlighted the need for new drug discovery. The parasitic nematode Haemonchus contortus is an important model organism widely used in studies of drug resistance and drug screening, with the motility assay being the current gold standard. We applied a deep learning approach, Mask R-CNN, to the analysis of motility videos and compared it with other commonly used algorithms of varying complexity, namely the Wiggle Index and the Wide Field-of-View Nematode Tracking Platform. Mask R-CNN consistently outperformed the other algorithms in forecast precision across videos containing varying rates of motile worms, with a mean absolute error of 5.6%. Using Mask R-CNN for motility assays confirmed the common problem of algorithms that use Non-Maximum Suppression in detecting overlapping objects, which negatively impacted overall precision. Using intersection over union (IoU) as the measure for classifying motile/non-motile instances gave an overall accuracy of 89%. In comparison to the existing methods evaluated here, Mask R-CNN performed better, and we anticipate that this method will broaden the range of possible approaches to video analysis of worm motility. IoU has shown promise as a good metric for evaluating the motility of individual worms.
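As an illustration of the IoU-based motility measure described above, the following is a minimal sketch (not the authors' code): a worm is segmented in each frame, and low frame-to-frame IoU of its mask is taken to indicate movement. The 0.9 threshold is an illustrative assumption, not a value from the paper.

```python
# Minimal sketch: classify a worm as motile or non-motile from its per-frame
# binary segmentation masks via frame-to-frame IoU.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def is_motile(masks_per_frame: list, iou_threshold: float = 0.9) -> bool:
    """A worm whose mask overlaps itself almost perfectly across consecutive
    frames is treated as non-motile; low mean IoU indicates movement.
    (The threshold here is an assumption for illustration.)"""
    ious = [iou(a, b) for a, b in zip(masks_per_frame, masks_per_frame[1:])]
    return float(np.mean(ious)) < iou_threshold

# Toy example: a 'worm' that shifts one pixel per frame is classified as motile.
frames = [np.roll(np.eye(16, dtype=bool), shift=i, axis=1) for i in range(5)]
print(is_motile(frames))  # True
```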


2021 ◽  
Author(s):  
Roberto Augusto Philippi Martins ◽  
Danilo Silva

The lack of labeled data is one of the main obstacles to the development of deep learning models, as they rely on large labeled datasets to achieve high accuracy on complex tasks. Our objective is to evaluate the performance gain from additional unlabeled data when training a deep learning model on medical imaging data. We present a semi-supervised learning algorithm that uses a teacher-student paradigm to leverage unlabeled data in the classification of chest X-ray images. Applying our algorithm to the ChestX-ray14 dataset, we achieve a substantial increase in performance when using small labeled datasets. With our method, a model achieves an AUROC of 0.822 with only 2% labeled data and 0.865 with 5% labeled data, while a fully supervised method achieves an AUROC of 0.807 with 5% labeled data and only 0.845 with 10%.
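To make the teacher-student idea concrete, here is a minimal sketch of a generic pseudo-labelling loop; it is an assumption about the general pattern, not the authors' implementation, and a tiny MLP on random tensors stands in for a chest X-ray CNN.

```python
# Minimal teacher-student pseudo-labelling sketch (illustrative only).
import torch
import torch.nn as nn

def make_model(in_dim=64, n_classes=2):
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

def train(model, x, y, epochs=50, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Toy data standing in for extracted image features.
x_lab, y_lab = torch.randn(20, 64), torch.randint(0, 2, (20,))
x_unlab = torch.randn(200, 64)

teacher = train(make_model(), x_lab, y_lab)       # 1. train teacher on the small labelled set
with torch.no_grad():
    pseudo = teacher(x_unlab).argmax(dim=1)       # 2. pseudo-label the unlabelled pool
x_all = torch.cat([x_lab, x_unlab])
y_all = torch.cat([y_lab, pseudo])
student = train(make_model(), x_all, y_all)       # 3. train student on labelled + pseudo-labelled data
```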


2021 ◽  
Vol 4 ◽  
Author(s):  
Paul Y. Wang ◽  
Sandalika Sapra ◽  
Vivek Kurien George ◽  
Gabriel A. Silva

Although a number of studies have explored deep learning in neuroscience, the application of these algorithms to neural systems on a microscopic scale, i.e. at parameters relevant to lower scales of organization, remains relatively novel. Motivated by advances in whole-brain imaging, we examined the performance of deep learning models on microscopic neural dynamics and the resulting emergent behaviors using calcium imaging data from the nematode C. elegans. As one of the few species for which neuron-level dynamics can be recorded, C. elegans serves as an ideal organism for designing and testing models that bridge recent advances in deep learning and established concepts in neuroscience. We show that neural networks perform remarkably well on both neuron-level dynamics prediction and behavioral state classification. In addition, we compared the performance of structure-agnostic neural networks and graph neural networks to investigate whether graph structure can be exploited as a favourable inductive bias. To perform this experiment, we designed a graph neural network which explicitly infers relations between neurons from neural activity and leverages the inferred graph structure during computations. In our experiments, we found that graph neural networks generally outperformed structure-agnostic models and excel at generalization to unseen organisms, implying a potential path to generalizable machine learning in neuroscience.
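The following is a minimal sketch of the general idea of inferring a neuron-neuron graph from activity and using it in message passing; it is an assumption about the overall pattern, not the authors' architecture, and the correlation threshold and layer sizes are illustrative.

```python
# Minimal sketch: correlation-based graph inference + one graph-convolution-style update.
import numpy as np

rng = np.random.default_rng(0)
traces = rng.standard_normal((50, 1000))          # 50 neurons x 1000 time points of calcium activity

# 1. Infer relations: threshold the absolute correlation matrix into an adjacency matrix.
corr = np.corrcoef(traces)
adj = (np.abs(corr) > 0.05).astype(float)         # illustrative threshold
np.fill_diagonal(adj, 1.0)                        # keep self-loops

# 2. Symmetric normalisation, as in a standard graph convolution.
deg_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
a_norm = deg_inv_sqrt @ adj @ deg_inv_sqrt

# 3. One message-passing layer: neighbours' features are aggregated and projected.
features = traces[:, :16]                         # toy per-neuron feature vectors
weights = rng.standard_normal((16, 8))
hidden = np.maximum(a_norm @ features @ weights, 0.0)   # ReLU(A_norm X W)
print(hidden.shape)                               # (50, 8)
```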


2020 ◽  
Vol 56 ◽  
pp. 101663 ◽  
Author(s):  
Jan Werth ◽  
Mustafa Radha ◽  
Peter Andriessen ◽  
Ronald M. Aarts ◽  
Xi Long

2021 ◽  
Vol 11 (16) ◽  
pp. 7412
Author(s):  
Grigorios-Aris Cheimariotis ◽  
Maria Riga ◽  
Kostas Haris ◽  
Konstantinos Toutouzas ◽  
Aggelos K. Katsaggelos ◽  
...  

Intravascular Optical Coherence Tomography (IVOCT) images provide important insight into every aspect of atherosclerosis. Specifically, the extent of plaque and its type, which are indicative of the patient's condition, are better assessed from OCT images than from other in vivo modalities. The large amount of imaging data per patient requires automatic methods for rapid results. An effective step towards automatic plaque detection and plaque characterization is the classification of axial lines (A-lines) into normal and various plaque types. In this work, a novel automatic method for A-line classification is proposed. The method employs convolutional neural networks (CNNs) for classification at its core and comprises the following pre-processing steps: arterial wall segmentation and an OCT-specific (depth-resolved) transformation, followed by a post-processing step based on a majority vote over classifications. The key step is the OCT-specific transformation, which is based on estimating the attenuation coefficient at every pixel of the OCT image. The dataset used for training and testing consisted of 183 images from 33 patients, in which four different plaque types were delineated. The method was evaluated by cross-validation. The mean values of accuracy, sensitivity and specificity were 74.73%, 87.78%, and 61.45%, respectively, when classifying A-lines into plaque and normal. When plaque A-lines were classified into fibrolipidic and fibrocalcific, the overall accuracy was 83.47% for A-lines of OCT-specific transformed images and 74.94% for A-lines of the original images. This large improvement in accuracy indicates the advantage of using attenuation coefficients when characterizing plaque types. The proposed automatic deep-learning pipeline constitutes a positive contribution to the accurate classification of A-lines in intravascular OCT images.
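As an illustration of a depth-resolved attenuation transform, here is a minimal sketch of one common per-pixel estimator, mu[z] ~ I[z] / (2 * dz * sum of I below z); the paper's exact transformation and parameters are not given in this abstract, so the pixel size and the estimator choice below are assumptions.

```python
# Minimal sketch: convert an OCT A-line of intensities into per-pixel attenuation coefficients.
import numpy as np

def attenuation_transform(a_line: np.ndarray, pixel_size_mm: float = 0.005) -> np.ndarray:
    """Depth-resolved attenuation estimate for a single A-line (illustrative)."""
    a_line = a_line.astype(float)
    tail_sums = np.cumsum(a_line[::-1])[::-1] - a_line   # sum of intensities below each pixel
    eps = 1e-12                                          # avoid division by zero at the bottom
    return a_line / (2.0 * pixel_size_mm * (tail_sums + eps))

# Toy A-line: exponentially decaying signal with depth.
depth = np.arange(512)
a_line = np.exp(-0.01 * depth)
mu = attenuation_transform(a_line)
print(mu[:3])
```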


2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches to identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help us understand differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous data sets of a few dozen participants. More recently, larger brain imaging data sets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in the classification performance of larger functional brain imaging data sets, but it still lacks diagnostic insight into the underlying brain mechanisms associated with disorders; moreover, a related challenge involves providing more clinically relevant explanations from the neural features that inform classification. Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using such techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and typical reader children. Results: Our results show accurate classification of developmental dyslexia (94.8%) from brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group and brain regions associated with strategic control and attention processes for the typical reader group). Conclusions: Our visual explanations of deep learning models turn the accurate yet opaque conclusions of the models into evidence for the condition being studied.
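The abstract does not name the two visualization techniques, so the following is a minimal, hedged sketch of one representative approach, a Grad-CAM-style class activation map on a toy 3D CNN, showing how class-level explanations can be pulled from the last convolutional layer.

```python
# Minimal Grad-CAM-style sketch on a toy 3D CNN (illustrative stand-in, not the authors' model).
import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv3d(8, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 2))

volume = torch.randn(1, 1, 16, 16, 16, requires_grad=True)   # toy brain-like volume
feature_maps = conv(volume)
feature_maps.retain_grad()
score = head(feature_maps)[0, 1]                              # score of the class of interest
score.backward()

# Channel weights = global-average-pooled gradients; CAM = weighted sum of feature maps.
weights = feature_maps.grad.mean(dim=(2, 3, 4), keepdim=True)
cam = torch.relu((weights * feature_maps).sum(dim=1)).detach()
print(cam.shape)                                              # (1, 16, 16, 16)
```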


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Na Yao ◽  
Fuchuan Ni ◽  
Ziyan Wang ◽  
Jun Luo ◽  
Wing-Kin Sung ◽  
...  

Abstract Background: Peach diseases can cause severe yield reduction and decreased quality in peach production. Rapid and accurate detection and identification of peach diseases is therefore of great importance. Deep learning has been applied to detect peach diseases from imaging data. However, peach disease image data are difficult to collect and the samples are imbalanced, and popular deep networks perform poorly on this problem. Results: This paper proposes an improved Xception network, named L2MXception, which incorporates a regularization term combining the L2-norm and the mean. On the collected peach disease image dataset, results from seven mainstream deep learning models were compared in detail, and an improved loss function integrating the L2-norm and mean regularization term (L2M Loss) was introduced. Experiments showed that the Xception model with the L2M Loss outperformed the current best method for peach disease prediction. Compared to the original Xception model, the validation accuracy of L2MXception reached 93.85%, an increase of 28.48%. Conclusions: The proposed L2MXception network may have great potential for early identification of peach diseases.
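To illustrate the general shape of a loss combining cross-entropy with L2-norm and mean terms over the weights, in the spirit of the L2M Loss, here is a minimal sketch; the exact formulation and coefficients are not given in the abstract, so lambda_l2 and lambda_mean below are illustrative assumptions.

```python
# Minimal sketch of a cross-entropy loss with L2-norm and mean regularization terms.
import torch
import torch.nn as nn

def l2m_loss(logits, targets, model, lambda_l2=1e-4, lambda_mean=1e-4):
    ce = nn.functional.cross_entropy(logits, targets)
    params = torch.cat([p.flatten() for p in model.parameters()])
    return ce + lambda_l2 * params.pow(2).sum() + lambda_mean * params.mean().abs()

# Toy usage with a stand-in classifier (the Xception backbone is omitted for brevity).
model = nn.Linear(32, 7)                        # e.g. 7 hypothetical peach disease classes
x, y = torch.randn(16, 32), torch.randint(0, 7, (16,))
loss = l2m_loss(model(x), y, model)
loss.backward()
```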

