Large-Scale Brain Functional Network Integration for Discrimination of Autism Using a 3-D Deep Learning Model

2021 ◽  
Vol 15 ◽  
Author(s):  
Ming Yang ◽  
Menglin Cao ◽  
Yuhao Chen ◽  
Yanni Chen ◽  
Geng Fan ◽  
...  

Goal: Brain functional networks (BFNs) constructed from resting-state functional magnetic resonance imaging (rs-fMRI) have proven to be an effective way to understand aberrant functional connectivity in autism spectrum disorder (ASD) patients, yet it remains challenging to use these features as potential biomarkers for the discrimination of ASD. The purpose of this work is to classify ASD patients and normal controls (NCs) using BFNs derived from rs-fMRI. Methods: A deep learning framework is proposed that integrates a convolutional neural network (CNN) with a channel-wise attention mechanism to model both intra- and inter-BFN associations simultaneously for ASD diagnosis. We investigated the effect of each BFN on performance and performed inter-network connectivity analysis between each pair of BFNs. We compared the performance of our CNN model with several state-of-the-art algorithms that use functional connectivity features. Results: We collected 79 ASD patients and 105 NCs from the ABIDE-I dataset. The mean accuracy of our classification algorithm was 77.74% for ASD versus NCs. Conclusion: The proposed model is able to integrate information from multiple BFNs to improve the detection accuracy of ASD. Significance: These findings suggest that large-scale BFNs are promising as reliable biomarkers for the diagnosis of ASD.
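
The abstract does not give implementation details; as a hedged illustration, the sketch below (Python/PyTorch) shows how a channel-wise attention block in the squeeze-and-excitation style can reweight per-BFN channels of a 3-D volume before a small 3-D CNN classifier. All layer sizes and names (e.g., n_bfn, SEBlock3D) are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: channel-wise attention over brain functional network (BFN)
# channels, feeding a small 3-D CNN classifier. Hypothetical sizes throughout.
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Squeeze-and-excitation-style channel attention for 3-D volumes."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)          # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                            # x: (batch, C, D, H, W)
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                                 # excite: reweight BFN channels

n_bfn = 10                                           # assumed number of BFN maps
model = nn.Sequential(
    SEBlock3D(n_bfn),
    nn.Conv3d(n_bfn, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 2))                                # ASD vs. NC logits

logits = model(torch.randn(2, n_bfn, 61, 73, 61))    # MNI-like grid, batch of 2
print(logits.shape)                                  # torch.Size([2, 2])
```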

2018 ◽  
Author(s):  
Štefan Holiga ◽  
Joerg F. Hipp ◽  
Christopher H. Chatham ◽  
Pilar Garces ◽  
Will Spooren ◽  
...  

Abstract: Despite the high clinical burden, little is known about the pathophysiology underlying autism spectrum disorder (ASD). Recent resting-state functional magnetic resonance imaging (rs-fMRI) studies have found atypical synchronization of brain activity in ASD, but no consensus has been reached on the nature and clinical relevance of these alterations. Here we address these questions in the most comprehensive, large-scale effort to date, comprising an evaluation of four large ASD cohorts. We followed a strict exploration and replication procedure to identify core rs-fMRI functional connectivity (degree centrality) alterations associated with ASD as compared to typically developing (TD) controls (ASD: N = 841, TD: N = 984). We then tested for associations of these imaging phenotypes with clinical and demographic factors such as age, sex, medication status, and clinical symptom severity. We find reproducible patterns of ASD-associated functional hyper- and hypo-connectivity, with hypo-connectivity primarily restricted to sensory-motor regions and hyper-connectivity hubs predominantly located in prefrontal and parietal cortices. We establish shifts in between-network connectivity from outside to within the identified regions as a key driver of these abnormalities. The magnitude of these alterations is linked to core ASD symptoms related to communication and social interaction and is not affected by age, sex, or medication status. The identified brain functional alterations provide a reproducible pathophysiological phenotype underlying the diagnosis of ASD, reconciling previous divergent findings. The large effect sizes in standardized cohorts and the link to clinical symptoms emphasize the importance of the identified imaging alterations as potential treatment and stratification biomarkers for ASD.
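
For readers unfamiliar with the degree centrality measure used in this study, the short sketch below (Python/NumPy) shows one common way to compute region-wise degree centrality from rs-fMRI time series: correlate every pair of signals, threshold the correlation matrix, and count suprathreshold connections per node. The threshold value and data shapes are illustrative assumptions, not the study's exact pipeline.

```python
# Minimal sketch of (binarized) degree centrality from rs-fMRI time series.
# Assumed shapes and threshold; not the paper's exact preprocessing pipeline.
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 400))      # 200 time points x 400 regions/voxels

r = np.corrcoef(ts.T)                     # 400 x 400 Pearson correlation matrix
np.fill_diagonal(r, 0.0)                  # ignore self-connections

threshold = 0.25                          # assumed correlation threshold
degree_centrality = (r > threshold).sum(axis=1)   # connections per node

print(degree_centrality.shape, degree_centrality[:5])
```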


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Manjit Kaur ◽  
Vijay Kumar ◽  
Vaishali Yadav ◽  
Dilbag Singh ◽  
Naresh Kumar ◽  
...  

COVID-19 has affected the whole world drastically. A huge number of people have lost their lives due to this pandemic. Early detection of COVID-19 infection is helpful for treatment and quarantine. Therefore, many researchers have designed deep learning models for the early diagnosis of COVID-19-infected patients. However, deep learning models suffer from overfitting and hyperparameter-tuning issues. To overcome these issues, this paper proposes a metaheuristic-based deep COVID-19 screening model for X-ray images. A modified AlexNet architecture is used for feature extraction and classification of the input images, and the Strength Pareto evolutionary algorithm-II (SPEA-II) is used to tune its hyperparameters. The proposed model is tested on a four-class (i.e., COVID-19, tuberculosis, pneumonia, or healthy) dataset. Finally, comparisons are drawn between the proposed model and existing models.
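
SPEA-II itself is a multi-objective evolutionary algorithm with an external archive; the sketch below (Python) deliberately simplifies it to a single-objective evolutionary loop over two AlexNet-style hyperparameters, just to illustrate the mutate-evaluate-select cycle. The fitness function is a stub standing in for training and validation, and all names are hypothetical.

```python
# Simplified evolutionary hyperparameter search (illustrative only; SPEA-II
# proper is multi-objective and maintains an external archive, omitted here).
import random

random.seed(42)

def fitness(h):
    # Stub standing in for "train modified AlexNet, return validation accuracy".
    return -((h["lr"] - 1e-3) ** 2) * 1e6 - (h["dropout"] - 0.5) ** 2

def random_individual():
    return {"lr": 10 ** random.uniform(-5, -1), "dropout": random.uniform(0.1, 0.9)}

def mutate(h):
    child = dict(h)
    child["lr"] *= 10 ** random.gauss(0, 0.2)        # perturb in log-space
    child["dropout"] = min(0.9, max(0.1, child["dropout"] + random.gauss(0, 0.05)))
    return child

population = [random_individual() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)        # keep the fittest half
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(f"best hyperparameters: lr={best['lr']:.2e}, dropout={best['dropout']:.2f}")
```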


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zhenbo Lu ◽  
Wei Zhou ◽  
Shixiang Zhang ◽  
Chen Wang

Quick and accurate crash detection is important for saving lives and for improved traffic incident management. In this paper, a feature fusion-based deep learning framework was developed for the video-based urban traffic crash detection task, aiming to achieve a balance between detection speed and accuracy with limited computing resources. In this framework, a residual neural network (ResNet) combined with attention modules was proposed to extract crash-related appearance features from urban traffic videos (i.e., a crash appearance feature extractor), which were then fed to a spatiotemporal feature fusion model, Conv-LSTM (Convolutional Long Short-Term Memory), to simultaneously capture appearance (static) and motion (dynamic) crash features. The proposed model was trained on a set of video clips covering 330 crash and 342 noncrash events. Overall, the proposed model achieved an accuracy of 87.78% on the testing dataset and an acceptable detection speed (above 30 FPS on a GTX 1060). Thanks to the attention module, the proposed model captures localized appearance features of crashes (e.g., vehicle damage and fallen pedestrians) better than conventional convolutional neural networks. The Conv-LSTM module outperformed a conventional LSTM in capturing motion features of crashes, such as roadway congestion and pedestrians gathering after a crash. Compared to traditional motion-based crash detection models, the proposed model achieved higher detection accuracy. Moreover, it detects crashes much faster than other feature fusion-based models (e.g., C3D). The results show that the proposed model is a promising video-based urban traffic crash detection algorithm that could be used in practice in the future.
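
To make the Conv-LSTM fusion step concrete, here is a minimal sketch (Python/PyTorch) of a ConvLSTM cell: unlike a standard LSTM, its gates are convolutions, so the spatial layout of the per-frame appearance features is preserved while motion is accumulated over time. Channel counts and feature-map sizes are assumptions, not the paper's configuration.

```python
# Minimal ConvLSTM cell sketch (PyTorch): convolutional gates preserve the
# spatial layout of per-frame appearance features. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates (input, forget, cell, output).
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)              # convolutional cell update
        h = o * torch.tanh(c)
        return h, c

cell = ConvLSTMCell(in_ch=64, hid_ch=32)
h = torch.zeros(1, 32, 14, 14)
c = torch.zeros(1, 32, 14, 14)
frames = torch.randn(16, 1, 64, 14, 14)           # 16 per-frame feature maps
for x in frames:                                  # fuse appearance over time
    h, c = cell(x, (h, c))
print(h.shape)                                    # torch.Size([1, 32, 14, 14])
```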


2021 ◽  
Vol 14 (3) ◽  
pp. 1-28
Author(s):  
Abeer Al-Hyari ◽  
Hannah Szentimrey ◽  
Ahmed Shamli ◽  
Timothy Martin ◽  
Gary Gréwal ◽  
...  

The ability to accurately and efficiently estimate the routability of a circuit based on its placement is one of the most challenging and difficult tasks in the Field Programmable Gate Array (FPGA) flow. In this article, we present a novel deep learning framework based on a Convolutional Neural Network (CNN) model for predicting the routability of a placement. Since the performance of the CNN model depends strongly on the hyper-parameters selected for it, we perform exhaustive hyper-parameter tuning, which significantly improves the model's performance while avoiding overfitting. We also incorporate the deep learning model into a state-of-the-art placement tool and show how the model can be used to (1) avoid costly, but futile, place-and-route iterations, and (2) improve the placer's ability to produce routable placements for hard-to-route circuits using feedback based on the routability estimates it generates. The model is trained and evaluated on over 26K placement images derived from 372 benchmarks supplied by Xilinx Inc. We also explore several opportunities to further improve the reliability of the predictions made by the proposed DLRoute technique by splitting the model into two separate deep learning models for (a) global and (b) detailed placement during the optimization process. Experimental results show that the proposed framework achieves a routability prediction accuracy of 97% while exhibiting runtimes of only a few milliseconds.
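
As a hedged illustration of the core idea, the sketch below (Python/PyTorch) treats a rendered placement image as input to a small CNN that emits a routability probability. The image resolution, channel counts, and layer shapes are assumptions; the actual DLRoute architecture is not specified in this abstract.

```python
# Minimal sketch of a CNN routability classifier over placement images.
# Image size, channel counts, and layer shapes are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1))                      # logit: routable vs. unroutable

placement_batch = torch.randn(8, 3, 128, 128)      # 8 rendered placements
p_routable = torch.sigmoid(model(placement_batch)) # probabilities in [0, 1]
print(p_routable.squeeze(1))
```

A placer could query such a model after each optimization pass and skip the expensive route attempt whenever the predicted probability falls below a chosen cutoff.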


Deep learning has been attracting increasing attention from researchers for its ability to transform input data into effective representations through various learning algorithms. It therefore requires large and varied datasets to ensure good performance and generalization. However, manually labeling a dataset is a time-consuming and expensive process, which limits dataset size. Websites such as YouTube and Freesound provide large volumes of audio data along with their metadata. General-purpose audio tagging is one of the newly proposed tasks in DCASE and can give valuable insights into the classification of various acoustic sound events. The proposed work analyzes large-scale, imbalanced audio data for an audio tagging system. The baseline of the proposed audio tagging system is a convolutional neural network operating on Mel-frequency cepstral coefficients (MFCCs). The audio tagging system was developed on Google Colaboratory with a free Tesla K80 GPU using Keras, TensorFlow, and PyTorch. Experimental results show that the proposed audio tagging system achieves an average mean precision of 0.92.
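
As a hedged sketch of the baseline described here, the code below (Python, using librosa and Keras) extracts MFCC features from an audio clip and feeds them to a small CNN tagger. The tag count, MFCC settings, and layer sizes are illustrative assumptions rather than the system's exact configuration.

```python
# Minimal sketch: MFCC front-end plus a small CNN tagger (Keras). The number
# of tags, MFCC settings, and layer sizes are illustrative assumptions.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def mfcc_features(path, sr=22050, n_mfcc=40, frames=128):
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, time)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad/trim time axis
    return m[..., np.newaxis]                             # add channel dim

n_tags = 41                     # e.g., DCASE 2018 general-purpose tagging had 41
model = models.Sequential([
    layers.Input(shape=(40, 128, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(n_tags, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```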


2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Yu Sun ◽  
Yuan Liu ◽  
Guan Wang ◽  
Haiyan Zhang

Plant image identification has become an interdisciplinary focus in both botanical taxonomy and computer vision. The first plant image dataset collected by mobile phone in natural scenes is presented, containing 10,000 images of 100 ornamental plant species on the Beijing Forestry University campus. A 26-layer deep learning model consisting of 8 residual building blocks is designed for large-scale plant classification in natural environments. The proposed model achieves a recognition rate of 91.78% on the BJFU100 dataset, demonstrating that deep learning is a promising technology for smart forestry.
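
The abstract names residual building blocks as the model's core unit; the sketch below (Python/PyTorch) shows the standard pattern, where an identity shortcut is added to a two-convolution body. The exact 26-layer configuration is not given in the abstract, so channel counts here are assumptions.

```python
# Minimal sketch of the residual building block pattern (PyTorch). The paper's
# exact 26-layer configuration is not specified here; sizes are assumed.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch))

    def forward(self, x):
        return torch.relu(self.body(x) + x)   # identity shortcut

block = ResidualBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```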


Author(s):  
Chi-Chih Wang ◽  
Yu-Ching Chiu ◽  
Wei-Liang Chen ◽  
Tzu-Wei Yang ◽  
Ming-Chang Tsai ◽  
...  

Gastroesophageal reflux disease (GERD) is a common, highly prevalent disease whose endoscopic severity can be evaluated using the Los Angeles classification (LA grade). This paper proposes a deep learning model (GERD-VGGNet) that employs convolutional neural networks for the automatic classification and interpretation of routine GERD LA grades. The proposed model employs a data augmentation technique, a two-stage no-freezing fine-tuning policy, and an early stopping criterion, and as a result exhibits high generalizability. A dataset of images from 464 patients was used for model training and validation, and an additional 32 patients served as a test set to compare the accuracy of the model against that of our trainees. Experimental results demonstrate that the best model on the development set exhibited an overall accuracy of 99.2% (grade A–B), 100% (grade C–D), and 100% (normal group) using narrow-band imaging (NBI) endoscopy. On the test set, the proposed model achieved an accuracy of 87.9%, significantly higher than that of the trainees (75.0% and 65.6%). The proposed GERD-VGGNet model can assist in the automatic classification of GERD in conventional and NBI environments and thereby increase the accuracy of interpretation by inexperienced endoscopists.
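
As a hedged reading of the "two-stage no-freezing fine-tuning" idea, the sketch below (Python/PyTorch, torchvision VGG16) keeps every layer trainable throughout and simply lowers the learning rate in the second stage. The class grouping, learning rates, and schedule are assumptions, not the authors' recipe.

```python
# Minimal sketch of a two-stage "no-freezing" fine-tuning policy on VGG16
# (PyTorch). Stage lengths, learning rates, and the class head are assumed.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 3)   # assumed: normal, grade A-B, grade C-D

# Stage 1: all layers trainable (no freezing), with a small initial LR.
opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# ... train for several epochs, monitoring validation loss ...

# Stage 2: continue with a lower LR once validation loss plateaus; stop early
# when validation accuracy no longer improves (the early stopping criterion).
for g in opt.param_groups:
    g["lr"] = 1e-5
# ... resume training until the early stopping criterion triggers ...
```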


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Hao Wu ◽  
Zhi Zhou

Computer vision provides effective solutions to many imaging problems, including automatic image segmentation and classification. Trained models can be employed to tag images and identify objects automatically. In large-scale manufacturing, industrial cameras continuously capture images of components for several reasons. Due to limitations caused by motion, lens distortion, and noise, some defective images are captured, and these need to be identified and separated. One common way to address this problem is to inspect the images manually; however, this solution is not only very time-consuming but also inaccurate. This paper proposes a deep learning-based, artificially intelligent system that can quickly train on and identify faulty images. For this purpose, a pretrained convolutional neural network based on the PyTorch framework is employed to extract discriminating features from the dataset, which are then used for the classification task. To reduce the chances of overfitting, the proposed model also employs dropout to regularize the network. The experimental study reveals that the system can precisely classify normal and defective images with an accuracy of over 91%.
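
A hedged sketch of this pattern in PyTorch appears below: a pretrained backbone with a dropout-regularized binary head. The choice of ResNet-18 as backbone, the dropout rate, and all sizes are assumptions; the abstract only states that a pretrained CNN and dropout are used.

```python
# Minimal sketch: pretrained CNN backbone with a dropout-regularized head for
# normal-vs-defective classification (PyTorch). All sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")   # assumed backbone choice
backbone.fc = nn.Sequential(
    nn.Dropout(p=0.5),                # dropout to reduce overfitting
    nn.Linear(512, 2))                # normal vs. defective logits

images = torch.randn(4, 3, 224, 224)  # batch of industrial-camera frames
print(backbone(images).shape)         # torch.Size([4, 2])
```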


2020 ◽  
Vol 1 ◽  
pp. 6
Author(s):  
Thomas Haugland Johansen ◽  
Steffen Aagaard Sørensen

Foraminifera are single-celled marine organisms, which may have a planktic or benthic lifestyle. During their life cycle they construct shells consisting of one or more chambers, and these shells remain as fossils in marine sediments. Classifying and counting these fossils has become an important tool in, e.g., oceanography and climatology. Currently, the process of identifying and counting microfossils is performed manually using a microscope and is very time-consuming. Developing methods to automate this process is therefore considered important across a range of research fields. Here, the first steps towards a deep learning model that can detect and classify microscopic foraminifera are proposed. The proposed model is based on a VGG16 model that has been pretrained on the ImageNet dataset and adapted to the foraminifera task using transfer learning. Additionally, a novel image dataset consisting of microscopic foraminifera and sediments from the Barents Sea region is introduced.
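
The transfer-learning setup described here follows a standard recipe; a minimal sketch (Python/PyTorch) is shown below. Freezing the convolutional features and the two-class head are illustrative assumptions, since the abstract does not state which layers are retrained or how many classes are used.

```python
# Minimal transfer-learning sketch: ImageNet-pretrained VGG16 adapted to a
# foraminifera task (PyTorch). Class count and freezing policy are assumed.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False                   # keep pretrained conv features

model.classifier[6] = nn.Linear(4096, 2)      # e.g., foraminifera vs. sediment

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
print(model(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 2])
```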


Author(s):  
Ozal Yildirim ◽  
Ulas Baloglu ◽  
U Acharya

Sleep disorder is a symptom of many neurological diseases and may significantly affect the quality of daily life. Traditional assessment methods are time-consuming and involve the manual scoring of polysomnogram (PSG) signals obtained in a laboratory environment. Automated monitoring of sleep stages, however, can also help detect neurological disorders accurately. In this study, a flexible deep learning model is proposed that uses raw PSG signals: a one-dimensional convolutional neural network (1D-CNN) is developed on electroencephalogram (EEG) and electrooculogram (EOG) signals for the classification of sleep stages. The performance of the system is evaluated using two public databases (sleep-edf and sleep-edfx). The developed model yielded the highest accuracies of 98.06%, 94.64%, 92.36%, 91.22%, and 91.00% for two to six sleep classes, respectively, on the sleep-edf database. Further, the proposed model obtained the highest accuracies of 97.62%, 94.34%, 92.33%, 90.98%, and 89.54%, respectively, for the same two to six sleep classes on the sleep-edfx dataset. The developed deep learning model is ready for clinical usage and can be tested with big PSG data.
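
To make the raw-signal 1D-CNN idea concrete, the sketch below (Python/PyTorch) classifies a 30-second, two-channel (EEG + EOG) PSG epoch into a fixed number of sleep stages. The sampling rate, kernel sizes, and five-class setup are assumptions chosen for illustration, not the paper's exact architecture.

```python
# Minimal 1-D CNN sketch for sleep-stage classification from raw PSG epochs
# (PyTorch). Sampling rate, epoch length, and layer sizes are assumptions.
import torch
import torch.nn as nn

n_classes = 5                               # e.g., W, N1, N2, N3, REM
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=50, stride=6), nn.ReLU(), nn.MaxPool1d(8),
    nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, n_classes))

# One 30-second epoch at 100 Hz with two channels (EEG + EOG), batch of 4.
epochs = torch.randn(4, 2, 3000)
print(model(epochs).shape)                  # torch.Size([4, 5])
```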

