Assessing learned features of Deep Learning applied to EEG

Author(s):  
Dung Truong ◽  
Scott Makeig ◽  
Arnaud Delorme


Author(s):  
L. Chen ◽  
F. Rottensteiner ◽  
C. Heipke

Abstract. Matching images containing large viewpoint and viewing direction changes, which result in large perspective differences, is still a very challenging problem. Affine shape estimation, orientation assignment and feature description algorithms based on detected hand-crafted features have been shown to be error-prone. In this paper, affine shape estimation, orientation assignment and description of local features are achieved through deep learning. These three modules are trained with loss functions that optimize the matching performance of input patch pairs. The trained descriptors are first evaluated on the Brown dataset (Brown et al., 2011), a standard descriptor performance benchmark. The whole pipeline is then tested on images of small blocks acquired with an aerial penta camera, to compute image orientation. The results show that learned features perform significantly better than alternatives based on hand-crafted features.
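As an illustration of the descriptor-learning step, here is a minimal PyTorch sketch of a patch descriptor trained with a matching loss on patch pairs; the architecture, margin, and hard-negative mining below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a learned patch descriptor trained with a matching loss,
# in the spirit of the pipeline above. Architecture and loss details are
# illustrative assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps a 32x32 grayscale patch to a unit-length 128-D descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, dim)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return F.normalize(self.proj(f), dim=1)  # L2-normalized descriptor

def matching_loss(anchor, positive, margin=1.0):
    """Triplet-style loss: pull matching patch pairs together, push the
    hardest non-matching descriptor in the batch away by a margin."""
    dists = torch.cdist(anchor, positive)     # all pairwise distances
    pos = dists.diag()                        # distances of matching pairs
    # mask the diagonal, then take the hardest (closest) negative per row
    neg = (dists + 10.0 * torch.eye(len(anchor))).min(dim=1).values
    return F.relu(pos - neg + margin).mean()

# Toy usage: a batch of 16 matching patch pairs.
net = PatchDescriptor()
a, p = torch.randn(16, 1, 32, 32), torch.randn(16, 1, 32, 32)
loss = matching_loss(net(a), net(p))
loss.backward()
```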


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7799
Author(s):  
Xiao Cheng ◽  
Hao Zhang

In signal analysis and processing, underwater target recognition (UTR) is one of the most important technologies. Identifying target types simply and quickly with conventional methods under underwater acoustic conditions is quite a challenging task. The problem can be handled conveniently by a deep learning network (DLN), which yields better classification results than conventional methods. In this paper, a novel deep learning method with a hybrid routing network is considered, which can abstract the features of time-domain signals. The network comprises multiple routing structures and several options for the auxiliary branch, and exchanging the learned features of the different branches yields impressive results. The experiments show that the network has clear advantages in the underwater signal classification task.
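A hedged sketch of what a routing-style block with feature exchange between branches could look like for raw time-domain signals; the branch layout, kernel sizes, and class count are assumptions, not the paper's exact network.

```python
# Illustrative sketch (not the paper's exact network) of a routing-style
# block for raw time-domain signals: two parallel 1-D convolutional branches
# whose learned features are exchanged before the next stage.
import torch
import torch.nn as nn

class RoutingBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Conv1d(channels, channels, 9, padding=4), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU())
        self.mix = nn.Conv1d(2 * channels, channels, 1)  # exchange/fuse branch features

    def forward(self, x):
        a, b = self.branch_a(x), self.branch_b(x)
        return self.mix(torch.cat([a, b], dim=1)) + x    # residual fusion

class HybridRoutingNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.stem = nn.Conv1d(1, 32, 15, stride=4, padding=7)
        self.blocks = nn.Sequential(RoutingBlock(32), RoutingBlock(32))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, samples)
        h = self.blocks(self.stem(x))
        return self.head(h.mean(dim=-1))  # global average pooling over time

logits = HybridRoutingNet()(torch.randn(8, 1, 4096))  # 8 one-channel signals
```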


2018 ◽  
Vol 4 (1) ◽  
pp. 71-74 ◽  
Author(s):  
Jannis Hagenah ◽  
Mattias Heinrich ◽  
Floris Ernst

Abstract. Pre-operative planning of valve-sparing aortic root reconstruction relies on the automatic discrimination of healthy and pathologically dilated aortic roots. The classification is based on features extracted from 3D ultrasound images. In previously published approaches, handcrafted features showed limited classification accuracy, yet learning features from scratch is infeasible given the small data sets available for this specific problem. In this work, we propose transfer learning to make deep learning usable on these small data sets. For this purpose, we used the convolutional layers of the pretrained deep neural network VGG16 as a feature extractor. To simplify the problem, we took only two prominent horizontal slices through the aortic root into account, the coaptation plane and the commissure plane, stitching the features of both images together and training a Random Forest classifier on the resulting feature vectors. We evaluated this method on a data set of 48 images (24 healthy, 24 dilated) using 10-fold cross validation. Using the deep-learned features, we reached a classification accuracy of 84%, which clearly outperformed the handcrafted features (71% accuracy). Even though the VGG16 network was trained on RGB photos and for different classification tasks, the learned features are still relevant for ultrasound image analysis of aortic root pathology identification. Hence, transfer learning makes deep learning possible even on very small ultrasound data sets.
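The pipeline is concrete enough to sketch: the VGG16 convolutional layers as a frozen feature extractor, the feature vectors of the two planes stitched together, and a Random Forest evaluated with 10-fold cross validation. The input size, pooling, and forest settings below are assumptions.

```python
# Minimal sketch of the described pipeline. Image sizes and forest
# settings are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))            # conv layers only

def stitched_features(coaptation_imgs, commissure_imgs):
    """Extract VGG16 features for both planes and concatenate them."""
    f1 = extractor.predict(coaptation_imgs, verbose=0)  # (n, 512)
    f2 = extractor.predict(commissure_imgs, verbose=0)  # (n, 512)
    return np.concatenate([f1, f2], axis=1)             # (n, 1024)

# Toy stand-ins for the 48 grayscale slices replicated to 3 channels.
coapt = np.random.rand(48, 224, 224, 3).astype("float32")
commi = np.random.rand(48, 224, 224, 3).astype("float32")
labels = np.array([0] * 24 + [1] * 24)                  # healthy vs. dilated

X = stitched_features(coapt, commi)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=10)         # 10-fold CV as in the paper
print(scores.mean())
```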


Author(s):  
Prerna Mishra ◽  
Santosh Kumar ◽  
Mithilesh Kumar Chaube

Chart images exhibit significant variability that makes each image different from the others, even when they belong to the same class or category. Chart classification is a major challenge because each chart class varies in features, structure, and noise. Moreover, because the dissimilar features are not tied to the structure of the chart, modeling these variations for automatic chart recognition is a challenging task. In this article, we present a novel dissimilarity-based learning model for classifying similarly structured but diverse charts. Our approach jointly learns the features of both dissimilar and similar regions. The model is trained with an improved loss function, which fuses a structural variation-aware dissimilarity index with regularization parameters, making the model more attentive to dissimilar regions. The dissimilarity index enhances the discriminative power of the features learned not only from dissimilar regions but also from similar regions. Extensive comparative evaluations demonstrate that our approach significantly outperforms other benchmark methods, including both traditional and deep learning models, on publicly available datasets.
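The structural variation-aware dissimilarity index is specific to the paper; the sketch below only illustrates the general shape of such a fused loss, with a simple cosine dissimilarity standing in for the index.

```python
# Hedged sketch of a loss that fuses classification error with a
# dissimilarity term, as the abstract describes. The actual
# "structural variation-aware dissimilarity index" is paper-specific;
# a cosine dissimilarity between pooled region features stands in here.
import torch
import torch.nn.functional as F

def dissimilarity_index(fa, fb, eps=1e-6):
    """Toy stand-in: cosine dissimilarity between region feature vectors."""
    cos = F.cosine_similarity(fa.flatten(1), fb.flatten(1), dim=1)
    return (1.0 - cos).clamp(min=eps)

def fused_loss(logits, targets, feats_similar, feats_dissimilar, lam=0.5):
    """Cross-entropy plus a dissimilarity term that pushes the model to
    separate features of dissimilar chart regions while keeping features
    of similar regions close."""
    ce = F.cross_entropy(logits, targets)
    d_pos = dissimilarity_index(*feats_similar).mean()     # keep small
    d_neg = dissimilarity_index(*feats_dissimilar).mean()  # keep large
    return ce + lam * (d_pos - d_neg)

# Toy usage with random logits and region-feature pairs.
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
fs = (torch.randn(8, 64), torch.randn(8, 64))
fd = (torch.randn(8, 64), torch.randn(8, 64))
fused_loss(logits, targets, fs, fd).backward()
```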


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 204
Author(s):  
Yuchai Wan ◽  
Hongen Zhou ◽  
Xun Zhang

The Coronavirus disease 2019 (COVID-19) has become one of the major threats to the world. Computed tomography (CT) is an informative tool for the diagnosis of COVID-19 patients. Many deep learning approaches on CT images have been proposed and have achieved promising performance. However, due to the high complexity and non-transparency of deep models, explaining the diagnosis process is challenging, making it hard to evaluate whether such approaches are reliable. In this paper, we propose a visual interpretation architecture for explaining deep learning models and apply the architecture to COVID-19 diagnosis. Our architecture provides a comprehensive interpretation of the deep model from different perspectives, including the training trends, diagnostic performance, learned features, feature extractors, hidden layers, and the support regions for the diagnostic decision. With the interpretation architecture, researchers can compare and explain classification performance, gain insight into what the deep model learned from images, and obtain support for diagnostic decisions. Our deep model achieves diagnostic results of 94.75%, 93.22%, 96.69%, 97.27%, and 91.88% in the criteria of accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, which are 8.30%, 4.32%, 13.33%, 10.25%, and 6.19% higher than those of the compared traditional methods. The visualized features in 2-D and 3-D spaces provide the reasons for the superiority of our deep model. Our interpretation architecture allows researchers to understand more about how and why deep models work, and can be used as an interpretation solution for any deep learning model based on convolutional neural networks. It can also help deep learning methods take a step forward in the field of clinical COVID-19 diagnosis.
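One perspective of such an interpretation architecture can be sketched directly: projecting the model's learned features into 2-D for visual inspection. The backbone, data, and t-SNE settings below are stand-ins, not the paper's setup.

```python
# One piece of such an interpretation pipeline, sketched under assumptions:
# projecting a CNN's penultimate-layer features into 2-D with t-SNE so the
# class separation the model learned can be inspected visually.
import numpy as np
import torch
import torchvision.models as models
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

cnn = models.resnet18(weights=None)        # stand-in for the diagnostic CNN
cnn.fc = torch.nn.Identity()               # expose 512-D learned features
cnn.eval()

# Toy stand-ins for CT slices and their COVID / non-COVID labels.
images = torch.randn(64, 3, 224, 224)
labels = np.random.randint(0, 2, 64)

with torch.no_grad():
    feats = cnn(images).numpy()            # (64, 512) learned features

emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=12)
plt.title("Learned features in 2-D (t-SNE)")
plt.savefig("feature_space.png")
```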


2021 ◽  
Author(s):  
Jiajia Cao ◽  
Qin Zhou ◽  
Yi Chen ◽  
Lin Yin ◽  
Fei Zhang

The segmentation of the retinal vascular tree is a fundamental step in diagnosing ophthalmological and cardiovascular diseases. Most existing vessel segmentation methods based on deep learning give the learned features equal importance. If the highly imbalanced ratio between background and vessels is ignored (the majority of pixels belong to the background), the learned features are dominantly guided by the background, with relatively little influence from the vessels, often leading to low model sensitivity and prediction accuracy. Reducing model size is a further challenge. To solve these problems, we propose a mixed attention mechanism and asymmetric convolution encoder-decoder structure (MAAC) for retinal vessel segmentation. In MAAC, the mixed attention is designed to emphasize valid features and suppress invalid ones; it not only identifies information that helps recognize retinal vessels but also locates the position of the vessel. All square convolutions are replaced by asymmetric convolutions, because they are more robust to rotational distortions and small convolutions are better suited to extracting vessel features (given the thin characteristics of vessels). The use of asymmetric convolution reduces model parameters and improves the recognition of thin vessels. Experiments on the public datasets DRIVE, STARE, and CHASE_DB1 demonstrated that the proposed MAAC segments vessels more accurately, with global AUCs of 98.17%, 98.67%, and 98.53%, respectively. The mixed attention proposed in this study can be applied to other deep learning models for performance improvement without changing their network architectures.
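The asymmetric-convolution idea has a compact general form: each square k x k convolution is decomposed into a k x 1 and a 1 x k convolution. The sketch below shows this decomposition and the resulting parameter reduction; the exact MAAC block layout is the paper's own.

```python
# Sketch of the asymmetric-convolution idea: a square 3x3 convolution is
# replaced by a 3x1 followed by a 1x3 convolution, cutting parameters
# while staying sensitive to thin, elongated vessel structures.
import torch
import torch.nn as nn

class AsymmetricConv(nn.Module):
    """Replaces Conv2d(c_in, c_out, 3) with a 3x1 followed by a 1x3 conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.vertical = nn.Conv2d(c_in, c_out, kernel_size=(3, 1), padding=(1, 0))
        self.horizontal = nn.Conv2d(c_out, c_out, kernel_size=(1, 3), padding=(0, 1))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.horizontal(self.vertical(x)))

square = nn.Conv2d(64, 64, 3, padding=1)
asym = AsymmetricConv(64, 64)
print(sum(p.numel() for p in square.parameters()))  # 36,928 parameters
print(sum(p.numel() for p in asym.parameters()))    # 24,704 parameters
x = torch.randn(1, 64, 48, 48)
assert asym(x).shape == square(x).shape             # same spatial output
```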


Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3722 ◽  
Author(s):  
Nasrullah Nasrullah ◽  
Jun Sang ◽  
Mohammad S. Alam ◽  
Muhammad Mateen ◽  
Bin Cai ◽  
...  

Lung cancer is one of the major causes of cancer-related deaths due to its aggressive nature and delayed detection at advanced stages. Early detection of lung cancer is very important for the survival of an individual and remains a significant challenge. Generally, chest radiographs (X-ray) and computed tomography (CT) scans are used initially for the diagnosis of malignant nodules; however, the possible presence of benign nodules leads to erroneous decisions. At early stages, benign and malignant nodules bear a very close resemblance to each other. In this paper, a novel deep learning-based model with multiple strategies is proposed for the precise diagnosis of malignant nodules. Owing to the recent achievements of deep convolutional neural networks (CNNs) in image analysis, we used two deep three-dimensional (3D) customized mixed link network (CMixNet) architectures for lung nodule detection and classification, respectively. Nodule detection was performed through faster R-CNN on efficiently learned features from CMixNet and a U-Net-like encoder-decoder architecture. Classification of the nodules was performed through a gradient boosting machine (GBM) on the learned features from the designed 3D CMixNet structure. To reduce false positives and misdiagnoses due to different types of errors, the final decision was made in connection with physiological symptoms and clinical biomarkers. With the advent of the internet of things (IoT) and electro-medical technology, wireless body area networks (WBANs) provide continuous monitoring of patients, which helps in the diagnosis of chronic diseases, especially metastatic cancers. The deep learning model for nodule detection and classification, combined with clinical factors, helps reduce misdiagnoses and false positive (FP) results in early-stage lung cancer diagnosis. The proposed system was evaluated on the LIDC-IDRI dataset, achieving a sensitivity of 94% and a specificity of 91%, better results than those of existing methods.
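The classification stage can be sketched in isolation: a gradient boosting machine trained on deep feature vectors. The feature dimension, data, and GBM settings below are illustrative stand-ins for the CMixNet features.

```python
# Sketch of the classification stage only, under assumptions: a gradient
# boosting machine trained on feature vectors extracted from a 3-D CNN
# (random vectors stand in for the CMixNet features here).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Toy stand-ins: 256-D deep features per candidate nodule, benign/malignant.
feats = np.random.rand(500, 256)
labels = np.random.randint(0, 2, 500)
X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, test_size=0.2, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
gbm.fit(X_tr, y_tr)
pred = gbm.predict(X_te)

sensitivity = recall_score(y_te, pred, pos_label=1)   # true positive rate
specificity = recall_score(y_te, pred, pos_label=0)   # true negative rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```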


Cancers ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 53 ◽  
Author(s):  
Kelvin K. Wong ◽  
Robert Rostomily ◽  
Stephen T. C. Wong

This study aims to discover genes with prognostic potential for the survival of glioblastoma (GBM) patients in a patient group that has gone through standard-of-care treatments, including surgery and chemotherapy, using tumor gene expression at initial diagnosis before treatment. The Cancer Genome Atlas (TCGA) GBM gene expression data are used as inputs to build a deep multilayer perceptron network that predicts patient survival risk, using the partial likelihood as the loss function. Genes that are important to the model are identified by the input permutation method. Univariate and multivariate Cox survival models are used to assess the predictive value of the deep-learned features in addition to clinical, mutation, and methylation factors. The prediction performance of the deep learning method was compared to that of other machine learning methods, including the ridge, adaptive Lasso, and elastic net Cox regression models. Twenty-seven deep-learned features are extracted through deep learning to predict overall survival. The top 10 ranked genes with the highest impact on these features are related to glioblastoma stem cells, the stem cell niche environment, and treatment resistance mechanisms, including POSTN, TNR, BCAN, GAD1, TMSB15B, SCG3, PLA2G2A, NNMT, CHI3L1 and ELAVL4.
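The loss at the core of this setup, the negative Cox partial log-likelihood, can be sketched compactly; the network size and data below are illustrative assumptions (ties are ignored, Breslow-style).

```python
# Sketch of a partial-likelihood loss for training a multilayer perceptron
# on gene expression so its output is a survival risk score.
import torch
import torch.nn as nn

def neg_partial_log_likelihood(risk, time, event):
    """risk: (n,) predicted log-risk; time: (n,) survival times;
    event: (n,) 1 if death observed, 0 if censored."""
    order = torch.argsort(time, descending=True)   # sort so the risk set of
    risk, event = risk[order], event[order]        # patient i is rows 0..i
    log_cumsum = torch.logcumsumexp(risk, dim=0)   # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum()

mlp = nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 1))

# Toy stand-ins for TCGA expression of 1000 genes across 32 patients.
expr = torch.randn(32, 1000)
time = torch.rand(32) * 2000                       # survival times in days
event = torch.randint(0, 2, (32,)).float()         # death observed vs. censored

loss = neg_partial_log_likelihood(mlp(expr).squeeze(1), time, event)
loss.backward()
```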


Author(s):  
Zinah Mohsin Arkah ◽  
Dalya S. Al-Dulaimi ◽  
Ahlam R. Khekan

Skin cancer is one of the most dangerous diseases, and its early diagnosis can save many lives. Manual classification methods are time-consuming and costly, so deep learning has been proposed for the automated classification of skin cancer. Although deep learning has shown impressive performance in several medical imaging tasks, it requires a large number of images to achieve good performance. The skin cancer classification task struggles to provide deep learning with sufficient data due to the expensive annotation process and the experts it requires. One of the most widely used solutions is transfer learning from models pre-trained on the ImageNet dataset. However, the learned features of pre-trained models differ from skin cancer image features. To this end, we introduce a novel transfer learning approach: we first train the ImageNet pre-trained models (VGG, GoogleNet, and ResNet50) on a large number of unlabeled skin cancer images, and then train them on a small number of labeled skin images. Our experimental results show that the proposed method is effective, achieving an accuracy of 84% with ResNet50 when trained directly on the small labeled set and 93.7% when trained with the proposed approach.
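The abstract does not state how the unlabeled stage is trained; a common stand-in is a self-supervised pretext task such as rotation prediction before fine-tuning on the small labeled set, which is what this hedged sketch does with ResNet50.

```python
# Hedged sketch of the two-stage idea. The rotation-prediction pretext task
# is an assumption standing in for the paper's unlabeled training stage.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet50(weights="IMAGENET1K_V1")   # ImageNet pre-training
feat_dim = backbone.fc.in_features                    # 2048

# Stage 1: adapt to unlabeled skin images via rotation prediction (0/90/180/270).
backbone.fc = nn.Linear(feat_dim, 4)
opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)
unlabeled = torch.randn(16, 3, 224, 224)              # toy unlabeled batch
rot = torch.randint(0, 4, (16,))
imgs = torch.stack([torch.rot90(im, int(k), dims=(1, 2))
                    for im, k in zip(unlabeled, rot)])
loss = nn.functional.cross_entropy(backbone(imgs), rot)
loss.backward()
opt.step()

# Stage 2: swap the head and fine-tune on the small labeled set.
backbone.fc = nn.Linear(feat_dim, 2)                  # benign vs. malignant
labeled = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(backbone(labeled), labels)
loss.backward()
```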

