Deep Convolutional Neural Network-Assisted Feature Extraction for Diagnostic Discrimination and Feature Visualization in Pancreatic Ductal Adenocarcinoma (PDAC) versus Autoimmune Pancreatitis (AIP)

2020 ◽  
Vol 9 (12) ◽  
pp. 4013
Author(s):  
Sebastian Ziegelmayer ◽  
Georgios Kaissis ◽  
Felix Harder ◽  
Friederike Jungmann ◽  
Tamara Müller ◽  
...  

The differentiation of autoimmune pancreatitis (AIP) and pancreatic ductal adenocarcinoma (PDAC) poses a relevant diagnostic challenge and can lead to misdiagnosis and, consequently, poor patient outcomes. Recent studies have shown that radiomics-based models can achieve high sensitivity and specificity in discriminating between the two entities. However, radiomic features can only capture low-level representations of the input image. In contrast, convolutional neural networks (CNNs) can learn and extract more complex representations, which have been used for image classification with great success. In our retrospective observational study, we performed deep learning-based feature extraction from CT scans of both entities and compared the predictive value against traditional radiomic features. In total, 86 patients, 44 with AIP and 42 with PDAC, were analyzed. Whole-pancreas segmentation was performed automatically on CT scans acquired during the portal venous phase. The segmentation masks were manually checked and corrected where necessary. In total, 1411 radiomic features were extracted using PyRadiomics and 256 features (deep features) were extracted from an intermediate layer of a convolutional neural network (CNN). After feature selection and normalization, an extremely randomized trees algorithm was trained and tested using a two-fold shuffle-split cross-validation with a test sample of 20% (n = 18) to discriminate between AIP and PDAC. Feature maps were plotted and visual differences were noted. The machine learning (ML) model achieved a sensitivity, specificity, and ROC-AUC of 0.89 ± 0.11, 0.83 ± 0.06, and 0.90 ± 0.02 for the deep features and 0.72 ± 0.11, 0.78 ± 0.06, and 0.80 ± 0.01 for the radiomic features. Visualization of the feature maps indicated different activation patterns for AIP and PDAC. We successfully trained a machine learning model using deep feature extraction from CT images to differentiate between AIP and PDAC.
Compared to traditional radiomic features, deep features achieved higher sensitivity, specificity, and ROC-AUC. Visualization of deep features could further improve the diagnostic accuracy of the non-invasive differentiation of AIP and PDAC.
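The two-fold shuffle-split evaluation described above can be sketched as follows; the synthetic features and the nearest-centroid rule are illustrative stand-ins (not the study's code) for the actual deep features and the extremely randomized trees classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 256 deep features of the 86 patients
# (44 AIP = label 0, 42 PDAC = label 1); purely illustrative data.
X = rng.normal(size=(86, 256))
y = np.array([0] * 44 + [1] * 42)
X[y == 1] += 0.5  # give the classes some separation

def sensitivity_specificity(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Two-fold shuffle-split with a 20% test sample (n = 18), as in the study.
# A nearest-centroid rule stands in for the extremely randomized trees.
sens, spec = [], []
for fold in range(2):
    idx = rng.permutation(len(y))
    test, train = idx[:18], idx[18:]
    mu0 = X[train][y[train] == 0].mean(axis=0)
    mu1 = X[train][y[train] == 1].mean(axis=0)
    d0 = np.linalg.norm(X[test] - mu0, axis=1)
    d1 = np.linalg.norm(X[test] - mu1, axis=1)
    y_pred = (d1 < d0).astype(int)
    se, sp = sensitivity_specificity(y[test], y_pred)
    sens.append(se)
    spec.append(sp)

mean_sens, mean_spec = np.mean(sens), np.mean(spec)
```

Each shuffle draws a fresh random 20% test split, so the mean and standard deviation over folds characterize the variability reported as "± " in the abstract.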

2021 ◽  
Vol 9 ◽  
Author(s):  
Ashwini K ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang

Neonatal infants communicate with us through cries, and the cry signals have distinct patterns depending on their purpose. For audio signals, preprocessing, feature extraction, and feature selection traditionally require expert attention and considerable effort; deep learning techniques, by contrast, extract and select the most important features automatically, but require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The cry signals are transformed into spectrogram images using the short-time Fourier transform (STFT), and the spectrograms serve as input to a deep convolutional neural network (DCNN). The features obtained from the network are passed to a support vector machine (SVM) classifier, which classifies the cries. This work thus combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing SVM kernels, namely radial basis function (RBF), linear, and polynomial, the RBF kernel provides the highest accuracy of the infant cry classification system at 88.89%.
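The first step, turning a raw waveform into a magnitude spectrogram via the STFT, can be sketched as follows; the frame length, hop size, and synthetic "cry" signal are assumptions for demonstration, not the paper's settings:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via the short-time Fourier transform:
    slide a Hann-windowed frame over the signal and take the FFT
    magnitude of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

# A 1 s synthetic "cry" at 8 kHz: a 400 Hz tone with noise (illustrative only).
sr = 8000
t = np.arange(sr) / sr
cry = np.sin(2 * np.pi * 400 * t) + 0.1 * np.random.default_rng(0).normal(size=sr)
spec = stft_spectrogram(cry)
```

The resulting time-frequency array is what gets rendered as an image and fed to the DCNN; the 400 Hz tone shows up as a bright horizontal band near frequency bin 13 (400 Hz / 31.25 Hz per bin).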


2019 ◽  
Author(s):  
Georgios Kaissis ◽  
Sebastian Ziegelmayer ◽  
Fabian Lohöfer ◽  
Katja Steiger ◽  
Hana Algül ◽  
...  

Abstract
Purpose: Development of a supervised machine-learning model capable of predicting clinically relevant molecular subtypes of pancreatic ductal adenocarcinoma (PDAC) from diffusion-weighted-imaging-derived radiomic features.
Methods: The retrospective observational study assessed 55 surgical PDAC patients. Molecular subtypes were defined by immunohistochemical staining of KRT81. Tumors were manually segmented and 1606 radiomic features were extracted with PyRadiomics. A gradient-boosted-tree algorithm (XGBoost) was trained on 70% of the patients (N = 28) and tested on 30% (N = 17) to predict KRT81+ vs. KRT81− tumor subtypes. The average sensitivity, specificity, and ROC-AUC value were calculated. Chemotherapy response was assessed stratified by subtype. Radiomic feature importance was ranked.
Results: The mean ± STDEV sensitivity, specificity, and ROC-AUC were 0.90 ± 0.07, 0.92 ± 0.11, and 0.93 ± 0.07, respectively. Patients with a KRT81+ subtype experienced significantly diminished median overall survival compared to KRT81− patients (7.0 vs. 22.6 months, HR 1.44, log-rank test P < 0.001) and a significantly improved response to gemcitabine-based chemotherapy over FOLFIRINOX (10.14 vs. 3.8 months median overall survival, HR 0.85, P = 0.037), whereas KRT81− patients responded significantly better to FOLFIRINOX over gemcitabine-based treatment (30.8 vs. 13.4 months median overall survival, HR 0.88, P = 0.027).
Conclusions: The machine-learning-based analysis of radiomic features enables the prediction of PDAC subtypes, which are highly relevant for overall patient survival and response to chemotherapy.
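The reported ROC-AUC can be computed from held-out classifier scores as a rank statistic; the sketch below uses illustrative scores for a hypothetical 17-patient test set, not the study's data:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC as the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one,
    with ties counted as half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative predicted scores for 17 held-out patients (KRT81+ = 1).
y = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
s = np.array([.9, .8, .85, .7, .95, .6, .4,
              .3, .2, .5, .1, .35, .25, .15, .05, .45, .55])
auc = roc_auc(y, s)
```

This rank formulation is equivalent to integrating the ROC curve and is what most libraries compute under the hood, which makes it a convenient sanity check for reported AUC values.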


2021 ◽  
Author(s):  
Lakpa Dorje Tamang

In this paper, we propose a symmetric series convolutional neural network (SS-CNN), which is a novel deep convolutional neural network (DCNN)-based super-resolution (SR) technique for ultrasound medical imaging. The proposed model comprises two parts: a feature extraction network (FEN) and an up-sampling layer. In the FEN, the low-resolution (LR) counterpart of the ultrasound image passes through a symmetric series of two different DCNNs. The low-level feature maps obtained from the subsequent layers of both DCNNs are concatenated in a feed-forward manner, aiding in robust feature extraction to ensure high reconstruction quality. Subsequently, the final concatenated features serve as an input map to the latter 2D convolutional layers, where the textural information of the input image is connected via skip connections. The second part of the proposed model is a sub-pixel convolutional (SPC) layer, which up-samples the output of the FEN by multiplying it with a multi-dimensional kernel followed by a periodic shuffling operation to reconstruct a high-quality SR ultrasound image. We validate the performance of the SS-CNN on publicly available ultrasound image datasets. Experimental results show that the proposed model achieves superior reconstruction of ultrasound images compared to conventional methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), while providing competitive SR reconstruction time.
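The periodic shuffling step of the sub-pixel convolutional layer can be sketched in isolation: it rearranges an r²-times-wider channel stack into an r-times-larger spatial grid. The array sizes below are illustrative:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Periodic shuffling of a sub-pixel convolution layer: rearrange a
    (C*r*r, H, W) tensor into (C, H*r, W*r), so each group of r*r
    channels fills an r-by-r spatial block of the upscaled output."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 channels, 3x3 spatial -> 1 channel, 6x6 spatial (r = 2).
x = np.arange(4 * 3 * 3).reshape(4, 3, 3)
y = pixel_shuffle(x, 2)
```

Because the convolution runs at low resolution and only this cheap reindexing produces the high-resolution image, the SPC layer is much faster than convolving at full resolution, which is what makes the reported reconstruction time competitive.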


Author(s):  
Aires Da Conceicao ◽  
Sheshang D. Degadwala

A self-driving vehicle is a vehicle that can drive by itself, that is, without human interaction. This work shows how a computer can learn the art of driving using machine learning techniques, including lane-line tracking, robust feature extraction, and a convolutional neural network.


Author(s):  
Tianshu Wang ◽  
Yanpin Chao ◽  
Fangzhou Yin ◽  
Xichen Yang ◽  
Chenjun Hu ◽  
...  

Background: The manual identification of Fructus Crataegi processed products is inefficient and unreliable, so an efficient identification method is important. Objective: To efficiently identify Fructus Crataegi processed products with different odor characteristics, a new method based on an electronic nose and a convolutional neural network is proposed. Methods: First, the original odor of the Fructus Crataegi processed products is captured by the electronic nose and then preprocessed. Next, features are extracted from the preprocessed data through convolution and pooling layers. Results: The experimental results show that the proposed method achieves higher accuracy in identifying Fructus Crataegi processed products and is competitive with other machine learning-based methods. Conclusion: The method proposed in this paper is effective for the identification of Fructus Crataegi processed products.
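The convolution and pooling steps applied to a one-dimensional sensor response can be sketched as follows; the smoothing kernel and synthetic e-nose reading are illustrative assumptions, not the paper's trained filters:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as used in CNNs)."""
    k = len(kernel)
    return np.array([
        np.dot(signal[i:i + k], kernel)
        for i in range(len(signal) - k + 1)
    ])

def max_pool1d(x, size=2):
    """Non-overlapping max pooling: keep the largest value per window."""
    n = len(x) // size
    return x[: n * size].reshape(n, size).max(axis=1)

# Hypothetical e-nose reading: one sensor's response curve over 100 samples.
reading = np.sin(np.linspace(0, 3, 100))
smooth = conv1d(reading, np.ones(5) / 5)  # averaging kernel as a stand-in
feat = max_pool1d(smooth, 2)
```

In a trained network the kernels are learned rather than fixed, and several such conv-pool stages are stacked before a classifier head, but the data flow per stage is exactly this.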


2021 ◽  
Vol 160 (6) ◽  
pp. S-18
Author(s):  
Sanne A. Hoogenboom ◽  
Kamalakkannan Ravi ◽  
Megan M. Engels ◽  
Ismail Irmakci ◽  
Elif Keles ◽  
...  

IoT ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 222-235
Author(s):  
Guillaume Coiffier ◽  
Ghouthi Boukli Hacene ◽  
Vincent Gripon

Deep neural networks are state-of-the-art in a large number of machine learning challenges. However, to reach the best performance they require a huge pool of parameters. Indeed, typical deep convolutional architectures present an increasing number of feature maps as we go deeper in the network, whereas the spatial resolution of inputs is decreased through downsampling operations. This means that most of the parameters lie in the final layers, while a large portion of the computations is performed by a small fraction of the total parameters in the first layers. In an effort to use every parameter of a network to its maximum, we propose a new convolutional neural network architecture, called ThriftyNet. In ThriftyNet, only one convolutional layer is defined and used recursively, leading to a maximal parameter factorization. In complement, normalization, non-linearities, downsampling, and shortcut connections ensure sufficient expressivity of the model. ThriftyNet achieves competitive performance on a tiny parameter budget, exceeding 91% accuracy on CIFAR-10 with fewer than 40k parameters in total, 74.3% on CIFAR-100 with fewer than 600k parameters, and 67.1% on ImageNet ILSVRC 2012 with no more than 4.15M parameters. However, the proposed method typically requires more computations than existing counterparts.
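The core idea, one convolutional layer reused recursively so the parameter count stays fixed regardless of effective depth, can be sketched as follows; the channel count, input size, and naive convolution loop are illustrative, not the ThriftyNet reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared 3x3 filter bank, 16 -> 16 channels. Reusing it recursively
# keeps the parameter count fixed however many "layers" deep we go.
W = rng.normal(size=(16, 16, 3, 3)) * 0.01

def conv2d_same(x, w):
    """Naive 'same'-padded 2-D convolution, (C_in, H, W) -> (C_out, H, W)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(kh):
                for dx in range(kw):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

def thrifty_forward(x, w, iterations):
    """Apply the single shared conv + ReLU recursively."""
    for _ in range(iterations):
        x = np.maximum(conv2d_same(x, w), 0)
    return x

x = rng.normal(size=(16, 8, 8))
out = thrifty_forward(x, W, iterations=3)
n_params = W.size  # 16 * 16 * 3 * 3 = 2304, independent of `iterations`
```

Raising `iterations` deepens the computation without adding a single parameter, which is exactly the compute-for-parameters trade-off the abstract notes: thrifty on memory, heavier on FLOPs.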


2020 ◽  
Vol 6 ◽  
pp. e268 ◽  
Author(s):  
Abder-Rahman Ali ◽  
Jingpeng Li ◽  
Guang Yang ◽  
Sally Jane O’Shea

Skin lesion border irregularity is considered an important clinical feature for the early diagnosis of melanoma, representing the B feature in the ABCD rule. In this article we propose an automated approach for skin lesion border irregularity detection. The approach involves extracting the skin lesion from the image, detecting the lesion border, measuring the border irregularity, and training an ensemble of a convolutional neural network and a Gaussian naive Bayes classifier for the automatic detection of border irregularity, resulting in an objective decision on whether the skin lesion border is regular or irregular. The approach achieves outstanding results, obtaining an accuracy, sensitivity, specificity, and F-score of 93.6%, 100%, 92.5%, and 96.1%, respectively.
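One common way to quantify border irregularity is a compactness index, P²/(4πA), which equals 1 for a perfect circle and grows as the border becomes more irregular; this is an illustrative measure, not necessarily the one used in the article:

```python
import numpy as np

def border_irregularity(contour):
    """Compactness index P^2 / (4*pi*A) of a closed contour given as an
    (N, 2) array of points: 1.0 for a perfect circle, larger for a more
    irregular border."""
    # Perimeter: sum of edge lengths around the closed polygon.
    edges = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    perimeter = np.sqrt((edges ** 2).sum(axis=1)).sum()
    # Area via the shoelace formula.
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return perimeter ** 2 / (4 * np.pi * area)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
wobbly = np.column_stack([(1 + 0.3 * np.sin(8 * theta)) * np.cos(theta),
                          (1 + 0.3 * np.sin(8 * theta)) * np.sin(theta)])
ci = border_irregularity(circle)   # ~1.0
wi = border_irregularity(wobbly)   # clearly larger
```

A scalar like this can serve as one input feature alongside the CNN's learned features when an ensemble votes on regular vs. irregular.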

