Task-Driven Learned Hyperspectral Data Reduction Using End-to-End Supervised Deep Learning

2020, Vol 6 (12), pp. 132
Author(s): Mathé T. Zeegers, Daniël M. Pelt, Tristan van Leeuwen, Robert van Liere, Kees Joost Batenburg

An important challenge in hyperspectral imaging tasks is coping with the large number of spectral bins. Common spectral data reduction methods do not take prior knowledge about the task into account. Consequently, sparsely occurring features that may be essential for the imaging task may not be preserved in the data reduction step. Convolutional neural network (CNN) approaches are capable of learning the features relevant to a particular imaging task, but applying them directly to the full spectral input data is constrained by computational cost. We propose a novel supervised deep learning approach that combines data reduction and image analysis in an end-to-end architecture. In our approach, the neural network component that performs the reduction is trained such that the image features most relevant to the task are preserved in the reduction step. Results for two convolutional neural network architectures and two types of generated datasets show that the proposed Data Reduction CNN (DRCNN) approach can produce more accurate results than existing popular data reduction methods and can be used in a wide range of problem settings. The integration of knowledge about the task allows for greater data compression and higher accuracy compared to standard data reduction methods.
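The core idea of the reduction step can be sketched as a learnable linear map from many spectral bins to a few channels, applied at every pixel (analogous to a 1x1 convolution) and trained jointly with the downstream task network. The weights and shapes below are illustrative placeholders, not the authors' implementation.

```python
# Hypothetical sketch of task-driven spectral data reduction: each pixel's
# spectrum of B bins is mapped to R reduced channels (R << B) by a learned
# weight matrix, so task-relevant (possibly sparse) spectral features survive.

def reduce_spectrum(pixel_spectrum, weights):
    """Apply an R x B reduction matrix to a spectrum of B bins."""
    return [sum(w * s for w, s in zip(row, pixel_spectrum)) for row in weights]

# Example: 6 spectral bins reduced to 2 channels.
spectrum = [0.1, 0.9, 0.2, 0.8, 0.1, 0.1]
weights = [
    [0.0, 1.0, 0.0, 1.0, 0.0, 0.0],  # channel sensitive to bins 1 and 3
    [1.0, 0.0, 0.0, 0.0, 1.0, 1.0],  # channel pooling bins 0, 4, and 5
]
reduced = reduce_spectrum(spectrum, weights)
print(reduced)  # approximately [1.7, 0.3]
```

In the end-to-end setting, the entries of `weights` would be optimized by backpropagating the task loss through the reduction layer, rather than fixed in advance as here.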

2021, Vol 13 (2), pp. 274
Author(s): Guobiao Yao, Alper Yilmaz, Li Zhang, Fei Meng, Haibin Ai, et al.

Available stereo matching algorithms produce either a large number of false-positive matches or only a few true positives across oblique stereo images with a large baseline. This undesired result stems from the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected with K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, conjugate features are produced using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired from ground close-range and unmanned aerial vehicle (UAV) platforms verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets can generate high-quality descriptors; and (ii) the IHesAffNet can produce substantial affine-invariant corresponding features with reliable transform parameters.
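The Euclidean distance ratio metric named in the abstract can be sketched as follows: a query descriptor is matched to a candidate only if its nearest neighbor is sufficiently closer than the second nearest, which suppresses ambiguous (likely false-positive) matches. The 0.8 threshold is a common default, not necessarily the value used in the paper.

```python
# Minimal distance-ratio matching sketch for descriptor vectors.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_match(query, candidates, ratio=0.8):
    """Return the index of the matched candidate, or None if the ratio test fails."""
    dists = sorted((euclidean(query, c), i) for i, c in enumerate(candidates))
    (d1, i1), (d2, _) = dists[0], dists[1]
    return i1 if d1 < ratio * d2 else None

# Distinctive match: nearest neighbor is much closer than the runner-up.
print(ratio_match([1.0, 0.0], [[1.1, 0.0], [5.0, 5.0], [4.0, 0.0]]))  # 0
# Ambiguous case: two near-equal neighbors, so the match is rejected.
print(ratio_match([1.0, 0.0], [[1.1, 0.0], [0.9, 0.0]]))  # None
```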


Sensors, 2021, Vol 21 (3), pp. 742
Author(s): Canh Nguyen, Vasit Sagan, Matthew Maimaitiyiming, Maitiniyazi Maimaitijiang, Sourav Bhadra, et al.

Early detection of grapevine viral diseases is critical for early interventions that prevent the disease from spreading to the entire vineyard. Hyperspectral remote sensing can potentially detect and quantify viral diseases in a nondestructive manner. This study utilized hyperspectral imagery at the plant level to identify and classify grapevines inoculated with the newly discovered DNA virus grapevine vein-clearing virus (GVCV) at early asymptomatic stages. An experiment was set up at a test site at South Farm Research Center, Columbia, MO, USA (38.92° N, 92.28° W), with two grapevine groups, healthy and GVCV-infected, while other conditions were controlled. Images of each vine were captured by a SPECIM IQ 400–1000 nm hyperspectral sensor (Oulu, Finland). Hyperspectral images were calibrated and preprocessed to retain only grapevine pixels. A statistical approach was employed to discriminate between the reflectance spectra of healthy and GVCV-infected vines. Disease-centric vegetation indices (VIs) were established and explored in terms of their contribution to classification power. Pixel-wise (spectral features) classification was performed in parallel with image-wise (joint spatial-spectral features) classification within a framework involving deep learning architectures and traditional machine learning.
The results showed that: (1) the discriminative wavelength regions included the 900–940 nm range in the near-infrared (NIR) region in vines 30 days after sowing (DAS) and the entire visible (VIS) region of 400–700 nm in vines 90 DAS; (2) the normalized pheophytization index (NPQI), fluorescence ratio index 1 (FRI1), plant senescence reflectance index (PSRI), anthocyanin index (AntGitelson), and water stress and canopy temperature (WSCT) measures were the most discriminative indices; (3) the support vector machine (SVM) was effective in VI-wise classification with smaller feature spaces, while the random forest (RF) classifier performed better in pixel-wise and image-wise classification with larger feature spaces; and (4) the automated 3D convolutional neural network (3D-CNN) feature extractor provided promising results over the 2D convolutional neural network (2D-CNN) in learning features from hyperspectral data cubes with a limited number of samples.
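Many of the vegetation indices named above (e.g., NPQI, PSRI) are band ratios or normalized differences of reflectance at specific wavelengths. A generic normalized-difference form is sketched below; the two bands per pixel are illustrative placeholders, not the published index definitions.

```python
# Generic per-pixel vegetation-index computation from two reflectance bands.

def normalized_difference(band_a, band_b):
    """Normalized-difference index (a - b) / (a + b), guarding a zero denominator."""
    denom = band_a + band_b
    return (band_a - band_b) / denom if denom else 0.0

# Toy 2-pixel "image"; each pixel stores reflectance at two hypothetical wavelengths.
pixels = [(0.45, 0.15), (0.20, 0.30)]
vis = [normalized_difference(a, b) for a, b in pixels]
print(vis)  # first pixel positive, second negative
```

In a VI-wise classification such as the SVM experiment above, each pixel (or vine) would be represented by a vector of several such index values rather than the full spectrum.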


Sensors, 2020, Vol 20 (7), pp. 2085
Author(s): Rami M. Jomaa, Hassan Mathkour, Yakoub Bazi, Md Saiful Islam

Although fingerprint-based systems are among the most commonly used biometric systems, they suffer from a critical vulnerability to presentation attacks (PAs). Therefore, several approaches based on fingerprint biometrics have been developed to increase robustness against PAs. We propose an alternative approach based on the combination of fingerprint and electrocardiogram (ECG) signals. An ECG signal has advantageous characteristics that prevent replication, so combining a fingerprint with an ECG signal is a potentially interesting solution for reducing the impact of PAs in biometric systems. We also propose a novel end-to-end deep learning-based fusion neural architecture between a fingerprint and an ECG signal to improve PA detection in fingerprint biometrics. Our model uses state-of-the-art EfficientNets for generating a fingerprint feature representation. For the ECG, we investigate three different architectures based on fully connected layers (FC), a 1D convolutional neural network (1D-CNN), and a 2D convolutional neural network (2D-CNN). The 2D-CNN converts the ECG signal into an image and uses inverted MobileNet-v2 layers for feature generation. We evaluated the method on a multimodal dataset, that is, a customized fusion of the LivDet 2015 fingerprint dataset and ECG data from real subjects. Experimental results reveal that this architecture yields a better average classification accuracy compared to a single fingerprint modality.
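The 2D-CNN front end requires rearranging the 1-D ECG signal into a 2-D array so that image-style convolutional layers can consume it. The abstract does not specify the exact transform, so the sketch below shows one plausible option, a simple row-wise reshape with zero padding; it is an assumption, not the authors' method.

```python
# Hypothetical 1-D signal to 2-D "image" conversion for a 2D-CNN front end.

def signal_to_image(signal, width):
    """Reshape a 1-D signal into rows of `width` samples, zero-padding the tail."""
    padded = signal + [0.0] * (-len(signal) % width)
    return [padded[i:i + width] for i in range(0, len(padded), width)]

ecg = [0.1, 0.5, 1.2, 0.4, 0.0, -0.2, 0.1]
image = signal_to_image(ecg, 3)
print(image)  # [[0.1, 0.5, 1.2], [0.4, 0.0, -0.2], [0.1, 0.0, 0.0]]
```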


2020
Author(s): Zicheng Hu, Alice Tang, Jaiveer Singh, Sanchita Bhattacharya, Atul J. Butte

Cytometry technologies are essential tools for immunology research, providing high-throughput measurements of immune cells at the single-cell level. Traditional approaches to interpreting and using cytometry measurements include manual or automated gating to identify cell subsets from the cytometry data, which provides highly intuitive results but may lead to significant information loss, in that additional details in measured or correlated cell signals might be missed. In this study, we propose and test a deep convolutional neural network for analyzing cytometry data in an end-to-end fashion, allowing a direct association between raw cytometry data and the clinical outcome of interest. Using nine large CyTOF studies from the open-access ImmPort database, we demonstrated that the deep convolutional neural network model can accurately diagnose latent cytomegalovirus (CMV) infection in healthy individuals, even when using highly heterogeneous data from different studies. In addition, we developed a permutation-based method for interpreting the deep convolutional neural network model and identified a CD27− CD94+ CD8+ T cell population significantly associated with latent CMV infection. Finally, we provide a tutorial for creating, training, and interpreting the tailored deep learning model for cytometry data using Keras and TensorFlow (github.com/hzc363/DeepLearningCyTOF).
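The permutation-based interpretation idea can be sketched generically: shuffle one input feature across samples and measure how much the model's accuracy drops; features whose permutation hurts most are the ones the model relies on. The toy "model" below is a stand-in for the trained CNN, not the authors' code.

```python
# Permutation importance sketch: average accuracy drop when one feature is shuffled.
import random

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    rng = random.Random(seed)
    base = sum(model(x) == t for x, t in zip(X, y)) / len(y)
    drops = []
    for _ in range(trials):
        col = [x[feature] for x in X]   # fresh copy of the feature column
        rng.shuffle(col)                # break its association with the labels
        Xp = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
        acc = sum(model(x) == t for x, t in zip(Xp, y)) / len(y)
        drops.append(base - acc)
    return sum(drops) / trials

# Toy classifier that only looks at feature 0.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.8], [0.2, 0.2]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, feature=0))  # > 0: model relies on it
print(permutation_importance(model, X, y, feature=1))  # 0.0: feature is ignored
```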


2021, Vol 11 (1)
Author(s): Khaled Z. Abd-Elmoniem, Inas A. Yassine, Nader S. Metwalli, Ahmed Hamimi, Ronald Ouwerkerk, et al.

Regional soft tissue mechanical strain offers crucial insights into tissue mechanical function and vital indicators for related disorders. Tagging magnetic resonance imaging (tMRI) has been the standard method for assessing the mechanical characteristics of organs such as the heart, the liver, and the brain. However, constructing accurate artifact-free pixelwise strain maps at the native resolution of the tagged images has for decades been a challenging unsolved task. In this work, we developed an end-to-end deep learning framework for pixel-to-pixel mapping of the two-dimensional Eulerian principal strains $\varepsilon_{p1}$ and $\varepsilon_{p2}$ directly from 1-1 spatial modulation of magnetization (SPAMM) tMRI at native image resolution using convolutional neural networks (CNNs). Four different deep learning conditional generative adversarial network (cGAN) approaches were examined. Validations were performed using Monte Carlo computational model simulations and in-vivo datasets, and compared to the harmonic phase (HARP) method, a conventional and validated method for tMRI analysis, with six different filter settings. Principal strain maps of Monte Carlo tMRI simulations with various anatomical, functional, and imaging parameters demonstrate artifact-free, close agreement with the corresponding ground-truth maps. Correlations with the ground-truth strain maps were R = 0.90 and 0.92 for the best-proposed cGAN approach, compared to R = 0.12 and 0.73 for the best HARP method for $\varepsilon_{p1}$ and $\varepsilon_{p2}$, respectively. The proposed cGAN approach's error was substantially lower than that of the best HARP method at all strain ranges. In-vivo results are presented for both healthy subjects and patients with cardiac conditions (pulmonary hypertension).
Strain maps, obtained directly from their corresponding tagged MR images, depict for the first time anatomical, functional, and temporal details at pixelwise native high resolution with unprecedented clarity. This work demonstrates the feasibility of using the deep learning cGAN for direct myocardial and liver Eulerian strain mapping from tMRI at native image resolution with minimal artifacts.
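To make the regressed quantity concrete: for a 2-D displacement gradient, the (infinitesimal) strain tensor is e = (grad u + grad u^T)/2, and the principal strains are its eigenvalues. The network in the paper regresses these values per pixel directly from the tagged images; the closed form below is the classical definition for a single point, not the authors' code.

```python
# Principal strains of a 2-D infinitesimal strain tensor, from the
# displacement gradient components du/dx, du/dy, dv/dx, dv/dy.
import math

def principal_strains(du_dx, du_dy, dv_dx, dv_dy):
    """Eigenvalues of the symmetric 2-D strain tensor, largest first."""
    exx, eyy = du_dx, dv_dy
    exy = 0.5 * (du_dy + dv_dx)          # symmetrized shear component
    mean = 0.5 * (exx + eyy)
    radius = math.sqrt((0.5 * (exx - eyy)) ** 2 + exy ** 2)
    return mean + radius, mean - radius

# Pure-shear-like example: stretching in x, equal contraction in y.
p1, p2 = principal_strains(0.1, 0.0, 0.0, -0.1)
print(p1, p2)  # approximately 0.1 and -0.1
```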


2019, Vol 9 (1)
Author(s): Kanghan Oh, Young-Chul Chung, Ko Woon Kim, Woo-Sung Kim, Il-Seok Oh

Recently, deep-learning-based approaches have been proposed for the classification of neuroimaging data related to Alzheimer's disease (AD), and significant progress has been made. However, end-to-end learning, which is capable of maximizing the impact of deep learning, has yet to receive much attention due to the endemic challenge of neuroimaging: the scarcity of data. Thus, this study presents an approach meant to encourage the end-to-end learning of a volumetric convolutional neural network (CNN) model for four binary classification tasks (AD vs. normal control (NC), progressive mild cognitive impairment (pMCI) vs. NC, stable mild cognitive impairment (sMCI) vs. NC, and pMCI vs. sMCI) based on magnetic resonance imaging (MRI), and visualizes the outcomes of the CNNs' decisions without any human intervention. In the proposed approach, we use convolutional autoencoder (CAE)-based unsupervised learning for the AD vs. NC classification task, and supervised transfer learning is applied to solve the pMCI vs. sMCI classification task. To detect the biomarkers most relevant to AD and pMCI, a gradient-based visualization method that approximates the spatial influence on the CNN model's decision was applied. To validate the contributions of this study, we conducted experiments on the ADNI database, and the results demonstrated that the proposed approach achieved accuracies of 86.60% and 73.95% for the AD and pMCI classification tasks, respectively, outperforming other network models. In the visualization results, the temporal and parietal lobes were identified as key regions for classification.


Sensors, 2021, Vol 21 (11), pp. 3608
Author(s): Chiao-Sheng Wang, I-Hsi Kao, Jau-Woei Perng

Early fault diagnosis of motors is important, and many researchers have applied deep learning to motor diagnosis. This paper proposes a one-dimensional convolutional neural network for the diagnosis of permanent magnet synchronous motors. The model is weakly supervised and consists of multiple convolutional feature-extraction modules. Through the analysis of the torque and current signals of the motors, the motors can be diagnosed under a wide range of speeds, variable loads, and eccentricity effects. The advantage of the proposed method is that the feature-extraction modules can extract multiscale features from complex conditions. The number of training parameters was reduced to mitigate overfitting. Furthermore, a class feature map is proposed to automatically determine, via weakly supervised learning, which frequency components contribute to the classification. The experimental results reveal that the proposed model can effectively diagnose three different motor states: the healthy state, the demagnetization fault state, and the bearing fault state. In addition, the model can detect eccentricity effects. By combining the current and torque features, the classification accuracy of the proposed model reaches 98.85%, which is higher than that of classical machine-learning methods such as the k-nearest neighbor and support vector machine classifiers.
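The core operation inside a 1-D convolutional feature-extraction module can be sketched directly: slide a learned kernel over a signal (e.g., a motor current waveform) to produce a feature map. The kernel values here are illustrative; in the model they would be learned during training.

```python
# Valid-mode 1-D convolution (cross-correlation, as in CNN layers).

def conv1d(signal, kernel):
    """Slide `kernel` over `signal` and return the resulting feature map."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A difference kernel responds to abrupt changes, e.g., a fault transient.
signal = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
print(conv1d(signal, [-1.0, 1.0]))  # [0.0, 1.0, 0.0, -1.0, 0.0]
```

Stacking several such layers with kernels of different sizes is one way to obtain the multiscale features the abstract describes.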


2021, Vol 13 (13), pp. 2450
Author(s): Aaron E. Maxwell, Timothy A. Warner, Luis Andrés Guillén

Convolutional neural network (CNN)-based deep learning (DL) is a powerful, recently developed image classification approach. With origins in the computer vision and image processing communities, the accuracy assessment methods developed for CNN-based DL use a wide range of metrics that may be unfamiliar to the remote sensing (RS) community. To explore the differences between traditional RS and DL RS methods, we surveyed a random selection of 100 papers from the RS DL literature. The results show that RS DL studies have largely abandoned traditional RS accuracy assessment terminology, though some of the accuracy measures typically used in DL papers, most notably precision and recall, have direct equivalents in traditional RS terminology. Some of the DL accuracy terms have multiple names or are equivalent to another measure. In our sample, DL studies only rarely reported a complete confusion matrix, and when they did so, it was even rarer for the confusion matrix to estimate population properties. On the other hand, some DL studies are increasingly paying attention to the role of class prevalence in designing accuracy assessment approaches. DL studies that evaluate the decision boundary threshold over a range of values tend to use the precision-recall (P-R) curve and the associated area-under-the-curve (AUC) measures of average precision (AP) and mean average precision (mAP), rather than the traditional receiver operating characteristic (ROC) curve and its AUC. DL studies are also notable for testing the generalization of their models on entirely new datasets, including data from new areas, new acquisition times, or even new sensors.
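The DL metrics the survey highlights, precision and recall, map directly onto traditional RS terms: for the positive class, precision corresponds to user's accuracy and recall to producer's accuracy. A minimal computation from a binary confusion matrix:

```python
# Precision and recall from binary confusion-matrix counts.

def precision_recall(tp, fp, fn):
    """precision = TP/(TP+FP) (user's accuracy);
    recall = TP/(TP+FN) (producer's accuracy)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example counts: 80 true positives, 20 false positives, 10 false negatives.
p, r = precision_recall(tp=80, fp=20, fn=10)
print(p, r)  # 0.8 and roughly 0.89
```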


2020, Vol 117 (35), pp. 21373-21380
Author(s): Zicheng Hu, Alice Tang, Jaiveer Singh, Sanchita Bhattacharya, Atul J. Butte

Cytometry technologies are essential tools for immunology research, providing high-throughput measurements of immune cells at the single-cell level. Existing approaches to interpreting and using cytometry measurements include manual or automated gating to identify cell subsets from the cytometry data, which provides highly intuitive results but may lead to significant information loss, in that additional details in measured or correlated cell signals might be missed. In this study, we propose and test a deep convolutional neural network for analyzing cytometry data in an end-to-end fashion, allowing a direct association between raw cytometry data and the clinical outcome of interest. Using nine large cytometry by time-of-flight mass spectrometry (mass cytometry, CyTOF) studies from the open-access ImmPort database, we demonstrated that the deep convolutional neural network model can accurately diagnose latent cytomegalovirus (CMV) infection in healthy individuals, even when using highly heterogeneous data from different studies. In addition, we developed a permutation-based method for interpreting the deep convolutional neural network model. We were able to identify a CD27− CD94+ CD8+ T cell population significantly associated with latent CMV infection, confirming the findings of previous studies. Finally, we provide a tutorial for creating, training, and interpreting the tailored deep learning model for cytometry data using Keras and TensorFlow (https://github.com/hzc363/DeepLearningCyTOF).


2021, Vol 11 (12), pp. 3117-3122
Author(s): A. Sasidhar, M. S. Thanabal

Deep learning plays a key role in medical image processing. One application of deep learning models in this domain is bone fracture detection from X-ray images, where convolutional neural networks and their variants are widely used. The MURA dataset is commonly used in bone fracture detection studies, and this work uses it as well, specifically the humerus radiograph images. The humerus subset of MURA contains images both with and without fractures; fracture images containing metal implants were removed in this work. Experimental analysis was performed with two variants of the convolutional neural network: the DenseNet169 model and the VGG model. For DenseNet169, a model initialized with pre-trained ImageNet weights and one without them were both evaluated. Results obtained with these CNN variants are compared, and they show that the DenseNet169 model using pre-trained ImageNet weights performs better than the other two models.

