A near infrared angioscope visualizing lipid within arterial vessel wall based on multi-spectral image in 1.7 μm wavelength band

Author(s):  
Takemi Hasegawa ◽  
Ichiro Sogawa ◽  
Hiroshi Suganuma
Author(s):  
Jun-Li Xu ◽  
Cecilia Riccioli ◽  
Ana Herrero-Langreo ◽  
Aoife Gowen

Deep learning (DL) has recently achieved considerable success in a wide range of applications, such as speech recognition, machine translation and visual recognition. This tutorial provides guidelines and useful strategies for applying DL techniques to the pixel-wise classification of spectral images. A one-dimensional convolutional neural network (1-D CNN) is used to extract features from the spectral domain, which are subsequently used for classification. In contrast to conventional classification methods for spectral images, which primarily examine the spectral context, a three-dimensional (3-D) CNN is applied to extract spatial and spectral features simultaneously and thereby enhance classification accuracy. This tutorial paper explains, in a stepwise manner, how to develop 1-D CNN and 3-D CNN models to discriminate spectral imaging data in a food authenticity context. The example image data consist of three varieties of puffed cereals imaged in the NIR range (943–1643 nm). The tutorial is presented in the MATLAB environment, and the scripts and dataset used are provided. Starting from spectral image pre-processing (background removal and spectral pre-treatment), the typical steps encountered in the development of CNN models are presented. The example dataset demonstrates that deep learning approaches can increase classification accuracy compared to conventional approaches, raising pixel-level accuracy on an independent test image from 92.33 % using partial least squares discriminant analysis to 99.4 % using the 3-D CNN model. The paper concludes with a discussion of challenges and suggestions for applying DL techniques to spectral image classification.
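As a rough illustration of the per-pixel spectral approach described above, the sketch below builds a small 1-D CNN that classifies individual pixel spectra. It is written in Python with Keras rather than the tutorial's MATLAB environment, and the band count, layer sizes and randomly generated training data are placeholder assumptions, not the paper's actual configuration.

# Minimal sketch (assumptions: Python/Keras instead of MATLAB, illustrative
# band count, layer sizes and synthetic labels) of a 1-D CNN that classifies
# each pixel of a spectral image from its spectrum alone.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

n_bands, n_classes = 125, 3          # e.g. NIR bands and three cereal varieties

model = models.Sequential([
    layers.Input(shape=(n_bands, 1)),              # one spectrum per pixel
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: (n_pixels, n_bands, 1) spectra drawn from labelled image regions,
# y: (n_pixels,) integer class labels. Random data stands in for real spectra.
X = np.random.rand(1000, n_bands, 1).astype("float32")
y = np.random.randint(0, n_classes, size=1000)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)

A 3-D CNN variant would instead take small spatial patches of shape (rows, cols, bands, 1) per pixel, so that spatial and spectral features are learned jointly, at the cost of a larger model and training set.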


Author(s):  
Hiroaki ISHIZAWA ◽  
Toyokazu YOTSUDA ◽  
Hiroyuki KANAI ◽  
Toyonori NISHIMATSU ◽  
Eiji TOBA

2014 ◽  
Vol 22 (2) ◽  
pp. 129-139 ◽  
Author(s):  
Susanne Wiklund Lindström ◽  
David Nilsson ◽  
Anders Nordin ◽  
Martin Nordwaeger ◽  
Ingemar Olofsson ◽  
...  

2019 ◽  
Vol 2019 (1) ◽  
pp. 300-303
Author(s):  
Prakhar Amba ◽  
David Alleysson

A hyperspectral camera records a data cube with two spatial dimensions and one spectral dimension. A Spectral Filter Array (SFA) overlaid on a single sensor enables a snapshot version of such a camera, but the acquired image is subsampled both spatially and spectrally, so a recovery method must be applied. In this paper we present a linear model of joint spectral and spatial recovery based on the Linear Minimum Mean Square Error (LMMSE) approach. The method learns a stable linear solution in which redundancy is controlled through the spatial neighborhood. We evaluate the results in simulation using Gaussian-shaped filter sensitivities on SFA mosaics of up to 9 filters, with sensitivities in both the visible and near-infrared (NIR) ranges. We show experimentally that, with large neighborhood sizes, the model accurately recovers spectra from the RAW images taken by such a camera. We also present recovered spectra of a Macbeth color chart from a Bayer SFA with three filters.
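The recovery idea can be sketched as a regularized least-squares fit of a linear operator mapping raw SFA neighborhoods to the full spectrum at the center pixel. The snippet below uses synthetic data and a single operator over a flattened neighborhood (in practice one operator per mosaic phase is learned), and the ridge-style solve standing in for the LMMSE estimator, as well as the filter, band and neighborhood sizes, are illustrative assumptions rather than the authors' implementation.

# Minimal sketch (assumptions: synthetic training pairs, ridge-regularized
# least squares in place of the LMMSE estimator, one operator for all mosaic
# phases) of learning a linear spatio-spectral recovery operator for an SFA camera.
import numpy as np

n_bands, patch = 31, 5                     # target spectral bands, neighborhood size
rng = np.random.default_rng(0)

# Simulated training pairs: raw SFA neighborhood (flattened) -> true center spectrum
n_samples = 20000
X = rng.normal(size=(n_samples, patch * patch))            # raw mosaic neighborhoods
W_true = rng.normal(size=(patch * patch, n_bands))
Y = X @ W_true + 0.01 * rng.normal(size=(n_samples, n_bands))  # target spectra

# Wiener/LMMSE-style closed form with ridge regularization:
#   W = (X^T X + lambda I)^(-1) X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Recovery: apply W to a new raw neighborhood to estimate the center-pixel spectrum
x_new = rng.normal(size=(1, patch * patch))
spectrum_est = x_new @ W                   # shape (1, n_bands)
print(spectrum_est.shape)

Larger neighborhoods give the operator more spatial redundancy to exploit, which is the mechanism behind the improved recovery reported for big neighborhood sizes.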

