Investigating the Upper-Bound Performance of Sparse-Coding-Based Spectral Reconstruction from RGB Images

2021 ◽  
Vol 2021 (29) ◽  
pp. 19-24
Author(s):  
Yi-Tun Lin ◽  
Graham D. Finlayson

In Spectral Reconstruction (SR), we recover hyperspectral images from their RGB counterparts. Most recent approaches are based on Deep Neural Networks (DNN), where millions of parameters are trained mainly to extract and utilize the contextual features in large image patches as part of the SR process. On the other hand, the leading Sparse Coding method 'A+', which is among the strongest point-based baselines against the DNNs, seeks to divide the RGB space into neighborhoods, where locally a simple linear regression (comprising roughly 10² parameters) suffices for SR. In this paper, we explore how the performance of Sparse Coding can be further advanced. We point out that in the original A+, the sparse dictionary used for neighborhood separation is optimized for the spectral data but applied in the projected RGB space. In turn, we demonstrate that if the local linear mapping is trained for each spectral neighborhood instead of each RGB neighborhood (and, theoretically, if we could recover each spectrum based on where it lies in spectral space), the Sparse Coding algorithm can actually perform much better than the leading DNN method. In effect, our result defines one potential (and very appealing) upper-bound performance of point-based SR.
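To make the A+-style idea above concrete, here is a minimal sketch of neighborhood-based spectral reconstruction: each RGB is assigned to a dictionary anchor, and one small ridge-regression map is fit per neighborhood. The function names, the anchor-assignment rule and the regularizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_neighborhood_maps(rgb, spectra, anchors, lam=1e-3):
    """Fit one ridge map (3 -> n_bands) per anchor neighborhood (illustrative)."""
    labels = np.argmax(rgb @ anchors.T, axis=1)        # assign each RGB to the anchor with largest correlation
    maps = []
    for a in range(anchors.shape[0]):
        X, Y = rgb[labels == a], spectra[labels == a]  # training pairs falling in this neighborhood
        # ridge solution M = (X^T X + lam*I)^{-1} X^T Y, i.e. roughly 3 x n_bands parameters
        M = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Y)
        maps.append(M)
    return np.stack(maps)

def reconstruct(rgb, anchors, maps):
    """Recover spectra by applying the linear map of each pixel's neighborhood."""
    labels = np.argmax(rgb @ anchors.T, axis=1)
    return np.einsum('nc,ncb->nb', rgb, maps[labels])
```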

2020 ◽  
Vol 2020 (1) ◽  
pp. 144-148
Author(s):  
Yi-Tun Lin

Spectral reconstruction (SR) aims to recover high-resolution spectra from RGB images. Recent developments, led by Convolutional Neural Networks (CNN), can already solve this problem with low errors. However, those leading methods do not explicitly ensure that the predicted spectra will re-integrate (with the underlying camera response functions) into the same RGB colours as the ones they are recovered from, namely the 'colour fidelity' problem. The purpose of this paper is to show, visually and quantitatively, how well (or how poorly) the existing SR models maintain colour fidelity. Three main approaches are evaluated: regression, sparse coding and CNN. Furthermore, aiming for a more realistic setting, the evaluations are done on real RGB images, and the 'end-of-pipe' images (i.e. rendered images shown to the end users) are provided for visual comparisons. It is shown that the state-of-the-art CNN-based model, despite its superior performance in spectral recovery, introduces significant colour shifts in the final images. Interestingly, the leading sparse coding method and the simple linear regression model, both of which are based on linear mapping, best preserve colour fidelity in SR.
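The colour-fidelity criterion above can be checked directly: re-integrate the predicted spectra with the camera response functions and measure how far the result drifts from the input RGBs. The small sketch below illustrates that check; the array shapes and the relative-error metric are assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def reintegrate_rgb(spectra, cam_response):
    """spectra: (N, n_bands); cam_response: (n_bands, 3) camera sensitivities."""
    return spectra @ cam_response

def colour_fidelity_error(rgb_in, spectra_pred, cam_response):
    """Mean relative RGB error between the input colours and the re-integrated ones."""
    rgb_back = reintegrate_rgb(spectra_pred, cam_response)
    return np.mean(np.linalg.norm(rgb_back - rgb_in, axis=1)
                   / (np.linalg.norm(rgb_in, axis=1) + 1e-12))
```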


2019 ◽  
Vol 2019 (1) ◽  
pp. 284-289
Author(s):  
Yi-Tun Lin ◽  
Graham D. Finlayson

In the spectral reconstruction (SR) problem, reflectance and/or radiance spectra are recovered from RGB images. Most of the prior art only attempts to solve this problem for fixed exposure conditions, and this limits the usefulness of these approaches (they can work inside the lab but not in the real world). In this paper, we seek methods that work well even when exposure is unknown or varies across an image, namely 'exposure invariance'. We begin by re-examining three main approaches - regression, sparse coding and Deep Neural Networks (DNN) - from a varying-exposure viewpoint. All three of these approaches are predominantly implemented assuming a fixed capture condition. However, the leading sparse coding approach (which is almost the best approach overall) is shown to be exposure-invariant, which teaches that exposure invariance need not come at the cost of poorer overall performance. This result in turn encouraged us to revisit the regression approach. Remarkably, we show that a very simple root-polynomial regression model, which by construction is exposure-invariant, provides competitive performance without any of the complexity inherent in sparse coding or DNNs.
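The exposure invariance of root-polynomial regression is easy to see in code: every term of the 2nd-order root-polynomial expansion scales linearly with an exposure factor k, so scaling the input RGBs by k scales the features (and hence any linear regression on them) by k as well. The snippet below is a self-contained check of that property; the feature list follows the standard 2nd-order expansion, but everything else is illustrative.

```python
import numpy as np

def root_poly_features(rgb):
    """2nd-order root-polynomial expansion of an (N, 3) array of RGBs."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(g * b), np.sqrt(r * b)], axis=1)

rgb = np.random.rand(4, 3)
k = 2.5                                           # a change in exposure
print(np.allclose(root_poly_features(k * rgb),
                  k * root_poly_features(rgb)))   # True: features scale with exposure
```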


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach compared to the ground truth. The deep learning approach had the highest precision of the three methods. The ratiometric approach was the most versatile and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase the processing speed while maintaining precision and accuracy.
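As a rough illustration of the ratiometric idea, Prussian blue-stained pixels are substantially bluer than the surrounding tissue, so a per-pixel channel ratio can be thresholded into a candidate CMH mask. The particular ratio and threshold below are hypothetical placeholders, not the values used in this study.

```python
import numpy as np

def ratiometric_cmh_mask(rgb_image, ratio_threshold=1.4):
    """rgb_image: (H, W, 3) float array in [0, 1]; returns a boolean CMH candidate mask."""
    r = rgb_image[..., 0].astype(float) + 1e-6    # avoid division by zero
    b = rgb_image[..., 2].astype(float)
    return (b / r) > ratio_threshold              # keep pixels that are sufficiently blue

def cmh_area_fraction(mask):
    """Fraction of the field of view flagged as Prussian blue-positive."""
    return float(mask.mean())
```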


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 158492-158502 ◽  
Author(s):  
Pengfei Wang ◽  
Fugui Qi ◽  
Miao Liu ◽  
Fulai Liang ◽  
Huijun Xue ◽  
...  

2016 ◽  
Vol 73 ◽  
pp. 56-70 ◽  
Author(s):  
Maryam Afzali ◽  
Aboozar Ghaffari ◽  
Emad Fatemizadeh ◽  
Hamid Soltanian-Zadeh

2009 ◽  
Vol 85 (99) ◽  
pp. 107-110 ◽  
Author(s):  
Slavko Simic

We give another global upper bound for Jensen's discrete inequality which is better than the existing ones. For instance, we determine new converses for the generalized A-G and G-H inequalities.
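For reference, Jensen's discrete inequality, whose converse (an upper bound on the gap) is the subject here, reads as follows; a "global" bound in this sense is one that does not depend on the weights.

```latex
% Jensen's discrete inequality: for a convex f, points x_i in its domain,
% and weights p_i >= 0 with \sum_i p_i = 1,
f\Big(\sum_{i=1}^{n} p_i x_i\Big) \le \sum_{i=1}^{n} p_i f(x_i).
% A global upper bound in this sense bounds the Jensen gap
% \sum_i p_i f(x_i) - f\big(\sum_i p_i x_i\big) independently of the p_i.
```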


2011 ◽  
Vol 15 (2) ◽  
pp. 1 ◽  
Author(s):  
Anurag Agarwal

In this study, a new Artificial Intelligence technique for non-linear mapping called Abductive Networks is used for two-group classification of firms. The results are compared with Neural Networks, another AI technique, which has been shown to perform better than traditional statistical techniques such as multivariate discriminant analysis and logit. In empirical tests, Abductive Networks perform as well as or better than Neural Networks on various measurement criteria such as Type I / Type II accuracy and Distance Between Centroids.


2007 ◽  
Vol 7 (1) ◽  
pp. 151-167 ◽  
Author(s):  
Dmitri B. Strukov ◽  
Konstantin K. Likharev

We have calculated the maximum useful bit density that may be achieved by the synergy of bad bit exclusion and advanced (BCH) error correcting codes in prospective crossbar nanoelectronic memories, as a function of the defective memory cell fraction. While our calculations are based on a particular ("CMOL") memory topology, with naturally segmented nanowires and an area-distributed nano/CMOS interface, for realistic parameters our results are also applicable to "global" crossbar memories with peripheral interfaces. The results indicate that crossbar memories with a nano/CMOS pitch ratio close to 1/3 (which is typical for the current, initial stage of nanoelectronics development) may surpass purely semiconductor memories in useful bit density if the fraction of nanodevice defects (stuck-on faults) is below ∼15%, even under a rather tough 30 ns upper bound on the total access time. Moreover, as the technology matures and the pitch ratio approaches an order of magnitude, crossbar memories may be far superior to the densest semiconductor memories by providing, e.g., a 1 Tbit/cm² density even for a plausible defect fraction of 2%. These highly encouraging results are much better than those reported in the literature earlier, including our own early work, mostly due to more advanced error correcting codes.
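As a back-of-the-envelope illustration of the trade-off described above (and emphatically not the authors' CMOL-specific analysis, which also includes bad bit exclusion), one can model a block of n cells protected by a t-error-correcting code under independent cell defects with probability p: the useful bit density then scales with the code rate times the probability that the block contains at most t defects. All parameters below are illustrative; the (255, 199) code with t = 7 is a standard binary BCH configuration.

```python
from math import comb

def block_usable_prob(n, t, p):
    """P(at most t of n cells are defective) under a binomial defect model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

def useful_bit_fraction(n, k, t, p):
    """Expected useful bits per raw cell: code rate times usable-block probability."""
    return (k / n) * block_usable_prob(n, t, p)

# Example: a BCH(255, 199) block (t = 7) at a 2% defect fraction.
print(useful_bit_fraction(255, 199, 7, 0.02))
```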

