Online reconstruction-free single-pixel image classification

2019 ◽  
Vol 86 ◽  
pp. 28-37 ◽  
Author(s):  
Pedro Latorre-Carmona ◽  
V. Javier Traver ◽  
J. Salvador Sánchez ◽  
Enrique Tajahuerce

2021 ◽  
Author(s):  
Santosh Kumar ◽  
Ting Bu ◽  
He Zhang ◽  
Irwin Huang ◽  
Yu-Ping Huang

2021 ◽  
Vol 11 (18) ◽  
pp. 8694
Author(s):  
Mehak Maqbool Memon ◽  
Manzoor Ahmed Hashmani ◽  
Aisha Zahid Junejo ◽  
Syed Sajjad Rizvi ◽  
Adnan Ashraf Arain

Image classification of a visual scene based on visibility is significant due to the rise in readily available automated solutions. Currently, only two spectrums of image visibility are considered, i.e., dark and bright. However, normal environments also include semi-dark scenarios. Hence, the exclusive focus on visual extremes should be discarded so that image features can be extracted accurately. Fundamentally, there are two broad approaches to visual-scene-based image classification: machine learning (ML) methods and computer vision (CV) methods. In ML, insufficient data, sophisticated hardware requirements and inadequate classifier training time remain significant problems, and these techniques fail to classify visual-scene-based images with high accuracy. CV methods also have major issues: they provide some basic procedures that may assist in such classification, but, to the best of our knowledge, no CV algorithm exists to perform it, i.e., none accounts for semi-dark images in the first place. Moreover, these methods do not provide a well-defined protocol to calculate an image's content visibility and thereby classify images. One of the key approaches to calculating content visibility is backed by the HSL (hue, saturation, lightness) color model, which allows the visibility of a scene to be calculated from the lightness/luminance of a single pixel. Recognizing the high potential of the HSL color model, we propose a novel framework relying on the simple approach of statistically manipulating the pixel intensities of an entire image, represented in the HSL color model. The proposed algorithm, Relative Perceived Luminance Classification (RPLC), uses the HSL color model to identify the luminosity values of the entire image. Our findings show that the proposed method yields high classification accuracy (over 78%) with a small error rate, and that the computational complexity of RPLC is much lower than that of state-of-the-art ML algorithms.
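As a rough illustration of the per-pixel lightness statistic described above, the sketch below computes the mean HSL lightness of an RGB image and thresholds it into dark, semi-dark and bright classes. The function names, the three-way split and the threshold values (dark_thr, bright_thr) are illustrative assumptions; the paper's actual RPLC formula and decision rule are not reproduced here.

```python
import numpy as np

def mean_hsl_lightness(rgb):
    """Mean HSL lightness of an RGB image, scaled to [0, 1].

    rgb: uint8 array of shape (H, W, 3).
    HSL lightness of a pixel is (max(R, G, B) + min(R, G, B)) / 2.
    """
    rgb = rgb.astype(np.float32) / 255.0
    lightness = (rgb.max(axis=2) + rgb.min(axis=2)) / 2.0
    return float(lightness.mean())

def classify_visibility(rgb, dark_thr=0.25, bright_thr=0.6):
    """Classify a scene as dark / semi-dark / bright from mean lightness.

    The thresholds are illustrative placeholders, not the values used by RPLC.
    """
    l_mean = mean_hsl_lightness(rgb)
    if l_mean < dark_thr:
        return "dark"
    if l_mean < bright_thr:
        return "semi-dark"
    return "bright"
```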


2021 ◽  
Author(s):  
Antonio Lorente Mur ◽  
Nicolas Ducros ◽  
Françoise Peyrin ◽  
Pierre Leclerc

2020 ◽  
Vol 79 (9) ◽  
pp. 781-791
Author(s):  
V. О. Gorokhovatskyi ◽  
I. S. Tvoroshenko ◽  
N. V. Vlasenko

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for the case where the degradation parameters of the degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
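A minimal sketch, assuming a PyTorch implementation, of the two-input idea in the abstract: features from a small CNN over the degraded image are concatenated with the (known or estimated) degradation parameter before the final classification layer. The architecture, layer sizes and the class name TwoInputClassifier are illustrative assumptions, not the network from the paper.

```python
import torch
import torch.nn as nn

class TwoInputClassifier(nn.Module):
    """Classifier that takes a degraded image and a degradation parameter."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Small convolutional backbone for the degraded image (illustrative).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (N, 64, 1, 1)
        )
        # Head sees image features plus one scalar degradation parameter.
        self.classifier = nn.Linear(64 + 1, num_classes)

    def forward(self, image, degradation_param):
        # image: (N, 3, H, W); degradation_param: (N, 1), e.g. a noise level.
        feats = self.features(image).flatten(1)           # (N, 64)
        x = torch.cat([feats, degradation_param], dim=1)  # (N, 65)
        return self.classifier(x)

# Usage with random data (shapes only; values are arbitrary).
model = TwoInputClassifier(num_classes=10)
images = torch.randn(4, 3, 32, 32)
sigma = torch.rand(4, 1)          # known or estimated degradation level
logits = model(images, sigma)     # (4, 10)
```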

