Histograms of oriented mosaic gradients for snapshot spectral image description

2022 ◽  
Vol 183 ◽  
pp. 79-93
Author(s):  
Lulu Chen ◽  
Yongqiang Zhao ◽  
Jonathan Cheung-Wai Chan ◽  
Seong G. Kong
2009 ◽  
Vol 35 (10) ◽  
pp. 1278-1282
Author(s):  
Jia-Min LIU ◽  
Hai-Jun XIE ◽  
Qiang LIU ◽  
Sheng-Jun ZHU ◽  
Wei ZHANG

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1544
Author(s):  
Chunpeng Wang ◽  
Hongling Gao ◽  
Meihong Yang ◽  
Jian Li ◽  
Bin Ma ◽  
...  

Continuous orthogonal moments, which use continuous functions as kernel functions, are invariant to rotation and scaling and have developed rapidly in recent years. Among them, polar harmonic Fourier moments (PHFMs) offer superior performance and strong image description ability. To improve the noise robustness and image reconstruction performance of PHFMs, whose orders are restricted to integers, this paper extends them to fractional-order polar harmonic Fourier moments (FrPHFMs). First, the radial polynomials of integer-order PHFMs are modified to obtain fractional-order radial polynomials, from which the FrPHFMs are constructed; the strong reconstruction ability, orthogonality, and geometric invariance of the proposed FrPHFMs are then proven; finally, their performance is compared with that of integer-order PHFMs, fractional-order radial harmonic Fourier moments (FrRHFMs), fractional-order polar harmonic transforms (FrPHTs), and fractional-order Zernike moments (FrZMs). The experimental results show that the proposed FrPHFMs outperform integer-order PHFMs and the other fractional-order continuous orthogonal moments in image reconstruction and object recognition, and that they offer strong image description ability and good stability.
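As a rough illustration of the construction described in this abstract, the sketch below computes fractional-order polar-harmonic-style moments of a grayscale image over the unit disk. It assumes a sine-type integer-order radial kernel and the common fractional-order substitution T_n^t(r) = sqrt(t) * r^((t-1)/2) * T_n(r^t); the exact radial polynomials, normalization, and sampling used in the paper may differ, and all function names here are hypothetical.

```python
# Minimal sketch (not the authors' reference code) of fractional-order
# polar-harmonic-style moments computed over the unit disk.
import numpy as np

def radial_kernel(n, r, t=1.0):
    """Fractional-order radial kernel derived from an assumed integer-order one."""
    r_safe = np.clip(r, 1e-8, None)
    rt = r_safe ** t                                   # r -> r^t substitution
    base = np.sqrt(2.0 / rt) * np.sin((n + 1) * np.pi * rt)
    return np.sqrt(t) * r_safe ** ((t - 1) / 2.0) * base

def fr_moments(img, max_n=4, max_m=4, t=1.2):
    """Approximate moments by summing over pixels mapped into the unit disk."""
    H, W = img.shape
    y, x = np.mgrid[0:H, 0:W]
    xn = (2 * x - W + 1) / W                           # map pixel grid to [-1, 1]
    yn = (2 * y - H + 1) / H
    r = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    inside = r <= 1.0                                  # keep only pixels inside the disk
    moments = np.zeros((max_n + 1, 2 * max_m + 1), dtype=complex)
    for n in range(max_n + 1):
        Tn = radial_kernel(n, r, t)
        for m in range(-max_m, max_m + 1):
            basis = Tn * np.exp(-1j * m * theta)
            # 4/(pi*H*W) is an approximate area element; the paper's constant may differ
            moments[n, m + max_m] = (img * basis * inside).sum() * (4.0 / (np.pi * H * W))
    return moments

if __name__ == "__main__":
    img = np.random.rand(64, 64)                       # stand-in for a real grayscale image
    print(fr_moments(img, max_n=3, max_m=3, t=1.2).shape)
```

Setting t = 1 recovers the integer-order kernel, which is the sense in which the fractional-order moments generalize the integer-order ones.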


Author(s):  
Huimin Lu ◽  
Rui Yang ◽  
Zhenrong Deng ◽  
Yonglin Zhang ◽  
Guangwei Gao ◽  
...  

Chinese image description generation typically faces challenges such as single-feature extraction, a lack of global information, and insufficiently detailed descriptions of image content. To address these limitations, we propose a fuzzy attention-based DenseNet-BiLSTM Chinese image captioning method. The proposed method first improves the densely connected network to extract image features at different scales and to strengthen the model's ability to capture weak features. A bidirectional LSTM is used as the decoder to make better use of context information, and an improved fuzzy attention mechanism addresses the problem of aligning image features with contextual information. We evaluate the model on the AI Challenger dataset. Compared with other models, the proposed model achieves higher scores on objective quantitative metrics, including BLEU, METEOR, ROUGE-L, and CIDEr, and the generated description sentences accurately express the image content.
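For orientation only, the sketch below shows one way such an encoder-decoder could be wired up in PyTorch: a DenseNet-121 backbone produces spatial features, an additive attention scores each region against each word embedding, and a bidirectional LSTM decodes the attended features into per-step vocabulary logits. The fuzzy membership-based reweighting described in the abstract is approximated here by plain soft attention, and the class name AttnCaptioner, the layer sizes, and the scoring function are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AttnCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=1024):
        super().__init__()
        densenet = models.densenet121(weights=None)
        self.encoder = densenet.features                    # (B, 1024, H', W') feature maps
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.Linear(feat_dim + embed_dim, 1)      # additive attention score per region
        self.decoder = nn.LSTM(embed_dim + feat_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images)                        # (B, C, H, W)
        B = feats.size(0)
        regions = feats.flatten(2).transpose(1, 2)          # (B, R, C), R = H*W regions
        words = self.embed(captions)                        # (B, T, E)
        T, R = words.size(1), regions.size(1)
        # score every (word, region) pair, then softmax over regions
        w = words.unsqueeze(2).expand(B, T, R, -1)
        r = regions.unsqueeze(1).expand(B, T, R, -1)
        scores = self.attn(torch.cat([r, w], dim=-1)).squeeze(-1)    # (B, T, R)
        alpha = torch.softmax(scores, dim=-1)
        context = torch.einsum("btr,brc->btc", alpha, regions)       # attended features (B, T, C)
        out, _ = self.decoder(torch.cat([words, context], dim=-1))   # (B, T, 2*hidden_dim)
        return self.fc(out)                                          # per-step vocabulary logits

# toy usage: 2 images, captions of length 12, vocabulary of 5000 tokens
model = AttnCaptioner(vocab_size=5000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 12)))
print(logits.shape)   # torch.Size([2, 12, 5000])
```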


Author(s):  
Mehwish Iqbal ◽  
Muhammad Mohsin Riaz ◽  
Abdul Ghafoor ◽  
Attiq Ahmad ◽  
Syed Sohaib Ali

Author(s):  
Xueyang Fu ◽  
Wu Wang ◽  
Yue Huang ◽  
Xinghao Ding ◽  
John Paisley

2018 ◽  
Vol 312 ◽  
pp. 154-164 ◽  
Author(s):  
Pengjie Tang ◽  
Hanli Wang ◽  
Sam Kwong

2014 ◽  
Author(s):  
Yoonsuk Choi ◽  
Ershad Sharifahmadian ◽  
Shahram Latifi