multiscale transform
Recently Published Documents


TOTAL DOCUMENTS

24
(FIVE YEARS 10)

H-INDEX

4
(FIVE YEARS 2)

Author(s):  
Naoki Saito ◽  
Yiqun Shao

Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is important, and has attracted much recent research. The generalized Haar–Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs that generalizes the classical Haar and Walsh–Hadamard transforms. We propose the extended generalized Haar–Walsh transform (eGHWT), a generalization of the adapted time–frequency tilings of Thiele and Villemoes (1996). The eGHWT examines the efficiency not only of graph-domain partitions but also of "sequency-domain" partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve on the performance of the previous GHWT at a similar computational cost, $$O(N \log N)$$, where N is the number of nodes of an input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $$(1.5)^N$$ possible orthonormal bases in $$\mathbb{R}^N$$, the eGHWT best-basis algorithm can find a better one by searching through more than $$0.618\cdot(1.84)^N$$ possible orthonormal bases in $$\mathbb{R}^N$$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority using several examples, including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as a tensor product of graphs generated from their columns and rows, and demonstrate its effectiveness on applications such as image approximation.
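The best-basis search the abstract refers to follows the classical Coifman–Wickerhauser principle, which the GHWT/eGHWT generalize to graph partitions: walk the partition tree bottom-up and, at each node, keep either the node's own coefficient block or the union of its children's best bases, whichever is cheaper under a sparsity cost. A minimal sketch of that principle on a toy binary tree follows; the tree layout, cost function, and coefficient values here are illustrative assumptions, not the eGHWT implementation itself.

```python
def l1_cost(coeffs):
    """Sparsity-promoting cost: sum of absolute coefficient values."""
    return sum(abs(c) for c in coeffs)

def best_basis(node):
    """At each node, pick the cheaper of (a) keeping this node's own
    coefficient block, or (b) the union of its children's best bases."""
    coeffs, children = node
    own = l1_cost(coeffs)
    if not children:
        return own, [coeffs]
    left_cost, left_basis = best_basis(children[0])
    right_cost, right_basis = best_basis(children[1])
    if left_cost + right_cost < own:
        return left_cost + right_cost, left_basis + right_basis
    return own, [coeffs]

# Toy tree: a root coefficient block with two leaf children.
tree = ([4.0, -3.0, 2.0, -1.0],        # root block, cost 10.0
        [([5.0, 0.1], []),             # left leaf, cost 5.1
         ([0.2, 0.1], [])])            # right leaf, cost 0.3
cost, basis = best_basis(tree)
# The children's combined cost (5.4) beats the root's (10.0),
# so the selected basis consists of the two child blocks.
```

The counts quoted in the abstract ($$(1.5)^N$$ versus $$0.618\cdot(1.84)^N$$) measure how many such choosable bases the respective trees admit; the eGHWT enlarges the search space by also splitting in the sequency direction while keeping the same $$O(N \log N)$$ bottom-up sweep.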


2021 ◽  
Vol 60 (12) ◽  
Author(s):  
Zunlin Fan ◽  
Naiyang Guan ◽  
Zhiyuan Wang ◽  
Longfei Su ◽  
Jiangang Wu ◽  
...  

Author(s):  
Chengfang Zhang

Multifocus image fusion produces an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed in SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied in the low-pass fusion, while the high-pass bands are fused using the popular "max-absolute" rule as the activity level measurement. The fused image is finally obtained by performing inverse MST on the fused coefficients. The experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of image definition.
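The "max-absolute" rule applied to the high-pass bands is simple: at each position, keep the coefficient with the larger magnitude, since large high-pass coefficients indicate in-focus detail. A minimal numpy sketch, assuming two already-decomposed high-pass bands (the arrays below are illustrative; real inputs would come from the MST decomposition of the two source images):

```python
import numpy as np

def fuse_max_abs(band_a, band_b):
    """Element-wise 'max-absolute' fusion: pick the coefficient of
    larger absolute value at each position."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)

# Toy 2x2 high-pass bands from two hypothetical source images.
band_a = np.array([[0.9, -0.1],
                   [0.0,  2.0]])
band_b = np.array([[-0.2, 0.5],
                   [ 1.1, -0.3]])
fused = fuse_max_abs(band_a, band_b)
# fused == [[0.9, 0.5], [1.1, 2.0]]
```

The low-pass bands are handled differently (via CSR in the proposed method) precisely because the max-absolute rule, applied to slowly varying base-brightness coefficients, is what causes the contrast loss the paper sets out to fix.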


2020 ◽  
Vol 28 (17) ◽  
pp. 25293
Author(s):  
Qi Mao ◽  
Yunlong Zhu ◽  
Cixing Lv ◽  
Yao Lu ◽  
Xiaohui Yan ◽  
...  

2020 ◽  
Vol 508 ◽  
pp. 64-78 ◽  
Author(s):  
Jun Chen ◽  
Xuejiao Li ◽  
Linbo Luo ◽  
Xiaoguang Mei ◽  
Jiayi Ma
