A NOVEL DESIGN METHOD FOR DIRECTIONAL SELECTION BASED ON 2-DIMENSIONAL COMPLEX WAVELET PACKET TRANSFORM

Author(s):
TAKESHI KATO, ZHONG ZHANG, HIROSHI TODA, TAKASHI IMAMURA, TETSUO MIYAKE

In this paper, we propose a design method for directional selection in the two-dimensional complex wavelet packet transform (2D-CWPT). Current two-dimensional complex discrete wavelet transforms (2D-CDWTs) can extract directional components from images, but the number of directions is small, and both the directions and the resolutions are fixed, so the current 2D-CDWTs are not flexible enough. In this study, we propose a new design method for directional filters that can detect the desired directional components. Furthermore, flexible directional selection is achieved by adding these directional filters to the 2D-CWPT. Finally, the proposed method is applied to defect detection in semiconductor wafer circuits, and an encouraging result is obtained.
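For orientation, the following sketch is not the authors' 2D-CWPT; it only illustrates, with the standard real 2D wavelet packet transform from the PyWavelets package, how packet subbands of an image can be grouped by direction. A real separable transform cannot separate +45 from -45 degree components, which is one motivation for complex transforms with designable directional filters. The grouping rule and the wafer stand-in image are assumptions made purely for illustration.

    # Sketch: grouping 2D wavelet packet subbands by orientation (PyWavelets).
    # This is NOT the proposed 2D-CWPT; it is a plain real wavelet packet
    # decomposition used to illustrate directional subband selection.
    import numpy as np
    import pywt

    def directional_subbands(image, wavelet="db2", level=2):
        """Group the level-`level` packet subbands by the detail filters used.

        Packet node paths are strings over {'a', 'h', 'v', 'd'}, where 'a' is
        the approximation filter and 'h', 'v', 'd' are the horizontal,
        vertical and diagonal detail filters of the separable 2D DWT.
        """
        wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                                  mode="symmetric", maxlevel=level)
        groups = {"horizontal": [], "vertical": [], "diagonal_or_mixed": []}
        for node in wp.get_level(level):
            chars = set(node.path)
            if chars == {"a"}:                       # pure approximation node
                continue
            if chars <= {"a", "h"}:                  # only 'a'/'h' filters used
                groups["horizontal"].append(node.data)
            elif chars <= {"a", "v"}:                # only 'a'/'v' filters used
                groups["vertical"].append(node.data)
            else:                                    # contains 'd' or mixes h/v
                groups["diagonal_or_mixed"].append(node.data)
        return groups

    if __name__ == "__main__":
        img = np.random.rand(128, 128)               # stand-in for a wafer image
        for name, bands in directional_subbands(img).items():
            energy = sum(float(np.sum(b ** 2)) for b in bands)
            print(f"{name}: {len(bands)} subbands, energy {energy:.3f}")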

Author(s):
HIROSHI TODA, ZHONG ZHANG, TAKASHI IMAMURA

Useful theorems for achieving perfect translation invariance have already been proved, and based on these theorems, dual-tree complex discrete wavelet transforms with perfect translation invariance have been proposed. However, due to the complicated frequency divisions produced by wavelet packets, it is difficult to design complex wavelet packet transforms with perfect translation invariance. In this paper, based on the aforementioned theorems, novel complex wavelet packet transforms are designed to achieve perfect translation invariance. These complex wavelet packet transforms are based on the Meyer wavelet, which has the important characteristic that it can take a wide range of shapes. Two types of complex wavelet packet transform are designed with optimized Meyer wavelets: one based on a single Meyer wavelet, and the other based on a number of Meyer wavelets with different shapes to achieve good localization of the wavelet packets.
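The sketch below is not the proposed transform. Assuming PyWavelets' 'dmey' filter as a rough stand-in for a Meyer-type wavelet, it only illustrates the shift sensitivity that perfect translation invariance removes: the detail-coefficient energy of an ordinary critically sampled DWT changes when the input is merely shifted, whereas a perfectly translation-invariant transform would keep it constant.

    # Sketch: measuring the shift sensitivity of a standard real DWT.
    # 'dmey' is PyWavelets' FIR approximation of the Meyer wavelet and serves
    # here only as a stand-in; this is not the transform proposed in the paper.
    import numpy as np
    import pywt

    def detail_energy(signal, wavelet="dmey", level=3):
        """Total energy of the detail coefficients of a multilevel DWT."""
        coeffs = pywt.wavedec(signal, wavelet, mode="periodization", level=level)
        return sum(float(np.sum(c ** 2)) for c in coeffs[1:])

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.standard_normal(1024))          # smooth-ish test signal

    energies = [detail_energy(np.roll(x, s)) for s in range(8)]
    for s, e in enumerate(energies):
        print(f"shift {s}: detail energy {e:.4f}")
    # The spread of these values is a crude measure of shift sensitivity;
    # for a perfectly translation-invariant transform it would be zero.
    print("relative spread:", (max(energies) - min(energies)) / max(energies))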


Author(s):
PARUL SHAH, S. N. MERCHANT, U. B. DESAI

This paper presents two methods for the fusion of infrared (IR) and visible surveillance images. The first method combines the Curvelet Transform (CT) with the Discrete Wavelet Transform (DWT). Since wavelets do not represent long edges well while curvelets struggle with small features, our objective is to combine both to achieve better performance. The second approach uses the Discrete Wavelet Packet Transform (DWPT), which provides multiresolution analysis in the high-frequency bands as well and hence handles edges better. The performance of the proposed methods has been extensively tested on a number of multimodal surveillance images and compared with various existing transform-domain fusion methods. Experimental results show that the commonly used evaluation criteria such as entropy, gradient and contrast are not sufficient, as in some cases they are not consistent with the visual quality. The results also demonstrate that the Petrovic and Xydeas image fusion metric is a more appropriate criterion for the fusion of IR and visible images, since in all the tested fused images the visual quality agrees with the Petrovic and Xydeas metric evaluation. The analysis shows a significant increase in the quality of the fused images, both visually and quantitatively. The major achievement of the proposed fusion methods is their reduced artifacts, one of the most desired features for fusion in surveillance applications.
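The sketch below is not the proposed CT+DWT or DWPT scheme. Assuming PyWavelets and a pair of registered, equally sized visible/IR frames, it shows a generic wavelet-domain fusion rule (average the approximation coefficients, keep the larger-magnitude detail coefficients) only to make the transform-domain fusion pipeline referred to above concrete.

    # Sketch: generic DWT-domain fusion of two registered images (PyWavelets).
    # Not the paper's method; the fusion rule here is the common
    # "mean approximation, max-absolute detail" selection.
    import numpy as np
    import pywt

    def fuse_dwt(img_a, img_b, wavelet="db2", level=3):
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        fused = [0.5 * (ca[0] + cb[0])]               # average approximations
        for bands_a, bands_b in zip(ca[1:], cb[1:]):  # (cH, cV, cD) per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(bands_a, bands_b)))
        return pywt.waverec2(fused, wavelet)

    if __name__ == "__main__":
        vis = np.random.rand(256, 256)                # stand-in visible frame
        ir = np.random.rand(256, 256)                 # stand-in IR frame
        fused = fuse_dwt(vis, ir)
        print("fused image shape:", fused.shape)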


2004, Vol. 17 (S1), pp. 117-122
Author(s):
Zhou-min Xie, En-fu Wang, Guo-hong Zhang, Guo-cun Zhao, Xu-geng Chen
