Low Level Multispectral Palmprint Image Fusion for Large Scale Biometrics Authentication

Author(s):  
Dakshina Ranjan Kisku ◽  
Phalguni Gupta ◽  
Jamuna Kanta Sing ◽  
Massimo Tistarelli ◽  
C. Jinsong Hwang

Continuous biometric authentication is a process in which an installed biometric system continuously monitors and authenticates its users. Biometric systems are an attractive means of logging in to computers and network systems; however, they leave loopholes that must be closed in high-security zones. When a user has logged in to such a system by authenticating to the installed biometric device, he or she often takes short breaks, and in the meantime an impostor may attack the network or access the computer system until the genuine user logs out. It is therefore necessary to monitor the login session by continuously authenticating the user. To accomplish this, this chapter proposes a continuous biometric authentication system based on low-level fusion of multispectral palm images, where the fusion is performed by wavelet transformation and decomposition. To capture the palm characteristics, the fused image is convolved with a bank of Gabor wavelets. The resulting Gabor feature representation lies in a very high-dimensional space, so an ant colony optimization algorithm is applied to select a relevant, distinctive, and reduced feature set from the Gabor responses. Finally, the reduced feature set is used to train support vector machines, which perform the user recognition task. The system is evaluated on the CASIA multispectral palmprint database. The experimental results show that the system remains robust and encouraging when the classifiers are varied. A comparative study of the proposed system against a well-known method is also presented.
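As a rough illustration of the low-level fusion step, the sketch below fuses two registered palm images with a 2-D wavelet decomposition. The fusion rule (averaged approximation coefficients, maximum-magnitude detail coefficients) and the wavelet choice are assumptions for illustration only, since the abstract does not specify them.

```python
# Minimal sketch of low-level image fusion via 2-D wavelet decomposition.
# Hypothetical fusion rule: average the approximation band, keep the
# larger-magnitude detail coefficients.
import numpy as np
import pywt

def fuse_multispectral(img_a: np.ndarray, img_b: np.ndarray,
                       wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two registered palm images of identical shape."""
    coeffs_a = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    coeffs_b = pywt.wavedec2(img_b.astype(float), wavelet, level=level)

    fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]          # average approximation band
    for da, db in zip(coeffs_a[1:], coeffs_b[1:]):        # detail bands: keep larger magnitude
        fused.append(tuple(np.where(np.abs(ca) >= np.abs(cb), ca, cb)
                           for ca, cb in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    band_1 = np.random.rand(128, 128)   # placeholder for a registered palm image
    band_2 = np.random.rand(128, 128)
    fused_palm = fuse_multispectral(band_1, band_2)
    print(fused_palm.shape)
```

The fused image would then be passed to the Gabor filtering, feature selection, and SVM stages described above.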

2019 ◽  
Vol 20 (S25) ◽  
Author(s):  
Beiji Zou ◽  
Changlong Chen ◽  
Rongchang Zhao ◽  
Pingbo Ouyang ◽  
Chengzhang Zhu ◽  
...  

Abstract Background Glaucoma is an irreversible eye disease caused by optic nerve injury, and it therefore usually changes the structure of the optic nerve head (ONH). Clinically, ONH assessment based on fundus images is one of the most useful approaches to glaucoma detection. However, obtaining an effective representation for ONH assessment is challenging because the structural changes produce complex and mixed visual patterns. Method We propose a novel feature representation based on the Radon and wavelet transforms to capture these visual patterns. First, the Radon transform (RT) maps the fundus image into the Radon domain, where the spatial radial variations of the ONH are converted into a discrete signal describing the image's structural features. Second, the discrete wavelet transform (DWT) is applied to capture these differences and obtain a quantitative representation. Finally, principal component analysis (PCA) and a support vector machine (SVM) are used for dimensionality reduction and glaucoma detection. Results The proposed method achieves state-of-the-art detection performance on the RIMONE-r2 dataset, with an accuracy of 0.861 and an area under the curve (AUC) of 0.906. Conclusion The proposed method shows promise as an effective tool for large-scale glaucoma screening and can serve as a reference for the clinical diagnosis of glaucoma.
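A minimal sketch of the Radon-plus-DWT feature pipeline is given below; the projection angles, wavelet, and PCA dimensionality are placeholders, and the dataset loading and training loop are omitted.

```python
# Sketch of the Radon -> DWT -> PCA -> SVM pipeline described above.
# Parameter values are illustrative, not the paper's settings.
import numpy as np
import pywt
from skimage.transform import radon
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def radon_dwt_features(image: np.ndarray, n_angles: int = 180,
                       wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Project the ONH region into the Radon domain, then summarise each
    radial profile with its wavelet coefficients."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(image, theta=theta)          # shape: (n_detectors, n_angles)
    feats = []
    for profile in sinogram.T:                    # one projection per angle
        coeffs = pywt.wavedec(profile, wavelet, level=level)
        feats.extend(np.concatenate(coeffs))
    return np.asarray(feats)

# Hypothetical classifier: X is a stack of per-image feature vectors, y the labels.
clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
# clf.fit(X, y)
```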


2021 ◽  
Vol 13 (4) ◽  
pp. 683
Author(s):  
Lang Huyan ◽  
Yunpeng Bai ◽  
Ying Li ◽  
Dongmei Jiang ◽  
Yanning Zhang ◽  
...  

Onboard real-time object detection in remote sensing images is a crucial but challenging task in computation-constrained scenarios. The task not only requires excellent detection performance but also demands low time and space complexity from the algorithm. However, previous convolutional neural network (CNN) based object detectors for remote sensing images suffer from heavy computational cost, which hinders their deployment on satellites. Moreover, an onboard detector must handle objects at vastly different scales. To address these issues, we propose a lightweight one-stage multi-scale feature fusion detector called MSF-SNET for onboard real-time object detection in remote sensing images. Using the lightweight SNET as the backbone network reduces the number of parameters and the computational complexity. To strengthen the detection of small objects, three low-level features are extracted from the three stages of SNET. In the detection part, another three convolutional layers are designed to further extract deep features with rich semantic information for large-scale object detection. To improve detection accuracy, the deep features and low-level features are fused to enhance the feature representation. Extensive experiments and comprehensive evaluations on the openly available NWPU VHR-10 and DIOR datasets are conducted to evaluate the proposed method. Compared with other state-of-the-art detectors, the proposed detection framework has fewer parameters and fewer calculations while maintaining comparable accuracy.
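The sketch below shows a generic multi-scale feature-fusion block of the kind described here, in which deep, semantically rich features are upsampled and combined with low-level features before the detection heads. It is not the MSF-SNET implementation, and the channel sizes are placeholders.

```python
# Generic multi-scale feature-fusion block in PyTorch (illustrative, not MSF-SNET).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    def __init__(self, low_ch: int, deep_ch: int, out_ch: int = 128):
        super().__init__()
        self.proj_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        self.proj_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=1)
        self.smooth = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Bring the deep (semantically rich) map to the low-level resolution and fuse.
        deep_up = F.interpolate(self.proj_deep(deep), size=low.shape[-2:], mode="nearest")
        return self.smooth(self.proj_low(low) + deep_up)

# Example: fuse a 64-channel stride-8 feature map with a 256-channel stride-32 map.
fusion = FeatureFusion(low_ch=64, deep_ch=256)
fused = fusion(torch.randn(1, 64, 80, 80), torch.randn(1, 256, 20, 20))
print(fused.shape)  # torch.Size([1, 128, 80, 80])
```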


Author(s):  
S. Anu H. Nair ◽  
P. Aruna

With the widespread use of biometric identification systems, establishing the authenticity of the biometric data itself has emerged as an important issue. This chapter proposes a novel approach to creating a multimodal biometric system. The multimodal biometric system is implemented using several fusion schemes: average fusion, minimum fusion, maximum fusion, principal component analysis (PCA) fusion, discrete wavelet transform (DWT) fusion, stationary wavelet transform (SWT) fusion, intensity-hue-saturation (IHS) fusion, Laplacian gradient fusion, pyramid gradient fusion, and sparse representation fusion. At the modality extraction level, the information extracted from the different modalities is stored in vectors according to modality. These vectors are then blended to produce a joint template, which forms the basis of the watermarking system. The fused image, together with a cover image, is supplied as input to a watermarking system based on a genetic algorithm combined with the Bacterial Foraging Optimization Algorithm. Standard images are used as cover images, and the performance of the fusion schemes is compared.
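For illustration, a few of the simpler pixel-level fusion schemes listed above can be sketched as follows. The exact parameters used in the chapter and the watermarking stage are not reproduced, and the PCA-fusion weighting shown is one common formulation rather than necessarily the chapter's.

```python
# Illustrative pixel-level fusion schemes: average, minimum, maximum, and PCA fusion.
# Inputs are two registered images of identical shape.
import numpy as np

def average_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return (a.astype(float) + b.astype(float)) / 2.0

def minimum_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.minimum(a, b)

def maximum_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.maximum(a, b)

def pca_fusion(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Treat the two images as two variables, take the dominant eigenvector of
    # their covariance matrix as fusion weights (a common PCA-fusion formulation).
    data = np.stack([a.ravel(), b.ravel()]).astype(float)
    vals, vecs = np.linalg.eigh(np.cov(data))
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b
```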


2011 ◽  
Vol 36 (5) ◽  
pp. 3205-3213 ◽  
Author(s):  
Şafak Saraydemir ◽  
Necmi Taşpınar ◽  
Osman Eroğul ◽  
Hülya Kayserili ◽  
Nuriye Dinçkan

2014 ◽  
Vol 14 (2) ◽  
pp. 102-108 ◽  
Author(s):  
Yong Yang ◽  
Shuying Huang ◽  
Junfeng Gao ◽  
Zhongsheng Qian

Abstract In this paper, considering the main objective of multi-focus image fusion and the physical meaning of wavelet coefficients, a discrete wavelet transform (DWT) based fusion technique with a novel coefficient-selection algorithm is presented. After the source images are decomposed by the DWT, two different window-based fusion rules are employed to combine the low-frequency and high-frequency coefficients. In the low-frequency domain, the coefficients with the maximum sharpness focus measure are selected as the coefficients of the fused image, and a maximum-neighboring-energy fusion scheme is proposed to select the high-frequency sub-band coefficients. To guarantee the homogeneity of the resulting fused image, a consistency verification procedure is applied to the combined coefficients. The performance of the proposed method was assessed on both synthetic and real multi-focus images. Experimental results demonstrate that the proposed method achieves better visual quality and objective evaluation indexes than several existing fusion methods, making it an effective multi-focus image fusion method.
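A minimal sketch of the two window-based selection rules is shown below. The wavelet, the window size, and the use of a median filter as a stand-in for the consistency verification step are assumptions, not the paper's exact choices.

```python
# Sketch of window-based DWT fusion: low-frequency coefficients chosen by a local
# variance (sharpness) measure, high-frequency coefficients by neighbouring energy,
# with a median filter smoothing the decision maps.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter, median_filter

def fuse_multifocus(img_a: np.ndarray, img_b: np.ndarray,
                    wavelet: str = "db1", win: int = 3) -> np.ndarray:
    la, ha = pywt.dwt2(img_a.astype(float), wavelet)
    lb, hb = pywt.dwt2(img_b.astype(float), wavelet)

    # Low-frequency rule: pick the coefficient whose local variance is larger.
    sharp_a = uniform_filter(la**2, win) - uniform_filter(la, win)**2
    sharp_b = uniform_filter(lb**2, win) - uniform_filter(lb, win)**2
    fused_low = np.where(sharp_a >= sharp_b, la, lb)

    # High-frequency rule: pick the sub-band coefficient with larger neighbouring
    # energy, then smooth the decision map (stand-in for consistency verification).
    fused_high = []
    for ca, cb in zip(ha, hb):
        energy_a = uniform_filter(ca**2, win)
        energy_b = uniform_filter(cb**2, win)
        mask = median_filter((energy_a >= energy_b).astype(float), win) > 0.5
        fused_high.append(np.where(mask, ca, cb))

    return pywt.idwt2((fused_low, tuple(fused_high)), wavelet)
```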


2015 ◽  
Vol 28 (17) ◽  
pp. 6743-6762 ◽  
Author(s):  
Catherine M. Naud ◽  
Derek J. Posselt ◽  
Susan C. van den Heever

Abstract The distribution of cloud and precipitation properties across oceanic extratropical cyclone cold fronts is examined using four years of combined CloudSat radar and CALIPSO lidar retrievals. The global annual mean cloud and precipitation distributions show that low-level clouds are ubiquitous in the postfrontal zone while higher-level cloud frequency and precipitation peak in the warm sector along the surface front. Increases in temperature and moisture within the cold front region are associated with larger high-level but lower mid-/low-level cloud frequencies and precipitation decreases in the cold sector. This behavior seems to be related to a shift from stratiform to convective clouds and precipitation. Stronger ascent in the warm conveyor belt tends to enhance cloudiness and precipitation across the cold front. A strong temperature contrast between the warm and cold sectors also encourages greater post-cold-frontal cloud occurrence. While the seasonal contrasts in environmental temperature, moisture, and ascent strength are enough to explain most of the variations in cloud and precipitation across cold fronts in both hemispheres, they do not fully explain the differences between Northern and Southern Hemisphere cold fronts. These differences are better explained when the impact of the temperature contrast across the cold front is also considered. In addition, these large-scale parameters do not explain the relatively high frequency of springtime postfrontal precipitation.


2013 ◽  
Vol 26 (21) ◽  
pp. 8378-8391 ◽  
Author(s):  
Yi Zhang ◽  
Rucong Yu ◽  
Jian Li ◽  
Weihua Yuan ◽  
Minghua Zhang

Abstract Given the large discrepancies that exist in climate models for shortwave cloud forcing over eastern China (EC), the dynamic (vertical motion and horizontal circulation) and thermodynamic (stability) relations of stratus clouds and the associated cloud radiative forcing in the cold season are examined. Unlike the stratus clouds over the southeastern Pacific Ocean (as a representative of marine boundary stratus), where thermodynamic forcing plays a primary role, the stratus clouds over EC are affected by both dynamic and thermodynamic factors. The Tibetan Plateau (TP)-forced low-level large-scale lifting and high stability over EC favor the accumulation of abundant saturated moist air, which contributes to the formation of stratus clouds. The TP slows down the westerly overflow through a frictional effect, resulting in midlevel divergence, and forces the low-level surrounding flows, resulting in convergence. Both midlevel divergence and low-level convergence sustain a rising motion and vertical water vapor transport over EC. The surface cold air is advected from the Siberian high by the surrounding northerly flow, causing low-level cooling. The cooling effect is enhanced by the blocking of the YunGui Plateau. The southwesterly wind carrying warm, moist air from the east Bay of Bengal is uplifted by the HengDuan Mountains via topographical forcing; the midtropospheric westerly flow further advects the warm air downstream of the TP, moistening and warming the middle troposphere on the lee side of the TP. The low-level cooling and midlevel warming together increase the stability. The favorable dynamic and thermodynamic large-scale environment allows for the formation of stratus clouds over EC during the cold season.


2010 ◽  
Vol 138 (4) ◽  
pp. 1368-1382 ◽  
Author(s):  
Jeffrey S. Gall ◽  
William M. Frank ◽  
Matthew C. Wheeler

Abstract This two-part series of papers examines the role of equatorial Rossby (ER) waves in tropical cyclone (TC) genesis. To do this, a unique initialization procedure is utilized to insert n = 1 ER waves into a numerical model that is able to faithfully produce TCs. In this first paper, experiments are carried out under the idealized condition of an initially quiescent background environment. Experiments are performed with varying initial wave amplitudes and with and without diabatic effects. This is done to both investigate how the properties of the simulated ER waves compare to the properties of observed ER waves and explore the role of the initial perturbation strength of the ER wave on genesis. In the dry, frictionless ER wave simulation the phase speed is slightly slower than the phase speed predicted from linear theory. Large-scale ascent develops in the region of low-level poleward flow, which is in good agreement with the theoretical structure of an n = 1 ER wave. The structures and phase speeds of the simulated full-physics ER waves are in good agreement with recent observational studies of ER waves that utilize wavenumber–frequency filtering techniques. Convection occurs primarily in the eastern half of the cyclonic gyre, as do the most favorable conditions for TC genesis. This region features sufficient midlevel moisture, anomalously strong low-level cyclonic vorticity, enhanced convection, and minimal vertical shear. Tropical cyclogenesis occurs only in the largest initial-amplitude ER wave simulation. The formation of the initial tropical disturbance that ultimately develops into a tropical cyclone is shown to be sensitive to the nonlinear horizontal momentum advection terms. When the largest initial-amplitude simulation is rerun with the nonlinear horizontal momentum advection terms turned off, tropical cyclogenesis does not occur, but the convectively coupled ER wave retains the properties of the ER wave observed in the smaller initial-amplitude simulations. It is shown that this isolated wave-only genesis process only occurs for strong ER waves in which the nonlinear advection is large. Part II will look at the more realistic case of ER wave–related genesis in which a sufficiently intense ER wave interacts with favorable large-scale flow features.


2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Abstract Image fusion is the process of combining relevant information from two or more images into a single image that is more informative than any of the inputs. Fusion methods include the discrete wavelet transform, Laplacian pyramid based transforms, and curvelet based transforms, among others. These methods achieve better spatial and spectral quality in the fused image than purely spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics; however, this property does not extend easily to two or more dimensions, because separable wavelets built by spanning one-dimensional wavelets have limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images, comparing the results against the methods above to show that useful information can be extracted from the source and fused images, yielding fused images with clear, detailed information.
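Since the curvelet transform is not part of the standard scientific Python stack, the sketch below does not reproduce the fusion itself; instead it implements two reference-free metrics, entropy and spatial frequency, that are commonly reported when comparing fused images. The paper's exact evaluation protocol may differ.

```python
# Two common reference-free fusion quality metrics for comparing fused images.
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the image histogram (higher usually means more information)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img: np.ndarray) -> float:
    """Overall activity level of the image, from row and column gradient energy."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return float(np.sqrt(rf**2 + cf**2))
```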

