Walsh Transform
Recently Published Documents

TOTAL DOCUMENTS: 242 (five years: 26)
H-INDEX: 18 (five years: 2)

Author(s): Naoki Saito, Yiqun Shao

Abstract: Extending computational harmonic analysis tools from the classical setting of regular lattices to the more general setting of graphs and networks is very important, and much research has been done recently. The generalized Haar–Walsh transform (GHWT) developed by Irion and Saito (2014) is a multiscale transform for signals on graphs, which generalizes the classical Haar and Walsh–Hadamard transforms. We propose the extended generalized Haar–Walsh transform (eGHWT), which is a generalization of the adapted time–frequency tilings of Thiele and Villemoes (1996). The eGHWT examines not only the efficiency of graph-domain partitions but also that of "sequency-domain" partitions simultaneously. Consequently, the eGHWT and its associated best-basis selection algorithm for graph signals significantly improve the performance of the previous GHWT at a similar computational cost, $O(N \log N)$, where $N$ is the number of nodes of the input graph. While the GHWT best-basis algorithm seeks the most suitable orthonormal basis for a given task among more than $(1.5)^N$ possible orthonormal bases in $\mathbb{R}^N$, the eGHWT best-basis algorithm can find a better one by searching through more than $0.618\cdot(1.84)^N$ possible orthonormal bases in $\mathbb{R}^N$. This article describes the details of the eGHWT best-basis algorithm and demonstrates its superiority on several examples, including genuine graph signals as well as conventional digital images viewed as graph signals. Furthermore, we show how the eGHWT can be extended to 2D signals and matrix-form data by viewing them as a tensor product of graphs generated from their columns and rows, and demonstrate its effectiveness in applications such as image approximation.
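The classical Walsh–Hadamard transform that the GHWT generalizes can itself be computed in $O(N \log N)$ with a butterfly recursion, which is the same cost the abstract quotes. A minimal sketch (the `fwht` name and the natural/Hadamard output ordering are our choices, not taken from the paper):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard order), unnormalized.
    Input length must be a power of two; cost is O(N log N)."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly add/subtract
        h *= 2
    return x

# The transform of a delta signal is a constant row of the Hadamard matrix.
print(fwht([1, 0, 0, 0]))  # -> [1. 1. 1. 1.]
```

Since the transform is unnormalized, applying it twice returns the input scaled by N, a handy sanity check for any implementation.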


Electronics, 2022, Vol. 11 (1), pp. 136
Author(s): Moataz Z. Salim, Ali J. Abboud, Remzi Yildirim

The usage of images in different fields has increased dramatically, especially in medical image analysis and social media. Many risks can threaten the integrity and confidentiality of digital images transmitted through the internet. As such, preserving the contents of these images is of the utmost importance for sensitive healthcare systems. In this paper, the researchers propose a block-based approach to protect the integrity of digital images by detecting and localizing forgeries. It employs a visual cryptography-based watermarking approach to provide forgery detection and localization capabilities. In this watermarking scheme, feature, key and secret shares are generated. The feature share is constructed by extracting features from equal-sized blocks of the image using a Walsh transform, a local binary pattern and a discrete wavelet transform. Then, the key share is generated randomly from each image block, and the secret share is constructed by applying the XOR operation between the watermark, feature share and key share. The CASIA V 1.0 and SIPI datasets were used to check the performance and robustness of the proposed method. The experimental results on these datasets revealed that the precision, recall and F1 score classification indicators were approximately 97%, while the TAF and NC image quality indicators were approximately 97% and 96% after applying several known image processing and geometric attacks. Furthermore, comparative experiments with state-of-the-art approaches demonstrated the robustness of the proposed approach and a noticeable improvement in the detection and localization of image forgeries in terms of classification and quality measures.
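The share construction described above is bitwise XOR, which makes watermark recovery exact: XOR-ing the secret share with the feature and key shares returns the watermark, and a tampered block (whose recomputed feature share differs) breaks the equality. A toy sketch with assumed 8x8 binary blocks; the variable names and the random stand-in for the Walsh/LBP/DWT features are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)      # binary watermark block
feature_share = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)  # stands in for extracted block features
key_share = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)      # per-block random key

# Secret share: XOR of watermark, feature share and key share.
secret_share = watermark ^ feature_share ^ key_share

# Verification: XOR-ing the three shares recovers the watermark exactly,
# provided the block is untampered (its feature share is unchanged).
recovered = secret_share ^ feature_share ^ key_share
assert np.array_equal(recovered, watermark)
```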


Author(s): Xiao Liang, Xuewei Wang, Litong Lyu, Yanjun Han, Jinjin Zheng, ...

Abstract: Blur detection aims to differentiate the blurry and sharp regions of a given image. This task has attracted much attention in recent years due to its importance in computer vision, with the integration of image processing and artificial intelligence. However, blur detection still suffers from problems such as oversensitivity to image noise and the difficulty of balancing cost and benefit. To deal with these issues, we propose an accurate and efficient blur detection method that is concise in architecture and robust against noise. First, we develop a sequency spectrum-based blur metric that estimates the blurriness of each pixel by integrating a re-blur scheme with the Walsh transform. Meanwhile, to eliminate noise interference, we propose an adaptive sequency spectrum truncation strategy by which we can obtain an accurate blur map even in noise-polluted cases. Finally, a multi-scale fusion segmentation framework is designed to extract the blur region based on clustering-guided region growth. Experimental results on benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance and the best balance between cost and benefit: it offers an average F1 score of 0.887, an MAE of 0.101, a detection time of 0.7 s, and a training time of 0.5 s. For noise-polluted blurry images in particular, the proposed method achieves an F1 score of 0.887 and an MAE of 0.101, significantly surpassing other competitive approaches. Our method's cost-benefit advantage and noise immunity give it great application prospects in complex sensing environments.
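The re-blur idea behind such a metric can be sketched in a few lines: re-blurring a sharp patch changes its Walsh (sequency) spectrum a lot, while re-blurring an already blurry patch changes it little, so the relative spectral change separates sharp from blurry regions. The sketch below illustrates only this principle under assumed choices (a box filter as the re-blur, an L1 spectral-change score); it is not the paper's metric:

```python
import numpy as np

def walsh_matrix(n):
    """Hadamard-ordered Walsh matrix of size n x n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def reblur_blur_metric(patch, k=3):
    """Toy sequency-spectrum blur score for a square 2-D patch:
    relative L1 change of the 2-D Walsh spectrum after a k x k box re-blur.
    High score = sharp patch, low score = already-blurry patch."""
    n = patch.shape[0]
    H = walsh_matrix(n)
    pad = k // 2
    padded = np.pad(patch, pad, mode="edge")
    reblurred = np.empty_like(patch, dtype=float)
    for i in range(n):
        for j in range(n):
            reblurred[i, j] = padded[i:i + k, j:j + k].mean()  # box re-blur
    spec = np.abs(H @ patch @ H.T)        # 2-D sequency spectrum
    spec_rb = np.abs(H @ reblurred @ H.T)
    return np.sum(np.abs(spec - spec_rb)) / (np.sum(spec) + 1e-12)

sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0        # step edge: sharp
blurry = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # smooth ramp: blurry
# The step-edge patch scores markedly higher than the smooth ramp.
```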


2021, Vol. 26 (6), pp. 453-458
Author(s): Niu JIANG, Zepeng ZHUO, Guolong CHEN, Liting WANG

The Walsh transform is an important tool for investigating the cryptographic properties of Boolean functions. This paper studies the Walsh transform of a class of Boolean functions defined as [see formula in PDF], making use of known results on the Walsh transform and the properties of the trace function; the conclusion is obtained by generalizing an existing result.
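For a Boolean function f on n variables, the Walsh transform in question is W_f(a) = Σ_x (−1)^{f(x) ⊕ a·x}, computable in O(n·2^n) from the truth table with the fast butterfly. A minimal sketch for a generic function (not the paper's specific trace-defined class):

```python
import numpy as np

def walsh_transform(f_table, n):
    """Walsh transform W_f(a) = sum_x (-1)^(f(x) XOR <a,x>) of a Boolean
    function given by its truth table of length 2^n."""
    size = 1 << n
    assert len(f_table) == size
    W = 1 - 2 * np.array(f_table)  # f(x) -> (-1)^f(x)
    # Fast Walsh-Hadamard butterfly: O(n 2^n) instead of O(4^n).
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                a, b = W[j], W[j + h]
                W[j], W[j + h] = a + b, a - b
        h *= 2
    return W

# f(x1, x2) = x1 AND x2 (truth table for x = 00, 01, 10, 11) is bent for
# n = 2, so every |W_f(a)| equals 2^(n/2) = 2.
print(walsh_transform([0, 0, 0, 1], 2))  # -> [ 2  2  2 -2]
```

The nonlinearity of f is then (2^n − max_a |W_f(a)|)/2, which is how the spectrum is typically used in this line of work.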


Doklady BGUIR, 2021, Vol. 19 (7), pp. 31-39
Author(s): A. A. Budzko, T. N. Dvornikova

The work is devoted to the development of circuits for fast Walsh transform processors of the serial-parallel type. Fast Walsh transform processors are designed for decoding error-correcting codes and for synchronization; their use can reduce the cost of computing the instantaneous Walsh spectrum by almost a factor of two. Processors that compute the instantaneous Walsh spectrum in this way are called serial-parallel processors. Circuits for fast Walsh transform processors of the serial-parallel type have been developed, and a comparative analysis of the constructed processor graphs is carried out. A method and a processor for calculating the Walsh transform coefficients are proposed that increase the speed of the transformations performed. Comparing processors of the parallel, serial and serial-parallel types, it was found that serial-parallel controllers require 2(N−1) operations to compute the instantaneous Walsh spectrum. The results can be used in the design of discrete information processing devices and in telecommunication systems for noise-immune signal coding and decoding, ensuring an optimal number of operations and hence optimal hardware costs.
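To put the quoted 2(N−1) figure in context, it helps to compare it with the standard operation counts for a length-N Walsh spectrum. The small sketch below tabulates the direct matrix-product count and the FWHT butterfly count alongside the abstract's serial-parallel figure; only the 2(N−1) column comes from the abstract, the other two are the textbook counts:

```python
import math

def op_counts(N):
    """Operation counts for a length-N instantaneous Walsh spectrum:
    direct matrix-vector product, FWHT add/subtract butterflies, and the
    serial-parallel figure 2(N-1) quoted in the abstract."""
    return {
        "naive": N * N,                  # direct N x N matrix product
        "fwht": N * int(math.log2(N)),   # butterfly add/subtracts
        "serial_parallel": 2 * (N - 1),  # figure from the abstract
    }

for N in (8, 16, 32, 64):
    c = op_counts(N)
    print(N, c["naive"], c["fwht"], c["serial_parallel"])
```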


Author(s):  
Ana Sălăgean ◽  
Pantelimon Stănică

Abstract: In this paper we estimate the nonlinearity of Boolean functions by probabilistic methods, for cases where it is computationally very expensive, or perhaps infeasible, to compute the full Walsh transform (which is the case for almost all functions in a larger number of variables, say more than 30). First, we significantly improve upon the bounds of Zhang and Zheng (1999) on the probabilities of failure of affinity tests based on nonhomomorphicity; in particular, we prove a new lower bound that we had previously conjectured. This new lower bound generalizes that of Bellare et al. (IEEE Trans. Inf. Theory 42(6), 1781–1795, 1996) to nonhomomorphicity tests of arbitrary order. Second, we prove bounds on the probability of failure of a proposed affinity test that uses the BLR linearity test. All these bounds are expressed in terms of the function's nonlinearity, and we exploit this to provide probabilistic methods for estimating the nonlinearity based upon these affinity tests. We analyze our estimates and conclude that they have reasonably good accuracy, particularly when the nonlinearity is low.
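The BLR test referenced above checks f(x) ⊕ f(y) = f(x ⊕ y) for randomly sampled x, y: a linear function never fails, while a function far from linear fails often, which is what makes the failure rate usable as a nonlinearity probe. A toy sketch with illustrative example functions (this is the generic BLR test, not the paper's estimator or its bounds):

```python
import random

def blr_failure_rate(f, n, trials=10_000, seed=1):
    """Estimate P[f(x) ^ f(y) != f(x ^ y)] for random n-bit x, y.
    f maps n-bit integers to {0, 1}; a linear function never fails."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            fails += 1
    return fails / trials

# Linear: parity of the bits selected by the mask a = 0b1011.
linear = lambda x: bin(x & 0b1011).count("1") % 2
# Nonlinear: the same parity plus a quadratic term x0*x1.
nonlinear = lambda x: (bin(x & 0b1011).count("1") + ((x >> 1) & x & 1)) % 2

print(blr_failure_rate(linear, 4))     # -> 0.0
print(blr_failure_rate(nonlinear, 4))  # noticeably above zero
```

For the quadratic example the failure event reduces to x0·y1 ⊕ x1·y0 = 1, which holds for 6 of the 16 equally likely bit patterns, so the empirical rate settles near 0.375.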


2021, Vol. 2021, pp. 1-14
Author(s): Yu Zhou, Yongzhuang Wei, Hailong Zhang, Wenzheng Zhang

The concept of transparency order is introduced to measure the resistance of (n, m)-functions against multi-bit differential power analysis in the Hamming weight model; it comes in three variants: the original transparency order (TO), the redefined transparency order (RTO), and the modified transparency order (MTO). In this paper, we first give a relationship between MTO and RTO and show that RTO is less than or equal to MTO for any (n, m)-function. We also give tight upper and lower bounds on MTO for balanced (n, m)-functions. Second, we obtain relationships between MTO and the maximal absolute value of the Walsh transform (as well as the sum-of-squares indicator, algebraic immunity, and the nonlinearity of the coordinates) of (n, m)-functions. Finally, we give MTO and RTO for the (4,4) S-boxes commonly used in the design of lightweight block ciphers.
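The "maximal absolute value of the Walsh transform" of an (n, m)-function is taken over all nonzero component functions x ↦ ⟨b, S(x)⟩, and for a (4,4) S-box it is a small computation. A sketch of that quantity (often called the linearity), using the PRESENT S-box as a well-known lightweight (4,4) example; the MTO/RTO formulas themselves are not reproduced here:

```python
import numpy as np

def component_walsh_max(sbox, n):
    """Maximal absolute Walsh value over all nonzero component functions
    x -> <b, S(x)> of an n-bit S-box (the S-box's linearity)."""
    size = 1 << n
    best = 0
    for b in range(1, size):
        # Sign vector (-1)^<b, S(x)> of the component function.
        W = np.array([1 - 2 * (bin(b & sbox[x]).count("1") & 1)
                      for x in range(size)])
        # Fast Walsh-Hadamard transform of the sign vector.
        h = 1
        while h < size:
            for i in range(0, size, 2 * h):
                for j in range(i, i + h):
                    a, c = W[j], W[j + h]
                    W[j], W[j + h] = a + c, a - c
            h *= 2
        best = max(best, int(np.max(np.abs(W))))
    return best

# PRESENT cipher S-box, a standard lightweight (4,4) S-box.
present = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
print(component_walsh_max(present, 4))  # optimal 4-bit S-boxes reach 8
```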


2021, Vol. 27 (2)
Author(s): L. Thesing, A. C. Hansen

Abstract: Due to the many applications in Magnetic Resonance Imaging (MRI), Nuclear Magnetic Resonance (NMR), radio interferometry, helium atom scattering, etc., the theory of compressed sensing with Fourier transform measurements has reached a mature level. However, for binary measurements via the Walsh transform, the theory has long been largely non-existent, despite the large number of applications such as fluorescence microscopy, single-pixel cameras, lensless cameras, compressive holography, and laser-based failure analysis. Binary measurements are a mainstay in signal and image processing and can be modelled by the Walsh transform and Walsh series, the binary cousins of their respective Fourier counterparts. We help bridge the theoretical gap by providing non-uniform recovery guarantees for infinite-dimensional compressed sensing with Walsh samples and wavelet reconstruction. The theoretical results demonstrate that compressed sensing with Walsh samples is as effective as in the Fourier case, as long as the sampling strategy is highly structured and follows the structured sparsity of the signal. However, there is a fundamental difference in the asymptotic results as the smoothness and vanishing moments of the wavelet increase: in the Fourier case this changes the optimal sampling patterns, whereas it does not in the Walsh setting.
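The "binary cousins" point is concrete: Walsh measurement vectors take only the values ±1 yet form an orthogonal system, which is what makes them implementable as on/off masks in single-pixel and lensless cameras. A quick check, using the Hadamard-ordered construction for illustration:

```python
import numpy as np

def walsh_matrix(n):
    """Hadamard-ordered Walsh matrix; entries are +-1, rows orthogonal."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_matrix(8)
assert set(np.unique(H)) == {-1, 1}                       # binary masks only
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))  # orthogonal rows
```

Dividing by sqrt(n) makes the matrix orthonormal, so full sampling inverts exactly; compressed sensing, as in the paper, subsamples these rows in a structured way and recovers via a sparse wavelet model.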

