linear transform
Recently Published Documents


TOTAL DOCUMENTS: 164 (five years: 38)
H-INDEX: 14 (five years: 3)

Author(s):  
B Murali Krishna ◽  
B.T. Krishna ◽  
K Babulu ◽  
...  

A comparison of linear and quadratic transform implementations on a field programmable gate array (FPGA) is presented. The Stockwell transform (S-transform), a popular linear transform, and the smoothed pseudo Wigner-Ville distribution (SPWVD), a quadratic transform, are chosen for implementation. Both transforms are coded in the Verilog hardware description language (Verilog HDL), and the complex calculations of the transformations are performed using the CORDIC algorithm. From the FPGA family, the Spartan-6 is chosen as the target hardware device. A synthetic chirp signal is used as the input to test both designs. A summary of the hardware resource utilization on the Spartan-6 for both transforms is presented. Finally, it is observed that both the S-transform and the SPWVD are computed with a lower elapsed time than the corresponding MATLAB simulation.
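
As an illustration of the CORDIC idea named above (a sketch of the textbook rotation-mode algorithm, not the authors' Verilog implementation), the following Python snippet computes sine and cosine using only shift-and-add style updates plus a precomputed arctangent table:

```python
import math

# Rotation-mode CORDIC sketch: computes cos(z) and sin(z) for |z| < ~1.74 rad
# using a small arctangent table and per-iteration scaling by powers of two.
N_ITER = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITER)]       # arctan lookup table
GAIN = 1.0
for i in range(N_ITER):
    GAIN *= math.sqrt(1.0 + 2.0 ** (-2 * i))                 # cumulative CORDIC gain

def cordic_cos_sin(z):
    # Start from (1/GAIN, 0) so the final rotated vector has unit length
    x, y = 1.0 / GAIN, 0.0
    for i in range(N_ITER):
        d = 1.0 if z >= 0.0 else -1.0                        # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i  # shift-add micro-rotation
        z -= d * ANGLES[i]                                   # angle remaining
    return x, y                                              # (cos, sin)

print(cordic_cos_sin(0.5))           # ~ (0.87758, 0.47943)
print(math.cos(0.5), math.sin(0.5))  # reference values
```

In fixed-point hardware the multiplications by 2^-i become bit shifts, which is what makes CORDIC attractive in FPGA datapaths like the one described.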


2021 ◽  
Author(s):  
Shekhar S Kausalye ◽  
Sanjeev Kumar Sharma

In cloud computing security, privacy and data confidentiality play an important role, given the popularity of cloud computing services. To date, various schemes, protocols, and architectures for cloud computing privacy and data protection have been proposed, based on data confidentiality, cryptographic solutions, ciphertext blocks, various transforms, symmetric encryption schemes, attribute-based encryption, trust and reputation, access control, etc., but they are scattered and lack uniformity and a coherent security rationale. This paper systematically reviews and analyzes the research done in this area. First, various shortcomings of the architectures, frameworks, and schemes proposed for data confidentiality in cloud computing are discussed; second, existing cryptographic schemes, encryption functions, linear transforms, grid storage systems, key exposure, secret sharing, the All-or-Nothing Transform (AONT), dispersed storage, trust, block encryption mechanisms, attribute-based encryption, and access control are discussed; third, future directions and research challenges for data confidentiality in cloud computing are proposed; finally, the focus is on a data confidentiality scheme that overcomes the technical deficiencies of the existing schemes.
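
Of the techniques listed above, the All-or-Nothing Transform is compact enough to sketch. The toy construction below is in the spirit of Rivest's package transform (our illustration, not any scheme surveyed in the paper): every output block is needed to recover any input block, because the package key is hidden under hashes of all the pseudo-blocks.

```python
import os, hashlib

# Toy AONT sketch (illustrative only): 32-byte message blocks, SHA-256 as
# both the keystream function and the masking hash.
def keystream_block(key, i):
    return hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def aont_encode(blocks):
    k = os.urandom(32)                                   # random package key
    # Pseudo-blocks: m_i XOR keystream(k, i)
    pseudo = [bytes(a ^ b for a, b in zip(m, keystream_block(k, i)))
              for i, m in enumerate(blocks)]
    # Last block hides k under hashes of every pseudo-block
    mask = bytes(32)
    for i, c in enumerate(pseudo):
        h = hashlib.sha256(c + i.to_bytes(4, "big")).digest()
        mask = bytes(a ^ b for a, b in zip(mask, h))
    return pseudo + [bytes(a ^ b for a, b in zip(k, mask))]

def aont_decode(package):
    *pseudo, last = package
    mask = bytes(32)
    for i, c in enumerate(pseudo):
        h = hashlib.sha256(c + i.to_bytes(4, "big")).digest()
        mask = bytes(a ^ b for a, b in zip(mask, h))
    k = bytes(a ^ b for a, b in zip(last, mask))          # recover package key
    return [bytes(a ^ b for a, b in zip(c, keystream_block(k, i)))
            for i, c in enumerate(pseudo)]

msg = [os.urandom(32) for _ in range(4)]
assert aont_decode(aont_encode(msg)) == msg
```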


Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 295
Author(s):  
Shijian Lin ◽  
Qi Luo ◽  
Hongze Leng ◽  
Junqiang Song

We propose a family of multi-moment methods with arbitrary order of accuracy for hyperbolic equations via the reconstructed interpolating differential operator (RDO) approach. Reconstruction up to arbitrary order can be achieved on a single cell from properly allocated model variables, including spatial derivatives of varying orders. We then calculate the temporal derivatives of the coefficients of the reconstructed polynomial and transform them into the temporal derivatives of the model variables. Unlike conventional multi-moment methods, which evolve different types of moments by deriving different equations, RDO updates all derivatives uniformly and more efficiently via a single linear transform. Based on how interaction with adjacent cells is introduced, a central RDO and an upwind RDO are proposed. Both schemes enjoy high-order accuracy, which is verified by Fourier analysis and numerical experiments.
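
A minimal sketch of the "single linear transform" update for the simplest case, linear advection u_t + a u_x = 0 (our toy example; the paper treats general hyperbolic equations and closes the highest derivative with data reconstructed from adjacent cells):

```python
import numpy as np

# Differentiating u_t = -a * u_x repeatedly in x gives d/dt(u^(k)) = -a * u^(k+1),
# so the model variables [u, u_x, u_xx, ...] at a cell center all evolve
# under one linear operator L. Here the top derivative is closed by
# truncation; an actual RDO scheme closes it via neighbor reconstruction.
a = 1.0          # advection speed
dt = 1e-3        # time step
K = 3            # highest retained derivative order

L = np.zeros((K + 1, K + 1))
for k in range(K):
    L[k, k + 1] = -a                       # (L c)_k = -a * c_{k+1}

c = np.array([1.0, 0.5, -0.2, 0.1])        # [u, u_x, u_xx, u_xxx]

# Third-order Taylor update: c(t+dt) = (I + dt L + dt^2/2 L^2 + dt^3/6 L^3) c(t)
M = (np.eye(K + 1) + dt * L
     + (dt ** 2 / 2) * (L @ L)
     + (dt ** 3 / 6) * (L @ L @ L))
print(M @ c)                               # all derivatives updated uniformly
```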


2021 ◽  
Vol 13 (21) ◽  
pp. 4390
Author(s):  
Yuanyuan Guo ◽  
Yanwen Chong ◽  
Yun Ding ◽  
Shaoming Pan ◽  
Xiaolin Gu

Hyperspectral compression is one of the most common techniques in hyperspectral image processing. Recent learned image compression methods have exhibited excellent rate-distortion performance on natural images, but they have not been fully explored for hyperspectral compression tasks. In this paper, we propose a trainable network architecture for hyperspectral compression that not only accounts for the anisotropic characteristic of hyperspectral images but also embeds an accurate entropy model using non-Gaussian prior knowledge of hyperspectral images and a nonlinear transform. Specifically, we first design a spatial-spectral block, comprising a spatial net and a spectral net, as the base component of the core autoencoder; this is more consistent with anisotropic hyperspectral cubes than the existing deep-learning-based compression methods. Then, we design a Student's t hyperprior that merges the statistics of the latents and the side-information concept into a unified neural network to provide an accurate entropy model for entropy coding. This not only markedly enhances the flexibility of the entropy model through the adjustable degrees-of-freedom parameter, but also yields superior rate-distortion performance. The results show that the proposed compression scheme outperforms the Gaussian hyperprior used in virtually all learned natural-image codecs, as well as the optimal linear transform coding methods for hyperspectral compression. Specifically, over three public hyperspectral datasets, the proposed method provides a 1.51% to 59.95% average increase in peak signal-to-noise ratio (PSNR), a 0.17% to 18.17% average increase in the structural similarity index metric (SSIM), and a 6.15% to 64.60% average reduction in spectral angle mapping (SAM) compared with the Gaussian hyperprior and the optimal linear transform coding methods.
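
To make the entropy-model idea concrete, the sketch below estimates the ideal code length of quantized latents under a Student's t prior. The degrees of freedom, location, and scale are fixed assumptions here; in the scheme described above they would be predicted per-latent by the hyperprior network.

```python
import numpy as np
from scipy.stats import t as student_t

# Ideal code length of a quantized latent y_hat under a Student's t entropy
# model: the probability mass of the quantization bin [y_hat-0.5, y_hat+0.5]
# determines the rate, -log2 p, that an entropy coder would spend.
def bits_for_latent(y_hat, df=4.0, loc=0.0, scale=1.0):
    p = (student_t.cdf(y_hat + 0.5, df, loc, scale)
         - student_t.cdf(y_hat - 0.5, df, loc, scale))
    return -np.log2(np.maximum(p, 1e-12))

# Heavy-tailed synthetic latents, rounded as a stand-in for quantization
y_hat = np.round(np.random.standard_t(4.0, size=1000))
print(f"estimated rate: {bits_for_latent(y_hat).mean():.3f} bits/latent")
```

Lower df fattens the tails of the prior, which is how the model adapts to non-Gaussian latent statistics; df -> infinity recovers a Gaussian.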


Author(s):  
Mohamed Irfan Mohamed Refai ◽  
Mique Saes ◽  
Bouke L. Scheltinga ◽  
Joost van Kordelaar ◽  
Johannes B. J. Bussmann ◽  
...  

Abstract
Background: Smoothness is commonly used for measuring the movement quality of the upper paretic limb during reaching tasks after stroke. Many different smoothness metrics have been used in stroke research, but a 'valid' metric has not been identified. A systematic review and subsequent rigorous analysis of the smoothness metrics used in stroke research, in terms of their mathematical definitions and responses to simulated perturbations, is needed to conclude whether they are valid for measuring smoothness. Our objective was to provide a recommendation for metrics that reflect smoothness after stroke based on: (1) a systematic review of smoothness metrics for reaching used in stroke research, (2) the mathematical description of the metrics, and (3) the response of the metrics to simulated changes associated with smoothness deficits in the reaching profile.
Methods: The systematic review was performed by screening electronic databases using the combined keyword groups Stroke, Reaching and Smoothness. Each metric identified was then assessed against mathematical criteria for smoothness: (a) being dimensionless, (b) being reproducible, (c) being based on the rate of change of position, and (d) not being a linear transform of other smoothness metrics. The resulting metrics were tested for their response to simulated changes in reaching using models of velocity profiles with varying reaching distances and durations, harmonic disturbances, noise, and sub-movements. Two reaching tasks were simulated: reach-to-point and reach-to-grasp. Metrics that responded as expected in all simulation analyses were considered valid.
Results: The systematic review identified 32 different smoothness metrics, 17 of which were excluded on mathematical criteria and 13 more because they did not respond as expected in all simulation analyses. For both reach-to-point and reach-to-grasp movements, only the spectral arc length (SPARC) proved to be a valid metric.
Conclusions: Based on this systematic review and the simulation analyses, we recommend SPARC as a valid smoothness metric for both reach-to-point and reach-to-grasp tasks of the upper limb after stroke. However, further research is needed to understand the time course of smoothness measured with SPARC for the upper limb early post-stroke, preferably in longitudinal studies.
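
For reference, a minimal Python sketch of the recommended SPARC metric, following the published definition of spectral arc length; the cutoff frequency and amplitude threshold below are common defaults, not values prescribed by this review.

```python
import numpy as np

# Spectral arc length (SPARC) of a movement speed profile sampled at fs Hz.
# More negative values indicate a less smooth movement.
def sparc(speed, fs, fc=10.0, amp_th=0.05):
    # Zero-padded magnitude spectrum, normalized to a peak of 1
    n = int(2 ** (np.ceil(np.log2(len(speed))) + 4))
    f = np.arange(n) * fs / n
    mag = np.abs(np.fft.fft(speed, n))
    mag = mag / mag.max()
    # Keep the band up to fc, then trim at the last component above amp_th
    f, mag = f[f <= fc], mag[f <= fc]
    last = np.nonzero(mag >= amp_th)[0][-1]
    f, mag = f[: last + 1], mag[: last + 1]
    # Arc length of the frequency-normalized spectrum
    df = np.diff(f) / (f[-1] - f[0])
    dm = np.diff(mag)
    return -np.sum(np.sqrt(df ** 2 + dm ** 2))

t = np.linspace(0.0, 1.0, 200)
speed = t ** 2 * (1 - t) ** 2            # bell-shaped, minimum-jerk-like profile
print(sparc(speed, fs=200))              # roughly -1.4 for a smooth reach
```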


2021 ◽  
Author(s):  
David J Maisson ◽  
Justin M Fine ◽  
Seng Bum Michael Yoo ◽  
Tyler Daniel Cash-Padgett ◽  
Maya Zhe Wang ◽  
...  

Our ability to choose effectively between dissimilar options implies that information about the options' values must be available in the brain, either explicitly or implicitly. Explicit realizations of value involve single neurons whose responses depend on value and not on the specific features that determine it. Implicit realizations, by contrast, arise from the coordinated action of neurons that encode specific features. One signature of implicit value coding is that population responses to offers with the same value but different features should occupy semi- or fully orthogonal neural subspaces that are nonetheless linked. Here, we examined the responses of neurons in six core value-coding areas during a choice task with risky and safe options. Using stricter criteria than some past studies, we find, surprisingly, no evidence for abstract value neurons (i.e., neurons that respond identically to equally valued risky and safe options) in any of these regions. Moreover, population codes for value resided in orthogonal subspaces, and these subspaces were linked through a linear transform. These results suggest that in all six regions, populations of neurons embed value implicitly in a distributed population code.
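
The subspace logic can be illustrated with synthetic population data: two value-coding axes that are nearly orthogonal (small principal-angle cosine) yet linked by a least-squares linear transform. This is our illustration, not the paper's analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_values = 50, 20
values = np.linspace(0.0, 1.0, n_values)

# Value is encoded along different random axes for risky vs. safe offers
axis_risky = rng.standard_normal(n_neurons)
axis_safe = rng.standard_normal(n_neurons)
X_risky = np.outer(values, axis_risky) + 0.01 * rng.standard_normal((n_values, n_neurons))
X_safe = np.outer(values, axis_safe) + 0.01 * rng.standard_normal((n_values, n_neurons))

# Cosine of the angle between the two coding axes: small => nearly orthogonal
u = axis_risky / np.linalg.norm(axis_risky)
v = axis_safe / np.linalg.norm(axis_safe)
print("cos(principal angle):", abs(u @ v))

# Linked: a linear transform maps one population code onto the other
W, *_ = np.linalg.lstsq(X_risky, X_safe, rcond=None)
pred = X_risky @ W
r2 = 1 - np.sum((X_safe - pred) ** 2) / np.sum((X_safe - X_safe.mean(0)) ** 2)
print("cross-condition alignment R^2:", round(r2, 3))   # near 1 => linked
```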


Author(s):  
Gaoming Du ◽  
Jiting Wu ◽  
Hongfang Cao ◽  
Kun Xing ◽  
Zhenmin Li ◽  
...  

Foggy weather reduces the visibility of photographed objects, causing image distortion and decreasing overall image quality. Many approaches (e.g., image restoration, image enhancement, and fusion-based methods) have been proposed to address the problem, but most of these defogging algorithms face challenges such as algorithmic complexity or real-time processing requirements. To simplify the defogging process, we propose a fusion-based defogging algorithm built on the linear transmission of a single gray channel. The method combines a gray single-channel linear transform with high-boost filtering in differing proportions. To enhance the visibility of the defogged image more effectively, we convert the RGB channels into a single gray-scale channel without degrading the defogging results. After gray-scale fusion, the data in the gray-scale domain are linearly transmitted. Given the increasing demand for real-time clear imagery, we also propose an efficient real-time FPGA defogging architecture. The architecture optimizes the data path of the guided filtering to speed up defogging and to save area and resources. Because the pixel reading order is identical for the mean and squared-value calculations, the shift register used by the box filter is separated from the filter and shared at the input terminal, saving storage area. Moreover, using LUTs instead of a multiplier reduces the delay of the squared-value calculation module and increases efficiency. Experimental results show that the linear transmission saves 66.7% of the total time. The proposed architecture defogs efficiently and accurately, meeting real-time defogging requirements at a 1920 × 1080 image size.
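
A rough software analogue of the pipeline described above (our reading of the abstract, with an assumed blend weight `alpha` and boost gain `k`; the actual design is fixed-point FPGA logic, not Python):

```python
import numpy as np

# Gray-scale fusion, a linear transmission stretch, and high-boost filtering,
# blended in a fixed proportion. Parameters alpha, k, lo, hi are assumptions.
def defog(rgb, alpha=0.7, k=1.5, lo=0.05, hi=0.95):
    gray = rgb @ np.array([0.299, 0.587, 0.114])       # RGB -> single gray channel
    # Linear transmission: stretch the [lo, hi] quantile range onto [0, 1]
    a, b = np.quantile(gray, [lo, hi])
    stretched = np.clip((gray - a) / (b - a + 1e-6), 0.0, 1.0)
    # High-boost filtering: add back detail lost to a 3x3 mean blur
    h, w = stretched.shape
    pad = np.pad(stretched, 1, mode="edge")
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    boosted = stretched + k * (stretched - blur)
    # Fuse the two results in a fixed proportion
    return np.clip(alpha * stretched + (1 - alpha) * boosted, 0.0, 1.0)

out = defog(np.random.rand(1080, 1920, 3))
print(out.shape, float(out.min()), float(out.max()))
```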

