Elimination of minimal FFT grid-size limitations

2002 ◽  
Vol 35 (4) ◽  
pp. 505-505 ◽  
Author(s):  
David A. Langs

The fast Fourier transform (FFT) algorithm as normally formulated allows one to compute the Fourier transform of up to N complex structure factors, F(h), N/2 ≥ h > −N/2, if the transform ρ(r) is computed on an N-point grid. Most crystallographic FFT programs test the ranges of the Miller indices of the input data to ensure that the total number of grid divisions in the x, y and z directions of the cell is large enough to perform the FFT. This note calls attention to a simple remedy whereby an FFT can be used to compute the transform on as coarse a grid as one desires without loss of precision.
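
A minimal sketch of the folding idea behind such a remedy, assuming a 1-D cell and the exp(−2πi h·r) sign convention; the indices and structure-factor values below are placeholders, not data from the note:

```python
import numpy as np

# Hedged sketch (illustration only): evaluate a density map on a deliberately
# coarse N-point grid by folding the structure factors F(h) onto index h mod N
# before a single FFT.  Because exp(-2*pi*i*h*j/N) depends only on h mod N,
# the folded transform equals the exact Fourier summation sampled at the N
# grid points, so the usual N/2 >= h > -N/2 index test is unnecessary.

def coarse_grid_map(h_indices, F, n_grid):
    """Fold F(h) onto an n_grid-point array and Fourier-transform it."""
    folded = np.zeros(n_grid, dtype=complex)
    for h, f in zip(h_indices, F):
        folded[h % n_grid] += f              # alias each index onto the grid
    # rho(j) = sum_h F(h) exp(-2*pi*i*h*j/N); sign convention is assumed.
    return np.fft.fft(folded)

# Indices far outside -N/2 < h <= N/2 are handled without refining the grid.
h = np.array([-7, -3, 0, 3, 7])
F = np.array([1 - 2j, 0.5j, 4.0, -0.5j, 1 + 2j])
rho = coarse_grid_map(h, F, n_grid=4)
```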

Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 234 ◽  
Author(s):  
Hyun Yoo ◽  
Soyoung Han ◽  
Kyungyong Chung

Recently, massive amounts of bioinformation big data have been collected by sensor-based IoT devices. The collected data are classified into different types of health big data using various techniques. A personalized analysis technique is the basis for judging the risk factors of personal cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart-condition classification that combines a fast, effective preprocessing technique with a deep neural network in order to process biosensor input data accumulated in real time. The model learns the input data, develops an approximation function, and helps users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform is applied during preprocessing. Data reduction is then performed using the frequency-by-frequency ratios of the extracted power spectrum. To analyze the meaning of the preprocessed data, a neural network algorithm is applied; in particular, a deep neural network is used to analyze and evaluate the linear data. A deep neural network stacks multiple layers and establishes an operational model of nodes trained with gradient descent. The completed model was trained by classifying ECG signals collected in advance into normal, control, and noise groups. Thereafter, ECG signals input in real time were classified as normal, control, or noise by the trained deep neural network. To evaluate the performance of the proposed model, this study used the reduction in data operation cost and the F-measure. With the use of the fast Fourier transform and cumulative frequency percentages, the ECG data were reduced in size by a ratio of 1:32. According to the F-measure analysis of the deep neural network, the model achieved 83.83% accuracy. Given these results, the modified deep neural network technique can reduce the computational size of big data and is an effective system for reducing operation time.
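
The preprocessing stage described above might look roughly like the following sketch; the window length, the number of retained bins, and the 1:32 target are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

# Hedged sketch of the FFT-plus-reduction preprocessing idea: an ECG window
# is FFT'd, the power spectrum is normalised to per-frequency ratios, and only
# a small low-frequency slice is kept as the reduced feature vector that would
# be fed to the neural network.

def reduce_ecg_window(ecg_window, keep_bins=None):
    """Return a compact spectral feature vector for one ECG window."""
    n = len(ecg_window)
    spectrum = np.fft.rfft(ecg_window)          # one-sided FFT of the window
    power = np.abs(spectrum) ** 2
    ratios = power / power.sum()                # frequency-by-frequency ratios
    if keep_bins is None:
        keep_bins = max(1, n // 32)             # ~1:32 size reduction (assumed)
    return ratios[:keep_bins]

# Example: a 256-sample window shrinks to an 8-value feature vector.
window = np.random.randn(256)
features = reduce_ecg_window(window)
```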


2008 ◽  
Vol 3 (4) ◽  
pp. 74-86
Author(s):  
Boris A. Knyazev ◽  
Valeriy S. Cherkasskij

The article is intended for students taking their first steps in applying the Fourier transform to physics problems. We examine several elementary examples from signal theory and classical optics to show the relation between the continuous and discrete Fourier transforms. Recipes are given for the correct interpretation of the results of the fast discrete Fourier transform (FDFT) obtained with commonly used application programs (Matlab, Mathcad, Mathematica).
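
As an illustration of the kind of recipe such a tutorial gives (here in Python/NumPy rather than the packages listed; the sampling rate and test tone are assumed), the bookkeeping needed to read physical frequencies and amplitudes off a discrete FFT looks like this:

```python
import numpy as np

# Hedged sketch: relate DFT bins of a sampled continuous signal to physical
# frequencies and amplitudes, a common point of confusion the tutorial targets.

fs = 1000.0                           # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)         # 1 s of samples
x = 2.0 * np.sin(2 * np.pi * 50 * t)  # 50 Hz tone, amplitude 2

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)   # bin index -> frequency in Hz
amplitude = 2 * np.abs(X) / len(x)          # scale so the 50 Hz bin reads ~2

peak = freqs[np.argmax(amplitude)]          # ~50.0 Hz
```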


1991 ◽  
Vol 69 (11) ◽  
pp. 1781-1785 ◽  
Author(s):  
D. J. Moffatt ◽  
J. K. Kauppinen ◽  
H. H. Mantsch

A brief history of the relationship between the computer and the infrared spectroscopist is given, with emphasis on the use of the Fourier transform. Subsequently, an algorithm is developed that may be used to devise an objective Fourier self-deconvolution procedure which depends only on the input data and is not subject to the biases often introduced by subjective techniques. Key words: deconvolution, Fourier transform, maximum entropy method.
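
For orientation, a generic Fourier self-deconvolution step is sketched below; the paper's objective, data-driven choice of parameters is not reproduced, and the Lorentzian half-width and triangular apodization here are assumptions:

```python
import numpy as np

# Hedged sketch of conventional Fourier self-deconvolution: transform the
# spectrum to the interferogram domain, cancel the exp(-2*pi*gamma*x) decay of
# an assumed Lorentzian line shape, apodize to limit noise amplification, and
# transform back, which narrows the lines.

def fourier_self_deconvolve(spectrum, dnu, gamma, apod_frac=0.5):
    """Narrow Lorentzian lines of half-width gamma (same units as dnu)."""
    n = len(spectrum)
    interf = np.fft.rfft(spectrum)                    # spectrum -> interferogram domain
    x = np.arange(len(interf)) / (n * dnu)            # conjugate-domain coordinate
    weight = np.exp(2 * np.pi * gamma * x)            # cancel the Lorentzian decay
    apod = np.clip(1.0 - x / (apod_frac * x[-1]), 0.0, 1.0)  # triangular apodization
    return np.fft.irfft(interf * weight * apod, n)
```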


2014 ◽  
Vol 543-547 ◽  
pp. 2341-2344
Author(s):  
Xue Fen Zhu ◽  
Yang Yang ◽  
Dong Rui Yang ◽  
Fei Shen ◽  
Xi Yuan Chen

L2C is a new civilian signal broadcast by the modernized GPS Block IIR-M satellites. This paper studies L2C acquisition algorithms with implementations in MATLAB. Circular correlation is used to implement the acquisition algorithm. The input satellite signal is collected by a hardware front-end, and the local code is then simulated in software. The input data, after frequency reduction processing, and the locally simulated code are converted into the frequency domain by means of the FFT (fast Fourier transform). After circular correlation is performed, the initial phase of the CM code is obtained and the carrier frequency is found with a resolution of 50 Hz. The effectiveness of the acquisition algorithm is finally verified through actual satellite experiments.
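
A rough sketch of FFT-based circular-correlation acquisition of the kind described, assuming placeholder signal and code arrays and a 50 Hz Doppler step (the sampling rate and search span are not the paper's values):

```python
import numpy as np

# Hedged sketch: for each candidate Doppler bin, wipe the carrier off the
# input, circularly correlate against the local code via FFT/IFFT, and keep
# the bin and code phase with the largest correlation peak.

def acquire(signal, local_code, fs, doppler_span=5000.0, doppler_step=50.0):
    """Search carrier frequency and code phase via circular correlation."""
    n = len(signal)
    t = np.arange(n) / fs
    code_fft = np.conj(np.fft.fft(local_code))         # reused for every bin
    best = (0.0, None, None)
    for fd in np.arange(-doppler_span, doppler_span + doppler_step, doppler_step):
        baseband = signal * np.exp(-2j * np.pi * fd * t)   # frequency reduction
        corr = np.abs(np.fft.ifft(np.fft.fft(baseband) * code_fft))
        peak = corr.max()
        if peak > best[0]:
            best = (peak, fd, int(corr.argmax()))      # (power, Doppler, code phase)
    return best
```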


1981 ◽  
Vol 36 (2) ◽  
pp. 150-153 ◽  
Author(s):  
R. Bek ◽  
E. Nold ◽  
S. Steeb

Using Mo Kα radiation and a θ-θ diffractometer, intensity curves were obtained from molten In and Bi as well as from six molten Bi-In alloys (8, 22, 27, 33.3, 50, and 77 at.% Bi) at 10 °C above the liquidus temperature. The measurements covered the q-region up to 14.5 Å⁻¹. From the Fourier transform of the structure factors, total coordination numbers N_I^tot and radii r_I^tot were obtained. The concentration dependence of N_I^tot and r_I^tot leads to the conclusion that the Bi-In melts belong to the compound-forming melts.
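
For orientation, a sketch of how coordination numbers are commonly extracted from a measured structure factor; the grids, number density, and integration window are assumptions, not the paper's values:

```python
import numpy as np

# Hedged sketch: Fourier-transform S(q) to the pair distribution function
# g(r), then integrate the radial distribution function over the first peak
# to obtain a total coordination number.

def rdf_from_structure_factor(q, S, r, rho0):
    """g(r) from S(q) for number density rho0 (r grid must exclude r = 0)."""
    integrand = q * (S - 1.0) * np.sin(np.outer(r, q))        # shape (len(r), len(q))
    return 1.0 + np.trapz(integrand, q, axis=1) / (2 * np.pi**2 * rho0 * r)

def coordination_number(r, g, rho0, r_lo, r_hi):
    """Integrate 4*pi*rho0*r^2*g(r) over the first peak [r_lo, r_hi]."""
    mask = (r >= r_lo) & (r <= r_hi)
    return np.trapz(4 * np.pi * rho0 * r[mask]**2 * g[mask], r[mask])
```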


Author(s):  
Philip Coppens

Image formation in diffraction is no different from image formation in other branches of optics, and it obeys the same mathematical equations. However, the nonexistence of lenses for X-ray beams makes it necessary to use computational methods to achieve the Fourier transform of the diffraction pattern into the image. The phase information required for this process is, in general, not available from the diffraction experiment, even though progress has been made in deriving phases from multiple-beam effects. This is the phase problem, the paramount issue in crystal structure analysis, which also affects charge density analysis of noncentrosymmetric structures. For centrosymmetric space groups, the independent-atom model is a sufficiently close approximation to allow calculation of the signs for all but a few very weak reflections. Images of the charge density are indispensable for qualitative understanding of chemical bonding, and play a central role in charge density analysis. In this chapter, we will discuss methods for imaging the experimental charge density, and define the functions used in the imaging procedure. According to Eq. (1.22), the structure factor F(H) is the Fourier transform of the electron density ρ(r) in the crystallographic unit cell. The electron density ρ(r) is then obtained by the inverse Fourier transformation,

ρ(r) = ∫ F(H) exp(−2πi H·r) dH,   (5.1)

in which the F(H) are the (complex) structure factors corrected for the anomalous scattering discussed in chapter 1.
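
A discrete sketch of Eq. (5.1), assuming a cubic grid and a few placeholder structure factors; the sign convention and the omitted 1/V volume factor are noted in the comments:

```python
import numpy as np

# Hedged sketch: place F(h,k,l) on a reciprocal grid and obtain the electron
# density on the unit-cell grid with one FFT.  Friedel pairs F(-H) = conj(F(H))
# keep the density real; divide by the cell volume V for absolute units.

n = 32                                           # grid divisions per cell edge
F = np.zeros((n, n, n), dtype=complex)
F[0, 0, 0] = 100.0                               # F(000): total electron count (assumed)
rng = np.random.default_rng(0)
for h, k, l in [(1, 0, 0), (1, 1, 0), (2, 1, 1)]:
    f = rng.normal() + 1j * rng.normal()         # placeholder structure factors
    F[h, k, l] = f
    F[-h, -k, -l] = np.conj(f)                   # Friedel mate

rho = np.fft.fftn(F).real                        # sum of F(H) exp(-2*pi*i*H.r); scale by 1/V
```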


In 1965, a technique called the fast Fourier transform (FFT) was invented to compute the Fourier transform. This paper compares three architectures: the basic (non-reduced) FFT architecture, a decomposed FFT architecture without retiming, and a decomposed FFT architecture with retiming. In each case, the adders used are the ripple carry adder (RCA) and the carry save adder (CSA). A fast Fourier transform (FFT) calculates the discrete Fourier transform (DFT) of a sequence or its inverse (IDFT). Fourier analysis transforms a signal from the time domain to the frequency domain or vice versa. One of the most burgeoning uses of the FFT is in orthogonal frequency division multiplexing (OFDM), employed by most cell phones, followed by its use in image processing. Synthesis was carried out on Xilinx ISE Design Suite 14.7. There is a decrease in delay of 0.824% with the ripple carry adder and 6.869% with the carry save adder; furthermore, the reduced architecture for both the RCA and CSA shows significant area optimization (approximately 20%) compared with the non-reduced counterparts of the FFT implementation.
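
A behavioural sketch of the radix-2 decomposition that such decomposed architectures exploit; adder selection and retiming are register-transfer-level details that a software model like this does not capture:

```python
import numpy as np

# Hedged sketch: recursive decimation-in-time radix-2 FFT.  Each stage splits
# the sequence into even and odd samples and recombines the half-size
# transforms with twiddle-factor butterflies.

def fft_radix2(x):
    """Recursive radix-2 FFT for power-of-two lengths."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                    # half-size sub-transforms
    odd = fft_radix2(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,  # butterfly combine
                           even - twiddle * odd])

# Agrees with the library FFT on a power-of-two-length input.
x = np.random.randn(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```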


Geophysics ◽  
1993 ◽  
Vol 58 (11) ◽  
pp. 1707-1709
Author(s):  
Michael J. Reed ◽  
Hung V. Nguyen ◽  
Ronald E. Chambers

The Fourier transform and its computationally efficient discrete implementation, the fast Fourier transform (FFT), are omnipresent in geophysical processing. While a general implementation of the discrete Fourier transform (DFT) takes on the order of N² operations to compute the transform of an N-point sequence, the FFT algorithm accomplishes the DFT with an operation count proportional to N log N. When a large percentage of the output coefficients of the transform are not desired, or a majority of the inputs to the transform are zero, it is possible to further reduce the computation required to perform the DFT. Here, we review one possible approach to accomplishing this reduction and indicate its application to phase-shift migration.
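
A sketch of one generic form of this reduction (not necessarily the authors' scheme): evaluate only the wanted coefficients, summing only over the nonzero inputs, so the cost scales with their product rather than with N log N:

```python
import numpy as np

# Hedged sketch of a pruned DFT: when few output frequencies are wanted and
# most input samples are zero, direct summation over the nonzero samples for
# the wanted bins can beat a full FFT.

def pruned_dft(x, wanted_bins):
    """Evaluate selected DFT coefficients of x, skipping zero samples."""
    n = len(x)
    nz = np.flatnonzero(x)                            # indices of nonzero inputs
    k = np.asarray(wanted_bins)[:, None]
    return (x[nz] * np.exp(-2j * np.pi * k * nz / n)).sum(axis=1)

# Example: 4096-point trace, 8 nonzero samples, only 5 output bins required.
x = np.zeros(4096)
x[:8] = np.random.randn(8)
coeffs = pruned_dft(x, wanted_bins=[10, 11, 12, 13, 14])
assert np.allclose(coeffs, np.fft.fft(x)[[10, 11, 12, 13, 14]])
```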

