Bayesian 14C-rationality, Heisenberg Uncertainty, and Fourier Transform

2020 ◽  
Vol 47 ◽  
pp. 536-559
Author(s):  
Bernhard Weninger ◽  
Kevan Edinborough

Following some 30 years of radiocarbon research during which the mathematical principles of 14C-calibration have been on loan to Bayesian statistics, here they are returned to quantum physics. The return is based on the recognition that 14C-calibration can be described as a Fourier transform. Following its introduction as such, there is a need to reconceptualize probabilistic 14C-analysis. The main change is to replace the traditional (one-dimensional) concept of 14C-dating probability with a two-dimensional probability. This is entirely analogous to the definition of probability in quantum physics, where the squared amplitude of a wave function defined in Hilbert space provides a measurable probability of finding the corresponding particle at a certain point in time/space: the so-called Born rule. When adapted to the characteristics of 14C-calibration, the Fourier transform immediately accounts for practically all known quantization properties of archaeological 14C-ages, such as clustering, age-shifting, and amplitude-distortion. This also applies to the frequently observed chronological lock-in properties of larger data sets, whether analysed by Gaussian wiggle matching (on the 14C-scale) or by Bayesian sequencing (on the calendar time-scale). Such domain-switching effects are typical of a Fourier transform. They can now be understood, and taken into account, by applying concepts and interpretations that are central to quantum physics (e.g. wave diffraction, wave-particle duality, Heisenberg uncertainty, and the correspondence principle). What may sound complicated at first glance in fact simplifies the construction of 14C-based chronologies.
The new Fourier-based 14C-analysis supports chronological studies on previously unachievable geographic (continental) and temporal (Glacial-Holocene) scales: for example, temporal sequencing of hundreds of archaeological sites simultaneously, with minimal need to develop archaeological prior hypotheses beyond those based on the geo-archaeological law of stratigraphic superposition. As demonstrated in a variety of archaeological case studies, a single number, defined as a gauge-probability on a scale of 0–100%, can replace a stacked set of subjective Bayesian priors.
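The Born-rule analogy at the heart of this approach can be illustrated with a minimal numerical sketch (toy data, not the authors' implementation): an amplitude defined on one axis has its squared magnitude interpreted as a probability, and, by Parseval's theorem, total probability is conserved when switching to the conjugate Fourier domain.

```python
import numpy as np

# Hypothetical amplitude on the calendar-time axis: a Gaussian wave packet.
t = np.arange(1024)
amplitude = np.exp(-0.5 * ((t - 500) / 30.0) ** 2) * np.exp(1j * 0.1 * t)

# Born rule: a measurable probability is the squared magnitude of the amplitude.
p_time = np.abs(amplitude) ** 2
p_time /= p_time.sum()

# Unitary FFT: the same amplitude expressed in the conjugate domain.
spectrum = np.fft.fft(amplitude, norm="ortho")
p_freq = np.abs(spectrum) ** 2
p_freq /= p_freq.sum()

# Parseval's theorem: total intensity is conserved across the domain switch,
# which is the sense in which probability survives "domain-switching".
assert np.isclose((np.abs(amplitude) ** 2).sum(), (np.abs(spectrum) ** 2).sum())
print(p_time.sum(), p_freq.sum())  # both normalize to 1.0
```

The domain-switching effects described above correspond to the fact that sharp structure in one domain spreads out in the other, while the total squared amplitude stays fixed.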

2011 ◽  
Vol 3 (1) ◽  
pp. 7-20 ◽  
Author(s):  
Ewa Drabik

Classical and Quantum Physics in Selected Economic Models

A growing number of economic phenomena are nowadays described with methods known from physics. The physical theories most frequently applied by economists are: (1) the universal law of gravitation and (2) the first and second laws of thermodynamics. Physical principles can also be applied to the theory of financial markets. Financial markets are composed of individual participants who may be seen to interact like particles in a physical system. This approach proposes a financial market model known as a minority game model, in which securities and money are allocated on the basis of price fluctuations, and in which selling is the best option when the vast majority of investors tend to purchase goods or services, and vice versa. The players who end up on the minority side win. The above applications of physical methods in economics are deeply rooted in classical physics. However, this paper aims to introduce the basic concepts of quantum mechanics into the modelling of economic phenomena. Quantum mechanics is a theory describing the behaviour of microscopic objects and is grounded in the principle of wave-particle duality. It is assumed that quantum-scale objects exhibit both wave-like and particle-like properties at the same time. The key roles in quantum mechanics are played by: (1) the Schrödinger equation, which describes the probability amplitude for a particle to be found at a given position and time, and (2) the Heisenberg uncertainty principle, which states that certain pairs of physical properties cannot be simultaneously known to arbitrary precision. This paper presents economic applications of the Schrödinger equation as well as the Heisenberg uncertainty principle. We also attempt to describe the English auction by means of quantum mechanics methods.
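The uncertainty principle invoked here can be checked numerically with a minimal sketch (a toy Gaussian wave packet, with ħ = 1; not taken from the paper): the spread of the Born-rule density in position, times the spread of its Fourier transform in momentum, can never fall below 1/2, and a Gaussian saturates the bound.

```python
import numpy as np

# Minimum-uncertainty Gaussian wave packet (hbar = 1).
N = 4096
x = np.linspace(-20.0, 20.0, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# Position spread from the Born-rule density |psi|^2 (mean is zero here).
prob_x = np.abs(psi) ** 2
delta_x = np.sqrt(np.sum(x**2 * prob_x) * dx)

# Momentum-space amplitude via the discretized Fourier transform.
p = 2 * np.pi * np.fft.fftfreq(N, dx)
dp = p[1] - p[0]
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
prob_p = np.abs(phi) ** 2
delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)

product = delta_x * delta_p
assert product >= 0.5 - 1e-6                 # Heisenberg bound dx*dp >= 1/2
assert np.isclose(product, 0.5, rtol=1e-3)   # a Gaussian saturates it
print(product)
```

In the economic reading sketched in the abstract, the two conjugate spreads play the role of incompatible market observables that cannot both be pinned down at once.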


1988 ◽  
Vol 42 (2) ◽  
pp. 353-359 ◽  
Author(s):  
Steven M. Donahue ◽  
Chris W. Brown ◽  
Robert J. Obremski

Two- and three-component mixtures of methylated benzenes were analyzed using both infrared and UV spectra. The spectra of known mixtures were Fourier transformed, and coefficients from the transforms were selected to form the coordinates of vectors. The resulting vectors were subjected to factor analysis to obtain representations for multicomponent analysis. A total of eight data sets were analyzed by factor analysis after preprocessing by taking the Fourier transforms of the spectra. The eight data sets were also analyzed by the P-matrix method (inverse Beer's law) in the spectral domain, after preprocessing the data to allow selection of the optimum analytical wavenumbers. The spectral-domain method was compared to the Fourier transform method using cross-validation, in which one sample at a time was left out of the standards and treated as an unknown. The Standard Error of Prediction (SEP) was calculated for the two methods for all possible numbers of vectors and numbers of wavenumbers, starting with the number equal to the number of components and increasing up to the total number of standards or some reasonable cut-off value. Processing in the Fourier domain clearly produced the best results for seven of the data sets and equal results for the remaining set.
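The Fourier-domain calibration workflow described above can be sketched on synthetic data (hypothetical Gaussian-band "spectra" and concentrations; not the authors' data): mixtures are Fourier transformed, a few leading coefficients become feature vectors, and leave-one-out cross-validation yields the SEP of an inverse (P-matrix style) linear calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pure-component spectra, each with two Gaussian bands.
wavenumber = np.linspace(0.0, 1.0, 256)
def band(center, width):
    return np.exp(-0.5 * ((wavenumber - center) / width) ** 2)
pure = np.array([band(0.3, 0.03) + 0.5 * band(0.7, 0.05),
                 band(0.4, 0.04) + 0.8 * band(0.6, 0.03)])

# Ten standards with known concentrations, plus measurement noise.
conc = rng.uniform(0.1, 1.0, size=(10, 2))
spectra = conc @ pure + 0.005 * rng.standard_normal((10, 256))

# Preprocessing: Fourier transform, keep the first few coefficients.
n_coef = 3
F = np.fft.rfft(spectra, axis=1)[:, :n_coef]
X = np.hstack([F.real, F.imag])          # Fourier-domain feature vectors

# Leave-one-out cross-validation of the Fourier-domain calibration.
errors = []
for i in range(len(conc)):
    train = np.delete(np.arange(len(conc)), i)
    A = np.hstack([X[train], np.ones((len(train), 1))])   # with intercept
    B, *_ = np.linalg.lstsq(A, conc[train], rcond=None)
    pred = np.hstack([X[i], 1.0]) @ B
    errors.append(pred - conc[i])

sep = np.sqrt(np.mean(np.square(errors)))  # Standard Error of Prediction
print(f"SEP = {sep:.4f}")
assert sep < 0.1
```

Sweeping `n_coef` (and, for the spectral-domain variant, the number of analytical wavenumbers) reproduces the model-size comparison the paper describes.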


Author(s):  
Mawardi Bahri ◽  
Ryuichi Ashino

Based on the relationship between the Fourier transform (FT) and the linear canonical transform (LCT), a logarithmic uncertainty principle and a Hausdorff–Young inequality in the LCT domains are derived. In order to construct the windowed linear canonical transform (WLCT), Gabor filters associated with the LCT are introduced. Using the basic connection between the classical windowed Fourier transform (WFT) and the WLCT, a new proof of the inversion formula for the WLCT is provided. This relation allows us to derive Lieb's uncertainty principle associated with the WLCT. Some useful properties of the WLCT, such as boundedness, shift, modulation, switching, the orthogonality relation, and the characterization of its range, are also investigated in detail. From the Heisenberg uncertainty principle for the LCT and the orthogonality relation for the WLCT, the Heisenberg uncertainty principle for the WLCT is established. This uncertainty principle describes how a complex function and its WLCT relate. Lastly, the logarithmic uncertainty principle associated with the WLCT is obtained.
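For reference, the LCT underlying these results is conventionally defined, for a parameter matrix with unit determinant and $b \neq 0$, as

\[
\mathcal{L}_A f(u) \;=\; \frac{1}{\sqrt{2\pi i b}} \int_{-\infty}^{\infty} f(t)\,
\exp\!\left(\frac{i}{2b}\bigl(a t^2 - 2ut + d u^2\bigr)\right) dt,
\qquad
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},\quad ad - bc = 1,
\]

and one representative Heisenberg-type bound in the LCT domain (for real, unit-energy signals; conventions vary across the literature) is $\sigma_t^2\,\sigma_u^2 \ge b^2/4$, which reduces to the classical Fourier bound $1/4$ when $A$ is the Fourier matrix $(0, 1; -1, 0)$.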


Author(s):  
Minggang Fei ◽  
Yubin Pan ◽  
Yuan Xu

The Heisenberg uncertainty principle and the uncertainty principle for self-adjoint operators have been known and applied for decades. In this paper, within the framework of Clifford algebra, we establish a stronger Heisenberg–Pauli–Weyl type uncertainty principle for the Fourier transform of multivector-valued functions, which generalizes recent results on uncertainty principles for the Clifford–Fourier transform. At the end, we consider another stronger uncertainty principle for the Dunkl transform of multivector-valued functions.
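The classical scalar prototype that these Clifford-algebra results strengthen is the Heisenberg–Pauli–Weyl inequality on $\mathbb{R}^n$, stated here in the unitary, angular-frequency normalization of the Fourier transform:

\[
\left(\int_{\mathbb{R}^n} |x|^2\,|f(x)|^2\, dx\right)
\left(\int_{\mathbb{R}^n} |\xi|^2\,|\hat f(\xi)|^2\, d\xi\right)
\;\ge\; \frac{n^2}{4}\left(\int_{\mathbb{R}^n} |f(x)|^2\, dx\right)^{2},
\]

with equality exactly for Gaussian functions; the multivector-valued versions replace $f$ by a Clifford-algebra-valued function and the FT by the Clifford–Fourier or Dunkl transform.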


2011 ◽  
Vol 70 ◽  
pp. 63-68 ◽  
Author(s):  
Christopher M Sebastian ◽  
Eann A Patterson ◽  
Donald Ostberg

Image decomposition is used to address the problem of accurately and concisely describing the strain in an inhomogeneous composite panel bolted to a vehicle structure. In service, the composite panel is subject to structural loads from the vehicle, which can cause unintended damage to the panel. Finite element simulations have been performed with the aim of establishing their fidelity using full-field optical strain measurements obtained by digital image correlation. A methodology is presented based on using orthogonal shape descriptors to decompose the data-rich strain maps into information-preserving data sets of reduced dimensionality that facilitate a quantitative comparison of the computational and experimental results. The decomposition is achieved by applying the Fourier transform and then fitting Tchebichef moments to the maps of the Fourier transform magnitude. The results show that this approach is fast and reliably describes the strain fields using fewer than fifty moments, compared to the thousands of data points in each strain map.
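The two-stage decomposition can be sketched on a synthetic field (a hypothetical smooth "strain map", not the authors' measurements). For brevity the discrete Tchebichef recurrence is replaced here by QR-orthogonalization of a Vandermonde matrix, which also yields an orthonormal polynomial basis on a uniform pixel grid:

```python
import numpy as np

def discrete_orthonormal_basis(n_points, order):
    """Orthonormal polynomial basis on a uniform grid, built by
    QR-orthogonalizing a Vandermonde matrix -- a simple stand-in for the
    discrete Tchebichef recurrence."""
    x = np.linspace(-1.0, 1.0, n_points)
    V = np.vander(x, order, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q  # columns: orthonormal polynomials of degree 0..order-1

# Hypothetical smooth "strain map" standing in for a DIC measurement.
u = np.linspace(-1.0, 1.0, 64)
X, Y = np.meshgrid(u, u)
strain = np.exp(-3.0 * (X**2 + Y**2)) * np.cos(4.0 * X) + 0.1 * Y

# Step 1: Fourier transform, keep the magnitude map.
mag = np.abs(np.fft.fftshift(np.fft.fft2(strain)))

def moment_error(order):
    """Project the magnitude map onto order*order 2D moments and report
    the relative reconstruction error from those moments alone."""
    P = discrete_orthonormal_basis(64, order)
    moments = P.T @ mag @ P        # compact (order x order) descriptor set
    approx = P @ moments @ P.T
    return np.linalg.norm(mag - approx) / np.linalg.norm(mag)

# Fewer than fifty moments (7 x 7 = 49); adding moments can only reduce
# the projection error, because the basis subspaces are nested.
err49, err225 = moment_error(7), moment_error(15)
print(err49, err225)
assert err225 <= err49 + 1e-9
```

The 49-number descriptor plays the role of the reduced-dimensionality data set compared between simulation and experiment.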


2012 ◽  
Vol 2012 ◽  
pp. 1-15
Author(s):  
Ramakrishna Kakarala

Whenever ranking data are collected, such as in elections, surveys, and database searches, it is frequently the case that partial rankings are available instead of, or sometimes in addition to, full rankings. Statistical methods for partial rankings have been discussed in the literature. However, relatively little has been published on their Fourier analysis, perhaps because the abstract nature of the transforms involved impedes insight. This paper provides, as its novel contribution, an analysis of the Fourier transform for partial rankings, with particular attention to the first three ranks, emphasizing the basic signal-processing properties of transform magnitude and phase. It shows that the transform and its magnitude satisfy a projection invariance, and it analyzes the reconstruction of data from either magnitude or phase alone. The analysis is motivated by appealing to corresponding properties of the familiar DFT and is illustrated by application to two real-world data sets.
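The "corresponding properties of the familiar DFT" that motivate the analysis can be shown in a few lines (hypothetical vote counts; the paper's actual transform lives on the symmetric group, not on a cyclic index): magnitude is invariant under shifts of the data, and exact reconstruction needs both magnitude and phase.

```python
import numpy as np

# Hypothetical first-choice vote counts over six candidates.
counts = np.array([12.0, 7.0, 30.0, 5.0, 19.0, 3.0])

F = np.fft.fft(counts)
mag, phase = np.abs(F), np.angle(F)

# Cyclically shifting the data changes only the phase, not the magnitude --
# the classical invariance the partial-ranking transform parallels.
shifted = np.roll(counts, 2)
assert np.allclose(np.abs(np.fft.fft(shifted)), mag)

# Reconstruction from magnitude and phase together is exact...
recon = np.fft.ifft(mag * np.exp(1j * phase)).real
assert np.allclose(recon, counts)

# ...but magnitude alone (phase zeroed) generally recovers a different,
# symmetrized sequence.
mag_only = np.fft.ifft(mag).real
print(np.round(mag_only, 2))
```

The paper's projection invariance is the group-transform analogue of the shift invariance above.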


2014 ◽  
Vol 35 (1) ◽  
pp. 15-23 ◽  
Author(s):  
Li-chang Qian ◽  
Jia Xu ◽  
Wen-feng Sun ◽  
Ying-ning Peng

Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, represent fingerprints of the signals. The benefit of this transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. On the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy exceeds 95% in some cases. We also analyze the effect of these parameters on performance.
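The 1D-to-2D decomposition step can be sketched without the learned components (the LSTM encoder is replaced here by a plain overlapping-window rearrangement; `window` and `stride` mirror the window-size and sliding-size parameters studied in the paper):

```python
import numpy as np

def to_fingerprint(signal, window, stride):
    """Rearrange a 1D sensor stream into a 2D pattern of overlapping
    windows -- a minimal stand-in for the paper's LSTM-encoded 2D
    fingerprints, so that image models such as CNNs can consume it."""
    n = (len(signal) - window) // stride + 1
    idx = np.arange(window)[None, :] + stride * np.arange(n)[:, None]
    return signal[idx]  # shape: (n_windows, window)

# Hypothetical accelerometer trace with two activity regimes.
t = np.arange(400)
signal = np.where(t < 200, np.sin(0.2 * t), np.sin(0.05 * t) + 0.5)

fp = to_fingerprint(signal, window=64, stride=16)
print(fp.shape)  # 2D array ready for a CNN-style classifier
assert fp.shape == ((400 - 64) // 16 + 1, 64)
```

In the full model, each row would instead be an LSTM-encoded sequence, and the resulting 2D array would feed a CNN image classifier.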


2021 ◽  
Vol 11 (6) ◽  
pp. 2582
Author(s):  
Lucas M. Martinho ◽  
Alan C. Kubrusly ◽  
Nicolás Pérez ◽  
Jean Pierre von der Weid

The focused signal obtained by time-reversal or cross-correlation techniques with ultrasonic guided waves in plates changes when the medium is subject to strain, which can be used to monitor the medium's strain level. In this paper, the sensitivity to strain of cross-correlated signals is enhanced by a post-processing filtering procedure that aims to preserve only strain-sensitive spectrum components. Two different strategies were adopted, based on the phase of either the Fourier transform or the short-time Fourier transform. Both use prior knowledge of the system impulse response at some strain level. The technique was evaluated on an aluminum plate, effectively providing up to twice the sensitivity to strain. The sensitivity increase depends on a phase-threshold parameter used in the filtering process. Its performance was assessed in terms of the sensitivity gain, the loss of energy-concentration capability, and the value of the foreknown strain. Signals synthesized from the time–frequency representation, through the short-time Fourier transform, provided a better tradeoff between sensitivity gain and loss of energy concentration.
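The Fourier-phase variant of the filtering idea can be sketched on toy signals (a synthetic baseline and a "strained" copy in which only the high-frequency component is delayed; this is an illustration of the thresholding step, not guided-wave physics):

```python
import numpy as np

# Hypothetical baseline and "strained" signals: strain delays only the
# 200 Hz component, leaving the 50 Hz component untouched.
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
baseline = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
strained = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t - 0.3)

B, S = np.fft.rfft(baseline), np.fft.rfft(strained)

# Keep only strain-sensitive components: bins whose phase moved more than
# a threshold between the foreknown (baseline) and current spectra.
threshold = 0.1  # radians; the tunable phase-threshold parameter
dphi = np.angle(S * np.conj(B))
mask = np.abs(dphi) > threshold
filtered = np.fft.irfft(np.where(mask, S, 0), n=len(t))

# The strain-insensitive 50 Hz line is removed; the 200 Hz line survives.
spec = np.abs(np.fft.rfft(filtered))
assert spec[200] > 100 and spec[50] < 1e-6
print(spec[50], spec[200])
```

Raising `threshold` discards more of the spectrum, trading energy-concentration capability for sensitivity gain, as in the assessment described above; the STFT variant applies the same test per time-frequency cell.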

