THE ATTRACTOR AND THE QUANTUM STATES

2009 ◽  
Vol 07 (supp01) ◽  
pp. 83-96 ◽  
Author(s):  
HANS-THOMAS ELZE

The dissipative dynamics anticipated in the proof of 't Hooft's existence theorem — "For any quantum system there exists at least one deterministic model that reproduces all its dynamics after prequantization" — is constructed here explicitly. We propose a generalization of Liouville's classical phase space equation, incorporating dissipation and diffusion, and demonstrate that it describes the emergence of quantum states and their dynamics in the Schrödinger picture. Asymptotically, there is a stable ground state and two decoupled sets of degrees of freedom, which transform into each other under the energy-parity symmetry of Kaplan and Sundrum. Together they recover the familiar Hilbert space and its dual. Expectations of observables are shown to agree with the Born rule, which is not imposed a priori. This attractor mechanism remains applicable in the presence of interactions, in particular for few-body systems and field theories.
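
For orientation only, here is a schematic example of a Liouville equation extended by dissipation and diffusion terms, written in generic Fokker-Planck form for a single phase-space degree of freedom; this is an illustrative sketch under assumed damping and diffusion coefficients, not the specific generalization constructed in the paper.

```latex
% Schematic sketch only: classical Liouville flow plus generic damping (\gamma) and
% momentum diffusion (D); the paper's actual generalized equation differs in detail.
\frac{\partial \rho}{\partial t}
  \;=\; \{H,\rho\}
  \;+\; \gamma \frac{\partial}{\partial p}\bigl(p\,\rho\bigr)
  \;+\; D \frac{\partial^{2}\rho}{\partial p^{2}}
```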

2012 ◽  
Vol 10 (01) ◽  
pp. 1250001 ◽  
Author(s):  
BORIS ŠKORIĆ

Physical unclonable functions (PUFs) are physical structures that are hard to clone and have a unique challenge-response behavior. The term PUF was coined by Pappu et al. in 2001. That work triggered a lot of interest, and since then a substantial number of papers have been written about the use of a wide variety of physical structures for different security purposes such as identification, authentication, read-proof key storage, key distribution, tamper evidence, anti-counterfeiting, software-to-hardware binding and trusted computing. In this paper we propose a new security primitive: the quantum-readout PUF (QR-PUF). This is a classical PUF, without internal quantum degrees of freedom, which is challenged using a quantum state, e.g. a single-photon state, and whose response is also a quantum state. By the no-cloning property of unknown quantum states, attackers cannot intercept challenges or responses without noticeably disturbing the readout process. Thus, a verifier who sends quantum states as challenges and receives the correct quantum states back can be certain that he is probing a specific QR-PUF without disturbances, even if the QR-PUF is far away "in the field" and under hostile control. For PUFs whose information content is not exceedingly large, all currently known PUF-based authentication and anti-counterfeiting schemes require trusted readout devices in the field. Our quantum readout scheme has no such requirement. Furthermore, we show how the QR-PUF authentication scheme can be interwoven with quantum key exchange (QKE), leading to an authenticated QKE protocol between two parties. This protocol has the special property that it requires no a priori secret shared by the two parties, and that the quantum channel is the authenticated channel, allowing for an unauthenticated classical channel. We provide security proofs for a limited class of attacks. The proofs depend on the physical unclonability of PUFs and on the practical infeasibility of building a quantum computer.
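
A toy sketch of the quantum-readout verification loop may help fix ideas. It models the QR-PUF as a random single-qubit unitary acting on photon polarization and idealizes the check as a projection onto the expected response state; the functions, round count and acceptance threshold are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(dim=2):
    """Random unitary via QR decomposition (toy model of a QR-PUF's optical transformation)."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def random_state(dim=2):
    """Random pure challenge state, e.g. a single-photon polarization state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Enrollment: the verifier characterizes the genuine PUF's transformation once, in a trusted setting.
U_enrolled = random_unitary()

def verify(device_unitary, rounds=200, threshold=0.95):
    """Send random quantum challenges; check responses against the enrolled transformation."""
    successes = 0
    for _ in range(rounds):
        c = random_state()                              # quantum challenge
        expected = U_enrolled @ c                       # response predicted from enrollment data
        actual = device_unitary @ c                     # what the device in the field returns
        p_pass = abs(np.vdot(expected, actual)) ** 2    # probability of passing the projective check
        successes += rng.random() < p_pass
    return successes / rounds >= threshold

print("genuine QR-PUF accepted:", verify(U_enrolled))
print("clone attempt accepted: ", verify(random_unitary()))
```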


1971 ◽  
Vol 93 (3) ◽  
pp. 814-817
Author(s):  
Richard H. Lyon

The interaction of structures and sound fields frequently involves many degrees of freedom of each participant in a complicated interaction process. In one way or another, statistical or deterministic models of the systems involved are applied in such problems and the resulting vibration (or radiation) is calculated, frequently to a satisfactory degree of accuracy. At other times, however, the results of these calculations suggest that a statistical rather than deterministic model might have been more satisfactory (or vice versa). What is clearly lacking is an a priori criterion for deciding whether a statistical or deterministic model of a system/response situation is more appropriate. The purpose of this paper is to discuss the manner in which a measure of disorder, similar to those employed in other areas of technology, might be calculated for problems in sound and vibration.


2020 ◽  
Vol 1 (1) ◽  
pp. 93-102
Author(s):  
Carsten Strzalka ◽ 
Manfred Zehn

For the analysis of structural components, the finite element method (FEM) has become the most widely applied tool for numerical stress and subsequent durability analyses. In industrial applications, advanced FE models result in high numbers of degrees of freedom, making dynamic analyses time-consuming and expensive. Because detailed finite element models are necessary for accurate stress results, the amount of data and the numerical effort associated with a dynamic stress analysis can be high. To reduce that effort, sophisticated methods have been developed that limit numerical calculations and data processing to small fractions of the global model. Detailed knowledge of the position of a component's highly stressed areas is therefore of great advantage for any present or subsequent analysis steps. In this paper, an efficient method for the a priori detection of highly stressed areas of force-excited components is presented, based on modal stress superposition. As the component's dynamic response and the corresponding stress are always a function of its excitation, special attention is paid to the influence of the loading position. Based on the frequency-domain solution of the modally decoupled equations of motion, a coefficient for the a priori weighted superposition of modal von Mises stress fields is developed and validated on a simply supported cantilever beam structure with variable loading positions. The proposed approach is then applied to a simplified industrial model of a twist-beam rear axle.
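
The following minimal sketch illustrates the general idea of weighting modal von Mises stress fields by the magnitudes of the frequency-domain modal coordinates of a harmonic force at a chosen loading position. The weighting used here, and all numbers, are illustrative assumptions, not the specific coefficient derived in the paper.

```python
import numpy as np

# Toy modal model: n_modes modal von Mises stress fields evaluated at n_elem elements,
# plus mass-normalized modal parameters.
rng = np.random.default_rng(0)
n_modes, n_elem = 5, 200
sigma_modal = rng.random((n_modes, n_elem))                          # sigma_i(x): modal stress fields
omega_n = 2 * np.pi * np.array([50.0, 120.0, 210.0, 340.0, 500.0])   # natural frequencies [rad/s]
zeta = np.full(n_modes, 0.02)                                        # modal damping ratios
phi_load = rng.normal(size=n_modes)                                  # mode shapes at the loading DOF

def modal_weights(omega, force=1.0):
    """Magnitudes of the modal coordinates for a harmonic unit force at the loading DOF."""
    q = phi_load * force / (omega_n**2 - omega**2 + 2j * zeta * omega_n * omega)
    return np.abs(q)

def hotspot_indicator(omega):
    """Weighted superposition of modal stress fields; large values flag candidate hot spots."""
    return modal_weights(omega) @ sigma_modal                        # shape (n_elem,)

indicator = hotspot_indicator(omega=2 * np.pi * 130.0)
print("candidate high-stress elements:", np.argsort(indicator)[-5:])
```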


2021 ◽  
Vol 4 (1) ◽  
pp. 251524592095492
Author(s):  
Marco Del Giudice ◽  
Steven W. Gangestad

Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
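
As an illustration of the specification-curve idea the article builds on, the toy sketch below estimates the same effect under a small grid of analytic specifications (outlier rule crossed with covariate set) on simulated data; the specification axes and the data set are invented for illustration and are not taken from the article.

```python
import numpy as np
from itertools import product

# Simulated data with a true effect of x on y and one covariate.
rng = np.random.default_rng(42)
n = 300
covariate = rng.normal(size=n)
x = rng.normal(size=n)
y = 0.3 * x + 0.5 * covariate + rng.normal(size=n)

def ols_slope(x, y, covs):
    """OLS coefficient of x on y, optionally adjusting for covariates."""
    X = np.column_stack([np.ones_like(x), x] + covs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Two "arbitrary" analytic choices define the (tiny) multiverse of specifications.
outlier_rules = {"keep all": np.inf, "|z| < 3": 3.0, "|z| < 2.5": 2.5}
covariate_sets = {"raw": [], "adjusted": [covariate]}

estimates = {}
for (o_name, cut), (c_name, covs) in product(outlier_rules.items(), covariate_sets.items()):
    keep = np.abs((y - y.mean()) / y.std()) < cut
    estimates[(o_name, c_name)] = ols_slope(x[keep], y[keep], [c[keep] for c in covs])

# The "specification curve": effect estimates sorted by size across all specifications.
for spec, est in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(spec, round(est, 3))
```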


Author(s):  
B Ashby ◽  
C Bortolozo ◽  
A Lukyanov ◽  
T Pryer

In this article, we present a goal-oriented adaptive finite element method for a class of subsurface flow problems in porous media which exhibit seepage faces. We focus on a representative case of steady-state flows governed by a nonlinear Darcy–Buckingham law with physical constraints on subsurface-atmosphere boundaries. This leads to the formulation of the problem as a variational inequality. The solutions to this problem are investigated using an adaptive finite element method based on a dual-weighted a posteriori error estimate, derived with the aim of reducing the error in a specific target quantity. The quantity of interest is chosen as the volumetric water flux across the seepage face, and therefore depends on an a priori unknown free boundary. We apply our method to challenging numerical examples as well as specific case studies, from which this research originates, illustrating the major difficulties that arise in practical situations. We summarise extensive numerical results that clearly demonstrate that the designed method produces rapid error reduction measured against the number of degrees of freedom.
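
A goal-oriented adaptive method has the generic loop solve, estimate, mark, refine, with local indicators weighted by a dual quantity tied to the target functional. The miniature sketch below mimics that structure on a deliberately simple surrogate (piecewise-linear interpolation of a known function, with a fixed weight standing in for the dual solution); it is not the dual-weighted residual FEM estimator used in the article.

```python
import numpy as np

def u(x):
    """Primal field (known in this toy setting; a PDE solve in the real method)."""
    return np.exp(-50.0 * (x - 0.3) ** 2)

def w(x):
    """Stand-in for the dual weight, concentrated near the region the goal functional 'sees'."""
    return 1.0 / (0.05 + np.abs(x - 0.7))

def local_indicators(nodes):
    """Dual-weighted local indicators ~ |integral over each cell of w * (u - I_h u) dx|."""
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    h = np.diff(nodes)
    interp_mid = 0.5 * (u(nodes[:-1]) + u(nodes[1:]))   # P1 interpolant evaluated at cell midpoints
    return np.abs(w(mid) * (u(mid) - interp_mid)) * h

def refine(nodes, marked):
    """Bisect the marked cells."""
    new = 0.5 * (nodes[:-1] + nodes[1:])[marked]
    return np.sort(np.concatenate([nodes, new]))

nodes = np.linspace(0.0, 1.0, 6)
for it in range(12):
    eta = local_indicators(nodes)
    # Doerfler marking: refine the smallest set of cells carrying 50% of the estimated error.
    order = np.argsort(eta)[::-1]
    cumulative = np.cumsum(eta[order])
    marked = order[: np.searchsorted(cumulative, 0.5 * eta.sum()) + 1]
    nodes = refine(nodes, marked)
    print(f"iter {it:2d}: cells = {len(nodes) - 1:3d}, estimated goal error = {eta.sum():.2e}")
```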


2008 ◽  
Vol 8 (10) ◽  
pp. 951-964
Author(s):  
M. Zhang ◽  
Z.-T. Zhou ◽  
H.-Y. Dai ◽  
D.-W. Hu

Due to the fundamental limitations imposed by the Heisenberg uncertainty principle and the no-cloning theorem, it is impossible, even in principle, to determine the quantum state of a single system without a priori knowledge of it. To discriminate nonorthogonal quantum states in some optimal way, one has to rely on a priori knowledge of the states to be discriminated. In this paper, we thoroughly investigate the impact of a priori classical knowledge of two quantum states on optimal unambiguous discrimination. We show by example that a priori classical knowledge of the discriminated states, whether incomplete or complete, can be utilized to improve the optimal success probabilities, whereas the lack of a priori classical knowledge cannot be compensated for even by additional resources.
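
For context, a standard benchmark (quoted here for orientation, not as a claim about the paper's derivations): when two pure states and their equal prior probabilities are fully known, the Ivanovic-Dieks-Peres bound gives the optimal success probability of unambiguous discrimination.

```latex
% IDP bound for unambiguous discrimination of two pure states with equal priors.
P_{\mathrm{succ}}^{\mathrm{opt}} \;=\; 1 - \bigl|\langle \psi_1 \mid \psi_2 \rangle\bigr|
```

Comparisons of achievable success probabilities against such fully informed benchmarks are the kind of question the paper addresses when only partial classical knowledge of the states is available.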


2010 ◽  
Vol 10 (1) ◽  
pp. 183-211 ◽  
Author(s):  
S. Ceccherini ◽  
U. Cortesi ◽  
S. Del Bianco ◽  
P. Raspollini ◽  
B. Carli

The combination of data obtained with different sensors (data fusion) is a powerful technique that can provide target products of the best quality in terms of precision and accuracy, as well as spatial and temporal coverage and resolution. In this paper we present the results of the data fusion of ozone vertical profile measurements performed by two space-borne interferometers (IASI on METOP and MIPAS on ENVISAT) using the new measurement-space-solution method. With this method, both the loss of information due to interpolation and the propagation of possible biases (caused by a priori information) are avoided. The data fusion products are characterized by means of retrieval errors, information gain, averaging kernels and number of degrees of freedom. The analysis is performed on both simulated and real measurements, and the results demonstrate and quantify the improvement of the data fusion products with respect to measurements of a single instrument.
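
The diagnostics named in this abstract (averaging kernels, degrees of freedom, information gain) can be illustrated with standard optimal-estimation formulas; the sketch below uses toy matrices and Rodgers-style definitions, not the measurement-space-solution method itself.

```python
import numpy as np

# Toy retrieval diagnostics: Jacobian K, measurement-noise covariance S_e, a priori covariance S_a.
rng = np.random.default_rng(1)
n_levels, n_channels = 10, 40
K = rng.normal(size=(n_channels, n_levels))      # Jacobian of the forward model
S_e = 0.01 * np.eye(n_channels)                  # measurement-noise covariance
S_a = np.eye(n_levels)                           # a priori covariance

S_e_inv = np.linalg.inv(S_e)
gain = np.linalg.solve(K.T @ S_e_inv @ K + np.linalg.inv(S_a), K.T @ S_e_inv)
A = gain @ K                                     # averaging kernel matrix

dof = np.trace(A)                                # degrees of freedom for signal
# Shannon information gain in bits: -0.5 * log2 det(I - A).
info_gain = -0.5 * np.linalg.slogdet(np.eye(n_levels) - A)[1] / np.log(2)

print(f"degrees of freedom: {dof:.2f}, information gain: {info_gain:.1f} bits")
```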


1976 ◽  
Vol 66 (1) ◽  
pp. 173-187
Author(s):  
Ray Buland

A complete reexamination of Geiger's method in the light of modern numerical analysis indicates that numerical stability can be ensured by use of the QR algorithm, and that the convergence domain can be considerably enlarged by the introduction of step-length damping. In order to make maximum use of all data, the method is developed assuming a priori estimates of the statistics of the random errors at each station. Numerical experiments indicate that the bulk of the joint probability density of the location parameters lies in the linear region, allowing simple estimates of the standard errors of the parameters. The location parameters are found to be distributed as one minus chi-squared with m degrees of freedom, where m is the number of parameters, allowing the simple construction of confidence levels. The use of the chi-squared test with n-m degrees of freedom, where n is the number of data, is introduced as a means of qualitatively evaluating the correctness of the earth model.
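
A hedged sketch of the chi-squared model check described at the end of the abstract, using made-up travel-time residuals, assumed a priori pick errors, and m = 4 location parameters (epicentre coordinates, depth and origin time, assumed here as the usual count).

```python
import numpy as np
from scipy.stats import chi2

# Invented residuals (seconds) and assumed a priori standard errors of the arrival-time picks.
residuals = np.array([0.3, -0.5, 0.1, 0.8, -0.2, 0.4, -0.6, 0.2, 0.0, -0.3])
sigma = np.full_like(residuals, 0.5)
n, m = residuals.size, 4

chi2_stat = np.sum((residuals / sigma) ** 2)     # weighted sum of squared residuals
p_value = chi2.sf(chi2_stat, df=n - m)           # chi-squared test with n - m degrees of freedom

print(f"chi2 = {chi2_stat:.2f} with {n - m} dof, p = {p_value:.3f}")
# A very small p-value would flag an inadequate earth model (or underestimated pick errors).
```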

