Hidden Variables
Recently Published Documents

TOTAL DOCUMENTS: 633 (five years: 175)
H-INDEX: 37 (five years: 4)

2022 · Vol 12 (1) · pp. 496
Author(s): João Sequeira, Jorge Louçã, António M. Mendes, Pedro G. Lind

We analyze empirical series of malaria incidence using the concepts of autocorrelation, Hurst exponent and Shannon entropy, with the aim of uncovering hidden variables in those series. From simulations of an agent model for malaria spreading, we first derive models of the malaria incidence, the Hurst exponent and the entropy as functions of gametocytemia, which measures the infectious power of a mosquito towards a human host. Second, upon estimating the values of the three observables (incidence, Hurst exponent and entropy) from different empirical malaria series, we predict a value of the gametocytemia for each observable. Finally, we show that these independent predictions are largely consistent with one another, with only a few exceptions, which are discussed in further detail.
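For orientation, the following is a minimal sketch (not the authors' pipeline) of how the three observables named above might be computed for a generic incidence series: a lag-1 autocorrelation, a rescaled-range (R/S) estimate of the Hurst exponent, and a histogram-based Shannon entropy. The window sizes, bin count and the white-noise stand-in series are arbitrary choices for the example.

```python
# Illustrative only: three observables for a generic 1-D series, numpy only.
import numpy as np

def autocorrelation(x, lag=1):
    """Sample autocorrelation at the given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    sizes = [s for s in (8, 16, 32, 64, 128) if s <= len(x) // 2]
    rs = []
    for s in sizes:
        chunks = x[: len(x) // s * s].reshape(-1, s)
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)        # range of cumulative deviation
        sd = chunks.std(axis=1)
        rs.append(np.mean(r[sd > 0] / sd[sd > 0]))
    # slope of log(R/S) versus log(window size) is the Hurst exponent
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

def shannon_entropy(x, bins=20):
    """Shannon entropy of the histogram of the series values."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

series = np.random.default_rng(0).normal(size=1000)  # white-noise stand-in for a detrended incidence series
print(autocorrelation(series), hurst_rs(series), shannon_entropy(series))
```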


Author(s): Debo Cheng, Jiuyong Li, Lin Liu, Kui Yu, Thuc Duy Le, ...

Author(s): Vasil Penchev

Lewis Carroll, both a logician and a writer, suggested a logical paradox that also carries two connotations (connotations or metaphors are inherent in literature rather than in mathematics or logic). The paradox itself concerns implication: it demonstrates that an intermediate implication can always be inserted into an implication, postponing its ultimate conclusion to the next step, and that such insertions can be added iteratively and indefinitely, ad lib and as if ad infinitum. Both connotations reveal links, due to a shared formal structure, with other well-known mathematical observations: (1) the paradox of Achilles and the Turtle; (2) the transitivity of the relation of equality. Analogously to (1), one can juxtapose the paradox of the Liar (for Lewis Carroll’s paradox) and that of the arrow (for “Achilles and the Turtle”), i.e. a logical paradox on the one hand and an aporia of motion on the other, suggesting a shared formal structure of both, which can be called “ontological”. On this basis, “motion” as studied by physics and “conclusion” as studied by logic can be unified, bridging logic and physics philosophically in a Hegelian manner. Even more, the bridge can be continued to mathematics by virtue of (2), which forces the equality of any two quantities (through its property of transitivity) to be postponed analogously ad lib and ad infinitum. The paper shows that Hilbert arithmetic naturally underlies Lewis Carroll’s paradox, admitting at least three interpretations linked to each other by it: mathematical, physical and logical. Thus it can be considered as both a generalization and a solution of his paradox, naturally unifying the completeness of quantum mechanics (i.e. the absence of hidden variables) and the eventual completeness of mathematics as the same, isomorphic to the completeness of propositional logic in relation to set theory as a first-order logic (in the sense of Gödel’s (1930) completeness theorems).


2021
Author(s): Yang Yu, Pathum Kossinna, Wenyuan Liao, Qingrun Zhang

Modern machine learning methods have been extensively utilized in gene expression data analysis. In particular, autoencoders (AE) have been employed to process noisy and heterogeneous RNA-Seq data. However, AEs usually yield "black-box" hidden variables that are difficult to interpret, hindering downstream experimental validation and clinical translation. To bridge the gap between complicated models and biological interpretation, we developed a tool, XAE4Exp (eXplainable AutoEncoder for Expression data), which integrates an AE with SHapley Additive exPlanations (SHAP), a flagship technique in the field of eXplainable AI (XAI). It quantitatively evaluates the contribution of each gene to the hidden structure learned by the AE, substantially improving the explainability of AE outcomes. By applying XAE4Exp to The Cancer Genome Atlas (TCGA) breast cancer gene expression data, we identified genes that are not differentially expressed, as well as pathways in various cancer-related classes. This tool will enable researchers and practitioners to analyze high-dimensional expression data intuitively, paving the way towards broader uses of deep learning.
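To make the AE-plus-SHAP idea concrete, here is a minimal, hypothetical sketch (not the XAE4Exp code itself): a small Keras autoencoder is fit to synthetic expression-like data, and SHAP's KernelExplainer attributes one latent (hidden) variable back to the input genes. The layer sizes, background set and synthetic data are assumptions made for this example.

```python
# Illustrative sketch: attribute an autoencoder's hidden variable to input genes with SHAP.
# Synthetic data and model sizes are placeholders; this is not the published tool.
import numpy as np
import shap
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50)).astype("float32")      # 200 samples x 50 "genes" (synthetic)

inputs = keras.Input(shape=(50,))
latent = keras.layers.Dense(8, activation="relu", name="latent")(inputs)  # hidden variables
recon = keras.layers.Dense(50)(latent)
autoencoder = keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=10, batch_size=32, verbose=0)

encoder = keras.Model(inputs, latent)                 # maps genes -> hidden variables

# Explain one hidden unit: SHAP values give the per-gene contribution to that unit.
unit = 0
f = lambda x: encoder.predict(x, verbose=0)[:, unit]
explainer = shap.KernelExplainer(f, X[:50])           # first 50 samples as background
sv = explainer.shap_values(X[:10], nsamples=200)      # shape (10 samples, 50 genes)

gene_contribution = np.abs(sv).mean(axis=0)           # mean |SHAP| per gene
print("top genes for hidden unit 0:", np.argsort(gene_contribution)[::-1][:5])
```

Repeating the attribution over all latent units gives a genes-by-hidden-variables contribution matrix, which is the kind of quantitative summary the abstract describes.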


Entropy · 2021 · Vol 23 (12) · pp. 1705
Author(s): Harrison Crecraft

The thermocontextual interpretation (TCI) is an alternative to the existing interpretations of physical states and time. The prevailing interpretations are based on assumptions rooted in classical mechanics, whose logical implications include determinism, time symmetry, and a paradox: determinism implies that effects follow causes, and hence an arrow of causality, which conflicts with time symmetry. The prevailing interpretations also fail to explain the empirical irreversibility of wavefunction collapse without invoking untestable and untenable metaphysical implications. They fail to reconcile nonlocality and relativistic causality without invoking superdeterminism or unexplained superluminal correlations. The TCI defines a system’s state with respect to its actual surroundings at a positive ambient temperature. It recognizes the existing physical interpretations as special cases, which define a state either with respect to an absolute-zero reference (classical and relativistic states) or with respect to an equilibrium reference (quantum states). Between these special-case extremes is where thermodynamic irreversibility and randomness exist. The TCI distinguishes between a system’s internal time and the reference time of relativity and causality, as measured by an external observer’s clock. It defines system time as a complex property of state spanning both reversible mechanical time and irreversible thermodynamic time. Additionally, it provides a physical explanation for nonlocality that is consistent with relativistic causality without hidden variables, superdeterminism, or “spooky action”.


Entropy · 2021 · Vol 23 (12) · pp. 1673
Author(s): Ali Mohammad-Djafari

Classical methods for inverse problems are mainly based on regularization theory, in particular those based on the optimization of a criterion with two parts: a data-model matching term and a regularization term. Different choices for these two terms, and a great number of optimization algorithms, have been proposed. When these two terms are distance or divergence measures, they can have a Bayesian maximum a posteriori (MAP) interpretation, in which the two terms correspond to the likelihood and the prior-probability model, respectively. The Bayesian approach gives more flexibility in choosing these terms and, in particular, the prior term via hierarchical models and hidden variables. However, the Bayesian computations can become very heavy. Machine learning (ML) methods such as classification, clustering, segmentation, and regression, based on neural networks (NN) and in particular convolutional NNs, deep NNs, physics-informed neural networks, etc., can be helpful in obtaining approximate practical solutions to inverse problems. In this tutorial article, particular examples of image denoising, image restoration, and computed tomography (CT) image reconstruction illustrate this cooperation between ML and inversion.
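As a toy illustration of the two-term criterion described above (not an example from the article), the sketch below denoises a 1-D signal by gradient descent on a quadratic data-matching term plus a Tikhonov (smoothness) regularization term; the regularization weight, step size and synthetic signal are arbitrary choices.

```python
# Toy MAP / regularized estimate:  x_hat = argmin_x ||y - x||^2 + lam * ||D x||^2,
# where D is the first-difference operator (a Gaussian smoothness prior in MAP terms).
import numpy as np

def denoise_map(y, lam=2.0, step=0.05, n_iter=300):
    """Gradient descent on the data-matching + regularization criterion."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        grad_data = 2.0 * (x - y)                 # gradient of ||y - x||^2
        dx = np.diff(x)                           # D x
        grad_reg = np.zeros_like(x)               # gradient of lam * ||D x||^2
        grad_reg[:-1] -= 2.0 * lam * dx
        grad_reg[1:] += 2.0 * lam * dx
        x -= step * (grad_data + grad_reg)
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # piecewise-constant test signal
noisy = clean + 0.3 * rng.normal(size=clean.size)
print("noisy MSE:", np.mean((noisy - clean) ** 2))
print("MAP   MSE:", np.mean((denoise_map(noisy) - clean) ** 2))
```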


Entropy · 2021 · Vol 23 (12) · pp. 1660
Author(s): Philippe Grangier

It is known that “quantum non-locality”, leading to the violation of Bell’s inequality and, more generally, of classical local realism, can be attributed to the conjunction of two properties, which we call here elementary locality and predictive completeness. Taking this point of view, we show again that quantum mechanics violates predictive completeness, allowing contextual inferences to be made, which can, in turn, explain why quantum non-locality does not contradict relativistic causality. An important question remains: if the usual quantum state ψ is predictively incomplete, how do we complete it? We give here a set of new arguments to show that ψ should indeed be completed, not by looking for any “hidden variables”, but rather by specifying the measurement context, which is required to define actual probabilities over a set of mutually exclusive physical events.


Author(s): Harrison Crecraft

The Thermocontextual Interpretation (TCI) is proposed here as an alternative to existing interpretations of physical states and time. Prevailing interpretations are based on assumptions rooted in classical mechanics. Their logical implications include the determinism and reversibility of change, and an immediate conflict: determinism underlies causality, but causality implies a distinction between cause and effect and an arrow of time, conflicting with reversibility. Prevailing interpretations also fail to explain the empirical irreversibility of wavefunction collapse without untestable and untenable metaphysical implications. They fail to reconcile nonlocality and relativity without invoking superdeterminism or unexplained superluminal correlations. The Thermocontextual Interpretation defines a system’s state with respect to its actual surroundings at a positive ambient temperature. The TCI bridges existing physical interpretations and thermodynamics as special cases, which define states either with respect to an absolute-zero reference or with respect to a thermally equilibrated reference. The TCI defines system time as a complex property of state spanning both reversible mechanical time and irreversible thermodynamic time, and it distinguishes between system time and the reference time of relativity and causality, as measured by an observer’s clock. Finally, the TCI provides a physical explanation for nonlocality, consistent with relativity, without hidden variables, superdeterminism, or “spooky action.”


2021 · Vol 2021 (12) · pp. 124004
Author(s): Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K Fletcher

We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network (NN) from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows and columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer NNs. We extend a recently developed algorithm, multi-layer vector approximate message passing, to this matrix-valued inference problem. It is shown that the performance of the proposed multi-layer matrix vector approximate message passing algorithm can be exactly predicted in a certain random large-system limit, where the dimensions N × d of the unknown quantities grow as N → ∞ with d fixed. In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity while the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter error and test error of the learning.
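For orientation, the following is a small, hypothetical sketch of the inference setup described above (not the paper's algorithm): it generates a two-layer stochastic network with matrix-valued input and hidden variables, so that the task is to recover the input matrix X0 and the hidden matrix Z1 from the observed output Y. The dimensions, ReLU activation and noise level are assumptions for illustration only.

```python
# Illustrative generative model for the matrix-valued inference problem.
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 10          # rows N grow in the large-system analysis; columns d stay fixed
M = 300

X0 = rng.normal(size=(N, d))                  # unknown input matrix (interactions along rows and columns)
W1 = rng.normal(size=(M, N)) / np.sqrt(N)
Z1 = np.maximum(W1 @ X0, 0.0)                 # hidden variables after a ReLU layer
W2 = rng.normal(size=(N, M)) / np.sqrt(M)
Y = W2 @ Z1 + 0.1 * rng.normal(size=(N, d))   # observed output with Gaussian noise

# The inference task: estimate X0 and Z1 given Y, W1, W2 and the noise level.
print(Y.shape, X0.shape, Z1.shape)
```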

