expected density
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 6)
H-INDEX: 5 (five years: 1)

2021 ◽  
Vol 3 (1) ◽  
pp. 13
Author(s):  
Ahmad Yousefi ◽  
Ariel Caticha

The classical Density Functional Theory (DFT) is introduced as an application of entropic inference for inhomogeneous fluids in thermal equilibrium. It is shown that entropic inference reproduces the variational principle of DFT when information about the expected density of particles is imposed. This process introduces a family of trial density-parametrized probability distributions and, consequently, a trial entropy from which the preferred one is found using the method of Maximum Entropy (MaxEnt). As an application, the DFT model for slowly varying density is provided, and its approximation scheme is discussed.
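To make the inference step concrete, here is a minimal sketch of the MaxEnt construction the abstract describes; the notation ($x^N$ for a particle configuration, $q$ for the prior, $\lambda(x)$ for the multiplier field) is assumed for illustration and not taken from the paper. One maximizes the relative entropy

$$ S[p\,|\,q] = -\int \mathrm{d}x^N\, p(x^N)\,\ln\frac{p(x^N)}{q(x^N)} $$

subject to normalization and the expected-density constraint

$$ \langle\hat n(x)\rangle = \int \mathrm{d}x^N\, p(x^N)\sum_{i=1}^{N}\delta(x-x_i) = n(x), $$

which yields the density-parametrized exponential family

$$ p_n(x^N) \propto q(x^N)\,\exp\!\Big[-\!\int \mathrm{d}x\,\lambda(x)\sum_{i}\delta(x-x_i)\Big], $$

with $\lambda(x)$ fixed by the constraint. Substituting $p_n$ back gives a trial entropy $S[n]$, and maximizing it over the parameter $n(x)$ is the variational principle of DFT.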


Author(s):  
Aaron R. Hurst

The supercharged nature of the Earth's geothermal core can be demonstrated by three thought experiments showing that it is tremendously more powerful than any other terrestrial object (planet or moon) in the solar system. Identifying a minimum of four byproduct asteroid blast patterns linked to the formation of Earth's supercharged geothermal core is critical to identifying stars that exhibit these same four patterns. Such stars are the most likely to host an Earth-like planet, qualified as one having a supercharged geothermal core. The Planetary Vaporization-Event (PVE) Hypothesis provides a basis for correlating the supercharged nature of Earth's geothermal core with at least 14 side effects: (1) the asteroid-wide/planet-scale homogenization, and lack thereof, of 182W ε for Earth, the Moon, Mars and meteors; (2) the primary and secondary shifting of Earth's tectonic plates; (3) the solar-system-wide displacement of Earth's wayward moons (including Ceres, Pluto, Charon and Orcus), which outgas identical samples of ammoniated phyllosilicates; (4) the formation of asteroids at 100+ times the expected density of a nebular cloud, versus pre-solar grains forming at the expected density of a nebular cloud; (5) three distinct formation timestamps for all known asteroids within a 5-million-year window 4.55+ billion years ago; (6) the estimated formation temperature of CAI at 0.86 billion Kelvin; (7) the remaining chondritic meteorite matrix flash-vaporizing at 1,200–1,900 °C, (8) followed by rapid freezing near 0 K; (9) the development of exactly 2 asteroid belts and a swarm of non-moon satellites; (10) the particulate-size distinction between the 2 asteroid belts (small/inner, large/outer); (11) the proximity of the Trojan asteroid groups to the Main Asteroid Belt; (12) observation of a past or present LHB; (13) the development of annual meteor showers for Earth proximal to apogee and/or perigee; and (14) the Sun being the most likely object in the inner solar system to be struck by an asteroid. Through a better understanding of the relevant data and a reclassification of the byproducts of supercharging the core of a planet, at least 5 new insights can be inferred: (1) the original mass, (2) distance and (3) speed of Earth Mark One, (4) the original order of Earth's multi-moon formation, and (5) the high probability of finding detectable signs of life on a planet orbiting the stars Epsilon Eridani or Eta Corvi. The PVE Hypothesis conflicts with at least 6 popular hypotheses: (1) a giant impact forming the Moon, (2) asteroids being the building blocks of the solar system, (3) the Main Asteroid Belt being the remnant of a planet that never formed, (4) the LHB being part of the accretion-disk process, (5) the heat in Earth's core coming primarily from the decay of radioactive elements, and (6) the Oort Cloud being the source of ice comets.


Geophysics ◽  
2021 ◽  
pp. 1-74
Author(s):  
Elizabeth Maag-Capriotti ◽  
Yaoguo Li

Gravity gradiometry inversion can provide important knowledge about a salt body and assist in subsalt imaging. However, such inversions face difficulties associated with the lack of response from the nil zone, in which the salt density is nearly identical to that of the background sediments, and with the weak signals from the deeper portion of the salt. It is well understood that these difficulties can be alleviated by incorporating prior information, such as the top of salt from seismic imaging and petrophysical data, into the inversions. How to effectively incorporate such prior information is still a challenge, however, and what level of increased knowledge such constrained inversions can provide remains to be understood. We have investigated and compared the additional knowledge provided by incorporating different forms of prior information, including a top-of-salt surface and an expected density contrast model. These different types of information are incorporated through different strategies of constrained inversion: an inversion with bound constraints on the density contrast, an inversion after a reduction-to-binary process, and a discrete-valued inversion. We apply these strategies first to synthetic gravity gradiometry data calculated from the SEG/EAGE salt body and evaluate the improvements to the recovered salt provided by the successive imposition of increased prior information. We then apply the strategies to a set of marine gravity gradiometry data collected in the Gulf of Mexico and examine the additional knowledge gained from the imaging of the salt in the region. We show that much more valuable knowledge about the salt can be obtained with the right prior information imposed through an effective strategy, and we demonstrate that such gravity gradiometry data contain information about the salt body at depths much greater than previously recognized.
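As an illustration of the simplest of the three strategies, the sketch below runs a projected-gradient, bound-constrained least-squares inversion on a toy 1D problem. The sensitivity matrix `G`, the Tikhonov weight `alpha`, and the density-contrast bounds are hypothetical stand-ins; this is not the authors' algorithm or data.

```python
import numpy as np

def bound_constrained_inversion(G, d, lower, upper, alpha=1e-2, n_iter=500):
    """Minimize ||G m - d||^2 + alpha ||m||^2 subject to lower <= m <= upper
    by projected gradient descent (a toy stand-in for a constrained
    gravity-gradiometry inversion)."""
    m = np.clip(np.zeros(G.shape[1]), lower, upper)
    # safe step size: 1 / Lipschitz constant of the gradient
    step = 1.0 / (2 * (np.linalg.norm(G, 2) ** 2 + alpha))
    for _ in range(n_iter):
        grad = 2 * G.T @ (G @ m - d) + 2 * alpha * m
        m = np.clip(m - step * grad, lower, upper)  # projection enforces bounds
    return m

# toy example: 20 data, 50 model cells, salt density contrast in [-0.3, 0] g/cc
rng = np.random.default_rng(1)
G = rng.normal(size=(20, 50))
m_true = np.where(np.arange(50) % 7 == 0, -0.3, 0.0)  # sparse "salt" cells
d = G @ m_true + 0.01 * rng.normal(size=20)
m_est = bound_constrained_inversion(G, d, lower=-0.3, upper=0.0)
print(m_est.min(), m_est.max())  # estimates stay within the imposed bounds
```

The bound constraint plays the role of the expected density contrast model: cells are never allowed outside the physically expected range, which stabilizes the weakly constrained (deep and nil-zone) parts of the model.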


2020 ◽  
Vol 76 (10) ◽  
pp. 912-925
Author(s):  
Thomas C. Terwilliger ◽  
Oleg V. Sobolev ◽  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Randy J. Read

Density modification uses expectations about features of a map, such as a flat solvent and expected distributions of density in the region of the macromolecule, to improve the individual Fourier terms representing the map. This process transfers information from one part of a map to another and can improve the accuracy of a map. Here, the assumptions behind density modification for maps from electron cryomicroscopy are examined and a procedure is presented that allows the incorporation of model-based information. Density modification works best on unfiltered, unmasked maps in which the boundaries between the macromolecule and the solvent are clearly visible, and where there is substantial noise in the map, both in the region of the macromolecule and in the solvent. It is also most effective if the characteristics of the map are relatively constant within the regions of the macromolecule and the solvent. Model-based information can be used to improve density modification, but model bias can in principle occur. Here, model bias is reduced by using ensemble models that allow an estimation of model uncertainty. A test of model bias is presented which suggests that even if the expected density in a region of a map is specified incorrectly by using an incorrect model, the incorrect expectations do not strongly affect the final map.
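A toy one-cycle sketch of the general idea (flatten the density where solvent is expected, then blend the modified Fourier terms with the originals); the blending weight and mask below are hypothetical, and this is not the procedure implemented by the authors.

```python
import numpy as np

def solvent_flatten(map3d, solvent_mask, weight=0.5):
    """One cycle of toy density modification: flatten the solvent region,
    then combine modified and original Fourier terms.

    map3d        : 3D numpy array, the input cryo-EM map
    solvent_mask : boolean array, True where solvent is expected
    weight       : how strongly to trust the flat-solvent expectation
    """
    modified = map3d.copy()
    modified[solvent_mask] = map3d[solvent_mask].mean()  # flat solvent
    f_orig = np.fft.fftn(map3d)
    f_mod = np.fft.fftn(modified)
    # blending in Fourier space spreads the flat-solvent information into
    # every Fourier term, and hence into the macromolecular region too
    f_new = (1.0 - weight) * f_orig + weight * f_mod
    return np.fft.ifftn(f_new).real

# usage: a noisy synthetic "map" whose outer shell is treated as solvent
rng = np.random.default_rng(0)
radius = np.linalg.norm(np.indices((32, 32, 32)) - 16, axis=0)
mask = radius > 12                       # outer shell = expected solvent
noisy = (radius < 8).astype(float) + 0.3 * rng.normal(size=(32, 32, 32))
cleaned = solvent_flatten(noisy, mask)
print(cleaned[mask].std(), noisy[mask].std())  # solvent noise is reduced
```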


2017 ◽  
Vol 7 (1) ◽  
pp. 115
Author(s):  
Yoshiyuki Kitaoka

Let $f(x)=x^n+a_{n-1}x^{n-1}+\dots+a_0$ $(a_{n-1},\dots,a_0\in\mathbb Z)$ be a polynomial with complex roots $\alpha_1,\dots,\alpha_n$, and suppose that the only linear relations over $\mathbb Q$ among $1,\alpha_1,\dots,\alpha_n$ are the rational multiples of $\sum_i\alpha_i+a_{n-1}=0$. For a prime number $p$ such that $f(x)\bmod p$ has $n$ distinct integer roots $0<r_1<\dots<r_n<p$, we proposed in a previous paper the conjecture that the sequence of points $(r_1/p,\dots,r_n/p)$ is equi-distributed in a certain sense. In this paper, we show that the conjecture implies the equi-distribution of the sequence $r_1/p,\dots,r_n/p$ in the ordinary sense, and we give the expected density of primes satisfying $r_i/p<a$ for a fixed index $i$ and $0<a<1$.
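The setup can be explored numerically. The sketch below uses the sample polynomial $f(x)=x^3-2$, chosen here for illustration (it appears to satisfy the relation hypothesis, since the only rational linear relation among $1,\alpha_1,\alpha_2,\alpha_3$ is $\alpha_1+\alpha_2+\alpha_3=0$), tabulates the primes where $f$ splits with distinct roots, and estimates the empirical density of primes with $r_1/p<a$; the threshold $a$ and prime range are arbitrary.

```python
from sympy import primerange

def roots_mod_p(coeffs, p):
    """All roots of the monic integer polynomial (coeffs highest-first) mod p."""
    def f(x):
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc
    return sorted(x for x in range(p) if f(x) == 0)

coeffs = [1, 0, 0, -2]          # f(x) = x^3 - 2, degree n = 3
n = len(coeffs) - 1
a = 0.2                         # threshold for the smallest normalized root
hits = total = 0
for p in primerange(5, 20000):
    r = roots_mod_p(coeffs, p)
    if len(r) == n:             # f splits mod p with n distinct roots
        total += 1
        if r[0] / p < a:        # is r_1/p below the threshold a?
            hits += 1
print(total, "split primes; empirical density of r_1/p <", a, ":", hits / total)
```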


2017 ◽  
Author(s):  
Guillaume Marçais ◽  
David Pellow ◽  
Daniel Bork ◽  
Yaron Orenstein ◽  
Ron Shamir ◽  
...  

The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g., too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues.

We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, failing that, a randomized ordering, rather than the lexicographic order. This analysis also settles, in the negative, a conjecture of Schleimer et al. on the expected density of minimizers in a random sequence.

The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: [email protected]
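A minimal sketch of the minimizers scheme with the two orderings discussed above (lexicographic versus a randomized, hash-based order); the parameters k and w and the choice of hash are illustrative, not those used in the paper.

```python
import hashlib
import random

def minimizer_positions(seq, k, w, key):
    """Start positions of the selected k-mers: in each window of w
    consecutive k-mers, pick the one minimal under `key` (ties -> leftmost)."""
    chosen = set()
    for i in range(len(seq) - w - k + 2):
        window = [(key(seq[j:j + k]), j) for j in range(i, i + w)]
        chosen.add(min(window)[1])
    return chosen

def density(seq, k, w, key):
    """Fraction of the sequence's k-mers that get selected."""
    return len(minimizer_positions(seq, k, w, key)) / (len(seq) - k + 1)

lex_key = lambda kmer: kmer                                  # lexicographic
hash_key = lambda kmer: hashlib.md5(kmer.encode()).digest()  # randomized

random.seed(0)
seq = "".join(random.choice("ACGT") for _ in range(50_000))
print("lexicographic density:", density(seq, 7, 10, lex_key))
print("randomized density:   ", density(seq, 7, 10, hash_key))
# a lower density means fewer selected k-mers, i.e. less memory downstream
```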


2013 ◽  
Vol 13 (11&12) ◽  
pp. 986-994
Author(s):  
Elliott H. Lieb ◽  
Anna Vershynina

We prove upper bounds on the rate, called the "mixing rate", at which the von Neumann entropy of the expected density operator of a given ensemble of states changes under non-local unitary evolution. For an ensemble consisting of two states with probabilities $p$ and $1-p$, we prove that the mixing rate is bounded above by $4\sqrt{p(1-p)}$ for any Hamiltonian of norm 1. For a general ensemble of states with probabilities distributed according to a random variable $X$ and individually evolving according to any set of bounded Hamiltonians, we conjecture that the mixing rate is bounded above by the Shannon entropy of the random variable $X$. For this general case we prove an upper bound that is independent of the dimension of the Hilbert space on which the states in the ensemble act.
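A toy numerical check of the two-state bound, under the reading (assumed here) that each state in the ensemble evolves under its own Hamiltonian of operator norm 1: a finite-difference estimate of the mixing rate is compared against $4\sqrt{p(1-p)}$. The dimension, states, and Hamiltonians are random stand-ins.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4                      # Hilbert-space dimension (arbitrary for the test)

def random_pure_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

def random_unit_hamiltonian(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = (a + a.conj().T) / 2
    return h / np.linalg.norm(h, 2)       # operator norm 1

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

p = 0.3
rho1, rho2 = random_pure_state(d), random_pure_state(d)
h1, h2 = random_unit_hamiltonian(d), random_unit_hamiltonian(d)

def expected_density_operator(t):
    u1, u2 = expm(-1j * h1 * t), expm(-1j * h2 * t)
    return p * u1 @ rho1 @ u1.conj().T + (1 - p) * u2 @ rho2 @ u2.conj().T

dt = 1e-5                  # finite-difference estimate of dS/dt at t = 0
rate = (von_neumann_entropy(expected_density_operator(dt))
        - von_neumann_entropy(expected_density_operator(-dt))) / (2 * dt)
print(abs(rate), "<=", 4 * np.sqrt(p * (1 - p)))
```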

