Symmetry of the physical probability function implies modularity of the lattice of decision effects

1972 ◽ Vol 28 (2) ◽ pp. 123-132
Author(s): Günter Dähn


Author(s): Jun Pei ◽ Zheng Zheng ◽ Hyunji Kim ◽ Lin Song ◽ Sarah Walworth ◽ ...

An accurate scoring function is expected to correctly select the most stable structure from a set of pose candidates. One can hypothesize that a scoring function’s ability to identify the most stable structure might be improved by emphasizing the most relevant atom pairwise interactions. However, it is hard to evaluate the relative importance of each atom pair using traditional means. With the introduction of machine learning methods, it has become possible to determine the relative importance of each atom pair present in a scoring function. In this work, we use the Random Forest (RF) method to refine a pair potential developed by our laboratory (GARF6) by identifying the relevant atom pairs that optimize the performance of the potential on our given task. Our goal is to construct a machine learning (ML) model that can accurately differentiate the native ligand binding pose from candidate poses using a potential refined by RF optimization. We successfully constructed RF models on an unbalanced data set with the ‘comparison’ concept, and the resultant RF models were tested on CASF-2013 [5]. In a comparison of the performance of our RF models against 29 scoring functions, we found that our models outperformed the other scoring functions in predicting the native pose. In addition, we used two artificially designed potential models to assess the importance of the GARF potential in the RF models: (1) a scrambled probability function set, obtained by mixing up the atom pairs and probability functions in GARF, and (2) a uniform probability function set, which shares the same peak positions as GARF but has fixed peak heights. The accuracy comparison among RF models based on the scrambled, uniform, and original GARF potentials clearly showed that the peak positions in the GARF potential are important while the well depths are not.
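As a minimal sketch, one plausible reading of the ‘comparison’ concept is to train on feature differences between poses of the same complex, so that the two classes are balanced by construction. The data, feature layout, and model settings below are hypothetical stand-ins, not the authors’ GARF6 pipeline:

```python
# Hypothetical sketch of a pairwise "comparison" RF for pose ranking.
# All data here are simulated stand-ins for atom-pairwise potential terms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in: each pose is described by a vector of atom-pairwise
# potential contributions (one energy term per atom-pair type).
n_pair_types = 40
native = rng.normal(0.0, 1.0, size=(200, n_pair_types))  # native poses
decoys = rng.normal(0.3, 1.0, size=(200, n_pair_types))  # decoy poses

# "Comparison" trick: train on feature differences between the native
# pose and a decoy of the same complex, balancing the classes.
X = np.vstack([native - decoys, decoys - native])
y = np.concatenate([np.ones(len(native)), np.zeros(len(decoys))])

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)

# Feature importances indicate which atom-pair terms drive pose ranking,
# mirroring the idea of identifying the most relevant atom pairs.
ranked = np.argsort(rf.feature_importances_)[::-1]
print("Most informative atom-pair features:", ranked[:5])
```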


Philosophies ◽ 2018 ◽ Vol 3 (4) ◽ pp. 30
Author(s): Abir Igamberdiev

Relational ideas in our description of the natural world can be traced to Anaxagoras’ concept of the multiplicity of basic particles, later called “homoiomeroi” by Aristotle, which constitute the Universe and have the same nature as the whole world. Leibniz viewed the Universe as an infinite set of embodied logical essences called monads, which possess an inner view, compute their own programs, and perform mathematical transformations of their qualities, independently of all other monads. In this paradigm, space appears as a relational order of co-existences and time as a relational order of sequences. The relational paradigm was recognized in physics as the dependence of the spatiotemporal structure and its actualization on the observer. In the foundations of mathematics, the basic logical principles are united with the basic geometrical principles that are generic to the unfolding of internal logic. These principles appear as universal topological structures (“geometric atoms”) shaping the world. The decision-making system performs internal quantum reduction, which is described by external observers via the probability function. In biology, individual systems operate as separate relational domains. The wave function superposition is restricted within a single domain and does not extend outside it, which corresponds to Leibniz’s statement that “monads have no windows”.


2002 ◽ Vol 6 (4) ◽ pp. 213-228
Author(s): Bryan F. J. Manly

A resource selection probability function is a function that gives the probability that a resource unit (e.g., a plot of land) described by a set of habitat variables X1 to Xp will be used by an animal or group of animals in a certain period of time. The estimation of a resource selection function is usually based on the comparison of a sample of resource units used by an animal with a sample of the resource units that were available for use, with both samples assumed to be effectively randomly selected from the relevant populations. In this paper the possibility of using a modified sampling scheme is examined, with the used units obtained by line transect sampling. A logistic regression type of model is proposed, with estimation by conditional maximum likelihood. A simulation study indicates that the proposed method should be useful in practice.
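For orientation, the standard use-availability design that the paper modifies can be fit as an ordinary logistic regression. The sketch below uses simulated covariates and plain maximum likelihood; it does not reproduce the paper’s line-transect sampling or conditional maximum likelihood estimation:

```python
# Minimal sketch of a use-availability resource selection model fit by
# logistic regression; covariates and sample sizes are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Habitat covariates X1..Xp for a sample of used resource units and
# a sample of units available for use.
p = 3
used = rng.normal(0.5, 1.0, size=(150, p))
available = rng.normal(0.0, 1.0, size=(300, p))

X = np.vstack([used, available])
y = np.concatenate([np.ones(len(used)), np.zeros(len(available))])

# Logistic regression contrasting used vs. available units; the fitted
# exp(x'beta) is proportional to the resource selection function.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)  # slope estimates identify relative selection
```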


2016 ◽ Vol 11 (S321) ◽ pp. 248-250
Author(s): B. W. Holwerda ◽ W. C. Keel

Interstellar dust is still a dominant uncertainty in astronomy, limiting precision in, e.g., cosmological distance estimates and models of how light is re-processed within a galaxy. When a foreground galaxy serendipitously overlaps a more distant one, the latter backlights the dusty structures in the nearer foreground galaxy. Such an overlapping or occulting galaxy pair can be used to measure the distribution of dust in the closer galaxy with great accuracy. The STARSMOG program uses Hubble to map the distribution of dust in foreground galaxies in fine (<100 pc) detail. Integral Field Unit (IFU) observations will map the effective extinction curve, disentangling the roles of fine-scale geometry and grain composition in the path of light through a galaxy. The overlapping-galaxy technique promises to deliver a clear understanding of the dust in galaxies: its geometry, a probability function of dimming as a function of galaxy mass and radius, and its dependence on wavelength.
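The core measurement is simple in principle: compare backlit and dust-free sight lines to get a transmission, then an extinction. A toy illustration follows; the flux values are invented, and real STARSMOG analyses model both galaxies’ light distributions rather than assuming a clean symmetric counterpart:

```python
# Toy illustration of the occulting-pair idea: dust transmission in the
# foreground disk from the ratio of backlit to unobscured flux.
import numpy as np

flux_backlit = 0.62      # background-galaxy flux seen through the dust
flux_unobscured = 1.00   # flux expected from a dust-free, symmetric region

transmission = flux_backlit / flux_unobscured
A = -2.5 * np.log10(transmission)  # extinction in magnitudes
print(f"A = {A:.2f} mag")
```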


2014 ◽ Vol 8 (1) ◽ pp. 108-130
Author(s): E. Howarth ◽ J. B. Paris

Spectrum Exchangeability, Sx, is an irrelevance principle of Pure Inductive Logic, and arguably the most natural (but not the only) extension of Atom Exchangeability to polyadic languages. It has been shown [1] that all probability functions which satisfy Sx consist of a mixture of two essential types of probability functions: heterogeneous and homogeneous functions. We determine the theory of Spectrum Exchangeability, which for a fixed language L is the set of sentences of L that must be assigned probability 1 by every probability function satisfying Sx, by examining separately the theories of heterogeneity and homogeneity. We find that the theory of Sx is equal to the theory of finite structures, i.e., those sentences true in all finite structures for L, and it emerges that Sx is inconsistent with the principle of Super-Regularity (Universal Certainty). As a further consequence, we are able to characterize those probability functions which satisfy both Sx and the Finite Values Property.
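For context, the unary principle that Sx extends is standardly stated as follows; this restatement follows Paris and Vencovská’s formulation of Pure Inductive Logic and is supplied here rather than taken from the abstract:

```latex
% Atom Exchangeability (Ax), standard unary statement.
% For a language with unary predicates P_1, ..., P_q, the atoms
% \alpha_1, ..., \alpha_{2^q} are the maximal consistent conjunctions
% of the \pm P_j(x). A probability function w satisfies Ax when
\[
w\bigl(\alpha_{h_1}(a_1) \wedge \cdots \wedge \alpha_{h_n}(a_n)\bigr)
  = w\bigl(\alpha_{\sigma(h_1)}(a_1) \wedge \cdots \wedge \alpha_{\sigma(h_n)}(a_n)\bigr)
\]
% for every permutation \sigma of \{1, \ldots, 2^q\}. Sx generalizes
% this to polyadic languages: w may depend only on the spectrum of a
% state description, i.e., the multiset of sizes of its classes of
% indistinguishable constants.
```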


SAGE Open ◽ 2021 ◽ Vol 11 (3) ◽ pp. 215824402110459
Author(s): Małgorzata Iwanicz-Drozdowska ◽ Krzysztof Jackowicz ◽ Maciej Karczmarczyk

In this study, we analyze the probability of bank failure, the expected losses, and the costs of bank restructuring with the application of a lognormal distribution probability function for three categories of European banks (small, medium, and large) over the post-crisis period from 2012 to 2016. Our goal was to determine whether the total capital ratio (TCR) properly reflects banks’ solvency under stress conditions. We identified a phenomenon that one can call the “crooked smile of TCR”: medium-sized banks with relatively high TCRs performed poorly in stress tests; however, the probability of bank failure increases slightly with the size of the bank, while the TCR decreases. We claim that a focus on capital adequacy measures alone is not sufficient to improve banks’ stability and reduce their restructuring costs. Our results are of special importance for medium-sized banks, as these banks are not regularly subjected to publicly available stress tests.
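One way to make the lognormal setup concrete, offered here only as an assumption-laden sketch rather than the authors’ calibration, is a Merton-style failure probability in which the bank fails when its lognormally distributed asset value falls below its liabilities:

```python
# Hypothetical sketch: failure probability when a bank's asset value is
# lognormally distributed and failure means assets < liabilities.
# All parameter values are invented for illustration.
from scipy.stats import lognorm

assets = 100.0       # current asset value (median of the distribution)
liabilities = 92.0   # default threshold
sigma = 0.08         # log-scale volatility over the horizon

dist = lognorm(s=sigma, scale=assets)
p_failure = dist.cdf(liabilities)  # P(asset value < liabilities)
print(f"P(failure) = {p_failure:.3%}")
```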

