class probability
Recently Published Documents

TOTAL DOCUMENTS: 76 (FIVE YEARS: 19)
H-INDEX: 12 (FIVE YEARS: 2)

Author(s): Yifu Wang, Boguslaw Zegarlinski

Abstract: We study the higher order q-Poincaré and other coercive inequalities for a class of probability measures satisfying Adams' regularity condition.
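For orientation only (this is background, not part of the abstract): the first-order prototype of a q-Poincaré inequality for a probability measure μ is the standard bound sketched below; the paper studies higher-order analogues in which the gradient is replaced by higher derivatives.

```latex
% Standard first-order q-Poincare inequality for a probability measure mu
% (a background sketch; the paper itself treats higher-order versions):
\[
  \mu\bigl( |f - \mu(f)|^{q} \bigr) \;\le\; C \, \mu\bigl( |\nabla f|^{q} \bigr),
  \qquad \mu(f) := \int f \, d\mu .
\]
```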


2021
Vol 13 (5), pp. 1042
Author(s): Jung-Hyun Yang, Jung-Moon Yoo, Yong-Sang Choi

The detection of low stratus and fog (LSF) at dawn remains limited because of their optical features and the weak solar radiation. LSF can be better identified by simultaneous observations from two geostationary satellites with different viewing angles. The present study developed an advanced dual-satellite method (DSM) using FY-4A and Himawari-8 for LSF detection at dawn, expressed in terms of probability indices. Optimal thresholds for identifying LSF with the spectral tests in the DSM were determined by comparison with ground observations of fog and clear sky in and around Japan from April to November 2018; these thresholds were then validated for the same months of 2019. The DSM used two traditional single-satellite daytime tests, the 0.65-μm reflectance (R0.65) and the brightness temperature difference between 3.7 μm and 11 μm (BTD3.7-11), together with four additional tests: Himawari-8 R0.65, Himawari-8 BTD13.5-8.5, the dual-satellite stereoscopic difference in BTD3.7-11 (ΔBTD3.7-11), and the corresponding difference in the Normalized Difference Snow Index (ΔNDSI). The four additional tests showed very high skill scores (POD: 0.82 ± 0.04; FAR: 0.10 ± 0.04). Radiative transfer simulations supported the observed optical characteristics of LSF. The LSF probability indices (average POD: 0.83; FAR: 0.10) were constructed from a statistical combination of the four tests to derive five-class probability values of LSF occurrence in each grid cell. Compared with single-satellite observations (i.e., R0.65 and/or BTD3.7-11), which report only LSF or no LSF, the indices provided more detailed and useful results on the spatial distribution of LSF. The present DSM could be applied to remote sensing of other environmental phenomena, provided the stereoscopic viewing angle between the two satellites is appropriate.
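As a reading aid, here is a minimal sketch of how four binary threshold tests could be combined into a five-class probability index: each grid cell receives a class equal to the number of tests voting "LSF" (0-4 positive votes, i.e., five classes). The function name and all threshold values below are illustrative placeholders, not the paper's calibrated thresholds.

```python
# Hypothetical five-class LSF index: count of positive threshold tests per cell.
import numpy as np

def lsf_class_index(r065, btd_13p5_8p5, dbtd_3p7_11, dndsi,
                    thresholds=(0.35, 1.0, 2.0, 0.10)):  # placeholder values
    """Return an integer array in {0..4}: the number of tests voting LSF."""
    t1 = r065 > thresholds[0]          # Himawari-8 0.65-um reflectance test
    t2 = btd_13p5_8p5 > thresholds[1]  # Himawari-8 BTD(13.5-8.5 um) test
    t3 = dbtd_3p7_11 > thresholds[2]   # dual-satellite stereoscopic BTD difference
    t4 = dndsi > thresholds[3]         # dual-satellite NDSI difference
    return (t1.astype(int) + t2.astype(int)
            + t3.astype(int) + t4.astype(int))

# Toy 2x2 grid: two cells pass all four tests, two cells pass none.
grid = lsf_class_index(np.array([[0.4, 0.1], [0.5, 0.3]]),
                       np.array([[1.5, 0.2], [2.0, 0.8]]),
                       np.array([[2.5, 0.5], [3.0, 1.0]]),
                       np.array([[0.2, 0.0], [0.15, 0.05]]))
print(grid)  # [[4 0]
             #  [4 0]]
```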


2021
pp. 245-272
Author(s): Igor Wysocki, Walter Block

According to the old adage, if you are going to attack the king, you had better kill him. Mises, of course, is our emperor. Crovelli (2010) has launched a denunciation of him. In our view, he has not at all succeeded. The monarch, of course, cannot respond, but we, his courtiers, can. In this paper we attempt to refute the former in defense of the latter. Crovelli, more than once, upbraids Mises for not defining probability and for using the concepts of case and class probability without ever explicating what these two branches have in common. This is a legitimate, although somewhat minor, criticism of Ludwig von Mises. In fact, we observe that "probability" is essentially mathematical in meaning, whether we consult Wolfram MathWorld, which states: "Probability is the branch of mathematics that studies the possible outcomes of given events together with the outcomes' relative likelihoods and distributions. In common usage, the word 'probability' is used to mean the chance that a particular event (or set of events) will occur expressed on a linear scale from 0 (impossibility) to 1 (certainty), also expressed as a percentage between 0 and 100%. The analysis of events governed by probability is called statistics. There are several competing interpretations of the actual 'meaning' of probabilities. Frequentists view probability simply as a measure of the frequency of outcomes (the more conventional interpretation), while Bayesians treat probability more subjectively as a statistical procedure that endeavors to estimate parameters of an underlying distribution based on the observed distribution",1 or the OED: "probability, n. 3. Mathematics. As a measurable quantity: the extent to which a particular event is likely to occur, or a particular situation be the case, as measured by the relative frequency of occurrence of events of the same kind in the whole course of experience, and expressed by a number between 0 and 1. An event that cannot happen has probability 0; one that is certain to happen has probability 1. Probability is commonly estimated by the ratio of the number of successful cases to the total number of possible cases, derived mathematically using known properties of the distribution of events, or estimated logically by inferential or inductive reasoning (when mathematical concepts may be inapplicable or insufficient)."


Entropy
2020
Vol 22 (11), pp. 1237
Author(s): Denis Ullmann, Shideh Rezaeifar, Olga Taran, Taras Holotyak, Brandon Panos, et al.

We present a new decentralized classification system based on a distributed architecture. The system consists of distributed nodes, each possessing its own dataset and computing modules, along with a centralized server that provides probes for classification and aggregates the nodes' responses into a final decision. Each node, with access to its own training dataset of a given class, is trained as an auto-encoder consisting of a fixed data-independent encoder, a pre-trained quantizer, and a class-dependent decoder. These auto-encoders are therefore highly adapted to the class probability distribution for which the reconstruction distortion is minimized. Conversely, when an encoding-quantizing-decoding node observes data from a different distribution, unseen at training, there is a mismatch: the decoding is no longer optimal, and the reconstruction distortion increases significantly. The final classification is performed at the centralized classifier, which votes for the class with the minimum reconstruction distortion. Besides its applicability to settings with big-data communication constraints and/or private classification requirements, the distributed scheme creates a theoretical bridge to the information bottleneck principle. The proposed system demonstrates very promising performance on basic datasets such as MNIST and FashionMNIST.
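A minimal sketch of the decision rule described above, under our own simplifying assumptions (linear encoder/decoder, quantizer omitted): each node's auto-encoder is fitted to one class, and the server assigns a probe to the class whose auto-encoder reconstructs it with the smallest distortion. This is an illustration, not the authors' implementation.

```python
# Classification by minimum reconstruction distortion across per-class
# auto-encoders (simplified: linear maps, no quantizer).
import numpy as np

rng = np.random.default_rng(0)

class ClassNode:
    """One node: a fixed, data-independent random encoder plus a
    class-dependent linear decoder fitted to this node's class only."""
    def __init__(self, data, code_dim=16):
        d = data.shape[1]
        self.encoder = rng.normal(size=(d, code_dim)) / np.sqrt(d)
        codes = data @ self.encoder
        # Decoder minimizes reconstruction distortion on this class's data.
        self.decoder, *_ = np.linalg.lstsq(codes, data, rcond=None)

    def distortion(self, probes):
        recon = (probes @ self.encoder) @ self.decoder
        return np.sum((probes - recon) ** 2, axis=1)

def classify(nodes, probes):
    # Central server: vote for the class with minimum reconstruction distortion.
    distortions = np.stack([n.distortion(probes) for n in nodes])
    return np.argmin(distortions, axis=0)

# Toy data: two classes living in different random 8-dim subspaces of R^64.
X0 = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 64))
X1 = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 64))
nodes = [ClassNode(X0), ClassNode(X1)]
probes = np.vstack([X0[:5], X1[:5]])
print(classify(nodes, probes))  # expected: [0 0 0 0 0 1 1 1 1 1]
```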


2020
Vol 19 (04), pp. 963-986
Author(s): Lev V. Utkin, Andrei V. Konstantinov, Viacheslav S. Chukanov, Anna A. Meldo

A new adaptive weighted deep forest algorithm, which can be viewed as a modification of the confidence screening mechanism, is proposed. The main idea underlying the algorithm is adaptive weighting of every training instance at each cascade level of the deep forest. The confidence screening mechanism for the deep forest proposed by Pang et al. strictly removes instances from the training and testing processes, according to the obtained random forest class probability distributions, in order to simplify the whole algorithm. This strict removal may leave very few training instances at the subsequent levels of the deep forest cascade. The presented modification is more flexible: it assigns weights to instances in order to differentiate their use in building decision trees at every level of the cascade, thereby overcoming the main disadvantage of the confidence screening mechanism. The proposed modification is to some extent similar to the AdaBoost algorithm. Numerical experiments show that the proposed modification outperforms the original deep forest. It is also illustrated how the proposed algorithm can be extended to transfer learning and distance metric learning problems.
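The following sketch illustrates the core idea under our own assumptions (it is not the authors' code): rather than removing confidently classified instances as in confidence screening, every instance is kept and re-weighted at each cascade level by the class-probability estimate of its true class, so that harder instances influence the next level's forests more, in the spirit of AdaBoost. The level count, weighting rule, and feature augmentation below are illustrative choices.

```python
# Adaptive instance weighting across deep-forest cascade levels (sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def adaptive_weighted_cascade(X, y, n_levels=3, seed=0):
    """Assumes integer labels 0..k-1. Returns the last level's forest."""
    weights = np.full(len(X), 1.0 / len(X))
    features = X
    for level in range(n_levels):
        forest = RandomForestClassifier(n_estimators=100,
                                        random_state=seed + level)
        forest.fit(features, y, sample_weight=weights)
        proba = forest.predict_proba(features)   # class probability vectors
        conf = proba[np.arange(len(y)), y]       # probability of the true class
        # Keep all instances, but up-weight the uncertain ones
        # (confidence screening would drop the confident ones instead).
        weights = (1.0 - conf) + 1e-8
        weights /= weights.sum()
        # Deep-forest style: augment features with this level's class probabilities.
        features = np.hstack([X, proba])
    return forest

# Toy usage (prediction on new data would need the same level-wise
# feature augmentation; omitted here for brevity).
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
model = adaptive_weighted_cascade(X, y)
```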


2020
Vol 122, pp. 289-307
Author(s): Xinmin Tao, Qing Li, Chao Ren, Wenjie Guo, Qing He, et al.
