conditional probability distribution
Recently Published Documents


TOTAL DOCUMENTS: 69 (FIVE YEARS 24)
H-INDEX: 11 (FIVE YEARS 2)

2021 ◽  
Author(s):  
Wenwu Gong ◽  
Jie Jiang ◽  
Lili Yang

Abstract. Typhoons and rainstorms are natural hazards that can cause significant damage. They may also occur simultaneously, producing compound hazards and increased losses. Accurate risk assessment of such compound hazards faces several challenges, owing to the uncertainties in evaluating multiple hazard levels and the incomplete information in historical data sets. To address these challenges, we propose a risk assessment model called VFS-IEM-IDM, based on the Variable Fuzzy Set and the Information Diffusion Method. VFS-IEM-IDM provides a comprehensive evaluation of compound hazard levels, verified with a predictive cumulative logistic model. It then applies a normal information diffusion estimator to estimate the conditional probability distribution and the vulnerability distribution of the compound hazards, based on the hazard levels, their occurrence times, and the corresponding losses. To examine the efficacy of VFS-IEM-IDM, we present a case study of the typhoon-rainstorm hazards that occurred in Shenzhen, China. The risk assessment results indicate that hazards of level Ⅱ mostly occur in August and October, while hazards of level Ⅲ often occur in September. The typhoon-rainstorm risk differs from month to month, peaking in August and September, with estimated economic losses of around 114 million RMB and 167 million RMB, respectively.
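In essence, a normal information diffusion estimator spreads each sparse observation over a grid of monitoring points with a Gaussian kernel and normalizes the result into a discrete distribution. Below is a minimal sketch of that idea; the bandwidth heuristic, loss figures, and grid are illustrative assumptions, not the settings used in VFS-IEM-IDM.

```python
import numpy as np

def normal_information_diffusion(observations, grid, h=None):
    """Diffuse each observation over `grid` with a normal kernel and
    normalize the result into a discrete probability distribution."""
    observations = np.asarray(observations, dtype=float)
    grid = np.asarray(grid, dtype=float)
    if h is None:
        # Heuristic (Silverman-style) bandwidth; the paper's diffusion
        # coefficient rule may differ.
        h = 1.06 * observations.std(ddof=1) * len(observations) ** -0.2
    # Each observation contributes a Gaussian bump centered on itself.
    kernel = np.exp(-0.5 * ((grid[None, :] - observations[:, None]) / h) ** 2)
    density = kernel.sum(axis=0)
    return density / density.sum()

# Hypothetical yearly typhoon-rainstorm losses (million RMB) and grid.
losses = [35.0, 80.0, 114.0, 150.0, 167.0, 90.0]
grid = np.linspace(0.0, 250.0, 26)
p = normal_information_diffusion(losses, grid)
print(grid[p.argmax()], p.max())
```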


2021 ◽  
Author(s):  
Shangying Wang ◽  
Simone Bianco

The relationship between the genotype, defined as the set of genetic information encoded in the DNA, and the phenotype, defined as the macroscopic realization of that information, is still unclear. The emergence of a specific phenotype may be linked not only to gene expression, but also to environmental perturbations and experimental conditions. Moreover, even genetically identical cells in identical environments may display a variety of phenotypes. This poses a major challenge for traditional supervised machine learning models, which can only predict fixed phenotypic parameters or categories for specific genetic and/or environmental input conditions. Furthermore, biological noise has been shown to play a crucial role in gene regulation mechanisms, so predicting the average value of a given phenotype is not always sufficient to fully characterize a biological system. In this study, we develop a deep learning algorithm that can predict the conditional probability distribution of a phenotype of interest from a small number of observations per input condition. The key innovation of this study is that the deep neural network automatically generates the probability distributions based on only a few (10 or fewer) noisy measurements for each input condition, with no prior knowledge or assumptions about the probability distributions. This is extremely useful for exploring unknown biological systems with limited measurements for each input condition, and is linked not only to a better quantitative understanding of biological systems, but also to the design of new ones, as in synthetic biology and cellular engineering.
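One way to realize such a network is to have it output a discretized (binned) distribution over the phenotype and train it by maximum likelihood on the few available observations per condition, so no parametric form is assumed. The sketch below illustrates this scheme; the architecture, bin count, phenotype range, and data are hypothetical, not the authors' actual model.

```python
import torch
import torch.nn as nn

N_BINS = 50                      # resolution of the predicted distribution (assumed)
PHENO_MIN, PHENO_MAX = 0.0, 1.0  # assumed phenotype range

class DistributionNet(nn.Module):
    """Map an input condition to a probability distribution over bins."""
    def __init__(self, cond_dim, n_bins=N_BINS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_bins),
        )

    def forward(self, cond):
        # Softmax yields a distribution with no assumption about its shape.
        return torch.softmax(self.net(cond), dim=-1)

def nll_loss(probs, observations):
    """Negative log-likelihood of observed phenotype values under the
    predicted binned distribution. observations: (batch, n_obs)."""
    bins = torch.clamp(
        ((observations - PHENO_MIN) / (PHENO_MAX - PHENO_MIN) * N_BINS).long(),
        0, N_BINS - 1)
    obs_probs = torch.gather(probs, 1, bins)   # p(bin of each observation)
    return -(obs_probs + 1e-9).log().mean()

model = DistributionNet(cond_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cond = torch.randn(32, 8)   # hypothetical input conditions
obs = torch.rand(32, 10)    # 10 noisy measurements per condition
for _ in range(100):
    opt.zero_grad()
    loss = nll_loss(model(cond), obs)
    loss.backward()
    opt.step()
```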


2021 ◽  
Vol 39 (8) ◽  
Author(s):  
Jimoh Sina Ogede ◽  
Soliu Bidemi Adegboyega

The volatility of oil prices and the exchange rate are closely linked, and their relationship is multifaceted. This paper uses a quantile regression model to explain heterogeneous changes in oil prices and their impact on the exchange rate in selected African countries, namely Nigeria, Gabon, and Algeria, between 1995:Q1 and 2018:Q4. This technique allows us to examine the predictors of exchange rate movements across the conditional probability distribution, with special emphasis on both depreciation and appreciation of the domestic currencies. Our results confirm the distributional variability of the relationship between oil price fluctuations and the exchange rate. For OLS, the estimated coefficient of oil price volatility is negative and insignificant at the 5% significance level. The quantile regression (QR) results depict a significantly positive nexus between oil price volatility and the exchange rate at the 10% level of significance, not significantly different from the OLS results. The QR results reveal that appreciations and depreciations in oil prices impact exchange rate movements positively and negatively, respectively. Furthermore, the QR coefficients of oil price volatility at the lower (0.10) and upper (0.90) quantiles are substantially different from zero, indicating that considerable depreciation and appreciation of the US dollar change the exchange rate's response to oil price volatility, which could provide useful insights to investors and policymakers.
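The contrast between a single conditional-mean (OLS) estimate and quantile-specific estimates can be reproduced with standard tooling. A minimal sketch using statsmodels follows; the synthetic quarterly series and variable names are placeholders for the paper's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly data standing in for the 1995:Q1-2018:Q4 sample:
# oil_vol is oil-price volatility, fx the exchange-rate change.
rng = np.random.default_rng(0)
df = pd.DataFrame({"oil_vol": rng.normal(size=96)})
df["fx"] = 0.2 * df["oil_vol"] + rng.normal(scale=0.5, size=96)

# OLS summarizes only the conditional mean; quantile regression traces
# the effect of oil-price volatility across the conditional distribution.
ols = smf.ols("fx ~ oil_vol", df).fit()
print("OLS:", ols.params["oil_vol"])
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    qr = smf.quantreg("fx ~ oil_vol", df).fit(q=q)
    print(f"q={q:.2f}:", qr.params["oil_vol"])
```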


2021 ◽  
Vol 11 (11) ◽  
pp. 4735
Author(s):  
Wang Xu ◽  
Renwen Chen ◽  
Qinbang Zhou ◽  
Fei Liu

In recent years, deep-learning-based super-resolution (SR) methods have achieved impressive performance gains on synthetic clean datasets, but they fail to perform well in real-world scenarios due to insufficient real-world training data. To tackle this issue, we propose a conditional-normalizing-flow-based method for image degradation modeling, named IDFlow, which aims to generate varied degraded low-resolution (LR) images for real-world SR model training. IDFlow treats image degradation modeling as the problem of learning the conditional probability distribution of LR images given the high-resolution (HR) ones, and learns this distribution from existing real-world SR datasets. It first decomposes image degradation modeling into blur degradation modeling and real-world noise modeling, and then uses two multi-scale invertible networks to model these two steps, respectively. Before being applied to real-world SR, IDFlow is trained in a supervised manner on the two real-world datasets RealSR and DPED with a negative log-likelihood (NLL) loss; it is then used to generate a large number of HR-LR image pairs from an arbitrary HR image dataset for SR model training. Extensive experiments on IDFlow with RealSR and DPED are conducted, including evaluations of image degradation stochasticity, degradation modeling, and real-world super-resolution. Two known SR models are trained with IDFlow, named IDFlow-SR and IDFlow-GAN. Testing results on the RealSR and DPED test sets show that IDFlow not only generates realistic degraded images close to real-world images, but also improves real-world SR performance. IDFlow-SR achieves 4× SR performance gains of 0.91 dB and 0.161 in terms of the image quality assessment metrics PSNR and LPIPS, respectively. Moreover, IDFlow-GAN can super-resolve real-world images in the DPED test set with richer textures and clearer patterns, without visible noise, compared with state-of-the-art SR methods. Quantitative and qualitative experimental results demonstrate the effectiveness of the proposed IDFlow.
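The core mechanism, learning p(LR | HR) with an invertible map trained by exact NLL, can be illustrated in a deliberately tiny form. The sketch below uses a single conditional affine transform in place of IDFlow's multi-scale invertible networks; all shapes and the conditioning network are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """Toy conditional flow: LR = z * exp(s(HR)) + t(HR), z ~ N(0, I)."""
    def __init__(self, channels=3):
        super().__init__()
        # Predict per-pixel shift t(HR) and log-scale s(HR); a single conv
        # stands in for a real conditioning network.
        self.cond_net = nn.Conv2d(channels, 2 * channels, 3, padding=1)

    def nll(self, lr, hr_down):
        t, s = self.cond_net(hr_down).chunk(2, dim=1)
        z = (lr - t) * torch.exp(-s)           # invertible map LR -> z
        # Exact NLL: standard-normal base density plus the log-determinant.
        log2pi = torch.log(torch.tensor(2 * torch.pi))
        return (0.5 * z ** 2 + 0.5 * log2pi + s).mean()

flow = ConditionalAffineFlow()
opt = torch.optim.Adam(flow.parameters(), lr=1e-4)
hr_down = torch.rand(4, 3, 32, 32)  # HR patches downsampled to LR size (assumed)
lr_real = torch.rand(4, 3, 32, 32)  # paired real-world LR patches
opt.zero_grad()
loss = flow.nll(lr_real, hr_down)   # one NLL training step
loss.backward()
opt.step()
# Sampling new degraded LR images: draw z ~ N(0, I), apply LR = z*exp(s) + t.
```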


2021 ◽  
Vol 14 (5) ◽  
pp. 213
Author(s):  
Tomaso Aste

Systemic risk, in a complex system with several interrelated variables such as a financial market, is quantifiable from the multivariate probability distribution describing the reciprocal influence between the system's variables. The effect of stress on the system is reflected in the change of this multivariate probability distribution when conditioned on some of the variables being at a given stress amplitude. Knowledge of the conditional probability distribution function can therefore provide a full quantification of risk and stress propagation in the system. However, multivariate probabilities are hard to estimate from observations. In this paper, I investigate the vast family of multivariate elliptical distributions, discussing their estimation from data and proposing novel measures of stress impact and systemic risk in systems with many interrelated variables. Specific examples are described for the multivariate Student-t and the multivariate normal distributions applied to financial stress testing. An example based on the US equity market illustrates the practical potential of this approach.
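For the multivariate normal member of the elliptical family, conditioning on stressed variables has a closed form via the Schur complement. A sketch of that special case follows (the Student-t case additionally rescales the conditional covariance); the 3-asset figures are hypothetical.

```python
import numpy as np

def conditional_gaussian(mu, sigma, idx_stress, x_stress):
    """Condition N(mu, sigma) on variables idx_stress being fixed at
    x_stress; returns the conditional mean and covariance of the rest."""
    idx_free = np.setdiff1d(np.arange(len(mu)), idx_stress)
    s11 = sigma[np.ix_(idx_free, idx_free)]
    s12 = sigma[np.ix_(idx_free, idx_stress)]
    s22 = sigma[np.ix_(idx_stress, idx_stress)]
    w = s12 @ np.linalg.inv(s22)
    mu_c = mu[idx_free] + w @ (x_stress - mu[idx_stress])
    sigma_c = s11 - w @ s12.T   # Schur complement
    return mu_c, sigma_c

# Hypothetical 3-asset market: stress asset 0 at -3 standard deviations
# and read off how the shock propagates to the other assets.
mu = np.zeros(3)
sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
mu_c, sigma_c = conditional_gaussian(mu, sigma, np.array([0]), np.array([-3.0]))
print(mu_c, np.diag(sigma_c))
```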


2021 ◽  
Vol 4 (1) ◽  
pp. 94-106
Author(s):  
S. Ibrahim-Tiamiyu ◽  
O. V. Oni ◽  
E. O. Adeleke

COVID-19 is an emerging viral infection whose outbreak has been termed one of the great epidemics of the 21st century; it has caused so many deaths that the WHO declared it a pandemic emergency. The virus is new, and randomness and uncertainty are among its common features. In this paper, we develop a model for analyzing COVID-19 dynamics using Markov-chain theory. We employ the conditional probability distribution embedded in the Markov property of the chain to construct the transition probabilities used to determine the probability distributions of COVID-19 patients and to predict the future spread dynamics. We provide a step-by-step approach to obtaining the probability distributions of infected and recovered individuals, of an infected individual recovering, and of a recovered patient becoming infected again. This study reveals that, irrespective of the initial state of health of an individual, the probability of being infected tends to P_RI/(P_IR + P_RI) and the probability of having recovered tends to P_IR/(P_IR + P_RI). Also, with increasing n we reach an equilibrium that does not depend on the initial conditions: at some point in time the situation stabilizes, and the distribution of X_(n+1) is the same as that of X_n. We envision that the output of this model will assist those in the health system and related fields to plan for the potential impact of the pandemic and its peak.
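The convergence to an equilibrium that is independent of the initial state is easy to verify numerically for a two-state (Infected/Recovered) chain. A short sketch follows; the transition probabilities are hypothetical placeholders, not estimates from the paper.

```python
import numpy as np

# Two-state chain over (Infected, Recovered); hypothetical probabilities.
p_ir = 0.30   # P(Recovered at n+1 | Infected at n)
p_ri = 0.05   # P(Infected at n+1 | Recovered at n)
P = np.array([[1 - p_ir, p_ir],
              [p_ri, 1 - p_ri]])

# Iterate X_(n+1) = X_n P from a fully infected start: the distribution
# converges to the closed-form equilibrium, whatever the initial state.
x = np.array([1.0, 0.0])
for _ in range(200):
    x = x @ P
pi_infected = p_ri / (p_ir + p_ri)
pi_recovered = p_ir / (p_ir + p_ri)
print(x, (pi_infected, pi_recovered))
```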


2021 ◽  
Author(s):  
Gang Xi ◽  
Xiaoyi Yang ◽  
Ming Xi

Abstract. Value is one of the most fundamental concepts in economics. The existing main definitions of value have certain limitations and are difficult to unify and quantify. This article therefore presents a method of quantifying value based on conditional probability theory: we treat value as a random variable, with price being the value of a good in terms of money, and use a conditional probability distribution, derived from the price's historical records, quantitative statistics, and human experience, to measure value. Furthermore, the mean and variance of the random variable are used to describe the weighted average of the possible values and the dispersion of the value distribution. This method provides a new perspective on the measurement of value.
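A minimal empirical reading of this proposal: build the conditional distribution of value from a price history and summarize it by its mean and variance. The sketch below uses hypothetical price figures purely for illustration.

```python
import numpy as np

# Hypothetical price history of one good; each price is an observed
# realization of its value V in terms of money.
prices = np.array([9.5, 10.0, 10.0, 10.5, 10.0, 11.0, 10.5, 10.0])
values, counts = np.unique(prices, return_counts=True)
probs = counts / counts.sum()                 # empirical P(V = v | history)
mean = np.sum(values * probs)                 # weighted average of possible values
var = np.sum((values - mean) ** 2 * probs)    # dispersion of the value distribution
print(dict(zip(values, probs)), mean, var)
```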


Entropy ◽  
2021 ◽  
Vol 23 (3) ◽  
pp. 287 ◽  
Author(s):  
Riccardo Volpi ◽  
Uddhipan Thakur ◽  
Luigi Malagò

Word embeddings based on a conditional model are commonly used in Natural Language Processing (NLP) tasks to embed the words of a dictionary in a low-dimensional linear space. Their computation is based on maximizing the likelihood of a conditional probability distribution for each word of the dictionary. These distributions form a Riemannian statistical manifold, where word embeddings can be interpreted as vectors in the tangent space at a specific reference measure on the manifold. A novel family of word embeddings, called α-embeddings, has recently been introduced; it derives from a geometrical deformation of the simplex of probabilities through a parameter α, using notions from Information Geometry. After introducing the α-embeddings, we show how the deformation of the simplex, controlled by α, provides an extra handle for increasing performance on several intrinsic and extrinsic NLP tasks. We test the α-embeddings on different tasks with models of increasing complexity, showing that the advantages of α-embeddings persist for models with a large number of parameters. Finally, we show that tuning α yields higher performance than using larger models in which a transformation of the embeddings is additionally learned during training, as experimentally verified in attention models.
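As a concrete reference point for the deformation, Amari's α-representation from Information Geometry maps a probability p to (2/(1−α))·p^((1−α)/2), with log p as the α → 1 limit. The sketch below applies this standard deformation to a toy conditional distribution; whether the paper's α-embeddings use exactly this parametrization (and which reference measure) is an assumption here.

```python
import numpy as np

def alpha_representation(p, alpha):
    """Amari's alpha-representation of a probability vector: the standard
    Information Geometry deformation of the simplex."""
    if np.isclose(alpha, 1.0):
        return np.log(p)   # limiting case: logarithmic coordinates
    return 2.0 / (1.0 - alpha) * p ** ((1.0 - alpha) / 2.0)

# Toy conditional distribution p(context | word) over a 5-word dictionary.
logits = np.array([2.0, 1.0, 0.5, 0.0, -1.0])
p = np.exp(logits) / np.exp(logits).sum()     # softmax probabilities
for a in (-1.0, 0.0, 1.0):                    # mixture, sqrt, and log coords
    print(a, alpha_representation(p, a))
```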


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1945
Author(s):  
Ravil I. Mukhamediev ◽  
Kirill Yakunin ◽  
Rustam Mussabayev ◽  
Timur Buldybayev ◽  
Yan Kuchin ◽  
...  

Mass media not only reflect the activities of state bodies but also shape the informational context, sentiment, depth, and level of significance attributed to certain state initiatives and social events. A multilateral and, to the extent practicable, quantitative assessment of media activity is important for understanding their objectivity, role, focus, and, ultimately, the quality of society's "fourth estate". The paper proposes a method for evaluating media along several modalities (topics, evaluation criteria/properties, classes), combining topic modeling of text corpora with multiple-criteria decision making. The evaluation proceeds as follows: after the topic model of the corpora is formed, the conditional probability distribution of media over topics, properties, and classes is calculated. Several approaches are used to obtain weights describing how each topic relates to each evaluation criterion/property and to each class described in the paper, including manual high-level labeling, a multi-corpora approach, and an automatic approach. The proposed multi-corpora approach assesses the topical asymmetry of the corpora to obtain the weights describing each topic's relationship to a given criterion/property. These weights, combined with the topic model, can be applied to evaluate each document in the corpora against each of the considered criteria and classes. The proposed method was applied to a corpus of 804,829 news publications from 40 Kazakhstani sources, published from 1 January 2018 to 31 December 2019, to classify negative information on socially significant topics. A BigARTM model with 200 topics was derived and the proposed method was applied, including filling a table for the analytic hierarchy process (AHP) and carrying out all the necessary high-level labeling procedures. The experiments confirm the general feasibility of evaluating media using a topic model of the text corpora: an area under the receiver operating characteristic curve (ROC AUC) score of 0.81 was achieved on the classification task, comparable with results obtained on the same task with a BERT (Bidirectional Encoder Representations from Transformers) model.
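The scoring step described above reduces to a matrix product: given p(topic | document) from the topic model and a weight w(topic, criterion) per topic and criterion, a document's score on a criterion is the probability-weighted sum over topics. A small sketch with hypothetical shapes and random figures:

```python
import numpy as np

n_docs, n_topics, n_criteria = 4, 6, 2
rng = np.random.default_rng(42)
# p(topic | doc) from a topic model such as BigARTM (random stand-in here).
doc_topic = rng.dirichlet(np.ones(n_topics), size=n_docs)
# w(topic, criterion), e.g. negativity and social significance (assumed).
topic_weight = rng.random((n_topics, n_criteria))

# Score of each document on each criterion: sum over topics of
# p(topic | doc) * w(topic, criterion).
doc_scores = doc_topic @ topic_weight   # shape (n_docs, n_criteria)
print(doc_scores)
```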

