conditional estimates
Recently Published Documents


TOTAL DOCUMENTS: 19 (FIVE YEARS: 2)

H-INDEX: 4 (FIVE YEARS: 0)

2020 ◽  
Vol 13 (4) ◽  
pp. 82-94
Author(s):  
Muhammad Akbar ◽  
Aima Tahir ◽  
Syeda Faiza Urooj

We examine the intraday returns and volatility in the US equity market amid the COVID-19 pandemic crisis. Our empirical results suggest an increase in volatility over time, with mostly negative returns and higher volatility in the last trading session of the day. Our univariate analysis reveals structural break(s) since the first trading halt in March 2020, and failure to account for these may lead to biased and unstable conditional estimates. Allowing for time-varying conditional variance and conditional correlation, our dynamic conditional correlation tests suggest that COVID-19 cases and deaths are jointly related to stock returns and realised volatility.
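The realised-volatility and structural-break idea described above can be sketched in a few lines. This is a minimal illustration, not the authors' univariate/DCC pipeline: the data, the candidate break point, and the mean-shift diagnostic below are all hypothetical stand-ins.

```python
import statistics

def realized_volatility(returns):
    """Realised volatility: square root of the sum of squared intraday returns."""
    return sum(r * r for r in returns) ** 0.5

def mean_shift_stat(series, break_idx):
    """Naive structural-break diagnostic: absolute difference between the
    mean of the series before and after a candidate break point."""
    pre, post = series[:break_idx], series[break_idx:]
    return abs(statistics.mean(post) - statistics.mean(pre))

# Hypothetical daily realised-volatility series around the March 2020 halt
rv = [0.010, 0.012, 0.011, 0.010, 0.030, 0.035, 0.032, 0.031]
shift = mean_shift_stat(rv, break_idx=4)  # a large value hints at a break
```

A formal analysis would use a Chow or Bai-Perron style test rather than a raw mean difference, but the diagnostic above conveys why ignoring the break biases conditional estimates: the pre- and post-break regimes have very different volatility levels.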


2018 ◽  
Author(s):  
Maithreyi Gopalan ◽  
Elizabeth Tipton

The National Study of Learning Mindsets (NSLM) is a randomized trial evaluating an intervention in a national sample of schools selected to participate via probability sampling methods. The response rate for this study was 56%. This paper evaluates whether site-level non-response compromises the generalizability of the results from the achieved sample of schools in the NSLM. Comparisons of the characteristics of schools taking part in the NSLM with national benchmarks show, via two metrics, that the NSLM sample is highly similar to the population of all regular U.S. public high schools with at least 25 students in 9th grade and in which 9th grade is the lowest grade. First, comparisons of school- and district-level characteristics between the NSLM and the national population of inference show few statistically significant differences. Second, an empirical method for quantifying the degree of generalizability, the Tipton (2014) generalizability index, shows that the analytic sample is generalizable to the population overall (generalizability index = .98 on a 0 to 1 scale) and to four other theoretically relevant inference populations defined by school achievement level and school minority concentration measures (generalizability indices > .93). Thus, full-sample estimates and conditional estimates (within school achievement and racial composition subgroups) are likely to be highly generalizable to the corresponding populations of inference.
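The Tipton (2014) index is commonly computed as the Bhattacharyya coefficient between the sample's and the population's distributions of estimated sampling propensity scores. The sketch below assumes those scores have already been estimated; the binning scheme and inputs are illustrative, not the NSLM analysis itself.

```python
def generalizability_index(sample_scores, population_scores, bins=10):
    """Sketch of the Tipton (2014) generalizability index: the Bhattacharyya
    coefficient between binned distributions of estimated sampling propensity
    scores (0 = no overlap, 1 = identical distributions)."""
    lo = min(min(sample_scores), min(population_scores))
    hi = max(max(sample_scores), max(population_scores))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(xs) for c in counts]
    p, q = hist(sample_scores), hist(population_scores)
    return sum((pi * qi) ** 0.5 for pi, qi in zip(p, q))
```

An index near 1 (as with the NSLM's .98) indicates the sample's propensity-score distribution closely matches the population's, supporting generalization of both full-sample and subgroup conditional estimates.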


2018 ◽  
Vol 42 (3) ◽  
pp. 501-509 ◽  
Author(s):  
S. A. Brianskiy ◽  
Yu. V. Vizilter

We propose new morphological conditional estimates of image complexity and information content, as well as morphological mutual information. These morphological estimates take into account both the number and the shape of image tessellation (mosaic) regions. We achieve such an account of region shape via the joint use of mosaic image shape models based on the morphological image analysis (MIA) proposed by Yu. Pyt’ev and morphological thickness maps from the mathematical morphology (MM) introduced by J. Serra. Mathematical properties of morphological thickness maps are explored with respect to the properties of structuring elements, and the corresponding properties of the proposed morphological image complexity and information content are proved. Some experimental results on image shape comparison in terms of shape complexity and information are reported. Open-access images from the Kimia99 database are utilized for these experiments.
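The thickness-map idea can be illustrated in one dimension, where the structuring element reduces to a segment: each foreground pixel is labelled with the length of the largest run of foreground containing it. This is a simplified sketch of Serra-style thickness, not the paper's 2-D construction over mosaic regions.

```python
def thickness_map_1d(mask):
    """1-D sketch of a morphological thickness map: each foreground position
    is labelled with the length of the maximal run of 1s containing it,
    i.e. the largest structuring segment that fits at that position."""
    out = [0] * len(mask)
    i = 0
    while i < len(mask):
        if mask[i]:
            j = i
            while j < len(mask) and mask[j]:
                j += 1            # scan to the end of this run of 1s
            for k in range(i, j):
                out[k] = j - i    # every pixel in the run gets the run length
            i = j
        else:
            i += 1
    return out
```

A complexity estimate in the spirit of the paper could then be built from the distribution of thickness values, since shapes with many thin regions carry more structural detail than a single thick blob.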


2016 ◽  
Author(s):  
Biao Zeng ◽  
Luke R. Lloyd-Jones ◽  
Alexander Holloway ◽  
Urko M. Marigorta ◽  
Andres Metspalu ◽  
...  

Abstract. Expression QTL (eQTL) detection has emerged as an important tool for unravelling the relationship between genetic risk factors and disease or clinical phenotypes. Most studies use single-marker linear regression to discover primary signals, followed by sequential conditional modeling to detect secondary genetic variants affecting gene expression. However, this approach assumes that functional variants are sparsely distributed and that close linkage between them has little impact on estimation of their precise location and magnitude of effects. In this study, we address the prevalence of secondary signals and the bias in estimation of their effects by performing multi-site linear regression on two large human cohort peripheral blood gene expression datasets (each with more than 2,500 samples) with accompanying whole-genome genotypes, namely the CAGE compendium of Illumina microarray studies and the Framingham Heart Study Affymetrix data. Stepwise conditional modeling demonstrates that multiple eQTL signals are present for ~40% of over 3,500 eGenes in both datasets, and the number of loci with additional signals decreases by approximately two-thirds with each conditioning step. However, the concordance of specific signals between the two studies is only ~30%, indicating that the expression profiling platform is a large source of variance in effect estimation. Furthermore, a series of simulation studies imply that, in the presence of multi-site regulation, up to 10% of the secondary signals could be artefacts of incomplete tagging, and at least 5% but up to one quarter of credible intervals may not even include the causal site, which is thus mis-localized. Joint multi-site effect estimation recalibrates effect size estimates by only a small amount on average. Presumably similar conclusions apply to most types of quantitative trait.
Given the strong empirical evidence that gene expression is commonly regulated by more than one variant, we conclude that the fine-mapping of causal variants needs to be adjusted for multi-site influences, as conditional estimates can be highly biased by interference among linked sites.
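Sequential conditional modelling of the kind described above can be sketched as forward stepwise regression: at each step, the marker that most improves the joint fit is added, conditioning on previously selected signals. The function below is an illustrative stand-in; its names and stopping thresholds are assumptions, not the authors' pipeline.

```python
import numpy as np

def stepwise_conditional_eqtl(genotypes, expression, max_signals=5, r2_gain_min=0.01):
    """Forward stepwise conditional modelling (sketch): repeatedly add the
    marker that most improves the R^2 of a joint OLS fit, conditioning on
    previously selected signals. Thresholds are illustrative assumptions."""
    n, m = genotypes.shape
    centered = expression - expression.mean()
    tss = float(centered @ centered)  # total sum of squares
    selected, best_r2 = [], 0.0
    for _ in range(max_signals):
        gains = []
        for j in range(m):
            if j in selected:
                continue
            # Joint fit on intercept + all selected markers + candidate j
            X = np.column_stack([np.ones(n)] + [genotypes[:, k] for k in selected + [j]])
            beta, *_ = np.linalg.lstsq(X, expression, rcond=None)
            resid = expression - X @ beta
            gains.append((1.0 - float(resid @ resid) / tss, j))
        if not gains:
            break
        r2, j = max(gains)
        if r2 - best_r2 < r2_gain_min:  # stop when conditioning adds little
            break
        selected.append(j)
        best_r2 = r2
    return selected, best_r2
```

Because each step refits all selected markers jointly, this mimics the recalibration the abstract describes: effect sizes of earlier signals are re-estimated in the presence of later ones, which is exactly where linkage between sites can bias purely sequential conditional estimates.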


Author(s):  
S. W. Franks ◽  
C. J. White ◽  
M. Gensen

Abstract. Hydrological extremes are amongst the most devastating forms of natural disasters, both in terms of lives lost and socio-economic impacts. There is consequently an imperative to robustly estimate the frequency and magnitude of hydrological extremes. Traditionally, engineers have employed purely statistical approaches to the estimation of flood risk. For example, for an observed hydrological time series, each annual maximum flood is extracted and a frequency distribution is fitted to these data. The fitted distribution is then extrapolated to provide an estimate of the required design risk (e.g. the 1% Annual Exceedance Probability, AEP). Such traditional approaches are overly simplistic in that risk is implicitly assumed to be static; in other words, climatological processes are assumed to be randomly distributed in time. In this study, flood risk estimates from traditional statistical approaches are evaluated against Pacific Decadal Oscillation (PDO)/El Niño-Southern Oscillation (ENSO) conditional estimates for a flood-prone catchment in eastern Australia. A paleo-reconstruction of pre-instrumental PDO/ENSO occurrence is then employed to estimate the uncertainty associated with the estimation of the 1% AEP flood. The results indicate a significant underestimation of the uncertainty associated with extreme flood events when employing the traditional engineering estimates.
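The traditional "static risk" estimate described above amounts to fitting a frequency distribution to annual maxima and reading off a quantile; conditional estimates repeat the exercise within climate states. A minimal sketch using a method-of-moments Gumbel (EV1) fit, with hypothetical PDO/ENSO-stratified data (the figures below are invented for illustration):

```python
import math
import statistics

def gumbel_quantile(annual_maxima, aep):
    """Fit a Gumbel (EV1) distribution by the method of moments and return
    the flood magnitude with the given annual exceedance probability
    (aep=0.01 gives the 1% AEP design flood)."""
    mean = statistics.mean(annual_maxima)
    sd = statistics.stdev(annual_maxima)
    beta = sd * math.sqrt(6) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta            # location (Euler-Mascheroni constant)
    return mu - beta * math.log(-math.log(1.0 - aep))

# Hypothetical annual maxima stratified by PDO/ENSO state: conditional
# estimates can differ sharply from the pooled "static risk" estimate.
wet_years = [900.0, 1200.0, 1500.0, 1100.0, 1300.0]
dry_years = [300.0, 250.0, 400.0, 350.0, 280.0]
pooled = wet_years + dry_years
```

Comparing `gumbel_quantile(pooled, 0.01)` with the wet-state and dry-state conditional quantiles shows how pooling regimes with different flood-generating mechanisms can misrepresent both the design flood and, as the study argues, the uncertainty around it.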

