Probabilistic downscaling of precipitation data in a subtropical mountain area: a two-step approach

2011, Vol 18 (2), pp. 223-234
Author(s): R. Haas, K. Born

Abstract. In this study, a two-step probabilistic downscaling approach is introduced and evaluated. The method is applied, by way of example, to precipitation observations in the subtropical mountain environment of the High Atlas in Morocco. The challenge is to deal with complex terrain, heavily skewed precipitation distributions, and sparse data in both space and time. In the first step of the approach, a transfer function between distributions of large-scale predictors and of local observations is derived, with the aim of forecasting cumulative distribution functions whose parameters are estimated from known data. In order to interpolate between sites, the second step applies multiple linear regression to the distribution parameters of the observed data using local topographic information. By combining both steps, a prediction at every point of the investigation area is achieved. Both steps and their combination are assessed by cross-validation and by splitting the available dataset into a training and a validation subset. Because the approach estimates quantiles and probabilities of zero daily precipitation, it is found to be adequate even in areas with difficult topographic circumstances and low data availability.
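
The abstract does not name the distribution family, so the following is only a minimal sketch of the two-step idea: step 1 fits a dry-day probability plus a (here assumed) gamma distribution to wet-day amounts at a gauged site; step 2 regresses those distribution parameters on hypothetical topographic predictors (e.g. elevation, slope) to interpolate them to ungauged points.

```python
# Hedged sketch of the two-step downscaling idea (not the authors' code).
import numpy as np
from scipy import stats

def fit_local_distribution(daily_precip):
    """Step 1: distribution parameters at one observation site."""
    p_dry = np.mean(daily_precip <= 0.0)            # probability of zero precipitation
    wet = daily_precip[daily_precip > 0.0]
    shape, _, scale = stats.gamma.fit(wet, floc=0)  # skewed wet-day amounts (assumed gamma)
    return p_dry, shape, scale

def interpolate_parameter(site_values, site_topo, target_topo):
    """Step 2: multiple linear regression of one parameter on topography."""
    X = np.column_stack([np.ones(len(site_topo)), site_topo])  # intercept + predictors
    coeffs, *_ = np.linalg.lstsq(X, site_values, rcond=None)
    return np.concatenate([[1.0], target_topo]) @ coeffs

# Combining both steps at an ungauged point, the 90% quantile of the mixed
# dry/wet CDF would follow as (for p_dry < 0.9):
# q90 = stats.gamma.ppf((0.9 - p_dry) / (1.0 - p_dry), shape, scale=scale)
```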

2017, Vol 32 (3), pp. 1161-1183
Author(s): Bryan M. Burlingame, Clark Evans, Paul J. Roebber

Abstract. This study evaluates the influence of planetary boundary layer (PBL) parameterization on short-range (0–15 h) convection initiation (CI) forecasts within convection-allowing ensembles that utilize subsynoptic-scale observations collected during the Mesoscale Predictability Experiment. Three cases, 19–20 May, 31 May–1 June, and 8–9 June 2013, are considered, each characterized by a different large-scale flow pattern. An object-based method is used to verify and analyze CI forecasts. Local mixing parameterizations have, relative to nonlocal mixing parameterizations, higher probabilities of detection but also higher false alarm ratios, such that ensemble mean forecast skill varies only subtly between the parameterizations considered. Temporal error distributions associated with matched events are approximately normal around a zero mean, suggesting little systematic timing bias. Spatial error distributions are skewed, with mean (median) distance errors of approximately 44 km (28 km). Matched-event cumulative distribution functions suggest limited forecast skill increases beyond temporal and spatial thresholds of 1 h and 100 km, respectively. Forecast skill varies most strongly between cases, with smaller variation between PBL parameterizations or between individual ensemble members for a given case, implying that CI forecast skill is controlled more by larger-scale features than by the PBL parameterization. In agreement with previous studies, local mixing parameterizations tend to produce simulated boundary layers that are too shallow, cool, and moist, while nonlocal mixing parameterizations tend to produce boundary layers that are deeper, warmer, and drier. Forecasts poorly resolve strong capping inversions across all parameterizations, which is hypothesized to result primarily from implicit numerical diffusion associated with the default finite-differencing formulation for vertical advection used herein.
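
The object-based matching described above can be illustrated with a hedged sketch (the matching rule and thresholds here are assumptions consistent with the 1 h and 100 km values quoted, not the study's code): forecast and observed CI objects are paired when they fall within the spatial and temporal thresholds, and probability of detection (POD) and false alarm ratio (FAR) follow from the match counts.

```python
# Illustrative object-based CI verification: greedy one-to-one matching.
import numpy as np

def match_events(fcst, obs, max_dist_km=100.0, max_dt_h=1.0):
    """fcst/obs: sequences of (x_km, y_km, t_h) CI object centroids."""
    hits, used = 0, set()
    for fx, fy, ft in fcst:
        for j, (ox, oy, ot) in enumerate(obs):
            if j in used:
                continue
            if np.hypot(fx - ox, fy - oy) <= max_dist_km and abs(ft - ot) <= max_dt_h:
                hits += 1
                used.add(j)
                break
    pod = hits / len(obs) if len(obs) else np.nan        # probability of detection
    far = 1.0 - hits / len(fcst) if len(fcst) else np.nan  # false alarm ratio
    return pod, far
```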


2016, Vol 55 (9), pp. 2091-2108
Author(s): Michael Weniger, Petra Friederichs

Abstract. The feature-based spatial verification method named for its three score components: structure, amplitude, and location (SAL) is applied to cloud data, that is, two-dimensional spatial fields of total cloud cover and spectral radiance. Model output is obtained from the German-focused Consortium for Small-Scale Modeling (COSMO-DE) forward operator Synthetic Satellite Simulator (SynSat) and compared with SEVIRI satellite data. The aim of this study is twofold: first, to assess the applicability of SAL to this kind of data and, second, to analyze the role of external object identification algorithms (OIA) and the effects of observational uncertainties on the resulting scores. A comparison of three different OIA shows that the threshold level, which is a fundamental part of all studied algorithms, induces high sensitivity and unstable behavior of object-dependent SAL scores (i.e., even very small changes in parameter values lead to large changes in the resulting scores). An in-depth statistical analysis reveals significant effects on distributional quantities commonly used in the interpretation of SAL, for example, median and interquartile distance. Two sensitivity indicators that are based on the univariate cumulative distribution functions are derived. They make it possible to assess the sensitivity of the SAL scores to threshold-level changes without computationally expensive iterative calculations of SAL for various thresholds. The mathematical structure of these indicators connects the sensitivity of the SAL scores to parameter changes with the effect of observational uncertainties. Last, the discriminating power of SAL is studied. It is shown that—for large-scale cloud data—changes in the parameters may have larger effects on the object-dependent SAL scores (i.e., the S and L2 scores) than does a complete loss of temporal collocation.
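
The threshold step shared by the studied OIA can be sketched as follows (the field and threshold values are synthetic illustrations, not from the paper): contiguous grid points exceeding a threshold form one object, and counting objects across nearby thresholds exposes the instability that propagates into the object-dependent SAL scores.

```python
# Hedged sketch of threshold-based object identification on a 2D field.
import numpy as np
from scipy import ndimage

def identify_objects(field, threshold):
    """Label contiguous regions of a 2D field exceeding a threshold."""
    mask = field >= threshold
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects

# Small threshold changes can merge or split objects:
field = np.random.default_rng(0).gamma(2.0, 1.0, size=(100, 100))
for thr in (3.0, 3.1, 3.2):
    _, n = identify_objects(field, thr)
    print(f"threshold={thr}: {n} objects")
```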


Stats, 2020, Vol 3 (3), pp. 412-426
Author(s): Edmund Marth, Gerd Bramerdorfer

In the field of electrical machine design, excellent performance for multiple objectives, such as efficiency or torque density, can be achieved using contemporary optimization techniques. Unfortunately, highly optimized designs tend to be rather sensitive to uncertainties in the design parameters. This paper introduces an approach for rating the sensitivity of designs with a large number of tolerance-affected parameters using cumulative distribution functions (CDFs) based on finite element analysis results. The accuracy of the CDFs is estimated using the Dvoretzky–Kiefer–Wolfowitz inequality as well as the bootstrapping method. The advantage of the presented technique is that computational time can be kept low, even for complex problems. As a demanding test case, the effect of imperfect permanent magnets on the cogging torque of a Vernier machine with 192 tolerance-affected parameters is investigated. The results reveal that, for this problem, a reliable statement about robustness can already be made with 1000 finite element calculations.
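
The Dvoretzky–Kiefer–Wolfowitz (DKW) bound named above is distribution-free and gives a confidence band of half-width eps = sqrt(ln(2/alpha) / (2n)) around the empirical CDF. A minimal sketch (the sample data here are synthetic, not finite element results):

```python
# Empirical CDF with a DKW confidence band.
import numpy as np

def ecdf_with_dkw_band(samples, alpha=0.05):
    x = np.sort(samples)
    f = np.arange(1, len(x) + 1) / len(x)                 # empirical CDF
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * len(x)))   # DKW half-width
    return x, f, np.clip(f - eps, 0.0, 1.0), np.clip(f + eps, 0.0, 1.0)

# With n = 1000 samples (e.g. cogging-torque evaluations), eps ≈ 0.043 at
# 95% confidence, which is consistent with a reliable robustness statement
# from 1000 finite element calculations.
x, f, lo, hi = ecdf_with_dkw_band(np.random.default_rng(1).normal(size=1000))
```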


Author(s): Katharina Göckeler, Steffen Terhaar, Christian Oliver Paschereit

Residence time distributions in a swirling, premixed combustor flow are determined by means of tracer experiments and a reactor network model. The measurements were conducted at nonreacting, reacting, and steam-diluted reacting conditions for steam contents of up to 30% of the air mass flow. The tracer distribution was obtained from the light scattering of seeding particles employing the quantitative light sheet technique (QLS). At steady operating conditions, a positive step in the particle feed was applied, yielding cumulative distribution functions (CDFs) for the tracer response. The shape of the curve is characteristic of the local degree of mixedness. Fresh and recirculating gases were found to mix rapidly at nonreacting and highly steam-diluted conditions, whereas mixing was more gradual at dry reacting conditions. The instantaneous mixing near the burner outlet is related to the presence of a large-scale helical structure, which was suppressed at dry reacting conditions. Zones of similar mixing time scales, such as the recirculation zones, are identified. The CDF curves in these zones are reproduced by a network model of plug flow and perfectly mixed flow reactors. Reactor residence times and inlet volume flow fractions obtained in this way provide data for kinetic network models.
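
The simplest instance of such a reactor network is a plug flow reactor (a pure time delay tau_p) in series with a perfectly mixed reactor (exponential response with mean residence time tau_m), giving the step-response CDF F(t) = 1 - exp(-(t - tau_p)/tau_m) for t >= tau_p. A hedged sketch with illustrative parameter values (not those of the paper):

```python
# Step-response CDF of a PFR (delay) in series with a perfectly mixed reactor.
import numpy as np

def step_response_cdf(t, tau_p=1.0, tau_m=5.0):
    """Tracer step response; tau_p = plug flow delay, tau_m = mixed reactor time."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= tau_p, 1.0 - np.exp(-(t - tau_p) / tau_m), 0.0)

t = np.linspace(0.0, 30.0, 301)
F = step_response_cdf(t)   # tau_p, tau_m would be fitted to the measured CDF curves
```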


2019, Vol 491 (3), pp. 4247-4253
Author(s): David Harvey, Wessel Valkenburg, Amelie Tamone, Alexey Boyarsky, Frederic Courbin, ...

ABSTRACT: Flux ratio anomalies in strong gravitationally lensed quasars constitute a unique way to probe the abundance of non-luminous dark matter haloes, and hence the nature of dark matter. In this paper, we identify double-imaged quasars as a statistically efficient probe of dark matter, since they are 20 times more abundant than quadruply imaged quasars. Using N-body simulations that include realistic baryonic feedback, we measure the full distribution of flux ratios in doubly imaged quasars for cold (CDM) and warm dark matter (WDM) cosmologies. Through this method, we fold in two key systematics: quasar variability and line-of-sight structures. We find that WDM cosmologies predict a ∼6 per cent difference in the cumulative distribution functions of flux ratios relative to CDM, with CDM predicting many more small ratios. Finally, we estimate that ∼600 doubly imaged quasars would need to be observed in order to unambiguously discern between CDM and the two WDM models studied here. Such sample sizes will be easily within reach of future large-scale surveys such as Euclid. In preparation for these survey data, the scale of the uncertainties in modelling lens galaxies and their substructure in simulations must be determined, together with a strong understanding of the selection function of observed lensed quasars.
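
The sample-size argument can be illustrated with a toy two-sample CDF comparison (the beta distributions below are arbitrary stand-ins, not the simulated flux-ratio distributions): as the number of doubly imaged quasars grows, a small CDF offset becomes statistically distinguishable.

```python
# Toy illustration: distinguishing two nearby flux-ratio CDFs vs sample size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (100, 300, 600):
    cdm = rng.beta(2.0, 5.0, size=n)   # stand-in for the CDM flux-ratio distribution
    wdm = rng.beta(2.2, 5.0, size=n)   # stand-in for a WDM cosmology
    ks, p = stats.ks_2samp(cdm, wdm)
    print(f"n={n}: KS={ks:.3f}, p={p:.3f}")
```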


2014, Vol 635-637, pp. 411-416
Author(s): Shu Xin Zhang, Kai Chao Yu

This paper presents a method for the probabilistic sensitivity analysis of mechanical components or structural systems subject to random uncertainties in loads, material properties, and geometry. The bivariate dimension reduction method is applied to compute the response moments and their sensitivities with respect to the distribution parameters of the basic random variables. Saddlepoint approximations with truncated cumulant generating functions are employed to estimate the probability density functions and cumulative distribution functions of the random responses. The sensitivities of the probability of failure of the systems under consideration with respect to the distribution parameters of the basic random variables are derived rigorously and analytically. Finally, the practicality and efficiency of the proposed method are demonstrated by an application example.
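
As background, the standard saddlepoint (Lugannani–Rice) CDF approximation from a cumulant generating function K(t) reads F(y) ≈ Φ(w) + φ(w)(1/w − 1/v), where t_s solves K'(t_s) = y, w = sgn(t_s)·sqrt(2{t_s·y − K(t_s)}), and v = t_s·sqrt(K''(t_s)). The paper truncates K(t); the abstract does not state the truncation order, so the sketch below uses a full gamma CGF purely for illustration:

```python
# Hedged sketch of the Lugannani–Rice saddlepoint CDF approximation,
# using K(t) = -k*log(1 - theta*t) for a gamma(k, theta) response.
import numpy as np
from scipy import optimize, stats

def saddlepoint_cdf(y, k=3.0, theta=2.0):
    K = lambda t: -k * np.log(1.0 - theta * t)
    K1 = lambda t: k * theta / (1.0 - theta * t)          # K'(t)
    K2 = lambda t: k * theta**2 / (1.0 - theta * t)**2    # K''(t)
    t_s = optimize.brentq(lambda t: K1(t) - y, -50.0, 1.0 / theta - 1e-9)
    w = np.sign(t_s) * np.sqrt(2.0 * (t_s * y - K(t_s)))  # not valid at the mean (t_s = 0)
    v = t_s * np.sqrt(K2(t_s))
    return stats.norm.cdf(w) + stats.norm.pdf(w) * (1.0 / w - 1.0 / v)

# Sanity check against the exact gamma CDF (both ≈ 0.456):
print(saddlepoint_cdf(5.0), stats.gamma.cdf(5.0, 3.0, scale=2.0))
```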


2020, Vol 47 (3), pp. 547-560
Author(s): Darush Yazdanfar, Peter Öhman

Purpose – The purpose of this study is to empirically investigate the determinants of financial distress among small and medium-sized enterprises (SMEs) during the global financial crisis and post-crisis periods.
Design/methodology/approach – Several statistical methods, including multiple binary logistic regression, were used to analyse a longitudinal cross-sectional panel data set of 3,865 Swedish SMEs operating in five industries over the 2008–2015 period.
Findings – The results suggest that financial distress is influenced by macroeconomic conditions (i.e. the global financial crisis) and, in particular, by various firm-specific characteristics (i.e. performance, financial leverage and financial distress in the previous year). However, firm size and industry affiliation have no significant relationship with financial distress.
Research limitations – Due to data availability, this study is limited to a sample of Swedish SMEs in five industries covering eight years. Further research could examine the generalizability of these findings by investigating other firms operating in other industries and in other countries.
Originality/value – This study is the first to examine the determinants of financial distress among SMEs operating in Sweden using data from a large-scale longitudinal cross-sectional database.
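
A hedged sketch of the kind of binary logistic regression described (all variable and file names below are hypothetical illustrations, not the study's dataset or specification):

```python
# Illustrative binary logistic regression on firm-year panel data.
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per firm-year with a binary 'distress' indicator (hypothetical file)
df = pd.read_csv("sme_panel.csv")
model = smf.logit(
    "distress ~ distress_lag + performance + leverage + firm_size + crisis",
    data=df,
).fit()
print(model.summary())   # coefficients are log-odds; exponentiate for odds ratios
```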


2020, Vol 501 (1), pp. 994-1001
Author(s): Suman Sarkar, Biswajit Pandey, Snehasish Bhattacharjee

ABSTRACT: We use an information theoretic framework to analyse data from the Galaxy Zoo 2 project and study whether there are any statistically significant correlations between the presence of bars in spiral galaxies and their environment. We measure the mutual information between the barredness of galaxies and their environments in a volume-limited sample (Mr ≤ −21) and compare it with the same quantity in data sets where (i) the bar/unbar classifications are randomized and (ii) the spatial distribution of galaxies is shuffled on different length scales. We assess the statistical significance of the differences in the mutual information using a t-test and find that neither randomization of the morphological classifications nor shuffling of the spatial distribution alters the mutual information in a statistically significant way. The non-zero mutual information between barredness and environment arises from the finite and discrete nature of the data set and can be entirely explained by mock Poisson distributions. We also separately compare the cumulative distribution functions of the barred and unbarred galaxies as a function of their local density. Using a Kolmogorov–Smirnov test, we find that the null hypothesis cannot be rejected even at the 75 per cent confidence level. Our analysis indicates that the environment does not play a significant role in the formation of a bar, which is largely determined by the internal processes of the host galaxy.
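
The two tests described can be sketched with toy inputs (random labels and densities, not Galaxy Zoo 2 data): mutual information is computed from the joint histogram of a binary bar/unbar label and a binned environment measure, and a two-sample KS test compares local-density distributions.

```python
# Hedged sketch: mutual information from a contingency table, plus a KS test.
import numpy as np
from scipy import stats

def mutual_information(labels, env_bins):
    """MI (in nats) from the joint histogram of two discrete variables."""
    joint = np.histogram2d(labels, env_bins, bins=(2, env_bins.max() + 1))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
barred = rng.integers(0, 2, 5000)             # bar/unbar classification
env = rng.integers(0, 10, 5000)               # binned local environment
print(mutual_information(barred, env))        # non-zero purely from finite, discrete data
dens_bar, dens_unbar = rng.normal(size=500), rng.normal(size=500)
print(stats.ks_2samp(dens_bar, dens_unbar))   # KS test on local densities
```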

