quantile estimates
Recently Published Documents


TOTAL DOCUMENTS: 90 (five years: 25)

H-INDEX: 14 (five years: 1)

2022, Vol 30 (1), pp. 319-342
Author(s): Zun Liang Chuan, Wan Nur Syahidah Wan Yusoff, Azlyna Senawi, Mohd Romlay Mohd Akramin, Soo-Fen Fam, ...

Descriptive data mining has been widely applied in hydrology in the form of regionalisation algorithms that identify statistically homogeneous rainfall regions. However, previous studies employed agglomerative hierarchical and non-hierarchical regionalisation algorithms that require post-processing techniques to validate and interpret the analysis results. The main objective of this study is to investigate the effectiveness of automated agglomerative hierarchical and non-hierarchical regionalisation algorithms in identifying homogeneous rainfall regions based on a new regionalised feature set built on statistically significant differences. To pursue this objective, this study collected 20 historical monthly rainfall time series from the rain gauge stations located in the Kuantan district. In practice, these 20 rain gauge stations can be categorised into two statistically homogeneous rainfall regions with distinct spatial and temporal variability in rainfall amounts. The results of the analysis show that the Forgy K-means non-hierarchical (FKNH), Hartigan-Wong K-means non-hierarchical (HKNH), and Lloyd K-means non-hierarchical (LKNH) regionalisation algorithms are superior to the other automated agglomerative hierarchical and non-hierarchical regionalisation algorithms, yielding the highest regionalisation accuracy. Based on these regionalisation results, the reliability and accuracy of risk assessments of extreme hydro-meteorological events for the Kuantan district can be improved. In particular, regional quantile estimates based on an appropriate statistical distribution can provide more accurate estimation than at-site quantile estimates.
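As a rough, hedged illustration of the non-hierarchical step described above, the sketch below clusters station-level rainfall features into two candidate homogeneous regions with K-means (scikit-learn's default is Lloyd's algorithm; the Forgy and Hartigan-Wong variants compared in the paper are available elsewhere, e.g. via R's kmeans). The feature matrix and station data are synthetic placeholders, not the authors' regionalised feature set.

```python
# Hypothetical illustration: K-means regionalisation of rain gauge stations.
# Synthetic features; the paper derives a statistically significant regionalised
# feature set from 20 stations in the Kuantan district instead.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 20 stations x 12 features (e.g., mean monthly rainfall), two latent regions
region_a = rng.normal(loc=250, scale=30, size=(10, 12))   # wetter stations
region_b = rng.normal(loc=150, scale=30, size=(10, 12))   # drier stations
features = np.vstack([region_a, region_b])

# Lloyd's K-means (scikit-learn default); k = 2 homogeneous rainfall regions
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(features)
print("Station region labels:", labels)
```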


2021, pp. 0958305X2110415
Author(s): Yongpei Wang, Jieru Yang, Jinwei Chen, Qian Zhang

China's unprecedented urbanization has come at the cost of environmental degradation, which increasingly appears to be holding back migration to bustling municipal districts. Escaping from Beijing, Shanghai and Guangzhou has become a hot topic in recent years, but in fact small and medium-sized cities also show signs of population decentralization. The aim of this paper is to reveal the impact of environmental pollution on the decentralization of the urban population in China. Based on panel data for over 227 prefecture-level cities and municipalities directly under the central government, the sensitivity of municipal-district population to PM2.5 concentration is studied empirically. The results show that PM2.5 concentrations generally drive population away from municipal districts, and that this effect is most pronounced in first-tier and fourth-tier cities rather than second-tier and third-tier cities. The panel quantile estimates further confirm this finding: megacities and small or micro cities, rather than medium-sized cities, are more vulnerable to environmental pollution. This is a reminder that Chinese policymakers must not focus on the environmental problems of megacities alone, but must also prevent environmental degradation from hollowing out the population of small cities and towns.
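As a hedged sketch of the kind of quantile estimate the abstract refers to, the snippet below runs a pooled quantile regression of (log) district population on PM2.5 concentration at several quantiles using statsmodels. The data and variable names are synthetic placeholders; the paper's actual panel specification (controls, fixed effects, city tiers) is not reproduced here.

```python
# Hypothetical sketch: quantile regression of log population on PM2.5.
# Synthetic data; the study uses a panel of Chinese prefecture-level cities.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
pm25 = rng.uniform(10, 90, size=n)                     # annual PM2.5 (ug/m3)
log_gdp = rng.normal(10, 1, size=n)                    # placeholder control
log_pop = 5.0 + 0.3 * log_gdp - 0.004 * pm25 + rng.normal(0, 0.2, size=n)

X = sm.add_constant(np.column_stack([pm25, log_gdp]))
model = sm.QuantReg(log_pop, X)

# Coefficients across the conditional distribution of city population
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    res = model.fit(q=q)
    print(f"q={q:.2f}  PM2.5 coefficient: {res.params[1]:.5f}")
```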


Author(s): Fan Zhou, Zhoufan Zhu, Qi Kuang, Liwen Zhang

Although distributional reinforcement learning (DRL) has been widely examined in the past few years, two open questions remain. One is how to ensure the validity of the learned quantile function; the other is how to efficiently utilize the distribution information. This paper attempts to provide new perspectives to encourage future in-depth studies of these two questions. We first propose a non-decreasing quantile function network (NDQFN) to guarantee the monotonicity of the obtained quantile estimates, and then design a general exploration framework for DRL called distributional prediction error (DPE) that utilizes the entire distribution of the quantile function. We not only discuss the theoretical necessity of our method but also show the performance gain it achieves in practice by comparing it with competing methods on Atari 2600 games, especially hard-exploration games.
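A minimal sketch of the monotonicity idea (not the authors' NDQFN architecture) is shown below: the head predicts a base quantile plus non-negative increments via a softplus and a cumulative sum, so the resulting quantile estimates are non-decreasing by construction. The module name and sizes are illustrative assumptions.

```python
# Minimal PyTorch sketch of a non-decreasing quantile head (illustrative only;
# not the NDQFN architecture from the paper). Monotonicity is guaranteed by
# predicting non-negative increments and taking a cumulative sum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneQuantileHead(nn.Module):
    def __init__(self, state_dim: int, n_quantiles: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.base = nn.Linear(hidden, 1)                   # lowest quantile
        self.deltas = nn.Linear(hidden, n_quantiles - 1)   # raw increments

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.body(state)
        base = self.base(h)
        increments = F.softplus(self.deltas(h))            # non-negative steps
        quantiles = torch.cat([base, base + torch.cumsum(increments, dim=-1)], dim=-1)
        return quantiles                                    # shape: (batch, n_quantiles)

# Quick check: quantile estimates are sorted along the last dimension
q = MonotoneQuantileHead(state_dim=4, n_quantiles=8)(torch.randn(2, 4))
assert torch.all(q[:, 1:] >= q[:, :-1])
```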


2021, Vol 70 (1), pp. 109-128
Author(s): Hilary I. Okagbue, Timothy A. Anake, Pelumi E. Oguntunde, Abiodun A. Opanuga

2021, pp. 135481662110300
Author(s): Usamah F Alfarhan, Khaldoon Nusair, Hamed Al-Azri, Saeed Al-Muharrami, Nan Hua

Tourism expenditures are determined by a set of antecedents that reflect tourists' willingness and ability to spend, and de facto incremental monetary outlays through which willingness and ability are transformed into total expenditures. Based on the neoclassical argument of utility-constrained expenditure minimization, we extend the current literature by applying a sustainability-based segmentation criterion, namely the Legatum Prosperity Index™, to the decomposition of a total expenditure differential into tourists' relative willingness to spend and an upper bound of third-degree price discrimination, using mean-level and conditional quantile estimates. Our results indicate that understanding the price–quantity composition of international inbound tourism expenditure differentials assists agents in the tourism industry in their quest for profit maximization.
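As a hedged, mean-level illustration of decomposing an expenditure differential between two tourist segments (the paper segments by the Legatum Prosperity Index and also works with conditional quantiles), the sketch below performs a simple Oaxaca-Blinder style split into a characteristics (endowment) component and a coefficients component. All data and variable names are synthetic assumptions, not the paper's specification.

```python
# Hypothetical sketch: Oaxaca-Blinder style decomposition of a mean expenditure
# differential between two tourist segments. Synthetic data and regressors.
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, beta):
    # columns: intercept, income proxy, nights stayed (placeholders)
    X = np.column_stack([np.ones(n), rng.normal(3, 1, n), rng.normal(7, 2, n)])
    y = X @ beta + rng.normal(0, 1, n)
    return X, y

X_a, y_a = simulate(500, beta=np.array([1.0, 0.8, 0.5]))   # segment A
X_b, y_b = simulate(500, beta=np.array([0.5, 0.6, 0.5]))   # segment B

beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)

gap = y_a.mean() - y_b.mean()
endowments = (X_a.mean(axis=0) - X_b.mean(axis=0)) @ beta_b   # characteristics part
coefficients = X_a.mean(axis=0) @ (beta_a - beta_b)           # "willingness" part
print(f"gap={gap:.3f}  endowments={endowments:.3f}  coefficients={coefficients:.3f}")
```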


2021, Vol 17 (2), pp. 166-180
Author(s): Busababodhin Piyapatr, Chiangpradit Monchaya, Phoophiwfa Tossapol, Jeong-Soo Park, Do-ove Manoon, ...

This article applies the Wakeby distribution (WAD) with high-order L-moment estimates (LH-ME) to annual extreme rainfall data from 99 gauge stations in Thailand. The objective of this study is to obtain appropriate quantile estimates and return levels for several return periods: 2, 5, 10, 25 and 50 years. The 95% confidence intervals for the quantiles determined from the WAD are derived using the bootstrap technique. Isopluvial maps of estimated design values corresponding to the selected return periods are presented. For a large majority of the stations considered, the LH-ME results are better than estimates obtained with the simpler L-moments method.
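The sketch below illustrates the general workflow with synthetic data: sample L-moments computed from unbiased probability-weighted moments, return levels for the same return periods, and a percentile bootstrap confidence interval. As a simplifying assumption it fits a GEV distribution by maximum likelihood rather than the paper's Wakeby distribution with higher-order LH-moments.

```python
# Hedged sketch: sample L-moments of an annual-maximum series plus GEV-based
# return levels with bootstrap CIs. Stand-in for the paper's Wakeby/LH-moment
# fit; data are synthetic.
import numpy as np
from scipy.stats import genextreme

def sample_lmoments(x):
    """First two L-moments and L-skewness from unbiased PWM estimators."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                     # mean, L-scale, L-skewness

def return_levels(x, periods=(2, 5, 10, 25, 50)):
    c, loc, scale = genextreme.fit(x)
    return genextreme.ppf(1 - 1 / np.asarray(periods, float), c, loc=loc, scale=scale)

rng = np.random.default_rng(3)
annual_max = genextreme.rvs(-0.1, loc=80, scale=25, size=40, random_state=rng)

print("L-moments (l1, l2, t3):", sample_lmoments(annual_max))
print("Return levels (2, 5, 10, 25, 50 yr):", return_levels(annual_max))

# Percentile bootstrap for the 50-year return level (cf. the paper's 95% CIs)
boot = [return_levels(rng.choice(annual_max, annual_max.size, replace=True))[-1]
        for _ in range(200)]
print("50-yr level 95% CI:", np.percentile(boot, [2.5, 97.5]))
```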


2021
Author(s): Enrique Soriano Martín, Antonio Jiménez, Luis Mediero

Flood peak quantiles for return periods of up to 10 000 years are required for dam design and safety assessment, yet flood series usually have record lengths of around 20-40 years, which leads to high uncertainty. The utility of historical flood data is generally recognised for estimating the magnitude of extreme events with return periods in excess of 100 years. Therefore, historical information can be incorporated into flood frequency analyses to reduce the uncertainty in the high return period flood quantile estimates used in hydrological dam safety analyses.

This study assesses a set of existing techniques for incorporating historical flood information in extreme frequency analyses, focusing on their reliability and on uncertainty reduction for the high return periods used in dam safety analysis. Monte Carlo simulations are used to assess both the reliability and the uncertainty of high return period quantile estimates. Varying lengths of the historical (Nh = 100 and 200 years) and systematic (Ns = 20, 40 and 60 years) periods are considered. In addition, a varying number of known flood magnitudes that exceed a given perception threshold in the historical period is considered (k = 1-2). The values of Nh, Ns and k used in the study are the most usual in practice.

The reliability and uncertainty reduction in flood quantile estimates for each technique depend on the statistical properties of the flood series. Therefore, a set of feasible combinations of L-coefficient of variation (L-CV) and L-skewness (L-CS) values should be considered. The analysis aims to understand how each technique behaves in terms of flood quantile reliability and uncertainty reduction depending on the L-moment statistics of the flood series. In this study, the regional L-CV and L-CS values of the 29 homogeneous regions identified in Spain for the national map of flood quantiles developed by the Centre for Hydrographic Studies of CEDEX are considered.

The results show that the maximum likelihood estimator (MLE) and weighted moments (WM) techniques perform best in regions with small L-CS values, whereas the biased partial probability weighted moments (BPPWM) technique performs best in regions with high L-CS values. While the expected moments algorithm (EMA) tends to underestimate flood quantiles for high return periods, the unbiased partial probability weighted moments (UPPWM) technique tends to overestimate them. In addition, including historical flood information in flood frequency analyses improves flood quantile estimates in most cases, regardless of the technique used. Uncertainty reduction in high return period flood quantile estimates is greatest for short systematic time series, regions with high L-CS values and long historical periods.

Acknowledgments: This research has been supported by the project SAFERDAMS (PID2019-107027RB-I00), funded by the Spanish Ministry of Science and Innovation.
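A hedged sketch of the general idea of combining systematic and historical data is given below: a maximum-likelihood fit in which, besides Ns systematic annual peaks, k historical peaks above a perception threshold X0 are known over Nh years, and the remaining Nh - k years are treated as censored below X0. A Gumbel distribution is used purely for simplicity; the threshold, record lengths and data are synthetic, and this is not a reproduction of the specific MLE, WM, BPPWM, UPPWM or EMA implementations compared in the study.

```python
# Hedged sketch: censored-data maximum likelihood fit combining systematic peaks
# with k known historical peaks above a perception threshold X0 over Nh years.
# Gumbel distribution and all values are illustrative assumptions.
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import minimize

rng = np.random.default_rng(4)
systematic = gumbel_r.rvs(loc=500, scale=150, size=40, random_state=rng)  # Ns = 40
historical = np.array([1350.0, 1500.0])                                   # k = 2 known peaks
X0, Nh = 1200.0, 150                                                       # threshold, historical years

def neg_log_lik(params):
    loc, log_scale = params
    scale = np.exp(log_scale)                                # keep scale positive
    ll = gumbel_r.logpdf(systematic, loc, scale).sum()       # systematic record
    ll += gumbel_r.logpdf(historical, loc, scale).sum()      # known historical peaks
    ll += (Nh - len(historical)) * gumbel_r.logcdf(X0, loc, scale)  # censored years
    return -ll

res = minimize(neg_log_lik, x0=[systematic.mean(), np.log(systematic.std())],
               method="Nelder-Mead")
loc_hat, scale_hat = res.x[0], np.exp(res.x[1])
print("10 000-yr flood quantile:", gumbel_r.ppf(1 - 1e-4, loc_hat, scale_hat))
```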

