Comparative study of flood coincidence risk estimation methods in the mainstream and its tributaries

Author(s):  
Na Li ◽  
Shenglian Guo ◽  
Feng Xiong ◽  
Jun Wang ◽  
Yuzuo Xie

Abstract The coincidence of floods in the mainstream and its tributaries may lead to severe flooding in the downstream confluence area, so flood coincidence risk analysis is essential for flood prevention and disaster reduction. In this study, a multiple regression model was used to establish the functional relationship among flood magnitudes in the mainstream and its tributaries. The mixed von Mises distribution and the Pearson Type III distribution were selected to fit the probability distributions of the annual maximum flood occurrence dates and magnitudes, respectively. The joint distributions of the annual maximum flood occurrence dates and magnitudes were established using copula functions. The Fuhe River in the Poyang Lake region was selected as a case study. The joint probability, co-occurrence probability and conditional probability of flood magnitudes were quantitatively estimated and compared with the predicted flood coincidence risks. The results show that the selected marginal and joint distributions fit the observed flood dataset very well. Coincidences of flood occurrence dates in the upper mainstream and its tributaries mainly occur from May to early July. The conditional probability is found to be the most consistent with the predicted flood coincidence risks in the mainstream and its tributaries, and is more reliable and rational in practice.
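
The marginal-fitting step described above can be sketched briefly. The snippet below is a minimal illustration, not the authors' code: it fits a Pearson Type III distribution to annual maximum flood magnitudes and a two-component mixed von Mises distribution to occurrence dates converted to angles on the annual cycle; the data values, the two-component assumption and the optimizer choice are all illustrative.

```python
# Minimal sketch (not the authors' code): fit marginal distributions for
# annual-maximum flood magnitudes (Pearson Type III) and occurrence dates
# (two-component mixed von Mises on the annual cycle).
import numpy as np
from scipy import stats, optimize

# Hypothetical observations: peak discharges (m^3/s) and occurrence dates (day of year).
peaks = np.array([1520., 2310., 1890., 2750., 1640., 2980., 2100., 1770.])
dates = np.array([152, 170, 145, 188, 160, 175, 150, 166])

# Pearson Type III marginal for magnitudes (scipy parameterization: skew, loc, scale).
skew, loc, scale = stats.pearson3.fit(peaks)

# Occurrence dates as angles on the annual cycle.
theta = 2.0 * np.pi * dates / 365.25

def neg_loglik(params, x):
    """Negative log-likelihood of a two-component von Mises mixture."""
    w_raw, mu1, kappa1, mu2, kappa2 = params
    if kappa1 <= 0 or kappa2 <= 0:
        return np.inf
    w = 1.0 / (1.0 + np.exp(-w_raw))            # mixing weight constrained to (0, 1)
    pdf = (w * stats.vonmises.pdf(x, kappa1, loc=mu1)
           + (1.0 - w) * stats.vonmises.pdf(x, kappa2, loc=mu2))
    return -np.sum(np.log(pdf + 1e-12))

init = [0.0, np.pi * 0.8, 2.0, np.pi * 1.1, 2.0]
res = optimize.minimize(neg_loglik, init, args=(theta,), method="Nelder-Mead")
print("Pearson III params:", skew, loc, scale)
print("Mixed von Mises params:", res.x)
```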

2021 ◽  
Author(s):  
Na Li ◽  
Shenglian Guo ◽  
Feng Xiong ◽  
Jun Wang

Abstract The coincidence of floods in the mainstream and its tributaries may lead to severe flooding in the downstream confluence area, so flood coincidence risk analysis is essential for flood prevention and disaster reduction. In this study, a multiple regression model was used to establish the functional relationship among flood magnitudes in the mainstream and its tributaries. The mixed von Mises distribution and the Pearson Type III distribution were selected to fit the probability distributions of the annual maximum flood occurrence dates and magnitudes, respectively. The joint distributions of the annual maximum flood occurrence dates and magnitudes were established using copula functions. The Fuhe River in the Poyang Lake region was selected as a case study. The joint probability, co-occurrence probability and conditional probability of flood magnitudes were calculated and compared with simulated results derived from the observed data. The results show that the selected marginal and joint distributions perform well in simulating the observed flood data. Coincidences of flood occurrence dates in the upper mainstream and its tributaries mainly occur from May to early July. Among the three coincidence probability calculation methods, the conditional probability is the most consistent with the flood coincidence risk in the mainstream and its tributaries and is more reliable and rational in practice. The results reflect the actual flood coincidence situation in the Fuhe River basin and can provide technical support for flood control decision-making.
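
Once the marginals are in hand, the three risk measures compared in the study can be expressed through a copula. The sketch below assumes a Gumbel-Hougaard copula and the exceedance-based definitions commonly used in coincidence-risk analysis (joint "OR" probability, co-occurrence "AND" probability, conditional probability); the copula family, parameter value and design quantiles are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed Gumbel-Hougaard copula; the paper's copula choice may differ):
# joint ("OR"), co-occurrence ("AND") and conditional exceedance probabilities
# for flood magnitudes X (mainstream) and Y (tributary).
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v), theta >= 1."""
    t = (-np.log(u)) ** theta + (-np.log(v)) ** theta
    return np.exp(-t ** (1.0 / theta))

def coincidence_probabilities(Fx, Fy, theta):
    """Exceedance-based risk measures from marginal non-exceedance probabilities Fx, Fy."""
    C = gumbel_copula(Fx, Fy, theta)
    p_and = 1.0 - Fx - Fy + C        # P(X > x, Y > y): co-occurrence probability
    p_or = 1.0 - C                   # P(X > x or Y > y): joint probability
    p_cond = p_and / (1.0 - Fx)      # P(Y > y | X > x): conditional probability
    return p_or, p_and, p_cond

# Example: 20-year mainstream flood and 10-year tributary flood, illustrative theta.
Fx, Fy, theta = 0.95, 0.90, 2.5
p_or, p_and, p_cond = coincidence_probabilities(Fx, Fy, theta)
print(f"joint={p_or:.4f}  co-occurrence={p_and:.4f}  conditional={p_cond:.4f}")
```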


2021 ◽  
Author(s):  
Trevor Hoey ◽  
Pamela Tolentino ◽  
Esmael Guardian ◽  
Richard Williams ◽  
Richard Boothroyd ◽  
...  

Assessment of flood and drought risks, and of changes to these risks under climate change, is a critical issue worldwide. Statistical methods are commonly used in data-rich regions to estimate the magnitudes of river floods of specified return period at ungauged sites. However, data availability can be a major constraint on reliable estimation of flood and drought magnitudes, particularly in the Global South. Statistical flood and drought magnitude estimation methods rely on the availability of sufficiently long data records from sites that are representative of the hydrological region of interest. In the Philippines, although over 1000 locations have been identified where flow records have been collected at some time, very few records exceed 20 years in duration and only a limited number of sites are currently being gauged. We collated data from three archival sources: (1) Division of Irrigation, Surface Water Supply (SWS) (1908-22; 257 sites in total); (2) Japan International Cooperation Agency (JICA) (1955-91; 90 sites); and (3) Bureau of Research and Standards (BRS) (1957-2018; 181 sites). Of these, 176 records were sufficiently long and of high enough quality to be analysed. Series of annual maximum floods were fitted using L-moments with Weibull, Log-Pearson Type III and Generalised Logistic distributions, the best fit of these being used to estimate the 2-, 10- and 100-year flood events Q2, Q10 and Q100. Predictive equations were developed using catchment area, several measures of annual and extreme precipitation, catchment geometry and land use. Analysis took place nationally, and also for groups of hydrologically similar regions across the Philippines, based on similar flood growth curve shapes. Overall, the best-fit equations use a combination of two predictor variables: catchment area and the median annual maximum daily rainfall. The national equations have R² of 0.55-0.65, being higher for shorter return periods, and the regional groupings have R² of 0.60-0.77 for Q10. These coefficients of determination are lower than in some comprehensive studies worldwide, reflecting in part the short individual flow records. Standard errors of residuals for the equations are between 0.19 and 0.51 (log10 units), which leads to significant uncertainty in flood estimation for water resource and flood risk management purposes. Improving the predictions requires further analysis of hydrograph shape across the different climate types in the Philippines, defined by seasonal rainfall distributions, and between catchments of different size. The results represent the most comprehensive study to date of flood magnitudes in the Philippines and are being incorporated into guidance for river managers alongside new assessments of river channel change across the country. The analysis illustrates the potential, and the limitations, of combining information from multiple data sources and short individual records to generate reliable estimates of flow extremes.
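
The form of the best-fit predictive equations, a regression of the flood quantile on catchment area and median annual maximum daily rainfall in log10 space, can be illustrated with a short sketch; the data and coefficients below are made up and are not the study's results.

```python
# Minimal sketch (illustrative data, not the study's): regional regression of a
# flood quantile Q10 on catchment area A (km^2) and median annual maximum
# daily rainfall Rmed (mm), fitted in log10 space by ordinary least squares.
import numpy as np

# Hypothetical gauged catchments: area, median annual max daily rainfall, estimated Q10 (m^3/s).
A    = np.array([  45.,  120.,  310.,  760., 1500., 2800.])
Rmed = np.array([ 110.,  130.,   95.,  150.,  125.,  140.])
Q10  = np.array([ 180.,  420.,  650., 1900., 2600., 5200.])

# Design matrix for log10(Q10) = a + b*log10(A) + c*log10(Rmed).
X = np.column_stack([np.ones_like(A), np.log10(A), np.log10(Rmed)])
y = np.log10(Q10)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b, c = coef

resid = y - X @ coef
se = np.sqrt(np.sum(resid**2) / (len(y) - X.shape[1]))   # standard error of residuals, log10 units
print(f"log10(Q10) = {a:.2f} + {b:.2f}*log10(A) + {c:.2f}*log10(Rmed),  SE = {se:.2f}")

# Prediction at a hypothetical ungauged site.
A_new, R_new = 500.0, 120.0
q10_pred = 10 ** (a + b * np.log10(A_new) + c * np.log10(R_new))
print(f"Predicted Q10 at ungauged site: {q10_pred:.0f} m^3/s")
```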


In this paper, we have defined a new two-parameter distribution, the new Lindley half-Cauchy (NLHC) distribution, using the Lindley-G family of distributions, which accommodates increasing, decreasing and a variety of monotone failure rates. The statistical properties of the proposed distribution, such as the probability density function, cumulative distribution function, quantile function and measures of skewness and kurtosis, are presented. We briefly describe three well-known estimation methods, namely maximum likelihood estimation (MLE), least-squares estimation (LSE) and the Cramér-von Mises (CVM) method. All computations are performed in R software. Using the maximum likelihood method, we construct asymptotic confidence intervals for the model parameters. We empirically verify the potential of the new distribution for modeling a real data set.
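
As the NLHC density is not reproduced in this abstract, the following sketch only illustrates the general maximum likelihood workflow described (minimize the negative log-likelihood, then form asymptotic confidence intervals), using a generic two-parameter Weibull model as a stand-in; the original computations were carried out in R.

```python
# Minimal sketch (a generic two-parameter Weibull stand-in, NOT the NLHC density,
# which is not reproduced in this abstract): maximum likelihood fitting followed
# by approximate asymptotic confidence intervals.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
data = stats.weibull_min.rvs(1.8, scale=2.5, size=200, random_state=rng)

def neg_loglik(params):
    """Negative log-likelihood of a two-parameter Weibull (shape, scale)."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.weibull_min.logpdf(data, shape, scale=scale))

res = optimize.minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")

def hessian(f, x, eps=1e-4):
    """Finite-difference Hessian of f at x (used for asymptotic standard errors)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps * eps)
    return H

# Asymptotic 95% intervals from the inverse observed information.
cov = np.linalg.inv(hessian(neg_loglik, res.x))
se = np.sqrt(np.diag(cov))
for name, est, s in zip(["shape", "scale"], res.x, se):
    print(f"{name}: {est:.3f}  95% CI [{est - 1.96*s:.3f}, {est + 1.96*s:.3f}]")
```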


Author(s):  
Deborah G. Mayo

In this chapter I shall discuss what seems to me to be a systematic ambiguity running through the large and complex risk-assessment literature. The ambiguity concerns the question of separability: can (and ought) risk assessment be separated from the policy values of risk management? Roughly, risk assessment is the process of estimating the risks associated with a practice or substance, and risk management is the process of deciding what to do about such risks. The separability question asks whether the empirical, scientific, and technical questions in estimating the risks either can or should be separated (conceptually or institutionally) from the social, political, and ethical questions of how the risks should be managed. For example, is it possible (advisable) for risk-estimation methods to be separated from social or policy values? Can (should) risk analysts work independently of policymakers (or at least of policy pressures)? The preponderant answer to the variants of the separability question in recent risk-research literature is no. Such denials of either the possibility or desirability of separation may be termed nonseparatist positions. What needs to be recognized, however, is that advocating a nonseparatist position masks radically different views about the nature of risk-assessment controversies and of how best to improve risk assessment. These nonseparatist views, I suggest, may be divided into two broad camps (although individuals in each camp differ in degree), which I label the sociological view and the metascientific view. The difference between the two may be found in what each finds to be problematic about any attempt to separate assessment and management. Whereas the former (sociological) view argues against separatist attempts on the grounds that they give too small a role to societal (and other nonscientific) values, the latter (metascientific) view does so on the grounds that they give too small a role to scientific and methodological understanding. Examples of those I place under the sociological view are the cultural reductionists discussed in the preceding chapter by Shrader-Frechette. Examples of those I place under the metascientific view are the contributors to this volume themselves. A major theme running through this volume is that risk assessment cannot and should not be separated from societal and policy values (e.g., Silbergeld's uneasy divorce).


2014 ◽  
Vol 610 ◽  
pp. 367-376 ◽  
Author(s):  
Jia Jia Zhang ◽  
Xuan Wang ◽  
Lin Yao ◽  
Jing Peng Li ◽  
Xue Dong Shen

UCT (Upper Confidence bounds applied to Trees) has been applied successfully as a selection approach in MCTS (Monte Carlo Tree Search) for imperfect-information games such as poker. By using risk dominance as a complement to payoff dominance in the decision method, opponent strategies are better characterized by their risk factors, such as bluff actions in Texas Hold'em Poker. In this paper, a method for estimating the influence of risk factors on computing game equilibria is provided. A novel algorithm, UCT-risk, is proposed as a modification of the UCT algorithm based on risk estimation methods. To verify the performance of the new algorithm, Texas Hold'em, a popular test-bed for AI research, is chosen as the experimental platform. The agent adopting the UCT-risk algorithm performs as well as or better than the best previous approaches in experiments. UCT-risk was also applied in a poker agent named HITSZ_CS_13 in the 2013 AAAI Computer Poker Competition, which confirms the effectiveness of the approach presented in this paper.
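
The abstract does not give the exact form of the risk term, so the sketch below only illustrates the general idea of a risk-aware UCT selection rule: each child is scored by the usual UCB1 value minus a weighted risk estimate, with the empirical payoff variance used here as a placeholder for the paper's risk-dominance factor.

```python
# Minimal sketch (illustrative only; the paper's exact UCT-risk formulation is not
# given in this abstract): UCB1 child selection augmented with a risk penalty,
# here taken to be the empirical payoff variance of each child.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    visits: int = 0
    total_payoff: float = 0.0
    payoffs: list = field(default_factory=list)   # per-visit payoffs, used for variance
    children: list = field(default_factory=list)

    def mean(self):
        return self.total_payoff / self.visits if self.visits else 0.0

    def risk(self):
        # Placeholder risk estimate: empirical variance of observed payoffs.
        if self.visits < 2:
            return 0.0
        m = self.mean()
        return sum((p - m) ** 2 for p in self.payoffs) / (self.visits - 1)

def select_child(parent, c=1.4, risk_weight=0.5):
    """UCB1 value with a subtracted risk term (UCT-risk-style selection sketch)."""
    def score(child):
        if child.visits == 0:
            return float("inf")                    # explore unvisited children first
        explore = c * math.sqrt(math.log(parent.visits) / child.visits)
        return child.mean() + explore - risk_weight * child.risk()
    return max(parent.children, key=score)

def backpropagate(path, payoff):
    """Update visit counts and payoff statistics along the visited path."""
    for node in path:
        node.visits += 1
        node.total_payoff += payoff
        node.payoffs.append(payoff)
```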


2015 ◽  
Vol 79 (7) ◽  
pp. 1609-1617 ◽  
Author(s):  
Chun-Wei Lu ◽  
Jin-Chung Shih ◽  
Ssu-Yuan Chen ◽  
Hsin-Hui Chiu ◽  
Jou-Kou Wang ◽  
...  

2016 ◽  
Vol 115 (1) ◽  
pp. 355-362 ◽  
Author(s):  
Suchitra Ramachandran ◽  
Travis Meyer ◽  
Carl R. Olson

When monkeys view two images in fixed sequence repeatedly over days and weeks, neurons in area TE of the inferotemporal cortex come to exhibit prediction suppression. The trailing image elicits only a weak response when presented following the leading image that preceded it during training. Induction of prediction suppression might depend either on the contiguity of the images, as determined by their co-occurrence and captured in the measure of joint probability P(A, B), or on their contingency, as determined by their correlation and as captured in the measures of conditional probability P(A|B) and P(B|A). To distinguish between these possibilities, we measured prediction suppression after imposing training regimens that held P(A, B) constant but varied P(A|B) and P(B|A). We found that reducing either P(A|B) or P(B|A) during training attenuated prediction suppression as measured during subsequent testing. We conclude that prediction suppression depends on contingency, as embodied in the predictive relations between the images, and not just on contiguity, as embodied in their co-occurrence.
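
The logic of the training regimens, holding P(A, B) fixed while varying P(A|B) and P(B|A), can be seen in a small numerical example; the trial counts below are made up and are not the study's actual schedules.

```python
# Minimal sketch (made-up trial counts, not the study's regimens): two training
# schedules with the same joint probability P(A,B) but different conditionals.
def pair_statistics(n_ab, n_a_other, n_other_b, n_other, label):
    total = n_ab + n_a_other + n_other_b + n_other
    p_ab = n_ab / total                       # joint probability P(A, B)
    p_a_given_b = n_ab / (n_ab + n_other_b)   # P(A | B) = P(A,B) / P(B)
    p_b_given_a = n_ab / (n_ab + n_a_other)   # P(B | A) = P(A,B) / P(A)
    print(f"{label}: P(A,B)={p_ab:.2f}  P(A|B)={p_a_given_b:.2f}  P(B|A)={p_b_given_a:.2f}")

# Regimen 1: A is always followed by B, and B is always preceded by A.
pair_statistics(n_ab=20, n_a_other=0, n_other_b=0, n_other=80, label="high contingency")

# Regimen 2: same number of A->B pairings, but A and B also occur with other partners,
# so P(A,B) is unchanged while P(A|B) and P(B|A) both drop.
pair_statistics(n_ab=20, n_a_other=20, n_other_b=20, n_other=40, label="low contingency ")
```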


Proceedings ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 23
Author(s):  
Romero-Jarén ◽  
Benito ◽  
Arranz

Identification and classification of the different structures and infrastructures that make up a city (conventional buildings, power stations, nuclear power stations, routes of communication, etc.) are of great importance when characterizing their vulnerability and carrying out estimates of seismic risk. Different typologies suffer different physical damage under a given seismic motion, hence the importance of correctly assigning a vulnerability class. For this reason, it is necessary to have up-to-date knowledge of the distribution and composition of the structures and infrastructure of a city. The behaviour these elements exhibit during a seismic event is linked, among other factors, to the building material and its geometric shape. Today, updated cadastral information about the infrastructure of a city does not contain the data necessary and useful for a seismic risk calculation. For decades, the way to obtain such information has been through field campaigns for building databases. This practice entails long working times and the need for qualified personnel to identify the constructive typologies of the different structures. Nowadays, there are geospatial techniques that allow data acquisition on a massive scale in a short time. In particular, by means of laser measurements, it is possible to obtain clouds of millions of points with geometric and radiometric information in a matter of seconds. This article presents a line of research whose main objective is to innovate in vulnerability mapping and seismic risk estimation methods using geospatial techniques: static and dynamic laser scanning. The aim is to contribute to knowledge and to more accurate risk results, which can subsequently support emergency plans that facilitate post-event actions.

