Which method should we use to draw empirical rainfall thresholds for landslide early warning?

Author(s):  
David Johnny Peres ◽  
Antonino Cancelliere

Landslide thresholds determined empirically through the combined analysis of rainfall and landslide data are at the core of early warning systems. Given a set of rainfall and landslide data, several methods exist to determine the threshold: methods based on triggering events only, methods based on non-triggering events only, and methods based on both types of rainfall events. The first are the most commonly encountered in the literature. Early work determined the threshold by drawing the lower envelope curve of the triggering events "by eye"; more recent work has used more sophisticated statistical approaches to reduce this subjectivity. Among these, the so-called frequentist method has become prominent in the literature. These methods have been criticized because they do not account for uncertainty, i.e. the fact that there is no clear separation between the rainfall characteristics of triggering and non-triggering events. Hence, methods based on the optimization of receiver operating characteristic (ROC) indices, i.e. counts of true and false positives/negatives, have been proposed. One of the first methods proposed in this sense relied on the Bayesian a-posteriori probability, which is equivalent to using the so-called ROC precision index; others have used the True Skill Statistic. On the other hand, the use of non-triggering events only has been discussed by just a few researchers, and the potential of this approach remains largely unexplored.

The choice of the method is usually dictated by external factors, such as the availability of data and their reliability, but it should also take into account the theoretical statistical properties of each method.

Given this context, in the present work we compare, through Monte Carlo simulations, the statistical properties of each of the above-mentioned methods. In particular, we attempt to answer the following questions: What is the minimum number of landslides needed for a reliable determination of thresholds? How robust is each method for drawing the threshold, i.e. how sensitive is it to artifacts in the data, such as triggering events mistaken for non-triggering events due to incompleteness of landslide archives? What are the performances of the methods in terms of the whole ROC confusion matrix?

The analysis is performed for various levels of uncertainty in the data, i.e. noise in the separation between triggering and non-triggering events. Results show that methods based on non-triggering events only may be convenient when few landslide data are available. Also, in the case of high uncertainty in the data, the performances of methods based on triggering events may be poor compared to those based on non-triggering events. Finally, the methods based on both triggering and non-triggering events are the most robust.
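As an illustration of the ROC-based approach described above, the following minimal sketch evaluates candidate intensity-duration thresholds of the form I = α·D^β against labeled rainfall events and scores them with the precision and True Skill Statistic indices. The synthetic event sample, the fixed slope β, and the parameter grid are illustrative assumptions, not data or settings from the study.

```python
import numpy as np

# Synthetic labeled rainfall events: duration D (h), mean intensity I (mm/h),
# and a flag marking whether a landslide was triggered (values are illustrative).
rng = np.random.default_rng(42)
n_nontrig, n_trig = 500, 50
D = np.concatenate([rng.uniform(1, 72, n_nontrig), rng.uniform(1, 72, n_trig)])
I = np.concatenate([
    2.0 * D[:n_nontrig] ** -0.6 * rng.lognormal(0.0, 0.5, n_nontrig),  # non-triggering
    6.0 * D[n_nontrig:] ** -0.6 * rng.lognormal(0.0, 0.5, n_trig),     # triggering
])
triggered = np.concatenate([np.zeros(n_nontrig, bool), np.ones(n_trig, bool)])

def confusion(alpha, beta=-0.6):
    """Confusion-matrix counts for the power-law threshold I = alpha * D**beta."""
    alarm = I >= alpha * D ** beta
    tp = int(np.sum(alarm & triggered))
    fp = int(np.sum(alarm & ~triggered))
    fn = int(np.sum(~alarm & triggered))
    tn = int(np.sum(~alarm & ~triggered))
    return tp, fp, fn, tn

def true_skill_statistic(alpha):
    tp, fp, fn, tn = confusion(alpha)
    return tp / (tp + fn) - fp / (fp + tn)

def precision(alpha):  # equivalent to the Bayesian a-posteriori probability
    tp, fp, fn, tn = confusion(alpha)
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# Pick the threshold intercept that maximizes the True Skill Statistic
alphas = np.linspace(0.5, 10.0, 200)
best = max(alphas, key=true_skill_statistic)
print(f"alpha maximizing TSS: {best:.2f}, TSS = {true_skill_statistic(best):.2f}")
```

The same grid search could equally be driven by the precision index; the two objective functions generally select different thresholds, which is part of what the comparison in the study addresses.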

Landslides ◽  
2021 ◽  
Author(s):  
David J. Peres ◽  
Antonino Cancelliere

Abstract. Rainfall intensity-duration landslide-triggering thresholds have become widespread in the development of landslide early warning systems. Thresholds can in principle be determined using rainfall event datasets of three types: (a) rainfall events associated with landslides (triggering rainfall) only, (b) rainfall events not associated with landslides (non-triggering rainfall) only, (c) both triggering and non-triggering rainfall. In this paper, through Monte Carlo simulation, we compare these three possible approaches based on the following statistical properties: robustness, sampling variation, and performance. It is found that methods based only on triggering rainfall can be the worst with respect to these three properties. Methods based on both triggering and non-triggering rainfall perform the best, as they can be built to provide the best trade-off between correct and wrong predictions; they are also robust, but still require quite a large sample to sufficiently limit the sampling variation of the threshold parameters. On the other hand, methods based on non-triggering rainfall only, which are mostly overlooked in the literature, imply good robustness and low sampling variation, with performances that can often be acceptable and better than those of thresholds derived from triggering events only. Using solely triggering rainfall, which is the most common practice in the literature, yields thresholds with the worst statistical properties, except when there is a clear separation between triggering and non-triggering events. Based on these results, it can be stated that methods based only on non-triggering rainfall deserve wider attention. Such methods may also have the practical advantage that they can in principle be used where limited information on landslide occurrence is available (newly instrumented areas). The fact that relatively large samples (about 200 landslide events) are needed for a sufficiently precise estimation of threshold parameters when using triggering rainfall suggests that future applications may start by identifying thresholds from non-triggering events only, and then move to methods that also consider triggering events as landslide information becomes more available.
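A minimal sketch of the kind of Monte Carlo comparison described in the abstract: it repeatedly draws synthetic samples of triggering and non-triggering events and compares the sampling variation of a threshold intercept estimated from each sample type. The event distributions, sample sizes, and percentile rules are illustrative assumptions and do not reproduce the paper's simulation design.

```python
import numpy as np

# Monte Carlo sketch: sampling variation of the threshold intercept alpha
# (for a fixed slope beta in I = alpha * D**beta) when estimated from
# triggering events only versus non-triggering events only.
rng = np.random.default_rng(0)
n_trig, n_nontrig, n_sim = 30, 300, 1000

alpha_from_trig, alpha_from_nontrig = [], []
for _ in range(n_sim):
    # normalized intercepts alpha_i = I_i / D_i**beta of the sampled events
    a_trig = 6.0 * rng.lognormal(0.0, 0.5, n_trig)         # triggering population
    a_nontrig = 2.0 * rng.lognormal(0.0, 0.5, n_nontrig)   # non-triggering population
    alpha_from_trig.append(np.percentile(a_trig, 5))         # frequentist-like: 5% exceedance of triggering
    alpha_from_nontrig.append(np.percentile(a_nontrig, 95))  # upper envelope of non-triggering

print("triggering only:     mean %.2f, std %.2f" % (np.mean(alpha_from_trig), np.std(alpha_from_trig)))
print("non-triggering only: mean %.2f, std %.2f" % (np.mean(alpha_from_nontrig), np.std(alpha_from_nontrig)))
```

Because many more non-triggering events are typically available, the non-triggering estimator shows a smaller spread across simulations in this sketch, which is consistent with the sampling-variation argument made in the abstract.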


2015 ◽  
Vol 3 (1) ◽  
pp. 891-917 ◽  
Author(s):  
D. Lagomarsino ◽  
S. Segoni ◽  
A. Rosi ◽  
G. Rossi ◽  
A. Battistini ◽  
...  

Abstract. This work proposes a methodology to compare the forecasting effectiveness of different rainfall threshold models for landslide forecasting. We tested our methodology with two state-of-the-art models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds. The first model identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (MAssive CUMulative Brisk Analyzer) (Segoni et al., 2014a) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events is used to identify the threshold conditions associated with the least amount of false alarms. The second model, SIGMA (Sistema Integrato Gestione Monitoraggio Allerta) (Martelloni et al., 2012), is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. To perform a quantitative and objective comparison, the two methodologies were applied in two different areas, each time performing a site-specific calibration against available rainfall and landslide data. After each application, a validation procedure was carried out on an independent dataset and a confusion matrix was built. The results of the confusion matrices were combined to define a series of indices commonly used to evaluate model performance in natural hazard assessment. The comparison of these indices made it possible to assess the most effective model in each case study and, consequently, which threshold should be used in the local early warning system to obtain the best possible risk management. In our application, neither model absolutely prevailed over the other, since each model performed better in one test site and worse in the other, depending on the physical characteristics of the area. This conclusion can be generalized: the effectiveness of a threshold model depends on the characteristics of the test site (including the quality and quantity of the input data), and a validation and a comparison with alternative models should be performed before implementation in operational early warning systems.
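For reference, the skill indices mentioned above can be derived from the 2×2 confusion matrix as in the following sketch. The index set and the example counts are illustrative and are not taken from the paper's validation datasets.

```python
# Sketch of skill indices derived from a 2x2 confusion matrix, of the kind used
# to compare the two threshold models in each test site.
def skill_indices(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "efficiency": (tp + tn) / total,              # fraction of correct predictions
        "hit_rate": tp / (tp + fn),                   # probability of detection
        "false_alarm_rate": fp / (fp + tn),           # probability of false detection
        "precision": tp / (tp + fp),                  # positive predictive value
        "true_skill_statistic": tp / (tp + fn) - fp / (fp + tn),
    }

# Example with hypothetical validation counts for one model at one test site
print(skill_indices(tp=18, fp=7, fn=4, tn=320))
```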


1995 ◽  
Vol 34 (05) ◽  
pp. 518-522 ◽  
Author(s):  
M. Bensadon ◽  
A. Strauss ◽  
R. Snacken

Abstract: Since the 1950s, national networks for the surveillance of influenza have been progressively implemented in several countries. New epidemiological arguments have triggered changes aimed at increasing the sensitivity of existing early warning systems and strengthening communication between European networks. The WHO project CARE Telematics, which collects clinical and virological data from nine national networks and sends useful information to public health administrations, is presented. Based on the results of the 1993-94 season, the benefits of the system are discussed. Though other telematics networks in this field already exist, this is the first time that virological data, which are essential for characterizing the type of an outbreak, are made available to other countries in a timely manner. This will be decisive in the event of the emergence of a new virus strain (shift), such as the Spanish flu in 1918. The priority now is to include other existing European surveillance networks.


10.1596/29269 ◽  
2018 ◽  
Author(s):  
Ademola Braimoh ◽  
Bernard Manyena ◽  
Grace Obuya ◽  
Francis Muraya

2005 ◽  
Author(s):  
William H. van der Schalie ◽
David E. Trader ◽  
Mark W. Widder ◽  
Tommy R. Shedd ◽  
Linda M. Brennan
