Evaluation of a scaling cascade model for temporal rainfall disaggregation

1998 ◽  
Vol 2 (1) ◽  
pp. 19-30 ◽  
Author(s):  
J. Olsson

Abstract. The possibility of modelling the temporal structure of rainfall in southern Sweden by a simple cascade model is tested. The cascade model is based on exact conservation of rainfall volume and has a branching number of 2. The weights associated with one branching are 1 and 0 with probability P(1/0), 0 and 1 with probability P(0/1), and Wx/x and 1 - Wx/x, 0 < Wx/x < 1, with probability P(x/x), where Wx/x is associated with a theoretical probability distribution. Furthermore, the probabilities P are assumed to depend on two characteristics of the rainy time period (wet box) to be branched: rainfall volume and position in the rainfall sequence. In the first step, analyses of 2 years of 8-min data indicate that the model is applicable between approximately 1 hour and 1 week, with approximately uniformly distributed Wx/x values. The probabilities P show a clear dependence on the box characteristics and a slight seasonal nonstationarity. In the second step, the model is used to disaggregate the time series from 17-hour to 1-hour resolution. The model-generated data reproduce well the ratio between rainy and nonrainy periods and the distribution of individual volumes. Event volumes, event durations, and dry period lengths are fairly well reproduced but somewhat underestimated, as is the autocorrelation. Analyses of the power spectrum and statistical moments show that the model preserves the scaling behaviour of the data. The results demonstrate the potential of scaling-based approaches in hydrological applications involving rainfall disaggregation.
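The branching scheme described above can be sketched in Python. This is a minimal illustration, not the fitted model: the probabilities below are fixed constants chosen for the example, whereas in the model they depend on the volume and sequence position of the box being branched, and the distribution of Wx/x is estimated from data (approximately uniform here, per the abstract).

```python
import random

def cascade_disaggregate(volume, levels, p10=0.25, p01=0.25, rng=random):
    """Disaggregate a rainfall volume over 2**levels boxes with a
    branching-number-2 cascade that conserves volume exactly.

    Weights per branching (p10, p01 are illustrative constants):
      - (1, 0) with probability p10: all rain goes to the first half
      - (0, 1) with probability p01: all rain goes to the second half
      - (w, 1 - w) otherwise, with w ~ Uniform(0, 1): both halves wet
    """
    series = [volume]
    for _ in range(levels):
        nxt = []
        for v in series:
            if v == 0.0:                  # dry boxes stay dry
                nxt.extend((0.0, 0.0))
                continue
            u = rng.random()
            if u < p10:
                w = 1.0
            elif u < p10 + p01:
                w = 0.0
            else:
                w = rng.uniform(0.0, 1.0)  # x/x case
            nxt.extend((v * w, v * (1.0 - w)))
        series = nxt
    return series

# Example: split a 16 mm total over 2**4 = 16 equal sub-periods
boxes = cascade_disaggregate(16.0, levels=4)
```

Because each branching distributes v as v*w + v*(1 - w), the total volume is conserved at every cascade level, which is the model's defining constraint.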

1998 ◽  
Vol 37 (11) ◽  
pp. 73-79 ◽  
Author(s):  
J. Olsson ◽  
R. Berndtsson

The present study concerns the disaggregation of daily rainfall time series into higher resolutions. For this purpose, the scaling-based cascade model proposed by Olsson (1998) is employed. This model operates by dividing each rainy time period into halves of equal length and distributing the rainfall volume between the halves. For this distribution three possible cases are defined, and the occurrence probability of each case is empirically estimated. Olsson (1998) showed that the model was applicable between the time scales of 1 hour and 1 week for rainfall in southern Sweden. In the present study, a daily seasonal (April-June; 3 years) rainfall time series from the same region was disaggregated by the model to 45-min resolution. The disaggregated data were shown to reproduce very well many fundamental characteristics of the observed 45-min data, e.g., the division between rainy and dry periods, the event structure, and the scaling behavior. The results demonstrate the potential of scaling-based approaches in hydrological applications involving rainfall.


Author(s):  
Davide Provenzano ◽  
Rodolfo Baggio

Abstract. In this study, we characterized the dynamics and analyzed the degree of synchronization of the time series of daily closing prices and volumes in US$ of three cryptocurrencies, Bitcoin, Ethereum, and Litecoin, over the period September 1, 2015-March 31, 2020. The time series were first mapped into complex networks by the horizontal visibility algorithm in order to reveal the structure of their temporal characters and dynamics. Then, the synchrony of the time series was investigated to determine the possibility that the cryptocurrencies under study co-bubble simultaneously. Findings reveal similar complex structures for the three virtual currencies in terms of the number and internal composition of communities. For the aim of our analysis, this result proves that the price and volume dynamics of the cryptocurrencies were characterized by cyclical patterns of similar wavelength and amplitude over the time period considered. Yet, the value of the slope parameter associated with the exponential distributions fitted to the data suggests a higher stability and predictability for Bitcoin and Litecoin than for Ethereum. The study of synchrony between the time series investigated displayed a different degree of synchronization between the three cryptocurrencies before and after a collapse event. These results could be of interest for investors who might prefer to switch from one cryptocurrency to another to exploit the potential opportunities for profit generated by the dynamics of price and volumes in the market of virtual currencies.
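The horizontal visibility mapping has a simple definition: two time points are linked whenever every value strictly between them lies below both. A minimal sketch of the edge construction (a straightforward implementation, not the authors' code; community detection and the fitted exponential degree distributions are further steps on the resulting graph):

```python
def horizontal_visibility_edges(series):
    """Edges of the horizontal visibility graph of a series: points i < j
    are linked iff every intermediate value lies strictly below both."""
    n = len(series)
    edges = []
    for i in range(n - 1):
        ceiling = float("-inf")       # highest value seen between i and j
        for j in range(i + 1, n):
            if ceiling < min(series[i], series[j]):
                edges.append((i, j))
            ceiling = max(ceiling, series[j])
            if ceiling >= series[i]:  # nothing further right is visible from i
                break
    return edges
```

For example, in the series [3, 1, 2, 4] the pair (1, 3) is not linked because the value 2 between them is not below 1, while all consecutive pairs are always linked.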


2007 ◽  
Vol 46 (6) ◽  
pp. 742-756 ◽  
Author(s):  
Gyu Won Lee ◽  
Alan W. Seed ◽  
Isztar Zawadzki

Abstract. The information on the time variability of drop size distributions (DSDs) as seen by a disdrometer is used to illustrate the structure of uncertainty in radar estimates of precipitation. Based on this, a method to generate the space-time variability of the distributions of the size of raindrops is developed. The model generates one moment of the DSD conditioned on another moment; in particular, radar reflectivity Z is used to obtain the rainfall rate R. Because two moments of the DSD are sufficient to capture most of the DSD variability, the model can be used to calculate DSDs and other moments of interest. A deterministic component of the precipitation field is obtained from a fixed R-Z relationship. Two different components of DSD variability are added to the deterministic precipitation field. The first represents the systematic departures from the fixed R-Z relationship that are expected from different regimes of precipitation; this is generated using a simple broken-line model. The second represents the fluctuations around the R-Z relationship for a particular regime and uses a space-time multiplicative cascade model. The temporal structure of the stochastic fluctuations is investigated using disdrometer data. Assuming Taylor's hypothesis, the spatial structure of the fluctuations is obtained and a stochastic model of the spatial distribution of the DSD variability is constructed. The consistency of the model is validated using concurrent radar and disdrometer data.
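The idea of a deterministic R-Z component plus multiplicative fluctuations can be illustrated at a single point. The Marshall-Palmer constants and the lognormal fluctuation strength below are illustrative assumptions, not the paper's fitted values, and the paper's actual fluctuations carry space-time correlation from a cascade model rather than being independent draws:

```python
import math
import random

def rain_rate_from_reflectivity(z_dbz, a=200.0, b=1.6, sigma=0.3, rng=random):
    """Rain rate from a fixed R-Z relation Z = a * R**b (Marshall-Palmer
    constants as an illustrative choice), perturbed by a mean-one lognormal
    multiplier standing in for DSD variability around the relation.
    sigma is an assumed fluctuation strength."""
    z_linear = 10.0 ** (z_dbz / 10.0)                   # dBZ -> mm^6 m^-3
    r_det = (z_linear / a) ** (1.0 / b)                 # deterministic part
    noise = rng.lognormvariate(-0.5 * sigma ** 2, sigma)  # E[noise] = 1
    return r_det * noise
```

Setting sigma = 0 recovers the purely deterministic estimate, e.g. about 2.73 mm/h at 30 dBZ with these constants.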


2021 ◽  
Author(s):  
Arun Ramanathan ◽  
Pierre-Antoine Versini ◽  
Daniel Schertzer ◽  
Ioulia Tchiguirinskaia ◽  
Remi Perrin ◽  
...  

Abstract. Hydrological applications such as flood design usually deal with and are driven by region-specific reference rainfall regulations, generally expressed as Intensity-Duration-Frequency (IDF) values. The meteorological module of hydro-meteorological models used in such applications should therefore be capable of simulating these reference rainfall scenarios. The multifractal cascade framework, since it incorporates physically realistic properties of rainfall processes such as non-homogeneity (intermittency), scale invariance, and extremal statistics, seems to be an appropriate choice for this purpose. Here we suggest a rather simple discrete-in-scale multifractal cascade based approach. Hourly rainfall time-series datasets (with lengths ranging from around 28 to 35 years) over six cities (Paris, Marseille, Strasbourg, Nantes, Lyon, and Lille) in France that are characterized by different climates and a six-minute rainfall time series dataset (with a length of around 15 years) over Paris were analyzed via spectral analysis and Trace Moment analysis to understand the scaling range over which the universal multifractal theory can be considered valid. Then the Double Trace Moment analysis was performed to estimate the universal multifractal parameters α and C1 that are required by the multifractal cascade model for simulating rainfall. A renormalization technique that estimates suitable renormalization constants based on the IDF values of reference rainfall is used to simulate the reference rainfall scenarios. Although only purely temporal simulations are considered here, this approach could possibly be generalized to higher spatial dimensions as well.

Keywords: Multifractals, Non-linear geophysical systems, Cascade dynamics, Scaling, Hydrology, Stochastic rainfall simulations.
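A discrete-in-scale multiplicative cascade of the kind referred to can be sketched for the log-normal special case of universal multifractals (α = 2); the general case would use an extremal Lévy generator, and the IDF-based renormalization step is omitted. The log-variance per branching is set so that K(2) = 2·C1, consistent with K(q) = C1(q² - q) for α = 2:

```python
import math
import random

def lognormal_cascade(c1, levels, rng=random):
    """Discrete-in-scale multiplicative cascade with a log-normal generator,
    the alpha = 2 special case of universal multifractals. c1 is the
    codimension of the mean; weights are normalized so their expectation
    is 1, keeping the ensemble-mean intensity constant across scales."""
    sigma2 = 2.0 * c1 * math.log(2.0)      # log-variance per branching (lambda = 2)
    field = [1.0]
    for _ in range(levels):
        nxt = []
        for v in field:
            for _half in range(2):
                g = rng.gauss(-sigma2 / 2.0, math.sqrt(sigma2))  # E[exp(g)] = 1
                nxt.append(v * math.exp(g))
        field = nxt
    return field
```

With c1 = 0 the cascade degenerates to a homogeneous field of ones; increasing c1 concentrates intensity into ever sparser peaks, which is the intermittency the framework is meant to capture.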


2017 ◽  
Vol 8 (4) ◽  
pp. 963-976 ◽  
Author(s):  
Jaak Jaagus ◽  
Mait Sepp ◽  
Toomas Tamm ◽  
Arvo Järvet ◽  
Kiira Mõisja

Abstract. Time series of monthly, seasonal and annual mean air temperature, precipitation, snow cover duration and specific runoff of rivers in Estonia are analysed to detect trends and regime shifts during 1951-2015. Trend analysis is realised using the Mann-Kendall test, and regime shifts are detected with the Rodionov test (sequential t-test analysis of regime shifts). The results from Estonia are related to trends and regime shifts in time series of indices of large-scale atmospheric circulation. Annual mean air temperature has significantly increased at all 12 stations by 0.3-0.4 K decade−1. The warming trend was detected in all seasons but with higher magnitudes in spring and winter. Snow cover duration has decreased in Estonia by 3-4 days decade−1. Changes in precipitation are not clear and uniform due to their very high spatial and temporal variability. The most significant increase in precipitation was observed during the cold half-year, from November to March, and also in June. Time series of specific runoff measured at 21 stations showed significant seasonal changes during the study period. Winter values have increased by 0.4-0.9 L s−1 km−2 decade−1, with stronger changes typical for western Estonia and weaker changes for eastern Estonia. At the same time, specific runoff in April and May has notably decreased, indicating a shift of the runoff maximum to an earlier time, i.e. from April to March. Air temperature, precipitation, snow cover duration and specific runoff of rivers are highly correlated in winter, as determined by the large-scale atmospheric circulation. Correlation coefficients between the studied variables and the Arctic Oscillation (AO) and North Atlantic Oscillation (NAO) indices, which reflect the intensity of the westerlies, were 0.5-0.8.
The main result of the analysis of regime shifts was the detection of coherent shifts for air temperature, snow cover duration and specific runoff in the late 1980s, mostly since the winter of 1988/1989, which are, in turn, synchronous with the shifts in winter circulation. For example, runoff abruptly increased in January, February and March but decreased in April. Regime shifts in annual specific runoff correspond to the alternation of wet and dry periods. A dry period started in 1964 or 1963, a wet period in 1978 and the next dry period at the beginning of the 21st century.
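The Mann-Kendall test used for the trend analysis above is a rank-based test on the signs of all pairwise differences. A minimal version without tie correction (real analyses of monthly data would add the tie and serial-correlation corrections):

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S statistic
    and an approximate two-sided p-value from the normal approximation."""
    n = len(x)
    # S = number of increasing pairs minus number of decreasing pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return s, p
```

A strictly increasing series of length n gives the maximum S = n(n - 1)/2 and a vanishing p-value, while a constant series gives S = 0 and p = 1.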


CJEM ◽  
2018 ◽  
Vol 20 (S1) ◽  
pp. S14-S14
Author(s):  
J. Thull-Freedman ◽  
T. Williamson ◽  
E. Pols ◽  
A. McFetridge ◽  
S. Libbey ◽  
...  

Introduction: Undertreated pain is known to cause short- and long-term harm in children. Limb injuries are a common painful condition in emergency department (ED) patients, accounting for 12% of ED visits by children. Our city has one pediatric ED in a freestanding children's hospital and 3 general EDs that treat both adults and children. 68% of pediatric limb injuries in our city are treated in the pediatric ED and 32% are treated in a general ED. A quality improvement (QI) initiative was developed at the children's hospital ED in April 2015 focusing on "Commitment to Comfort." After achieving aims at the children's hospital, a QI collaborative was formed among the pediatric ED and the 3 general EDs to 1) improve the proportion of children citywide receiving analgesia for limb injuries from 27% to 40% and 2) reduce the median time to analgesia from 37 minutes to 15 minutes, during the time period of April-September, 2016. Methods: Data were obtained from computerized order entry records for children 0-17.99 years visiting any participating ED with a chief complaint of limb injury. Project teams from each site met monthly to discuss aims, develop key driver diagrams, plan tests of change, and share learnings. Implementation strategies were based on the Model for Improvement with PDSA cycles. Patient and family consultation was obtained. Process measures included the proportion of children treated with analgesic medication and time to analgesia; balancing measures were duration of triage and length of stay for limb injury and all patients. Site-specific run charts were used to detect special cause variation. Data from all sites were combined at study end to measure city-wide impact using χ2 tests and interrupted time series analysis. Results: During the 3.5-year time period studied (April 1, 2014-September 30, 2017), there were 45,567 visits to the participating EDs by children 0-17.99 years with limb injury. All visits were included in analysis.
Special cause was detected in run charts of all process measures. Interrupted time series analysis comparing the year prior to implementation at the children's hospital in April 2015 to the year following completion of implementation at the 3 general hospitals in October 2016 demonstrated that the proportion of patients with limb injury receiving analgesia increased from 27% to 40% (p<0.01), and the median time from arrival to analgesia decreased from 37 to 11 minutes (p<0.01). Balancing measure analysis is in progress. Conclusion: This multisite initiative emphasizing "Commitment to Comfort" was successful in improving pain outcomes for all children with limb injuries seen in city-wide EDs, and was sustained for one year following implementation. A QI collaborative can be an effective method for spreading improvement. The project team is now spreading the Commitment to Comfort initiative to over 30 rural and regional EDs throughout the province through the establishment of a provincial QI collaborative.
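Interrupted time series analysis of the kind reported is commonly formulated as segmented regression: a pre-intervention intercept and slope, plus a level-change and a slope-change term at the intervention. This is a generic sketch of that formulation, not the study's actual statistical model:

```python
import numpy as np

def its_segmented_fit(y, break_idx):
    """Interrupted time series via segmented OLS:
        y ~ b0 + b1*t + b2*step + b3*(t - break_idx)*step,
    where step = 1 at and after the intervention. Returns (b0, b1, b2, b3):
    b2 is the immediate level change, b3 the change in slope."""
    t = np.arange(len(y), dtype=float)
    step = (t >= break_idx).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - break_idx) * step])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

Fitting a series with a known step change recovers the level-change coefficient directly; in practice one would also report standard errors and check for autocorrelation in the residuals.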


2019 ◽  
Author(s):  
Jaqueline Lekscha ◽  
Reik V. Donner

Abstract. Analysing palaeoclimate proxy time series using windowed recurrence network analysis (wRNA) has been shown to provide valuable information on past climate variability. In turn, it has also been found that the robustness of the obtained results differs among proxies from different palaeoclimate archives. To systematically test the suitability of wRNA for studying different types of palaeoclimate proxy time series, we use the framework of forward proxy modelling. For this, we create artificial input time series with different properties and, in a first step, compare the time series properties of the input and the model output time series. In a second step, we compare the areawise significant anomalies detected using wRNA. For proxies from tree and lake archives, we find that significant anomalies present in the input time series are sometimes missed in the output time series after the nonlinear filtering by the corresponding models. For proxies from speleothems, we observe falsely identified significant anomalies that are not present in the input time series. Finally, for proxies from ice cores, the wRNA results show the best correspondence with those for the input data. Our results contribute to improving the interpretation of windowed recurrence network analysis results obtained from real-world palaeoclimate time series.
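The per-window recurrence network underlying wRNA treats the states in a window as nodes and links two states when their distance falls below a threshold ε. A minimal scalar version (actual analyses typically use time-delay embedded state vectors and then compute network measures such as transitivity per sliding window):

```python
def recurrence_adjacency(window, eps):
    """Adjacency matrix of a recurrence network for one window: states i, j
    are linked when their distance is below eps (self-links excluded)."""
    n = len(window)
    return [[1 if i != j and abs(window[i] - window[j]) < eps else 0
             for j in range(n)]
            for i in range(n)]
```

Sliding this construction along the series and tracking a network measure per window yields the wRNA time series in which anomalies are then detected.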


2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, yielding classification accuracy improvements of 1.91% to 6.69%. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
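The pre-training objective, predicting randomly contaminated observations from a pixel's full time series, can be sketched at the data-preparation level. The masking fraction and the Gaussian corruption below are illustrative assumptions, not the paper's exact scheme; the Transformer itself would consume the corrupted series and be trained to regress the held-out target values:

```python
import numpy as np

def masked_pretraining_batch(series, mask_frac=0.15, rng=None):
    """Build one self-supervised example: randomly 'contaminate' a fraction
    of a pixel's time series and return (corrupted input, target values at
    the masked steps, boolean mask). mask_frac is an assumed constant."""
    rng = rng if rng is not None else np.random.default_rng(0)
    series = np.asarray(series, dtype=float)
    n = len(series)
    n_mask = max(1, int(round(mask_frac * n)))
    idx = rng.choice(n, size=n_mask, replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    corrupted = series.copy()
    # replace masked observations with noise drawn near the series statistics
    corrupted[mask] = rng.normal(series.mean(), series.std() + 1e-8, n_mask)
    return corrupted, series[mask], mask
```

The loss would then compare the network's predictions at masked positions against the returned targets, so only corrupted steps drive the gradient.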


2020 ◽  
Vol 86 (7) ◽  
pp. 431-441 ◽  
Author(s):  
Sébastien Giordano ◽  
Simon Bailly ◽  
Loic Landrieu ◽  
Nesrine Chehata

Leveraging the recent availability of accurate, frequent, and multimodal (radar and optical) Sentinel-1 and -2 acquisitions, this paper investigates the automation of land parcel identification system (LPIS) crop type classification. Our approach allows for the automatic integration of temporal knowledge, i.e., crop rotations, using existing parcel-based land cover databases and multi-modal Sentinel-1 and -2 time series. The temporal evolution of crop types was modeled with a linear-chain conditional random field, trained with time series of multi-modal (radar and optical) satellite acquisitions and the associated LPIS. Our model was tested on two study areas in France (≥ 1250 km²) which show different crop types, various parcel sizes, and agricultural practices: the Seine et Marne and the Alpes de Haute-Provence, classified according to a fine national 25-class nomenclature. We first trained a Random Forest classifier without temporal structure, achieving 89.0% overall accuracy in Seine et Marne (10 classes) and 73% in Alpes de Haute-Provence (14 classes). We then demonstrated experimentally that taking into account the temporal structure of crop rotation with our model resulted in an accuracy increase of +3% to +5%. This increase was especially important (+12%) for classes which were poorly classified without the temporal structure. A stark positive impact was also demonstrated on permanent crops, while it was fairly limited or even detrimental for annual crops.
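The linear-chain structure means the most likely crop-label sequence for a parcel can be decoded with the Viterbi recursion over per-year emission scores and crop-rotation transition scores. A generic sketch with illustrative score matrices, not the trained CRF or its features:

```python
def viterbi(emission_scores, transition):
    """Most likely label sequence under a linear-chain model.
    emission_scores[t][y]: score of label y at step t (e.g. per-year
    classifier scores); transition[y0][y1]: score of rotation y0 -> y1.
    Both are illustrative stand-ins for trained CRF potentials."""
    n_labels = len(emission_scores[0])
    score = list(emission_scores[0])   # best score ending in each label
    back = []                          # backpointers per step
    for t in range(1, len(emission_scores)):
        new, ptr = [], []
        for y in range(n_labels):
            best_prev = max(range(n_labels),
                            key=lambda y0: score[y0] + transition[y0][y])
            ptr.append(best_prev)
            new.append(score[best_prev] + transition[best_prev][y]
                       + emission_scores[t][y])
        score = new
        back.append(ptr)
    # trace the best path backwards from the best final label
    path = [max(range(n_labels), key=lambda y: score[y])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With transition scores that reward alternation, a sequence whose first year clearly favors one crop is decoded as a rotation even when later observations are uninformative, which is exactly the temporal-knowledge effect described above.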


Author(s):  
Nachiketa Chakraborty

With an explosion of data in the near future, from observatories spanning radio to gamma-rays, we have entered the era of time domain astronomy. Historically, this field has modelled temporal structure with time-series simulations only in energy ranges blessed with excellent statistics, as in X-rays. In addition to ever-increasing volumes and variety of astronomical lightcurves, there is a plethora of different types of transients detected not only across the electromagnetic spectrum, but across multiple messengers, such as counterparts of neutrino and gravitational wave sources. As a result, precise, fast forecasting and modeling of lightcurves or time series will play a crucial role both in understanding the physical processes and in coordinating multiwavelength and multimessenger campaigns. In this regard, deep learning algorithms such as recurrent neural networks (RNNs) should prove extremely powerful for forecasting, as they have in several other domains. Here we test the performance of a very successful class of RNNs, the Long Short-Term Memory (LSTM) algorithms, on simulated lightcurves. We focus on univariate forecasting of the types of lightcurves typically found in active galactic nuclei (AGN) observations. Specifically, we explore the sensitivity of training and test losses to key parameters of the LSTM network and to data characteristics, namely gaps and complexity measured in terms of the number of Fourier components. We find that the performance of LSTMs is typically better for pink- or flicker-noise type sources. The key parameters on which performance depends are the LSTM batch size and the gap percentage of the lightcurves. While a batch size of 10-30 seems optimal, the best test and train losses are obtained with under 10% missing data, for both periodic and random gaps in pink noise. The performance is far worse for red noise, which compromises the detectability of transients.
Performance degrades monotonically with data complexity measured in terms of the number of Fourier components, which is especially relevant in the context of complicated quasi-periodic signals buried under noise. Thus, we show that time-series simulations are excellent guides for the use of RNN-LSTMs in forecasting.
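Lightcurves with a controlled number of Fourier components and a power-law spectrum (the pink- and red-noise cases discussed) can be simulated with phase randomization. This is a generic sketch of that family of simulations, not the authors' code; gap injection would then remove a chosen percentage of points:

```python
import math
import random

def simulate_lightcurve(n, n_components, beta=1.0, rng=random):
    """Simulate a lightcurve of n points as a sum of n_components Fourier
    modes with power-law power spectrum P(f) ~ f**(-beta) and random
    phases (beta = 1: pink/flicker noise, beta = 2: red noise)."""
    series = [0.0] * n
    for k in range(1, n_components + 1):
        amp = k ** (-beta / 2.0)           # amplitude ~ sqrt(power)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        for i in range(n):
            series[i] += amp * math.cos(2.0 * math.pi * k * i / n + phase)
    return series
```

Raising beta shifts variance toward the lowest frequencies, producing the long smooth excursions of red noise that the abstract reports as hardest for the LSTM to forecast.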

