Timescape: A Simple Spatiotemporal Interpolation Tool

Author(s):  
Marco Ciolfi ◽  
Francesca Chiocchini ◽  
Rocco Pace ◽  
Giuseppe Russo ◽  
Marco Lauteri

We developed a novel approach in the field of spatiotemporal modelling, based on the spatialisation of time: the Timescape algorithm. It is especially aimed at sparsely distributed datasets in ecological research, whose spatial and temporal variability is strongly entangled. The algorithm is based on the definition of a spatiotemporal distance that incorporates a causality constraint and that is also capable of accommodating the seasonal behaviour of the modelled variable. The actual modelling is conducted by exploiting any established spatial interpolation technique, substituting the ordinary spatial distance with our Timescape distance, thus selecting, from the same input set of observations, those causally related to each estimated value at a given site and time. The notion of causality is expressed topologically and has to be tuned for each particular case. The Timescape algorithm originates from the field of stable isotope spatial modelling (isoscapes), but in principle it can be used to model any real scalar random field distribution.
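The abstract does not spell out the distance formula, so the sketch below is a rough illustration only: it assumes a simple metric of the form sqrt(dx^2 + dy^2 + (c*dt)^2), where the hypothetical factor c converts time lags into space-equivalent units, and enforces causality by discarding observations later than the estimation time. The seasonal term of the actual Timescape distance is omitted, and inverse-distance weighting stands in for the "established spatial interpolation technique".

import numpy as np

def timescape_distance(x, y, t, xs, ys, ts, c=1.0):
    # Illustrative spatiotemporal distance: c (an assumption, not the
    # paper's parameter) trades time for space. Observations in the
    # future of (x, y, t) get an infinite distance (causality).
    dt = t - ts
    d = np.sqrt((x - xs) ** 2 + (y - ys) ** 2 + (c * dt) ** 2)
    return np.where(dt >= 0, d, np.inf)

def idw_estimate(x, y, t, xs, ys, ts, vs, c=1.0, power=2.0):
    # Ordinary inverse-distance weighting, with the Euclidean distance
    # replaced by the spatiotemporal distance above.
    d = timescape_distance(x, y, t, xs, ys, ts, c)
    mask = np.isfinite(d)
    if not mask.any():
        raise ValueError("no causally admissible observations")
    w = 1.0 / np.maximum(d[mask], 1e-12) ** power
    return np.sum(w * vs[mask]) / np.sum(w)

# Toy usage: five scattered observations, one estimation point.
rng = np.random.default_rng(0)
xs, ys, ts = rng.uniform(0.0, 10.0, size=(3, 5))
vs = rng.normal(size=5)
print(idw_estimate(5.0, 5.0, 8.0, xs, ys, ts, vs))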

2020 ◽  
Author(s):  
Fidel González-Rouco ◽  
María Angeles López-Cayuela ◽  
Jorge Navarro ◽  
Elena García-Bustamante ◽  
Nuria García-Cantero ◽  
...  

The spatial and temporal variability of droughts in the Euro-Mediterranean area during the last two millennia has been analyzed by comparing the dendrochronology-based Old World Drought Atlas (OWDA) reconstruction with 13 simulations from the Community Earth System Model Last Millennium Ensemble (CESM-LME) that include a complete set of natural and anthropogenic forcings. The OWDA provides scPDSI estimates, whereas soil moisture is used for the CESM-LME. A clustering into regions of objectively different behavior is achieved through rotation of principal components, and the resulting regionalizations of the OWDA and the CESM-LME are compared.

The regions obtained from the reconstruction and the model are overall consistent. Some regions coincide in both, and in some cases model regions are a combination of the reconstructed ones. The resulting classification is also robust across the model ensemble, although the definition of some hydroclimatic regions shows some sensitivity to internal variability.

The temporal variability of drought within each region is analyzed. Differences are found in the level of low-frequency variability among regions, with implications for the probability of long, intense droughts in different areas. Megadroughts are found both in the reconstruction and in the simulations, and their occurrence suggests a dependence on internal variability rather than a response to external forcing.
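The regionalization step can be sketched in a few lines. The sketch below is not the authors' code: it applies a textbook varimax rotation to the leading EOF loadings of a synthetic stand-in field and assigns each grid cell to the rotated mode with the largest absolute loading; the mode count, rotation criterion, and assignment rule are all assumptions beyond the abstract's "rotation of principal components".

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Textbook varimax rotation of a (grid cells x modes) loading matrix.
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < var * (1.0 + tol):
            break
        var = s.sum()
    return loadings @ R

# Synthetic stand-in for a (time x grid cells) drought-index field.
rng = np.random.default_rng(1)
anom = rng.normal(size=(200, 50))
anom -= anom.mean(axis=0)

# Leading EOFs via SVD, varimax rotation of the scaled loadings, then
# each grid cell joins the region of its strongest rotated loading.
n_modes = 4
u, s, vt = np.linalg.svd(anom, full_matrices=False)
loadings = vt[:n_modes].T * s[:n_modes]
regions = np.argmax(np.abs(varimax(loadings)), axis=1)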


Crop Science ◽  
2004 ◽  
Vol 44 (3) ◽  
pp. 847 ◽  
Author(s):  
Weidong Liu ◽  
Matthijs Tollenaar ◽  
Greg Stewart ◽  
William Deen

2021 ◽  
Vol 5 (1) ◽  
pp. 38
Author(s):  
Chiara Giola ◽  
Piero Danti ◽  
Sandro Magnani

In the age of AI, companies strive to extract benefits from data. In the first steps of data analysis, an arduous dilemma scientists have to cope with is the definition of the 'right' quantity of data needed for a certain task. In energy management in particular, one of the most thriving applications of AI is the optimization of energy plant generators' consumption. When designing a strategy to improve the generators' schedule, an essential piece of information is the future energy load requested by the plant. This topic, referred to in the literature as load forecasting, has lately gained great popularity; in this paper the authors underline the problem of estimating the correct size of the dataset needed to train prediction algorithms and propose a suitable methodology. At its core are learning curves, a powerful tool for tracking an algorithm's performance as the training-set size varies. At first, a brief review of the state of the art and a preliminary analysis of eligible machine learning techniques are offered. Furthermore, the hypotheses and constraints of the work are explained, presenting the dataset and the goal of the analysis. Finally, the methodology is elucidated and the results are discussed.
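As an illustration of the learning-curve idea, the sketch below uses scikit-learn's learning_curve on a synthetic hourly load series; the paper's dataset and model choices are not reproduced here, and the random forest, the 24-hour lag features, and the split scheme are all assumptions. Validation error is tracked while the training-set size grows; the size at which the curve flattens marks the 'right' quantity of data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit, learning_curve

# Synthetic hourly load: a daily cycle plus noise.
rng = np.random.default_rng(42)
hours = np.arange(3000)
load = 50.0 + 10.0 * np.sin(2.0 * np.pi * hours / 24.0) + rng.normal(0.0, 2.0, hours.size)

# Features: the previous 24 hourly loads; target: the next hour's load.
lags = 24
X = np.array([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]

train_sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=50, random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),
    cv=TimeSeriesSplit(n_splits=5),  # time-aware splits for a load series
    scoring="neg_mean_absolute_error",
)

# The curve flattens once extra samples stop improving validation error.
for n, mae in zip(train_sizes, -val_scores.mean(axis=1)):
    print(f"{n:5d} samples -> validation MAE {mae:.2f}")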

