Spatio-Temporal Downscaling of Climate Data Using Convolutional and Error-Predicting Neural Networks

2021 ◽  
Vol 3 ◽  
Author(s):  
Agon Serifi ◽  
Tobias Günther ◽  
Nikolina Ban

Numerical weather and climate simulations nowadays produce terabytes of data, and the data volume continues to increase rapidly since an increase in resolution greatly benefits the simulation of weather and climate. In practice, however, data is often available only at lower resolution, for many practical reasons: data coarsening to meet memory constraints, limited computational resources, favoring multiple low-resolution ensemble simulations over a few high-resolution simulations, and the limits of sensing instruments in observations. To enable a more insightful analysis, we investigate the capability of neural networks to reconstruct high-resolution data from given low-resolution simulations. For this, we phrase the data reconstruction as a super-resolution problem from multiple data sources, tailored toward meteorological and climatological data. We investigate supervised machine learning using multiple deep convolutional neural network architectures to test the limits of data reconstruction for various spatial and temporal resolutions, for low-frequency and high-frequency input data, and for the generalization to numerical and observed data. Once such downscaling networks are trained, they serve two purposes: first, legacy low-resolution simulations can be downscaled to reconstruct high-resolution detail; second, past observations that were taken at lower resolution can be brought to higher resolution, opening new analysis possibilities. For the downscaling of high-frequency fields like precipitation, we show that error-predicting networks are far less suitable than deconvolutional neural networks due to their poor learning performance. We demonstrate that deep convolutional downscaling has the potential to become a building block of modern weather and climate analysis in both research and operational forecasting, and show that the ideal choice of network architecture depends on the type of data to predict, i.e., there is no single best architecture for all variables.
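The abstract does not give architectural details; purely as a hedged illustration of the deconvolutional (upsampling) approach it contrasts with error prediction, the following PyTorch sketch upscales a coarse 2D field by a factor of four. The layer widths, kernel sizes, and scale factor are assumptions for illustration, not the architectures evaluated in the paper.

```python
# Minimal sketch of a deconvolutional super-resolution network for 2D gridded
# fields (e.g. temperature on a lat/lon grid). Layer widths and the 4x
# upscaling factor are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class DownscalingCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Transposed convolutions ("deconvolutions") increase the spatial resolution.
        self.upsample = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                      # x: (batch, channels, H, W) low-resolution field
        return self.upsample(self.features(x)) # output: (batch, channels, 4H, 4W)

lowres = torch.randn(8, 1, 32, 32)             # e.g. coarse temperature patches
highres_pred = DownscalingCNN()(lowres)        # (8, 1, 128, 128)
```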

2020 ◽  
Vol 16 (5) ◽  
pp. 155014772092048
Author(s):  
Miguel Ángel López-Medina ◽  
Macarena Espinilla ◽  
Chris Nugent ◽  
Javier Medina Quero

The automatic detection of falls within environments where sensors are deployed has attracted considerable research interest due to the prevalence and impact of falls, especially among the elderly. In this work, we analyze the capabilities of non-invasive thermal vision sensors to detect falls using several architectures of convolutional neural networks. First, we integrate two thermal vision sensors with different capabilities: (1) low resolution with a wide viewing angle and (2) high resolution with a central viewing angle. Second, we include a fuzzy representation of the thermal information. Third, we generate a large data set from a small set of images using ad hoc data augmentation, which increases the original data set size by generating new synthetic images. Fourth, we define three types of convolutional neural networks, adapted to each thermal vision sensor, in order to evaluate the impact of the architecture on fall detection performance. The results show encouraging performance in single-occupancy contexts. In multiple-occupancy contexts, the low-resolution thermal vision sensor with a wide viewing angle obtains better performance and a shorter learning time than the high-resolution thermal vision sensor with a central viewing angle.
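The abstract does not describe the ad hoc augmentation pipeline; as a minimal sketch under assumed settings, the snippet below expands a single low-resolution thermal frame into several synthetic variants using torchvision. The transform choices, their parameters, and the 32×32 frame size are hypothetical.

```python
# Generic augmentation pipeline for low-resolution thermal frames.
# The specific transforms and parameters are illustrative assumptions;
# the paper describes its own ad hoc augmentation.
import torch
from torchvision import transforms

thermal_augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(p=0.5),                   # falls can occur in either direction
    transforms.RandomRotation(degrees=10),                    # small camera-tilt variations
    transforms.RandomResizedCrop(size=32, scale=(0.8, 1.0)),  # hypothetical 32x32 sensor frame
    transforms.ToTensor(),
])

frame = torch.rand(1, 32, 32)                                 # single-channel thermal frame in [0, 1]
synthetic = [thermal_augment(frame) for _ in range(10)]       # 10 synthetic variants of one image
```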


2020 ◽  
Author(s):  
Marie Déchelle-Marquet ◽  
Marina Levy ◽  
Patrick Gallinari ◽  
Michel Crepon ◽  
Sylvie Thiria

Ocean currents have a major impact on climate variability, for instance through the heat transport they induce. Ocean climate models have a rather low resolution of about 50 km. Several dynamical processes, such as instabilities and filaments with a scale of about 1 km, have a strong influence on the ocean state. We propose to observe and model these fine-scale effects with deep neural networks, by combining high-resolution satellite SST observations (1 km resolution, daily) and mesoscale-resolution altimetry observations (10 km resolution, weekly). Whereas the downscaling of climate models has commonly been addressed with assimilation approaches, in the last few years neural networks have emerged as a powerful multi-scale analysis method. Moreover, the large amount of available oceanic data makes deep learning attractive for bridging the gap between scales of variability.

This study aims at reconstructing the multi-scale variability of oceanic fields from observations at different spatial resolutions, based on the high-resolution NATL60 ocean model: low-resolution sea surface height (SSH) and high-resolution SST. As the link between residual neural networks and dynamical systems has recently been established, such a network is trained in a supervised way to reconstruct the high variability of SSH and ocean currents at submesoscale (a few kilometres). To ensure that physical aspects are preserved in the model outputs, physical knowledge is incorporated into the training of the deep learning models. Different validation methods are investigated and the model outputs are tested with regard to their physical plausibility. The method's performance is discussed and compared to other baselines (namely a convolutional neural network). The generalization of the proposed method to other ocean variables, such as sea surface chlorophyll or sea surface salinity, is also examined.
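The abstract does not state how the physical knowledge enters the training; as one hedged illustration, the sketch below augments a standard supervised loss with a gradient-matching penalty, motivated by the link between SSH gradients and geostrophic currents. The penalty form and its weight are assumptions, not the authors' formulation.

```python
# Illustrative physics-informed training loss: a supervised MSE term plus a
# penalty on mismatched horizontal gradients of SSH, as a stand-in for the
# (unspecified) physical constraints mentioned in the abstract.
import torch
import torch.nn.functional as F

def gradient(field):
    """Finite-difference gradients of a (batch, 1, H, W) field."""
    dx = field[..., :, 1:] - field[..., :, :-1]
    dy = field[..., 1:, :] - field[..., :-1, :]
    return dx, dy

def physics_informed_loss(ssh_pred, ssh_true, weight=0.1):
    mse = F.mse_loss(ssh_pred, ssh_true)
    dx_p, dy_p = gradient(ssh_pred)
    dx_t, dy_t = gradient(ssh_true)
    # SSH gradients relate to geostrophic currents, so matching them
    # encourages physically consistent velocity fields.
    grad_term = F.mse_loss(dx_p, dx_t) + F.mse_loss(dy_p, dy_t)
    return mse + weight * grad_term
```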


2018 ◽  
Author(s):  
Christoph Schlager ◽  
Gottfried Kirchengast ◽  
Juergen Fuchsberger

Abstract. A weather diagnostic application for automatic generation of gridded wind fields in near-real time, recently developed by the authors (Schlager et al., 2017), is applied to the WegenerNet Johnsbachtal (JBT) meteorological station network. This station network contains eleven meteorological stations at elevations from about 600 m to 2200 m in a mountainous region in the north of Styria, Austria. The application generates, based on meteorological observations with a temporal resolution of 10 minutes from the WegenerNet JBT, mean wind and wind gust fields at 10 m and 50 m height levels with a high spatial resolution of 100 m × 100 m and a temporal resolution of 30 minutes. These wind field products are automatically stored to the WegenerNet data archives, which also include long-term averaged weather and climate datasets from post-processing. A main purpose of these empirically modeled products is the evaluation of convection-permitting dynamical climate models as well as investigating weather and climate variability on a local scale. The application's performance is evaluated against the observations from meteorological stations for representative weather conditions, for a month including mainly thermally induced wind events (July 2014) and a month with frequently occurring strong wind events (December 2013). The overall statistical agreement, estimated for the vector-mean wind speed, shows a reasonably good modeling performance, with somewhat better values for the strong wind conditions. The difference between modeled and observed wind directions depends on the station location, where locations along mountain slopes are particularly challenging. Furthermore, the seasonal statistical agreement was investigated from five-year climate data of the WegenerNet JBT in comparison to nine-year climate data from the high-density WegenerNet meteorological station network Feldbach Region (FBR) analyzed by Schlager et al. (2017). In general, the five-year statistical evaluation for the JBT indicates similar performance as the shorter-term evaluations of the two representative months. Because of the denser WegenerNet FBR network, the statistical results show better performance for this station network. The application can now serve as a valuable tool for intercomparison with and evaluation of wind fields from high-resolution dynamical climate models in both the WegenerNet FBR and JBT regions.


2018 ◽  
Vol 11 (10) ◽  
pp. 5607-5627 ◽  
Author(s):  
Christoph Schlager ◽  
Gottfried Kirchengast ◽  
Juergen Fuchsberger

Abstract. A weather diagnostic application for automatic generation of gridded wind fields in near-real-time, recently developed by the authors Schlager et al. (2017), is applied to the WegenerNet Johnsbachtal (JBT) meteorological station network. This station network contains 11 meteorological stations at elevations from about 600 to 2200 m in a mountainous region in the north of Styria, Austria. The application generates, based on meteorological observations with a temporal resolution of 10 min from the WegenerNet JBT, mean wind and wind gust fields at 10 and 50 m height levels with a high spatial resolution of 100 m × 100 m and a temporal resolution of 30 min. These wind field products are automatically stored to the WegenerNet data archives, which also include long-term averaged weather and climate datasets from post-processing. The main purpose of these empirically modeled products is the evaluation of convection-permitting dynamical climate models as well as investigating weather and climate variability on a local scale. The application's performance is evaluated against the observations from meteorological stations for representative weather conditions, for a month including mainly thermally induced wind events (July 2014) and a month with frequently occurring strong wind events (December 2013). The overall statistical agreement, estimated for the vector-mean wind speed, shows a reasonably good modeling performance. Due to the spatially more homogeneous wind speeds and directions for strong wind events in this mountainous region, the results show somewhat better performance for these events. The difference between modeled and observed wind directions depends on the station location, where locations along mountain slopes are particularly challenging. Furthermore, the seasonal statistical agreement was investigated from 5-year climate data of the WegenerNet JBT in comparison to 9-year climate data from the high-density WegenerNet meteorological station network Feldbach Region (FBR) analyzed by Schlager et al. (2017). In general, the 5-year statistical evaluation for the JBT indicates similar performance as the shorter-term evaluations of the two representative months. Because of the denser WegenerNet FBR network, the statistical results show better performance for this station network. The application can now serve as a valuable tool for intercomparison with, and evaluation of, wind fields from high-resolution dynamical climate models in both the WegenerNet FBR and JBT regions.
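The gridding algorithm itself is documented in Schlager et al. (2017) rather than in this abstract; purely as a generic illustration of mapping sparse station observations onto a 100 m × 100 m grid, the sketch below uses inverse-distance weighting. This is not necessarily the method implemented in the WegenerNet processing system; the domain size, station count, and weighting power are assumptions.

```python
# Generic inverse-distance-weighted gridding of station wind observations onto
# a regular 100 m x 100 m grid. Illustration only; the WegenerNet application
# uses its own empirical modeling described in Schlager et al. (2017).
import numpy as np

def idw_grid(stat_xy, stat_val, grid_x, grid_y, power=2.0):
    """stat_xy: (N, 2) station coordinates in metres; stat_val: (N,) wind speeds."""
    gx, gy = np.meshgrid(grid_x, grid_y)                                 # target grid
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)                     # (M, 2) grid points
    d = np.linalg.norm(pts[:, None, :] - stat_xy[None, :, :], axis=2)    # (M, N) distances
    w = 1.0 / np.maximum(d, 1e-6) ** power                               # inverse-distance weights
    return (w @ stat_val / w.sum(axis=1)).reshape(gx.shape)

# Example: 11 stations on a hypothetical 10 km x 10 km domain, 100 m spacing
stations = np.random.uniform(0, 10_000, size=(11, 2))
speeds = np.random.uniform(0, 15, size=11)                               # m/s
grid = idw_grid(stations, speeds, np.arange(0, 10_000, 100), np.arange(0, 10_000, 100))
```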


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Pere Mujal ◽  
Àlex Martínez Miguel ◽  
Artur Polls ◽  
Bruno Juliá-Díaz ◽  
Sebastiano Pilati

We investigate the supervised machine learning of few interacting bosons in optical speckle disorder via artificial neural networks. The learning curve shows an approximately universal power-law scaling for different particle numbers and for different interaction strengths. We introduce a network architecture that can be trained and tested on heterogeneous datasets including different particle numbers. This network provides accurate predictions for all system sizes included in the training set and, by design, is suitable to attempt extrapolations to (computationally challenging) larger sizes. Notably, a novel transfer-learning strategy is implemented, whereby the learning of the larger systems is substantially accelerated and made consistently accurate by including in the training set many small-size instances.
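The network architecture is not described in the abstract; the sketch below illustrates one simple way to train a single regressor on heterogeneous datasets by feeding the particle number as an extra input alongside a discretized speckle potential. The grid size, layer widths, and input encoding are assumptions, not the architecture proposed in the paper.

```python
# Illustrative sketch: one network handles different particle numbers by taking
# the discretized speckle potential together with N as an extra input feature,
# so small- and large-N instances can be mixed in one training set.
# All sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class HeterogeneousEnergyNet(nn.Module):
    def __init__(self, grid_points=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grid_points + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, speckle, n_particles):
        # speckle: (batch, grid_points) disorder potential; n_particles: (batch, 1)
        x = torch.cat([speckle, n_particles], dim=1)
        return self.net(x).squeeze(-1)          # e.g. predicted ground-state energy

model = HeterogeneousEnergyNet()
pred = model(torch.randn(16, 64), torch.full((16, 1), 3.0))   # a batch of N = 3 instances
```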


2020 ◽  
Author(s):  
Michael Kern ◽  
Kevin Höhlein ◽  
Timothy Hewson ◽  
Rüdiger Westermann

Numerical weather prediction models with high resolution (of order kilometres or less) can deliver very accurate low-level winds. The problem is that one cannot afford to run simulations at very high resolution over global or other large domains for long periods, because the computational power needed is prohibitive.

Instead, we propose using neural networks to downscale low-resolution wind-field simulations (input) to high-resolution fields (targets), to try to match a high-resolution simulation. Based on short-range forecasts of wind fields (at the 100 m level) from the ECMWF ERA5 reanalysis, at 31 km resolution, and the HRES (deterministic) model version, at 9 km resolution, we explore two complementary approaches in an initial "proof-of-concept" study.

In a first step, we evaluate the ability of U-Net-type convolutional neural networks to learn a one-to-one mapping of low-resolution input data to high-resolution simulation results. By creating a compressed feature-space representation of the data, networks of this kind manage to encode important flow characteristics of the input fields and assimilate information from additional data sources. Next to wind vector fields, we use topographical information to inform the network, at low and high resolution, and include additional parameters that strongly influence wind-field prediction in simulations, such as vertical stability (via the simple, compact metric of boundary-layer height) and the land-sea mask. We thus infer weather-situation- and location-dependent wind structures that could not be retrieved otherwise.

In some situations, however, it will be inappropriate to deliver only a single estimate for the high-resolution wind field. Especially in regions where topographic complexity fosters the emergence of complex wind patterns, a variety of different high-resolution estimates may be equally compatible with the low-resolution input and with physical reasoning. In a second step, we therefore extend the learning task from optimizing deterministic one-to-one mappings to modelling the distribution of physically reasonable high-resolution wind-vector fields, conditioned on the given low-resolution input. Using the framework of conditional variational autoencoders, we realize a generative model, based on convolutional neural networks, which is able to learn the conditional distributions from data. Sampling multiple estimates of the high-resolution wind vector fields from the model enables us to explore multimodalities in the data and to infer uncertainties in the predictand.

In a future customer-oriented extension of this proof-of-concept work, we would envisage using a target resolution higher than 9 km, say in the 1-4 km range, to deliver much better representativeness for users. Ensembles of low-resolution input data could also be used to deliver an "ensemble of ensembles" as output, which could be condensed into a meaningful probabilistic format for users. The many exciting applications of this work (e.g. for wind power management) will be highlighted.
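As a hedged illustration of the first, deterministic step, the sketch below is a miniature U-Net-type network: low-resolution wind components stacked with auxiliary channels (topography, land-sea mask, boundary-layer height) go in as a multi-channel image, and a decoder with a skip connection predicts the high-resolution wind components. Channel counts, depth, and grid sizes are assumptions, not the configuration used in the study.

```python
# Compact U-Net-style sketch for wind-field downscaling with auxiliary inputs.
# The low-resolution fields are assumed to have been regridded to the target
# grid first; all layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    def __init__(self, in_ch=5, out_ch=2):      # e.g. u, v, topography, land-sea mask, BLH
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)                             # encoder features (kept as skip connection)
        d = self.up(self.down(e))                   # bottleneck + upsampling back to input grid
        return self.dec(torch.cat([e, d], dim=1))   # concatenate skip, predict u and v

fields = torch.randn(4, 5, 64, 64)                  # batch of stacked input channels
hires_wind = MiniUNet()(fields)                     # (4, 2, 64, 64) downscaled u, v estimates
```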


2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii VASYLIEV ◽  

The problem of applying neural networks to calculate the ratings used by banks when deciding whether or not to grant loans to borrowers is considered. The task is to determine the rating function of the borrower from a set of statistical data on the effectiveness of loans provided by the bank. When a regression model is used to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used, there is no need to specify the general form of the rating function. Instead, a certain neural network architecture is chosen and its parameters are calculated on the basis of the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to calculate a large number of parameters, and the absence of a universal algorithm for determining the optimal neural network architecture. As an example of the use of neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two hidden layers, containing three and two neurons respectively and using a sigmoid activation function, is used for the modeling. It is shown that the neural network allows the borrower's rating function to be reconstructed with quite acceptable accuracy.
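The abstract specifies the network shape (two hidden layers with three and two neurons, sigmoid activations); the sketch below builds such a network in PyTorch and fits it to synthetic data. The number of borrower features, the scalar output layer, and the training details are assumptions, since the article does not state them here.

```python
# Sketch of the small rating network described in the article: two hidden
# layers with three and two neurons and sigmoid activations, mapping borrower
# features to a rating. Input size, output layer, and training loop are
# illustrative assumptions.
import torch
import torch.nn as nn

rating_net = nn.Sequential(
    nn.Linear(4, 3),      # borrower features -> first hidden layer (3 neurons)
    nn.Sigmoid(),
    nn.Linear(3, 2),      # second hidden layer (2 neurons)
    nn.Sigmoid(),
    nn.Linear(2, 1),      # scalar rating
)

borrowers = torch.rand(100, 4)         # synthetic borrower feature vectors
target = torch.rand(100, 1)            # ratings from the known (model) rating function
optimizer = torch.optim.Adam(rating_net.parameters(), lr=1e-2)

for _ in range(500):                   # fit the network to the statistical data
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(rating_net(borrowers), target)
    loss.backward()
    optimizer.step()
```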


Author(s):  
Joseph Bethge ◽  
Christian Bartz ◽  
Haojin Yang ◽  
Ying Chen ◽  
Christoph Meinel

2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along the input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels: point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
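As a toy illustration of the graph view of a sketch (nodes are sampled stroke points, edges follow the stroke structure), the snippet below applies a single plain graph-convolution layer followed by a per-node classifier. It is not the SketchGNN static-dynamic branching architecture; the feature dimensions and class count are assumptions.

```python
# Minimal graph-convolution sketch for per-point labeling of a vector sketch.
# Toy illustration only: one graph-convolution layer over a chain graph of
# sampled stroke points, then per-node label logits.
import torch
import torch.nn as nn

class ToyGraphConv(nn.Module):
    def __init__(self, in_dim=2, hidden=32, n_classes=4):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, n_classes)

    def forward(self, x, adj):
        # adj: (N, N) row-normalized adjacency (with self-loops) of the stroke graph
        h = torch.relu(adj @ self.lin1(x))   # aggregate neighboring point features
        return self.lin2(adj @ h)            # per-node (per-point) label logits

points = torch.rand(100, 2)                  # 100 sampled points (x, y) along strokes
adj = torch.eye(100)
idx = torch.arange(99)
adj[idx, idx + 1] = 1.0                      # edges between consecutive points of a stroke
adj[idx + 1, idx] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)     # row-normalize
logits = ToyGraphConv()(points, adj)         # (100, n_classes)
```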

