Self-validating deep learning of continental hydrology through satellite gravimetry and altimetry

2020 ◽  
Author(s):  
Christopher Irrgang ◽  
Jan Saynisch-Wagner ◽  
Robert Dill ◽  
Eva Boergens ◽  
Maik Thomas

Space-borne observations of terrestrial water storage (TWS) are an essential ingredient for understanding the Earth's global water cycle, its susceptibility to climate change, and for risk assessments of ecosystems, agriculture, and water management. However, the complex distribution of water masses in rivers, lakes, or groundwater basins remains elusive in coarse-resolution gravimetry observations. We combine machine learning, numerical modeling, and satellite altimetry to build and train a downscaling neural network that recovers simulated TWS from synthetic space-borne gravity observations. The neural network is designed to adapt and validate its training progress by considering independent satellite altimetry records. We show that the neural network can accurately derive TWS anomalies in 2019 after being trained over the years 2003 to 2018. Specifically for validated regions in the Amazonas, we highlight that the neural network can outperform the numerical hydrology model used in the network training.

https://doi.org/10.1029/2020GL089258
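
A minimal sketch of the downscaling idea summarized above, under purely illustrative assumptions: a small convolutional network maps coarse, gravimetry-like TWS grids to a finer grid, and an independent (here synthetic) altimetry-derived series is used only to monitor training progress. The array shapes, the network layout, and the correlation-based check are assumptions for illustration, not the authors' architecture or validation scheme.

```python
# Illustrative sketch only, not the published network. All data are synthetic stand-ins.
import torch
import torch.nn as nn

class DownscalingCNN(nn.Module):
    """Maps a coarse TWS grid (1 channel) to a 4x finer grid via upsampling convolutions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic stand-ins: 192 monthly coarse grids (16x16) and matching fine grids (64x64).
coarse = torch.randn(192, 1, 16, 16)
fine = torch.randn(192, 1, 64, 64)
altimetry_proxy = torch.randn(192)  # independent validation series (e.g. river level anomaly)

model = DownscalingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    optimizer.zero_grad()
    pred = model(coarse)
    loss = loss_fn(pred, fine)
    loss.backward()
    optimizer.step()
    # "Self-validation" stand-in: correlate basin-mean predicted TWS with the altimetry proxy
    basin_mean = pred.detach().mean(dim=(1, 2, 3))
    corr = torch.corrcoef(torch.stack([basin_mean, altimetry_proxy]))[0, 1]
    print(f"epoch {epoch}: train MSE {loss.item():.3f}, altimetry corr {corr.item():+.2f}")
```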


2021 ◽  
Author(s):  
Mohammad J. Tourian ◽  
Omid Elmi ◽  
Yasin Shafaghi ◽  
Sajedeh Behnia ◽  
Peyman Saemian ◽  
...  

Abstract. Against the backdrop of global change, both in climate and in demography, there is a pressing need for monitoring the global water cycle. Publicly available global databases of in situ observations are very limited in their spatial and temporal coverage. Moreover, the acquisition of in situ data and their delivery to these databases have been in decline since the late 1970s, whether for economic or political reasons. Given the insufficient monitoring from in situ gauge networks, and with no outlook for improvement, spaceborne approaches have been under investigation for some years now. Satellite-based Earth observation, with its global coverage and homogeneous accuracy, has been demonstrated to be a potential alternative to in situ measurements. This paper presents HydroSat as a repository of global water cycle products from spaceborne geodetic sensors. HydroSat provides time series, together with their uncertainties, of water level from satellite altimetry, surface water extent from satellite imagery, terrestrial water storage anomalies from satellite gravimetry, lake and reservoir water storage anomalies from a combination of satellite altimetry and imagery, and river discharge from either satellite altimetry or imagery. These products can contribute to understanding the global water cycle within the Earth system in several ways. They can act as inputs to hydrological models, they can play a complementary role to current and future spaceborne observations, and they can define indicators of the past and future state of the global freshwater system. The repository is publicly available through http://hydrosat.gis.uni-stuttgart.de.
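
As one concrete example of how two of the listed products can be combined, lake and reservoir storage change is commonly derived from paired water-level (altimetry) and surface-extent (imagery) time series using a frustum approximation between consecutive epochs, ΔV = (h2 − h1)(A1 + A2 + √(A1·A2))/3. The sketch below illustrates that generic relation only; it is not HydroSat's processing code, and the sample numbers are assumed.

```python
import numpy as np

def storage_anomaly(level_m, area_km2):
    """Cumulative storage change (km^3) from water level (m) and surface extent (km^2),
    using the frustum (truncated-pyramid) volume formula between consecutive epochs."""
    h = np.asarray(level_m, dtype=float) / 1000.0   # m -> km
    a = np.asarray(area_km2, dtype=float)
    dv = (h[1:] - h[:-1]) / 3.0 * (a[:-1] + a[1:] + np.sqrt(a[:-1] * a[1:]))
    return np.concatenate([[0.0], np.cumsum(dv)])

# Hypothetical monthly samples for a small reservoir
level = [102.4, 103.1, 103.8, 103.2, 102.7]    # water level, m
area = [350.0, 362.0, 371.0, 365.0, 358.0]     # surface water extent, km^2
print(storage_anomaly(level, area))            # km^3, relative to the first epoch
```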


2012 ◽  
Vol 605-607 ◽  
pp. 2175-2178
Author(s):  
Xiao Qin Wu

To overcome the disadvantage that the structure and parameters of neural networks are often chosen stochastically or by experience, an improved BP neural network training algorithm based on a genetic algorithm is proposed. In this paper, a genetic algorithm combined with simulated annealing is used to optimize the neural network: the annealing schedule scales the fitness function and selects the appropriate genetic operation according to the expected value during optimization, and the weights and thresholds of the network are optimized. The method is applied to a stock prediction system. Experimental results show that the proposed approach has high accuracy, strong stability, and improved confidence.
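
A compact, hypothetical illustration of the hybrid scheme described above (not the paper's implementation): the weights and thresholds of a small network are encoded as a flat chromosome, a genetic algorithm evolves the population, and a simulated-annealing temperature both scales the fitness function and governs the acceptance of offspring. The toy data, network size, and all GA parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for stock features/targets (assumed, for illustration).
X = rng.normal(size=(200, 4))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

N_HID = 8
N_W = 4 * N_HID + N_HID + N_HID + 1    # weights + thresholds of a 4-8-1 network

def unpack(c):
    i = 0
    w1 = c[i:i + 4 * N_HID].reshape(4, N_HID); i += 4 * N_HID
    b1 = c[i:i + N_HID]; i += N_HID
    w2 = c[i:i + N_HID]; i += N_HID
    return w1, b1, w2, c[i]

def mse(c):
    w1, b1, w2, b2 = unpack(c)
    pred = np.tanh(X @ w1 + b1) @ w2 + b2
    return np.mean((pred - y) ** 2)

pop = rng.normal(scale=0.5, size=(60, N_W))
temp = 1.0
for gen in range(200):
    errors = np.array([mse(c) for c in pop])
    # Annealing-scaled fitness: selection pressure sharpens as the temperature drops
    fitness = np.exp(-(errors - errors.min()) / temp)
    probs = fitness / fitness.sum()
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    # One-point crossover and Gaussian mutation
    cut = rng.integers(1, N_W, size=len(pop) // 2)
    children = parents.copy()
    for k, c in enumerate(cut):
        a, b = 2 * k, 2 * k + 1
        children[a, c:], children[b, c:] = parents[b, c:].copy(), parents[a, c:].copy()
    children += rng.normal(scale=0.1 * temp, size=children.shape)
    # Metropolis-style acceptance of offspring against the current population
    child_err = np.array([mse(c) for c in children])
    metro = np.exp(np.minimum((errors - child_err) / temp, 0.0))
    accept = (child_err < errors) | (rng.random(len(pop)) < metro)
    pop[accept] = children[accept]
    temp *= 0.98

best = pop[np.argmin([mse(c) for c in pop])]
print("final MSE of best chromosome:", mse(best))
```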


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Onesimo Meza-Cruz ◽  
Isaac Pilatowsky ◽  
Agustín Pérez-Ramírez ◽  
Carlos Rivera-Blanco ◽  
Youness El Hamzaoui ◽  
...  

The aim of this work is to present a model for the heat transfer, desorbed refrigerant, and pressure of an intermittent solar cooling system's thermochemical reactor, based on backpropagation neural networks and mathematical symmetry groups. To achieve this, a reactor was designed and built based on the BaCl2-NH3 reaction. Experimental data were collected from this reactor, in which barium chloride was used as the solid absorbent and ammonia as the refrigerant. The neural network was trained using the Levenberg–Marquardt algorithm. The correlation coefficient between the experimental data and the data simulated by the neural network was r = 0.9957. In the neural network's sensitivity analysis, it was found that the input variables, the reactor's heating temperature and the sorption time, influence the network's learning by 35% and 20%, respectively. It was also found that, by applying permutations to the experimental data and using multibase mathematical symmetry groups, the neural network training algorithm converges faster.
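
The Levenberg–Marquardt training mentioned above can be reproduced in miniature by treating the weights of a one-hidden-layer network as parameters of a nonlinear least-squares problem and calling SciPy's `least_squares(..., method="lm")`. The synthetic inputs (heating temperature and sorption time) and the output are placeholders, not the experimental data set.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Placeholder data: heating temperature (C) and sorption time (min) -> desorbed mass (kg).
T = rng.uniform(60.0, 95.0, size=120)
t = rng.uniform(10.0, 180.0, size=120)
X = np.column_stack([(T - 60.0) / 35.0, t / 180.0])      # features scaled to [0, 1]
y = 0.8 / (1.0 + np.exp(-6.0 * (X[:, 0] - 0.4))) * X[:, 1] + 0.02 * rng.normal(size=120)

N_HID = 6
def unpack(p):
    w1 = p[:2 * N_HID].reshape(2, N_HID)
    b1 = p[2 * N_HID:3 * N_HID]
    w2 = p[3 * N_HID:4 * N_HID]
    return w1, b1, w2, p[4 * N_HID]

def residuals(p):
    w1, b1, w2, b2 = unpack(p)
    pred = np.tanh(X @ w1 + b1) @ w2 + b2
    return pred - y

p0 = 0.1 * rng.normal(size=4 * N_HID + 1)
fit = least_squares(residuals, p0, method="lm")          # Levenberg-Marquardt
pred = residuals(fit.x) + y
r = np.corrcoef(pred, y)[0, 1]
print(f"correlation between fitted and placeholder data: r = {r:.4f}")
```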


2020 ◽  
Author(s):  
John Reager ◽  
Madeleine Pascolini-Campbell

A frontier in hydrology lies in understanding the potential impacts of a warming planet on water cycle variability from regional to global scales. The fluxes that constitute the terrestrial water cycle vary in how readily they can be observed, with evapotranspiration (ET) generally being the most challenging variable to quantify directly. Because mass conservation can be applied to "close" a water flux budget across scales, mass change measurements present the best opportunity to quantify evapotranspiration, and changes in evapotranspiration, at larger scales ranging from basins to the globe. Here we present work on: (1) using GRACE/GFO observations to estimate basin-scale ET in the continental United States as a target for validation and error analysis of up-scaled ET products from other sources, and (2) using GRACE/GFO observations to estimate ET globally over the full joint record (2003-2020) in order to quantify observed changes in the global water cycle. We find that because errors in mass change measurements inherently decrease with larger study domains, GRACE/GFO measurements offer a very clear and robust uncertainty quantification approach for large-scale ET monitoring. We also find a clear and statistically significant signal in global land ET over the record length that indicates changes in the global water cycle consistent with our understanding of climate change. These methods and results will be presented and discussed.
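
The basin-scale budget behind the GRACE/GFO approach is the water-balance closure ET = P − Q − dS/dt, where dS/dt is the storage change seen by the gravity mission. A minimal monthly-budget sketch with assumed example numbers follows; none of the values are from the study.

```python
import numpy as np

def basin_et(precip_mm, runoff_mm, tws_mm):
    """Monthly ET (mm/month) from the water balance ET = P - Q - dS/dt.
    dS/dt is approximated by a centered difference of the TWS anomaly series."""
    p = np.asarray(precip_mm, dtype=float)
    q = np.asarray(runoff_mm, dtype=float)
    s = np.asarray(tws_mm, dtype=float)
    ds_dt = np.gradient(s)            # mm/month, centered differences
    return p - q - ds_dt

# Assumed example values for a humid basin (mm/month), not observational data.
precip = [180, 210, 250, 230, 190, 150]
runoff = [90, 110, 130, 125, 105, 85]
tws = [40, 55, 80, 85, 70, 50]        # GRACE-like storage anomaly
print(basin_et(precip, runoff, tws))
```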


2006 ◽  
Vol 23 (1) ◽  
pp. 80-89 ◽  
Author(s):  
Amauri P. Oliveira ◽  
Jacyra Soares ◽  
Marija Z. Božnar ◽  
Primož Mlakar ◽  
João F. Escobedo

Abstract This work describes an application of a multilayer perceptron neural network technique to correct dome emission effects on longwave atmospheric radiation measurements carried out with an Eppley Precision Infrared Radiometer (PIR) pyrgeometer. It is shown that approximately 7 months of measurements of dome and case temperatures and of meteorological variables available at regular surface stations (global solar radiation, air temperature, and air relative humidity) are enough to train the neural network algorithm and correct the observed longwave radiation for dome temperature effects at surface stations with climates similar to that of the city of São Paulo, Brazil. The network was trained using data from 15 October 2003 to 7 January 2004 and verified using data, not seen during the training period, from 8 January to 30 April 2004. The longwave radiation values generated by the neural network technique were very similar to the values obtained by Fairall et al., assumed here as the reference approach for correcting dome emission effects in PIR pyrgeometers. Compared with the empirical approach, the neural network technique is less restricted by sensor type and time of day (it allows nighttime corrections).
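
A hypothetical, much-simplified version of such a correction scheme: a multilayer perceptron is trained to predict the dome-emission term from dome and case temperatures plus routine meteorological variables, and the corrected flux is the raw PIR reading minus the predicted term. The synthetic data, the k·σ·(T_dome⁴ − T_case⁴) stand-in for the Fairall et al. reference correction, and the network size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000

# Synthetic stand-ins for the predictors named in the abstract (assumed ranges).
dome_T = rng.uniform(280.0, 310.0, n)         # dome temperature, K
case_T = dome_T - rng.uniform(0.0, 3.0, n)    # case temperature, K
solar = rng.uniform(0.0, 1000.0, n)           # global solar radiation, W m-2
air_T = rng.uniform(285.0, 305.0, n)          # air temperature, K
rh = rng.uniform(30.0, 95.0, n)               # relative humidity, %

# Synthetic "reference" dome-emission term, standing in for the Fairall et al. correction.
sigma = 5.67e-8
correction = 3.5 * sigma * (dome_T**4 - case_T**4) + 0.1 * rng.normal(size=n)

X = np.column_stack([dome_T, case_T, solar, air_T, rh])
X_train, X_test, y_train, y_test = train_test_split(X, correction, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0))
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))

# Corrected flux = raw pyrgeometer reading minus the predicted dome-emission term
raw_lw = 380.0                                 # example raw longwave reading, W m-2
predicted = model.predict(X_test[:1])[0]
print("corrected longwave:", raw_lw - predicted, "W m-2")
```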


2021 ◽  
Vol 4 (135) ◽  
pp. 12-22
Author(s):  
Vladimir Gerasimov ◽  
Nadija Karpenko ◽  
Denys Druzhynin

The goal of this paper is to create a training model based on real raw noisy data and to train a neural network to determine the behavior of the fuel level, namely the time and volume of vehicle refueling and of fuel consumption / excessive consumption / drainage.

Fuel control and metering systems use various algorithms and data processing methods to remove noise. Some systems apply primary filtering by excluding readings that are out of range, sharp jumps, and deviations, and by averaging over a sliding window. Research is also being carried out on filters more complex than simple averaging, for example the Kalman filter.

When the fuel level is measured with various fuel level sensors, the data are influenced by many external factors that can interfere with the measurement and distort the real fuel level. Since these interferences are random and have differing structure, it is very difficult to remove them completely using classical noise-suppression algorithms. We therefore use artificial intelligence, namely a neural network, to find patterns, detect noise, and correct distorted data. To correct distorted data, one must first determine which data are distorted, i.e., classify the data.

In the course of the work, the raw fuel-level data were transformed for use in the neural network training model. To describe the behavior of the fuel level, four classes are used: fuel consumption is observed, the vehicle is being refueled, the fuel level does not change (the vehicle is idle), and the data are distorted by noise. The DeepLearning4j library was used to load the training data and train the neural network. A multilayer neural network model is used, namely a three-layer network, together with training parameters provided by DeepLearning4j that were selected experimentally.

After training, the neural network was applied to test data, from which a confusion matrix and evaluation metrics were obtained.

In conclusion, finding a good model takes many ideas and much experimentation, and the raw data must be correctly processed and transformed to obtain valid training data. So far, a neural network has been trained to determine the state of the fuel level at a point in time and to classify its behavior into the four main labels (classes). Although the error in determining the behavior of the fuel level has not been reduced to zero, the states of the neural network have been saved, so that in the future the network can be retrained and evolved to obtain better results.
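
As a language-agnostic illustration of the classification task described above (the authors used DeepLearning4j in Java), the sketch below labels fixed-length windows of a fuel-level series with the four classes from the text: consumption, refueling, idle, and noise-distorted. The synthetic window generator, window length, and classifier choice are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
WIN = 30                                   # samples per window (assumed)

def make_window(label):
    """Generate one synthetic fuel-level window for the given class label."""
    t = np.linspace(0.0, 1.0, WIN)
    if label == 0:
        level = 60.0 - 8.0 * t                                  # consumption: slow decrease
    elif label == 1:
        level = 40.0 + 35.0 * (t > 0.5)                         # refueling: step increase
    elif label == 2:
        level = np.full(WIN, 55.0)                              # idle: constant level
    else:
        level = 55.0 + rng.normal(scale=6.0, size=WIN)          # distorted by noise
    return level + rng.normal(scale=0.5, size=WIN)              # sensor jitter on top

labels = rng.integers(0, 4, size=3000)
X = np.array([make_window(c) for c in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
print(confusion_matrix(y_test, clf.predict(X_test)))   # rows: true class, columns: predicted
```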


2020 ◽  
Vol 2 (1) ◽  
pp. 29-36
Author(s):  
M. I. Zghoba ◽  
Yu. I. Hrytsiuk

The peculiarities of neural network training for forecasting taxi passenger demand using graphics processing units are considered, which made it possible to speed up the training procedure for different sets of input data, hardware configurations, and computing power. Taxi services are becoming accessible to an ever wider range of people. The most important task for any transportation company and taxi driver is to minimize the waiting time for new orders and the distance between drivers and passengers when an order is received. Understanding and assessing the geographical passenger demand, which depends on many factors, is crucial to achieving this goal. This paper describes an example of neural network training for predicting taxi passenger demand and shows the importance of a large input dataset for the accuracy of the neural network. Since training a neural network is a lengthy process, parallel training was used to speed it up. The neural network for forecasting taxi passenger demand was trained using different hardware configurations, such as one CPU, one GPU, and two GPUs. The training times for one epoch were compared across these configurations, and the impact of the different hardware configurations on training time was analyzed. The network was trained using a dataset containing 4.5 million trips within one city. The results of this study show that training with GPU accelerators does not necessarily improve the training time. The training time depends on many factors, such as the input dataset size, the splitting of the entire dataset into smaller subsets, and the hardware and power characteristics.
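
A hedged sketch of the kind of timing comparison described above: the same small network and data are trained for one epoch on each available device and wall-clock times are compared (a genuinely multi-GPU data-parallel run would need additional machinery such as DistributedDataParallel, omitted here). The toy demand-forecasting network and dataset size are assumptions, not the 4.5-million-trip dataset from the study.

```python
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for aggregated trip features -> demand counts (assumed shapes).
X = torch.randn(200_000, 16)
y = torch.randn(200_000, 1)

def one_epoch(device):
    """Train a small regression network for one epoch and return the wall-clock time."""
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    loader = DataLoader(TensorDataset(X, y), batch_size=1024, shuffle=True)
    start = time.perf_counter()
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    if device.type == "cuda":
        torch.cuda.synchronize(device)
    return time.perf_counter() - start

devices = [torch.device("cpu")]
devices += [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
for dev in devices:
    print(f"{dev}: one epoch took {one_epoch(dev):.1f} s")
```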


Author(s):  
Fei Long ◽  
Fen Liu ◽  
Xiangli Peng ◽  
Zheng Yu ◽  
Huan Xu ◽  
...  

To improve the power quality disturbance recognition ability of neural networks, this paper studies a deep-learning-based method for recognizing and classifying power quality disturbances: a power quality disturbance model is constructed and a training set is generated; a deep neural network is built; the training set is used to train the deep neural network; and the performance of the network is verified. The results show that, with noise of 20 dB to 50 dB randomly added to the training set, the network reaches a recognition rate of more than 99% even under the most severe 20 dB noise condition, which is not achievable with traditional methods. Conclusion: the deep-learning-based power quality disturbance identification and classification method overcomes the drawbacks of manual feature selection and poor robustness, which helps to identify the category of power quality problems more accurately and quickly.
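
The noise levels quoted above are signal-to-noise ratios in decibels. A small, hypothetical sketch of that augmentation step follows: white Gaussian noise is added to a synthetic disturbance waveform at a prescribed SNR before such signals would be fed to a deep classifier. The voltage-sag waveform model and its parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def add_noise(signal, snr_db):
    """Add white Gaussian noise so the result has the requested signal-to-noise ratio (dB)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(scale=np.sqrt(p_noise), size=signal.shape)

# Synthetic voltage-sag disturbance on a 50 Hz waveform (assumed parameters).
fs, f0 = 3200, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
sag = np.where((t > 0.06) & (t < 0.14), 0.6, 1.0)      # 40 % sag lasting four cycles
clean = sag * np.sin(2.0 * np.pi * f0 * t)

for snr in (50, 35, 20):                               # the 20-50 dB range from the abstract
    noisy = add_noise(clean, snr)
    achieved = 10.0 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
    print(f"target {snr} dB, achieved {achieved:.1f} dB")
```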

