RadNet 1.0: Exploring deep learning architectures for longwave radiative transfer

2020 ◽  
Vol 13 (9) ◽  
pp. 4399-4412 ◽  
Author(s):  
Ying Liu ◽  
Rodrigo Caballero ◽  
Joy Merwin Monteiro

Abstract. Simulating global and regional climate at high resolution is essential to study the effects of climate change and capture extreme events affecting human populations. To achieve this goal, the scalability of climate models and the efficiency of individual model components are both important. Radiative transfer is among the most computationally expensive components in a typical climate model. Here we attempt to model this component using a neural network. We aim to study the feasibility of replacing an explicit, physics-based computation of longwave radiative transfer by a neural network emulator and to assess the resultant performance gains. We compare multiple neural-network architectures, including a convolutional neural network, and our results suggest that the performance loss from the use of convolutional networks is not offset by gains in accuracy. We train the networks with and without noise added to the input profiles and find that adding noise improves the ability of the networks to generalise beyond the training set. Prediction of radiative heating rates using our neural network models achieves up to 370× speedup on a GTX 1080 GPU setup and 11× speedup on a Xeon CPU setup compared to a state-of-the-art radiative transfer library running on the same Xeon CPU. Furthermore, our neural network models yield less than 0.1 K d−1 mean squared error across all pressure levels. Upon introducing this component into a single-column model, we find that the time evolution of the temperature and humidity profiles is physically reasonable, though the model is conservative in its prediction of heating rates in regions where the optical depth changes quickly. Differences exist in the equilibrium climate simulated when using the neural network, which are attributed to small systematic errors that accumulate over time. 
Thus, we find that the accuracy of the neural network in the "offline" mode does not reflect its performance when coupled with other components.
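The noise-augmentation step described above can be sketched in a few lines: the input profiles are perturbed before training so the emulator also sees inputs slightly off the training manifold. This is a minimal sketch; the relative noise amplitude `rel_sigma` is a hypothetical choice, not the value used in the paper.

```python
import numpy as np

def add_input_noise(profiles, rel_sigma=0.01, rng=None):
    """Perturb input profiles (e.g. temperature and humidity columns)
    with multiplicative Gaussian noise as a data-augmentation step.
    `rel_sigma` is an illustrative amplitude, not the paper's value."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, rel_sigma, size=profiles.shape)
    return profiles * (1.0 + noise)
```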



Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2601
Author(s):  
Seung Chan Jo ◽  
Young Gyu Jin ◽  
Yong Tae Yoon ◽  
Ho Chan Kim

Variability, intermittency, and limited controllability are inherent characteristics of photovoltaic (PV) generation that result in inaccurate solutions to scheduling problems and instability of the power grid. As the penetration level of PV generation increases, it becomes more important to mitigate these problems by improving forecasting accuracy. One way to improve forecasting performance is to include a seasonal component. Thus, this study proposes using information on extraterrestrial radiation (ETR), the solar radiation outside the atmosphere, in neural network models for day-ahead PV generation forecasting. Specifically, five methods for integrating the ETR into the neural network models are presented: (1) division preprocessing, (2) multiplication preprocessing, (3) replacement of an existing input, (4) inclusion as an additional input, and (5) inclusion as an intermediate target. The methods were tested on two Australian datasets using four neural network models: a multilayer perceptron and three recurrent neural network (RNN)-based models, namely a vanilla RNN, long short-term memory (LSTM), and a gated recurrent unit (GRU). It was found that, among the integration methods, including the ETR as the intermediate target improved the mean squared error by 4.1% on average, and by up to 12.28%, in the RNN-based models. These results verify that integrating the ETR into neural-network-based PV forecasting models can improve forecasting performance.
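As an illustration, the first integration method (division preprocessing) amounts to normalising PV generation by the ETR so the network learns a quasi-stationary ratio. This is a minimal sketch of that idea, not the authors' code; the `eps` guard for night-time zeros is an assumption.

```python
import numpy as np

def etr_divide(pv, etr, eps=1e-6):
    # Division preprocessing: scale PV generation by extraterrestrial
    # radiation; `eps` guards against night-time ETR values of zero.
    return pv / np.maximum(etr, eps)

def etr_restore(ratio, etr, eps=1e-6):
    # Invert the preprocessing after the network predicts the ratio.
    return ratio * np.maximum(etr, eps)
```

The same pair of functions brackets the network: `etr_divide` transforms the training targets, and `etr_restore` maps the network's output back to power units.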


2021 ◽  
Vol 9 (5) ◽  
pp. 524
Author(s):  
Alawi Alqushaibi ◽  
Said Jadid Abdulkadir ◽  
Helmi Md Rais ◽  
Qasem Al-Tashi ◽  
Mohammed G. Ragab ◽  
...  

Constructing offshore and coastal structures with the highest level of stability and lowest cost, while preventing failure risk, is the plan that stakeholders seek to achieve. The successful construction of such projects mostly relies on well-analyzed and well-modeled metocean data that yield high prediction accuracy for ocean environmental conditions, including waves and wind. Over the past decades, planning and designing coastal projects have relied on traditional static analysis, which requires tremendous effort and high-cost resources to validate the data and determine the transformation of metocean conditions. Moreover, wind plays an essential role in the oceanic atmosphere and contributes to the formation of waves. This paper proposes an enhanced weight-optimized neural network based on the Sine Cosine Algorithm (SCA) to accurately predict wave height. Three neural network models, Long Short-Term Memory (LSTM), Vanilla Recurrent Neural Network (VRNN), and Gated Recurrent Unit (GRU), are enhanced: instead of random weight initialization, SCA generates weight values that are adapted to the nature of the data and the model structure. In addition, a grid search (GS) is used to automatically find the best model configurations. To validate the performance of the proposed models, metocean datasets have been used, with the original LSTM, VRNN, and GRU implemented as benchmark models. The results show that the optimized models outperform the three benchmark models in terms of mean squared error (MSE), root mean square error (RMSE), and mean absolute error (MAE).
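A minimal sketch of how the SCA can search a weight vector: candidate solutions move toward the best-so-far position along sine- and cosine-shaped trajectories whose amplitude decays over iterations. The loss function, bounds, and hyperparameters below are placeholders, not the paper's setup.

```python
import numpy as np

def sca_minimize(loss, dim, n_agents=20, iters=200, a=2.0,
                 bounds=(-1.0, 1.0), seed=0):
    """Sine Cosine Algorithm: minimise `loss` over a `dim`-dimensional
    vector (e.g. a flattened set of initial network weights)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_agents, dim))
    best = X[np.argmin([loss(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * a / iters  # amplitude decays linearly to zero
        for i in range(n_agents):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.uniform(size=dim)
            # Move toward the best position on sine or cosine paths.
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lo, hi)
            if loss(X[i]) < loss(best):
                best = X[i].copy()
    return best
```

In the weight-initialization setting, `loss` would evaluate the network's training error for a given flattened weight vector, and the returned `best` would seed the weights before gradient training.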


2020 ◽  
Vol 5 ◽  
pp. 140-147 ◽  
Author(s):  
T.N. Aleksandrova ◽  
E.K. Ushakov ◽  
A.V. Orlova ◽  
...  

A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems for stabilizing the chip-formation process during cutting and for diagnosing cutting-tool wear are developed.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4242
Author(s):  
Fausto Valencia ◽  
Hugo Arcos ◽  
Franklin Quilumba

The purpose of this research is the evaluation of artificial neural network models for predicting the stresses in a 400 MVA power transformer winding conductor caused by the circulation of fault currents. The models were compared considering the behavior of the training, validation, and test errors. Different combinations of hyperparameters were analyzed based on variations of architectures, optimizers, and activation functions. The data for the process were created from finite element simulations performed in the FEMM software, and the artificial neural network was designed using the Keras framework. As a result, a model with one hidden layer, the Adam optimizer, and the ReLU activation function was the best-suited architecture for the problem at hand. The final artificial neural network model's predictions were compared with the finite element method results, showing good agreement but with a much shorter solution time.
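At inference time, the selected architecture (one hidden layer with ReLU) reduces to an affine map, a ReLU, and a second affine map. A minimal numpy sketch of that forward pass follows; the layer widths and weights are placeholders, not the trained Keras model.

```python
import numpy as np

def winding_stress_mlp(x, W1, b1, W2, b2):
    # One-hidden-layer MLP matching the best-suited architecture
    # reported; the weights here are illustrative, not trained values.
    h = np.maximum(0.0, x @ W1 + b1)   # hidden layer, ReLU activation
    return h @ W2 + b2                 # linear output: predicted stress
```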


2021 ◽  
Vol 11 (3) ◽  
pp. 908
Author(s):  
Jie Zeng ◽  
Panagiotis G. Asteris ◽  
Anna P. Mamou ◽  
Ahmed Salih Mohammed ◽  
Emmanuil A. Golias ◽  
...  

Buried pipes are extensively used for oil transportation from offshore platforms. Under unfavorable loading combinations, the pipe’s uplift resistance may be exceeded, which may result in excessive deformations and significant disruptions. This paper presents findings from a series of small-scale tests performed on pipes buried in geogrid-reinforced sands, with the measured peak uplift resistance being used to calibrate advanced numerical models employing neural networks. Multilayer perceptron (MLP) and Radial Basis Function (RBF) primary structure types have been used to train two neural network models, which were then further developed using bagging and boosting ensemble techniques. Correlation coefficients in excess of 0.954 between the measured and predicted peak uplift resistance have been achieved. The results show that the design of pipelines can be significantly improved using the proposed novel, reliable and robust soft computing models.
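The bagging step mentioned above can be sketched generically: each base model is trained on a bootstrap resample of the data, and the ensemble prediction is the average over models. The base learner in the usage below (a least-squares fit) is a stand-in for the MLP/RBF networks, not the authors' implementation.

```python
import numpy as np

def bagged_predict(train_fn, predict_fn, X, y, X_new, n_models=10, seed=0):
    # Bagging: train each base model on a bootstrap resample of the
    # training data and average the resulting predictions.
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        model = train_fn(X[idx], y[idx])
        preds.append(predict_fn(model, X_new))
    return np.mean(preds, axis=0)
```

Averaging over bootstrap resamples mainly reduces the variance of the base learner, which is why it pairs well with flexible models such as neural networks.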

