RadNet 1.0: exploring deep learning architectures for longwave radiative transfer

2020 ◽  
Vol 13 (9) ◽  
pp. 4399-4412 ◽  
Author(s):  
Ying Liu ◽  
Rodrigo Caballero ◽  
Joy Merwin Monteiro

Abstract. Simulating global and regional climate at high resolution is essential to study the effects of climate change and capture extreme events affecting human populations. To achieve this goal, the scalability of climate models and the efficiency of individual model components are both important. Radiative transfer is among the most computationally expensive components in a typical climate model. Here we attempt to model this component using a neural network. We aim to study the feasibility of replacing an explicit, physics-based computation of longwave radiative transfer with a neural network emulator and to assess the resultant performance gains. We compare multiple neural-network architectures, including a convolutional neural network, and our results suggest that the performance loss from the use of conventional convolutional networks is not offset by gains in accuracy. We train the networks with and without noise added to the input profiles and find that adding noise improves the ability of the networks to generalise beyond the training set. Prediction of radiative heating rates using our neural network models achieves up to 370× speedup on a GTX 1080 GPU setup and 11× speedup on a Xeon CPU setup compared to a state-of-the-art radiative transfer library running on the same Xeon CPU. Furthermore, our neural network models yield less than 0.1 K d−1 mean squared error across all pressure levels. Upon introducing this component into a single-column model, we find that the time evolution of the temperature and humidity profiles is physically reasonable, though the model is conservative in its prediction of heating rates in regions where the optical depth changes quickly. Differences exist in the equilibrium climate simulated when using the neural network, which are attributed to small systematic errors that accumulate over time.
Thus, we find that the accuracy of the neural network in the “offline” mode does not reflect its performance when coupled with other components.
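The training-time noise injection credited above with better generalisation can be sketched as follows; the profile values, the `sigma` setting, and the function name are illustrative stand-ins, not taken from the paper:

```python
import random

def add_input_noise(profile, sigma=0.5, seed=None):
    """Perturb each level of an input profile (e.g. temperature at each
    pressure level) with Gaussian noise before feeding it to the network.
    This is generic noise augmentation; sigma here is illustrative,
    not the paper's setting."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in profile]

# Toy 5-level temperature profile (K), perturbed reproducibly.
profile = [210.0, 230.0, 255.0, 270.0, 288.0]
noisy = add_input_noise(profile, sigma=0.5, seed=42)
```

In practice such perturbations are drawn fresh on every training epoch, so the network never sees exactly the same profile twice.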


Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2601
Author(s):  
Seung Chan Jo ◽  
Young Gyu Jin ◽  
Yong Tae Yoon ◽  
Ho Chan Kim

Variability, intermittency, and limited controllability are inherent characteristics of photovoltaic (PV) generation that result in inaccurate solutions to scheduling problems and the instability of the power grid. As the penetration level of PV generation increases, it becomes more important to mitigate these problems by improving forecasting accuracy. One way to improve forecasting performance is to include a seasonal component. Thus, this study proposes using information on extraterrestrial radiation (ETR), the solar radiation outside the atmosphere, in neural network models for day-ahead PV generation forecasting. Specifically, five methods for integrating the ETR into the neural network models are presented: (1) division preprocessing, (2) multiplication preprocessing, (3) replacement of an existing input, (4) inclusion as an additional input, and (5) inclusion as an intermediate target. The methods were tested on two datasets from Australia using four neural network models: a multilayer perceptron and three recurrent neural network (RNN)-based models, namely a vanilla RNN, long short-term memory, and a gated recurrent unit. It was found that, among the integration methods, including the ETR as the intermediate target improved the mean squared error by 4.1% on average, and by 12.28% at most, in the RNN-based models. These results verify that integrating ETR into neural-network-based PV forecasting models can improve forecasting performance.
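Method (1), division preprocessing, can be sketched in a few lines; the solar-constant value and the omission of the Earth–Sun distance correction are simplifying assumptions for illustration, not the study's exact formulation:

```python
SOLAR_CONSTANT = 1361.0  # W/m^2, approximate mean value

def extraterrestrial_radiation(cos_zenith):
    """Horizontal extraterrestrial radiation for a given cosine of the
    solar zenith angle (Earth-Sun distance correction omitted for brevity)."""
    return SOLAR_CONSTANT * max(cos_zenith, 0.0)

def division_preprocess(pv_output, cos_zenith, eps=1e-6):
    """Divide the PV series by ETR so the network learns a quasi-stationary
    'clearness' signal instead of the strong seasonal/diurnal cycle."""
    etr = extraterrestrial_radiation(cos_zenith)
    return pv_output / (etr + eps)

# At solar noon with the sun overhead, 680.5 W/m^2 of PV-plane output
# corresponds to a clearness ratio of about 0.5.
clearness = division_preprocess(680.5, cos_zenith=1.0)
```

The forecast network is then trained on the clearness series, and its predictions are multiplied back by ETR to recover a PV forecast.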


A series of neural network models used in the development of an aggregated digital twin of equipment as a cyber-physical system is presented. The twins of machining accuracy, chip formation, and tool wear are examined in detail. On their basis, systems are developed for stabilizing the chip formation process during cutting and diagnosing cutting tool wear.
Keywords: cyber-physical system; neural network model of equipment; big data; digital twin of chip formation; digital twin of tool wear; digital twin of nanostructured coating choice


2021 ◽  
Vol 12 (6) ◽  
pp. 1-21
Author(s):  
Jayant Gupta ◽  
Carl Molnar ◽  
Yiqun Xie ◽  
Joe Knight ◽  
Shashi Shekhar

Spatial variability is a prominent feature of various geographic phenomena such as climatic zones, USDA plant hardiness zones, and terrestrial habitat types (e.g., forest, grasslands, wetlands, and deserts). However, current deep learning methods follow a spatial-one-size-fits-all (OSFA) approach, training single deep neural network models that do not account for spatial variability. Quantification of spatial variability can be challenging due to the influence of many geophysical factors. In preliminary work, we proposed a spatial-variability-aware neural network (SVANN-I, formerly called SVANN) approach in which the weights are a function of location but the neural network architecture is location independent. In this work, we explore a more flexible SVANN-E approach in which the neural network architecture varies across geographic locations. In addition, we provide a taxonomy of SVANN types and a physics-inspired interpretation model. Experiments with aerial-imagery-based wetland mapping show that SVANN-I outperforms OSFA and that SVANN-E performs best of all.
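The SVANN-I idea of location-dependent weights with a shared architecture can be caricatured with a grid partition and one parameter per zone; the grid step and the one-parameter "model" are drastic simplifications for illustration, not the paper's method:

```python
def zone_of(lat, lon, lat_step=10.0, lon_step=10.0):
    """Map a location to a discrete spatial zone (a toy stand-in for
    whatever spatial partition SVANN would actually use)."""
    return (int(lat // lat_step), int(lon // lon_step))

class SVANNLike:
    """SVANN-I flavour: one set of weights per zone, identical
    architecture everywhere (here a single slope parameter)."""
    def __init__(self):
        self.weights = {}  # zone -> fitted parameter

    def fit_zone(self, lat, lon, xs, ys):
        # Least-squares slope through the origin for this zone's data.
        z = zone_of(lat, lon)
        self.weights[z] = (sum(x * y for x, y in zip(xs, ys))
                           / sum(x * x for x in xs))

    def predict(self, lat, lon, x):
        return self.weights[zone_of(lat, lon)] * x

model = SVANNLike()
model.fit_zone(45.0, -93.0, xs=[1.0, 2.0, 3.0], ys=[2.0, 4.0, 6.0])
pred = model.predict(45.0, -93.0, 5.0)
```

SVANN-E would additionally let the architecture itself (not just the fitted parameters) differ between zones.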


2018 ◽  
Vol 8 (8) ◽  
pp. 1290 ◽  
Author(s):  
Beata Mrugalska

Increasing expectations of industrial system reliability require the development of more effective and robust fault diagnosis methods. The paper presents a framework for improving the quality of neural models applied for fault detection purposes. In particular, the proposed approach starts with an adaptation of the modified quasi-outer-bounding algorithm to non-linear neural network models. Subsequently, its convergence is proven using the quadratic boundedness paradigm. The obtained algorithm is then equipped with a sequential D-optimum experimental design mechanism allowing a gradual reduction of the neural model uncertainty. Finally, a robust fault detection framework is proposed that uses the neural network uncertainty description as an adaptive threshold.
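The adaptive-threshold idea at the end of the abstract can be sketched as follows; the scaling factor `k` and the scalar uncertainty are illustrative assumptions, not the paper's uncertainty description:

```python
def detect_fault(measured, predicted, uncertainty, k=3.0):
    """Flag a fault only when the residual between measurement and
    model prediction leaves the band predicted +/- k * uncertainty.
    A wider band where the model is less certain avoids false alarms
    caused by model error rather than actual faults."""
    residual = measured - predicted
    threshold = k * uncertainty
    return abs(residual) > threshold

ok = detect_fault(10.0, 9.0, 1.0)     # residual 1.0 inside the 3.0 band
fault = detect_fault(10.0, 5.0, 1.0)  # residual 5.0 outside the band
```

The point of the D-optimum design step is precisely to shrink `uncertainty`, and hence the threshold band, where it matters most.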


2012 ◽  
Vol 6-7 ◽  
pp. 1055-1060 ◽  
Author(s):  
Yang Bing ◽  
Jian Kun Hao ◽  
Si Chang Zhang

In this study we apply back-propagation neural network models to predict the daily Shanghai Stock Exchange Composite Index. The learning algorithm and gradient search technique are constructed in the models. We evaluate the prediction models and conclude that the Shanghai Stock Exchange Composite Index is predictable in the short term. The empirical study shows that the neural network models are successfully applied to predict the daily highest, lowest, and closing values of the index, but they cannot predict its return rate in the short term.
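The gradient-search training the abstract mentions can be illustrated with a bare-bones stand-in: a single linear neuron mapping a window of past values to the next one, trained by per-sample gradient descent. The window size, learning rate, and toy series are all assumptions for illustration, not the study's configuration:

```python
def train_predictor(series, window=3, lr=0.01, epochs=3000):
    """Gradient-descent training of one linear neuron that maps the
    previous `window` values to the next value. A real back-propagation
    network would add hidden layers and nonlinearities."""
    w = [0.0] * window
    b = 0.0
    samples = [(series[i:i + window], series[i + window])
               for i in range(len(series) - window)]
    for _ in range(epochs):
        for x, y in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Gradient of squared error w.r.t. weights and bias.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w, b = train_predictor(series)
next_val = sum(wi * xi for wi, xi in zip(w, [6.0, 7.0, 8.0])) + b
```

On this perfectly linear toy series the fitted neuron extrapolates the next value close to 9; real index data is, of course, far noisier.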


Author(s):  
Soha Abd Mohamed El-Moamen ◽  
Marghany Hassan Mohamed ◽  
Mohammed F. Farghally

The need for real-time tracking and evaluation of patients has contributed to increased interest in recognizing people's actions to enhance care facilities. Deep learning is well suited both to ingesting big healthcare data at a rapid pace and to making accurate predictions for early lung cancer detection. In this paper, we propose a constructive deep neural network with Apache Spark to classify images and stages of lung cancer. We developed a binary classification model that uses a threshold technique to classify nodules as benign or malignant. In the proposed framework, training of the neural network models, defined using the Keras API, is performed with BigDL on a distributed Spark cluster. The proposed algorithm achieves an AUC of 0.9810 and a low misclassification rate, showing that the suggested classifiers perform better than other classifiers.
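The threshold technique for the binary decision reduces to a one-line rule; the 0.5 default is an illustrative assumption (in practice the cut-off would be tuned on validation data, e.g. to trade sensitivity against specificity on the ROC curve behind the reported AUC):

```python
def classify_nodule(malignancy_score, threshold=0.5):
    """Binary threshold rule: the network's malignancy score is cut at
    `threshold` to produce the final benign/malignant label."""
    return "malignant" if malignancy_score >= threshold else "benign"

high_risk = classify_nodule(0.9)
low_risk = classify_nodule(0.2)
```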


2021 ◽  
Vol 1 (1) ◽  
pp. 19-29
Author(s):  
Zhe Chu ◽  
Mengkai Hu ◽  
Xiangyu Chen

Recently, deep learning has been successfully applied to robotic grasp detection. Based on convolutional neural networks (CNNs), there have been many end-to-end detection approaches. However, end-to-end approaches have strict requirements for the dataset used to train the neural network models, which are hard to meet in practical use. Therefore, we propose a two-stage approach using a particle swarm optimizer (PSO) candidate estimator and a CNN to detect the most likely grasp. Our approach achieved an accuracy of 92.8% on the Cornell Grasp Dataset, placing it among the best of the existing approaches while running at real-time speeds. With a small change to the approach, we can predict multiple grasps per object at the same time, so that an object can be grasped in a variety of ways.
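A minimal PSO of the kind the first stage could use to propose candidates is sketched below; it optimises a 1-D objective for brevity (real grasp candidates are multi-dimensional poses), and the inertia and acceleration coefficients are common textbook values, not the paper's settings:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=0):
    """Bare-bones particle swarm optimizer: each particle is pulled
    toward its own best position and the swarm-wide best position."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                 # each particle's best-seen position
    gbest = min(xs, key=f)        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Toy objective standing in for "negative grasp quality".
best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In the two-stage pipeline the CNN would then score the PSO's candidate poses rather than exhaustively scanning the image.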


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

Abstract In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
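The cross-validation idea behind the data-sufficiency check can be sketched generically: large held-out-fold errors point to regions of operating conditions with too little training data. The fold layout, the max-error summary, and the toy slope model below are illustrative assumptions, not the paper's procedure:

```python
def kfold_errors(xs, ys, k, fit, predict):
    """Hold out each of k folds in turn, fit on the rest, and record the
    worst absolute prediction error on the held-out fold. Folds with
    large errors flag under-sampled regions of the input space."""
    n = len(xs)
    fold = max(1, n // k)
    errs = []
    for start in range(0, n, fold):
        test_idx = set(range(start, min(start + fold, n)))
        tr_x = [x for i, x in enumerate(xs) if i not in test_idx]
        tr_y = [y for i, y in enumerate(ys) if i not in test_idx]
        model = fit(tr_x, tr_y)
        errs.append(max(abs(predict(model, xs[i]) - ys[i]) for i in test_idx))
    return errs

def fit_slope(tx, ty):
    # Toy model standing in for a trained network: slope through origin.
    return sum(a * b for a, b in zip(tx, ty)) / sum(a * a for a in tx)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
errs = kfold_errors(xs, ys, k=3, fit=fit_slope, predict=lambda m, x: m * x)
```

On this exactly-linear toy data every fold's error is essentially zero; on real heat-exchanger data, a fold with markedly larger error would mark conditions needing more tests.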


2008 ◽  
pp. 2476-2493 ◽  
Author(s):  
David Encke

Researchers have known for some time that nonlinearity exists in the financial markets and that neural networks can be used to forecast market returns. Unfortunately, many of these studies fail to consider alternative forecasting techniques, or the relevance of the input variables. The following research utilizes an information-gain technique from machine learning to evaluate the predictive relationships of numerous financial and economic input variables. Neural network models for level estimation and classification are then examined for their ability to provide an effective forecast of future values. A cross-validation technique is also employed to improve the generalization ability of the models. The results show that the classification models generate higher accuracy in forecasting ability than the buy-and-hold strategy, as well as those guided by the level-estimation-based forecasts of the neural network and benchmark linear regression models.
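The information-gain filter used to rank candidate input variables computes the reduction in label entropy from splitting on a feature. The sketch below assumes the features have already been discretised, which real financial inputs would require first:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(feature, labels):
    """Entropy of the labels minus their entropy conditioned on the
    feature's value: higher gain means the feature is more predictive."""
    n = len(labels)
    groups = {}
    for f, label in zip(feature, labels):
        groups.setdefault(f, []).append(label)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# A feature that perfectly separates up-days from down-days
# carries the full 1 bit of label entropy.
gain = information_gain([0, 0, 1, 1], ["down", "down", "up", "up"])
```

Variables would then be ranked by gain, and only the top-ranked ones fed to the level-estimation and classification networks.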

