Stable climate simulations using a realistic GCM with neural network parameterizations for atmospheric moist physics and radiation processes

2021 ◽  
Author(s):  
Xin Wang ◽  
Yilun Han ◽  
Wei Xue ◽  
Guangwen Yang ◽  
Guang J. Zhang

Abstract. In climate models, subgrid parameterizations of convection and clouds are one of the main sources of biases in simulated precipitation and atmospheric circulation. In recent years, driven by the rapid development of data science, machine learning (ML) parameterizations for convection and clouds have shown the potential to outperform conventional parameterizations. To date, most existing studies have used aqua-planet or otherwise idealized models, and problems of simulation instability and climate drift persist; developing an ML parameterization scheme in a realistically configured model remains challenging. In this study, a group of deep residual multilayer perceptrons with strong nonlinear fitting ability is designed to learn a parameterization scheme from cloud-resolving model outputs. Multi-target training is used to best balance the fits across the diverse neural network outputs. The optimal ML parameterization, named NN-Parameterization, is then chosen from feasible candidates for both high performance and long-term simulation stability. The results show that NN-Parameterization performs well in multi-year climate simulations and reproduces reasonable climatology and climate variability in a general circulation model (GCM), running about 30 times faster than the superparameterized GCM with embedded cloud-resolving models. Under real geographical boundary conditions, the hybrid ML-physical GCM simulates the spatial distribution of precipitation well and significantly improves the frequency of precipitation extremes, which is largely underestimated in the Community Atmosphere Model version 5 (CAM5) at a horizontal resolution of 1.9° × 2.5°. Furthermore, the hybrid ML-physical GCM simulates a stronger Madden-Julian oscillation signal with a more reasonable propagation speed; in CAM5 the signal is too weak and propagates too fast. This study pioneers multi-year stable climate simulations with a hybrid ML-physical GCM under realistic land-ocean boundary conditions, demonstrating the emerging potential of machine learning parameterizations in climate simulation.
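A minimal sketch (not the authors' code) of the kind of deep residual multilayer perceptron with multi-target outputs described in the abstract; the layer sizes, depth, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        # Skip connection eases training of deeper MLPs on CRM output data.
        return torch.relu(x + self.net(x))

class NNParameterization(nn.Module):
    """Maps the coarse-grid atmospheric state to subgrid tendencies."""
    def __init__(self, n_in, n_out, width=256, depth=8):
        super().__init__()
        self.inp = nn.Linear(n_in, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        # Multi-target head: e.g. heating, moistening, and radiative tendencies.
        self.out = nn.Linear(width, n_out)

    def forward(self, x):
        return self.out(self.blocks(torch.relu(self.inp(x))))

# Multi-target training can balance fits across outputs, e.g. by normalizing
# each target before applying a shared MSE loss (an assumption, not the paper's exact scheme).
model = NNParameterization(n_in=128, n_out=64)
loss_fn = nn.MSELoss()
```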

Author(s):  
Ruimin Ke ◽  
Wan Li ◽  
Zhiyong Cui ◽  
Yinhai Wang

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models has been designed, achieving high accuracy and large-scale prediction. However, existing studies have two major limitations: first, they predict aggregated rather than lane-level traffic speed; second, most ignore the impact of other traffic flow parameters on speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction that accounts for the impact of traffic volume. In this model, the authors first introduce a new data conversion method that converts raw traffic speed and volume data into spatial-temporal multi-channel matrices. They then design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, across the spatial-temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact on speed prediction is developed. A case study using one year of data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.
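A hedged sketch of a two-stream multi-channel CNN of the kind the abstract describes: lanes as channels over time-space matrices, one stream for speed and one for volume, and a volume-weighted loss term. All dimensions and the exact loss form are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TMCNN(nn.Module):
    def __init__(self, n_lanes, horizon):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(n_lanes, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
        self.speed_stream = stream()    # input: (batch, lanes, time, space)
        self.volume_stream = stream()
        self.head = nn.LazyLinear(n_lanes * horizon)  # lane-level speed forecast

    def forward(self, speed, volume):
        z = torch.cat([self.speed_stream(speed), self.volume_stream(volume)], dim=1)
        return self.head(z)

def volume_aware_loss(pred, target, volume_weight):
    # Weight squared speed errors by (assumed) normalized lane volumes so that
    # heavily used lanes contribute more to the loss.
    return ((pred - target) ** 2 * volume_weight).mean()

model = TMCNN(n_lanes=4, horizon=6)
speed = torch.randn(8, 4, 12, 10)    # (batch, lanes, time steps, road segments)
volume = torch.randn(8, 4, 12, 10)
print(model(speed, volume).shape)    # torch.Size([8, 24])
```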


Author(s):  
J. Fenila Naomi ◽  
Kavitha M. ◽  
Sathiyamoorthi V.

For centuries, the concept of a smart, autonomous learning machine has fascinated people. The philosophy of machine learning is to automate the development of analytical models so that algorithms can learn continually from the available information. Machine learning (ML) and deep learning (DL) methods are implemented to further improve an application's intelligence and capabilities as the quantity of gathered information grows. Because the IoT will be one of the main sources of data, data science will make a significant contribution to making IoT applications smarter. Both cloud computing and the Internet of Things are developing rapidly within the field of wireless communication. This chapter answers the following questions: How can ML and DL algorithms be applied to IoT data? What is the taxonomy of ML and DL algorithms for the IoT? And what are the real-world IoT data characteristics that require data analytics?


2020 ◽  
Author(s):  
Rachel Furner ◽  
Peter Haynes ◽  
Dan Jones ◽  
Dave Munday ◽  
Brooks Paige ◽  
...  

The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, climate models represent the best tools we have to predict, understand, and potentially mitigate climate change; however, these process-based models are extremely complex and require huge amounts of high-performance computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models.

Here we discuss our recent efforts to reduce the computational cost associated with running a process-based model of the physical ocean by developing an analogous data-driven model. We train statistical and machine learning algorithms using the outputs from a highly idealised sector configuration of a general circulation model (MITgcm). Our aim is to develop an algorithm that can predict the future state of the general circulation model to a similar level of accuracy in a more computationally efficient manner.

We first develop a linear regression model to investigate the sensitivity of data-driven approaches to various inputs, e.g. temperature on different spatial and temporal scales, and meta-variables such as location information. Following this, we develop a neural network model to replicate the general circulation model, as in the work of Dueben and Bauer (2018) and Scher (2018).

We present a discussion of the sensitivity of data-driven models and preliminary results from the neural-network-based model.

Dueben, P. D., & Bauer, P. (2018). Challenges and design choices for global weather and climate models based on machine learning. Geoscientific Model Development, 11(10), 3999-4009.

Scher, S. (2018). Toward data-driven weather and climate forecasting: Approximating a simple general circulation model with deep learning. Geophysical Research Letters, 45(22), 12-616.
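A minimal sketch of the linear-regression baseline described above: predict the next model state at a grid point from nearby temperatures plus location meta-variables. The synthetic data and feature choices here are illustrative assumptions, not MITgcm output.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Stand-in for GCM output: (samples, features), where the features might be a
# local temperature stencil at several time lags plus lat/lon/depth metadata.
X = rng.normal(size=(10_000, 30))
y = X[:, 0] + 0.1 * rng.normal(size=10_000)  # synthetic "next-step temperature"

reg = LinearRegression().fit(X[:8000], y[:8000])
print("R^2 on held-out data:", reg.score(X[8000:], y[8000:]))
```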


2016 ◽  
Vol 12 (8) ◽  
pp. 1619-1634 ◽  
Author(s):  
Youichi Kamae ◽  
Kohei Yoshida ◽  
Hiroaki Ueda

Abstract. The accumulation of global proxy data is an essential step for improving the reliability of climate model simulations of the warm Pliocene climate. In the Pliocene Model Intercomparison Project Phase 2 (PlioMIP2), part of the Paleoclimate Modelling Intercomparison Project Phase 4, the boundary forcing data have been updated from PlioMIP Phase 1 following recent advances in understanding of the oceanic, terrestrial, and cryospheric aspects of the Pliocene palaeoenvironment. In this study, the sensitivities of Pliocene climate simulations to the newly archived boundary conditions are evaluated with a set of simulations using an atmosphere-ocean coupled general circulation model, MRI-CGCM2.3. The simulated Pliocene climate is warmer than pre-industrial conditions by 2.4 °C in the global mean, 0.6 °C warmer than the PlioMIP1 simulation with the identical climate model. Revised orography, lakes, and shrunken ice sheets relative to PlioMIP1 exert local and remote influences, including snow and sea-ice albedo feedback and poleward heat transport by the atmosphere and ocean, that result in additional warming over the middle and high latitudes. The amplified higher-latitude warming is supported qualitatively by the proxy evidence, but is still underestimated quantitatively. The physical processes responsible for the global and regional climate changes should be further addressed in future studies under systematic intermodel and data-model comparison frameworks.


2021 ◽  
Author(s):  
Kira Rehfeld ◽  
Jonathan Wider ◽  
Nadine Theisen ◽  
Martin Werner ◽  
Ullrich Köthe ◽  
...  

Tracing the spatio-temporal distribution of water isotopologues (e.g., H₂¹⁶O, H₂¹⁸O, HD¹⁶O, D₂¹⁶O) in the atmosphere allows insights into the hydrological cycle and surface-atmosphere interactions. Strong relationships between atmospheric circulation and isotopologue variability exist, mediated by fractionation during phase transitions of water. Isotopic gradients correlate with precipitation amount, temperature, and distance to evaporation source areas, and often follow topographic features. Isotope-enabled general circulation models (iGCMs) have been established to explicitly simulate the processes that lead to these distributions in response to changes in radiative forcing and boundary conditions, including effects of internal variability of the climate system. However, few such iGCMs [1,2] of varying complexity exist to date, and isotopic tracers reduce their computational efficiency.

Here, we evaluate the potential of replacing the explicit simulation of the isotopic component of the water cycle with statistical learning for offline model evaluation at interannual to multi-millennial timescales. This is challenging: while the relevant fractionation processes are well understood, the climate system is a chaotic, nonstationary system of high dimensionality. Successful statistical prediction therefore requires the (so far elusive) understanding of the timescale-dependent relationships in the climate system. We present a case study on the feasibility of this approach.

We focus on the impact of variable selection (primarily surface temperature, precipitation, and sea-level pressure) and boundary conditions (CO₂ concentrations, ice sheet distribution). We also compare different approaches to dimensionality reduction, and compare the performance of different machine-learning approaches including simple linear regression, random forests, Gaussian processes, and different types of neural networks. The accuracy of the predictions is evaluated using regional and global area-weighted mean squared errors across training and evaluation data from individual GCM simulations and across climatic states. We find high spatial variability in prediction accuracy, which is modest in many locations with the presently employed approaches. We obtain encouraging results for the prediction of isotope variability in Greenland and the Antarctic.

References

[1] Tindall, J. C., P. J. Valdes, and Louise C. Sime. "Stable water isotopes in HadCM3: Isotopic signature of El Niño–Southern Oscillation and the tropical amount effect." Journal of Geophysical Research: Atmospheres 114.D4 (2009).

[2] Werner, Martin, et al. "Glacial–interglacial changes in H₂¹⁸O, HDO and deuterium excess – results from the fully coupled ECHAM5/MPI-OM Earth system model." Geoscientific Model Development 9.2 (2016): 647-670.
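A sketch of the area-weighted mean-squared-error metric described above for evaluating emulators on a latitude-longitude grid; weighting grid cells by the cosine of latitude is the standard choice and an assumption here.

```python
import numpy as np

def area_weighted_mse(pred, truth, lat_deg):
    """pred, truth: (lat, lon) fields; lat_deg: 1-D latitudes in degrees."""
    w = np.cos(np.deg2rad(lat_deg))[:, None]   # proxy for grid-cell area
    w = np.broadcast_to(w, pred.shape)
    return float(np.sum(w * (pred - truth) ** 2) / np.sum(w))

lat = np.linspace(-89, 89, 90)
truth = np.random.default_rng(1).normal(size=(90, 180))
pred = truth + 0.1                              # emulator with uniform 0.1 bias
print(area_weighted_mse(pred, truth, lat))      # ~0.01
```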


2021 ◽  
Author(s):  
Ying Yang ◽  
Huaixin Cao

Abstract. With the rapid development of machine learning, artificial neural networks provide a powerful tool for representing or approximating many-body quantum states. It has been proved that every graph state can be generated by a neural network. In this paper, we introduce digraph states and explore their neural network representations (NNRs). Based on a discussion of digraph states and neural network quantum states (NNQSs), we explicitly construct the NNR for any digraph state, implying that every digraph state is an NNQS. These results provide a theoretical foundation for solving the quantum many-body problem with machine learning methods whenever the wave function is an unknown digraph state or can be approximated by digraph states.
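A minimal sketch of a restricted-Boltzmann-machine neural-network quantum state, the standard NNQS construction in this literature; the random parameters below are placeholders and do not encode a digraph state.

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """Unnormalized wave-function amplitude psi(s) for spins s in {-1, +1}."""
    theta = b + W @ s                             # hidden-unit pre-activations
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_visible) + 1j * rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden) + 1j * rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_hidden, n_visible)) \
    + 1j * rng.normal(scale=0.1, size=(n_hidden, n_visible))

s = np.array([1, -1, 1, 1])                       # one spin configuration
print(rbm_amplitude(s, a, b, W))
```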


2020 ◽  
Vol 2020 (9) ◽  
Author(s):  
Gregor Kasieczka ◽  
Simone Marzani ◽  
Gregory Soyez ◽  
Giovanni Stagnitto

Abstract. The past few years have seen rapid development of machine-learning algorithms. While they surely improve performance, these complex tools are often treated as black boxes and may impair our understanding of the physical processes under study. The aim of this paper is to take a first step towards applying expert knowledge in particle physics to calculate the optimal decision function and test whether it is achieved by standard training, thus making the aforementioned black box more transparent. In particular, we consider the binary classification problem of discriminating quark-initiated jets from gluon-initiated ones. We construct a new version of the widely used N-subjettiness, which features simpler theoretical behaviour than the original while maintaining, if not exceeding, its discrimination power. We feed these new observables to the simplest possible neural network, namely one made of a single neuron, or perceptron, and analytically study the network's behaviour at leading logarithmic accuracy. We are able to determine under which circumstances the perceptron achieves optimal performance. We also compare our analytic findings with an actual implementation of a perceptron and with a more realistic neural network, and find very good agreement.
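A sketch of the "simplest possible network" from the abstract: a single neuron (perceptron) classifying quark- vs. gluon-initiated jets from two N-subjettiness-like observables. The synthetic inputs below are placeholders, not physics data.

```python
import torch
import torch.nn as nn

perceptron = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())  # 2 observables -> q/g score
opt = torch.optim.Adam(perceptron.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

# Synthetic stand-in data: one class tends toward larger observable values.
x = torch.randn(2000, 2)
y = (x.sum(dim=1) + 0.3 * torch.randn(2000) > 0).float().unsqueeze(1)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(perceptron(x), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```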


2021 ◽  
Author(s):  
Guang An Ooi ◽  
Mehmet Burak Özakin ◽  
Tarek Mahmoud Mostafa ◽  
Hakan Bagci ◽  
Shehab Ahmed ◽  
...  

Abstract. In the wake of today's industrial revolution, many advanced technologies and techniques have been developed to address the complex challenges of well integrity evaluation. One of the most prominent innovations is the integration of physics-based data science for robust downhole measurements. This paper introduces a promising breakthrough in electromagnetism-based corrosion imaging using physics-informed machine learning (PIML), tested and validated on the cross-sections of real metal casings/tubing with defects of various sizes, locations, and spacings. Unlike existing electromagnetism-based inspection tools, which measure only the circumferential average metal thickness, this research investigates artificial intelligence (AI)-assisted interpretation of a unique arrangement of electromagnetic (EM) sensors, facilitating a novel solution for through-tubing corrosion imaging that enhances defect detection with pixel-level accuracy. The developed framework incorporates a finite-difference time-domain (FDTD) EM forward solver and an artificial neural network (ANN), namely a long short-term memory recurrent neural network (LSTM-RNN). The ANN is trained on results generated by the FDTD solver, which simulates sensor readings for different defect scenarios. The integration of the array EM-sensor responses with the ANN enables generalizable and accurate measurements of the metal loss percentage across various experimental defects, as well as precise prediction of defect aperture sizes, numbers, and locations with 360-degree coverage. Results are plotted in customized 2D heat maps for any desired cross-section of the test casings. Further analysis demonstrated that the LSTM-RNN achieves higher precision and robustness than regular dense NNs, especially in the case of multiple defects. The LSTM-RNN is validated on additional simulated and experimental data and shows reliable predictions even with limited training data. The model accurately predicted defects both larger and smaller than those in the training data, which were intentionally excluded to demonstrate generalizability. This highlights a major advance toward corrosion imaging behind tubing and paves the way for applying similar concepts to other sensors for multiple-barrier imaging. Future work includes improving the sensor package and the ANNs by adding a third dimension to the imaging capabilities to produce 3D images of defects on casings.
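A hedged sketch of an LSTM over an array of EM-sensor readings that predicts per-angle metal-loss percentage, in the spirit of the framework above; the sequence length, sensor count, and angular resolution are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CorrosionLSTM(nn.Module):
    def __init__(self, n_sensors=16, hidden=64, n_angles=360):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)   # metal-loss % per degree

    def forward(self, x):
        # x: (batch, time_steps, n_sensors) as the tool logs past the defect.
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1])) * 100.0  # 0-100 % metal loss

model = CorrosionLSTM()
readings = torch.randn(4, 50, 16)     # 4 scans, 50 time steps, 16 sensors
print(model(readings).shape)          # torch.Size([4, 360]), one ring per scan
```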


Author(s):  
Samuel A. Stein

Tremendous progress has been witnessed in artificial intelligence within the domain of neural-network-backed deep learning systems and their applications. As we approach the post-Moore's-Law era, the limits of semiconductor fabrication technology, along with a rapid increase in data generation rates, have led to a growing challenge in tackling newer and more modern machine learning problems. In parallel, quantum computing has developed rapidly in recent years. Due to the potential of a quantum speedup, quantum-based learning applications have become an area of significant interest, in the hope that quantum systems can be leveraged to solve classical problems. In this work, we propose a quantum deep learning architecture and demonstrate our quantum neural network on tasks ranging from binary and multi-class classification to generative modelling. Powered by a modified quantum differentiation function and a hybrid quantum-classical design, our architecture encodes the data with a reduced number of qubits and generates a quantum circuit, loading it onto a quantum platform where the model learns the optimal states iteratively. We conduct extensive experiments on both a local computing environment and the IBM-Q quantum platform. The evaluation results demonstrate that our architecture outperforms TensorFlow Quantum by up to 12.51% and 11.71% against a comparable classical deep neural network on classification tasks trained with the same network settings. Furthermore, our GAN architecture runs the discriminator and the generator purely on quantum hardware and uses the swap test on qubits to compute the values of the loss functions. Compared with classical GANs, our quantum GAN achieves similar performance with a 98.5% reduction in the parameter set. With the same number of parameters, QuGAN additionally outperforms other quantum-based GANs in the literature by up to 125.0% in terms of the similarity between generated distributions and original data sets.
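A sketch of the swap test mentioned above, which estimates the overlap between two quantum states and can serve as a GAN loss signal. This standalone Qiskit example is an assumption about the construction, not the authors' implementation; the rotation angles are arbitrary placeholders.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)
qc.ry(0.7, 1)          # prepare |psi> on qubit 1 (placeholder state)
qc.ry(1.9, 2)          # prepare |phi> on qubit 2
qc.h(0)                # put the ancilla into superposition
qc.cswap(0, 1, 2)      # controlled swap of the two states
qc.h(0)

# P(ancilla = 0) = (1 + |<psi|phi>|^2) / 2, so the overlap is recoverable:
probs = Statevector.from_instruction(qc).probabilities([0])
overlap_sq = 2 * probs[0] - 1
print("estimated |<psi|phi>|^2:", overlap_sq)
```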


2019 ◽  
Vol 8 (3) ◽  
pp. 8428-8432

Due to the rapid development of communication technologies and global networking, many daily human activities such as electronic banking, social networking, and e-commerce have moved to cyberspace. The anonymous, open, and uncontrolled infrastructure of the internet provides an excellent platform for cyber attacks. Phishing is a cyber attack in which attackers set up fraudulent websites that mimic popular, legitimate websites to steal users' sensitive information. Machine learning techniques such as J48, Support Vector Machine (SVM), Logistic Regression (LR), Naive Bayes (NB), and Artificial Neural Network (ANN) have been widely used to detect phishing attacks, but obtaining good-quality training data is one of the biggest problems in machine learning. So, a deep learning method called Deep Neural Network (DNN) is introduced to detect phishing Uniform Resource Locators (URLs). Initially, a feature extractor constructs a 30-dimension feature vector from URL-based, HTML-based, and domain-based features. These features are given as input to the DNN classifier for phishing attack detection. The classifier consists of one input layer, multiple hidden layers, and one output layer; the hidden layers learn high-level features incrementally. Finally, the DNN returns a probability value that distinguishes phishing URLs from legitimate URLs. Using the DNN improves the accuracy, precision, and recall of phishing attack detection.
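A minimal sketch of the DNN classifier described above: a 30-dimension URL feature vector passed through several hidden layers to a phishing probability. The hidden-layer widths are assumptions; the paper specifies only the 30-feature input and multiple hidden layers.

```python
import torch
import torch.nn as nn

dnn = nn.Sequential(
    nn.Linear(30, 64), nn.ReLU(),   # input: 30 URL/HTML/domain features
    nn.Linear(64, 32), nn.ReLU(),   # hidden layers learn higher-level features
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(), # output: probability the URL is phishing
)

features = torch.rand(8, 30)        # batch of 8 feature vectors (placeholder)
print(dnn(features).squeeze(1))     # phishing probabilities in [0, 1]
```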

