Deep neural networks to downscale ocean climate models

Author(s):  
Marie Déchelle-Marquet ◽  
Marina Levy ◽  
Patrick Gallinari ◽  
Michel Crepon ◽  
Sylvie Thiria

Ocean currents are a major driver of climate variability, for instance through the heat transport they induce. Ocean climate models have a rather low resolution of about 50 km, yet several dynamical processes, such as instabilities and filaments with scales of about 1 km, have a strong influence on the ocean state. We propose to observe and model these fine-scale effects with deep neural networks, combining high-resolution satellite SST observations (1 km resolution, daily) and mesoscale-resolution altimetry observations (10 km resolution, weekly). Whereas the downscaling of climate models has commonly been addressed with assimilation approaches, in the last few years neural networks have emerged as a powerful multi-scale analysis method. Moreover, the large amount of available oceanic data makes deep learning an attractive way to bridge the gap between variability at different scales.

This study aims at reconstructing the multi-scale variability of oceanic fields, based on the high-resolution NATL60 ocean model and observations at different spatial resolutions: low-resolution sea surface height (SSH) and high-resolution SST. Since the link between residual neural networks and dynamical systems has recently been established, such a network is trained in a supervised way to reconstruct the high variability of SSH and ocean currents at submesoscale (a few kilometers). To ensure that physical properties are preserved in the model outputs, physical knowledge is incorporated into the training of the deep learning models. Different validation methods are investigated and the model outputs are tested with regard to their physical plausibility. The performance of the method is discussed and compared to other baselines (namely a convolutional neural network). The generalization of the proposed method to other ocean variables, such as sea surface chlorophyll or sea surface salinity, is also examined.
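
The abstract leaves the architecture and the physical constraints unspecified; as a rough illustration of the idea, the PyTorch sketch below pairs a residual CNN, mapping low-resolution SSH plus high-resolution SST to high-resolution SSH, with a hypothetical gradient-matching penalty standing in for the physical term (SSH gradients determine geostrophic currents). All class and function names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Conv block with a skip connection; the residual form mirrors the
    dynamical-systems view of ResNets mentioned in the abstract."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class DownscalingNet(nn.Module):
    """Illustrative mapping: (low-res SSH regridded to the target grid,
    high-res SST) stacked as channels -> high-res SSH."""
    def __init__(self, in_channels=2, hidden=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(in_channels, hidden, 3, padding=1)]
        layers += [ResidualBlock(hidden) for _ in range(depth)]
        layers += [nn.Conv2d(hidden, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def physics_aware_loss(pred, target, weight=0.1):
    """MSE plus a gradient-matching term (an assumed stand-in for the
    physical constraint: SSH gradients set the geostrophic currents)."""
    mse = F.mse_loss(pred, target)
    grad = (F.mse_loss(pred.diff(dim=-1), target.diff(dim=-1)) +
            F.mse_loss(pred.diff(dim=-2), target.diff(dim=-2)))
    return mse + weight * grad
```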

2021 ◽  
Author(s):  
Rilwan A. Adewoyin ◽  
Peter Dueben ◽  
Peter Watson ◽  
Yulan He ◽  
Ritabrata Dutta

Climate models (CM) are used to evaluate the impact of climate change on the risk of floods and heavy precipitation events. However, these numerical simulators produce outputs with low spatial resolution and have difficulty representing precipitation events accurately. This is mainly due to computational limits on the spatial resolution used when simulating multi-scale weather dynamics in the atmosphere. To improve the prediction of high-resolution precipitation we apply a Deep Learning (DL) approach using input data from a reanalysis product that is comparable to a climate model's output but can be directly related to precipitation observations at a given time and location. Further, our input excludes local precipitation but includes model fields (weather variables) that are more predictable and generalizable than local precipitation. To this end, we present TRU-NET (Temporal Recurrent U-Net), an encoder-decoder model featuring a novel 2D cross-attention mechanism between contiguous convolutional-recurrent layers to effectively model multi-scale spatio-temporal weather processes. We also propose a non-stochastic variant of the conditional-continuous (CC) loss function to capture the zero-skewed distribution of rainfall. Experiments show that our models, trained with our CC loss, consistently attain lower RMSE and MAE scores than a DL model prevalent in precipitation downscaling, and outperform a state-of-the-art dynamical weather model. Moreover, by evaluating the performance of our model under various data formulation strategies for the training and test sets, we show that there is enough data for our deep learning approach to output robust, high-quality results across seasons and varying regions.
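
The paper defines its own conditional-continuous loss; the sketch below shows one plausible non-stochastic form, assuming the network emits a rain-occurrence probability and a conditional rain amount per cell. The 0.5 mm occurrence threshold and the exact combination of terms are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def cc_loss(rain_prob, rain_amount, target, occ_threshold=0.5):
    """Assumed non-stochastic conditional-continuous loss: classify
    rain occurrence, and regress the amount only where rain occurred,
    to cope with the zero-skewed distribution of precipitation.
    rain_prob must lie in (0, 1), e.g. after a sigmoid."""
    occurred = (target > occ_threshold).float()
    bce = F.binary_cross_entropy(rain_prob, occurred)
    mask = occurred.bool()
    if mask.any():
        reg = F.mse_loss(rain_amount[mask], target[mask])
    else:
        reg = rain_amount.new_zeros(())  # no rain in batch: regression term vanishes
    return bce + reg
```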


2021 ◽  
Vol 13 (18) ◽  
pp. 3568
Author(s):  
Bo Ping ◽  
Yunshan Meng ◽  
Cunjin Xue ◽  
Fenzhen Su

Meso- and fine-scale sea surface temperature (SST) is an essential parameter in oceanographic research. Remote sensing is an efficient way to acquire global SST. However, single infrared-based and microwave-based satellite-derived SST cannot provide complete coverage and high resolution simultaneously. Deep learning super-resolution (SR) techniques have exhibited the ability to enhance spatial resolution, offering the potential to reconstruct the details of SST fields. Current SR research focuses mainly on improving the structure of the SR model rather than on training dataset selection. Unlike the common setting in which low-resolution images are generated by downscaling the corresponding high-resolution images, the high- and low-resolution SST here are derived from different sensors. Hence, the structural similarity of training patches may affect SR model training and, consequently, SST reconstruction. In this study, we first discuss the influence of training dataset selection on SST SR performance, showing that a training dataset selected with a structural similarity index (SSIM) threshold of 0.6 results in higher reconstruction accuracy and better image quality. In addition, at the prediction stage, the spatial similarity between the low-resolution input and the target high-resolution output is a key factor for SST SR. Moreover, a training dataset built from actual AMSR2 and MODIS SST images is more suitable for SST SR because of the difference between skin and sub-skin temperatures. Finally, the SST reconstruction accuracies obtained from different SR models are relatively consistent, yet the differences in reconstructed image quality are rather significant.
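
A minimal sketch of the dataset-selection step, assuming the low-resolution (e.g. AMSR2) patches have already been regridded to the high-resolution (e.g. MODIS) grid; the 0.6 SSIM threshold is the value the study reports as best, while the gap-handling policy is an assumption.

```python
import numpy as np
from skimage.metrics import structural_similarity

def select_training_pairs(lr_patches, hr_patches, ssim_threshold=0.6):
    """Keep only (low-res, high-res) SST patch pairs whose structural
    similarity index exceeds the threshold; low-res patches are assumed
    already interpolated onto the high-res grid."""
    selected = []
    for lr, hr in zip(lr_patches, hr_patches):
        if np.isnan(lr).any() or np.isnan(hr).any():
            continue  # skip patches with cloud or land gaps
        score = structural_similarity(lr, hr, data_range=float(np.ptp(hr)))
        if score >= ssim_threshold:
            selected.append((lr, hr))
    return selected
```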


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1579
Author(s):  
Dongqi Wang ◽  
Qinghua Meng ◽  
Dongming Chen ◽  
Hupo Zhang ◽  
Lisheng Xu

Automatic detection of arrhythmia is of great significance for early prevention and diagnosis of cardiovascular disease. Traditional feature engineering methods based on expert knowledge lack the ability to abstract and represent data from multiple dimensions and views, so traditional pattern-recognition approaches to arrhythmia detection cannot achieve satisfactory results. Recently, with the rise of deep learning technology, automatic feature extraction from ECG data with deep neural networks has been widely discussed. In order to exploit the complementary strengths of different schemes, in this paper we propose an arrhythmia detection method based on a multi-resolution representation (MRR) of ECG signals. The method uses four different state-of-the-art deep neural networks as four channel models for learning ECG vector representations. These deep learning based representations, together with hand-crafted ECG features, form the MRR, which is the input to the downstream classification strategy. Experimental results on multi-label classification of a large ECG dataset confirm that the F1 score of the proposed method is 0.9238, which is 1.31%, 0.62%, 1.18% and 0.6% higher than that of each individual channel model. From an architectural perspective, the proposed method is highly scalable and can serve as a template for arrhythmia recognition.
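
A compact sketch of how the MRR could be assembled, assuming each pretrained channel model exposes a hypothetical `embed()` method returning a fixed-length vector per record; the fusion itself is simple concatenation, as the abstract describes.

```python
import numpy as np

def build_mrr(ecg_batch, channel_models, handcrafted_fn):
    """Concatenate learned embeddings from the four channel models with
    hand-crafted ECG features to form the multi-resolution representation.
    `embed()` is a hypothetical interface assumed to return an array of
    shape (batch, d_i); handcrafted_fn computes classical descriptors."""
    learned = [model.embed(ecg_batch) for model in channel_models]
    handcrafted = handcrafted_fn(ecg_batch)  # e.g. RR intervals, wavelet stats
    return np.concatenate(learned + [handcrafted], axis=1)
```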


2021 ◽  
Vol 12 (3) ◽  
pp. 46-47
Author(s):  
Nikita Saxena

Space-borne satellite radiometers measure sea surface temperature (SST), which is pivotal to studies of air-sea interactions and ocean features. Under clear-sky conditions, high-resolution measurements are obtainable, but under cloudy conditions, analysis is constrained to the available low-resolution measurements. We assess the efficiency of deep learning (DL) architectures, particularly convolutional neural networks (CNN), at downscaling oceanographic data from low spatial resolution (SR) to high SR. Focusing on SST fields of the Bay of Bengal, this study shows that a Very Deep Super Resolution CNN can successfully reconstruct SST observations from 15 km to 5 km SR, and from 5 km to 1 km SR. This outcome highlights the value of DL models trained explicitly to reconstruct high-SR SST fields from low-SR data. Inference with DL models can act as a substitute for the existing computationally expensive downscaling technique, dynamical downscaling. The complete code is available on this GitHub repository.
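
Very Deep Super Resolution (VDSR) is a published architecture, so its general shape can be sketched with some confidence: a bicubically upsampled low-resolution field is refined by a deep stack of 3x3 convolutions that predicts a residual. The depth and width below are typical VDSR defaults, not values from this study.

```python
import torch.nn as nn

class VDSR(nn.Module):
    """VDSR-style super-resolution: refine a bicubically upsampled
    low-res SST field by predicting a residual correction with a deep
    stack of 3x3 convolutions (global residual learning)."""
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, upsampled_lr):
        # the network only learns the high-frequency residual
        return upsampled_lr + self.body(upsampled_lr)
```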


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dipendra Jha ◽  
Vishu Gupta ◽  
Logan Ward ◽  
Zijiang Yang ◽  
Christopher Wolverton ◽  
...  

The application of machine learning (ML) techniques in materials science has attracted significant attention in recent years, due to their impressive ability to efficiently extract data-driven linkages between various input materials representations and their output properties. While the application of traditional ML techniques has become quite ubiquitous, there have been limited applications of more advanced deep learning (DL) techniques, primarily because big materials datasets are relatively rare. Given the demonstrated potential and advantages of DL and the increasing availability of big materials datasets, it is attractive to build deeper neural networks in a bid to boost model performance; in practice, however, added depth leads to performance degradation due to the vanishing gradient problem. In this paper, we address the question of how to enable deeper learning for cases where big materials data are available. We present a general deep learning framework based on Individual Residual learning (IRNet), composed of very deep neural networks that can work with any vector-based materials representation as input to build accurate property prediction models. We find that the proposed IRNet models not only successfully alleviate the vanishing gradient problem and enable deeper learning, but also yield significantly (up to 47%) better model accuracy than plain deep neural networks and traditional ML techniques for a given input materials representation in the presence of big data.
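
The defining idea of individual residual learning, as the name suggests, is a shortcut around each layer rather than around groups of layers. A sketch for vector-based inputs follows; the projection used when layer widths differ and the specific widths are assumptions, not details from the paper.

```python
import torch.nn as nn

class IndividualResidualBlock(nn.Module):
    """One fully connected layer with its own shortcut; when the width
    changes, a linear projection matches dimensions (a common choice,
    assumed here)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)
        self.act = nn.ReLU()
        self.proj = nn.Linear(in_dim, out_dim) if in_dim != out_dim else nn.Identity()

    def forward(self, x):
        return self.proj(x) + self.act(self.bn(self.fc(x)))

def make_irnet(input_dim, widths=(1024, 512, 256, 128), n_out=1):
    """Stack a shortcut around every layer so gradients still reach the
    early layers of a very deep vector-input property-prediction model."""
    blocks, dim = [], input_dim
    for w in widths:
        blocks.append(IndividualResidualBlock(dim, w))
        dim = w
    blocks.append(nn.Linear(dim, n_out))
    return nn.Sequential(*blocks)
```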


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are usually left unconstrained during the learning process, as it is unclear which properties should be favored. However, when a batch of inputs is processed concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
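
A minimal sketch of the LGG construction and of constraint (iii), assuming the graph is built from cosine similarities normalized with a row-wise softmax; the paper may define the graph weights differently.

```python
import torch
import torch.nn.functional as F

def latent_geometry_graph(features, temperature=1.0):
    """Batch-level similarity graph: nodes are the inputs in the batch,
    edge weights are softmax-normalized cosine similarities between
    their intermediate representations (one assumed construction)."""
    z = F.normalize(features.flatten(start_dim=1), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float('-inf'))  # remove self-loops
    return F.softmax(sim, dim=1)

def geometry_smoothness_loss(feats_a, feats_b):
    """Constraint (iii): penalize changes of latent geometry between
    two consecutive latent spaces of the same batch."""
    return F.mse_loss(latent_geometry_graph(feats_a),
                      latent_geometry_graph(feats_b))
```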


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Rama K. Vasudevan ◽  
Maxim Ziatdinov ◽  
Lukas Vlcek ◽  
Sergei V. Kalinin

Deep neural networks ('deep learning') have emerged as a technology of choice to tackle problems in speech recognition, computer vision, finance, and other fields. However, the adoption of deep learning in physical domains brings substantial challenges stemming from the correlative nature of deep learning methods, as opposed to the causal, hypothesis-driven nature of modern science. We argue that the broad adoption of Bayesian methods incorporating prior knowledge, the development of solutions with built-in physical constraints, parsimonious structural descriptors, and generative models, and ultimately the adoption of causal models offers a path forward for fundamental and applied research.


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Tiago Pereira ◽  
Maryam Abbasi ◽  
Bernardete Ribeiro ◽  
Joel P. Arrais

In this work, we explore the potential of deep learning to streamline the process of identifying new potential drugs through the computational generation of molecules with interesting biological properties. Two deep neural networks compose our targeted generation framework: the Generator, which is trained to learn the building rules of valid molecules expressed in SMILES string notation, and the Predictor, which evaluates the newly generated compounds by predicting their affinity for the desired target. The Generator is then optimized through reinforcement learning to produce molecules with bespoke properties. The innovation of this approach is the exploratory strategy applied during the reinforcement training process, which seeks to add novelty to the generated compounds. This training strategy employs two Generators interchangeably to sample new SMILES strings: the initially trained model, which remains fixed, and a copy of it, which is updated during training to uncover the most promising molecules. The evolution of the reward assigned by the Predictor determines how often each one is employed to select the next token of the molecule. This strategy establishes a compromise between the need to acquire more information about the chemical space and the need to exploit the experience gained so far when sampling new molecules. To demonstrate the effectiveness of the method, the Generator is trained to design molecules with an optimized partition coefficient and high inhibitory power against the adenosine $A_{2A}$ and $\kappa$ opioid receptors. The results reveal that the model can effectively steer the newly generated molecules in the wanted direction. More importantly, it was possible to find promising sets of unique and diverse molecules, which was the main purpose of the newly implemented strategy.
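
The alternation between the two Generators can be sketched as per-token sampling, assuming a hypothetical `next_token()` interface on each model and a mixing probability `p_updated` derived elsewhere from the trend of the Predictor's reward; none of these names come from the paper.

```python
import random

def sample_molecule(fixed_gen, updated_gen, p_updated, max_len=100):
    """Exploratory sampling sketch: each SMILES token is drawn from
    either the frozen pretrained Generator or the RL-updated copy,
    trading exploration of chemical space against accumulated experience."""
    tokens = ['<start>']
    while len(tokens) < max_len:
        gen = updated_gen if random.random() < p_updated else fixed_gen
        token = gen.next_token(tokens)  # hypothetical sampling interface
        if token == '<end>':
            break
        tokens.append(token)
    return ''.join(tokens[1:])
```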


2021 ◽  
Author(s):  
Vladislav Vasilevich Alekseev ◽  
Denis Mihaylovich Orlov ◽  
Dmitry Anatolevich Koroteev

Approaches to building digital core models, and methods of using them, are currently developing rapidly. These methods make it possible to obtain petrophysical information quickly and non-destructively. Digital rock physics includes two main stages: constructing models and simulating various physical processes on the obtained models. Our work proposes using deep learning methods for mineral and pore-space segmentation instead of classical methods such as threshold image processing. Deep neural networks have long shown their advantages in many areas of computer vision. This paper proposes and tests methods that help identify different minerals in images from a scanning electron microscope. As samples, we used images of rocks of the Achimov formation, which are arkoses. We tested various deep neural networks such as LinkNet, U-Net, ResUNet, and pix2pix, and identified those that performed best at segmentation.
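
Whatever architecture wins, the supervised setup is the same: SEM image patches in, per-pixel mineral labels out. A generic PyTorch training loop along these lines (hyperparameters assumed, not taken from the paper) would apply to any of the tested networks.

```python
import torch
import torch.nn as nn

def train_segmentation(model, loader, n_epochs=50, lr=1e-3, device='cuda'):
    """Generic supervised loop for mineral/pore segmentation; any of the
    tested architectures (LinkNet, U-Net, ResUNet) fits this interface."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # one class per mineral phase + pores
    for epoch in range(n_epochs):
        for images, labels in loader:  # images: (B,1,H,W), labels: (B,H,W) long
            opt.zero_grad()
            loss = criterion(model(images.to(device)), labels.to(device))
            loss.backward()
            opt.step()
    return model
```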

