Solution of Inverse Problems Using Multilayer Quaternion Neural Networks

Author(s): Takehiko Ogawa

2021
Author(s): Ronan Fablet, Bertrand Chapron, Lucas Drumetz, Etienne Memin, Olivier Pannekoucke, ...

This paper addresses representation learning for the resolution of inverse problems with geophysical dynamics. Among others, examples of inverse problems of interest include space-time interpolation, short-term forecasting, conditional simulation w.r.t. available observations, and downscaling problems. From a methodological point of view, we rely on a variational data assimilation framework. Data assimilation (DA) aims to reconstruct the time evolution of some state given a series of observations, possibly noisy and irregularly sampled. Here, we investigate DA from a machine learning point of view, backed by an underlying variational representation. Using the automatic differentiation tools embedded in deep learning frameworks, we introduce end-to-end neural network architectures for variational data assimilation. These architectures comprise two key components, a variational model and a gradient-based solver, both implemented as neural networks. A key feature of the proposed end-to-end learning architecture is that the neural network models may be trained using both supervised and unsupervised strategies.

We first illustrate applications to the reconstruction of Lorenz-63 and Lorenz-96 systems from partial and noisy observations. Whereas the gain obtained in the supervised learning setting emphasizes the relevance of ground-truthed observation datasets for real-world case studies, these results also suggest new means to design data assimilation models from data. In particular, they suggest that learning task-oriented representations of the underlying dynamics may be beneficial. We further discuss applications to short-term forecasting and sampling design, along with preliminary results for the reconstruction of sea surface currents from satellite altimetry data.

This abstract is supported by a preprint available online: https://arxiv.org/abs/2007.12941
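The two-component architecture described in this abstract lends itself to a compact sketch. The PyTorch code below is a minimal, hypothetical illustration, not the authors' implementation: the names `VarCost`, `GradSolver`, and `phi`, the quadratic cost with a learned dynamical prior, and the fixed number of unrolled gradient steps with a trainable step size are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class VarCost(nn.Module):
    """Variational cost U(x) = ||mask * (x - y)||^2 + lam * ||x - phi(x)||^2,
    where phi is a trainable neural surrogate of the dynamical prior."""
    def __init__(self, phi: nn.Module, lam: float = 0.1):
        super().__init__()
        self.phi = phi
        self.lam = lam

    def forward(self, x, y, mask):
        obs_term = ((x - y) * mask).pow(2).sum()     # fit observed entries
        prior_term = (x - self.phi(x)).pow(2).sum()  # consistency with dynamics
        return obs_term + self.lam * prior_term

class GradSolver(nn.Module):
    """Gradient-based solver unrolled for a fixed number of iterations;
    the descent step size is itself a trainable parameter."""
    def __init__(self, cost: VarCost, n_iter: int = 10):
        super().__init__()
        self.cost = cost
        self.n_iter = n_iter
        self.step = nn.Parameter(torch.tensor(0.1))

    def forward(self, x0, y, mask):
        x = x0.detach().clone().requires_grad_(True)
        for _ in range(self.n_iter):
            u = self.cost(x, y, mask)
            # Autodiff supplies grad U(x); create_graph keeps the update
            # differentiable so the whole solver trains end to end.
            (g,) = torch.autograd.grad(u, x, create_graph=True)
            x = x - self.step * g
        return x

# Hypothetical usage: space-time interpolation of a partially observed
# 3-dimensional state trajectory (e.g. a Lorenz-63 system).
phi = nn.Sequential(nn.Linear(3, 64), nn.Tanh(), nn.Linear(64, 3))
solver = GradSolver(VarCost(phi))
y = torch.randn(128, 3)                    # noisy observations
mask = (torch.rand(128, 3) > 0.5).float()  # 1 where observed, 0 elsewhere
x_hat = solver(y * mask, y, mask)          # initialize from the observations
```

With a ground-truthed dataset, training would minimize the error between `x_hat` and the true state, backpropagating through the unrolled solver (the supervised strategy); without one, the variational cost itself can serve as the training loss (the unsupervised strategy).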


2016
Vol 93 (1)
Author(s): Matteo di Volo, Raffaella Burioni, Mario Casartelli, Roberto Livi, Alessandro Vezzani

2019
Vol 62 (3)
pp. 445-455
Author(s): Johannes Schwab, Stephan Antholzer, Markus Haltmeier

Abstract

Deep learning and (deep) neural networks are emerging tools for addressing inverse problems and image reconstruction tasks. Despite their outstanding performance, the mathematical analysis of solving inverse problems with neural networks is mostly missing. In this paper, we introduce and rigorously analyze families of deep regularizing neural networks (RegNets) of the form $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$, where $\mathbf{B}_\alpha$ is a classical regularization and the network $\mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ is trained to recover the missing part $\mathrm{Id}_X - \mathbf{B}_\alpha$ not found by the classical regularization. We show that these regularizing networks yield a convergent regularization method for solving inverse problems. Additionally, we derive convergence rates (quantitative error estimates) assuming a sufficient decay of the associated distance function. We demonstrate that our results recover existing convergence and convergence-rate results for filter-based regularization methods, as well as the recently introduced null space network, as special cases. Numerical results are presented for a tomographic sparse data problem, which clearly demonstrate that the proposed RegNets improve on classical regularization as well as on the null space network.
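The decomposition $\mathbf{B}_\alpha + \mathbf{N}_{\theta(\alpha)} \mathbf{B}_\alpha$ translates directly into code. Below is a minimal, hypothetical PyTorch sketch, not the paper's experimental setup: it instantiates $\mathbf{B}_\alpha$ as Tikhonov regularization for a generic linear forward operator and $\mathbf{N}_{\theta(\alpha)}$ as a small fully connected network; the operator `A`, the network width, and the training step are assumptions made for illustration (the paper's experiments concern a tomographic sparse data problem).

```python
import torch
import torch.nn as nn

def tikhonov(A: torch.Tensor, y: torch.Tensor, alpha: float) -> torch.Tensor:
    """Classical regularization B_alpha: argmin_x ||A x - y||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return torch.linalg.solve(A.T @ A + alpha * torch.eye(n), A.T @ y)

class RegNet(nn.Module):
    """Reconstruction B_alpha(y) + N_theta(B_alpha(y)): the network is trained
    to supply the part Id_X - B_alpha suppressed by the classical regularizer."""
    def __init__(self, A: torch.Tensor, alpha: float):
        super().__init__()
        self.A, self.alpha = A, alpha
        n = A.shape[1]
        self.net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x_reg = tikhonov(self.A, y, self.alpha)  # classical reconstruction
        return x_reg + self.net(x_reg)           # add the learned correction

# Hypothetical training step on simulated pairs (x_true, y = A x_true + noise).
m, n = 30, 50                             # underdetermined, sparse-data flavor
A = torch.randn(m, n) / m ** 0.5
model = RegNet(A, alpha=0.05)
opt = torch.optim.Adam(model.net.parameters(), lr=1e-3)
x_true = torch.randn(n)
y = A @ x_true + 0.01 * torch.randn(m)
opt.zero_grad()
loss = (model(y) - x_true).pow(2).mean()  # learn the missing component
loss.backward()
opt.step()
```

Training against the full reconstruction error pushes the network toward the residual component that the classical regularizer discards; the paper's convergence analysis concerns how such trained corrections behave as $\alpha \to 0$.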

