A comparison between current-based integral equations approaches for eddy current problems

2021 · Vol. 2090 (1) · pp. 012137
Author(s): F. Lucchini, N. Marconato

Abstract In this paper, a comparison between two current-based integral equation approaches for eddy current problems is presented. In particular, the well-known and widely adopted loop-current formulation (or electric vector potential formulation) is compared to the less common J-φ formulation. Pros and cons of the two formulations with respect to problem size are discussed, as well as the adoption of low-rank approximation techniques. Although rarely considered in the literature, the J-φ formulation is shown to offer useful advantages when large problems are considered: at large scale the computational efforts required by the two formulations are comparable, but the J-φ formulation requires no special treatment when non-simply connected domains are considered.
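For illustration, here is a minimal numpy sketch of the low-rank approximation idea mentioned above: off-diagonal interaction blocks of the dense matrices produced by current-based integral formulations are numerically low-rank when source and target element clusters are well separated. The 1/r kernel and random point clusters below are stand-ins, not the paper's actual discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
sources = rng.uniform(0.0, 1.0, size=(400, 3))   # one element cluster
targets = rng.uniform(5.0, 6.0, size=(300, 3))   # a well-separated cluster

# Dense interaction block with a generic 1/r static kernel.
diff = targets[:, None, :] - sources[None, :, :]
A = 1.0 / np.linalg.norm(diff, axis=2)

# Truncated SVD: keep singular values above a 1e-8 relative tolerance.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = max(int(np.sum(s > 1e-8 * s[0])), 1)
A_k = U[:, :k] * s[:k] @ Vt[:k]

print(f"rank {k} of {min(A.shape)}, "
      f"rel. error {np.linalg.norm(A - A_k) / np.linalg.norm(A):.2e}, "
      f"storage ratio {k * sum(A.shape) / A.size:.3f}")
```

In practice, hierarchical low-rank formats apply such compression blockwise, which is what keeps the dense systems of either formulation tractable at large scale.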

2021 · Vol. 47 (2) · pp. 1-34
Author(s): Umberto Villa, Noemi Petra, Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
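The low-rank posterior-covariance construction described here can be sketched in a few lines for a small dense model. This is an assumed toy setup (a linearized forward map and a diagonal prior), not hIPPYlib's actual API: the prior-preconditioned data-misfit Hessian is decomposed into its dominant eigenpairs, and the posterior covariance follows from a Sherman-Morrison-Woodbury identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 50, 10, 10                      # parameters, observations, eigenpairs

J = rng.standard_normal((m, n))           # linearized forward map (assumed)
Gamma_prior = np.diag(1.0 / (1.0 + 0.1 * np.arange(n)))  # toy prior covariance
H_misfit = J.T @ J                        # Gauss-Newton data-misfit Hessian

# Dominant eigenpairs of the prior-preconditioned misfit Hessian L^T H L,
# where Gamma_prior = L L^T; its spectrum decays, so few pairs suffice.
L = np.linalg.cholesky(Gamma_prior)
lam, V = np.linalg.eigh(L.T @ H_misfit @ L)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]

# Gamma_post = (H_misfit + Gamma_prior^{-1})^{-1}
#            ~ Gamma_prior - (L V) diag(lam / (lam + 1)) (L V)^T
LV = L @ V
Gamma_post = Gamma_prior - LV @ np.diag(lam / (lam + 1.0)) @ LV.T

exact = np.linalg.inv(H_misfit + np.linalg.inv(Gamma_prior))
print("rel. error:", np.linalg.norm(Gamma_post - exact) / np.linalg.norm(exact))
```

Because each retained eigenpair costs only a handful of Hessian-vector products (each a pair of linearized PDE solves), the cost is governed by the data-informed rank rather than the parameter dimension, consistent with the abstract's scalability claim.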


Geophysics · 2016 · Vol. 81 (3) · pp. E211-E225
Author(s): Manuel Amaya, Jan Petter Morten, Linus Boman

We have developed an approximation to the Hessian for inversion of 3D controlled-source electromagnetic data. Our approach can considerably reduce the numerical complexity in terms of the number of forward solutions as well as the size and complexity of the calculations required to compute the update direction from the Gauss-Newton equation. The approach makes use of “supershots,” in which several source positions are combined for simultaneous-source simulations. The resulting Hessian can be described as a low-rank approximation to the Gauss-Newton Hessian. The structure of the approximate Hessian facilitates a matrix-free direct solver for the Gauss-Newton equation, and the reduced memory complexity allows the use of a large number of unknowns. We studied the crosstalk introduced by the approximation and determined how the dissipative nature of marine electromagnetic field propagation reduces the impact of this noise. Inversion results from recent field data demonstrated the numerical and practical feasibility of the approach.
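The supershot construction can be illustrated with a small assumed example: random ±1 encoding weights combine many sources into a few simultaneous-source experiments, and the Gauss-Newton Hessian assembled from those supershots is an unbiased low-rank surrogate for the full-source Hessian, with crosstalk appearing as the residual. The per-source Jacobians below are random stand-ins for the true sensitivities.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_rec, n_par, n_super = 64, 32, 200, 8

# Per-source Jacobians of receiver data w.r.t. model parameters (stand-ins).
J = rng.standard_normal((n_src, n_rec, n_par))
H_full = sum(J[s].T @ J[s] for s in range(n_src))    # needs n_src forward solves

# Encode all sources into n_super supershots with random +/-1 weights.
W = rng.choice([-1.0, 1.0], size=(n_super, n_src))
J_super = np.einsum('ks,srp->krp', W, J)             # n_super forward solves
H_super = sum(J_super[k].T @ J_super[k] for k in range(n_super)) / n_super

# rank(H_super) <= n_super * n_rec; the residual is the crosstalk noise.
print("rel. Hessian error:",
      np.linalg.norm(H_super - H_full) / np.linalg.norm(H_full))
```

In the marine setting the abstract describes, dissipative field propagation damps long-range source interactions, which is why this crosstalk noise is tolerable in practice.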


2014 · Vol. 24 (08) · pp. 1440005
Author(s): Fengyu Cong, Guoxu Zhou, Piia Astikainen, Qibin Zhao, Qiang Wu, ...

Non-negative tensor factorization (NTF) has been successfully applied to analyze event-related potentials (ERPs) and has shown superiority in capturing multi-domain features. However, time-frequency representations of ERPs as higher-order tensors are usually large-scale, which limits the applicability of most tensor factorization algorithms. To overcome this issue, we introduce a non-negative canonical polyadic decomposition (NCPD) based on low-rank approximation (LRA) and hierarchical alternating least squares (HALS) techniques. We applied NCPD (LRAHALS and benchmark HALS) and CPD to extract multi-domain features of a visual ERP. The features and components extracted by LRAHALS NCPD and HALS NCPD were very similar, but LRAHALS NCPD was 70 times faster than HALS NCPD. Moreover, the desired multi-domain feature of the ERP extracted by NCPD showed a significant group difference (control versus depressed participants) and a difference in emotion processing (fearful versus happy faces). This was more informative than CPD, which revealed only a group difference.
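A column-wise HALS update for non-negative CP of a three-way (e.g. channel × frequency × time) tensor can be sketched as follows. This is an assumed minimal illustration of the update rule only; the paper's LRA acceleration additionally replaces the raw tensor with a compressed low-rank surrogate before iterating.

```python
import numpy as np
from scipy.linalg import khatri_rao

def ncpd_hals(X, rank, n_iter=200, eps=1e-12, seed=0):
    """Non-negative CP of a 3-way tensor via hierarchical ALS (minimal sketch)."""
    rng = np.random.default_rng(seed)
    dims = X.shape
    A = [rng.random((d, rank)) for d in dims]              # non-negative factors
    unfold = lambda n: np.moveaxis(X, n, 0).reshape(dims[n], -1)
    for _ in range(n_iter):
        for n in range(3):
            B, C = [A[m] for m in range(3) if m != n]      # other modes, in order
            K = khatri_rao(B, C)                           # matches C-order unfolding
            W = unfold(n) @ K                              # MTTKRP
            V = (B.T @ B) * (C.T @ C)                      # Gram (Hadamard product)
            for r in range(rank):                          # column-wise HALS update
                step = A[n][:, r] + (W[:, r] - A[n] @ V[:, r]) / max(V[r, r], eps)
                A[n][:, r] = np.maximum(step, 0.0)
    return A

# Example on a random non-negative tensor.
X = np.random.default_rng(1).random((16, 20, 30))
print([a.shape for a in ncpd_hals(X, rank=4)])
```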


2014 · Vol. 2014 · pp. 1-11
Author(s): Jinjiang Li, Mengjun Li, Hui Fan

Existing image inpainting algorithms based on low-rank matrix approximation are not suitable for complex, large-scale damaged texture images. An inpainting algorithm based on low-rank approximation and texture direction is proposed in this paper. First, we decompose the image using a low-rank approximation method. Then the area to be repaired is interpolated by a level set algorithm, and a new image is reconstructed from the boundary values of the level set. To obtain a better restoration effect, we iterate the low-rank decomposition and level set interpolation. Taking into account the impact of texture direction, we segment the texture and perform the low-rank decomposition along the texture direction. Experimental results show that the new algorithm recovers texture while maintaining the overall consistency of the structure, and can be used to repair large-scale damaged images.
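The low-rank decomposition step can be sketched with a truncated SVD, splitting the image into a smooth low-rank structure layer and a texture residual that the level-set interpolation and texture-direction steps then operate on. This is an assumed minimal sketch, not the paper's full iterative pipeline:

```python
import numpy as np

def low_rank_split(img, rank):
    """Split a grayscale image into low-rank structure and texture residual."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    structure = U[:, :rank] * s[:rank] @ Vt[:rank]
    return structure, img - structure

img = np.random.default_rng(0).random((128, 128))   # stand-in for a real image
structure, texture = low_rank_split(img, rank=10)
print("texture energy:", np.linalg.norm(texture) / np.linalg.norm(img))
```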


2021 · Vol. 2021 · pp. 1-11
Author(s): Ling Wang, Hongqiao Wang, Guangyuan Fu

Extensions of kernel methods to class imbalance problems have been extensively studied. Although they cope well with nonlinear problems, their high computation and memory costs severely limit their application to real-world imbalanced tasks. The Nyström method is an effective technique for scaling kernel methods. However, the standard Nyström method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi-Nyström method based on mixtures of Nyström approximations to avoid the explosion in the size of the subkernel matrices; the optimization of the mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield a more accurate low-rank approximation. Moreover, we select subsets of landmark points according to the imbalance distribution to reduce the model's sensitivity to skewness. We also provide a kernel stability analysis of our method and show that the model solution error is bounded by weighted approximation errors, which can help improve the learning process. Extensive experiments on several large-scale datasets show that our method achieves higher classification accuracy and a dramatic speedup of MKL algorithms.
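A single Nyström approximation, the building block that the multi-Nyström method mixes, can be sketched as follows. The RBF kernel, uniform landmark sampling, and sizes are assumptions for illustration; the paper instead draws landmark subsets according to the class-imbalance distribution and learns the mixture weights via MKL.

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
m = 100                                            # number of landmark points

idx = rng.choice(len(X), size=m, replace=False)    # uniform landmark sampling
C = rbf(X, X[idx])                                 # n x m cross-kernel
W = C[idx]                                         # m x m landmark kernel
K_nys = C @ np.linalg.pinv(W) @ C.T                # rank-<=m approximation of K

K = rbf(X, X)
print("rel. error:", np.linalg.norm(K_nys - K) / np.linalg.norm(K))
```

Only C and W ever need to be formed, so memory grows with n·m rather than n², which is the efficiency the abstract trades against approximation accuracy.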


Author(s): Yuchen Guo, Guiguang Ding, Jungong Han, Hang Shao, Xin Lou, ...

Zero-shot learning (ZSL) is a recently emerging research topic that aims to build classification models for unseen classes using knowledge from auxiliary seen classes. Though many ZSL works have shown promising results on small-scale datasets by utilizing a bilinear compatibility function, ZSL performance on large-scale datasets with many classes (say, ImageNet) is still unsatisfactory. We argue that the bilinear compatibility function is a low-rank approximation of the true compatibility function, and thus not expressive enough, especially when there are a large number of classes, because of the rank limitation. To address this issue, we propose a novel approach, termed High-rank Deep Embedding Networks (GREEN), for ZSL with many classes. In particular, we propose a feature-dependent mixture of softmaxes as the image-class compatibility function; it is a simple extension of the bilinear compatibility function but yields much better results. It utilizes a mixture of non-linear transformations with feature-dependent latent variables to approximate the true function in a high-rank way, which makes GREEN more expressive. Experiments on several datasets including ImageNet demonstrate that GREEN significantly outperforms the state-of-the-art approaches.
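The contrast between the bilinear compatibility and a feature-dependent mixture of softmaxes can be sketched as below. Shapes, the tanh non-linearity, and the gating layer are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
d_img, d_cls, n_cls, n_mix = 512, 300, 1000, 4

x = rng.standard_normal(d_img)                    # image feature
E = rng.standard_normal((n_cls, d_cls))           # class (semantic) embeddings

# Bilinear compatibility: a single map, rank-limited by min(d_img, d_cls).
W = rng.standard_normal((d_img, d_cls)) * 0.01
p_bilinear = softmax(E @ (W.T @ x))

# Mixture of softmaxes: K non-linear projections, gated per input feature.
Wk = rng.standard_normal((n_mix, d_img, d_cls)) * 0.01
G = rng.standard_normal((n_mix, d_img)) * 0.01
pi = softmax(G @ x)                               # feature-dependent weights
p_mix = sum(pi[k] * softmax(E @ np.tanh(Wk[k].T @ x)) for k in range(n_mix))

print(p_bilinear.argmax(), p_mix.argmax(), float(p_mix.sum()))
```

Because the mixture is a convex combination of several non-linear softmax outputs, the resulting score matrix is no longer constrained to the rank of a single linear map, which is the high-rank property the approach relies on.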

