diagonal scaling
Recently Published Documents

TOTAL DOCUMENTS: 40 (five years: 7)
H-INDEX: 10 (five years: 1)

Author(s):  
Tomonari Sei

Abstract
It is shown that for any given multi-dimensional probability distribution with regularity conditions, there exists a unique coordinate-wise transformation such that the transformed distribution satisfies a Stein-type identity. A sufficient condition for existence, referred to as copositivity of distributions, is given. The proof is based on an energy minimization problem over a totally geodesic subset of the Wasserstein space. The result can be considered an alternative to Sklar's theorem regarding copulas, and is also interpreted as a generalization of a diagonal scaling theorem. The Stein-type identity is applied to a rating problem for multivariate data. A numerical procedure for piecewise uniform densities is provided. Some open problems are also discussed.
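The diagonal scaling theorem that this result generalizes is, in its classical matrix form, Sinkhorn's theorem: any strictly positive matrix can be rescaled by two diagonal matrices to become doubly stochastic. A minimal sketch of that classical case (not the paper's coordinate-wise transformation, which operates on continuous distributions):

```python
import numpy as np

np.random.seed(0)

def sinkhorn_scale(A, iters=500):
    """Alternately rescale rows and columns of a positive matrix so
    that diag(r) @ A @ diag(c) becomes approximately doubly stochastic."""
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)        # enforce unit row sums
        c = 1.0 / (A.T @ r)      # enforce unit column sums
    return np.diag(r) @ A @ np.diag(c)

A = np.random.rand(4, 4) + 0.1   # strict positivity guarantees convergence
S = sinkhorn_scale(A)            # rows and columns of S each sum to ~1
```

The alternating row/column normalization converges for any strictly positive matrix; the paper's continuous analogue replaces the two diagonal factors with a coordinate-wise transformation of the distribution.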


Geophysics ◽  
2021 ◽  
pp. 1-63
Author(s):  
Hamideh Sanavi ◽  
Peyman P. Moghaddam ◽  
Felix J. Herrmann

We propose a true-amplitude solution to the seismic imaging problem. We derive a diagonal scaling approach for approximating the normal operator in the curvelet domain, based on the theorem that curvelets remain approximately invariant under the action of the normal operator. We use curvelets as essential tools for both approximation and inversion. We also exploit the theorem that the curvelet-domain approximation should be smooth in phase space by enforcing smoothness of the curvelet coefficients in the angle and space domains. We analyze our method using a reverse time migration-demigration code, simulating the acoustic wave equation on different synthetic models. Our method achieves good resolution of dipping reflectors, recovers true-amplitude reflectivity, and compensates for incomplete illumination in seismic images.
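The curvelet machinery itself cannot be reproduced in a few lines, but the core fitting step, approximating a normal operator by a diagonal scaling matched to its action on a reference vector, can be sketched. Here the normal operator is a stand-in SPD matrix rather than migration-demigration, and the reference vector plays the role of the migrated image; both names are illustrative assumptions:

```python
import numpy as np

np.random.seed(1)

# Stand-in "normal operator": a symmetric positive definite matrix.
# In the paper this is demigration followed by migration acting on
# curvelet coefficients; here it is just a matrix.
n = 50
B = np.random.rand(n, n)
N = B @ B.T + n * np.eye(n)

# Reference vector (playing the role of a migrated image).
r = np.linspace(1.0, 2.0, n)

# Diagonal fit: choose d so that diag(d) @ r matches N @ r exactly,
# i.e. d_i = (N r)_i / r_i. Smoothness of d would be enforced on top
# of this in the curvelet-domain scheme.
d = (N @ r) / r
```

By construction the diagonal scaling reproduces the operator's action on the reference vector; its quality elsewhere rests on the near-invariance of curvelets under the normal operator.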


2021 ◽  
Vol 13 (01) ◽  
pp. 2150013
Author(s):  
Songyang Hou ◽  
Xiwei Li ◽  
Dongdong Wang ◽  
Zhiwei Lin

A mid-node mass lumping scheme is proposed to formulate the lumped mass matrices of serendipity elements for accurate structural vibration analysis. Since the row-sum technique leads to unacceptable negative lumped mass components for serendipity elements, the diagonal scaling (HRZ) method is frequently employed to construct their lumped mass matrices. In this work, by introducing a lumped mass matrix template that includes the HRZ lumped mass matrix as a special case, an analytical frequency accuracy measure is derived with particular reference to the classical eight-node serendipity element. The theoretical results clearly reveal that the standard HRZ mass matrix does not offer the optimal frequency accuracy within the given lumped mass matrix template. By exploiting the non-negativity of the shape functions associated with the mid-nodes of serendipity elements, a mid-node lumped mass matrix (MNLM) formulation is then introduced for the mass lumping of serendipity elements without corner nodal mass components, which corresponds to the optimal frequency accuracy in the context of the given template. Both theoretical and numerical results demonstrate that the MNLM yields better frequency accuracy than the standard HRZ lumped mass matrix for structural vibration analysis.
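The HRZ (diagonal scaling) lumping referenced above keeps the diagonal of the consistent mass matrix and rescales it to preserve the element's total mass. A sketch on the 1D three-node quadratic element (a small stand-in; the paper's analysis concerns the eight-node serendipity element):

```python
import numpy as np

def hrz_lump(M_consistent, total_mass):
    """HRZ diagonal-scaling lumping: take the diagonal of the
    consistent mass matrix and rescale it so its entries sum to
    the element's total mass."""
    d = np.diag(M_consistent).copy()
    return d * (total_mass / d.sum())

# Consistent mass matrix of a 1D three-node quadratic element
# with unit density, area, and length (total mass = 1).
M = (1.0 / 30.0) * np.array([[ 4.0,  2.0, -1.0],
                             [ 2.0, 16.0,  2.0],
                             [-1.0,  2.0,  4.0]])
m_lumped = hrz_lump(M, total_mass=1.0)   # -> [1/6, 2/3, 1/6]
```

Because the consistent diagonal is positive, the scaled entries stay positive; for serendipity elements the row-sum alternative produces the negative corner masses the abstract mentions, which is why HRZ or the proposed MNLM is preferred there.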


Author(s):  
Kehelwala D. G. Maduranga ◽  
Kyle E. Helfrich ◽  
Qiang Ye

Recurrent neural networks (RNNs) have been successfully used on a wide range of sequential data problems. A well-known difficulty in using RNNs is the vanishing or exploding gradient problem. Recently, several RNN architectures have tried to mitigate this issue by maintaining an orthogonal or unitary recurrent weight matrix. One such architecture is the scaled Cayley orthogonal recurrent neural network (scoRNN), which parameterizes the orthogonal recurrent weight matrix through a scaled Cayley transform. This parametrization contains a diagonal scaling matrix consisting of positive or negative one entries that cannot be optimized by gradient descent. Thus the scaling matrix is fixed before training, and a hyperparameter is introduced to tune the matrix for each particular task. In this paper, we develop a unitary RNN architecture based on a complex scaled Cayley transform. Unlike the real orthogonal case, the transformation uses a diagonal scaling matrix with entries on the complex unit circle, which can be optimized by gradient descent and no longer requires tuning a hyperparameter. We also provide an analysis of a potential issue with the modReLU activation function, which is used in our work and in several other unitary RNNs. In the experiments conducted, the scaled Cayley unitary recurrent neural network (scuRNN) achieves comparable or better results than scoRNN and other unitary RNNs without fixing the scaling matrix.
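The scaled Cayley transform described above maps a skew-symmetric (or skew-Hermitian) matrix A and a diagonal scaling D to an orthogonal (or unitary) recurrent weight matrix via W = (I + A)^{-1}(I - A)D. A NumPy sketch of both the scoRNN-style real case and the scuRNN-style complex case (matrix sizes and the particular D are illustrative):

```python
import numpy as np

np.random.seed(0)
n = 6
I = np.eye(n)

# scoRNN-style real case: skew-symmetric A, diagonal D of +/-1
# entries (fixed before training, tuned as a hyperparameter).
X = np.random.randn(n, n)
A = X - X.T                            # A^T = -A
D = np.diag([1, 1, 1, -1, -1, -1])     # example +/-1 scaling
W = np.linalg.solve(I + A, I - A) @ D  # orthogonal: W W^T = I

# scuRNN-style complex case: skew-Hermitian A, diagonal D with
# entries on the complex unit circle, so D itself becomes a
# gradient-trainable parameter (via the angles theta).
Z = np.random.randn(n, n) + 1j * np.random.randn(n, n)
Ah = Z - Z.conj().T                    # A^H = -A
theta = np.random.rand(n) * 2 * np.pi
Dc = np.diag(np.exp(1j * theta))
Wu = np.linalg.solve(I + Ah, I - Ah) @ Dc   # unitary: Wu Wu^H = I
```

The Cayley factor (I + A)^{-1}(I - A) is orthogonal/unitary whenever A is skew-symmetric/skew-Hermitian, and multiplying by a diagonal of unit-modulus entries preserves that, so the recurrent weights stay exactly on the orthogonal/unitary manifold throughout training.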


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. S285-S299
Author(s):  
Nasser Kazemi

We have studied the preconditioned conjugate gradient (CG) algorithm in the context of shot-record extended model domain least-squares migration. The CG algorithm is a powerful iterative technique that can solve the least-squares migration problem efficiently; however, to see the merits of least-squares migration, one needs to run the algorithm for several iterations. Generally speaking, the convergence rate of the CG algorithm depends on the condition number of the operator. Preconditioners are a family of operators that are easy to build and invert. Proper preconditioners cluster the eigenvalues of the original operator and hence reduce the condition number of the operator one wishes to invert. Accordingly, preconditioning the operator can, in theory, improve the convergence rate of the algorithm. In least-squares migration, the diagonal scaling of the Hessian and the approximated inverse of the Hessian have proven to work well as preconditioners. We develop and apply two types of preconditioners for the shot-record extended model domain least-squares migration problem. The first preconditioner belongs to the diagonal scaling category; the second is a filter-based approach, which approximates the partial Hessian operators by local convolutional filters. The goal is to increase the convergence rate of the shot-record extended model domain least-squares migration using the reformulated cost function with a preconditioned operator. Experiments with a synthetic Sigsbee model and a real data example from the Gulf of Mexico (Mississippi Canyon data set) indicate that preconditioning the linear system of equations improves the convergence rate of the algorithm.
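The simplest member of the diagonal scaling family discussed above is the Jacobi preconditioner, which applies the inverse diagonal of the operator at each CG iteration. A self-contained dense sketch (the paper's operator is the extended-domain Hessian applied matrix-free; here it is a small SPD test matrix):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradients with a diagonal (Jacobi) preconditioner:
    z = M^{-1} r is a cheap elementwise scaling of the residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r               # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # preconditioned search direction
        rz = rz_new
    return x

np.random.seed(2)
n = 30
Q = np.random.randn(n, n)
A = Q @ Q.T + n * np.eye(n)          # SPD stand-in for the Hessian
b = np.random.randn(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

The preconditioner costs one elementwise multiply per iteration; its payoff grows with the spread of the operator's diagonal, which is exactly the situation diagonal scaling of the migration Hessian targets.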

