Implicitly localized ensemble observational update to cope with nonlocal/nonlinear data constraints in large-size inverse problems

2019 ◽  
Author(s):  
Jean-Michel Brankart

Many practical applications involve the resolution of large-size inverse problems, without providing more than a moderate-size sample to describe the prior probability distribution. In this situation, additional information must be supplied to augment the effective dimension of the available sample, for instance using a covariance localization approach. In this study, it is suggested that covariance localization can be efficiently applied to an approximate variant of the Metropolis/Hastings algorithm, by modulating the ensemble members by the large-scale patterns of other members. Modulation is used to design a (global) proposal probability distribution (i) that can be sampled at a very low cost, (ii) that automatically accounts for a localized prior covariance, and (iii) that leads to an efficient sampler for the augmented prior probability distribution or for the posterior probability distribution. The resulting algorithm is applied to an academic example, illustrating (i) the effectiveness of covariance localization, (ii) the ability of the method to deal with nonlocal/nonlinear observation operators and non-Gaussian observation errors, (iii) the reliability, resolution and optimality of the updated ensemble, using probabilistic scores appropriate to a non-Gaussian posterior distribution, and (iv) the scalability of the algorithm as a function of the size of the problem. The codes are openly available from github.com/brankart/ensdam.
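As a rough illustration of the modulation idea described above, the Python sketch below builds an augmented ensemble by multiplying each member's anomaly by the large-scale pattern of the other members, so that the sample covariance of the augmented set is implicitly localized. It is not taken from the ensdam package; the running-mean smoother, the normalization, and all variable names are illustrative assumptions.

```python
import numpy as np

def smooth(field, width):
    """Crude large-scale filter: a running mean over `width` grid points
    (assumed 1-D domain; the actual large-scale patterns may be defined differently)."""
    kernel = np.ones(width) / width
    return np.convolve(field, kernel, mode="same")

def modulated_ensemble(ensemble, width=20):
    """Augment an (m, n) ensemble (m members, n grid points) by modulating each
    anomaly with the normalized large-scale pattern of every other member.
    The covariance of the augmented anomalies is approximately the Schur
    (element-wise) product of the sample covariance with the covariance of the
    large-scale patterns, i.e. an implicitly localized prior covariance."""
    mean = ensemble.mean(axis=0)
    anomalies = ensemble - mean
    large_scale = np.array([smooth(a, width) for a in anomalies])
    large_scale /= large_scale.std(axis=0) + 1e-12   # unit variance across members
    m = len(ensemble)
    augmented = np.array([anomalies[i] * large_scale[j]
                          for i in range(m) for j in range(m) if i != j])
    # rescale so the augmented set keeps the original local spread
    augmented *= anomalies.std(axis=0) / (augmented.std(axis=0) + 1e-12)
    return mean + augmented

rng = np.random.default_rng(0)
prior = rng.standard_normal((10, 200))      # 10 members, 200 grid points
augmented = modulated_ensemble(prior)       # 90 modulated members
print(augmented.shape)
```

Because the augmented members are cheap pointwise products of existing fields, such a construction can serve as an inexpensive proposal distribution of the kind described in the abstract.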


Geophysics ◽  
1994 ◽  
Vol 59 (5) ◽  
pp. 818-829 ◽  
Author(s):  
John C. VanDecar ◽  
Roel Snieder

It is not uncommon now for geophysical inverse problems to be parameterized by 10⁴ to 10⁵ unknowns associated with upwards of 10⁶ to 10⁷ data constraints. The matrix problem defining the linearization of such a system (e.g., Am = b) is usually solved with a least-squares criterion (minimizing ‖Am − b‖²). The size of the matrix, however, discourages the direct solution of the system, and researchers often turn to iterative techniques such as the method of conjugate gradients to obtain an estimate of the least-squares solution. These iterative methods take advantage of the sparseness of A, which often has as few as 2–3 percent of its elements nonzero, and do not require the calculation (or storage) of the matrix AᵀA. Although there are usually many more data constraints than unknowns, these problems are, in general, underdetermined and therefore require some sort of regularization to obtain a solution. When the regularization is simple damping, the conjugate gradients method tends to converge in relatively few iterations. However, when derivative-type regularization is applied (first-derivative constraints to obtain the flattest model that fits the data; second-derivative to obtain the smoothest), the convergence of parts of the solution may be drastically inhibited. In a series of 1-D examples and a synthetic 2-D crosshole tomography example, we demonstrate this problem and also suggest a method of accelerating the convergence through the preconditioning of the conjugate gradient search directions. We derive a 1-D preconditioning operator for the case of first-derivative regularization using a WKBJ approximation. We have found that preconditioning can reduce the number of iterations necessary to obtain satisfactory convergence by up to an order of magnitude. The conclusions we present are also relevant to Bayesian inversion, where a smoothness constraint is imposed through an a priori covariance of the model.
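To make the preconditioning argument concrete, here is a small Python/SciPy sketch that solves a first-derivative-regularized least-squares problem with conjugate gradients, with and without a preconditioner. The incomplete-LU preconditioner used here is a generic stand-in, not the WKBJ-derived 1-D operator of the paper, and the problem sizes, sparsity, and damping weight are arbitrary assumptions.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)

# Small synthetic problem: sparse forward operator A (n_data x n_model),
# first-difference regularization D, damping weight lam (all illustrative).
n_model, n_data, lam = 200, 600, 1.0
A = sp.random(n_data, n_model, density=0.03, random_state=1, format="csr")
D = sp.diags([-np.ones(n_model - 1), np.ones(n_model - 1)], [0, 1],
             shape=(n_model - 1, n_model), format="csr")
b = rng.standard_normal(n_data)

# Normal equations of the regularized least-squares problem
#   minimize ||A m - b||^2 + lam * ||D m||^2
N = (A.T @ A + lam * (D.T @ D)).tocsc()
rhs = A.T @ b

def run_cg(M=None, label=""):
    iters = []
    m, info = spla.cg(N, rhs, M=M, callback=lambda xk: iters.append(1))
    print(f"{label}: {len(iters)} iterations (info={info})")
    return m

# Plain conjugate gradients on the derivative-regularized system
run_cg(label="unpreconditioned")

# A generic incomplete-LU preconditioner (stand-in for the paper's WKBJ-based
# 1-D operator, which is specific to first-derivative regularization)
ilu = spla.spilu(N)
M = spla.LinearOperator(N.shape, matvec=ilu.solve)
run_cg(M=M, label="ILU-preconditioned")
```

The point of the comparison is only that a preconditioner adapted to the derivative-type regularization can sharply reduce the iteration count; the paper's own operator is derived analytically for the 1-D first-derivative case.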


2002 ◽  
Vol 50 (1) ◽  
pp. 50-58 ◽  
Author(s):  
R. Zaridze ◽  
G. Bit-Babik ◽  
K. Tavzarashvili ◽  
D.P. Economou ◽  
N.K. Uzunoglu

Author(s):  
R. A. Ricks ◽  
Angus J. Porter

During a recent investigation concerning the growth of γ' precipitates in nickel-base superalloys, it was observed that the sign of the lattice mismatch between the coherent particles and the matrix (γ) was important in determining the ease with which matrix dislocations could be incorporated into the interface to relieve coherency strains. Thus alloys with a negative misfit (i.e., the γ' lattice parameter was smaller than that of the matrix) could lose coherency easily, and γ/γ' interfaces would exhibit regularly spaced networks of dislocations, as shown in figure 1 for the case of Nimonic 115 (misfit = -0.15%). In contrast, γ' particles in alloys with a positive misfit could grow to a large size without showing any such dislocation arrangements in the interface, indicating that coherency had not been lost. Figure 2 depicts a large γ' precipitate in Nimonic 80A (misfit = +0.32%) showing few interfacial dislocations.


Author(s):  
H. Weiland ◽  
D. P. Field

Recent advances in the automatic indexing of backscatter Kikuchi diffraction patterns on the scanning electron microscope (SEM) have resulted in the development of a new type of microscopy. The ability to obtain statistically relevant information on the spatial distribution of crystallite orientations is giving rise to new insight into polycrystalline microstructures and their relation to materials properties. A limitation of the technique in the SEM is that the spatial resolution of the measurement is restricted by the relatively large size of the electron beam in relation to various microstructural features. Typically the spatial resolution in the SEM is limited to about half a micron or greater. Heavily worked structures exhibit microstructural features much finer than this and require resolution on the order of nanometers for accurate characterization. Transmission electron microscope (TEM) techniques offer sufficient resolution to investigate heavily worked crystalline materials. Crystal lattice orientation determination from Kikuchi diffraction patterns in the TEM (Figure 1) requires knowledge of the relative positions of at least three non-parallel Kikuchi line pairs in relation to the crystallite and the electron beam.
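As a hedged illustration of the orientation-determination step mentioned above, the sketch below recovers a crystal orientation as the best-fit rotation (Kabsch/SVD) mapping three indexed, non-parallel plane normals onto band normals expressed in the microscope frame. It is a generic textbook computation, not the authors' indexing procedure; the plane choices and the synthetic "measured" normals are made up for the example.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def orientation_from_normals(crystal_normals, measured_normals):
    """Best-fit rotation (Kabsch/SVD) mapping indexed crystal-frame plane normals
    (rows) onto the plane normals measured from the Kikuchi bands in the
    microscope frame (rows); both sets are unit vectors."""
    H = crystal_normals.T @ measured_normals
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # measured ≈ R @ crystal

def axis_angle(axis, theta):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    a = unit(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Three indexed, non-parallel plane normals of a cubic crystal (hypothetical choice)
crystal = np.array([unit([1, 1, 1]), unit([2, 0, 0]), unit([0, 2, 2])])

# Stand-in for the measured band normals: the same planes after a "true" rotation
R_true = axis_angle([1, 2, 3], 0.4)
measured = crystal @ R_true.T

R_est = orientation_from_normals(crystal, measured)
print(np.allclose(R_est, R_true))               # True: the orientation is recovered
```

With only three line pairs the fit is exactly determined; in practice more bands are measured and the same least-squares rotation fit averages out measurement error.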


Author(s):  
Patricia G. Calarco ◽  
Margaret C. Siebert

Visualization of preimplantation mammalian embryos by electron microscopy is difficult due to the large size of their cells, their relative lack of internal structure, and their highly hydrated cytoplasm. For example, the fertilized egg of the mouse is a single cell of approximately 75 μm in diameter with little organized cytoskeleton and a paucity of organelles such as endoplasmic reticulum (ER) and Golgi material. Thus, techniques that work well on tissues or cell lines are often not adaptable to embryos at either the LM or EM level. Over several years we have perfected techniques for visualization of mammalian embryos by LM, TEM, and SEM, and for the pre-embedding localization of antigens. Post-embedding antigen localization in thin sections of mouse oocytes and embryos has presented a more difficult challenge and has been explored in LR White, LR Gold, soft EPON (after etching of sections), and Lowicryl K4M. To date, antigen localization has only been achieved in Lowicryl-embedded material, although even with polymerization at -40°C, the small ER vesicles characteristic of embryos are unrecognizable.


Author(s):  
K. Ohi ◽  
M. Mizuno ◽  
T. Kasai ◽  
Y. Ohkura ◽  
K. Mizuno ◽  
...  

In recent years, with electron microscopes coming into wider use, their installation environments do not always allow them to deliver their full performance. Environmental factors include air-conditioners, magnetic fields, and vibrations. We report a jointly developed, entirely new vibration isolator which is effective against the vibrations transmitted from the floor. Conventionally, large-sized vibration isolators requiring the digging of a pit have been used. These vibration isolators, however, present problems of installation and maintenance because of their large size. Thus, we intended to make a vibration isolator which (1) eliminates the need to modify the installation room, (2) requires no maintenance, and (3) is compact in size and easily installable.


Methodology ◽  
2019 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Rodrigo Ferrer ◽  
Antonio Pardo

Abstract. In a recent paper, Ferrer and Pardo (2014) tested several distribution-based methods designed to assess when test scores obtained before and after an intervention reflect a statistically reliable change. However, we still do not know how these methods perform with respect to false negatives. For this purpose, we simulated change scenarios (different effect sizes in a pre-post design) with distributions of different shapes and with different sample sizes. For each simulated scenario we generated 1,000 samples, and in each sample we recorded the false-negative rate of the five distribution-based methods that had performed best with respect to false positives. Our results reveal unacceptable rates of false negatives even for very large effects, ranging from 31.8% in an optimistic scenario (effect size of 2.0 and a normal distribution) to 99.9% in the worst scenario (effect size of 0.2 and a highly skewed distribution). Our results therefore suggest that the widely used distribution-based methods must be applied with caution in a clinical context, because they need huge effect sizes to detect a true change. We also offer some considerations regarding the effect size and the commonly used cut-off points that allow these estimates to be made more precisely.
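As an illustration of how such false-negative rates can be estimated, the Python sketch below simulates a pre-post design with a known true change and counts how often the Jacobson-Truax reliable change index fails to flag it. The RCI is only one representative distribution-based method, and the reliability, sample size, and effect-size convention are assumptions, so the resulting rate will not reproduce the paper's exact figures.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters, not the paper's exact simulation design
n_subjects, reliability, effect_size, n_samples = 50, 0.80, 2.0, 1000
true_sd = np.sqrt(reliability)          # so the observed pre-test SD is 1.0
sem = np.sqrt(1.0 - reliability)        # standard error of measurement
criterion = 1.96 * np.sqrt(2.0) * sem   # Jacobson-Truax reliable-change cut-off

false_negative_rates = []
for _ in range(n_samples):
    true_score = rng.normal(0.0, true_sd, n_subjects)
    pre = true_score + rng.normal(0.0, sem, n_subjects)
    # every subject truly improves by `effect_size` observed-SD units
    post = true_score + effect_size + rng.normal(0.0, sem, n_subjects)
    change = post - pre
    # false negative: a truly changed subject whose change is not flagged as reliable
    false_negative_rates.append(np.mean(np.abs(change) < criterion))

print(f"mean false-negative rate: {np.mean(false_negative_rates):.3f}")
```

Varying the effect size, the test reliability, and the shape of the score distributions in such a simulation is what drives the wide range of false-negative rates reported in the abstract.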

