Inertia-Constrained Pixel-by-Pixel Nonnegative Matrix Factorisation: A Hyperspectral Unmixing Method Dealing with Intra-Class Variability

2018 · Vol. 10 (11) · pp. 1706
Author(s): Charlotte Revel, Yannick Deville, Véronique Achard, Xavier Briottet, Christiane Weber

Blind source separation is a common processing tool to analyse the constitution of pixels of hyperspectral images. Such methods usually assume that, for each class of materials, the pure pixel spectra (endmembers) are the same throughout the image. In the framework of remote sensing, such an assumption is no longer valid in the presence of intra-class variability due to illumination conditions, weathering, slight variations of the pure materials, etc. In this paper, we first describe the results of investigations highlighting the intra-class variability measured in real images. Considering these results, a new formulation of the linear mixing model is presented, leading to two new methods. Unconstrained Pixel-by-pixel NMF (UP-NMF) is a new blind source separation method, based on a linear mixing model, that can deal with intra-class variability. To overcome the limitations of UP-NMF, an extended method is also proposed, named Inertia-constrained Pixel-by-pixel NMF (IP-NMF). For each sensed spectrum, these extended versions of NMF extract a corresponding set of source spectra. In IP-NMF, a constraint limits the spread of each source's estimates. The proposed methods are first tested on a semi-synthetic data set built from spectra extracted from a real hyperspectral image and then numerically mixed, demonstrating the benefit of our methods under realistic source variability. Finally, IP-NMF is tested on a real data set and is shown to yield better performance than state-of-the-art methods.
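As a rough, hypothetical illustration of the idea (not the authors' exact UP-NMF/IP-NMF updates), the sketch below fits one endmember matrix per pixel by projected gradient descent and adds a quadratic penalty, with an assumed weight `mu`, that pulls each pixel's endmembers toward their image-wide mean, limiting the spread of each source's estimates.

```python
import numpy as np

def pixelwise_nmf_sketch(X, n_sources, n_iter=200, lr=1e-3, mu=0.1, seed=0):
    """Toy pixel-by-pixel NMF with an inertia-like penalty (conceptual sketch).

    X: (n_pixels, n_bands) nonnegative observed spectra.
    Returns per-pixel abundances A and per-pixel endmember matrices S.
    """
    rng = np.random.default_rng(seed)
    n_pix, n_bands = X.shape
    A = rng.random((n_pix, n_sources))            # abundances, one row per pixel
    S = rng.random((n_pix, n_sources, n_bands))   # one endmember set per pixel

    for _ in range(n_iter):
        S_mean = S.mean(axis=0)                   # image-wide mean endmembers
        for p in range(n_pix):
            resid = A[p] @ S[p] - X[p]            # reconstruction error, (n_bands,)
            grad_A = S[p] @ resid                 # gradient of 0.5*||A S - x||^2 w.r.t. A
            grad_S = np.outer(A[p], resid) + mu * (S[p] - S_mean)  # + spread penalty
            A[p] = np.maximum(A[p] - lr * grad_A, 0.0)   # projected (nonnegative) step
            S[p] = np.maximum(S[p] - lr * grad_S, 0.0)
    return A, S
```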

2010 · pp. 186-206
Author(s): Saeid Sanei, Bahador Makkiabadi

Tensor factorization (TF) is introduced as a powerful tool for solving multi-way problems. As an effective and major application of this technique, the separation of sound sources, particularly speech, from their convolutive mixtures is described and the results are demonstrated. The method is flexible and can easily incorporate all possible parameters or factors into the separation formulation. As a consequence, fewer assumptions (such as uncorrelatedness and independence) are required. The new formulation allows a further degree of freedom over the original parallel factor analysis (PARAFAC) problem, with which the scaling and permutation ambiguities of frequency-domain blind source separation (BSS) can be resolved. Based on the results of experiments using real data in a simulated medium, it is concluded that both objective and subjective results are improved when the proposed algorithm is used, compared to conventional frequency-domain BSS methods.
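For orientation, the core building block, a three-way CP/PARAFAC decomposition fitted by alternating least squares, can be sketched in plain NumPy as below; the chapter's convolutive BSS formulation extends this with additional factors and constraints, so this is only a baseline sketch.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (m x R) and V (n x R) -> (m*n x R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(T, rank, n_iter=100, seed=0):
    """Alternating least squares for a 3-way CP/PARAFAC model T ~ sum_r a_r o b_r o c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode unfoldings consistent with C-order reshaping.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
    return A, B, C
```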


Geophysics · 2006 · Vol. 71 (5) · pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving the regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip-limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those obtained by conventional datuming.
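To make the least-squares step concrete, the toy sketch below forms the weighted, damped normal equations for a small dense operator and optionally keeps only a few diagonals of the Hessian, mimicking the cheap banded approximation; the actual method works with wavefield-extrapolation operators rather than explicit dense matrices, so this is illustrative only.

```python
import numpy as np

def regularize_datum_sketch(A, d, w, eps=1e-2, n_diags=None):
    """Weighted, damped least squares m = (A^H W A + eps I)^-1 A^H W d (toy sketch).

    A: forward (extrapolation) operator, d: recorded data, w: data weights.
    If n_diags is given, only that many off-diagonals of the Hessian are kept,
    mimicking a banded approximation of the full operator.
    """
    W = np.diag(w)
    H = A.conj().T @ W @ A                      # Hessian of the weighted misfit
    if n_diags is not None:                     # banded approximation of H
        idx = np.arange(H.shape[0])
        H = np.where(np.abs(idx[:, None] - idx[None, :]) <= n_diags, H, 0.0)
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), rhs)
```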


2021 · Vol. 2
Author(s): Chengjie Li, Lidong Zhu, Zhongqiang Luo, Zhen Zhang, Yilun Liu, ...

In space-based AIS (Automatic Identification System), the high orbit and wide coverage of the satellite mean that many self-organizing communities lie within its observation range, so signal collisions are inevitable, which reduces the probability of ship detection. In this paper, to improve system processing power and security, we exploit the ability of neural networks to efficiently find near-optimal solutions and propose a method that combines the blind source separation problem with a BP (backpropagation) neural network. A suitably generated data set is used to train the network, thereby automatically producing a traditional blind signal separation algorithm with a more stable separation performance. Finally, simulation results of combining the blind source separation problem with the BP neural network show that the performance and stability of the space-based AIS can be effectively improved.
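As a purely hypothetical illustration of the training idea (not the paper's network, data set, or signal model), the sketch below trains a small backpropagation MLP, via scikit-learn's MLPRegressor, to map synthetic linear mixtures back to their sources.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: BPSK-like sources mixed by a random matrix plus noise.
rng = np.random.default_rng(0)
n_samples, n_src = 20000, 2
S = np.sign(rng.standard_normal((n_samples, n_src)))       # sources
A = rng.standard_normal((n_src, n_src))                    # mixing matrix
X = S @ A.T + 0.05 * rng.standard_normal((n_samples, n_src))

# Backpropagation-trained MLP learns the mixture-to-source mapping.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
net.fit(X[:15000], S[:15000])                               # supervised separation
print("held-out R^2:", net.score(X[15000:], S[15000:]))
```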


2020 · Vol. 223 (3) · pp. 1565-1583
Author(s): Hoël Seillé, Gerhard Visser

SUMMARY Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimate the subsurface electrical conductivity distribution and the associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions, computed using a dimensionality discrepancy error model derived by a machine learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known geometrical dimensionality properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment is used to illustrate the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow performs better in 80 per cent of the cases than the existing practice of using constant errors. Finally, our workflow is benchmarked on real data acquired in Queensland, Australia, where it accurately detects the depth to basement.
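Conceptually, a site-specific likelihood of this kind can be pictured as a Gaussian likelihood whose data errors are inflated by a discrepancy term; in the toy sketch below, `sigma_dim` stands in for the machine-learning-derived dimensionality discrepancy error and is an assumption, not the authors' implementation.

```python
import numpy as np

def site_log_likelihood(d_obs, d_pred, sigma_meas, sigma_dim):
    """Gaussian log-likelihood with a per-datum discrepancy error added in
    quadrature to the measurement error (conceptual sketch only)."""
    var = sigma_meas**2 + sigma_dim**2      # inflated variance per datum
    r = d_obs - d_pred                      # misfit between data and 1-D forward response
    return -0.5 * np.sum(r**2 / var + np.log(2.0 * np.pi * var))
```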


2019 · Vol. 11 (3) · pp. 249
Author(s): Pejman Rasti, Ali Ahmad, Salma Samiei, Etienne Belin, David Rousseau

In this article, we assess the value of the recently introduced multiscale scattering transform for texture classification, applied for the first time in plant science. The scattering transform is shown to outperform monoscale approaches (gray-level co-occurrence matrices, local binary patterns) as well as multiscale approaches (wavelet decomposition) that do not include combinatory steps. The data-set-size regime in which the scattering transform also outperforms a standard CNN architecture is evaluated (10^4 instances). An approach to optimally designing the scattering transform based on energy contrast is provided. This is illustrated on the hard and open problem of weed detection in high-density crops viewed from the top in intensity images. An annotated synthetic data set, available in the form of a data challenge, and a simulator are provided for reproducible science (https://uabox.univ-angers.fr/index.php/s/iuj0knyzOUgsUV9). The scattering transform, trained only on synthetic data, reaches an accuracy of 85% when tested on real data.
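A hypothetical pipeline in this spirit, assuming the kymatio package for the scattering transform and using placeholder patches and labels rather than the paper's data, might look like the sketch below; it is not the authors' design.

```python
import numpy as np
from kymatio.numpy import Scattering2D          # assumes the kymatio package
from sklearn.linear_model import LogisticRegression

# Placeholder texture patches and binary (e.g. weed/crop) labels.
rng = np.random.default_rng(0)
X = rng.random((200, 32, 32)).astype(np.float32)
y = rng.integers(0, 2, size=200)

# Scattering coefficients, spatially averaged, as multiscale texture features.
scattering = Scattering2D(J=3, shape=(32, 32))
feats = np.array([scattering(img).mean(axis=(-2, -1)).ravel() for img in X])

# Simple linear classifier on top of the scattering features.
clf = LogisticRegression(max_iter=1000).fit(feats, y)
print("training accuracy:", clf.score(feats, y))
```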


Geophysics · 1993 · Vol. 58 (1) · pp. 91-100
Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
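The back-projection step is analogous to a small traveltime-tomography update; the toy sketch below, with a hypothetical ray-path matrix `L` and residuals `dt` (not the authors' wavefront-based implementation), solves damped normal equations for slowness perturbations.

```python
import numpy as np

def backproject_residuals(L, dt, damping=1e-3):
    """Tomography-style back-projection (toy sketch).

    L[i, j]: path length of ray i in model cell j.
    dt[i]:   residual derived from the moveout analysis for ray i.
    Returns damped least-squares slowness perturbations per cell.
    """
    n_cells = L.shape[1]
    lhs = L.T @ L + damping * np.eye(n_cells)   # damped normal equations
    return np.linalg.solve(lhs, L.T @ dt)
```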


Geophysics · 2014 · Vol. 79 (1) · pp. M1-M10
Author(s): Leonardo Azevedo, Ruben Nunes, Pedro Correia, Amílcar Soares, Luis Guerreiro, ...

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allowed similar models to be distinguished from variable ones and the convergence of the inverted models toward the real impedance models to be assessed. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability between iterations of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
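A conceptual sketch of the projection step, using scikit-learn's MDS on a precomputed distance matrix between hypothetical simulated impedance models and a reference model (not the paper's code or data), is given below.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

# Placeholder models: 30 simulated impedance traces plus one reference model.
rng = np.random.default_rng(0)
models = rng.random((30, 500))
reference = rng.random((1, 500))            # "true" impedance model (synthetic case)
all_models = np.vstack([models, reference])

# Project all models into a 2-D metric space from their pairwise distances.
D = squareform(pdist(all_models, metric='euclidean'))
coords = MDS(n_components=2, dissimilarity='precomputed', random_state=0).fit_transform(D)
# coords[-1] is the reference; distances of coords[:-1] to it indicate convergence.
```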

