Potential of Using Phase Correlation in Distributed Scatterer InSAR Applied to Built Scenarios

2020 ◽  
Vol 12 (4) ◽  
pp. 686
Author(s):  
Guoqiang Shi ◽  
Peifeng Ma ◽  
Hui Lin ◽  
Bo Huang ◽  
Bowen Zhang ◽  
...  

The improved spatial resolution of Synthetic Aperture Radar (SAR) images from newly launched sensors has promoted more frequent use of distributed scatterer (DS) interferometry (DSI) in urban monitoring, in pursuit of sufficient and detailed measurements. However, the statistical methods commonly used to cluster homogeneous pixels from amplitude information are, first, computationally intensive; second, their necessity in highly coherent built scenarios has received little discussion in the literature. This paper explores the potential of using phase information to detect homogeneous pixels on built surfaces. We propose a simple phase-correlated pixel (PCP) clustering and introduce a coherence-weighted phase linking (WPL), i.e., PCPWPL, to enable faster interferometric phase denoising. Rather than relying on statistical tests of amplitude characteristics, we exploit vector correlation in the complex domain to identify PCPs with similar phase observations, thus avoiding the intensive hypothesis testing. A coherence-weighted phase linking is applied for DS phase reconstruction. The estimation of geophysical parameters, e.g., deformation, is completed using an integrated network of persistent scatterers (PS) and DS. The efficiency of the proposed method is illustrated by both synthetic and real-data experiments. The pros and cons of the proposed PCPWPL are analyzed in comparison with a conventional amplitude-based strategy on an X-band COSMO-SkyMed dataset. The results demonstrate that phase correlation is sufficient for DS monitoring in built scenarios, delivering an equivalent number of measurements at a lower computational cost.
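To make the core idea concrete, here is a minimal sketch of complex-domain vector correlation between a centre pixel and its neighbours, assuming a simple fixed threshold; the function names and the threshold value are illustrative and not the authors' implementation.

```python
import numpy as np

def phase_correlation(stack_a, stack_b):
    """Complex-domain correlation between two pixels' interferometric
    phase series (1-D complex arrays over N interferograms)."""
    num = np.abs(np.vdot(stack_a, stack_b))            # |sum conj(a) * b|
    den = np.linalg.norm(stack_a) * np.linalg.norm(stack_b)
    return num / den if den > 0 else 0.0

def find_pcp(center, neighbours, threshold=0.7):
    """Return indices of neighbouring pixels whose phase series is
    sufficiently correlated with the centre pixel (illustrative threshold)."""
    return [i for i, nb in enumerate(neighbours)
            if phase_correlation(center, nb) >= threshold]

# toy example: 20 interferograms, one centre pixel and 5 candidate neighbours
rng = np.random.default_rng(0)
center = np.exp(1j * rng.normal(0.0, 0.3, 20))
neighbours = [center * np.exp(1j * rng.normal(0.0, 0.2, 20)) for _ in range(5)]
print(find_pcp(center, neighbours))
```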

2021 ◽  
pp. 1-7
Author(s):  
Julian Wucherpfennig ◽  
Aya Kachi ◽  
Nils-Christian Bormann ◽  
Philipp Hunziker

Abstract Binary outcome models are frequently used in the social sciences and economics. However, such models are difficult to estimate with interdependent data structures, including spatial, temporal, and spatio-temporal autocorrelation, because the jointly determined error terms in the reduced-form specification are generally analytically intractable. To deal with this problem, simulation-based approaches have been proposed. However, these approaches (i) are computationally intensive and impractical for the sizable datasets common in contemporary research, and (ii) rarely address temporal interdependence. As a way forward, we demonstrate how to reduce the computational burden significantly by (i) introducing analytically tractable pseudo maximum likelihood estimators for latent binary choice models that exhibit interdependence across space and time and (ii) proposing an implementation strategy that increases computational efficiency considerably. Monte Carlo experiments show that our estimators recover the parameter values as well as commonly used estimation alternatives and require only a fraction of the computational cost.
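As a rough illustration of the analytically tractable pseudo-likelihood idea, the sketch below fits a spatial probit with a marginal pseudo log-likelihood; the weight matrix, the marginal-probability approximation, and all parameter values are assumptions for the example and not necessarily the authors' exact estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def spatial_probit_pseudo_loglik(params, y, X, W):
    """Negative marginal pseudo log-likelihood for y* = rho*W*y* + X*beta + eps,
    eps ~ N(0, I): only the univariate marginals of the reduced form are used,
    which keeps the objective analytically tractable."""
    rho, beta = params[0], params[1:]
    n = len(y)
    A_inv = np.linalg.inv(np.eye(n) - rho * W)       # reduced-form multiplier
    mu = A_inv @ (X @ beta)                          # marginal means
    sigma = np.sqrt(np.sum(A_inv ** 2, axis=1))      # marginal std deviations
    p = np.clip(norm.cdf(mu / sigma), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# toy data: 50 units on a ring lattice, one covariate (purely illustrative)
rng = np.random.default_rng(1)
n = 50
W = np.zeros((n, n))
for i in range(n):                                   # row-normalised ring lattice
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
ystar = np.linalg.solve(np.eye(n) - 0.4 * W, X @ np.array([0.2, 1.0]) + rng.normal(size=n))
y = (ystar > 0).astype(float)

res = minimize(spatial_probit_pseudo_loglik, x0=np.zeros(3), args=(y, X, W),
               method="L-BFGS-B", bounds=[(-0.95, 0.95), (None, None), (None, None)])
print(res.x)   # estimated (rho, beta0, beta1)
```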


Author(s):  
Maddalena Cavicchioli

Abstract We derive sufficient conditions for the existence of second and fourth moments of Markov switching multivariate generalized autoregressive conditional heteroscedastic processes in the general vector specification. We provide closed-form matrix expressions for these moments, obtained by using a Markov switching vector autoregressive moving-average representation of the initial process. These expressions are shown to be readily programmable in addition to greatly reducing the computational cost. As theoretical applications of the results, we derive the spectral density matrix of the squares and cross products, propose a new definition of a multivariate kurtosis measure to recognize heavy-tailed features in real financial data, and provide a closed-form matrix expression for the impulse-response function of the volatility. An empirical example illustrates the results.
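As a pointer to the kind of condition involved, the following is an illustrative special case only, assuming the well-studied univariate Markov-switching GARCH(1,1); it is not the general vector result derived in the paper.

```latex
% Univariate MS-GARCH(1,1) with regimes s_t \in \{1,\dots,K\} and
% transition matrix P = (p_{ij}):
%   \sigma_t^2 = \omega_{s_t} + \alpha_{s_t}\,\varepsilon_{t-1}^2 + \beta_{s_t}\,\sigma_{t-1}^2 .
% A standard sufficient condition for the existence of the second moment is
% the spectral-radius bound
\[
  \rho\bigl(\operatorname{diag}(\alpha_1+\beta_1,\;\dots,\;\alpha_K+\beta_K)\,P\bigr) \;<\; 1 ,
\]
% i.e. regime persistence, weighted by the switching probabilities, must act
% as a contraction; the moment itself then admits a closed matrix form.
```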


2021 ◽  
Author(s):  
Anuj Dhoj Thapa

Gillespie's algorithm, also known as the Stochastic Simulation Algorithm (SSA), is an exact simulation method for the Chemical Master Equation model of well-stirred biochemical systems. However, this method is computationally intensive when fast reactions are present in the system. The tau-leap scheme developed by Gillespie can speed up the stochastic simulation of such biochemically reacting systems with negligible loss in accuracy. A number of tau-leaping methods have been proposed, including the explicit and implicit tau-leaping strategies. Nonetheless, these schemes have a low order of accuracy. In this thesis, we investigate tau-leap strategies that achieve high accuracy at reduced computational cost. These strategies are tested on several biochemical systems of practical interest.
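For reference, here is a minimal sketch of Gillespie's direct method and an explicit tau-leap step on a toy reversible isomerization; the reaction network, rate constants, and the crude guard against negative populations are illustrative choices, not those studied in the thesis.

```python
import numpy as np

# Toy reversible isomerization A <-> B (rates are illustrative).
stoich = np.array([[-1, +1],     # reaction 1: A -> B
                   [+1, -1]])    # reaction 2: B -> A
rates = np.array([1.0, 0.5])

def propensities(x):
    return np.array([rates[0] * x[0], rates[1] * x[1]])

def ssa(x0, t_end, rng):
    """Gillespie direct method: exact sampling of the next reaction time and index."""
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)            # time to next reaction
        j = rng.choice(len(a), p=a / a0)          # which reaction fires
        x += stoich[j]
    return x

def tau_leap(x0, t_end, tau, rng):
    """Explicit tau-leaping: fire Poisson numbers of each reaction per leap."""
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)
        k = rng.poisson(a * tau)                  # reaction counts in [t, t+tau)
        x = np.maximum(x + k @ stoich, 0)         # crude guard against negatives
        t += tau
    return x

rng = np.random.default_rng(42)
print(ssa([1000, 0], 5.0, rng), tau_leap([1000, 0], 5.0, 0.05, rng))
```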


Author(s):  
James Farrow

ABSTRACT
Objectives: The SA.NT DataLink Next Generation Linkage Management System (NGLMS) stores linked data in the form of a graph (in the computer science sense) composed of nodes (records) and edges (record relationships or similarities). This permits efficient pre-clustering techniques based on transitive closure to form groups of records that relate to the same individual (or satisfy other selection criteria).
Approach: Only information known (or at least highly likely) to be relevant is extracted from the graph as superclusters. This operation is computationally inexpensive when the underlying information is stored as a graph and can often be performed on the fly for typical clusters. More computationally intensive analysis and/or further clustering may then be performed on this smaller subgraph. Canopy clustering and blocking to reduce pairwise comparisons are expressions of the same type of approach.
Results: Subclusters for manual review based on transitive closure are typically inexpensive enough to extract from the NGLMS that they are extracted on demand during manual clerical review activities; there is no need to pre-calculate these clusters. Once extracted, further analysis is undertaken on these smaller data groupings for visualisation and presentation for review and quality analysis. More computationally expensive techniques can be used at this point to prepare data for visualisation or to provide hints to manual reviewers.
Extracting high-recall groups of records for review, but presenting them to reviewers grouped further into high-precision groups in a second pass, has reduced the time taken by clerical reviewers at SA.NT DataLink to review a group by 30–40%. Reviewers are able to manipulate whole groups of related records at once rather than individual records.
Conclusion: Pre-clustering reduces the computational cost associated with higher-order clustering and analysis algorithms, which typically scale as n^2 (or worse) in comparison scenarios. By breaking the problem into pieces, the computational cost can be reduced, typically in proportion to the number of pieces into which the problem can be broken. This cost reduction can make techniques feasible that would otherwise be computationally prohibitive.
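A minimal sketch of pre-clustering by transitive closure over pairwise record links, using a small union-find; the record identifiers and edges are illustrative and do not reflect the NGLMS data model.

```python
from collections import defaultdict

def transitive_closure_clusters(edges):
    """Group records into superclusters: any chain of pairwise links
    places records in the same cluster (union-find with path compression)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in edges:
        union(a, b)

    clusters = defaultdict(set)
    for node in parent:
        clusters[find(node)].add(node)
    return list(clusters.values())

# illustrative record links (e.g. pairs above a similarity threshold)
edges = [("r1", "r2"), ("r2", "r3"), ("r4", "r5")]
print(transitive_closure_clusters(edges))   # [{'r1', 'r2', 'r3'}, {'r4', 'r5'}]
```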


2011 ◽  
Vol 11 (04) ◽  
pp. 571-587 ◽  
Author(s):  
William Robson Schwartz ◽  
Helio Pedrini

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the self-similarity of natural scenes to remove redundancy and obtain high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, caused by the need to search for regions of high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. The use of robust features provides more discriminative and representative information for regions of the image. When the regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which leads to a reduction in computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while maintaining high compression rates and reconstruction quality.
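A hedged sketch of the general idea of descriptor-guided search: summarize each domain block with a small feature vector and query a k-d tree so that each range block is compared only against its nearest candidates; the simple statistics used as descriptors here stand in for the robust descriptors of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def block_features(block):
    """Tiny illustrative descriptor: mean, standard deviation and mean
    gradient magnitudes of an image block (not the paper's robust descriptors)."""
    gx, gy = np.gradient(block.astype(float))
    return np.array([block.mean(), block.std(), np.abs(gx).mean(), np.abs(gy).mean()])

rng = np.random.default_rng(0)
domain_blocks = [rng.random((16, 16)) for _ in range(200)]   # candidate domain pool
range_block = rng.random((8, 8))                             # block to encode

# Build the feature index once, then query it per range block instead of
# comparing against every domain block exhaustively.
tree = cKDTree(np.array([block_features(d) for d in domain_blocks]))
_, candidates = tree.query(block_features(range_block), k=5)
print(candidates)   # indices of the 5 most promising domain blocks
```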


Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V223-V232 ◽  
Author(s):  
Zhicheng Geng ◽  
Xinming Wu ◽  
Sergey Fomel ◽  
Yangkang Chen

The seislet transform uses the wavelet-lifting scheme and local slopes to analyze seismic data. In its definition, the design of prediction operators tailored to seismic images and data is an important issue. We have developed a new formulation of the seislet transform based on the relative time (RT) attribute. This method uses the RT volume to construct multiscale prediction operators. With the new prediction operators, the seislet transform is accelerated because distant traces are predicted directly. We apply our method to synthetic and real data to demonstrate that the new approach reduces computational cost and achieves an excellent sparse representation on the test data sets.
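For readers unfamiliar with wavelet lifting, the sketch below shows one generic predict/update lifting level with perfect reconstruction; the seislet transform replaces the naive prediction with slope- or RT-guided operators, which are not reproduced here.

```python
import numpy as np

def lifting_forward(x):
    """One level of a simple lifting scheme: split into even/odd samples,
    predict odd from even, update even with the residual (Haar-like)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict step: residual after prediction
    coarse = even + 0.5 * detail   # update step: preserve the running mean
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo the update and predict steps, then interleave even/odd samples."""
    even = coarse - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0])
c, d = lifting_forward(x)
print(np.allclose(lifting_inverse(c, d), x))   # True: perfect reconstruction
```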


Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 604 ◽  
Author(s):  
Victor Korolev ◽  
Andrey Gorshenin

Mathematical models are proposed for the statistical regularities of maximum daily precipitation within a wet period and of total precipitation volume per wet period. The proposed models are based on the generalized negative binomial (GNB) distribution of the duration of a wet period. The GNB distribution is a mixed Poisson distribution whose mixing distribution is generalized gamma (GG). The GNB distribution demonstrates an excellent fit with real data on the durations of wet periods measured in days. By means of limit theorems for statistics constructed from samples with random sizes having the GNB distribution, asymptotic approximations are proposed for the distributions of the maximum daily precipitation volume within a wet period and of the total precipitation volume for a wet period. It is shown that the power exponent parameter of the mixing GG distribution reflects slow global climate trends. Bounds for the accuracy of the proposed approximations are presented. Several tests for whether daily precipitation, total precipitation volume, and precipitation intensities are abnormally extreme are proposed and compared with the traditional peaks-over-threshold (POT) method. The results of applying these tests to real data are presented.
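A minimal sketch of the mixed-Poisson construction behind the GNB model: draw the Poisson rate from a generalized gamma law and then the wet-period duration from a Poisson distribution with that rate; the parameter values are illustrative and not fitted to any precipitation data.

```python
import numpy as np
from scipy.stats import gengamma

def sample_gnb(a, c, scale, size, rng):
    """Generalized negative binomial draws as a mixed Poisson:
    rate ~ generalized gamma (shapes a and c, given scale), count ~ Poisson(rate)."""
    lam = gengamma.rvs(a, c, scale=scale, size=size, random_state=rng)
    return rng.poisson(lam)

rng = np.random.default_rng(0)
durations = sample_gnb(a=1.2, c=0.8, scale=4.0, size=10_000, rng=rng)
print(durations.mean(), durations.var())   # compare against observed wet-period statistics
```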


Author(s):  
Hiroki Yamashita ◽  
Guanchu Chen ◽  
Yeefeng Ruan ◽  
Paramsothy Jayakumar ◽  
Hiroyuki Sugiyama

A high-fidelity computational terrain dynamics model plays a crucial role in accurately predicting vehicle mobility performance under various maneuvering scenarios on deformable terrain. Although many computational models have been proposed using either finite element (FE) or discrete element (DE) approaches, the phenomenological constitutive assumptions in FE soil models make modeling complex granular terrain behavior very difficult, and DE soil models are computationally intensive, especially when a wide range of terrain is considered. To address the limitations of existing deformable terrain models, this paper presents a hierarchical FE–DE multiscale tire–soil interaction simulation capability that can be integrated into a monolithic multibody dynamics solver for high-fidelity off-road mobility simulation using high-performance computing (HPC) techniques. It is demonstrated that the computational cost is substantially lower with the multiscale soil model than with the corresponding pure DE model while maintaining solution accuracy. The multiscale tire–soil interaction model is validated against soil bin mobility test data under various wheel loads and tire inflation pressures, thereby demonstrating the potential of the proposed method for resolving challenging vehicle-terrain interaction problems.


2014 ◽  
Vol 2014 ◽  
pp. 1-8 ◽  
Author(s):  
Bo Lin ◽  
Jiying Liu ◽  
Meihua Xie ◽  
Jubo Zhu

After establishing the sparse representation of the source signal subspace, we propose a new method to estimate the direction of arrival (DOA) by solving an ℓ1-norm minimization for sparse recovery of the source powers. Second-order cone programming is applied to reformulate this optimization problem, which is then solved efficiently with an interior-point method. Because the signal subspace is retained and the noise subspace discarded, the proposed method is more robust to noise than many other sparsity-based methods. Real-data tests and numerical simulations demonstrate that the proposed method has improved accuracy and robustness to noise and is not sensitive to knowledge of the number of sources. We discuss the computational cost of our method theoretically, and the experimental results verify its computational effectiveness.
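A minimal sketch of grid-based ℓ1-norm recovery of source powers with cvxpy, which handles the second-order cone reformulation internally; the steering model, noise bound, and single-snapshot setup are simplifying assumptions rather than the subspace-weighted formulation of the paper.

```python
import numpy as np
import cvxpy as cp

# Uniform linear array with M sensors, half-wavelength spacing; DOA grid in degrees.
M, grid = 8, np.linspace(-90, 90, 181)
def steering(theta_deg):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.radians(theta_deg)))

A = np.column_stack([steering(t) for t in grid])      # M x G dictionary of steering vectors

# Illustrative single snapshot: two sources at -20 and 35 degrees plus noise.
rng = np.random.default_rng(0)
y = 1.0 * steering(-20) + 0.8 * steering(35)
y += 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

# l1-norm minimization with a data-fidelity cone constraint (SOCP-representable).
s = cp.Variable(len(grid), complex=True)
prob = cp.Problem(cp.Minimize(cp.norm1(s)),
                  [cp.norm(A @ s - y, 2) <= 0.3])
prob.solve()
est = np.abs(s.value)
print(grid[np.argsort(est)[-2:]])   # the two grid angles carrying the largest power
```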


2020 ◽  
Vol 223 (3) ◽  
pp. 2009-2026
Author(s):  
Frederik Link ◽  
Georg Rümpker ◽  
Ayoub Kaviani

SUMMARY We present a technique to derive robust estimates of crustal thickness and elastic properties, including anisotropy, from shear wave splitting of converted phases in receiver functions. We combine stacking procedures with a correction scheme for the splitting effect of the crustal converted Ps phase and its first reverberation, the PpPs phase, where we also allow for a predefined dipping Moho. The incorporation of two phases stabilizes the analysis and allows us to solve simultaneously for the crustal thickness, the ratio of average P- to S-wave velocities, the percentage of anisotropy, and the fast-axis direction. The stacking is based on arrival times and polarizations computed using a ray-based algorithm. Synthetic tests show the robustness of the technique and its applicability to tectonic settings where the dip of the Moho is significant. These tests also demonstrate that the effects of a dipping layer boundary may overprint a possible anisotropic signature. To constrain the uncertainty of our results, we perform statistical tests based on a bootstrapping approach. We distinguish between different model classes by comparing the coherency of the stacked amplitudes after moveout correction. We apply the new technique to real-data examples from different tectonic regimes and show that the coherency of the stacked receiver functions is improved when anisotropy and a dipping Moho are included in the analysis. The examples underline the advantages of statistical analyses when dealing with stacking procedures and potentially ambiguous solutions.
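As a generic illustration of the bootstrapping idea used to constrain uncertainty, the sketch below resamples individual receiver functions before stacking and reports the spread of the stack; it does not implement the authors' moveout corrections or grid search over crustal parameters.

```python
import numpy as np

def bootstrap_stack(receiver_functions, n_boot=1000, rng=None):
    """Resample individual receiver functions with replacement, restack,
    and return the mean and spread of the stacked traces (simple bootstrap)."""
    rng = rng or np.random.default_rng()
    rf = np.asarray(receiver_functions)          # shape: (n_events, n_samples)
    n = rf.shape[0]
    stacks = np.empty((n_boot, rf.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # resample events with replacement
        stacks[b] = rf[idx].mean(axis=0)
    return stacks.mean(axis=0), stacks.std(axis=0)

# toy example: 40 noisy traces sharing a common pulse at sample 50
rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(200) - 50) / 3.0) ** 2)
rfs = pulse + 0.3 * rng.normal(size=(40, 200))
mean_stack, std_stack = bootstrap_stack(rfs, rng=rng)
print(mean_stack[50], std_stack[50])             # stacked amplitude and its uncertainty
```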

