Multivariable inversion using exhaustive grid search and high-performance GPU processing: a new perspective

2020, Vol 221 (2), pp. 905-927
Author(s): Ioannis E Venetis, Vasso Saltogianni, Stathis Stiros, Efstratios Gallopoulos

SUMMARY Exhaustive searches in regular grids are a traditional and effective method for inversion, that is, the numerical solution of systems of non-linear equations which cannot be solved using formal algebraic techniques. However, this technique is effective for very few (3–4) variables and is slow. Recently, the first limitation was to a major degree overcome with the new TOPological INVersion (TOPINV) algorithm, which has been used for inversion of systems with up to 18, or even more, unknown variables. The novelty of this algorithm is that it is not based on the principle of the minimum mean misfit (cost function) between observations and model predictions, used by most inversion techniques. Instead, the new algorithm investigates, for each gridpoint, whether the misfits of each observation lie within specified uncertainty intervals, and stores clusters of ‘successful’ gridpoints in matrix form. These clusters (ensembles, sets) of gridpoints are tested against certain criteria and are then used to compute one or more optimal statistical solutions. The new algorithm is efficient for highly non-linear problems with high measurement uncertainties (low signal-to-noise ratio, SNR) and poor distribution of observations, that is, problems leading to complicated 3-D mean misfit surfaces without dominant peaks, but it is slow when running on common computers. To overcome this limitation, we used GPUs, which permit parallel processing on common computers, but faced another computational problem: GPU parallel processing supports only up to three dimensions. To solve this problem, we used CUDA programming and optimized the distribution of the computational load across all GPU cores. This leads to speedups of up to 100x relative to common CPU processing, as derived from comparative tests with synthetic data for two typical geophysical inversion problems with up to 18 unknown variables: Mogi magma source modeling and elastic dislocation modeling of seismic faults. This impressive speedup makes the GPU/CUDA implementation of TOPINV practical even for low-latency solution of certain geophysical problems. This speedup in calculations also permitted us to investigate the performance of the new algorithm in relation to the density of the adopted grids. We focused on a typical problem of elastic dislocation under unfavorable conditions (poor observation geometry, data with low SNR) and on synthetic observations with noise, so that the difference of each solution from the ‘true’/reference value was known (accuracy-based approach). Application of the algorithm revealed stable, accurate and precise solutions, with quality increasing with the grid density. Solution defects (bias), mainly produced by very coarse grids, can be identified through specific diagnostic criteria, dictating finer search grids.
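To make the idea concrete, the following is a minimal Python/NumPy sketch (not the authors' implementation) of the per-gridpoint acceptance test and of the flat-index trick that maps a many-dimensional grid onto a one-dimensional pool of workers, the same mapping that lets a 1-D set of CUDA threads cover more than three grid dimensions. The forward model, grid axes, and uncertainty intervals are illustrative assumptions.

```python
# Sketch of a TOPINV-style test: a point on a regular m-dimensional grid is
# kept only if *every* observation is reproduced within its own uncertainty
# interval, rather than by minimizing a mean misfit. The flat-index loop
# mirrors how an m-D grid can be distributed over 1-D GPU threads.
import numpy as np

def topinv_like_search(axes, obs, sigma, forward_model):
    """axes: list of 1-D arrays, one per unknown; obs, sigma: (n_obs,) arrays."""
    shape = tuple(len(a) for a in axes)
    n_points = int(np.prod(shape))
    kept = []                                    # cluster of 'successful' gridpoints
    for flat in range(n_points):                 # on a GPU, one thread per flat index
        idx = np.unravel_index(flat, shape)      # flat 1-D index -> m-D grid indices
        x = np.array([axes[d][i] for d, i in enumerate(idx)])
        pred = forward_model(x)
        if np.all(np.abs(pred - obs) <= sigma):  # every misfit inside its interval
            kept.append(x)
    kept = np.array(kept)
    # one optimal statistical solution: e.g. the mean of the accepted cluster
    return kept, (kept.mean(axis=0) if len(kept) else None)

# toy usage: recover two unknowns of a linear model from noisy observations
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 20)
    true_params = np.array([2.0, -1.0])
    model = lambda x: x[0] * t + x[1]
    data = model(true_params) + rng.normal(0.0, 0.05, t.size)
    axes = [np.linspace(1.0, 3.0, 41), np.linspace(-2.0, 0.0, 41)]
    cluster, solution = topinv_like_search(axes, data, 3 * 0.05 * np.ones_like(t), model)
    print(len(cluster), solution)
```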

Geophysics, 2005, Vol 70 (4), pp. V109-V120
Author(s): Claudio Bagaini

I analyze the problem of estimating differences in the arrival times of a seismic wavefront recorded by an array of sensors. The two-sensor problem is tackled first, showing that even an approximate knowledge of the wavelet, such as its power spectrum, can substantially increase the accuracy of the time-delay estimate and reduce the signal-to-noise ratio (S/N) threshold for reliable time-delay estimation. The use of the complex trace, although beneficial for time-delay estimates in the presence of frequency-independent phase shifts, reduces the estimation accuracy in poor S/N conditions. I compare the performance of five time-delay estimators for arrays of sensors. Four of five estimators are based on crosscorrelation with a reference signal derived according to one of the following criteria: one trace in the array randomly selected, the stack of all array traces, the stack of all array traces iteratively updated, and (possible only for synthetic data) the noise-free wavelet. Another method, which is referred to as integration of differential delays, is based on the solution of an overdetermined system of linear equations built using the time delays between each pair of sensors. In all the situations considered, the performance of crosscorrelation with a trace of the array randomly selected is significantly worse than the other methods. Integration of differential delays proved to be the best-performing method for a large range of S/N conditions, particularly in the presence of large fluctuations in time delays and large bandwidth. However, for small time delays with respect to the wavelet duration, or if a priori knowledge of the moveout can be used to detrend the original data, crosscorrelation with a stacked trace performs similarly to integration of differential delays.
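As a rough illustration of two of these estimators, the sketch below (illustrative, not the author's code) computes time delays by crosscorrelation against the stacked trace and by integration of differential delays, that is, a least-squares solution of the overdetermined system built from pairwise lags. The Gaussian wavelet, noise level, and sensor count are assumptions.

```python
# (1) crosscorrelation of each trace against the stack of all traces, and
# (2) integration of differential delays: pairwise lags between every pair of
#     sensors form an overdetermined linear system A t = d, solved in the
#     least-squares sense (one delay fixed to zero to remove the offset).
import numpy as np

def xcorr_lag(a, b):
    """Delay of a relative to b, in samples, from the full cross-correlation peak."""
    c = np.correlate(a, b, mode="full")
    return np.argmax(c) - (len(b) - 1)

def delays_vs_stack(traces):
    ref = traces.mean(axis=0)                      # stacked reference trace
    return np.array([xcorr_lag(tr, ref) for tr in traces])

def integrate_differential_delays(traces):
    n = len(traces)
    rows, d = [], []
    for i in range(n):
        for j in range(i + 1, n):
            r = np.zeros(n)
            r[i], r[j] = 1.0, -1.0                 # t_i - t_j = measured pairwise lag
            rows.append(r)
            d.append(xcorr_lag(traces[i], traces[j]))
    A, d = np.array(rows), np.array(d, float)
    A = np.vstack([A, np.eye(n)[0]])               # fix the gauge t_0 = 0
    d = np.append(d, 0.0)
    t, *_ = np.linalg.lstsq(A, d, rcond=None)
    return t

# toy usage: a Gaussian wavelet shifted by known integer delays plus noise
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_samp, true_delays = 256, np.array([0, 3, -2, 5])
    t_ax = np.arange(n_samp)
    wavelet = lambda shift: np.exp(-0.01 * (t_ax - 64 - shift) ** 2)
    traces = np.array([wavelet(dly) + 0.05 * rng.standard_normal(n_samp) for dly in true_delays])
    print(delays_vs_stack(traces))                 # relative to the stack
    print(integrate_differential_delays(traces))   # relative to trace 0
```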


2021, Vol 22 (1)
Author(s): João Lobo, Rui Henriques, Sara C. Madeira

Abstract Background Three-way data started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of further providing the ground truth (triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric provides the possibility to combine both intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
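The sketch below is not the G-Tric API; under assumed array shapes and a constant pattern, it only illustrates what planting a tricluster in a numeric three-way dataset with a known ground truth amounts to.

```python
# Illustrative sketch of 'planting' a tricluster: a background 3-way array
# (observations x features x contexts) drawn from a configurable distribution,
# with a chosen subspace overwritten by a coherent pattern plus noise; the
# planted index sets form the ground-truth triclustering solution.
import numpy as np

def plant_tricluster(shape=(100, 50, 10), sub=(12, 6, 4), noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, shape)                   # background distribution
    rows = rng.choice(shape[0], sub[0], replace=False)   # observations in the tricluster
    cols = rng.choice(shape[1], sub[1], replace=False)   # features in the tricluster
    ctxs = rng.choice(shape[2], sub[2], replace=False)   # contexts in the tricluster
    # a simple constant pattern; shifting/scaling patterns would be built similarly
    pattern = 3.0 + rng.normal(0.0, noise, sub)
    data[np.ix_(rows, cols, ctxs)] = pattern
    ground_truth = {"rows": rows, "cols": cols, "ctxs": ctxs}
    return data, ground_truth

data, truth = plant_tricluster()
print(data.shape, {k: sorted(v) for k, v in truth.items()})
```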


2021, Vol 2021 (6)
Author(s): Guillaume Bossard, Axel Kleinschmidt, Ergin Sezgin

Abstract We construct a pseudo-Lagrangian that is invariant under rigid E11 and transforms as a density under E11 generalised diffeomorphisms. The gauge-invariance requires the use of a section condition studied in previous work on E11 exceptional field theory and the inclusion of constrained fields that transform in an indecomposable E11-representation together with the E11 coset fields. We show that, in combination with gauge-invariant and E11-invariant duality equations, this pseudo-Lagrangian reduces to the bosonic sector of non-linear eleven-dimensional supergravity for one choice of solution to the section condition. For another choice, we reobtain the E8 exceptional field theory and conjecture that our pseudo-Lagrangian and duality equations produce all exceptional field theories with maximal supersymmetry in any dimension. We also describe how the theory entails non-linear equations for higher dual fields, including the dual graviton in eleven dimensions. Furthermore, we speculate on the relation to the E10 sigma model.


2021, Vol 11 (2), pp. 790
Author(s): Pablo Venegas, Rubén Usamentiaga, Juan Perán, Idurre Sáez de Ocáriz

Infrared thermography is a widely used technology that has been successfully applied in many and varied fields. These applications include its use as a non-destructive testing tool to assess the integrity state of materials. The current level of development of this application is high and its effectiveness is widely verified. There are application protocols and methodologies that have demonstrated a high capacity to extract relevant information from the captured thermal signals and to guarantee the detection of anomalies in the inspected materials. However, there is still room for improvement in certain aspects, such as increasing the detection capacity and defining a detailed procedure for characterizing indications, which must be investigated further to reduce uncertainties and optimize this technology. In this work, an innovative thermographic data analysis methodology is proposed that extracts a greater amount of information from the recorded sequences by applying advanced processing techniques to the results. The extracted information is synthesized into three channels that may be represented as color images and processed by quaternion algebra techniques to improve the detection level and facilitate the classification of defects. To validate the proposed methodology, synthetic data and actual experimental sequences have been analyzed. Seven different definitions of signal-to-noise ratio (SNR) have been used to assess the increment in the detection capacity, and a generalized application procedure has been proposed to extend their use to color images. The results verify the capacity of this methodology, showing significant increments in the SNR compared to conventional processing techniques in thermographic NDT.
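As an illustration of the general idea (not the paper's exact pipeline), the sketch below stacks three processed channels into a color image and computes one common per-channel SNR definition; the channel contents, the masks, and the dB formulation are assumptions, and the paper itself evaluates seven SNR definitions.

```python
# Three processed channels from a thermographic sequence are normalised and
# stacked as an RGB-like image; a defect-region SNR is then computed per
# channel as |mean(defect) - mean(sound)| / std(sound), expressed in dB.
import numpy as np

def to_rgb(ch1, ch2, ch3):
    """Normalise three result channels to [0, 1] and stack them as an RGB image."""
    chans = []
    for c in (ch1, ch2, ch3):
        c = c.astype(float)
        c = (c - c.min()) / (c.max() - c.min() + 1e-12)
        chans.append(c)
    return np.dstack(chans)

def snr_db(image, defect_mask, sound_mask):
    """Per-channel SNR of the defect region against the sound region, in dB."""
    out = []
    for k in range(image.shape[-1]):
        ch = image[..., k]
        signal = abs(ch[defect_mask].mean() - ch[sound_mask].mean())
        out.append(20.0 * np.log10(signal / (ch[sound_mask].std() + 1e-12)))
    return np.array(out)

# toy usage: a synthetic 'defect' warmer than the sound background
rng = np.random.default_rng(2)
base = rng.normal(0.0, 1.0, (64, 64))
defect = np.zeros((64, 64), bool)
defect[20:30, 20:30] = True
ch = base + 4.0 * defect
rgb = to_rgb(ch, ch ** 2, np.gradient(ch)[0])
print(snr_db(rgb, defect, ~defect))
```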


2020, Vol 0 (0)
Author(s): Müjgan Ercan Karadağ, Emiş Deniz Akbulut, Esin Avcı, Esra Fırat Oğuz, Saadet Kader, ...

Abstract Objective Hemoglobinopathies are a common public health problem in Turkey. In population screening for these disorders, cation-exchange high-performance liquid chromatography (HPLC) is accepted as the gold-standard method. In this study, the aim was to assess four different HPLC devices used in hemoglobinopathy screening. Materials and methods A total of 58 blood samples were analyzed with four different HPLC methods (Bio-Rad variant II, Agilent 1100, Tosoh G8 and Trinity Ultra2). Results The comparison study demonstrated a good correlation between the results of each HPLC analyzer and the reference value obtained by averaging all the HbA2 results of the methods tested in the study [Tosoh G8 (r=0.988), Bio-Rad variant II (r=0.993), Agilent 1100 (r=0.98) and Trinity Ultra2 (r=0.992)]. HbA2 determination in the presence of HbE was subject to interference on both the Bio-Rad variant II and the Tosoh G8. Conclusion The analyzers were found to produce compatible HbA2 results, albeit with different degrees of proportional and systematic bias. HPLC analyzers may be affected by different hemoglobin variants at different HbA2 concentrations, which is an important point to take into consideration when evaluating HbA2 results in thalassemia screening.
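A minimal sketch of the comparison design, with entirely synthetic numbers rather than the study's data: each analyzer's HbA2 values are correlated against a reference formed by averaging all four methods per sample, and a mean bias versus that reference is reported per analyzer.

```python
# Synthetic illustration of the all-method-mean comparison: per-analyzer
# Pearson r and mean bias against the averaged reference value.
import numpy as np

rng = np.random.default_rng(3)
true_hba2 = rng.uniform(1.5, 6.0, 58)                        # 58 samples, % HbA2
analyzers = {                                                # illustrative bias/noise only
    "Tosoh G8":           true_hba2 + rng.normal(0.05, 0.10, 58),
    "Bio-Rad variant II": true_hba2 + rng.normal(-0.03, 0.08, 58),
    "Agilent 1100":       true_hba2 * 1.02 + rng.normal(0.0, 0.15, 58),
    "Trinity Ultra2":     true_hba2 + rng.normal(0.02, 0.09, 58),
}
reference = np.mean(list(analyzers.values()), axis=0)        # average of all methods

for name, vals in analyzers.items():
    r = np.corrcoef(vals, reference)[0, 1]
    bias = np.mean(vals - reference)
    print(f"{name:>20s}: r = {r:.3f}, mean bias = {bias:+.3f} % HbA2")
```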

