Appraisal of magnetotelluric galvanic electric distortion by optimizing amplitude and phase tensor relations

Geophysics ◽  
2020 ◽  
Vol 85 (3) ◽  
pp. E79-E98 ◽  
Author(s):  
Maik Neukirch ◽  
Savitri Galiana ◽  
Xavier Garcia

The introduction of the phase tensor marked a major breakthrough in the analysis and treatment of electric field galvanic distortion in the magnetotelluric method. Recently, the phase tensor formulation has been extended to a complete impedance tensor decomposition by introducing the complementary amplitude tensor, and both tensors can be further parameterized to represent geometric properties such as dimensionality, strike angle, and macroscopic anisotropy. Both tensors are characteristic of the electromagnetic induction phenomenon in the conductive subsurface with its specific geometric structure. The central hypothesis is that this coupling should result in similarities in both tensors’ geometric parameters: skew, strike, and anisotropy. A synthetic example illustrates that the undistorted amplitude tensor parameters are more similar to the phase tensor than increasingly distorted ones are, providing empirical evidence for the proposed hypothesis. From this observation, an objective function is constructed that is minimized when the dissimilarity between amplitude and phase tensor parameters, and with it any present distortion, is minimal. A genetic algorithm with this objective function is used to systematically seek the distortion parameters needed to correct any affected amplitude tensor and, thus, the impedance data. The successful correction of a large synthetic impedance data set with random distortion further supports the central hypothesis and serves as a comparison to the state of the art. The classic BC87 data set sites lit007/lit008 and lit901/lit902 have been noted by various authors to contain significant distortion and a 3D regional response, invalidating current distortion analysis methods and eluding geologic interpretation. Correction of the BC87 responses based on the present hypothesis conforms to the regional geology.
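To make the idea concrete, here is a minimal sketch of the distortion search the abstract describes, under several assumptions: the amplitude part is taken as Re(Z) (the published amplitude tensor is defined more carefully), the geometric parameters are reduced to Caldwell-style strike and skew angles, and SciPy's differential evolution stands in for the authors' genetic algorithm. With a single site and frequency the problem is underdetermined (and the scalar site gain is unresolvable in principle), so this recovers one member of a family of admissible distortion matrices; the paper constrains the search with full frequency-dependent data.

```python
import numpy as np
from scipy.optimize import differential_evolution

def phase_tensor(Z):
    # Phi = Re(Z)^-1 Im(Z); invariant under galvanic distortion C, since
    # Re(CZ)^-1 Im(CZ) = Re(Z)^-1 C^-1 C Im(Z) = Phi.
    return np.linalg.inv(Z.real) @ Z.imag

def strike_skew(T):
    # Caldwell-style strike and skew angles of a real 2x2 tensor.
    alpha = 0.5 * np.arctan2(T[0, 1] + T[1, 0], T[0, 0] - T[1, 1])
    beta = 0.5 * np.arctan2(T[0, 1] - T[1, 0], T[0, 0] + T[1, 1])
    return np.array([alpha, beta])

def dissimilarity(c, Z_distorted):
    # Objective: after removing candidate distortion C, the amplitude part's
    # strike/skew should match those of the (distortion-free) phase tensor.
    C = c.reshape(2, 2)
    if abs(np.linalg.det(C)) < 1e-6:
        return 1e6                                 # guard against singular C
    Z = np.linalg.inv(C) @ Z_distorted             # candidate corrected impedance
    return np.sum((strike_skew(Z.real) - strike_skew(phase_tensor(Z)))**2)

# Hypothetical synthetic impedance and distortion, for illustration only.
Z_true = np.array([[0.5 + 0.4j, 2.0 + 1.5j], [-1.8 - 1.2j, -0.4 - 0.3j]])
C_true = np.array([[1.3, 0.2], [-0.1, 0.8]])
Zd = C_true @ Z_true
res = differential_evolution(dissimilarity, bounds=[(-2.0, 2.0)] * 4,
                             args=(Zd,), seed=0)
print(res.x.reshape(2, 2))   # one distortion matrix minimizing the dissimilarity
```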

Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E301-E310 ◽  
Author(s):  
Maik Neukirch ◽  
Daniel Rudolf ◽  
Xavier Garcia ◽  
Savitri Galiana

The introduction of the phase tensor marked a breakthrough in the understanding and analysis of electric galvanic distortion effects. It has been used for (distortion-free) dimensionality analysis, distortion analysis, mapping, and subsurface model inversion. However, the phase tensor can only represent half of the information contained in a complete impedance data set. Nevertheless, to avoid uncertainty due to galvanic distortion effects, practitioners often choose to discard half of the measured data and concentrate interpretation efforts on the phase tensor part. Our work assesses the information loss incurred by interpreting only the phase tensor of a complete impedance data set. To achieve this, a new MT impedance tensor decomposition into the known phase tensor and a newly defined amplitude tensor is motivated and established, and the existence and uniqueness of the amplitude tensor are proven. Synthetic data are used to illustrate the amplitude tensor information content compared with the phase tensor. Whereas the phase tensor describes only the inductive effects within the subsurface, the amplitude tensor holds information about both inductive and galvanic effects, which can help to identify the conductivity or thickness of (conductive) anomalies more accurately than the phase tensor alone. Furthermore, the amplitude and phase tensors sense anomalies at different periods, and thus the combination of both provides a means to evaluate and differentiate anomaly top depths in the event of data unavailability at extended period ranges, e.g., due to severe noise.
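A small numerical check of the complementary roles described above, using the simplification that Re(Z) stands in for the amplitude part (the paper's amplitude tensor is defined more carefully): the impedance factors into an amplitude part times a phase-tensor term, the phase tensor is immune to galvanic distortion, and the amplitude part absorbs the distortion entirely.

```python
import numpy as np

Z = np.array([[0.5 + 0.4j, 2.0 + 1.5j], [-1.8 - 1.2j, -0.4 - 0.3j]])
C = np.array([[1.2, 0.3], [-0.2, 0.9]])            # hypothetical galvanic distortion
Zd = C @ Z

phi = np.linalg.inv(Z.real) @ Z.imag               # phase tensor of the regional Z
phi_d = np.linalg.inv(Zd.real) @ Zd.imag           # phase tensor of the distorted Z

print(np.allclose(Z, Z.real @ (np.eye(2) + 1j * phi)))  # True: amplitude-phase split of Z
print(np.allclose(phi, phi_d))                     # True: phase tensor is distortion-free
print(np.allclose(Zd.real, C @ Z.real))            # True: amplitude part carries the distortion
```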


2019 ◽  
Author(s):  
Y-h. Taguchi ◽  
Turki Turki

Abstract
Background: Identifying effective candidate drug compounds for patients with neurological disorders based on gene expression data is of great importance to the neurology field. By identifying effective candidate drugs for a given neurological disorder, neurologists would (1) reduce the time spent searching for effective treatments and (2) gain additional useful information that leads to a better treatment outcome. Although there are many strategies for screening drug candidates in the preclinical stage, it is not easy to check whether candidate drug compounds are also effective in humans.
Objective: We propose a strategy that screens genes whose expression is altered in model-animal experiments for comparison with genes differentially expressed under drug treatment in human cell lines.
Methods: The recently proposed tensor decomposition (TD) based unsupervised feature extraction (FE) is applied to single-cell (sc) RNA-seq experiments on the brains of Alzheimer’s disease model mice.
Results: Four hundred and one genes were screened as differentially expressed during Aβ accumulation as age progresses. These genes significantly overlap with those differentially expressed under known drug treatments in three independent data sets: LINCS, DrugMatrix, and GEO.
Conclusion: Our strategy, the application of TD-based unsupervised FE, is a useful way to screen drug candidate compounds using scRNA-seq data sets.
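A minimal sketch of the spirit of TD-based unsupervised FE (the paper's exact pipeline differs in details): a genes x cells x conditions tensor is decomposed by HOSVD, a gene singular vector assumed to track the effect of interest is converted to per-gene P-values under a Gaussian null via a chi-squared statistic, and genes are selected after BH correction. The toy tensor, the choice of the second singular vector, and the 0.01 threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40, 3))    # hypothetical genes x cells x conditions tensor

# HOSVD gene factors: SVD of the mode-1 (gene) unfolding.
U, _, _ = np.linalg.svd(X.reshape(X.shape[0], -1), full_matrices=False)

u = U[:, 1]                              # suppose the 2nd vector tracks the effect of interest
p = chi2.sf((u / u.std())**2, df=1)      # per-gene P-value under a Gaussian null
selected = multipletests(p, method='fdr_bh')[1] < 0.01
print(selected.sum(), 'genes selected')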


MATEMATIKA ◽  
2020 ◽  
Vol 36 (1) ◽  
pp. 43-49
Author(s):  
T Dwi Ary Widhianingsih ◽  
Heri Kuswanto ◽  
Dedy Dwi Prastyo

Logistic regression is one of the most commonly used classification methods. It has some advantages, specifically related to hypothesis testing and its objective function. However, it also has some disadvantages in the case of high-dimensional data, such as multicollinearity, overfitting, and a high computational burden. Ensemble-based classification methods have been proposed to overcome these problems. The logistic regression ensemble (LORENS) method is expected to improve on the classification performance of basic logistic regression. In this paper, we apply it to the case of drug discovery, with the objective of obtaining candidate compounds that protect normal non-cancerous cells, which is a problem with a data set of high dimensionality. The experimental results show that it performs well, with an accuracy of 69% and an AUC of 0.7306.
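A minimal sketch of a logistic regression ensemble in the LORENS spirit: the feature space is split into random disjoint subspaces, one logistic model is fit per subspace, and class probabilities are averaged. The partition size, aggregation rule, and toy data are assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def lorens_fit_predict(X_train, y_train, X_test, n_groups=10, seed=0):
    rng = np.random.default_rng(seed)
    # Randomly partition the features into disjoint subspaces.
    groups = np.array_split(rng.permutation(X_train.shape[1]), n_groups)
    probs = np.zeros(X_test.shape[0])
    for g in groups:
        model = LogisticRegression(max_iter=1000).fit(X_train[:, g], y_train)
        probs += model.predict_proba(X_test[:, g])[:, 1]
    return probs / n_groups                       # averaged class probabilities

# Hypothetical high-dimensional toy data (200 samples, 2000 features).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2000))
y = (X[:, :5].sum(axis=1) + rng.standard_normal(200) > 0).astype(int)
p = lorens_fit_predict(X[:150], y[:150], X[150:])
print('AUC:', roc_auc_score(y[150:], p))
```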


Author(s):  
Christopher E. Gillies ◽  
Xiaoli Gao ◽  
Nilesh V. Patel ◽  
Mohammad-Reza Siadat ◽  
George D. Wilson

Personalized medicine customizes treatments to a patient’s genetic profile and has the potential to revolutionize medical practice. An important process used in personalized medicine is gene expression profiling. Analyzing gene expression profiles is difficult because there are usually few patients and thousands of genes, leading to the curse of dimensionality. To combat this problem, researchers suggest using prior knowledge to enhance feature selection for supervised learning algorithms. The authors propose an enhancement to the LASSO, a shrinkage and selection technique that induces parameter sparsity by penalizing a model’s objective function. Their enhancement gives preference to the selection of genes that are involved in similar biological processes. The authors’ modified LASSO selects similar genes by penalizing interaction terms between genes, and they devise a coordinate descent algorithm to minimize the corresponding objective function. To evaluate their method, the authors created simulation data on which they compared their model to the standard LASSO model and an interaction LASSO model. The authors’ model outperformed both the standard and interaction LASSO models in detecting important genes and gene interactions for a reasonable number of training samples. They also demonstrated the performance of their method on a real gene expression data set from lung cancer cell lines.
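As context for the penalized objective, here is a minimal coordinate-descent sketch of the standard LASSO that the authors build on, using the usual soft-thresholding update; their modification additionally penalizes interaction terms between genes, which is omitted here, and the toy data are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    # Minimize (1/2n)||y - X beta||^2 + lam * ||beta||_1 by cyclic
    # coordinate descent with soft-thresholding.
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                        # current residual
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]          # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / col_sq[j]
            r -= X[:, j] * beta[j]          # restore with the updated coefficient
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
print(np.round(lasso_cd(X, y, lam=0.1)[:5], 2))   # first three recovered, rest ~0
```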


2020 ◽  
Vol 223 (3) ◽  
pp. 1565-1583
Author(s):  
Hoël Seillé ◽  
Gerhard Visser

SUMMARY Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimating the subsurface electrical conductivity distribution and the associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions, calculated using a dimensionality discrepancy error model derived by a machine learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known geometrical dimensionality properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment is used to illustrate the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow performs better in 80 per cent of the cases than the existing practice of using constant errors. Finally, our workflow is benchmarked on real data acquired in Queensland, Australia, and shows its ability to detect the depth to basement accurately.
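The core mechanic can be sketched as a site-specific Gaussian likelihood whose variance is inflated by a model discrepancy term that grows where the 1-D assumption is least valid. Everything below (the skew-based rule, its coefficients, the function name) is an illustrative assumption; the paper derives the error model with a machine-learning algorithm trained on synthetic 3-D responses.

```python
import numpy as np

def log_likelihood(d_obs, d_pred, sigma_meas, skew_deg):
    # Stand-in discrepancy rule: larger phase-tensor skew (a dimensionality
    # indicator) implies a larger 1-D modelling error at this site/period.
    sigma_disc = 0.05 + 0.02 * np.abs(skew_deg)
    var = sigma_meas**2 + sigma_disc**2             # measurement + discrepancy variance
    return -0.5 * np.sum((d_obs - d_pred)**2 / var + np.log(2 * np.pi * var))
```

Within an MCMC sampler, this simply replaces the constant-error likelihood, so data points suspected of strong 2-D/3-D effects are down-weighted rather than discarded.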


2020 ◽  
Vol 93 (1108) ◽  
pp. 20190441 ◽  
Author(s):  
Roushanak Rahmat ◽  
Frederic Brochu ◽  
Chao Li ◽  
Rohitashwa Sinha ◽  
Stephen John Price ◽  
...  

Objectives: Glioblastoma multiforme (GBM) is a highly infiltrative primary brain tumour with an aggressive clinical course. Diffusion tensor imaging (DT-MRI or DTI) is a recently developed technique capable of visualising subclinical tumour spread into adjacent brain tissue. Tensor decomposition through p and q maps can be used for treatment planning. Our objective was to develop a tool to automate the segmentation of DTI-decomposed p and q maps in GBM patients in order to inform construction of radiotherapy target volumes. Methods: A Chan-Vese level set model is applied to segment the p map, using the q map as its initial starting point. This model was chosen for its robustness on either conventional MRI or DTI alone. The method was applied to a data set of 50 patients whose gross tumour volume had been delineated on the q map; the Chan-Vese level set model uses these superimposed masks to incorporate the infiltrative edges. Results: The expansion of the tumour boundary from the q map to the p map is clearly visible in all cases, and the Dice coefficient (DC) showed a mean similarity of 74% across all 50 patients between the manually segmented ground truth p map and the automatic level set segmentation. Conclusion: Automated segmentation of the tumour infiltration boundary using DTI and tensor decomposition is possible using Chan-Vese level set methods to expand the q map to the p map. We have provided initial validation of this technique against manual contours performed by experienced clinicians. Advances in knowledge: This novel automated technique to generate p maps has the potential to individualise radiation treatment volumes and act as a decision support tool for the treating oncologist.
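A minimal sketch of the segmentation step, assuming `p_map` and `q_map`-derived inputs are 2-D NumPy arrays and `q_mask` is the clinician's gross tumour volume mask drawn on the q map. The morphological Chan-Vese variant from scikit-image stands in for the paper's level set implementation; the q-map mask seeds the evolution on the p map, and the Dice coefficient mirrors the validation described above.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_p_map(p_map, q_mask, iterations=200):
    # Evolve the contour on the p map, seeded by the q-map segmentation, so
    # the result can expand toward the infiltrative tumour boundary.
    return morphological_chan_vese(p_map, iterations,
                                   init_level_set=q_mask.astype(np.int8),
                                   smoothing=2)

def dice(a, b):
    # Dice coefficient between two binary masks, as used for validation.
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```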


2011 ◽  
Vol 1 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Roland Winkler ◽  
Frank Klawonn ◽  
Rudolf Kruse

High dimensions have a devastating effect on the FCM algorithm and similar algorithms. One effect is that the prototypes run into the centre of gravity of the entire data set. The objective function must have a local minimum in the centre of gravity that causes this behaviour of FCM. In this paper, the authors examine this problem and answer the following questions: How many dimensions are necessary to cause ill behaviour of FCM? How does the number of prototypes influence this behaviour? Why does the objective function have a local minimum in the centre of gravity? How must FCM be initialised to avoid the local minima in the centre of gravity? To understand the behaviour of the FCM algorithm and answer the above questions, the authors examine the values of the objective function and develop three test environments consisting of artificially generated data sets that provide a controlled setting. The paper concludes that FCM can only be applied successfully in high dimensions if the prototypes are initialised very close to the cluster centres.
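A minimal fuzzy c-means sketch makes the reported effect easy to reproduce: with weakly separated clusters in a few hundred dimensions, randomly initialized prototypes drift to the data's centre of gravity. The fuzzifier, data, and dimensionality below are illustrative choices, not the paper's test environments.

```python
import numpy as np

def fcm(X, prototypes, m=2.0, n_iter=100):
    # Standard fuzzy c-means: membership update followed by prototype update.
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - prototypes[None, :, :])**2).sum(axis=2) + 1e-12
        u = d2 ** (-1.0 / (m - 1))                 # memberships, up to normalization
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        prototypes = (w.T @ X) / w.sum(axis=0)[:, None]
    return prototypes

# Two weakly separated clusters in 400 dimensions, random prototype initialization.
rng = np.random.default_rng(0)
centres = np.stack([np.full(400, -0.5), np.full(400, 0.5)])
X = np.concatenate([c + rng.standard_normal((100, 400)) for c in centres])
V = fcm(X, rng.standard_normal((2, 400)))
print(np.linalg.norm(V - X.mean(axis=0), axis=1))  # small: prototypes near the centre of gravity
```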


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. R411-R427 ◽  
Author(s):  
Gang Yao ◽  
Nuno V. da Silva ◽  
Michael Warner ◽  
Di Wu ◽  
Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled down the time difference between the two first breaks of each shot into a series of time shifts, the maximum of which was less than half a cycle, for each trace in this shot. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with the intermediate data updates the background velocity model in the correct direction. Thus, it produces a background velocity model accurate enough for conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples with synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness for first arrivals.
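The shot-by-shot construction lends itself to a short sketch. Below, the first-break pick arrays, the sample-domain shifting via np.roll (which wraps at trace edges in this toy version), and the half-cycle cap in samples are simplifying assumptions around the procedure described above.

```python
import numpy as np

def intermediate_data(pred, fb_pred, fb_rec, half_cycle):
    # pred: (n_traces, n_samples) predicted shot gather; fb_*: first-break
    # picks in samples per trace; half_cycle: half a dominant period in samples.
    dt = fb_rec - fb_pred                        # traveltime misfit per trace
    max_shift = np.abs(dt).max()
    scale = min(1.0, (half_cycle - 1) / max_shift) if max_shift > 0 else 0.0
    shifts = np.rint(dt * scale).astype(int)     # scaled so the largest shift stays under half a cycle
    inter = np.empty_like(pred)
    for i, s in enumerate(shifts):
        inter[i] = np.roll(pred[i], s)           # shift predicted trace toward the recorded data
    return inter
```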


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. EN49-EN61
Author(s):  
Yudi Pan ◽  
Lingli Gao

Full-waveform inversion (FWI) of surface waves is becoming increasingly popular among shallow-seismic methods. Due to the huge amount of data and the high nonlinearity of the objective function, FWI usually requires heavy computational costs and may converge toward a local minimum. To mitigate these problems, we have reformulated FWI under a multiobjective framework and adopted a random objective waveform inversion (ROWI) method for surface-wave characterization. Three different measure functions were used, where the combination of one measure function with one shot independently provides one of the 3N objective functions (N is the total number of shots). We randomly chose and optimized one objective function at each iteration. We performed a synthetic test to compare the performance of ROWI with conventional FWI approaches, which showed that the convergence of ROWI is faster and more robust. We also applied ROWI to a field data set acquired in Rheinstetten, Germany. ROWI successfully reconstructed the main geologic feature, a refilled trench, in the final result. The comparison between the ROWI result and a migrated ground-penetrating radar profile further proved the effectiveness of ROWI in reconstructing the near-surface S-wave velocity model. We also ran the same field example using a poor initial model; in this case, conventional FWI failed, whereas ROWI still reconstructed the subsurface model to a fairly good level, highlighting the relatively low dependency of ROWI on the initial model.
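The random-objective loop itself is simple. In the minimal sketch below, `shots`, `measures`, and `gradient` (a callable returning the gradient of the chosen single-shot, single-measure misfit) are placeholders standing in for a real FWI engine, and the fixed step size is an assumption.

```python
import numpy as np

def rowi(model, shots, measures, gradient, n_iter=200, step=1e-2, seed=0):
    # Each iteration draws one of the 3N objectives (one measure function
    # paired with one shot) at random and takes a gradient step on it alone.
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        shot = shots[rng.integers(len(shots))]
        measure = measures[rng.integers(len(measures))]
        model = model - step * gradient(model, shot, measure)
    return model
```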

