Concerted, Computing-Intense Novel MFL Approach Ensuring Reliability and Reducing the Need for Dig Verification

Author(s):  
Johannes Palmer ◽  
Aaron Schartner ◽  
Andrey Danilov ◽  
Vincent Tse

Abstract: Magnetic Flux Leakage (MFL) is a robust technology with high data coverage. Decades of continuous sizing improvement have led to industry-accepted sizing reliability, and the continuous optimization of sizing processes ensures accurate results in categorizing metal-loss features. However, the identified selection of critical anomalies is not always optimal; anomalies are sometimes dug up too early or unnecessarily. This can happen when the feature type in the field (the true metal-loss shape) is incorrectly identified, which affects sizing and tolerance; misidentified feature types can also cause false under-calls. Today, complex empirical formulas together with multifaceted lookup tables fed by pull tests, synthetic data, dig verifications, machine learning, artificial intelligence and, last but not least, human expertise translate MFL signals into metal-loss assessments with high levels of success. Nevertheless, two fundamental elements limit the possible MFL sizing optimization. One is the empirical character of the signal interpretation; the other is the implicitly induced simplification of data and results. The reason this route has been followed for so many years is simple: it is methodologically impossible to calculate the metal-loss source geometry directly from the signals, and the sheer number of possible relevant geometries is so large that simplification is necessary and inevitable. The second methodological limitation is the ambiguity of the signal, which reduces the target of metal-loss sizing to the most probable solution; even under the best conditions, the most probable solution is not necessarily the correct one. This paper describes a novel, fundamentally different approach as a basic alternative to the common MFL-analysis approach described above. A calculation process is presented that overcomes the empirical nature of traditional approaches by using a result-optimization method that relies on intense computing and avoids any simplification. Additionally, the strategy to overcome MFL ambiguity is shown. Together with the operator, detailed blind-test examples demonstrate the level of detail, repeatability and accuracy of this method, with the potential to reduce tool tolerance, increase sizing accuracy, increase growth-rate accuracy, and help optimize the dig program to target critical features with greater confidence.
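
The paper does not disclose its calculation process, so the following Python sketch illustrates only the general forward-model inversion idea the abstract alludes to: propose candidate defect geometries, simulate the MFL signal each would produce, and keep the candidate whose simulated signal best matches the measurement. Both functions are illustrative placeholders; in particular, `simulate_mfl` stands in for a real magnetostatic solver.

```python
import numpy as np

def simulate_mfl(depth_map):
    """Hypothetical forward model: defect depth map -> predicted MFL signal."""
    # A real implementation would solve the magnetostatic problem (e.g. FEM);
    # a simple smoothing filter stands in purely as a placeholder here.
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, "same"),
                               1, depth_map)

def invert_mfl(measured, iters=20000, seed=0):
    """Random-search inversion: perturb a candidate geometry, keep improvements."""
    rng = np.random.default_rng(seed)
    candidate = np.zeros_like(measured)
    best_misfit = np.linalg.norm(simulate_mfl(candidate) - measured)
    for _ in range(iters):
        trial = np.clip(candidate + rng.normal(0.0, 0.05, candidate.shape),
                        0.0, 1.0)                  # depth as wall-thickness fraction
        misfit = np.linalg.norm(simulate_mfl(trial) - measured)
        if misfit < best_misfit:                   # keep only improving geometries
            candidate, best_misfit = trial, misfit
    return candidate
```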

Medical image registration has important value in actual clinical applications. From traditional, time-consuming iterative similarity optimization, to faster supervised deep learning, to today's unsupervised learning, the continuous refinement of registration strategies has made registration increasingly feasible in clinical practice. This survey mainly focuses on unsupervised learning methods and introduces the latest solutions for different registration relationships. Inter-modality registration is a more challenging topic, and the application of unsupervised learning to it is the focus of this article. In addition, this survey proposes ideas for future research directions.
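
As a concrete illustration of the unsupervised formulation the survey discusses, the sketch below shows the typical training loss of a registration network: the network's predicted displacement field warps the moving image, and the loss combines an image-similarity term with a smoothness penalty, so no ground-truth deformations are required. This is a generic example, not taken from any particular paper; for inter-modality pairs the MSE term would be replaced by a modality-robust similarity such as mutual information.

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a (N, 1, H, W) image with a (N, 2, H, W) displacement field."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=moving.dtype),
                            torch.arange(w, dtype=moving.dtype), indexing="ij")
    gx = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0   # normalize coords to [-1, 1]
    gy = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)           # (N, H, W, 2) in (x, y) order
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(fixed, moving, flow, lam=0.1):
    similarity = F.mse_loss(warp(moving, flow), fixed)   # intra-modality term
    dx = flow[..., 1:] - flow[..., :-1]                  # finite differences of
    dy = flow[..., 1:, :] - flow[..., :-1, :]            # the displacement field
    return similarity + lam * (dx.pow(2).mean() + dy.pow(2).mean())
```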


2020 ◽  
Vol 34 (04) ◽  
pp. 3842-3849
Author(s):  
Jicong Fan ◽  
Yuqian Zhang ◽  
Madeleine Udell

This paper develops new methods to recover the missing entries of a high-rank or even full-rank matrix when the intrinsic dimension of the data is low compared to the ambient dimension. Specifically, we assume that the columns of a matrix are generated by polynomials acting on a low-dimensional intrinsic variable, and wish to recover the missing entries under this assumption. We show that we can identify the complete matrix of minimum intrinsic dimension by minimizing the rank of the matrix in a high-dimensional feature space. We develop a new formulation of the resulting problem using the kernel trick together with a new relaxation of the rank objective, and propose an efficient optimization method. We also show how to use our methods to complete data drawn from multiple nonlinear manifolds. Comparative studies on synthetic data, subspace clustering with missing data, motion-capture data recovery, and transductive learning verify the superiority of our methods over the state-of-the-art.
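
A toy sketch of the central idea, under simplifying assumptions: if the columns of X are generated by polynomials of a low-dimensional variable, an explicit degree-2 polynomial feature matrix phi(X) is low-rank even when X itself is full-rank, so the missing entries can be chosen to minimize a rank surrogate of phi(X). The authors' actual method uses the kernel trick to avoid forming phi(X) and a sharper rank relaxation; the plain nuclear norm and explicit features below are substitutes for illustration only.

```python
import torch

def poly_features(x):
    """Explicit degree-2 polynomial feature map applied to each column of x."""
    quad = torch.einsum("in,jn->ijn", x, x).reshape(-1, x.shape[1])
    return torch.cat([x, quad], dim=0)

def complete(x_obs, mask, iters=2000, lr=0.1):
    """Fill unobserved entries (mask == 0) by nuclear-norm descent on phi(X)."""
    z = torch.zeros_like(x_obs, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        x = torch.where(mask.bool(), x_obs, z)     # observed entries stay fixed
        loss = torch.linalg.matrix_norm(poly_features(x), ord="nuc")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.where(mask.bool(), x_obs, z).detach()
```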


2019 ◽  
Vol 2019 ◽  
pp. 1-23 ◽  
Author(s):  
Amir Shabani ◽  
Behrouz Asgarian ◽  
Saeed Asil Gharebaghi ◽  
Miguel A. Salido ◽  
Adriana Giret

In this paper, a new optimization algorithm called the search and rescue optimization algorithm (SAR) is proposed for solving single-objective continuous optimization problems. SAR is inspired by the explorations carried out by humans during search and rescue operations. The performance of SAR was evaluated on fifty-five optimization functions, including a set of classic benchmark functions and a set of modern CEC 2013 benchmark functions from the literature. The results were compared with those of twelve algorithms, including well-known optimization algorithms, recent variants of GA, DE, CMA-ES, and PSO, and recent metaheuristics. The Wilcoxon signed-rank test was used for some of the comparisons, and the convergence behavior of SAR was investigated. The statistical results indicated that SAR is highly competitive with the compared algorithms. To evaluate SAR on real-world optimization problems, it was also applied to three engineering design problems; the results revealed that SAR finds more accurate solutions with fewer function evaluations than the other existing algorithms. Thus, the proposed algorithm can be considered an efficient optimization method for real-world optimization problems.
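
SAR's update rules are not described in the abstract, so they are not reproduced here; instead, the sketch below illustrates the evaluation protocol: repeated independent runs per benchmark function, followed by a Wilcoxon signed-rank test on the paired scores. SciPy's differential evolution stands in for both contenders.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import wilcoxon

# A few classic benchmarks (the paper evaluates fifty-five functions).
sphere     = lambda x: float(np.sum(x**2))
rosenbrock = lambda x: float(np.sum(100*(x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))
rastrigin  = lambda x: float(10*x.size + np.sum(x**2 - 10*np.cos(2*np.pi*x)))
griewank   = lambda x: float(1 + np.sum(x**2)/4000
                             - np.prod(np.cos(x/np.sqrt(np.arange(1, x.size+1)))))
benchmarks = [sphere, rosenbrock, rastrigin, griewank]

def mean_best(func, seed0, runs=10, dim=10):
    """Mean best objective value over independent runs of one optimizer."""
    return float(np.mean([differential_evolution(func, [(-5.0, 5.0)]*dim,
                                                 seed=seed0 + r, maxiter=200).fun
                          for r in range(runs)]))

scores_a = [mean_best(f, seed0=0) for f in benchmarks]    # "algorithm A"
scores_b = [mean_best(f, seed0=100) for f in benchmarks]  # "algorithm B"
stat, p = wilcoxon(scores_a, scores_b)   # paired, nonparametric comparison
print(f"Wilcoxon statistic = {stat}, p-value = {p:.3f}")
```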


Author(s):  
Usman K. Choudhary ◽  
Rachel Lee ◽  
Robert Worthingham

The NoPig system is an above-ground metal-loss detection tool utilizing magnetics. Sensors at ground level detect disturbances in the magnetic field around the pipeline generated by impressed alternating current (AC) signals. This tool is intended for use on segments of pipeline that are considered unpiggable. Previous field trials indicated the tool was capable of detecting metal loss in small-diameter seamless pipe, while trials on electric resistance weld (ERW) or double submerged arc weld (DSAW) pipe were inconclusive. Modifications have been made to the NoPig hardware and analysis software to correct for the non-uniform magnetic fields produced by seamed pipe and girth welds. The study reported in this paper is a field trial of the modified inspection system. Pipelines of nominal pipe size (NPS) 12 and 16 that had recently been inspected in line were selected for survey. Pipeline segments where significant metal loss was detected by inline inspection (ILI) were selected for the blind test. Eight-hundred-meter sections of pipeline were surveyed at each of these locations to ensure a range of pipe conditions was included. After all surveys were complete, significant features were excavated and actual measurements were obtained. This paper describes the field inspection program as well as the analysis process used to verify the detection capabilities of the modified NoPig system. The results include discussion of the positional accuracy, detection capability, and threshold of the system. This analysis will help determine whether the NoPig system is a suitable alternative for assessing the integrity of unpiggable pipeline segments.


Author(s):  
John G. Michopoulos ◽  
Athanasios Iliopoulos

Motivated by the need to determine the mechanical, electrical and thermal properties of contact surfaces between deformable materials that conduct electricity and heat, we present a method for characterizing certain topological characteristics of rough surfaces. The inverse identification of a set of parameters associated with the parametric representation of any rough surface from profilometric data is described, in contrast with standard one-parameter approaches. The description of the surface topography parametrization is first given in terms of a function that enables the generation of synthetic data. Objective functions are created based on both the profilometric evaluations of the parametric representation of the surface and its power spectrum. A statistical Monte Carlo-based optimization method is implemented to determine the characteristic parameters needed for further analysis, which leads to the determination of other physical properties of the surface. Numerical application of the method validates the efficiency and accuracy of the proposed approach.
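
The surface parametrization and objective below are assumed stand-ins (the paper's parametric representation is richer), but they show the structure of the approach: a generator that produces synthetic profiles from a small parameter set, an objective mixing profile and power-spectrum misfits, and a Monte Carlo search over the parameters.

```python
import numpy as np

def synth_profile(rms, corr_len, n=512, seed=0):
    """Generate a 1-D rough profile: Gaussian-filtered white noise."""
    rng = np.random.default_rng(seed)
    x = np.arange(n) - n // 2
    prof = np.convolve(rng.normal(size=n), np.exp(-0.5*(x/corr_len)**2), "same")
    return rms * prof / prof.std()

def objective(params, measured):
    """Mix profile misfit with log power-spectrum misfit."""
    prof = synth_profile(*params, n=measured.size)
    spec_m = np.abs(np.fft.rfft(measured))**2
    spec_p = np.abs(np.fft.rfft(prof))**2
    return (np.mean((prof - measured)**2)
            + np.mean((np.log1p(spec_p) - np.log1p(spec_m))**2))

def monte_carlo_fit(measured, samples=5000, seed=1):
    """Random search over (rms roughness, correlation length)."""
    rng = np.random.default_rng(seed)
    best, best_val = None, np.inf
    for _ in range(samples):
        params = (rng.uniform(0.1, 5.0), rng.uniform(1.0, 50.0))
        val = objective(params, measured)
        if val < best_val:
            best, best_val = params, val
    return best
```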


2020 ◽  
Vol 39 (3) ◽  
pp. 3183-3193
Author(s):  
Jieya Li ◽  
Liming Yang

Classical principal component analysis (PCA) is not sparse, since it is based on the L2-norm, which is also prone to be adversely affected by outliers and noise. To address this problem, a sparse robust PCA framework is proposed that combines the minimization of a zero-norm regularization term with the maximization of an Lp-norm (0 < p ≤ 2) PCA objective. Furthermore, a continuous optimization method, the DC (difference of convex functions) programming algorithm (DCA), is developed to solve the proposed problem. The resulting algorithm (called DC-LpZSPCA) converges linearly. In addition, by choosing different values of p, the model remains robust and is applicable to different data types. Numerical experiments are conducted on artificial data sets and the Yale face data set. The results show that the proposed method maintains good sparsity and robustness to outliers.
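
DC-LpZSPCA itself is not reproduced here, but the DCA mechanism it relies on can be shown on a toy problem: to minimize g(x) − h(x) with both parts convex, DCA repeatedly takes a subgradient of h at the current iterate and solves the resulting convex subproblem. Below, a capped-L1 surrogate of the zero-norm is split this way, and the subproblem has a closed-form soft-thresholding solution.

```python
import numpy as np

def soft_threshold(v, c):
    return np.sign(v) * np.maximum(np.abs(v) - c, 0.0)

def dca_capped_l1(a, lam=1.0, t=0.5, iters=50):
    """Minimize 0.5*||x - a||^2 + lam * sum(min(|x_i| / t, 1)) via DCA.

    DC split:  g(x) = 0.5*||x - a||^2 + (lam/t)*||x||_1     (convex)
               h(x) = (lam/t) * sum(max(|x_i| - t, 0))      (convex)
    """
    x = a.copy()
    for _ in range(iters):
        s = (lam / t) * np.sign(x) * (np.abs(x) > t)  # subgradient of h at x
        x = soft_threshold(a + s, lam / t)            # argmin of g(x) - <s, x>
    return x
```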


2016 ◽  
Vol 7 (4) ◽  
pp. 23-51 ◽  
Author(s):  
Mahamed G.H. Omran ◽  
Maurice Clerc

This paper proposes a new population-based simplex method for continuous function optimization. The proposed method, called Adaptive Population-based Simplex (APS), is inspired by the Low-Dimensional Simplex Evolution (LDSE) method. LDSE is a recent optimization method that uses the reflection and contraction steps of the Nelder-Mead simplex method. Like LDSE, APS uses a population from which different simplexes are selected. In addition, a local search is performed using a hyper-sphere generated around the best individual in a simplex. APS is a tuning-free approach that is easy to code and easy to understand. APS is compared with five state-of-the-art approaches on 23 functions, five of which are quasi-real-world problems. The experimental results show that APS generally performs better than the other methods on the test functions. In addition, a scalability study was conducted, and the results show that APS works well on relatively high-dimensional problems.
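
The two ingredients named in the abstract can be sketched in isolation (APS combines them with its own adaptive, tuning-free rules, which are not reproduced here): a Nelder-Mead-style reflection-or-contraction move on a simplex, and a hyper-sphere local search around the simplex's best individual.

```python
import numpy as np

def simplex_step(simplex, f):
    """One reflection-or-contraction move on a (d+1, d) simplex."""
    simplex = simplex[np.argsort([f(p) for p in simplex])]  # best point first
    centroid = simplex[:-1].mean(axis=0)           # centroid excluding worst
    reflected = centroid + (centroid - simplex[-1])
    if f(reflected) < f(simplex[-2]):              # accept the reflection
        simplex[-1] = reflected
    else:                                          # otherwise contract inward
        simplex[-1] = centroid + 0.5 * (simplex[-1] - centroid)
    return simplex

def hypersphere_search(best, radius, f, rng, tries=10):
    """Sample candidates on a sphere around the best individual."""
    for _ in range(tries):
        direction = rng.normal(size=best.size)
        cand = best + radius * direction / np.linalg.norm(direction)
        if f(cand) < f(best):
            best = cand
    return best
```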


2020 ◽  
Vol 62 (8) ◽  
pp. 1107-1120
Author(s):  
Pedro Miraldo ◽  
João R. Cardoso

Abstract: This paper addresses the problem of finding the closest generalized essential matrix from a given 6 × 6 matrix, with respect to the Frobenius norm. To the best of our knowledge, this nonlinear constrained optimization problem has not been addressed in the literature yet. Although it can be solved directly, it involves a large number of constraints, and any optimization method to solve it would require much computational effort. We start by deriving a couple of unconstrained formulations of the problem. After that, we convert the original problem into a new one, involving only orthogonal constraints, and propose an efficient algorithm of steepest descent type to find its solution. To test the algorithms, we evaluate the methods with synthetic data and conclude that the proposed steepest descent-type approach is much faster than the direct application of general optimization techniques to the original formulation with 33 constraints and to the unconstrained ones. To further motivate the relevance of our method, we apply it in two pose problems (relative and absolute) using synthetic and real data.
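
As background, a generalized essential matrix has the 6 × 6 block structure [[E, R], [R, 0]] with R a rotation and E = [t]x R. The heuristic below only illustrates that structure by a crude block-wise projection (snap the rotation with an SVD, then read t off the skew part of E R^T); it is not the authors' steepest-descent algorithm and does not minimize the Frobenius distance exactly.

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def project_generalized_essential(a):
    """Heuristic block-wise projection of an arbitrary 6x6 matrix."""
    r_blk = 0.5 * (a[:3, 3:] + a[3:, :3])          # average the two R copies
    u, _, vt = np.linalg.svd(r_blk)                # closest rotation (Procrustes)
    r = u @ np.diag([1.0, 1.0, np.linalg.det(u @ vt)]) @ vt
    m = a[:3, :3] @ r.T                            # if exact, m would equal [t]x
    t = 0.5 * np.array([m[2, 1] - m[1, 2],         # skew-symmetric part -> t
                        m[0, 2] - m[2, 0],
                        m[1, 0] - m[0, 1]])
    out = np.zeros((6, 6))
    out[:3, :3], out[:3, 3:], out[3:, :3] = skew(t) @ r, r, r
    return out
```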

