Global optimization of generalized nonhyperbolic moveout approximation for long-offset normal moveout

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. S141-S149 ◽  
Author(s):  
Hanjie Song ◽  
Jinhai Zhang ◽  
Zhenxing Yao

The approximation of normal moveout is essential for estimating the anisotropy parameters of anisotropic media. The generalized nonhyperbolic moveout approximation (GMA) brings considerable improvement in accuracy compared with known analytical approximations. However, it is still prone to relatively large errors at long offsets and for large anisotropy parameters, which would degrade the inversion accuracy through error accumulation in velocity analysis. We optimize the constant coefficients for all combinations of the anellipticity parameter and the offset-to-depth ratio (O/D) within practical ranges. Theoretical analyses and numerical experiments indicate that the traditional optimization scheme, using a two-norm objective function solved by the least-squares method, cannot provide an error-constrained result; in addition, a direct optimization without extending the constant coefficients does not lead to a satisfactory accuracy improvement. We construct the objective function using the maximum norm and solve it with a simulated annealing algorithm; in addition, we extend the total number of constant coefficients in the GMA to achieve further significant improvements in accuracy. We use a normalized traveltime and offset so that the optimized constant coefficients are independent of the model. The optimized constant coefficients are obtained over a fine grid of the anellipticity parameter (0–0.5) and the O/D (0–4), which covers most practical ranges. Our optimization scheme does not increase the computational complexity but significantly improves the accuracy. The relative error after optimization is always below a given tolerable error threshold of 0.01%, compared with the 0.21% error of the original GMA. Scanning of the velocity and the anellipticity parameter indicates that the original GMA has relatively large errors; in contrast, the optimized GMA obtains more accurate results, which are essential for flattening the moveout and helpful for reducing error accumulation.
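
The core of the scheme is a maximum-norm (worst-case) objective minimized by simulated annealing over a normalized offset grid. The sketch below illustrates that idea only: t_reference and t_approx are toy stand-ins, not the paper's GMA, and SciPy's dual_annealing is used as a generic annealing driver.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy stand-ins (not the paper's GMA): a "reference" nonhyperbolic traveltime
# and a rational trial approximation with free coefficients c = (c0, c1, c2).
def t_reference(x):
    return np.sqrt(1.0 + x**2 - 0.3 * x**4 / (1.0 + 2.0 * x**2))

def t_approx(x, c):
    arg = 1.0 + x**2 + c[0] * x**4 / (1.0 + c[1] * x**2 + c[2] * x)
    return np.sqrt(np.maximum(arg, 1e-6))   # guard against unphysical trial coefficients

def max_norm_objective(c, x):
    """Maximum-norm (worst-case) relative traveltime error over the offset grid."""
    return np.max(np.abs(t_approx(x, c) - t_reference(x)) / t_reference(x))

x = np.linspace(0.0, 4.0, 401)               # normalized offset (O/D ratio 0-4)
result = dual_annealing(max_norm_objective,
                        bounds=[(-1.0, 1.0), (0.0, 4.0), (0.0, 4.0)],
                        args=(x,), seed=0)
print(result.x, result.fun)                  # optimized coefficients, worst-case error
```

Minimizing the maximum relative error, rather than its two-norm, is what lets the optimized coefficients respect a uniform error tolerance over the whole offset range.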

Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. T125-T135 ◽  
Author(s):  
Jin-Hai Zhang ◽  
Zhen-Xing Yao

Implicit finite-difference (FD) migration is unconditionally stable and is popular for handling strong velocity variations, but its extension to strongly transversely anisotropic media with a vertical symmetry axis (VTI media) is difficult. Traditional local optimizations generate the optimized coefficients for each pair of Thomsen anisotropy parameters independently, which can degrade results substantially for large anisotropy variations and leads to a huge coefficient table. We developed an implicit FD method using the analytic Taylor-series expansion and used a global optimization scheme to improve its accuracy at wide phase angles. We first extended the number of constant coefficients; then we relaxed the coefficient of the time-delay extrapolation term by tuning a small factor such that its error is less than 0.1%. Finally, we optimized the constant coefficients using a simulated annealing algorithm under the constraint that all the error functions on a fine grid of the whole anisotropic region simultaneously stay below 0.5%. The extended number of constant coefficients and the relaxed coefficient greatly enhance the flexibility of matching the dispersion relation and significantly improve the ability to handle strong anisotropy over a much wider range. Compared with traditional local optimization, our scheme needs no table or table lookup. For each order of the FD method, one group of optimized coefficients is enough to handle strong variations in velocity and anisotropy. More importantly, our global optimization scheme guarantees the accuracy over the full range of anisotropy parameters considered, no matter how strong the anisotropy is. For the globally optimized second-order FD method, the accurate phase angle reaches 58°, an increase of about 18°–22°; for the globally optimized fourth-order FD method, it reaches 77°, an increase of about 22°–27°.
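
The global constraint described here, that the error stays below 0.5% simultaneously over the whole anisotropic region, can be expressed as a single worst-case objective over a fine (epsilon, delta) grid. The function below is a hedged sketch of that idea; error_fn is a hypothetical stand-in for the mismatch between the implicit-FD operator and the exact dispersion relation, not the authors' formulation.

```python
import numpy as np

TOL = 0.005   # 0.5 % simultaneous error bound over the whole anisotropic region

def global_objective(coeffs, error_fn, eps_grid, delta_grid, angles):
    """Worst-case residual above the tolerance over a fine (epsilon, delta) grid.
    `error_fn(coeffs, eps, delta, angles)` is a hypothetical stand-in for the
    mismatch between the implicit-FD operator and the exact dispersion relation."""
    worst = 0.0
    for eps in eps_grid:
        for delta in delta_grid:
            err = np.max(np.abs(error_fn(coeffs, eps, delta, angles)))
            worst = max(worst, err)
    # The objective reaches zero only once every (eps, delta) pair satisfies the
    # 0.5 % bound simultaneously, so one coefficient set covers the whole region.
    return max(worst - TOL, 0.0)
```

A scalar objective of this form can be minimized with the same kind of simulated annealing driver as in the first sketch above.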


Geophysics ◽  
2019 ◽  
Vol 84 (3) ◽  
pp. S137-S147 ◽  
Author(s):  
Zheng He ◽  
Jinhai Zhang ◽  
Zhenxing Yao

Explicit finite-difference (FD) schemes are widely used in seismic exploration owing to their simple implementation and low computational cost. However, they suffer from strong artifacts when coarse grids are used for high-frequency applications. Optimizing the constant coefficients is a popular way to reduce spatial dispersion, but existing methods cannot guarantee that the bandwidth over which the dispersion error stays tolerable is the widest possible. We have applied the Remez exchange algorithm to optimize the constant coefficients of explicit FD schemes, for both conventional and staggered grids. The resulting dispersion errors alternate between maxima and minima across the passband of the filter, which is the equal-ripple property of the error magnitude that characterizes the optimal solution under the Chebyshev criterion. The Remez exchange algorithm determines the optimal coefficients of the FD method in only a few iterations, and the resulting operator has a wider bandwidth than previous solutions. It can handle arbitrary orders without being trapped in local minima. Its cost for solving the objective function is comparable to that of the least-squares method, but its bandwidth is wider; its accuracy is also higher than that of the maximum-norm solution obtained by simulated annealing, at a much lower computational cost. Theoretically, the equal-ripple error offers the widest bandwidth for suppressing numerical dispersion among all solutions obtained by constant-coefficient optimization; equivalently, we obtain a smaller error limit than traditional methods for the same bandwidth. This superiority is essential for reducing the total error accumulation, which helps avoid rapid error growth, especially for large-scale models and long-duration simulations.
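
The equal-ripple behavior appealed to here is the defining property of the Remez exchange. The following is a generic, simplified Remez sketch for a minimax polynomial fit of a 1-D function; it illustrates the exchange-and-equioscillation idea only and is not the paper's FD-coefficient formulation (in particular, this simplified exchange step does not enforce sign alternation of the ripples).

```python
import numpy as np

def remez_poly(f, a, b, deg, n_iter=30, grid_pts=4000):
    """Basic Remez exchange for a degree-`deg` minimax (equiripple) polynomial
    approximation of f on [a, b]; a generic illustration of the Chebyshev
    equal-ripple criterion."""
    xs = np.linspace(a, b, grid_pts)
    k = np.arange(deg + 2)
    ref = np.sort(0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * k / (deg + 1)))
    coef, E = None, 0.0
    for _ in range(n_iter):
        # Solve p(x_i) + (-1)^i * E = f(x_i) for deg+1 coefficients and ripple E.
        A = np.hstack([np.vander(ref, deg + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(ref))
        coef, E = sol[:-1], sol[-1]
        # New reference: extrema of the error on a dense grid (simplified exchange).
        err = f(xs) - np.polynomial.polynomial.polyval(xs, coef)
        interior = np.where(np.diff(np.sign(np.diff(err))) != 0)[0] + 1
        cand = np.unique(np.concatenate(([0], interior, [grid_pts - 1])))
        if len(cand) < deg + 2:
            break
        keep = cand[np.argsort(-np.abs(err[cand]))[:deg + 2]]
        ref = np.sort(xs[keep])
    return coef, abs(E)

# toy check: equiripple cubic fit of exp(x) on [0, 1]
coef, ripple = remez_poly(np.exp, 0.0, 1.0, deg=3)
print(coef, ripple)
```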


2017 ◽  
Vol 65 (4) ◽  
pp. 479-488 ◽  
Author(s):  
A. Boboń ◽  
A. Nocoń ◽  
S. Paszek ◽  
P. Pruski

The paper presents a method for determining the electromagnetic parameters of different synchronous generator models based on dynamic waveforms measured at power rejection. Such a test can be performed safely under normal operating conditions of a generator working in a power plant. The investigated models comprise a generator model expressed by the reactances and time constants of the steady, transient, and subtransient states in the d and q axes, as well as circuit models (types (3,3) and (2,2)) expressed by the resistances and inductances of the stator, excitation, and equivalent rotor damping-circuit windings. All these models approximately take into account the influence of magnetic core saturation. The least-squares method was used for parameter estimation: the objective function, defined as the mean square error between the measured waveforms and the waveforms calculated from the mathematical models, was minimized. A method of determining the initial values of those state variables that also depend on the estimated parameters is presented. To minimize the objective function, a gradient optimization algorithm that finds local minima for a selected starting point was used. To get closer to the global minimum, the calculations were repeated many times while respecting the inequality constraints on the estimated parameters. The paper presents the parameter estimation results and a comparison of the measured waveforms with those calculated from the final parameters for 200 MW and 50 MW turbogenerators.
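
The estimation procedure, a bounded gradient-based least-squares fit restarted from many random points, can be sketched as follows. This is a minimal illustration, not the authors' implementation: simulate is a hypothetical placeholder for the generator model response at power rejection (assumed to resolve the parameter-dependent initial state internally).

```python
import numpy as np
from scipy.optimize import minimize

def mean_square_error(params, t, measured, simulate):
    """Objective: mean squared error between measured and simulated waveforms."""
    return np.mean((simulate(t, params) - measured) ** 2)

def multistart_fit(t, measured, simulate, bounds, n_starts=50, seed=0):
    """Repeat a local gradient search from random starting points within the
    inequality constraints (bounds) and keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                       # random admissible start
        res = minimize(mean_square_error, x0, args=(t, measured, simulate),
                       method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best                                        # best local minimum found
```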


2020 ◽  
pp. 000370282097751
Author(s):  
Xin Wang ◽  
Xia Chen

Many spectra have a polynomial-like baseline. Iterative polynomial fitting (IPF) is one of the most popular methods for baseline correction of these spectra. However, the baseline estimated by IPF may have substantial error when the spectrum contains very strong peaks or has strong peaks located at its endpoints. First, IPF uses a temporary baseline estimated from the current spectrum to identify peak data points. If the current spectrum contains strong peaks, the temporary baseline deviates substantially from the true baseline, so some good baseline data points may be mistakenly identified as peak data points and artificially re-assigned a low value. Second, if a strong peak is located at an endpoint of the spectrum, the endpoint region of the estimated baseline may have significant error due to overfitting. This study proposes a search-algorithm-based baseline correction method (SA) that compresses the raw spectrum into a dataset with a small number of data points and then converts peak removal into a search problem, in the artificial intelligence (AI) sense, of minimizing an objective function by deleting peak data points. First, the raw spectrum is smoothed by a moving average to reduce noise and then divided into dozens of unequally spaced sections based on Chebyshev nodes. The minimum point of each section is then collected to form the dataset from which peaks are removed by the search algorithm. SA uses the mean absolute error (MAE) as the objective function because of its sensitivity to overfitting and its rapid calculation. The baseline correction performance of SA is compared with that of three other methods: the Lieber and Mahadevan–Jansen method, adaptive iteratively reweighted penalized least squares, and improved asymmetric least squares. Simulated and real FTIR and Raman spectra with polynomial-like baselines are employed in the experiments. The results show that for these spectra, the baseline estimated by SA has smaller error than those estimated by the three other methods.
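
A minimal sketch of such a pipeline is given below, assuming a moving-average smoother, Chebyshev-node section boundaries, per-section minima, and a greedy deletion search that repeatedly removes the point whose removal most lowers the MAE of a polynomial fit. The actual search strategy and parameters used in the paper may differ.

```python
import numpy as np

def chebyshev_sections(n_points, n_sections):
    """Unequally spaced section boundaries based on Chebyshev nodes mapped
    onto the index range [0, n_points - 1]."""
    k = np.arange(n_sections + 1)
    nodes = np.cos(np.pi * k / n_sections)            # in [-1, 1]
    edges = (1 - nodes) / 2 * (n_points - 1)          # map to [0, n_points - 1]
    return np.unique(edges.astype(int))

def baseline_search(y, n_sections=40, poly_deg=5, tol=1e-4):
    """Sketch of a search-based baseline estimate: smooth, keep per-section
    minima, then greedily delete points (peak candidates) while the MAE of a
    polynomial fit keeps improving."""
    x = np.arange(len(y), dtype=float)
    y_smooth = np.convolve(y, np.ones(9) / 9, mode="same")   # moving-average smoothing
    edges = chebyshev_sections(len(y), n_sections)
    idx = [lo + np.argmin(y_smooth[lo:hi])                    # one minimum per section
           for lo, hi in zip(edges[:-1], edges[1:])]
    xs, ys = x[idx], y_smooth[idx]
    def mae(xk, yk):
        c = np.polyfit(xk, yk, poly_deg)
        return np.mean(np.abs(np.polyval(c, xk) - yk))
    current = mae(xs, ys)
    while len(xs) > poly_deg + 2:
        # try deleting each remaining point; keep the deletion that lowers MAE most
        trials = [mae(np.delete(xs, i), np.delete(ys, i)) for i in range(len(xs))]
        best = int(np.argmin(trials))
        if current - trials[best] < tol:
            break
        xs, ys, current = np.delete(xs, best), np.delete(ys, best), trials[best]
    c = np.polyfit(xs, ys, poly_deg)
    return np.polyval(c, x)                 # estimated baseline over the full spectrum

# toy usage: quadratic baseline plus one strong Gaussian peak
grid = np.linspace(0.0, 1.0, 1000)
spectrum = 2 + 3 * grid - 2 * grid**2 + 5 * np.exp(-(grid - 0.5)**2 / 0.001)
corrected = spectrum - baseline_search(spectrum)
```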


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. C219-C227 ◽  
Author(s):  
Hanjie Song ◽  
Yingjie Gao ◽  
Jinhai Zhang ◽  
Zhenxing Yao

The approximation of normal moveout is essential for estimating the anisotropy parameters of transversely isotropic media with a vertical symmetry axis (VTI media). We have approximated the long-offset moveout using Padé approximation based on higher-order Taylor-series coefficients for VTI media. For a given anellipticity parameter, the best accuracy is obtained when the numerator is one order higher than the denominator (i.e., a [(n+1)/n] Padé approximant); thus, we suggest using the [4/3] and [7/6] orders for practical applications. The [7/6] Padé approximation can handle a much larger offset and a stronger anellipticity parameter. We have further compared the relative traveltime errors of the Padé approximation with those of several existing approximations. Our method shows great superiority over most existing methods across a wide range of offsets (normalized offset up to 2, or offset-to-depth ratio up to 4) and anellipticity parameters (0–0.5). The Padé approximation provides an attractive high-accuracy scheme with an error that is negligible within its convergence domain. This is important for reducing error accumulation, especially for deeper substructures.
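
For reference, the generic [m/n] Padé approximant used in this kind of construction matches the truncated Taylor series through order m + n; the paper's choice corresponds to m = n + 1. The notation below is generic and not copied from the paper:

```latex
% Generic [m/n] Pade approximant of a series f(x) = \sum_k c_k x^k:
\begin{equation*}
R_{[m/n]}(x) \;=\;
  \frac{\displaystyle\sum_{i=0}^{m} a_i\,x^{i}}
       {1 + \displaystyle\sum_{j=1}^{n} b_j\,x^{j}},
\qquad
R_{[m/n]}(x) - f(x) \;=\; O\!\left(x^{\,m+n+1}\right).
\end{equation*}
% With m = n + 1 (e.g., the [4/3] and [7/6] orders suggested in the abstract),
% the approximant is applied to the Taylor series of the squared traveltime
% t^2(x) for VTI media.
```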


Author(s):  
Safiye Turgay

The facility layout design problem considers the physical layout of departments with area requirements under restrictions such as material-handling costs, remoteness, and distance requests. Briefly, the facility layout problem concerns optimizing layout costs and working conditions. This paper proposes a new multi-objective simulated annealing algorithm for solving the unequal-area facility layout design problem. Different objective weights are generated with an entropy approach and used in the alternative layout designs. The multi-objective function takes into account both the objectives and the constraints. The suggested heuristic algorithm uses the multi-objective parameters for initialization; the entropy approach then determines the weights of the objective functions, after which the improved simulated annealing approach is applied to the whole developed model. The multi-objective simulated annealing algorithm is implemented to increase diversity and reduce the chance of the layout becoming trapped in local optima.
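
The entropy-based weighting mentioned here is commonly computed from an objective-value matrix of candidate solutions. The sketch below shows one standard form of that calculation and the resulting weighted-sum cost; it is a generic illustration under that assumption, not the paper's exact formulation.

```python
import numpy as np

def entropy_weights(F):
    """Entropy-based weights for m candidate layouts scored on k objectives.
    F is an (m, k) matrix of positive objective values (any direction/scale
    normalization is assumed to be done beforehand)."""
    P = F / F.sum(axis=0, keepdims=True)                      # column-wise proportions
    P = np.clip(P, 1e-12, None)                               # avoid log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(F.shape[0])     # entropy per objective
    d = 1.0 - e                                               # degree of diversification
    return d / d.sum()                                        # normalized weights

def weighted_objective(objectives, weights):
    """Scalarized multi-objective cost used inside a simulated annealing loop."""
    return float(np.dot(weights, objectives))

# toy usage: 4 candidate layouts scored on 3 objectives
F = np.array([[12.0, 3.0, 0.40],
              [10.0, 4.0, 0.50],
              [15.0, 2.5, 0.30],
              [11.0, 3.5, 0.45]])
w = entropy_weights(F)
print(w, weighted_objective(F[0], w))
```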


Author(s):  
Oscar Brito Augusto

This work explores a planning methodology for the deep-water deployment of anchor lines for offshore platforms and floating production systems, aiming at the optimization of operational resources by minimizing a multi-criteria objective function. A simulated annealing algorithm was used to optimize the objective function. As an additional advantage inherited from the proposed methodology, planning automation is achieved. Planning automation overcomes the traditional trial-and-error approach, in which an engineer, using an anchoring application, decides how much work wire and anchor line must be paid out from both the floating system and the supply boat, and which horizontal force must be applied to the line to settle the anchor on a previously defined target on the ocean floor. Cases from anchor deployments of MODUs operating in deep-water oil fields in Brazil are shown, demonstrating some of the potentialities of the proposed model.


2021 ◽  
Vol 11 (20) ◽  
pp. 9584
Author(s):  
Weihua Wei ◽  
Fangxu Peng ◽  
Yingli Li ◽  
Bingrui Chen ◽  
Yiqi Xu ◽  
...  

First, the force on an extrusion roller under actual working conditions was analyzed, and the contact stress between the roller shaft and the roller sleeve and the extrusion force between the roller sleeve and the material were calculated. Second, a static analysis of the extrusion roller was carried out using ANSYS software, which showed that stress concentration appears at the inner ring step of the roller sleeve. An optimization scheme that introduces a transition arc at the step of the contact surface between the roller shaft and the roller sleeve was therefore proposed, and a simulation test was carried out. Finally, with the minimization of the maximum equivalent stress of the extrusion roller as the objective function, the extrusion roller was further optimized using the direct optimization module in ANSYS Workbench. The optimization results show that the maximum equivalent stress is reduced by 29% and the maximum deformation by 28%. The optimization scheme thus meets the strength and deformation requirements of the extrusion roller design, can effectively improve the bearing capacity of the extrusion roller, and reduces its production cost. This can provide a reference for the design of roller presses.


2021 ◽  
Vol 18 (6) ◽  
pp. 8314-8330
Author(s):  
Ningning Zhao ◽  
◽  
Mingming Duan

In this study, a multi-objective optimized mathematical model of stand pre-allocation is constructed with the shortest travel distance for passengers, the lowest cost for airlines, and the efficiency of stand usage as the overall objectives. The model is applied to actual data for 12 flights at Lanzhou Zhongchuan Airport and solved with a simulated annealing algorithm. The results show that the total objective function of the model's allocation scheme is reduced by 40.67% compared with the airport's actual allocation scheme; the distance traveled by passengers is reduced by a total of 4512 steps, one stand is saved, the efficiency of stand use is increased by 31%, and airline cost is reduced by 300 RMB. In summary, the constructed model has high practical value and is expected to be used for airport stand pre-allocation decisions in the future.
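
A minimal sketch of the kind of simulated-annealing assignment search implied here is shown below; the cost function, move rule, and cooling schedule are assumptions made for illustration, not the paper's model.

```python
import math
import random

def anneal_assignment(n_flights, n_stands, cost, t0=1.0, t_min=1e-3, alpha=0.95, moves=200):
    """Minimal simulated-annealing sketch for flight-to-stand pre-allocation.
    `cost(assign)` is a hypothetical weighted-sum objective combining passenger
    walking distance, airline cost, and stand-usage efficiency; feasibility
    (no overlapping flights on one stand) is assumed to be penalized inside it."""
    assign = [random.randrange(n_stands) for _ in range(n_flights)]
    cur = cost(assign)
    best, best_cost = list(assign), cur
    t = t0
    while t > t_min:
        for _ in range(moves):
            i = random.randrange(n_flights)
            old = assign[i]
            assign[i] = random.randrange(n_stands)        # neighbor: reassign one flight
            new = cost(assign)
            if new < cur or random.random() < math.exp((cur - new) / t):
                cur = new                                  # accept (always if better)
                if new < best_cost:
                    best, best_cost = list(assign), new
            else:
                assign[i] = old                            # reject: undo the move
        t *= alpha                                         # geometric cooling
    return best, best_cost
```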


Author(s):  
Małgorzata Rabiej ◽  
Stanisław Rabiej

To decompose a wide-angle X-ray diffraction (WAXD) curve of a semi-crystalline polymer into crystalline peaks and amorphous halos, a theoretical best-fitted curve, i.e. a mathematical model, is constructed. In fitting the theoretical curve to the experimental one, various functions can be used to quantify and minimize the deviations between the curves. The analyses and calculations performed in this work have proved that the quality of the model, its parameters and consequently the information on the structure of the investigated polymer are considerably dependent on the shape of an objective function. It is shown that the best models are obtained employing the least-squares method in which the sum of squared absolute errors is minimized. On the other hand, the methods in which the objective functions are based on the relative errors do not give a good fit and should not be used. The comparison and evaluation were performed using WAXD curves of seven polymers: isotactic polypropylene, polyvinylidene fluoride, cellulose I, cellulose II, polyethylene, polyethylene terephthalate and polyamide 6. The methods were compared and evaluated using statistical tests and measures of the quality of fitting.
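
The contrast drawn here is between objective functions built on absolute residuals and on relative residuals; in generic notation (assumed here, not taken verbatim from the paper):

```latex
% Objective functions compared for fitting the model curve \hat{y}_i to the
% measured WAXD intensities y_i:
\begin{align*}
S_{\mathrm{abs}} &= \sum_{i} \left( y_i - \hat{y}_i \right)^2
  && \text{(least squares on absolute errors; gives the best models)} \\
S_{\mathrm{rel}} &= \sum_{i} \left( \frac{y_i - \hat{y}_i}{y_i} \right)^2
  && \text{(relative-error criterion; found to give poorer fits)}
\end{align*}
```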

