On: “An Automatic Least‐Squares Multimodel Method for Magnetic Interpretation” by Peter H. McGrath and Peter J. Hood (GEOPHYSICS, April 1973, p. 349–358)

Geophysics, 1974, Vol. 39 (5), pp. 692–693
Author(s): M. Al‐Chalabi

McGrath and Hood present a magnetic interpretation method whereby the search for a solution is carried out in the (hyper) space of n parameters defining the shape and position of an assumed model. The problem is an optimization problem and should be viewed within the general context of nonlinear optimization techniques. McGrath and Hood simply present one optimization method. The usefulness of individual methods is limited. One could similarly propose the use of the method of rotating coordinates (Rosenbrock, 1960), the “complex” method (Box, 1965), Davidon’s methods (Fletcher and Powell, 1963; Stewart, 1967; Davidon, 1969), etc. We currently have a wealth of these methods at our disposal. In fact, the use of these methods for magnetic interpretation has already been presented (Al‐Chalabi, 1970). As this and subsequent work indicated (Al‐Chalabi, 1972), these methods should be used as an integral group for interpreting magnetic and gravity anomalies. The exclusive use of individual methods is inefficient. Studies performed on objective functions used in magnetic and gravity interpretation have shown that the behavior of these functions in the parameter hyperspace is extremely complicated. Consequently, the search for a solution requires different strategies at different stages between the initial estimate and the ultimate solution (Al‐Chalabi, 1970, 1972).
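The staged strategy Al‐Chalabi advocates, different search methods at different stages between initial estimate and solution, can be sketched in miniature: a coarse random search where the misfit surface is complicated, handing over to a local descent near the solution. A minimal Python sketch on a synthetic two-parameter misfit (the bell-shaped `anomaly` model and all numbers are hypothetical, not the authors' formulation):

```python
import random

def anomaly(x, depth, moment):
    # Simplified symmetric profile standing in for a model anomaly curve
    # (hypothetical; not McGrath and Hood's model)
    return moment * depth / (x * x + depth * depth)

def misfit(params, xs, obs):
    depth, moment = params
    return sum((anomaly(x, depth, moment) - o) ** 2 for x, o in zip(xs, obs))

def two_stage_search(xs, obs, seed=0):
    rng = random.Random(seed)
    # Stage 1: coarse random search over a broad (depth, moment) box,
    # appropriate far from the solution where the misfit surface is complicated
    best = min(((rng.uniform(0.5, 10.0), rng.uniform(0.5, 10.0))
                for _ in range(2000)),
               key=lambda p: misfit(p, xs, obs))
    # Stage 2: shrinking-step coordinate descent, efficient near the solution
    step = 0.5
    while step > 1e-7:
        improved = False
        for i in (0, 1):
            for s in (step, -step):
                cand = list(best)
                cand[i] += s
                if misfit(cand, xs, obs) < misfit(best, xs, obs):
                    best, improved = tuple(cand), True
        if not improved:
            step *= 0.5
    return best

xs = [i / 2.0 - 5.0 for i in range(21)]
obs = [anomaly(x, 2.0, 3.0) for x in xs]   # synthetic "observed" profile
depth, moment = two_stage_search(xs, obs)
```

On this synthetic profile the coarse stage supplies the initial estimate that the local stage then polishes; neither strategy is efficient across the whole search.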

1996, Vol. 118 (4), pp. 733–740
Author(s): Eungsoo Shin, D. A. Streit

A new spring balancing technique, called a two-phase optimization method, is presented. Phase 1 uses harmonic synthesis to provide a system configuration which achieves an approximation to a desired dynamic system response. Phase 2 uses results of harmonic synthesis as initial conditions for dynamic system optimization. Optimization techniques compensate for nonlinearities in machine dynamics. Example applications to robot manipulators and to walking machine legs are presented and discussed.
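The two-phase idea can be illustrated on a toy model. Below, a hypothetical nonlinear spring-torque law (not Shin and Streit's formulation) is first matched by harmonic synthesis, inverting its leading Fourier coefficients, and that estimate then seeds a local numerical refinement that absorbs the nonlinearity the harmonic model ignores:

```python
import math

def spring_torque(theta, k, r):
    # Hypothetical nonlinear balancing-torque model; for small r it reduces
    # to the pure harmonic k * r * sin(theta)
    return k * r * math.sin(theta) / (1.0 + r * math.cos(theta))

N = 360
thetas = [2.0 * math.pi * j / N for j in range(N)]
target = [spring_torque(t, 2.0, 0.3) for t in thetas]   # desired response

def residual(k, r):
    return sum((spring_torque(t, k, r) - y) ** 2 for t, y in zip(thetas, target))

# Phase 1: harmonic synthesis. For small r the model expands to
# k*r*sin(theta) - (k*r**2/2)*sin(2*theta), so the first two Fourier sine
# coefficients b1 ~ k*r and b2 ~ -k*r**2/2 can be inverted for (k, r).
b1 = 2.0 / N * sum(y * math.sin(t) for t, y in zip(thetas, target))
b2 = 2.0 / N * sum(y * math.sin(2.0 * t) for t, y in zip(thetas, target))
r0 = -2.0 * b2 / b1
k0 = b1 / r0

# Phase 2: shrinking-step descent from the phase-1 initial conditions
k, r, step = k0, r0, 0.1
while step > 1e-8:
    moved = False
    for dk, dr in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
        if residual(k + dk, r + dr) < residual(k, r):
            k, r, moved = k + dk, r + dr, True
    if not moved:
        step *= 0.5
```

Phase 1 alone lands close to the generating parameters (k = 2, r = 0.3); phase 2 only has to correct the higher-order terms the two-harmonic inversion neglects.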


1988
Author(s): Wang Qinghuan, Sun Zhiqin

A new procedure for the computer-aided design of a centrifugal compressor stage, used to determine its overall dimensions, is described in this paper. By use of the complex method, an arbitrary number of optimization variables can be specified, removing the hidden danger of local optima that stems from optimizing only a few variables, for example two or three. The procedure is applicable to any complicated implicit nonlinear objective function and ensures that a true optimum solution is established. Numerical calculations have been carried out with the computer program described here to check the ability of the optimization method; the results agree fairly well with those obtained by experiment.
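For reference, the complex method of Box (1965) that the authors employ can be sketched in a few lines. This is a generic minimal version on a bound-constrained test function, not the authors' compressor program:

```python
import random

def box_complex(f, lo, hi, k=None, iters=300, seed=1):
    # Minimal sketch of Box's (1965) "complex" method: a bounded cousin of
    # the simplex method that maintains k >= n + 1 feasible points
    rng = random.Random(seed)
    n = len(lo)
    k = k or 2 * n
    pts = [[rng.uniform(lo[i], hi[i]) for i in range(n)] for _ in range(k)]
    for _ in range(iters):
        pts.sort(key=f)
        worst = pts[-1]
        cen = [sum(p[i] for p in pts[:-1]) / (k - 1) for i in range(n)]
        # Over-reflect the worst point through the centroid (factor 1.3),
        # clamping the result back inside the bounds
        cand = [min(max(cen[i] + 1.3 * (cen[i] - worst[i]), lo[i]), hi[i])
                for i in range(n)]
        # If the reflected point is still worst, pull it toward the centroid
        for _ in range(50):
            if f(cand) <= f(pts[-2]):
                break
            cand = [(cand[i] + cen[i]) / 2.0 for i in range(n)]
        pts[-1] = cand
    return min(pts, key=f)

best = box_complex(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
                   lo=[0.0, 0.0], hi=[5.0, 5.0])
```

Because only function values and bounds are used, the method suits implicit objectives like the stage-design calculation, where no derivatives are available.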


Cardiovascular disease (CVD) is perhaps the leading cause of morbidity and mortality worldwide, and the prediction of heart disease is regarded as one of the most important topics in clinical data research. The amount of data in the healthcare industry is massive, and the data mining process transforms this large store of raw medical data into meaningful information that can support informed decisions and predictions. Several recent investigations have applied data-exploration procedures to CVD prediction; however, only very few studies have examined which elements play a crucial role in predicting CVDs. It is important to select the combination of correct and significant features that can improve the performance of forecasting models. This study aims to identify meaningful features and data mining procedures that can improve the accuracy of CVD prediction. Predictive models were built using distinctive combinations of feature selection, modified teaching-learning-based optimization techniques, SVM, and boosting classifiers. The proposed strategy achieves high-precision results compared with existing classification methods.


1992, Vol. 114 (4), pp. 524–531
Author(s): J. S. Agapiou

The optimization problem for multistage machining systems has been investigated. Due to uneven time requirements at different stages in manufacturing, there could be idle times at various stations. It may be advantageous to reduce the values of machining parameters in order to reduce the cost at stations that require less machining time. However, optimization techniques available through the literature do not effectively utilize the idle time for the different stations generated during the balancing of the system. Proposed in this paper is an optimization method which utilizes the idle time to the full extent at all machining stations, with the intention of improving tool life and thus achieving cost reduction. The mathematical analysis considers the optimization of the production cost with an equality constraint of zero idle time for the stations with idle time. Physical constraints regarding the cutting parameters, force, power, surface finish, etc., as they arise in different operations, are also considered. The aforementioned problem has been theoretically analyzed and a computational algorithm developed. The advantages and effectiveness of the proposed approach are finally established through an example.
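The economics the paper exploits can be illustrated with a single station under a Taylor tool-life law (all constants here are hypothetical, not the paper's data): a non-bottleneck station forced to keep up with a fast line runs above its cost-optimal cutting speed, and the idle time created by line balancing lets it slow back down, though no further than the zero-idle-time speed:

```python
def station_cost(v, L=100.0, c0=0.5, ct=2.0, C=400.0, n=0.25):
    # Cost per part at one station (hypothetical numbers): machining time
    # t = L / v, Taylor tool life T = (C / v)**(1/n), and
    # cost = overhead c0 * t plus prorated tooling ct * t / T
    t = L / v
    tool_life = (C / v) ** (1.0 / n)
    return c0 * t + ct * t / tool_life

def grid_min(f, lo, hi, steps=20000):
    # Simple grid search; adequate for a smooth one-variable cost curve
    return min((lo + (hi - lo) * i / steps for i in range(steps + 1)), key=f)

v_opt = grid_min(station_cost, 50.0, 400.0)   # unconstrained cost optimum

# The station currently runs at 250 to match a fast line; after balancing,
# its share of the cycle time grows to 0.44, so it may slow down, but only
# to the zero-idle speed L / cycle (here above the unconstrained optimum)
v_forced = 250.0
cycle = 0.44
v_balanced = max(v_opt, 100.0 / cycle)
saving = station_cost(v_forced) - station_cost(v_balanced)
```

Slowing from the forced speed toward (but not past) the cost optimum extends tool life and lowers cost per part, which is the mechanism behind the paper's zero-idle-time equality constraint.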


Author(s): Qian Wang, Lucas Schmotzer, Yongwook Kim

Structural designs of complex buildings and infrastructures have long been based on engineering experience and a trial-and-error approach, in which structural performance is checked each time a design is determined. An alternative strategy based on numerical optimization techniques can provide engineers with an effective and efficient design approach. To achieve an optimal design, a finite element (FE) program is employed to calculate structural responses, including forces and deformations. A gradient-based or gradient-free optimization method can be integrated with the FE program to guide the design iterations until certain convergence criteria are met. Due to the iterative nature of numerical optimization, user programming is required to repeatedly access and modify the input data and to collect the output data of the FE program. In this study, an approximation method was developed so that the structural responses could be expressed as approximate functions whose accuracy could be adaptively improved. With this method, the FE program did not need to be looped directly into the optimization iterations. As a practical illustrative example, a 3D reinforced concrete building structure was optimized. The proposed method worked very well, and optimal designs were found that reduced the torsional responses of the building.
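The adaptive-approximation idea can be sketched in one dimension: fit a cheap quadratic to the best samples so far, minimize the approximation, and spend exactly one expensive evaluation per iteration to improve the fit. The `fe_response` function below is a hypothetical stand-in for a finite-element run, not the authors' building model:

```python
def fe_response(x):
    # Stand-in for one expensive FE analysis (hypothetical 1-D response)
    return (x - 2.0) ** 2 + 0.1 * x ** 3

def parabola_vertex(p1, p2, p3):
    # Vertex of the quadratic through three (x, y) samples via divided
    # differences: y = y1 + m1*(x - x1) + a*(x - x1)*(x - x2)
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    m1 = (y2 - y1) / (x2 - x1)
    a = ((y3 - y1) / (x3 - x1) - m1) / (x3 - x2)
    return (x1 + x2) / 2.0 - m1 / (2.0 * a)

# Adaptive loop: minimize the quadratic *approximation*, then run one "FE"
# evaluation to refine it; the FE solver is never looped inside an optimizer
samples = [(x, fe_response(x)) for x in (1.0, 2.0, 3.0)]
for _ in range(15):
    best3 = sorted(sorted(samples, key=lambda s: s[1])[:3])
    x_new = parabola_vertex(*best3)
    if any(abs(x_new - x) < 1e-9 for x, _ in samples):
        break                       # converged: new point duplicates a sample
    samples.append((x_new, fe_response(x_new)))
x_opt = min(samples, key=lambda s: s[1])[0]
```

Each iteration costs a single expensive evaluation, mirroring the paper's motivation: the optimizer works on the approximate response functions, not on the FE program itself.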


Biometrics, 2017, pp. 907–932
Author(s): Niladri Sekhar Datta, Himadri Sekhar Dutta, Koushik Majumder

Fuzzy logic deals with approximate rather than fixed and exact reasoning. Fuzzy variables may have a truth value that ranges in degree between 0 and 1, extending classical logic to handle partial truth, where the truth value may lie anywhere between completely true and completely false. This computational logic uses truth degrees as a mathematical model of vagueness, whereas probability is a mathematical model of ignorance. A large number of complex problems can be solved using fuzzy logic, specifically fuzzy modeling and fuzzy optimization. Fuzzy modeling is the understanding of the problem and the analysis of the fuzzy information, whereas fuzzy optimization solves the fuzzy model optimally using optimization techniques via membership functions. In this research article, the authors describe fuzzy rules and their applications, and the different types of well-known problems solved by fuzzy optimization techniques.
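A minimal example of fuzzy optimization via membership functions, in the Bellman-Zadeh style of intersecting a fuzzy goal with a fuzzy constraint (all membership shapes and numbers here are illustrative):

```python
def ramp_up(x, a, b):
    # Membership rising linearly from 0 at a to 1 at b
    return min(1.0, max(0.0, (x - a) / (b - a)))

def ramp_down(x, a, b):
    # Membership falling linearly from 1 at a to 0 at b
    return 1.0 - ramp_up(x, a, b)

# Fuzzy goal "output should be high" and fuzzy constraint "resource use
# should be low" are combined by min (intersection); the crisp decision is
# the point of maximum membership in the combined fuzzy decision set
mu_goal = lambda x: ramp_up(x, 2.0, 8.0)
mu_constraint = lambda x: ramp_down(x, 5.0, 9.0)
mu_decision = lambda x: min(mu_goal(x), mu_constraint(x))

xs = [i / 100.0 for i in range(1001)]
x_best = max(xs, key=mu_decision)
```

With these ramps the optimum sits where goal and constraint memberships cross, at x = 6.2 with membership 0.7, illustrating how "partial truth" replaces a crisp feasible/infeasible boundary.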


Geophysics, 1993, Vol. 58 (8), pp. 1074–1083
Author(s): D. Bhaskara Rao, M. J. Prakash, N. Ramesh Babu

The decrease of density contrast in sedimentary basins can often be approximated by an exponential function. Theoretical Fourier transforms are derived for symmetric trapezoidal, vertical fault, vertical prism, syncline, and anticline models. This is desirable because there are no equivalent closed form solutions in the space domain for these models combined with an exponential density contrast. These transforms exhibit characteristic minima, maxima, and zero values, and hence graphical methods have been developed for interpretation of model parameters. After applying end corrections to improve the discrete transforms of observed gravity data, the transforms are interpreted for model parameters. This method is first tested on two synthetic models, then applied to gravity anomalies over the San Jacinto graben and Los Angeles basin.
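The need for the transform-domain treatment can be seen by attempting the space-domain calculation directly: with a density contrast rho0 * exp(-lam * z) there is no closed-form anomaly for these bodies, but the anomaly can still be integrated numerically. A sketch for a 2-D prism (geometry and contrast values are hypothetical, unrelated to the San Jacinto or Los Angeles data):

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def gz_exponential_prism(x, x1, x2, z1, z2, rho0, lam, n=120):
    # Vertical gravity of a 2-D prism whose density contrast decays as
    # rho0 * exp(-lam * z): midpoint-rule sum of 2-D line elements, each
    # contributing 2 * G * rho * z / (dx**2 + z**2) per unit area
    dxp = (x2 - x1) / n
    dzp = (z2 - z1) / n
    total = 0.0
    for i in range(n):
        xp = x1 + (i + 0.5) * dxp
        for j in range(n):
            zp = z1 + (j + 0.5) * dzp
            rho = rho0 * math.exp(-lam * zp)
            total += 2.0 * G * rho * zp / ((x - xp) ** 2 + zp ** 2) * dxp * dzp
    return total

# Sedimentary-basin-like prism: negative contrast decaying with depth
args = (-1000.0, 1000.0, 50.0, 2000.0, -400.0, 5e-4)
g_centre = gz_exponential_prism(0.0, *args)
g_flank = gz_exponential_prism(3000.0, *args)
```

The anomaly is a low centered over the prism that dies away on the flanks; the paper's Fourier-domain expressions deliver the same physics analytically, which is what makes the graphical interpretation of minima, maxima, and zero crossings possible.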


2018, Vol. 2018, pp. 1–14
Author(s): Maamar Zahra, Yulin Wang, Bouabdellah Kechar, Yasmine Derdour, Wenjia Ding

Maximizing the network lifetime and data collection are two major concerns in WSNs, and mobility has been proposed as a solution to improve the data collection process and promote energy efficiency. In this paper, we focus on sink mobility, where the mobile sink performs the data collection. The problem is to find an optimal data-collection trajectory for the mobile sink using approximate optimization techniques. To address this challenge, we propose an optimization model for the mobile sink that improves the data collection process and thus extends the network lifetime of the WSN. Our proposal is based on a multiobjective function using the Weighted Sum Method (WSM), adapting two metaheuristics, Tabu Search (TS) and Simulated Annealing (SA), to this problem. To evaluate our proposal experimentally, we designed and developed an Integrated Environment of Optimization and Simulation based on metaheuristics (IEOSM). IEOSM helps us determine the best optimization method in terms of optimal trajectory, execution time, and quality of data collection. It also integrates a powerful simulation tool to evaluate the methods in terms of energy consumption, data collection, and latency.
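The weighted-sum-plus-metaheuristic scheme can be sketched with the SA half of the pair: a 2-opt simulated annealing search over sink visiting orders, scalarizing tour length and a latency proxy with fixed weights. The weights, coordinates, and choice of latency proxy below are illustrative, not the IEOSM formulation:

```python
import math
import random

def legs(order, pts):
    return [math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
            for i in range(len(order))]

def objective(order, pts, w_len=0.7, w_lat=0.3):
    # Weighted Sum Method: collapse two objectives (total trajectory length
    # and the longest single hop, a crude latency proxy) into one scalar
    d = legs(order, pts)
    return w_len * sum(d) + w_lat * max(d)

def anneal(pts, iters=20000, t0=20.0, seed=3):
    # Simulated Annealing over visiting orders using 2-opt reversals
    rng = random.Random(seed)
    order = list(range(len(pts)))
    cur = objective(order, pts)
    best, best_val = order[:], cur
    for step in range(iters):
        t = t0 * (1.0 - step / iters) + 1e-9       # linear cooling schedule
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
        val = objective(cand, pts)
        if val < cur or rng.random() < math.exp((cur - val) / t):
            order, cur = cand, val
            if val < best_val:
                best, best_val = order[:], val
    return best, best_val

rng = random.Random(0)
sensors = [(rng.uniform(0.0, 100.0), rng.uniform(0.0, 100.0))
           for _ in range(15)]
route, cost = anneal(sensors)
```

Tabu Search would replace the acceptance rule with a memory of recently reversed segments; the WSM objective stays the same, which is what lets the two metaheuristics be compared on equal terms.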


Geophysics, 1965, Vol. 30 (2), pp. 228–233
Author(s): Charles E. Corbató

A procedure suitable for use on high‐speed digital computers is presented for interpreting two‐dimensional gravity anomalies. In order to determine the shape of a disturbing mass with known density contrast, an initial model is assumed and gravity anomalies are calculated and compared with observed values at n points, where n is greater than the number of unknown variables (e.g. depths) of the model. Adjustments are then made to the model by a least‐squares approximation which uses the partial derivatives of the anomalies so that the residuals are reduced to a minimum. In comparison with other iterative techniques, convergence is very rapid. A convenient method to use for both the calculation of the anomalies and the adjustments is the two‐dimensional method of Talwani, Worzel, and Landisman, (1959) in which the outline of the body is polygonized and the anomalies and the partial derivatives of the anomaly with respect to the depth of a vertex on the body can be expressed as functions of the coordinates of the vertex. Not only depths but under certain circumstances regional gravity values may be evaluated; however, the relationship of the disturbing body to the gravity information may impose certain limitations on the application of the procedure.
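Corbató's least-squares adjustment, linearizing the anomaly about the current model with partial derivatives and solving the normal equations for corrections, is what is now called Gauss-Newton iteration. A minimal sketch for a two-parameter 2-D body (a buried horizontal cylinder standing in for the polygonal bodies of Talwani et al.; the model and numbers are illustrative):

```python
def model(x, z, k):
    # Illustrative 2-D body (horizontal cylinder): g = k * z / (x**2 + z**2)
    return k * z / (x * x + z * z)

def partials(x, z, k):
    # Analytic partial derivatives dg/dz and dg/dk of the model anomaly
    d = x * x + z * z
    return k * (x * x - z * z) / (d * d), z / d

def gauss_newton(xs, obs, z, k, iters=20):
    for _ in range(iters):
        # Accumulate the 2 x 2 normal equations (J^T J) p = J^T r
        a11 = a12 = a22 = b1 = b2 = 0.0
        for x, o in zip(xs, obs):
            r = o - model(x, z, k)
            jz, jk = partials(x, z, k)
            a11 += jz * jz; a12 += jz * jk; a22 += jk * jk
            b1 += jz * r; b2 += jk * r
        det = a11 * a22 - a12 * a12
        z += (b1 * a22 - a12 * b2) / det       # corrections by Cramer's rule
        k += (a11 * b2 - a12 * b1) / det
    return z, k

xs = [float(i - 10) for i in range(21)]
obs = [model(x, 3.0, 50.0) for x in xs]        # synthetic "observed" values
z_est, k_est = gauss_newton(xs, obs, z=2.5, k=40.0)
```

Because the residuals vanish at the solution and the model is smooth, convergence from a reasonable starting model is very rapid, matching Corbató's observation relative to other iterative schemes.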


Geophysics, 1960, Vol. 25 (3), pp. 569–585
Author(s): Roland G. Henderson

In the interpretation of magnetic and gravity anomalies, downward continuation of fields and calculation of first and second vertical derivatives of fields have been recognized as effective means for bringing into focus the latent diagnostic features of the data. A comprehensive system has been devised for the calculation of any or all of these derived fields on modern electronic digital computing equipment. The integral for analytic continuation above the plane is used with a Lagrange extrapolation polynomial to derive a general determinantal expression from which the field at depth and the various derivatives on the surface and at depth can be obtained. It is shown that the general formula includes as special cases some of the formulas appearing in the literature. The process involves a “once for all depths” summing of grid values on a system of concentric circles about each point followed by application of the appropriate one or more of the 19 sets of coefficients derived for the purpose. Theoretical and observed multilevel data are used to illustrate the processes and to discuss the errors. The coefficients can be used for less extensive computations on a desk calculator.
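The continuation integral Henderson builds on is the classical upward-continuation formula of potential theory, written here in standard modern notation (which may differ from the paper's):

```latex
% Field F observed on the plane z = 0, continued to height h > 0 above it:
F(x, y; h) = \frac{h}{2\pi}
  \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
  \frac{F(\xi, \eta; 0)}
       {\left[(x - \xi)^2 + (y - \eta)^2 + h^2\right]^{3/2}}
  \, d\xi \, d\eta, \qquad h > 0.
```

Downward continuation follows by inverting this relation, and the vertical derivatives by differentiating under the integral sign; Henderson's 19 sets of coefficients are discrete realizations of these operators applied to the once-for-all-depths averages on concentric circles about each grid point.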

