Curve Fitting and Deconvolution of Instrumental Broadening: A Simulated Annealing Approach

1995 ◽  
Vol 49 (3) ◽  
pp. 273-278 ◽  
Author(s):  
A. Ferry ◽  
P. Jacobsson

A curve-fitting procedure based on the simulated annealing algorithm has been developed for the analysis of Raman spectral data. By including a priori information about the instrumental broadening in the definition of the cost function that is minimized, the effects of finite instrumental resolution are eliminated from the resulting fit. The ability of the method to reproduce original band shapes is tested on synthesized spectra and on FT-Raman spectra of diamond recorded at different resolutions with different apodization functions. The procedure yields the global optimum of the fitted parameters and is easily implemented on a personal computer.
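
To make the idea concrete, here is a minimal sketch of such a fit, assuming a single Lorentzian band and a known, discretized instrument response; the function names, band model, and cooling schedule are illustrative choices, not the authors' implementation.

```python
import numpy as np

def lorentzian(x, amp, center, width):
    """Single Raman band model (illustrative choice)."""
    return amp * width**2 / ((x - center)**2 + width**2)

def cost(params, x, measured, instrument_response):
    """Misfit between the *broadened* trial model and the measured spectrum.

    instrument_response: discretized, unit-area broadening kernel.
    Convolving the trial band with the known instrument response inside
    the cost function means the fitted parameters describe the
    unbroadened band shape."""
    model = lorentzian(x, *params)
    broadened = np.convolve(model, instrument_response, mode="same")
    return np.sum((broadened - measured) ** 2)

def anneal(x, measured, instrument_response, p0, steps=20000, t0=1.0):
    """Bare-bones Metropolis simulated annealing over the band parameters."""
    rng = np.random.default_rng(0)
    p = np.array(p0, float)
    e = cost(p, x, measured, instrument_response)
    best_p, best_e = p.copy(), e
    for i in range(steps):
        temp = max(t0 * (1 - i / steps), 1e-12)   # linear cooling schedule
        trial = p + rng.normal(scale=0.01 * np.abs(p) + 1e-6)
        e_trial = cost(trial, x, measured, instrument_response)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
            p, e = trial, e_trial
            if e < best_e:
                best_p, best_e = p.copy(), e
    return best_p
```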

1997 ◽  
Vol 40 (1) ◽  
Author(s):  
G. Böhm ◽  
G. Rossi ◽  
A. Vesnaver

3D reflection tomography allows the macro-model of complex geological structures to be reconstructed. In the usual approach, the spatial distribution of the velocity field is discretized by regular grids. This choice simplifies the development of the related software, but it introduces two serious drawbacks: various domains of the model may be poorly covered, and a significant mismatch between the grid and a complex velocity field may occur. The tomographic inversion then becomes unstable, unreliable, and necessarily blurred. In this paper we introduce an algorithm that adapts the grid to the available ray paths and to the velocity field in sequence, yielding irregular grids with locally variable resolution. The grid-fitting procedure can be guided interactively when geological a priori information is to be introduced; otherwise, a fully automatic approach is available, which exploits Delaunay triangles and Voronoi polygons.
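
As an illustration of the fully automatic branch, the sketch below builds an irregular grid with SciPy's Delaunay and Voronoi routines; the coverage field and the threshold are placeholders for the paper's actual ray-path and velocity criteria.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(1)

# Stand-in for ray-path hit counts on a fine regular grid: in the
# automatic mode, grid nodes would be concentrated where coverage is high.
hits = rng.random((50, 50))

# Keep a node wherever coverage exceeds a threshold, so the irregular
# grid is finer in well-illuminated regions and coarser elsewhere.
ix, iy = np.nonzero(hits > 0.8)
nodes = np.column_stack([ix, iy]).astype(float)

tri = Delaunay(nodes)   # triangles -> irregular velocity cells
vor = Voronoi(nodes)    # dual polygons -> local resolution measure

print(f"{len(nodes)} nodes, {len(tri.simplices)} Delaunay triangles")
```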


2019 ◽  
Vol 490 (4) ◽  
pp. 5904-5920
Author(s):  
Maria Chira ◽  
Manolis Plionis

We develop an optimization algorithm, using simulated annealing, for the quantification of patterns in astronomical data, based on techniques developed for robotic vision applications. The methodology falls into the category of cost-minimization algorithms and is based on user-determined interaction criteria, among the pattern elements, that define the properties of the sought structures. We applied the algorithm to a large variety of mock images and constrained the free parameters, α and k, which express the amount of noise in the image and how strictly the algorithm searches for cocircular structures, respectively. We find that the two parameters are interrelated and also that, independently of the pattern properties, an appropriate selection for most of the images would be log k = −2 and 0 < α ≲ 0.04. The width of the effective α-range, for different values of k, is reduced when more interaction coefficients are taken into account in the definition of the patterns of interest. Finally, we applied the algorithm to N-body simulation dark-matter halo data and to the HST image of the lensing cluster Abell 2218, concluding that this versatile technique can be applied to the quantification of structure and the identification of coherence in astronomical patterns.
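
The paper's interaction criteria are user-defined and not reproduced here; the sketch below only illustrates the generic cost-minimization loop, with a placeholder pairwise-support matrix standing in for the cocircularity coefficients (where the strictness parameter k would enter) and α acting as the noise penalty.

```python
import numpy as np

def pattern_cost(labels, coeff, alpha):
    """Generic interaction-based cost: reward mutually supporting
    (e.g. cocircular) element pairs, penalize labelling noise as signal."""
    support = labels @ coeff @ labels
    return -support + alpha * labels.sum()

def anneal_labels(coeff, alpha, steps=5000, t0=1.0, seed=0):
    """Anneal binary in/out labels over the pattern elements."""
    rng = np.random.default_rng(seed)
    n = coeff.shape[0]
    labels = rng.integers(0, 2, n).astype(float)
    e = pattern_cost(labels, coeff, alpha)
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-12
        j = rng.integers(n)
        labels[j] = 1 - labels[j]        # flip one element in/out
        e_new = pattern_cost(labels, coeff, alpha)
        if e_new < e or rng.random() < np.exp((e - e_new) / temp):
            e = e_new
        else:
            labels[j] = 1 - labels[j]    # reject: undo the flip
    return labels

# demo with a random symmetric "support" matrix for 30 elements
c = np.random.default_rng(1).random((30, 30))
c = (c + c.T) / 2
print(anneal_labels(c, alpha=5.0).sum(), "elements kept")
```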


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Nataša Glišović

Project planning, the definition of limitations and resources, and the leveling of available resources are of great importance for project management, since all of these activities directly affect the duration and the cost of a project. To deliver competitive value to the market, the project must be completed in optimum time. To be competitive enough, optimum or near-optimum solutions to the time-cost tradeoff, resource leveling, and resource-constrained scheduling problems should be obtained in the planning phase of the project. One important aspect of project management is activity crashing, that is, reducing activity time by adding more resources such as workers and overtime. It is important to decide on the optimal crash plan to complete the project within the desired time period. This paper introduces a comparison of fuzzy simulated annealing and a genetic algorithm, both based on the crashing method, to evaluate project networks and determine the optimum crashing configuration that minimizes the average project cost, composed of lateness and crashing costs, in the presence of vagueness and uncertainty. The evaluation results, based on a real case study, indicate that the method can be reliably applied to engineering projects.
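
As a toy illustration of the crash-plan evaluation that such a search optimizes (the paper's fuzzy SA and GA would explore many candidate crash vectors), consider a three-activity network with hypothetical durations, crash limits, and costs:

```python
# Toy activity-on-node network: activity -> (duration, max crash days,
# crash cost per day, successors); all numbers are hypothetical.
ACTS = {
    "A": (5, 2, 100.0, ["C"]),
    "B": (4, 1, 150.0, ["C"]),
    "C": (6, 3,  80.0, []),
}

def project_cost(crash, due=10, penalty=500.0):
    """Crashing cost plus a lateness penalty for a given crash plan
    (days removed from each activity); duration is the longest path."""
    dur = {a: ACTS[a][0] - crash.get(a, 0) for a in ACTS}
    finish = {}

    def remaining(a):
        # longest remaining duration from activity a to project end
        if a not in finish:
            finish[a] = dur[a] + max(
                (remaining(s) for s in ACTS[a][3]), default=0)
        return finish[a]

    makespan = max(remaining(a) for a in ACTS)
    crash_cost = sum(crash.get(a, 0) * ACTS[a][2] for a in ACTS)
    return crash_cost + penalty * max(0, makespan - due)

print(project_cost({}))          # no crashing: one day late, pay the penalty
print(project_cost({"A": 1}))    # crash A by one day: on time, pay 100
```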


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. WB19-WB35 ◽  
Author(s):  
Cyril Schamper ◽  
Fayçal Rejiba ◽  
Roger Guérin

Electromagnetic induction (EMI) methods are widely used to determine the distribution of electrical conductivity and are well adapted to the delimitation of aquifers and clayey layers, because the electromagnetic field is strongly perturbed by conductive media. The multicomponent EMI device that was used allowed the three components of the secondary magnetic field (the radial Hρ, the tangential Hφ, and the vertical Hz) to be measured at 10 frequencies ranging from 110 Hz to 56 kHz in a single sounding, with offsets ranging from 20 to 400 m. In a continuing endeavor to improve the reliability with which thickness and conductivity are inverted, we focused our research on the use of components other than the vertical magnetic field Hz. Because a separate sensitivity analysis of Hρ and Hz suggests that Hρ is more sensitive to variations in the thickness of a near-surface conductive layer, we developed an inversion tool able to perform single-sounding and laterally constrained 1D interpretation of both components jointly, associated with an adapted random search algorithm for single-sounding processing when almost no a priori information is available. Considering the complementarity of the Hρ and Hz components, inversion tests on clean and noisy synthetic data showed an improvement in the definition of the thickness of a near-surface conductive layer. This inversion code was applied to the karst site of the basin of Fontaine-Sous-Préaux, near Rouen (northwestern France). Comparison with an electrical resistivity tomography tends to confirm the reliability of the interpretation of the EMI data with the developed inversion tool.
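
A minimal sketch of the joint single-sounding idea follows; the forward solver here is a deliberately fake placeholder (a real one would evaluate the 1D layered-earth EM response, typically via Hankel transforms), and the two-parameter model, weights, and random-search loop are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(model, component):
    """Placeholder for a real 1D EM forward solver; returns a synthetic
    response at 10 frequencies for the 'rho' or 'z' component."""
    thick, cond = model
    freqs = np.geomspace(110, 56e3, 10)
    bias = 1.5 if component == "rho" else 1.0   # toy sensitivity contrast
    return bias * thick * cond / np.sqrt(freqs)

def joint_misfit(model, d_rho, d_z, w=0.5):
    """Single-sounding joint misfit over both magnetic-field components."""
    r_rho = forward(model, "rho") - d_rho
    r_z = forward(model, "z") - d_z
    return w * np.sum(r_rho**2) + (1 - w) * np.sum(r_z**2)

def random_search(d_rho, d_z, n=5000):
    """Adapted random search for when no a priori model is available:
    sample widely, keep the best-fitting (thickness, conductivity)."""
    best, best_e = None, np.inf
    for _ in range(n):
        model = (rng.uniform(1, 50), rng.uniform(1e-3, 1.0))
        e = joint_misfit(model, d_rho, d_z)
        if e < best_e:
            best, best_e = model, e
    return best

# demo: invert synthetic data from a "true" model; note the toy forward
# is non-unique in thick*cond, which a real EM solver would not be
true = (20.0, 0.1)
print(random_search(forward(true, "rho"), forward(true, "z")))
```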


Geophysics ◽  
2001 ◽  
Vol 66 (5) ◽  
pp. 1438-1449 ◽  
Author(s):  
Seiichi Nagihara ◽  
Stuart A. Hall

In the northern continental slope of the Gulf of Mexico, large oil and gas reservoirs are often found beneath sheetlike, allochthonous salt structures that are laterally extensive. Some of these salt structures retain their diapiric feeders or roots beneath them. These hidden roots are difficult to image seismically. In this study, we develop a method to locate and constrain the geometry of such roots through 3‐D inverse modeling of the gravity anomalies observed over the salt structures. This inversion method utilizes a priori information such as the upper surface topography of the salt, which can be delineated by a limited coverage of 2‐D seismic data; the sediment compaction curve in the region; and the continuity of the salt body. The inversion computation is based on the simulated annealing (SA) global optimization algorithm. The SA‐based gravity inversion has some advantages over the approach based on damped least‐squares inversion. It is computationally efficient, can solve underdetermined inverse problems, can more easily implement complex a priori information, and does not introduce smoothing effects in the final density structure model. We test this inversion method using synthetic gravity data for a type of salt geometry that is common among the allochthonous salt structures in the Gulf of Mexico and show that it is highly effective in constraining the diapiric root. We also show that carrying out multiple inversion runs helps reduce the uncertainty in the final density model.
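
To illustrate the role of a priori constraints in the SA acceptance loop, here is a minimal sketch in which the known salt-top surface bounds every trial move; the gravity forward model is a toy stand-in, not the prism-based computation and compaction curve used in the paper, and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 20                              # model columns under the salt top
x = np.linspace(0.0, 10.0, N)       # station/column positions (km)
top = 1.0 + 0.1 * np.sin(x)         # known salt-top depth from 2D seismic

def gravity(base):
    """Toy forward model: anomaly of constant-contrast columns between
    the fixed top surface and a trial base (stand-in for prism formulas)."""
    dz = np.maximum(base - top, 0.0)
    r2 = (x[:, None] - x[None, :])**2 + ((top + base) / 2)[None, :]**2
    return (dz[None, :] / r2).sum(axis=1)

def anneal_base(observed, steps=20000, t0=1.0):
    base = top + 1.0                # start from a thin salt sheet
    e = np.sum((gravity(base) - observed)**2)
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-12
        trial = base.copy()
        j = rng.integers(N)
        # a priori constraint: the salt base may never rise above the top
        trial[j] = max(top[j], trial[j] + rng.normal(0, 0.2))
        e_t = np.sum((gravity(trial) - observed)**2)
        if e_t < e or rng.random() < np.exp((e - e_t) / temp):
            base, e = trial, e_t
    return base

# demo: recover a diapiric root near x = 5 from its own synthetic anomaly
true_base = top + 2.0 + np.exp(-(x - 5.0) ** 2)
est = anneal_base(gravity(true_base))
print(np.round(est - top, 2))       # recovered salt thickness per column
```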


2019 ◽  
Vol 27 (4) ◽  
pp. 21-24
Author(s):  
Sergey Mikhailovich Podolchak

A logical-probabilistic method for evaluating test results is proposed, based on the Dempster-Shafer theory of evidence with some assumptions that do not affect the final result. There is currently an acute need to create new types of rocket technology in connection with the changed situation in the international and domestic markets. When creating new designs, special attention must be paid to their reliability, while also taking into account the financial component of projects for the development and manufacture of products. In this regard, research is currently being conducted not only toward increasing the reliability of complex technical systems, which include rocket engines, but also toward reducing the cost of their refinement. One research option in this direction is proposed by the author in this work. The aim of the work, and of the research as a whole, is to demonstrate the capabilities of the chosen method for evaluating test results, from which conclusions can be drawn about the success of the tests themselves. As the studies have shown, the logical-probabilistic method for evaluating test results based on the Dempster-Shafer theory of evidence can, in the absence of a priori information, be used in the development of new rocket engine designs, but only in a narrow direction. More broadly, the method can be used in the design of products based on accumulated experience (the amount of information) with existing analogues. Dempster-Shafer evidence theory can be applied at earlier design stages, but only in combination with other reliability models.
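
For reference, Dempster's rule of combination, on which such an evaluation rests, can be stated compactly; the sketch below combines two hypothetical mass assignments about a test outcome (the masses are invented for illustration).

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments whose focal
    elements are frozensets, renormalizing by the conflict mass."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# two hypothetical pieces of evidence about the outcome {success, failure}
m1 = {frozenset({"success"}): 0.6, frozenset({"success", "failure"}): 0.4}
m2 = {frozenset({"success"}): 0.7, frozenset({"failure"}): 0.1,
      frozenset({"success", "failure"}): 0.2}
print(combine(m1, m2))
```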


2021 ◽  
Author(s):  
Taqiaden Alshameri ◽  
Yude Dong ◽  
Abdullah Alqadhi

Fixture synthesis addresses the problem of fixture-element placement on the workpiece surfaces. This article presents a novel variant of the Simulated Annealing (SA) algorithm, called Declining Neighborhood Simulated Annealing (DNSA), developed specifically for the problem of fixture synthesis. The objective is to minimize measurement errors in the machined features induced by misalignment at the workpiece-locator contact points. The algorithm systematically evaluates different fixture layouts to reach a sufficient approximation of the globally optimal robust layout. At each iteration, a set of previously accepted candidates is exploited to predict the next move. As the algorithm progresses, the search space is reduced and new candidates are designated according to a declining Probability Density Function (PDF). To ensure the best performance, the DNSA parameters are configured using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Moreover, the parameters are set to auto-adapt to the complexity of a given input based on a Shannon entropy index. The optimization process is carried out automatically in the Computer-Aided Design (CAD) environment NX; a computer code was developed for this purpose using the NXOpen Application Programming Interface (API). Benchmark examples from an industrial partner and from the literature demonstrate satisfactory results.
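
A minimal sketch of the declining-neighborhood idea follows; the move-prediction rule used here (proposing around the mean of recently accepted candidates, with a standard deviation that decays over the run) is an assumption for illustration, not the authors' exact DNSA update.

```python
import numpy as np
from collections import deque

def dnsa(cost, x0, lo, hi, steps=10000, t0=1.0, seed=0):
    """Sketch of declining-neighborhood SA: proposals are drawn around
    recently accepted candidates with a decaying spread, so the search
    space shrinks as the run progresses."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    e = cost(x)
    recent = deque([x.copy()], maxlen=10)   # previously accepted candidates
    best_x, best_e = x.copy(), e
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-12
        sigma = 0.3 * (hi - lo) * (1 - i / steps)   # declining neighborhood
        center = np.mean(np.asarray(recent), axis=0)  # exploit accepted moves
        trial = np.clip(rng.normal(center, sigma), lo, hi)
        e_t = cost(trial)
        if e_t < e or rng.random() < np.exp((e - e_t) / temp):
            x, e = trial, e_t
            recent.append(x.copy())
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

# demo: minimize a multimodal function over [-5, 5]^2
f = lambda p: np.sum(p**2) + 3 * np.sin(3 * p).sum()
print(dnsa(f, x0=[4.0, -4.0], lo=-5.0, hi=5.0))
```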


2012 ◽  
Vol 7 (1) ◽  
pp. 7-15
Author(s):  
T. O. Weber ◽  
Wilhelmus A. M. V. Noije

This paper approaches the problem of analog circuit synthesis through a Simulated Annealing algorithm capable of performing crossovers with past anchor solutions (solutions better than all others in one of the specifications) and of modifying the weights of the Aggregate Objective Function specifications in order to escape local minima. The search for the global optimum is followed by a search for the Pareto front, which represents the trade-offs involved in the design and is performed using the proposed algorithm together with Particle Swarm Optimization. To check the performance of the algorithm, the synthesis of a Miller amplifier was accomplished in two different situations. The first was a comparison of 40 syntheses each for Adaptive Simulated Annealing (ASA), Simulated Annealing/Quenching (SA/SQ), and the proposed SA/SQ algorithm with crossovers, using a 20-minute bounded optimization, with the aim of comparing the solutions of each method. Results were compared using the Wilcoxon-Mann-Whitney test at a significance level of 0.05 and showed that simulated annealing with crossovers has a higher chance of returning a good solution than the other algorithms in this test. The second situation was a synthesis not bounded by time, aiming to achieve the best circuit, in order to test the use of crossovers in SA/SQ. The final amplifier produced by the proposed algorithm had a 15.6 MHz unity-gain frequency, 82.6 dB voltage gain, 61° phase margin, 26 MV/s slew rate, an area of 980 μm², and a supply current of 297 μA in a 0.35 μm technology, and the synthesis took 84 minutes.
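
The crossover move can be sketched as follows, with anchor solutions tracked per specification and occasionally mixed into the SA proposal; the uniform-crossover mask, move sizes, and equal specification weights are illustrative assumptions, and a real run would also re-weight the aggregate objective as the paper describes.

```python
import numpy as np

rng = np.random.default_rng(4)

def sa_with_crossover(specs, x0, steps=20000, t0=1.0, p_cross=0.1):
    """SA sketch where 'anchor' solutions (best seen on each individual
    specification) are stored and occasionally crossed with the current
    point to escape local minima. `specs` is a list of objective terms;
    the aggregate objective is their equally weighted sum."""
    x = np.asarray(x0, float)
    agg = lambda p: sum(s(p) for s in specs)
    e = agg(x)
    anchors = [x.copy() for _ in specs]       # one anchor per specification
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-12
        if rng.random() < p_cross:            # crossover move with an anchor
            mate = anchors[rng.integers(len(anchors))]
            mask = rng.random(x.size) < 0.5
            trial = np.where(mask, x, mate)   # uniform crossover
        else:                                 # ordinary SA perturbation
            trial = x + rng.normal(0, 0.05, x.size)
        e_t = agg(trial)
        if e_t < e or rng.random() < np.exp((e - e_t) / temp):
            x, e = trial, e_t
            for k, s in enumerate(specs):     # refresh per-spec anchors
                if s(x) < s(anchors[k]):
                    anchors[k] = x.copy()
    return x

# demo: two competing "specifications" on a 4-parameter design
s1 = lambda p: np.sum((p - 1.0) ** 2)
s2 = lambda p: np.sum((p + 1.0) ** 2)
print(sa_with_crossover([s1, s2], x0=np.zeros(4)))
```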


1990 ◽  
Vol 88 (4) ◽  
pp. 1802-1810 ◽  
Author(s):  
W. A. Kuperman ◽  
Michael D. Collins ◽  
John S. Perkins ◽  
N. R. Davis
