MAXIMUM-INSCRIBED AND MINIMUM-CIRCUMSCRIBED FITTING FOR CO-ORDINATE MEASURING MACHINE

2010 ◽  
Vol 13 (3) ◽  
pp. 5-13
Author(s):  
Ha Thi Thu Thai ◽  
Phuoc Hong Nguyen

This paper describes algorithms that fit geometric shapes to data sets according to maximum-inscribed (MI) and minimum-circumscribed (MC) criteria. We use these fits to build the CMM's (Coordinate Measuring Machine) software for the cases of circle, sphere and cylinder. For each case, we obtain the fit by two methods: first by the (relatively easy) least squares fit method, and then by refining with the MI and MC fit method. Although the latter method is substantially more complicated than the former, its results are compared with those of the least squares method in order to give more options in the CMM software. In the near future we will continue to develop MI and MC fitting with an effective algorithm, the Simulated Annealing algorithm.
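
As a rough illustration of the two-stage approach described above, the sketch below first computes an algebraic least-squares circle fit and then, for that centre, evaluates the maximum-inscribed and minimum-circumscribed radii. The Kasa-style algebraic fit and all function names are illustrative assumptions, not the paper's own implementation.

```python
import numpy as np

def least_squares_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D points."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), r

def mi_mc_radii(points, centre):
    """For a fixed centre, the maximum-inscribed radius is the smallest
    point distance and the minimum-circumscribed radius is the largest."""
    d = np.linalg.norm(points - centre, axis=1)
    return d.min(), d.max()

# Usage: start from the least-squares centre; in the refinement step the
# centre itself would then be optimised so that the MI radius is maximised
# or the MC radius is minimised.
pts = np.array([[10.1, 0.0], [0.0, 9.9], [-10.0, 0.1], [0.1, -10.0]])
centre, r_lsq = least_squares_circle(pts)
r_mi, r_mc = mi_mc_radii(pts, centre)
```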

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Juan Frausto-Solis ◽  
Ernesto Liñan-García ◽  
Mishael Sánchez-Pérez ◽  
Juan Paulo Sánchez-Hernández

The Chaotic Multiquenching Annealing algorithm (CMQA) is proposed. CMQA is a new algorithm applied to the protein folding problem (PFP). It is divided into three phases: (i) multiquenching phase (MQP), (ii) annealing phase (AP), and (iii) dynamical equilibrium phase (DEP). MQP enforces several stages of quick quenching processes that include chaotic functions; the chaotic functions can increase the exploration potential of the solution space of PFP. AP implements a simulated annealing algorithm (SA) with an exponential cooling function. MQP and AP are delimited by different temperature ranges: MQP is applied over a range from extremely high to very high temperatures, while AP searches for solutions over a range from high to extremely low temperatures. DEP finds the equilibrium in a dynamic way by applying the least squares method. CMQA is tested with several instances of PFP.
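
The following minimal sketch illustrates the quenching-then-annealing structure described above, with a logistic map assumed as the chaotic function (the abstract does not specify which chaotic functions are used) and the dynamical equilibrium phase omitted; all parameter values and names are illustrative.

```python
import math, random

def logistic_map(z, mu=4.0):
    """One step of the logistic map, a common chaotic function (assumed here)."""
    return mu * z * (1.0 - z)

def chaotic_quench_then_anneal(energy, perturb, x0,
                               t_quench=1e6, t_high=1e3, t_low=1e-3,
                               quench_factor=0.5, cool_factor=0.98, steps=100):
    x, z = x0, 0.37            # z drives the chaotic perturbation size
    # --- multiquenching phase: aggressive temperature drops, chaotic step sizes ---
    t = t_quench
    while t > t_high:
        for _ in range(steps):
            z = logistic_map(z)
            cand = perturb(x, z)           # chaotic value scales the move
            if energy(cand) < energy(x) or \
               random.random() < math.exp((energy(x) - energy(cand)) / t):
                x = cand
        t *= quench_factor                 # quick quenching drop
    # --- annealing phase: classical SA with exponential cooling ---
    while t > t_low:
        for _ in range(steps):
            cand = perturb(x, random.random())
            if energy(cand) < energy(x) or \
               random.random() < math.exp((energy(x) - energy(cand)) / t):
                x = cand
        t *= cool_factor                   # gentle exponential cooling
    return x
```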


Author(s):  
Sam Anand ◽  
Sridhar Jaganathan ◽  
Sampath Damodarasamy

Abstract This paper presents a new and accurate algorithm for assessing circularity tolerance from a set of data points obtained from a Coordinate Measuring Machine (CMM). This method, called Selective Zone Search algorithm, divides the workspace into small sectors called search zones and searches for the extreme points in these zones. These extreme points are used to draw a pair of concentric circles with minimum radial separation. The radial difference gives the circularity. The methodology has been tested with several example data sets and the results have been compared with the Least Squares method, Minimum Spanning Circle method and the Voronoi Diagram method.
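
A minimal sketch of the zoning idea, not the authors' implementation: points are partitioned into angular search zones about a provisional centre, only the innermost and outermost point of each zone are retained, and the circularity is the radial separation of the concentric circles for a given centre. The zone count and function names are assumptions.

```python
import numpy as np

def selective_zone_extremes(points, centre, n_zones=12):
    """Partition points into angular search zones about a provisional centre
    and keep only the innermost and outermost point of each zone."""
    d = points - centre
    radii = np.linalg.norm(d, axis=1)
    angles = np.arctan2(d[:, 1], d[:, 0])
    zone = ((angles + np.pi) / (2 * np.pi) * n_zones).astype(int) % n_zones
    keep = []
    for z in range(n_zones):
        idx = np.where(zone == z)[0]
        if idx.size:
            keep.extend([idx[np.argmin(radii[idx])], idx[np.argmax(radii[idx])]])
    return points[sorted(set(keep))]

def radial_separation(points, centre):
    """Circularity estimate: width of the annulus (pair of concentric circles)
    containing all points, for a given centre."""
    r = np.linalg.norm(points - centre, axis=1)
    return r.max() - r.min()

# Usage: a provisional centre (e.g. from a least-squares fit) would be refined
# by minimising radial_separation over the reduced set of extreme points only.
```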


Author(s):  
Craig M. Shakarji ◽  
Vijay Srinivasan

We present elegant algorithms for fitting a plane, two parallel planes (corresponding to a slot or a slab) or many parallel planes in a total (orthogonal) least-squares sense to coordinate data that is weighted. Each of these problems is reduced to a simple 3×3 matrix eigenvalue/eigenvector problem or an equivalent singular value decomposition problem, which can be solved using reliable and readily available commercial software. These methods were numerically verified by comparing them with brute-force minimization searches. We demonstrate the need for such weighted total least-squares fitting in coordinate metrology to support new and emerging tolerancing standards, for instance, ISO 14405-1:2010. The widespread practice of unweighted fitting works well enough when point sampling is controlled and can be made uniform (e.g., using a discrete point contact Coordinate Measuring Machine). However, we demonstrate that nonuniformly sampled points (arising from many new measurement technologies) coupled with unweighted least-squares fitting can lead to erroneous results. When needed, the algorithms presented also solve the unweighted cases simply by assigning the value one to each weight. We additionally prove convergence from the discrete to continuous cases of least-squares fitting as the point sampling becomes dense.
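
The construction behind such a fit can be sketched as follows, assuming the standard reduction: the plane normal is the eigenvector belonging to the smallest eigenvalue of the weighted scatter matrix about the weighted centroid. This is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def weighted_tls_plane(points, weights):
    """Fit a plane to 3D points in the weighted total (orthogonal)
    least-squares sense.  Returns (unit normal, point on plane)."""
    w = np.asarray(weights, dtype=float)
    centroid = (w[:, None] * points).sum(axis=0) / w.sum()
    d = points - centroid
    scatter = (w[:, None] * d).T @ d          # 3x3 weighted scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)
    normal = eigvecs[:, 0]                    # eigenvector of smallest eigenvalue
    return normal, centroid

# Unweighted fitting is recovered by setting every weight to one,
# as noted in the abstract.
pts = np.random.rand(50, 3); pts[:, 2] *= 0.01   # points near the z = 0 plane
n, c = weighted_tls_plane(pts, np.ones(len(pts)))
```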


Author(s):  
C. J. Rolls ◽  
W. ElMaraghy ◽  
H. ElMaraghy

Abstract Reverse engineering (RE) may be defined as the process of generating computer-aided design (CAD) models from existing or prototype parts. The process has been used for many years in industry, and its implementation has increased markedly in the past few years, primarily due to the introduction of rapid part digitization technologies. Current industrial applications include CAD model construction from artisan geometry, such as in automotive body styling, the generation of custom fits to human surfaces, and quality control. This paper summarizes the principles of operation behind many commercially available part digitization technologies, and discusses techniques involved in part digitization using a coordinate measuring machine (CMM) and a laser scanner. An overall error characterization of the laser scanning digitization process is presented for a particular scanner. This is followed by a discussion of the merits and considerations involved in generating combined data sets with characteristics indicative of the design intent of specific part features. Issues in facilitating the assembly, or registration, of the different types of data into a single point set are discussed.


2012 ◽  
Vol 162 ◽  
pp. 171-178 ◽  
Author(s):  
Takaaki Oiwa ◽  
Harunaho Daido ◽  
Junichi Asama

This paper deals with parameter identification for a three-degrees-of-freedom (3-DOF) parallel manipulator, based on measurement redundancy. A redundant passive chain with a displacement sensor connects the moving stage to the machine frame. The passive chain is sequentially placed in three directions at approximately right angles to one another to reliably detect the motion of the stage. Linear encoders measure changes in the lengths of the passive chain and the three actuated chains of the manipulator as the moving stage travels. Comparison between the measured length and the length calculated from the forward kinematics of the 3-DOF manipulator reveals the length error of the passive chain. The least-squares method using a Jacobian matrix corrects 27 kinematic parameters so that the length errors of the passive chain are minimized. These calculations were carried out in both numerical simulations and experiments employing a coordinate measuring machine based on the parallel manipulator. Moreover, a length measurement simulation of gauge block measurement and a measurement experiment using the measuring machine were performed to verify the identified parameters.
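
A generic sketch of the least-squares correction step described above; the manipulator's kinematic model, the 27 parameters, and the damping term are illustrative assumptions supplied by the caller rather than the authors' formulation.

```python
import numpy as np

def identify_parameters(params0, length_error, jacobian, iters=10, damping=1e-6):
    """Iteratively correct kinematic parameters so that the passive-chain
    length errors are minimised in the least-squares sense.

    length_error(p) -> residual vector (measured minus forward-kinematics length)
    jacobian(p)     -> matrix of d(residual)/d(parameter)
    """
    p = np.array(params0, dtype=float)
    for _ in range(iters):
        e = length_error(p)
        J = jacobian(p)
        # damped least-squares (Levenberg-style) step solving J dp ~= -e
        dp = np.linalg.solve(J.T @ J + damping * np.eye(len(p)), -J.T @ e)
        p += dp
        if np.linalg.norm(dp) < 1e-12:
            break
    return p
```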


2013 ◽  
Vol 760-762 ◽  
pp. 1987-1991
Author(s):  
Yun Fa Li

To master the variation patterns of financial markets and obtain greater returns from stock investment, this paper studies the support vector machine and its application to stock market prediction. A simulated annealing algorithm is used to optimize a least squares support vector machine prediction model; the least squares support vector machine and the simulated annealing algorithm are described, and the optimal prediction model is given. Simulation research on the Hang Seng Index shows that the method is simple, converges quickly, and achieves high accuracy. It therefore has practical guiding value for investors operating in the stock market.
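
A minimal sketch of the approach, assuming an RBF-kernel least squares SVM in its standard closed form and a simple simulated annealing loop over the regularisation parameter gamma and kernel width sigma. The move scheme, cooling schedule, and all names are illustrative, not the paper's implementation.

```python
import numpy as np, math, random

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM regression linear system for (alpha, b)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

def sa_tune(X_tr, y_tr, X_val, y_val, T=1.0, cooling=0.9, steps=20):
    """Simulated annealing over (gamma, sigma) minimising validation MSE."""
    def mse(g, s):
        a, b = lssvm_fit(X_tr, y_tr, g, s)
        return np.mean((lssvm_predict(X_tr, a, b, s, X_val) - y_val) ** 2)
    g, s = 10.0, 1.0
    cur = mse(g, s)
    while T > 1e-3:
        for _ in range(steps):
            g2 = g * math.exp(random.uniform(-0.3, 0.3))   # multiplicative move
            s2 = s * math.exp(random.uniform(-0.3, 0.3))
            c2 = mse(g2, s2)
            if c2 < cur or random.random() < math.exp((cur - c2) / T):
                g, s, cur = g2, s2, c2
        T *= cooling
    return g, s
```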


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Salah H. R. Ali

A quality evaluation of a coordinate measuring machine (CMM) for dimension and form metrology is designed and performed at the NIS. The experimental investigation of CMM performance is developed using a reference Flick standard. The measurement errors for the corresponding geometric evaluation algorithms (LSQ, ME, MC, and MI) and probe scanning speeds (1, 2, 3, 4, and 5 mm/s) are obtained through repeated arrangement, comparison, and judgment. The experimental results show that the roundness error deviation can be evaluated effectively and exactly for CMM performance by using the Flick standard. Some influencing quantities for diameter and roundness form errors may dominate the results for all fitting algorithms under certain circumstances. It can be shown that the 2 mm/s probe speed gives a smaller roundness error than 1, 3, 4, and 5 mm/s, within 0.2 to 0.3 μm. This indicates that measurement at 2 mm/s best satisfies the high level of accuracy under the given conditions. Using the Flick standard as a quality evaluation tool revealed a high-precision improvement in diameter and roundness form indication, which means that the transfer stability of CMM quality could be significantly improved. Moreover, some error formulae for the data sets have been postulated to correlate the diameter and roundness measurements within the application range. Uncertainty resulting from the CMM and environmental temperature has been evaluated, confirming the degree of confidence in the proposed performance investigation.


2008 ◽  
Vol 8 (2) ◽  
pp. 6409-6436 ◽  
Author(s):  
C. A. Cantrell

Abstract. The representation of data, whether geophysical observations, numerical model output or laboratory results, by a best-fit straight line is a routine practice in the geosciences and other fields. While the literature is full of detailed analyses of procedures for fitting straight lines to values with uncertainties, a surprising number of scientists blindly use the standard least squares method, such as found on calculators and in spreadsheet programs, which assumes no uncertainties in the x values. Here, the available procedures for estimating the best-fit straight line to data, including those applicable to situations where uncertainties are present in both the x and y variables, are reviewed. Representative methods presented in the literature for bivariate weighted fits are compared using several sample data sets, and guidance is presented as to when the somewhat more involved iterative methods are required, or when the standard least-squares procedure would be expected to be satisfactory. A spreadsheet-based template is made available that employs one method for bivariate fitting.
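
One representative bivariate method of the kind reviewed here is York's iterative fit, sketched below under the assumption of uncorrelated x and y errors; this is illustrative and is not the spreadsheet template mentioned in the abstract.

```python
import numpy as np

def york_fit(x, y, sx, sy, b0=1.0, iters=50, tol=1e-12):
    """Straight-line fit y = a + b*x with uncertainties in both x and y
    (York's iterative method, uncorrelated errors assumed)."""
    wx, wy = 1.0 / sx**2, 1.0 / sy**2    # weights from the standard deviations
    b = b0
    for _ in range(iters):
        W = wx * wy / (wx + b**2 * wy)   # combined point weights
        xbar = np.sum(W * x) / np.sum(W)
        ybar = np.sum(W * y) / np.sum(W)
        U, V = x - xbar, y - ybar
        beta = W * (U / wy + b * V / wx)
        b_new = np.sum(W * beta * V) / np.sum(W * beta * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = ybar - b * xbar
    return a, b

# As the x uncertainties shrink toward zero, this collapses toward the
# standard (y-errors-only) weighted least-squares line discussed above.
```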


2019 ◽  
Vol 26 (6) ◽  
pp. 1995-2016
Author(s):  
Himanshu Rathore ◽  
Shirsendu Nandi ◽  
Peeyush Pandey ◽  
Surya Prakash Singh

Purpose The purpose of this paper is to examine the efficacy of diversification-based learning (DBL) in expediting the performance of simulated annealing (SA) in hub location problems. Design/methodology/approach This study proposes a novel diversification-based learning simulated annealing (DBLSA) algorithm for solving p-hub median problems. It is implemented in MATLAB 11.0. Experiments are conducted on the CAB and AP data sets. Findings This study finds that in hub location models, the DBLSA algorithm equipped with a social learning operator outperforms the vanilla version of the SA algorithm in terms of accuracy and convergence rates. Practical implications Hub location problems are relevant in the aviation and telecommunication industries. This study proposes a novel application of the DBLSA algorithm to solve larger instances of hub location problems effectively in reasonable computational time. Originality/value To the best of the authors' knowledge, this is the first application of DBL in optimisation. By demonstrating its efficacy, this study steers research in the direction of learning-mechanism-based metaheuristic applications.
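
For context, a plain (vanilla) simulated annealing baseline for the single-allocation p-hub median problem, of the kind DBLSA is reported to outperform, might look like the sketch below. The nearest-hub allocation rule, the discount factor, and all parameters are simplifying assumptions, and the DBL social learning operator itself is not reproduced.

```python
import math, random

def phub_cost(dist, flow, hubs, alpha=0.75):
    """Single-allocation p-hub median cost: each node is allocated to its
    nearest hub (a simplifying heuristic); inter-hub flow is discounted by alpha."""
    n = len(dist)
    alloc = [min(hubs, key=lambda h: dist[i][h]) for i in range(n)]
    return sum(flow[i][j] * (dist[i][alloc[i]]
                             + alpha * dist[alloc[i]][alloc[j]]
                             + dist[alloc[j]][j])
               for i in range(n) for j in range(n))

def sa_phub(dist, flow, p, T=1000.0, cooling=0.95, moves=100):
    """Vanilla SA over hub sets; neighbour move swaps one hub for a non-hub node."""
    n = len(dist)
    hubs = random.sample(range(n), p)
    cur = best = phub_cost(dist, flow, hubs)
    best_hubs = list(hubs)
    while T > 1e-2:
        for _ in range(moves):
            cand = list(hubs)
            cand[random.randrange(p)] = random.choice(
                [v for v in range(n) if v not in hubs])
            c = phub_cost(dist, flow, cand)
            if c < cur or random.random() < math.exp((cur - c) / T):
                hubs, cur = cand, c
                if c < best:
                    best, best_hubs = c, list(cand)
        T *= cooling
    return best_hubs, best
```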

