Electromagnetic and thermal parameter identification method for best prediction of temperature distribution on transformer tank covers

Author(s):  
Patricia Penabad Durán ◽  
Paolo Di Barba ◽  
Xose Lopez-Fernandez ◽  
Janusz Turowski

Purpose – The purpose of this paper is to describe a parameter identification method based on multiobjective (MO) deterministic and non-deterministic optimization algorithms to compute the temperature distribution on transformer tank covers.
Design/methodology/approach – The strategy for implementing the parameter identification process consists of three main steps. The first step is to define the most appropriate objective function, and the identification problem is solved for the chosen parameters using single-objective (SO) optimization algorithms. The sensitivity of the computational model to measurement error is then assessed, and finally it is included as an additional objective function, making the identification problem an MO one.
Findings – Computations with identified/optimal parameters yield accurate results for a wide range of current values and different conductor arrangements. From the numerical solution of the temperature field, decisions on dimensions and materials can be taken to avoid overheating on transformer covers.
Research limitations/implications – The accuracy of the model depends on its parameters, such as heat exchange coefficients and material properties, which are difficult to determine from formulae or from the literature. Thus the goal of the presented technique is to achieve the best possible agreement between measured and numerically calculated temperature values.
Originality/value – Differing from previous works found in the literature, sensitivity to measurement error is considered in the parameter identification technique as an additional objective function. Thus, solutions less sensitive to measurement errors, at the expense of a degradation in accuracy, are identified by means of MO optimization algorithms.
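The single-objective step and the sensitivity objective can be sketched in miniature. The toy Python sketch below fits a heat exchange coefficient to temperature measurements by grid search; the thermal model, constants and measurement values are illustrative assumptions, not the paper's model, and the sensitivity objective is approximated by refitting under randomly perturbed measurements:

```python
import random

T_AMB = 25.0  # ambient temperature, degrees C (illustrative)

def model_temp(current, h, k=0.002):
    # Toy steady-state balance: induced losses k*I^2 equal the
    # convective heat flow h*(T - T_AMB); solve for the cover temperature.
    return T_AMB + k * current ** 2 / h

# Illustrative (current in A, measured temperature in degC) pairs.
MEASURED = [(100.0, 30.2), (200.0, 45.8), (300.0, 71.9)]

def fit_error(h, data):
    # Accuracy objective: sum of squared measured-vs-computed differences.
    return sum((model_temp(i, h) - t) ** 2 for i, t in data)

def identify(data, h_grid):
    # Single-objective step: pick the heat exchange coefficient that
    # best reproduces the measurements.
    return min(h_grid, key=lambda h: fit_error(h, data))

def sensitivity(h_grid, data, noise=0.3, trials=30, seed=1):
    # Second objective: average shift of the identified coefficient
    # when the measurements are perturbed by random error.
    rng = random.Random(seed)
    h0 = identify(data, h_grid)
    shifts = []
    for _ in range(trials):
        noisy = [(i, t + rng.gauss(0.0, noise)) for i, t in data]
        shifts.append(abs(identify(noisy, h_grid) - h0))
    return sum(shifts) / trials

h_grid = [3.0 + 0.05 * j for j in range(80)]  # candidate h values
h_star = identify(MEASURED, h_grid)
sens = sensitivity(h_grid, MEASURED, noise=0.3)
```

A weighted sum of the two objectives, or a Pareto search over them, turns this into the MO identification problem the abstract describes.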

2019 ◽  
Vol 91 (8) ◽  
pp. 1147-1155 ◽  
Author(s):  
Xiaofeng Liu ◽  
Bangzhao Zhou ◽  
Boyang Xiao ◽  
Guoping Cai

Purpose The purpose of this paper is to present a method to obtain the inertia parameters of a captured unknown space target.
Design/methodology/approach An inertia parameter identification method is proposed for the post-capture scenario. The method resolves parameter identification in two steps: coarse estimation and precise estimation. In the coarse estimation step, all the robot arms are fixed, the inertia tensor of the combined system is first calculated from the angular momentum conservation equation of the system, and the inertia parameters of the unknown target are then estimated using the least squares method. In the precise estimation step, the robot arms are controlled to move and the inertia parameters are estimated once again by an optimization method, with the coarse estimation results used as the initial value.
Findings Numerical simulation results prove that the method presented in this paper is effective for identifying the inertia parameters of a captured unknown target.
Practical implications The presented method can also be applied to identify the inertia parameters of a space robot.
Originality/value In the classic momentum-based identification method, the linear momentum and angular momentum of the system, both considered to be conserved, are used to identify the parameters of the system. If an elliptical orbit is considered, however, the assumption of linear momentum conservation does not hold. In this paper, an identification method based on the conservation of angular momentum and on dynamics is presented. Compared with the classic momentum-based method, it gives a more accurate identification result.
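The two-step scheme can be sketched for a deliberately simplified case: a single spin axis with scalar inertias, rather than the full inertia tensor with moving arms. The base-body inertia, the true target inertia and the noise level below are illustrative assumptions:

```python
import random

def coarse_inertia(i_base, samples):
    # Coarse step. Angular momentum about the spin axis is conserved:
    # (i_base + i_t) * w_after = i_base * w_before, so
    # i_t * w_after = i_base * (w_before - w_after). Least squares over
    # all (w_before, w_after) samples gives the target inertia i_t.
    num = sum(wa * i_base * (wb - wa) for wb, wa in samples)
    den = sum(wa * wa for wb, wa in samples)
    return num / den

def precise_inertia(i_base, samples, i_init, half_width=1.0, steps=2000):
    # Precise step: refine by minimizing the momentum residual around
    # the coarse estimate (a simple 1-D search stands in for the
    # paper's optimization with moving arms).
    def residual(i_t):
        return sum(((i_base + i_t) * wa - i_base * wb) ** 2
                   for wb, wa in samples)
    grid = [i_init - half_width + 2.0 * half_width * j / steps
            for j in range(steps + 1)]
    return min(grid, key=residual)

# Synthetic data: base body inertia 10, unknown target inertia 2.
rng = random.Random(0)
I_BASE, I_TARGET = 10.0, 2.0
samples = []
for _ in range(50):
    wb = rng.uniform(0.5, 2.0)
    wa = I_BASE * wb / (I_BASE + I_TARGET) + rng.gauss(0.0, 0.01)
    samples.append((wb, wa))

i_coarse = coarse_inertia(I_BASE, samples)
i_precise = precise_inertia(I_BASE, samples, i_coarse)
```

Using the coarse estimate as the starting point of the refinement mirrors the paper's use of the coarse results as the optimizer's initial value.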


1993 ◽  
Vol 264 (6) ◽  
pp. E902-E911 ◽  
Author(s):  
D. C. Bradley ◽  
G. M. Steil ◽  
R. N. Bergman

We introduce a novel technique for estimating measurement error in time courses and other continuous curves. This error estimate is used to reconstruct the original (error-free) curve. The measurement error of the data is initially assumed, and the data are smoothed with "Optimal Segments" such that the smooth curve misses the data points by an average amount consistent with the assumed measurement error. Thus the differences between the smooth curve and the data points (the residuals) are tentatively assumed to represent the measurement error. This assumption is checked by testing the residuals for randomness. If the residuals are nonrandom, it is concluded that they do not resemble measurement error, and a new measurement error is assumed. This process continues reiteratively until a satisfactory (i.e., random) group of residuals is obtained. In this case the corresponding smooth curve is taken to represent the original curve. Monte Carlo simulations of selected typical situations demonstrated that this new method ("OOPSEG") estimates measurement error accurately and consistently in 30- and 15-point time courses (r = 0.91 and 0.78, respectively). Moreover, smooth curves calculated by OOPSEG were shown to accurately recreate (predict) original, error-free curves for a wide range of measurement errors (2-20%). We suggest that the ability to calculate measurement error and reconstruct the error-free shape of data curves has wide applicability in data analysis and experimental design.
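A minimal sketch of the reiterative loop follows, with a moving average standing in for the "Optimal Segments" smoother and a Wald-Wolfowitz runs test as the randomness check; both substitutions, and the signal and noise level used, are assumptions for illustration:

```python
import math
import random
import statistics

def residuals_look_random(res, z_crit=1.96):
    # Wald-Wolfowitz runs test on the residual signs: a nonrandom
    # pattern means the smooth curve misses real structure in the data.
    signs = [r >= 0 for r in res]
    n1 = sum(signs)
    n2 = len(signs) - n1
    if n1 == 0 or n2 == 0:
        return False
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return abs(runs - mu) / math.sqrt(var) < z_crit

def moving_average(y, w):
    half = w // 2
    return [statistics.mean(y[max(0, i - half):i + half + 1])
            for i in range(len(y))]

def estimate_measurement_error(y):
    # Increase the assumed error (here: the smoothing window) until the
    # residuals pass the randomness test; report their SD as the
    # measurement-error estimate and the smooth curve as the
    # reconstructed error-free curve.
    for w in range(3, len(y) // 2, 2):
        smooth = moving_average(y, w)
        res = [yi - si for yi, si in zip(y, smooth)]
        if residuals_look_random(res):
            return statistics.pstdev(res), smooth
    return None, None

rng = random.Random(2)
true = [math.sin(0.2 * i) for i in range(60)]
noisy = [t + rng.gauss(0.0, 0.1) for t in true]
err_sd, smooth = estimate_measurement_error(noisy)
```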


2019 ◽  
Vol 29 (3) ◽  
pp. 448-463 ◽  
Author(s):  
Manuel E. Rademaker ◽  
Florian Schuberth ◽  
Theo K. Dijkstra

Purpose The purpose of this paper is to enhance consistent partial least squares (PLSc) to yield consistent parameter estimates for population models whose indicator blocks contain a subset of correlated measurement errors.
Design/methodology/approach Correction for attenuation as originally applied by PLSc is modified to include a priori assumptions on the structure of the measurement error correlations within blocks of indicators. To assess the efficacy of the modification, a Monte Carlo simulation is conducted.
Findings In the presence of population measurement error correlation, estimated parameter bias is generally small for original and modified PLSc, with the latter outperforming the former for large sample sizes. In terms of the root mean squared error, the results are virtually identical for both original and modified PLSc. Only for relatively large sample sizes, high population measurement error correlation, and low population composite reliability are the increased standard errors associated with the modification outweighed by a smaller bias. These findings are regarded as initial evidence that original PLSc is comparatively robust with respect to misspecification of the structure of measurement error correlations within blocks of indicators.
Originality/value Introducing and investigating a new approach to address measurement error correlation within blocks of indicators in PLSc, this paper contributes to the ongoing development and assessment of recent advancements in partial least squares path modeling.
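The core idea, correcting an observed correlation for attenuation while removing an assumed shared-error component, can be illustrated for two unit-variance measures. This is a two-variable sketch of the correction principle, not the composite-level PLSc algorithm:

```python
import math

def disattenuate(r_obs, rel_x, rel_y, err_corr=0.0):
    # Classic correction for attenuation, extended with an a priori
    # error-correlation term. For unit-variance observed scores,
    #   r_obs = r_true * sqrt(rel_x * rel_y)
    #           + err_corr * sqrt((1 - rel_x) * (1 - rel_y)),
    # so the assumed shared-error component is removed before dividing
    # by the square root of the reliabilities.
    r_err = err_corr * math.sqrt((1.0 - rel_x) * (1.0 - rel_y))
    return (r_obs - r_err) / math.sqrt(rel_x * rel_y)

r_plain = disattenuate(0.3, 0.8, 0.8)       # textbook formula (~0.375)
r_adj = disattenuate(0.3, 0.8, 0.8, 0.5)    # assumed error correlation
                                            # lowers the estimate (~0.25)
```

With `err_corr=0` the function reduces to the classic disattenuation formula, matching the abstract's point that the modification only departs from original PLSc when measurement error correlation is assumed a priori.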


2017 ◽  
Vol 89 (3) ◽  
pp. 425-433 ◽  
Author(s):  
Qiang Xue ◽  
Duan Haibin

Purpose The purpose of this paper is to propose a new approach for aerodynamic parameter identification of hypersonic vehicles based on the pigeon-inspired optimization (PIO) algorithm, with the objective of overcoming the disadvantages of traditional gradient-based methods such as the Newton-Raphson method, especially in noisy environments.
Design/methodology/approach The model of the hypersonic vehicle and the PIO algorithm are established for aerodynamic parameter identification. With this approach, the identification problem is converted into an optimization problem.
Findings A new swarm optimization method, the PIO algorithm, is applied in this identification process. Experimental results demonstrated the robustness and effectiveness of the proposed method: it can guarantee accurate identification results in noisy environments without laborious calculation of sensitivities.
Practical implications The new method developed in this paper can easily be applied to solve complex optimization problems when traditional methods fail, and can provide accurate aerodynamic parameters for the control law design of hypersonic vehicles.
Originality/value In this paper, the authors converted the identification problem into an optimization problem using the new swarm optimization method, PIO. This new approach is shown to be reasonable through simulation.
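A minimal sketch of the approach under stated assumptions: a basic PIO variant (map-and-compass plus landmark operators, in the commonly published form) minimizing the squared-error objective of a toy parameter identification problem. The model, data, and operator constants are illustrative, not the paper's:

```python
import math
import random

def pio_minimize(f, dim, n=24, nc1=80, nc2=20, r=0.2,
                 lo=-5.0, hi=5.0, seed=0):
    # Basic pigeon-inspired optimization: a map-and-compass phase pulls
    # velocities toward the global best; a landmark phase then halves
    # the flock and moves survivors toward a fitness-weighted center.
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    best = min(xs, key=f)[:]
    for t in range(1, nc1 + 1):                      # map-and-compass
        decay = math.exp(-r * t)
        for x, v in zip(xs, vs):
            for d in range(dim):
                v[d] = v[d] * decay + rng.random() * (best[d] - x[d])
                x[d] = min(hi, max(lo, x[d] + v[d]))
            if f(x) < f(best):
                best = x[:]
    flock = [x[:] for x in xs]
    for _ in range(nc2):                             # landmark phase
        flock.sort(key=f)
        flock = flock[:max(1, len(flock) // 2)]
        ws = [1.0 / (f(p) + 1e-12) for p in flock]
        wsum = sum(ws)
        center = [sum(w * p[d] for w, p in zip(ws, flock)) / wsum
                  for d in range(dim)]
        for p in flock:
            for d in range(dim):
                p[d] += rng.random() * (center[d] - p[d])
            if f(p) < f(best):
                best = p[:]
    return best, f(best)

# Identification as optimization: fit parameters of a toy model
# y = a*x + b*x**2 to noisy samples by minimizing squared error.
rng = random.Random(1)
data = [(x, 1.5 * x + 0.4 * x * x + rng.gauss(0.0, 0.05))
        for x in [0.5 * k for k in range(10)]]

def sq_error(p):
    a, b = p
    return sum((a * x + b * x * x - y) ** 2 for x, y in data)

params, err = pio_minimize(sq_error, dim=2)
```

Because only objective values are compared, no sensitivity (gradient) calculations are needed, which is the property the abstract highlights for noisy identification.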


2017 ◽  
Vol 16 (1) ◽  
pp. 93-98 ◽  
Author(s):  
Estephan J. Moana-Filho ◽  
Aurelio A. Alonso ◽  
Flavia P. Kapos ◽  
Vladimir Leon-Salazar ◽  
Scott H. Durand ◽  
...  

Abstract
Background and purpose (aims) Measurement error of intraoral quantitative sensory testing (QST) has been assessed using traditional methods for reliability, such as intraclass correlation coefficients (ICCs). Most studies reporting QST reliability focused on assessing one source of measurement error at a time, e.g., inter- or intra-examiner (test-retest) reliability, and employed two examiners to test inter-examiner reliability. The present study used a complex design with multiple examiners with the aim of assessing the reliability of intraoral QST while taking account of multiple sources of error simultaneously.
Methods Four examiners of varied experience assessed 12 healthy participants in two visits separated by 48 h. Seven QST procedures to determine sensory thresholds were used: cold detection (CDT), warmth detection (WDT), cold pain (CPT), heat pain (HPT), mechanical detection (MDT), mechanical pain (MPT) and pressure pain (PPT). Mixed linear models were used to estimate variance components for reliability assessment; dependability coefficients were used to simulate alternative test scenarios.
Results Most intraoral QST variability arose from differences between participants (8.8-30.5%), differences between visits within participant (4.6-52.8%), and error (13.3-28.3%). For QST procedures other than CDT and MDT, increasing the number of visits with a single examiner performing the procedures would lead to improved dependability (dependability coefficient ranges: single visit, four examiners = 0.12-0.54; four visits, single examiner = 0.27-0.68). A wide range of reliabilities for QST procedures, as measured by ICCs, was noted for inter- (0.39-0.80) and intra-examiner (0.10-0.62) variation.
Conclusions Reliability of sensory testing can be better assessed by measuring multiple sources of error simultaneously instead of focusing on one source at a time. In experimental settings, large numbers of participants are needed to obtain accurate estimates of treatment effects based on QST measurements. This is different from clinical use, where variation between persons (the person main effect) is not a concern because clinical measurements are done on a single person.
Implications Future studies assessing sensory testing reliability in both clinical and experimental settings would benefit from routinely measuring multiple sources of error. The methods and results of this study can be used by clinical researchers to improve assessment of measurement error related to intraoral sensory testing. This should lead to improved resource allocation when designing studies that use intraoral quantitative sensory testing in clinical and experimental settings.
© 2017 Scandinavian Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
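The dependability simulations can be sketched with a simplified generalizability-theory coefficient. The variance components below are illustrative, and examiner variance is folded into the error term for brevity; the study's mixed-model estimates are more detailed:

```python
def dependability(var_person, var_visit, var_error,
                  n_visits=1, n_examiners=1):
    # Generalizability-theory style dependability coefficient: the
    # share of observed-score variance due to true differences between
    # persons, with visit and residual-error variance averaged over
    # the measurement design.
    noise = var_visit / n_visits + var_error / (n_visits * n_examiners)
    return var_person / (var_person + noise)

# Illustrative variance components (person, visit-within-person, error):
VP, VV, VE = 30.0, 50.0, 20.0
single_visit_four_examiners = dependability(VP, VV, VE, 1, 4)
four_visits_single_examiner = dependability(VP, VV, VE, 4, 1)
```

With these numbers, four visits with one examiner yields a higher dependability than one visit with four examiners, mirroring the pattern reported above: averaging over visits attacks the dominant visit-within-participant variance, while adding examiners does not.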


Author(s):  
Łukasz Knypiński

Purpose The purpose of this paper is to carry out an efficiency analysis of selected metaheuristic algorithms (MAs), based on the investigation of analytical functions and of optimization processes for a permanent magnet motor.
Design/methodology/approach A comparative performance analysis was conducted for the selected MAs. Optimization calculations were performed for the following algorithms: genetic algorithm (GA), particle swarm optimization (PSO), bat algorithm, cuckoo search algorithm (CS) and only-best-individual algorithm (OBI). All of the optimization algorithms were implemented as computer scripts. Next, all optimization procedures were applied to search for the optimal design of a line-start permanent magnet synchronous motor by the use of a multi-objective objective function.
Findings The research results show that the best statistical efficiency (mean objective function and standard deviation [SD]) is obtained for the PSO and CS algorithms, while the best results over several runs are obtained for PSO and GA. The type of optimization algorithm should be selected taking into account the duration of a single optimization process. In the case of time-consuming processes, algorithms with a low SD should be used.
Originality/value The newly proposed simple nondeterministic algorithm can also be applied for simple optimization calculations. On the basis of the presented simulation results, it is possible to assess the quality of the compared MAs.
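The statistical-efficiency comparison can be sketched as a harness that reruns an algorithm from independent seeds and reports the mean and SD of the best objective value found. The sphere test function and the random-search stand-in for a best-individual strategy are illustrative assumptions, not the paper's implementations:

```python
import random
import statistics

def sphere(x):
    # Analytical test function with minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def random_search(f, dim, evals, rng):
    # Stand-in for the simplest nondeterministic strategy: sample
    # uniformly and keep only the best individual seen so far.
    best = None
    for _ in range(evals):
        v = f([rng.uniform(-5.0, 5.0) for _ in range(dim)])
        if best is None or v < best:
            best = v
    return best

def benchmark(algo, f, dim=3, evals=500, runs=20, seed=0):
    # Statistical efficiency over repeated runs: mean objective and SD.
    master = random.Random(seed)
    results = [algo(f, dim, evals, random.Random(master.random()))
               for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

mean_obj, sd_obj = benchmark(random_search, sphere)
```

Plugging several algorithms into `benchmark` with the same budget and comparing (mean, SD) pairs reproduces the selection criterion discussed above: for time-consuming processes, prefer the algorithm with the lower SD.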


Author(s):  
Yuzhe Liu ◽  
Jun Wu ◽  
Liping Wang ◽  
Jinsong Wang ◽  
Dong Wang ◽  
...  

Purpose The purpose of this study is to develop a modified parameter identification method and a novel measurement method to calibrate a 3 degrees-of-freedom (3-DOF) parallel tool head, a parallel mechanism module in a five-axis hybrid machine tool. The proposed parameter identification method, named the Modified Singular Value Decomposition (MSVD) method, aims to overcome the difficulty of choosing the algorithm parameter in the regularization identification method. The novel measurement method, named the vector projection (VP) method, is developed to expand the measurement range of self-made measurement implements.
Design/methodology/approach The Newton iterative algorithm based on the least squares method is analyzed using singular value decomposition, and the MSVD method is proposed on the basis of this analysis. The VP method transforms the angle measurement into a displacement measurement by taking full advantage of the 3-DOF parallel tool head's ability to move in the X-Y plane.
Findings The kinematic calibration approach is verified by calibration simulations, a Rotation Tool Center Point accuracy test and an experiment of machining an "S"-shaped test specimen.
Originality/value The kinematic calibration approach with the MSVD and VP methods can be successfully applied to the 3-DOF parallel tool head and to other 3-DOF parallel mechanisms.
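The idea of analyzing the least squares step through a singular value decomposition can be sketched as a truncated-SVD pseudoinverse solve: small singular values, which amplify measurement noise in an ill-conditioned identification Jacobian, are simply dropped. The one-sided Jacobi SVD and the fixed relative truncation threshold below are illustrative choices, not the paper's MSVD algorithm:

```python
import math

def truncated_svd_solve(a, b, rel_tol=1e-3, sweeps=30):
    # One-sided Jacobi SVD of the m x n matrix `a`, then a
    # pseudoinverse solve of a*x ~= b that drops singular values below
    # rel_tol * s_max; the truncation plays the role of the
    # regularization parameter that is otherwise hand-tuned.
    m, n = len(a), len(a[0])
    u = [row[:] for row in a]
    v = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = sum(u[i][p] * u[i][p] for i in range(m))
                beta = sum(u[i][q] * u[i][q] for i in range(m))
                gamma = sum(u[i][p] * u[i][q] for i in range(m))
                off = max(off, abs(gamma))
                if abs(gamma) < 1e-15:
                    continue
                zeta = (beta - alpha) / (2.0 * gamma)
                t = math.copysign(1.0, zeta) / (abs(zeta)
                                                + math.hypot(1.0, zeta))
                c = 1.0 / math.hypot(1.0, t)
                s = c * t
                for row in u:                 # rotate columns p and q
                    rp, rq = row[p], row[q]
                    row[p], row[q] = c * rp - s * rq, s * rp + c * rq
                for row in v:
                    rp, rq = row[p], row[q]
                    row[p], row[q] = c * rp - s * rq, s * rp + c * rq
        if off < 1e-12:
            break
    sing = [math.sqrt(sum(u[i][j] ** 2 for i in range(m)))
            for j in range(n)]
    smax = max(sing)
    x = [0.0] * n
    for j in range(n):
        if sing[j] > rel_tol * smax:          # keep this component
            coef = sum(u[i][j] * b[i] for i in range(m)) / sing[j] ** 2
            for i in range(n):
                x[i] += coef * v[i][j]
    return x

# Well-conditioned sanity check: the overdetermined system is solved exactly.
x = truncated_svd_solve([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]],
                        [3.0, 4.0, 0.0])
```

In an iterative calibration loop, each Newton step would call such a solver on the identification Jacobian and the pose-error vector.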


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Godson A. Tetteh ◽  
Kwasi Amoako-Gyampah ◽  
Amoako Kwarteng

Purpose Several research studies on Lean Six Sigma (LSS) have been done using the survey methodology. However, the use of surveys often relies on the measurement of variables that cannot be directly observed, with attendant measurement errors. The purpose of this study is to develop a methodological framework consisting of a combination of four tools for identifying and assessing measurement error during survey research.
Design/methodology/approach This paper evaluated the viability of the framework through an experimental study on the assessment of project management success in a developing country environment. The research design combined a control group and pretest and post-test measurements with structural equation modeling, which enabled the assessment of differences between honest and fake survey responses. This paper tested for common method variance (CMV) using the chi-square test for the difference between unconstrained and fully constrained models.
Findings The CMV results confirmed that there was significant shared variance among the different measures, allowing us to distinguish between trait and faking responses and to ascertain how much of the observed process measurement is due to measurement system variation as opposed to variation arising from the study's constructs.
Research limitations/implications The study was conducted in one country, and hence, the results may not be generalizable.
Originality/value Measurement error during survey research, if not properly addressed, can lead to incorrect conclusions that can harm theory development. It can also lead to inappropriate recommendations for practicing managers. This study provides findings from a framework developed and assessed in an LSS project environment for identifying faking responses. This paper provides a robust framework consisting of four tools that provide guidelines on distinguishing between fake and trait responses. This framework should be of great value to researchers.
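The chi-square difference test between the unconstrained and fully constrained models can be sketched as follows. The fit statistics and degrees of freedom are illustrative, and the closed-form survival function used here assumes an even difference in degrees of freedom:

```python
import math

def chi2_sf(x, df):
    # Survival function of the chi-square distribution; a closed form
    # exists for even degrees of freedom:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{df/2-1} (x/2)^i / i!
    assert df % 2 == 0, "closed form used here needs even df"
    term, total = 1.0, 1.0
    for i in range(1, df // 2):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def cmv_difference_test(chi2_unconstr, df_unconstr,
                        chi2_constr, df_constr, alpha=0.05):
    # Chi-square difference between the fully constrained model and the
    # unconstrained model; a significant difference means the
    # constraints degrade fit, informing the CMV assessment.
    d_chi = chi2_constr - chi2_unconstr
    d_df = df_constr - df_unconstr
    p = chi2_sf(d_chi, d_df)
    return p < alpha, p

# Illustrative fit statistics for the two models (not from the study):
significant, p = cmv_difference_test(120.0, 48, 130.0, 50)
```

For odd differences in degrees of freedom, a library routine such as a regularized incomplete gamma function would replace the even-df closed form.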

