A Novel Objective Function DYNO for Automatic Multi-variable Calibration and Application to Assess Effects of Velocity versus Temperature Data for 3D Lake Models Calibration

2021 ◽  
Author(s):  
Wei Xia ◽  
Taimoor Akhtar ◽  
Christine A. Shoemaker

Abstract. This study introduces a novel Dynamically Normalized Objective function (DYNO) for multi-variable (i.e., temperature and velocity) model calibration problems. DYNO combines the error metrics of multiple variables into a single objective function by dynamically normalizing each variable's error terms using information available during the search. DYNO thus dynamically adjusts the weight of each variable's error term, balancing the calibration across variables during the optimization search. DYNO is applied to calibrate a tropical hydrodynamic model in which temperature and velocity observation data are used simultaneously. We also investigated the efficiency of DYNO by comparing the results of using DYNO with the results of calibrating to either temperature or velocity observations alone. The results indicate that DYNO balances the calibration between water temperature and velocity, and that calibrating to only one variable (e.g., temperature or velocity) cannot guarantee the goodness-of-fit of the other. Our study suggests that both temperature and velocity measurements should be used for hydrodynamic model calibration in practice. Our example problems were computed with the parallel optimization method PODS, but DYNO can also easily be used in serial applications.
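The dynamic-normalization idea can be illustrated with a short sketch. This is an illustrative reading of the abstract, not the paper's exact formula: each variable's error is rescaled by the min-max range of error values seen so far in the search, so no single variable dominates the combined objective. The variable names and error values below are hypothetical.

```python
def dyno(errors, history):
    """Combine per-variable errors into one objective by normalizing each
    variable's error with the min-max range of values seen so far in the
    search (illustrative; not the paper's exact normalization)."""
    total = 0.0
    for var, err in errors.items():
        seen = history.setdefault(var, [])
        seen.append(err)
        lo, hi = min(seen), max(seen)
        total += (err - lo) / (hi - lo) if hi > lo else 0.0
    return total / len(errors)

# Toy search trace: three candidate parameter sets with hypothetical
# temperature (T) and velocity (V) error values.
hist = {}
scores = [dyno(c, hist) for c in (
    {"T": 2.0, "V": 0.30},
    {"T": 1.5, "V": 0.45},
    {"T": 1.0, "V": 0.20},
)]
# The second candidate is penalised for its worse velocity error even
# though its temperature error improved.
```

Because the normalization bounds update as the search proceeds, the same error value can receive a different weight later in the run, which is what lets the objective stay balanced without fixed, hand-tuned weights.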

Author(s):  
Xiaochao Qian ◽  
Wei Li ◽  
Ming Yang

Model calibration is the procedure of adjusting unknown parameters to fit a model to experimental data and improve its predictive capability. However, the procedure is difficult to implement in the presence of aleatory uncertainty. In this paper, a new method of model calibration based on uncertainty propagation is investigated. The calibration process is formulated as an optimization problem, and a two-stage nested uncertainty propagation method is proposed to solve it. A Monte Carlo simulation method is applied in the inner loop to propagate the aleatory uncertainty, and an optimization method is applied in the outer loop to propagate the epistemic uncertainty. The optimization objective function is the consistency between the result of the inner loop and the experimental data; accordingly, different consistency measures for univariate and multivariate outputs are proposed as objective functions. Finally, the thermal challenge problem is used to validate the reasonableness and effectiveness of the proposed method.
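The two-stage nested structure can be sketched as follows, assuming a toy model y = θ + noise: the Monte Carlo inner loop propagates the aleatory noise for a fixed epistemic parameter θ, and the outer loop searches θ for the best consistency with the data. The grid search and the squared-difference-of-means consistency measure are simplifications of the paper's method.

```python
import random
import statistics

def simulate(theta, noise_sd=0.5, n=2000, seed=0):
    """Inner loop: Monte Carlo propagation of aleatory uncertainty for a
    fixed epistemic parameter theta (toy model y = theta + noise)."""
    rng = random.Random(seed)
    return [theta + rng.gauss(0.0, noise_sd) for _ in range(n)]

def inconsistency(theta, experiments):
    """Disagreement between the inner-loop output statistics and the data
    (a simple squared difference of means; the paper proposes richer
    consistency measures, including ones for multivariate outputs)."""
    mc_mean = statistics.fmean(simulate(theta))
    return (mc_mean - statistics.fmean(experiments)) ** 2

def calibrate(experiments, grid):
    """Outer loop: search over the epistemic parameter."""
    return min(grid, key=lambda th: inconsistency(th, experiments))

data = [2.1, 1.9, 2.2, 2.0, 1.8]          # hypothetical experimental data
best = calibrate(data, [t / 10 for t in range(0, 41)])  # grid over 0.0..4.0
```

In practice the outer loop would use a proper optimizer rather than a grid, but the nesting is the same: every outer-loop candidate triggers a full inner Monte Carlo run.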


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1452
Author(s):  
Cristian Mateo Castiblanco-Pérez ◽  
David Esteban Toro-Rodríguez ◽  
Oscar Danilo Montoya ◽  
Diego Armando Giral-Ramírez

In this paper, we propose a new discrete-continuous codification of the Chu–Beasley genetic algorithm to address the optimal placement and sizing problem of distribution static compensators (D-STATCOMs) in electrical distribution grids. The discrete part of the codification determines the nodes where the D-STATCOMs will be installed, and the continuous part regulates their sizes. The objective function considered in this study is the minimization of the annual operating costs associated with energy losses and the investment in D-STATCOM installation. This objective function is subject to the classical power balance constraints and the devices' capabilities. The proposed discrete-continuous version of the genetic algorithm solves the resulting mixed-integer non-linear programming model. Numerical validations on the 33-node test feeder with radial and meshed configurations show that the proposed approach effectively minimizes the annual operating costs of the grid. In addition, the results of the proposed optimization method are compared against solutions obtained with the GAMS software, demonstrating its efficiency and robustness.
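A minimal sketch of the discrete-continuous encoding, with hypothetical cost numbers: each individual pairs a discrete tuple of installation nodes with a continuous tuple of device sizes, and a toy cost function stands in for the power-flow-based evaluation of loss and investment costs.

```python
import random

def random_individual(rng, n_nodes, n_devices, size_max):
    """Discrete part: which nodes receive a D-STATCOM; continuous part:
    their sizes (bounds are illustrative, not the paper's data)."""
    nodes = tuple(sorted(rng.sample(range(1, n_nodes + 1), n_devices)))
    sizes = tuple(rng.uniform(0.1, size_max) for _ in range(n_devices))
    return nodes, sizes

def annual_cost(individual, loss_cost=1000.0, invest_cost=50.0):
    """Toy objective: energy-loss cost falls with installed capacity while
    investment cost rises with it (a stand-in for the constrained
    power-flow evaluation used in the paper)."""
    nodes, sizes = individual
    capacity = sum(sizes)
    return loss_cost / (1.0 + capacity) + invest_cost * capacity

rng = random.Random(1)
pop = [random_individual(rng, n_nodes=33, n_devices=3, size_max=2.0)
       for _ in range(20)]
best = min(pop, key=annual_cost)
```

The Chu–Beasley scheme would additionally enforce population diversity and feasibility during replacement; the point here is only that one chromosome carries both the integer siting decision and the real-valued sizing decision.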


Coatings ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 774
Author(s):  
Haitao Luo ◽  
Rong Chen ◽  
Siwei Guo ◽  
Jia Fu

At present, hard coating structures are widely studied as a new passive damping method. Generally, the hard coating material completely covers the surface of the thin-walled structure, but local coverage can not only achieve a better vibration-reduction effect but also save material and processing costs. In this paper, a topology optimization method for hard-coating composite plates is proposed to maximize the modal loss factors. A finite element dynamic model of the hard-coating composite plate is established. The topology optimization model takes the energy ratio of the hard-coating layer to the base layer as the objective function and the amount of damping material as the constraint. The sensitivity of the objective function to the design variables is derived, and the design variables are updated iteratively by the Method of Moving Asymptotes (MMA). Several numerical examples demonstrate that this method can obtain the optimal layout of damping materials for hard-coating composite plates. The results show that the damping materials are mainly distributed in areas where the stored modal strain energy is large, which is consistent with the traditional design method. Finally, based on the numerical results, an experimental study of locally hard-coated composite plates is carried out. The results show that the topology optimization method can significantly reduce the frequency response amplitude while reducing the amount of damping material, demonstrating the feasibility and effectiveness of the method.
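The finding that damping material concentrates where modal strain energy is large suggests a simple illustrative rule (this is not the MMA-based optimizer itself, and the energy values are hypothetical): cover the elements with the highest strain energy, subject to a material budget.

```python
def select_coating_layout(strain_energy, budget):
    """Toy layout rule echoing the paper's result: place coating on the
    elements storing the most modal strain energy, limited to a budget
    given as the fraction of elements that may be covered."""
    n_keep = max(1, round(budget * len(strain_energy)))
    ranked = sorted(range(len(strain_energy)),
                    key=lambda i: strain_energy[i], reverse=True)
    layout = [0] * len(strain_energy)  # 1 = coated element, 0 = bare
    for i in ranked[:n_keep]:
        layout[i] = 1
    return layout

# Hypothetical per-element modal strain energies for a 6-element mesh.
energy = [0.1, 0.9, 0.4, 0.7, 0.2, 0.05]
layout = select_coating_layout(energy, budget=0.5)  # cover half the elements
```

A gradient-based method like MMA refines such a layout continuously using the derived sensitivities, but the greedy rule above captures why the optimized designs resemble the traditional strain-energy-based placement.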


2021 ◽  
Author(s):  
Markus Hrachowitz ◽  
Petra Hulsman ◽  
Hubert Savenije

Hydrological models are often calibrated with respect to flow observations at the basin outlet. As a result, flow predictions may seem reliable, but this is not necessarily the case for the spatiotemporal variability of system-internal processes, especially in large river basins. Satellite observations contain valuable information not only for poorly gauged basins with limited ground observations and for spatiotemporal model calibration, but also for stepwise model development. This study explored the value of satellite observations for improving our understanding of hydrological processes through stepwise model structure adaptation and for calibrating models both temporally and spatially. More specifically, satellite-based evaporation and total water storage anomaly observations were used to diagnose model deficiencies and subsequently to improve the hydrological model structure and the selection of feasible parameter sets. A distributed, process-based hydrological model was developed for the Luangwa river basin in Zambia and calibrated with respect to discharge as a benchmark. This model was modified stepwise by testing five alternative hypotheses related to the process of upwelling groundwater in wetlands, which was assumed to be negligible in the benchmark model, and to the spatial discretization of the groundwater reservoir. Each model hypothesis was calibrated with respect to 1) discharge and 2) multiple variables simultaneously, including discharge and the spatiotemporal variability in the evaporation and total water storage anomalies. The benchmark model calibrated with respect to discharge reproduced this variable well, as did the basin-averaged evaporation and total water storage anomalies. However, the evaporation in wetland-dominated areas and the spatial variability in the evaporation and total water storage anomalies were poorly modelled. The model improved the most when upwelling groundwater flow from a distributed groundwater reservoir was introduced and the model was calibrated with respect to multiple variables simultaneously. This study showed that satellite-based evaporation and total water storage anomaly observations provide valuable information for improved understanding of hydrological processes through stepwise model development and spatiotemporal model calibration.


Author(s):  
T. E. Potter ◽  
K. D. Willmert ◽  
M. Sathyamoorthy

Abstract Mechanism path generation problems which use link deformations to improve the design lead to optimization problems involving a nonlinear sum-of-squares objective function subject to a set of linear and nonlinear constraints. Inclusion of the deformation analysis makes the objective function evaluation computationally expensive. An optimization method is presented which requires relatively few objective function evaluations. The algorithm, based on the Gauss method for unconstrained problems, is developed as an extension of the Gauss constrained technique for linear constraints and revises the Gauss nonlinearly constrained method for quadratic constraints. The derivation of the algorithm, using a Lagrange multiplier approach, is based on the Kuhn-Tucker conditions, so that when the iteration process terminates these conditions are automatically satisfied. Although the technique was developed for mechanism problems, it is applicable to any optimization problem having the form of a sum-of-squares objective function subject to nonlinear constraints.
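For the unconstrained scalar case, one Gauss (Gauss-Newton) iteration for a sum-of-squares objective can be sketched as below; the paper's contribution is the extension to linear and quadratic constraints via Lagrange multipliers, which this sketch omits. The fitting problem and data are hypothetical.

```python
def gauss_step(a, xs, ys):
    """One Gauss iteration for the sum-of-squares objective
    f(a) = sum_i (a*x_i - y_i)^2.  The update is
    a_new = a - (J^T J)^{-1} J^T r with residuals r_i = a*x_i - y_i
    and Jacobian entries J_i = x_i (scalar parameter case)."""
    r = [a * x - y for x, y in zip(xs, ys)]    # residual vector
    jtj = sum(x * x for x in xs)               # J^T J
    jtr = sum(x * ri for x, ri in zip(xs, r))  # J^T r
    return a - jtr / jtj

# Fit the slope of y = a*x to toy data; since the residuals are linear
# in a, a single Gauss step lands on the least-squares solution.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.1, 5.9]
a = gauss_step(0.0, xs, ys)
```

For nonlinear residuals the step is repeated until convergence; the constrained variants in the paper augment the same linear system with the active constraint gradients.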


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
M F Kragh ◽  
J T Lassen ◽  
J Rimestad ◽  
J Berntsen

Abstract Study question Do AI models for embryo selection provide actual implantation probabilities that generalise across clinics and patient demographics? Summary answer AI models need to be calibrated on representative data before providing reasonable agreement between predicted scores and actual implantation probabilities. What is known already AI models have been shown to perform well at discriminating embryos according to implantation likelihood, as measured by the area under the curve (AUC). However, discrimination performance does not indicate how well models predict actual implantation likelihood, especially across clinics and patient demographics. In general, prediction models must be calibrated on representative data to provide meaningful probabilities. Calibration can be evaluated and summarised by the “expected calibration error” (ECE) on score deciles and tested for significant lack of calibration using the Hosmer-Lemeshow goodness-of-fit test. The ECE describes the average deviation between predicted probabilities and observed implantation rates and is 0 for perfect calibration. Study design, size, duration Time-lapse embryo videos from 18 clinics were used to develop AI models for prediction of fetal heartbeat (FHB). Model generalisation was evaluated using clinic hold-out models for the three largest clinics. Calibration curves were used to evaluate the agreement between AI-predicted scores and observed FHB outcomes, summarised by the ECE. Models were evaluated 1) without calibration, 2) with calibration (Platt scaling) on other clinics’ data, and 3) with calibration on the clinic’s own data (a 30%/70% split for calibration/evaluation).
Participants/materials, setting, methods A previously described AI algorithm, iDAScore, based on 115,842 time-lapse sequences of embryos, including 14,644 transferred embryos with known implantation data (KID), was used as the foundation for training hold-out AI models for the three largest clinics (n = 2,829; 2,673; 1,327 KID embryos), such that their data were not included during model training. ECEs across the three clinics (mean±SD) were compared for models with/without calibration using KID embryos only, both overall and within subgroups of patient age (<36, 36-40, >40 years). Main results and the role of chance The AUC across the three clinics was 0.675±0.041 (mean±SD) and unaffected by calibration. Without calibration, the overall ECE was 0.223±0.057, indicating weak agreement between scores and actual implantation rates. With calibration on other clinics’ data, the overall ECE was 0.040±0.013, indicating considerable improvement with moderate clinical variation. As implantation probabilities are affected by both clinical practice and patient demographics, a subgroup analysis was conducted on patient age (<36, 36-40, >40 years). With calibration on other clinics’ data, age-group ECEs were 0.129±0.055 vs. 0.078±0.033 vs. 0.072±0.015. These calibration errors were thus larger than the overall average ECE of 0.040, indicating poor generalisation across age. Including age as an input to the calibration, age-group ECEs were 0.088±0.042 vs. 0.075±0.046 vs. 0.051±0.025, indicating improved agreement between scores and implantation rates across both clinics and age groups. With calibration including age on the clinic’s own data, however, the best calibrations were obtained, with ECEs of 0.060±0.017 vs. 0.040±0.010 vs. 0.039±0.009. The results indicate that both clinical practice and patient demographics influence calibration and thus ideally should be adjusted for.
Testing for lack of calibration using the Hosmer-Lemeshow goodness-of-fit test, only one age group from one clinic appeared miscalibrated (P = 0.02), whereas all other age groups from the three clinics were appropriately calibrated (P > 0.10). Limitations, reasons for caution In this study, AI model calibration was conducted based on clinic and age. Other patient metadata, such as BMI and patient diagnosis, may be relevant for calibration as well. However, for both calibration and evaluation on a clinic’s own data, a substantial amount of data for each subgroup is needed. Wider implications of the findings With calibrated scores, AI models can predict the actual implantation likelihood for each embryo. Probability estimates are a strong tool for patient communication and for clinical decisions such as deciding when to discard/freeze embryos. Model calibration may thus be the next step in improving clinical outcomes and shortening time to live birth. Trial registration number This work is partly funded by the Innovation Fund Denmark (IFD) under File No. 7039-00068B and partly funded by Vitrolife A/S
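The ECE used throughout can be computed with a short routine; binning and weighting details vary between studies, so this is an illustrative decile-binned version rather than the authors' exact implementation, and the score/outcome values are toy data.

```python
def expected_calibration_error(scores, outcomes, n_bins=10):
    """ECE over score deciles: the size-weighted average of
    |mean predicted probability - observed event rate| per bin.
    0 means perfect calibration."""
    pairs = sorted(zip(scores, outcomes))
    n = len(pairs)
    ece = 0.0
    for b in range(n_bins):
        chunk = pairs[b * n // n_bins:(b + 1) * n // n_bins]
        if not chunk:
            continue
        mean_score = sum(s for s, _ in chunk) / len(chunk)
        obs_rate = sum(o for _, o in chunk) / len(chunk)
        ece += len(chunk) / n * abs(mean_score - obs_rate)
    return ece

# Perfectly calibrated toy predictions: scores match outcome frequencies.
scores = [0.0, 0.0, 1.0, 1.0]
outcomes = [0, 0, 1, 1]
ece = expected_calibration_error(scores, outcomes)  # 0.0 here
```

Platt scaling, used in the study, fits a logistic transform of the raw scores on held-out data; recomputing the ECE after that transform is how the reported post-calibration numbers are obtained.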


Agromet ◽  
2011 ◽  
Vol 25 (1) ◽  
pp. 9
Author(s):  
Siti Nurdhawata ◽  
Bambang Dwi Dasanto

Generally, a reservoir can overcome the problem of water availability in a particular region. The reservoir collects excess water during the rainy season to be used at times of water shortage during the dry season. In Pidie, the largest water sources are Krueng Baro Geunik and Krueng Tiro. The reservoir is located at Krueng Rukoh, with Krueng Tiro as the source of water supply. The reservoir provides water for irrigation and domestic supply in the Baro (11.950 ha) and Tiro (6.330 ha) areas. Thirteen districts (216,718 inhabitants) use water from this reservoir. Given a population growth rate of 0.52%, water demand in the region is increasing. The aim of this study was to estimate the volume of water entering the reservoir using a tank model. The calibration curve between the tank model output and observation data showed good correlation (R² = 0.7). The calibrated model was then used to calculate the discharge at Krueng Baro Geunik. A water balance analysis showed that the highest deficit occurred in September and the highest surplus in November. Based on this analysis, the capacity of the Krueng Rukoh reservoir is able to fulfill its function, assuming the population growth rate and the irrigation area remain constant.
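A minimal one-tank version of a tank model can be sketched as follows; the coefficients and the single-tank structure are illustrative (applied tank models, likely including the one in this study, typically chain several tanks with multiple outlets).

```python
def tank_model(rainfall, k_runoff=0.3, k_infil=0.1, storage0=0.0):
    """Single-tank runoff sketch: each time step, rain fills the tank, a
    fraction of storage drains as runoff through a side outlet, and
    another fraction seeps out the bottom as infiltration."""
    storage, runoff = storage0, []
    for rain in rainfall:
        storage += rain
        q = k_runoff * storage            # discharge from the side outlet
        storage -= q + k_infil * storage  # remove runoff and seepage
        runoff.append(q)
    return runoff

# Hypothetical rainfall series (mm per time step).
q = tank_model([10.0, 0.0, 5.0])
```

Calibration then amounts to tuning the outlet coefficients until modelled discharge matches observations, which is the step the study evaluates with the R² = 0.7 correlation.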


2013 ◽  
Vol 10 (4) ◽  
pp. 4597-4626
Author(s):  
S. H. P. W. Gamage ◽  
G. A. Hewa ◽  
S. Beecham

Abstract. The wide variability of hydrological losses in catchments is due to the multiple variables that affect the rainfall-runoff process. Accurate estimation of hydrological losses is required for making vital decisions in design applications that are based on design rainfall models and rainfall-runoff models. Using representative single values of losses, despite their wide variability, is common practice, especially in Australian studies. This practice leads to issues such as over- or under-estimation of design floods. Probability distributions can provide a better representation of losses. In particular, using joint probability approaches (JPA), probability distributions can be incorporated into the hydrological loss parameters of design models. However, a lack of understanding of loss distributions limits the benefit of using JPA. The aim of this paper is to identify a probability distribution function that can successfully describe hydrological losses in South Australian (SA) catchments. This paper describes suitable parametric and non-parametric distributions that can successfully describe observed loss data. The goodness-of-fit of the fitted distributions and quantification of the errors associated with quantile estimation are also discussed. A two-parameter Gamma distribution was identified as one that successfully described the initial loss (IL) data of the selected catchments. A non-parametric standardised distribution of losses that describes both IL and continuing loss (CL) data was also identified. The results obtained for the non-parametric methods were compared with similar studies carried out in other parts of Australia, and a remarkable degree of consistency was observed. The results will be helpful in improving design flood applications.
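Fitting a two-parameter Gamma distribution to loss data can be sketched with a method-of-moments estimate (a simple stand-in for the maximum-likelihood fitting usually used in such studies); the sample values below are hypothetical, not the SA catchment data.

```python
import statistics

def gamma_moment_fit(losses):
    """Two-parameter Gamma fit by the method of moments:
    shape k = mean^2 / variance, scale theta = variance / mean,
    so that k*theta recovers the sample mean."""
    m = statistics.fmean(losses)
    v = statistics.pvariance(losses)
    return m * m / v, v / m  # (shape k, scale theta)

# Hypothetical initial-loss sample (mm) for a single catchment.
il = [5.0, 12.0, 8.0, 20.0, 15.0, 10.0]
k, theta = gamma_moment_fit(il)
```

Once fitted, the distribution replaces the single representative loss value: design-flood simulations draw losses from Gamma(k, θ) instead of assuming one fixed IL, which is the core of the joint probability approach described above.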


Author(s):  
Muhammad Adeel ◽  
Yinglei Song

Background: In many applications of image processing, image enhancement is often a necessary preprocessing step. In general, for an enhanced image, both the overall visual contrast and the refined local details are crucial for achieving accurate results in subsequent classification or analysis. Objective: This paper proposes a new approach for image enhancement such that both the global and local visual effects of an enhanced image can be significantly improved. Methods: The approach utilizes the normalized incomplete Beta transform to map pixel intensities from an original image to its enhanced counterpart. An objective function consisting of two parts is optimized to determine the parameters of the transform: one part reflects the global visual effects of the enhanced image, and the other evaluates the enhanced visual effects on the most important local details of the original image. The optimization is performed with a technique based on the particle swarm optimization method. Results: Experimental results show that the approach is suitable for the automatic enhancement of images. Conclusion: The proposed approach can significantly improve both the global and local visual contrast of an image.
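The normalized incomplete Beta transform maps [0, 1] to [0, 1]; a sketch that evaluates it by simple numerical integration (assuming shape parameters a, b ≥ 1) and applies it to 8-bit intensities is shown below. In the paper the parameters would be chosen by the PSO-based optimization of the two-part objective; here they are fixed by hand, and the pixel values are toy data.

```python
def inc_beta_map(u, a, b, steps=2000):
    """Normalized (regularized) incomplete Beta transform I_u(a, b),
    evaluated by trapezoidal integration of t^(a-1)*(1-t)^(b-1)
    (a library betainc would normally be used; assumes a, b >= 1 so the
    integrand is finite at the endpoints)."""
    def density(t):
        return t ** (a - 1) * (1 - t) ** (b - 1)
    h = 1.0 / steps
    full = sum(density(i * h) + density((i + 1) * h) for i in range(steps)) * h / 2
    n = max(1, round(u * steps))
    hu = u / n
    part = sum(density(i * hu) + density((i + 1) * hu) for i in range(n)) * hu / 2
    return part / full

def enhance(pixels, a, b):
    """Map 8-bit intensities through the transform; a and b control the
    shape of the intensity curve and would be tuned by the optimizer."""
    return [round(255 * inc_beta_map(p / 255, a, b)) for p in pixels]

# With a = b = 2 the transform is an S-curve (I_u(2,2) = 3u^2 - 2u^3),
# which stretches mid-tone contrast while fixing black and white points.
out = enhance([0, 64, 128, 192, 255], a=2.0, b=2.0)
```

Different (a, b) pairs yield brightening, darkening, or S-shaped curves, which is why optimizing them against a contrast objective can adapt the enhancement to each image automatically.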

