A Gaussian Process Model for Color Camera Characterization: Assessment in Outdoor Levantine Rock Art Scenes

Sensors ◽  
2019 ◽  
Vol 19 (21) ◽  
pp. 4610 ◽  
Author(s):  
Adolfo Molada-Tebar ◽  
Gabriel Riutort-Mayol ◽  
Ángel Marqués-Mateu ◽  
José Luis Lerma

In this paper, we propose a novel approach to the colorimetric camera characterization procedure based on a Gaussian process (GP). GPs are powerful and flexible nonparametric models for multivariate nonlinear functions. To validate the GP model, we compare the results achieved with a second-order polynomial model, which is the most widely used regression model for characterization purposes. We applied the methodology to a set of raw images of rock art scenes collected with two different Single Lens Reflex (SLR) cameras. A leave-one-out cross-validation (LOOCV) procedure was used to assess the predictive performance of the models in terms of CIE XYZ residuals and ΔE*ab color differences. Values of less than 3 CIELAB units were achieved for ΔE*ab. The output sRGB characterized images show that both regression models are suitable for practical applications in cultural heritage documentation. However, the results show that colorimetric characterization based on the Gaussian process provides significantly better results, with lower values for residuals and ΔE*ab. We also analyzed the noise induced in the output image after applying the camera characterization. Since the noise depends on the specific camera, proper camera selection is essential for the photogrammetric work.
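As a rough illustration of the characterization idea, the sketch below fits a GP regression from camera RGB to CIE XYZ and scores it with leave-one-out cross-validation. The color-chart data, kernel choice, and linear camera model are hypothetical stand-ins, not the authors' pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut

# Hypothetical color-chart data: camera RGB patches -> reference CIE XYZ values
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, size=(24, 3))           # 24 chart patches
camera = np.array([[0.41, 0.36, 0.18],          # crude linear camera model (assumed)
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = rgb @ camera.T + rng.normal(0, 0.01, (24, 3))

# Leave-one-out cross-validation of the GP characterization model
residuals = []
for train, test in LeaveOneOut().split(rgb):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(rgb[train], xyz[train])
    residuals.append(np.abs(gp.predict(rgb[test]) - xyz[test]))
mean_residual = float(np.mean(residuals))
```

In a real workflow the per-patch XYZ residuals would be converted to ΔE*ab in CIELAB before reporting, as the paper does.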

Author(s):  
Wei Li ◽  
Akhil Garg ◽  
Mi Xiao ◽  
Liang Gao

Abstract The power of electric vehicles (EVs) comes from lithium-ion batteries (LIBs). LIBs are sensitive to temperature: temperatures that are too high or too low affect the performance and safety of EVs. Therefore, a stable and efficient battery thermal management system (BTMS) is essential for an EV. This article presents a comprehensive study of liquid-cooled BTMS. Two cooling schemes are designed: a serpentine channel and a U-shaped channel. The results show that the cooling effect of the two schemes is roughly the same, but the U-shaped channel significantly decreases the pressure drop (PD) loss. The U-shaped channel is parameterized and modeled. A machine learning method, the Gaussian process (GP) model, is used to express the outputs, such as temperature difference, temperature standard deviation, and pressure drop. A multi-objective optimization model is established using the GP models, and the NSGA-II method is employed to drive the optimization process. The optimized scheme is compared with the initial design. The main findings are summarized as follows: the velocity of the cooling water v decreases from 0.3 m/s to 0.22 m/s, a reduction of 26.67%, and the pressure drop decreases from 431.40 Pa to 327.11 Pa, a reduction of 24.18%. The optimized solution significantly reduces the pressure drop and helps to reduce parasitic power. The proposed method can provide a useful guideline for the liquid cooling design of large-scale battery packs.
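The surrogate-plus-optimizer loop can be sketched as follows. The geometry parameter, the response shapes, and the dense-grid Pareto filter (standing in for NSGA-II) are all assumptions for illustration, not the paper's models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data: one channel-geometry parameter x in [0, 1]
# -> two objectives, temperature difference dT (K) and pressure drop dP (Pa)
x = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
dT = 2.0 + 1.5 * (x.ravel() - 0.6) ** 2         # assumed response shape
dP = 430.0 - 100.0 * x.ravel()                  # assumed response shape

# One GP surrogate per objective, trained on the (assumed) CFD samples
gp_dT = GaussianProcessRegressor(normalize_y=True).fit(x, dT)
gp_dP = GaussianProcessRegressor(normalize_y=True).fit(x, dP)

# Cheap surrogate evaluation over a dense grid (a stand-in for NSGA-II)
grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
f1, f2 = gp_dT.predict(grid), gp_dP.predict(grid)

# Keep only the non-dominated (Pareto-optimal) designs
pareto = [i for i in range(len(grid))
          if not np.any((f1 <= f1[i]) & (f2 <= f2[i])
                        & ((f1 < f1[i]) | (f2 < f2[i])))]
```

The point of the surrogate is that each candidate design costs a prediction instead of a CFD run, so a population-based optimizer like NSGA-II becomes affordable.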


Author(s):  
Yanwen Xu ◽  
Pingfeng Wang

Abstract The Gaussian Process (GP) model has become one of the most popular methods for developing computationally efficient surrogate models in many engineering design applications, including simulation-based design optimization and uncertainty analysis. For high-dimensional problems with many observations, estimating the best parameters of a Gaussian Process model remains an essential yet challenging task because of its considerable computational cost. One of the most commonly used methods to estimate model parameters is Maximum Likelihood Estimation (MLE). A common bottleneck in MLE is computing the log determinant and inverse of a large positive definite matrix. In this paper, five commonly used gradient-based and non-gradient-based optimizers, Sequential Quadratic Programming (SQP), the Quasi-Newton method, the Interior Point method, the Trust Region method, and Pattern Line Search, are compared for likelihood function optimization in high-dimensional GP surrogate modeling problems. The comparison focuses on the accuracy of estimation, the efficiency of computation, and the robustness of each method for different types of kernel functions.
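The MLE bottleneck the comparison targets can be made concrete: a single Cholesky factorization yields both the log-determinant and the solve against the covariance matrix, avoiding an explicit inverse. The RBF covariance and data below are illustrative, not from the paper.

```python
import numpy as np

def neg_log_marginal_likelihood(K, y):
    """Negative log marginal likelihood of a zero-mean GP with covariance K.

    The log-determinant and the solve against K are the MLE bottleneck; a
    single Cholesky factorization handles both without forming K^{-1}.
    """
    n = len(y)
    L = np.linalg.cholesky(K)                            # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y via two triangular solves
    log_det = 2.0 * np.sum(np.log(np.diag(L)))           # log|K| from the Cholesky diagonal
    return 0.5 * (y @ alpha + log_det + n * np.log(2.0 * np.pi))

# Illustrative RBF covariance over random 1-D inputs, with jitter for stability
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 50)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-6 * np.eye(50)
y = rng.multivariate_normal(np.zeros(50), K)
nll = neg_log_marginal_likelihood(K, y)
```

Every optimizer in the comparison must evaluate this quantity (and often its gradient) repeatedly, which is why its O(n^3) cost dominates for large n.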


2021 ◽  
Author(s):  
Yanwen Xu ◽  
Pingfeng Wang

Abstract The Gaussian Process (GP) model has become one of the most popular methods and exhibits superior performance among surrogate models in many engineering design applications. However, the standard Gaussian process model cannot handle high-dimensional applications. The root of the problem lies in the GP model's similarity measurement, which relies on the Euclidean distance; this becomes uninformative in high-dimensional cases and causes accuracy and efficiency issues. Few studies have explored this issue. In this study, we therefore propose an enhanced squared exponential kernel using the Manhattan distance, which is more effective at preserving the meaningfulness of proximity measures and is preferable in the GP model for high-dimensional cases. The experiments show that the proposed approach achieves superior performance on high-dimensional problems. Based on the analysis and experimental results for the similarity metrics, this paper also provides a guide to choosing the similarity measures that yield the most accurate and efficient results for the Kriging model with respect to different sample sizes and dimension levels.
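One plausible reading of the proposed kernel replaces the squared Euclidean distance in the squared exponential form with the Manhattan (L1) distance. The exact functional form is not given in the abstract, so the variant and parameters below are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def se_manhattan_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential-style kernel with the Euclidean distance replaced
    by the Manhattan (L1) distance; one plausible variant of the paper's
    proposal (the exact form is not given in the abstract)."""
    d1 = cdist(X1, X2, metric="cityblock")       # pairwise L1 distances
    return variance * np.exp(-d1 / (2.0 * lengthscale ** 2))

# In high dimension, L2 distances concentrate around a common value, making
# the kernel nearly constant; L1 distances keep proximities more informative.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200, 100))           # 100-dimensional inputs
K = se_manhattan_kernel(X, X, lengthscale=3.0)
```

Note that exp(-c·L1) is a Laplacian-type kernel, so this variant remains a valid (positive semi-definite) covariance function.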


2014 ◽  
Vol 487 ◽  
pp. 247-254
Author(s):  
Jin Xin Shao ◽  
Zi Xue Qiu ◽  
Jiang Yuan

Material dispersion in structures leads to large errors in crack length evaluation; an assessment method using a Gaussian Process (GP) model is proposed to solve this problem. The fatigue crack was monitored with active Lamb wave monitoring technology, four damage indices were extracted from the measured sensor signals, and these indices were then input to the GP model to evaluate the crack length online. A fatigue test of a hole-edge crack was carried out on an LY12-CZ aluminum specimen, a material frequently used in aerospace structures. The results show that the method can efficiently decrease the crack length evaluation error caused by the material dispersion of the structure.


Author(s):  
Ravi Kumar Pandit ◽  
David Infield

Loss of wind turbine power production identified through performance assessment is a useful tool for effective condition monitoring of a wind turbine. Power curves describe the nonlinear relationship between power generation and hub height wind speed, and play a significant role in analyzing the performance of a turbine. Performance assessment using nonparametric models is gaining popularity. A Gaussian Process is a nonlinear, nonparametric probabilistic approach widely used for model fitting and forecasting applications due to its flexibility and mathematical simplicity; its applications extend to both classification and regression problems. Despite promising results, Gaussian Process applications in wind turbine condition monitoring remain limited. In this paper, a model based on a Gaussian Process is developed for assessing the performance of a turbine. A reference power curve is built from SCADA datasets of a healthy turbine using a Gaussian Process and then compared with the power curve of an unhealthy turbine. Yaw misalignment is a common wind turbine fault that causes underperformance; hence it is used as a case study to test and validate the effectiveness of the algorithm.
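A minimal sketch of the reference-curve approach, assuming a hypothetical sigmoid power curve and synthetic SCADA-style data rather than the authors' datasets:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def power_curve(v):
    """Hypothetical sigmoid power curve (kW), for illustration only."""
    return 2000.0 / (1.0 + np.exp(-(v - 9.0)))

# Synthetic SCADA-style data: hub-height wind speed (m/s) -> power (kW)
rng = np.random.default_rng(3)
v = rng.uniform(3.0, 14.0, 200)
p_healthy = power_curve(v) + rng.normal(0, 40, v.size)

# Reference GP power curve fitted on the healthy turbine
gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(),
                              normalize_y=True).fit(v[:, None], p_healthy)

# An "unhealthy" turbine (e.g. yaw misalignment) underperforms at every speed
p_unhealthy = 0.9 * power_curve(v) + rng.normal(0, 40, v.size)
mean_loss = float(np.mean(gp.predict(v[:, None]) - p_unhealthy))  # kW lost
```

A sustained positive gap between the reference prediction and the measured power is the condition-monitoring signal; the GP's predictive variance can additionally be used to judge whether the gap exceeds normal scatter.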


2021 ◽  
Vol 11 (24) ◽  
pp. 11865
Author(s):  
Eduardo Molina ◽  
Laszlo Horvath

Current pallet design methodology frequently underestimates the load capacity of the pallet by assuming the payload is uniformly distributed and flexible. By considering the effect of payload characteristics and their interactions during pallet design, the structure of pallets can be optimized and raw material consumption reduced. The objective of this study was to develop a full description of how such payload characteristics affect load bridging on unit loads of stacked corrugated boxes on warehouse racking support. To achieve this goal, the authors expanded on a previously developed finite element model of a simplified unit load segment and conducted a study to screen for the significant factors and interactions. Subsequently, a Gaussian process (GP) regression model was developed to efficiently and accurately replicate the simulation model. Using this GP model, a quantification of the effects and interactions of all the identified significant factors was provided. With this information, packaging designers and researchers can engineer unit loads that consider the effect of the relevant design variables and their impact on pallet performance. Such a model has not been previously developed and can potentially reduce packaging materials’ costs.
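A sketch of how a GP surrogate can both replicate an expensive simulation and flag insignificant factors: with an anisotropic (ARD) kernel, factors that barely influence the response are assigned large fitted length-scales. The payload factors and response function below are hypothetical, not the study's finite element model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical FE-model samples: 3 payload factors -> a pallet response
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(60, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.02, 60)  # factor 3 inactive

# Anisotropic (ARD) RBF: one length-scale per factor; after fitting, a large
# length-scale marks a factor the response is insensitive to
gp = GaussianProcessRegressor(kernel=RBF([1.0, 1.0, 1.0]),
                              alpha=1e-3, normalize_y=True).fit(X, y)
length_scales = gp.kernel_.length_scale
```

Once trained on a modest number of FE runs, the surrogate evaluates new factor combinations in microseconds, which is what makes the full effect-and-interaction quantification practical.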


2020 ◽  
Vol 10 (17) ◽  
pp. 6031
Author(s):  
Zhi Yu ◽  
Xiuzhi Shi ◽  
Jian Zhou ◽  
Rendong Huang ◽  
Yonggang Gou

A simple and accurate method for evaluating broken rock zone thickness (BRZT), which is usually used to describe the broken rock zone (BRZ), is valuable because it provides a reference for roadway stability evaluation and support design. To relate various geological variables to BRZT, the multiple linear regression (MLR), artificial neural network (ANN), Gaussian process (GP), and particle swarm optimization (PSO)-GP methods were utilized, and the corresponding intelligent models were developed based on a database collected from various mines in China. Four input variables, embedding depth (ED), drift span (DS), surrounding rock mass strength (RMS), and joint index (JI), were selected to train the models, BRZT was chosen as the output variable, and the k-fold cross-validation method was applied during training. After training, three validation metrics, variance accounted for (VAF), determination coefficient (R2), and root mean squared error (RMSE), were used to describe the predictive performance of the developed models. A ranking-based comparison of performance shows that the PSO-GP model provides the best predictive performance in estimating BRZT. In addition, the sensitivity of BRZT to the collected variables can be ranked as JI, ED, DS, and RMS, with JI the most sensitive factor.
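The three validation metrics are simple to state; the sketch below implements them directly, with illustrative (not the study's) BRZT values.

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance accounted for, in percent."""
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative measured vs. predicted BRZT values (m), not the study's data
y_true = np.array([1.2, 1.5, 0.9, 1.8, 1.1])
y_pred = np.array([1.3, 1.4, 1.0, 1.7, 1.2])
```

A perfect model gives VAF = 100, R2 = 1, and RMSE = 0, so the ranking method simply orders models by how close each metric comes to its ideal.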


2018 ◽  
Author(s):  
Caitlin C. Bannan ◽  
David Mobley ◽  
A. Geoff Skillman

<div>A variety of fields would benefit from accurate pK<sub>a</sub> predictions, especially drug design, due to the effect a change in ionization state can have on a molecule's physicochemical properties.</div><div>Participants in the recent SAMPL6 blind challenge were asked to submit predictions for microscopic and macroscopic pK<sub>a</sub>s of 24 drug-like small molecules.</div><div>We recently built a general model for predicting pK<sub>a</sub>s using a Gaussian process regression trained on physical and chemical features of each ionizable group.</div><div>Our pipeline takes a molecular graph and uses the OpenEye Toolkits to calculate features describing the removal of a proton.</div><div>These features are fed into a Scikit-learn Gaussian process to predict microscopic pK<sub>a</sub>s, which are then used to analytically determine macroscopic pK<sub>a</sub>s.</div><div>Our Gaussian process is trained on a set of 2,700 macroscopic pK<sub>a</sub>s from monoprotic and select diprotic molecules.</div><div>Here, we share our results for microscopic and macroscopic predictions in the SAMPL6 challenge.</div><div>Overall, we ranked in the middle of the pack compared to other participants, but our fairly good agreement with experiment is still promising considering that the challenge molecules are chemically diverse and often polyprotic while our training set is predominantly monoprotic.</div><div>Of particular importance to us when building this model was to include an uncertainty estimate, based on the chemistry of the molecule, that would reflect the likely accuracy of our prediction.</div><div>Our model reports large uncertainties for the molecules that appear to have chemistry outside our domain of applicability, along with good agreement in quantile-quantile plots, indicating it can predict its own accuracy.</div><div>The challenge highlighted a variety of means to improve our model, including adding more polyprotic molecules to our training set and more carefully considering which functional groups we do or do not identify as ionizable.</div>
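The microscopic-to-macroscopic step can be illustrated under the standard assumption that, for parallel deprotonation pathways, the first macroscopic Ka is the sum of the microscopic Kas; whether this matches the authors' exact analytic treatment is not stated in the abstract.

```python
import numpy as np

def macroscopic_pka1(micro_pkas):
    """First macroscopic pKa from the microscopic pKas of parallel
    deprotonation sites: the macroscopic Ka is the sum of the microscopic
    Kas (a standard relation; the authors' exact treatment may differ)."""
    kas = 10.0 ** (-np.asarray(micro_pkas, dtype=float))
    return float(-np.log10(kas.sum()))

# Illustrative: two equivalent sites, each with microscopic pKa 7.0
pka_macro = macroscopic_pka1([7.0, 7.0])     # 7.0 - log10(2) ≈ 6.70
```

Because each microscopic prediction carries a GP uncertainty, the uncertainty on the macroscopic value can in principle be propagated through this same relation.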


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4392
Author(s):  
Jia Zhou ◽  
Hany Abdel-Khalik ◽  
Paul Talbot ◽  
Cristian Rabiti

This manuscript develops a workflow, driven by data analytics algorithms, to support the optimization of the economic performance of an Integrated Energy System (IES). The goal is to determine the optimum mix of capacities from a set of different energy producers (e.g., nuclear, gas, wind, and solar). A stochastic optimizer based on Gaussian Process Modeling is employed, which requires numerous samples for its training. Each sample represents a time series describing the demand, load, or other operational and economic profiles for various types of energy producers. These samples are synthetically generated using a reduced order modeling algorithm that reads a limited set of historical data, such as demand and load data from past years. Numerous data analysis methods are employed to construct the reduced order models, including, for example, Auto Regressive Moving Average models, Fourier series decomposition, and a peak detection algorithm. All these algorithms are designed to detrend the data and extract features that can be employed to generate synthetic time histories preserving the statistical properties of the original limited historical data. The optimization cost function is based on an economic model that assesses the effective cost of energy using two figures of merit: the specific cash flow stream for each energy producer and the total Net Present Value. An initial guess for the optimal capacities is obtained using the screening curve method. The results of the Gaussian Process model-based optimization are assessed against an exhaustive Monte Carlo search, indicating that the optimization results are reasonable. The workflow has been implemented inside the Idaho National Laboratory's Risk Analysis and Virtual Environment (RAVEN) framework.
The main contribution of this study is to address several challenges in current methods for optimizing energy portfolios in IES: first, the feasibility of generating synthetic time series of periodic peak data; second, the computational burden of conventional stochastic optimization of the energy portfolio, associated with the need for repeated executions of system models; third, the inadequacy of previous studies in comparing the impact of the economic parameters. The proposed workflow can provide a scientifically defensible strategy to support decision-making in the electricity market and to help energy distributors develop a better understanding of the performance of integrated energy systems.
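A minimal sketch of the synthetic-history idea, assuming a single Fourier term as the detrended seasonal component and an illustrative ARMA(1,1) residual model (not the study's fitted parameters):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# Assumed seasonal component: a single daily Fourier term around a mean demand
t = np.arange(24 * 365)                                   # hourly, one year
seasonal = 50.0 + 10.0 * np.sin(2.0 * np.pi * t / 24.0)

# Illustrative ARMA(1,1) residual model (statsmodels sign convention: the
# zero-lag coefficient is included and AR coefficients are negated)
arma = ArmaProcess(ar=[1.0, -0.7], ma=[1.0, 0.3])
np.random.seed(4)
synthetic = seasonal + arma.generate_sample(nsample=t.size)
```

Each call with a fresh random state yields a new synthetic history with the same seasonal structure and autocorrelation, which is exactly what the GP-based optimizer consumes as training samples.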

