Uncertainty Quantification in Vehicle Content Optimization for General Motors

2020 ◽  
Vol 50 (4) ◽  
pp. 225-238
Author(s):  
Eunhye Song ◽  
Peiling Wu-Smith ◽  
Barry L. Nelson

A vehicle content portfolio refers to a complete set of combinations of vehicle features offered while satisfying certain restrictions for the vehicle model. Vehicle Content Optimization (VCO) is a simulation-based decision support system at General Motors (GM) that helps optimize a vehicle content portfolio to improve GM's business performance and customer satisfaction. VCO has been applied to most major vehicle models at GM. VCO consists of several steps that demand intensive computing power, thus requiring trade-offs between the estimation error of the simulated performance measures and the computation time. Given VCO's substantial influence on GM's content decisions, questions were raised regarding the business risk caused by uncertainty in the simulation results. This paper shows how we established an uncertainty quantification procedure for VCO that can be applied to any vehicle model at GM. With this capability, GM can not only quantify the overall uncertainty in its performance measure estimates but also identify the largest source of uncertainty and reduce it by allocating more targeted simulation effort. Moreover, we identified several opportunities to improve the efficiency of VCO by reducing its computational overhead, some of which were adopted in the development of the next generation of VCO.
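The idea of attributing overall uncertainty to its sources and then targeting simulation effort at the dominant one can be sketched as follows. The source names, variance figures, and proportional-allocation rule below are illustrative assumptions, not GM's actual procedure:

```python
# Hypothetical variance contributions (per-source estimator variance)
# from three simulation stages; values are illustrative only.
variance_sources = {
    "take-rate sampling": 4.0,
    "demand simulation": 1.5,
    "price response": 0.5,
}

def allocate_effort(sources, total_reps):
    """Allocate extra replications proportional to each source's
    share of the total variance, so the largest source gets the most."""
    total_var = sum(sources.values())
    return {name: round(total_reps * v / total_var)
            for name, v in sources.items()}

alloc = allocate_effort(variance_sources, 600)
```

Under this rule the "take-rate sampling" stage, contributing two thirds of the variance, receives two thirds of the extra replications.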

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
K. Hemming ◽  
M. Taljaard

Abstract Clinical prediction models are developed with the ultimate aim of improving patient outcomes, and are often turned into prediction rules (e.g., classifying people as low/high risk using cut-points of predicted risk) at some point during the development stage. Prediction rules often have reasonable ability to either rule in or rule out disease (or another event), but rarely both. When a prediction model is intended to be used as a prediction rule, conveying its performance using the C-statistic, the most commonly reported model performance measure, does not provide information on the magnitude of the trade-offs. Yet it is important that these trade-offs are clear, for example, to health professionals who might implement the prediction rule. This can be viewed as a form of knowledge translation. When communicating information on trade-offs to patients and the public, a large body of evidence indicates that natural frequencies are most easily understood, and one particularly well-received way of depicting natural frequency information is to use population diagrams. There is also evidence that health professionals benefit from information presented in this way. Here we illustrate how the implications of the trade-offs associated with prediction rules can be more readily appreciated when using natural frequencies. We recommend that the reporting of the performance of prediction rules should (1) present information using natural frequencies across a range of cut-points to inform the choice of plausible cut-points and (2), when the prediction rule is recommended for clinical use at a particular cut-point, communicate the implications of the trade-offs using population diagrams. Using two existing prediction rules, we illustrate how these methods offer a means of effectively and transparently communicating essential information about the trade-offs associated with prediction rules.
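As a rough illustration of the natural-frequency presentation the authors recommend, the sketch below converts a prediction rule's sensitivity and specificity at a single cut-point into counts per 1,000 people. The function name and the example figures are illustrative, not taken from the paper:

```python
def natural_frequencies(sens, spec, prevalence, n=1000):
    """Translate a rule's sensitivity/specificity at one cut-point
    into natural frequencies: TP, FN, FP, TN counts per n people."""
    diseased = prevalence * n
    healthy = n - diseased
    tp = sens * diseased          # diseased and flagged high risk
    fn = diseased - tp            # diseased but missed
    fp = (1 - spec) * healthy     # healthy but flagged high risk
    tn = healthy - fp             # healthy and correctly cleared
    return {k: round(v) for k, v in
            zip(("TP", "FN", "FP", "TN"), (tp, fn, fp, tn))}

# Example: sensitivity 0.9, specificity 0.8, prevalence 10%
counts = natural_frequencies(0.9, 0.8, 0.1)
```

Tabulating these counts across several cut-points makes the rule-in/rule-out trade-off visible in a way a single C-statistic cannot.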


Author(s):  
Tianxiang Liu ◽  
Li Mao ◽  
Mats-Erik Pistol ◽  
Craig Pryor

Abstract Calculating the electronic structure of systems involving very different length scales presents a challenge. Empirical atomistic descriptions such as pseudopotentials or tight-binding models allow one to calculate the effects of atomic placements, but the computational burden increases rapidly with the size of the system, limiting the ability to treat weakly bound extended electronic states. Here we propose a new method to connect atomistic and quasi-continuous models, thus speeding up tight-binding calculations for large systems. We divide a structure into blocks consisting of several unit cells, which we diagonalize individually. We then construct a tight-binding Hamiltonian for the full structure using a truncated basis for the blocks, discarding states with large energy eigenvalues and retaining states with energies close to the band edges. A numerical test using a GaAs/AlAs quantum well shows the computation time can be decreased to less than 5% of the full calculation with errors of less than 1%. We present data on the trade-off between computation time and accuracy. We also tested calculations of the density of states for a GaAs/AlAs quantum well and find a tenfold speedup with little loss of accuracy.
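A minimal sketch of the block-truncation idea, using a toy 1D nearest-neighbor chain in place of a realistic atomistic Hamiltonian. The matrix, block size, and number of retained states per block are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def truncated_basis_hamiltonian(H, block_size, keep):
    """Diagonalize each diagonal block of H, keep only its `keep`
    lowest-energy eigenstates, and project H onto that reduced basis."""
    n_blocks = H.shape[0] // block_size
    basis = np.zeros((H.shape[0], n_blocks * keep))
    for b in range(n_blocks):
        sl = slice(b * block_size, (b + 1) * block_size)
        _, v = np.linalg.eigh(H[sl, sl])          # ascending eigenvalues
        basis[sl, b * keep:(b + 1) * keep] = v[:, :keep]
    return basis.T @ H @ basis                     # reduced Hamiltonian

# Toy chain: 12 sites, nearest-neighbor hopping t = -1, zero on-site energy.
N, t = 12, -1.0
H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
H_small = truncated_basis_hamiltonian(H, block_size=4, keep=2)  # 12x12 -> 6x6
```

By the Rayleigh–Ritz variational principle, the reduced Hamiltonian's eigenvalues bound the true low-energy spectrum from above; the trade-off between `keep` and accuracy mirrors the one reported in the paper.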


2014 ◽  
Vol 13 (6) ◽  
pp. 1261
Author(s):  
Francois Van Dyk ◽  
Gary Van Vuuren ◽  
Andre Heymans

The Sharpe ratio is widely used as a performance measure for traditional (i.e., long-only) investment funds, but because it is based on mean-variance theory, it only considers the first two moments of a return distribution. It is, therefore, not suited for evaluating funds characterised by complex, asymmetric, highly skewed return distributions such as hedge funds. It is also susceptible to manipulation and estimation error. These drawbacks demonstrate the need for new and additional fund performance metrics. The monthly returns of 184 international long/short (equity) hedge funds from four geographical investment mandates were examined over an 11-year period. This study contributes to recent research on alternative performance measures to the Sharpe ratio and specifically assesses whether a scaled version of the classic Sharpe ratio should augment the use of the Sharpe ratio when evaluating hedge fund risk and in the investment decision-making process. A scaled Treynor ratio is also compared to the traditional Treynor ratio. The classic and scaled versions of the Sharpe and Treynor ratios were estimated on a 36-month rolling basis to ascertain whether the scaled ratios do indeed provide useful additional information to investors beyond that provided by the classic, non-scaled ratios.
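For reference, the classic Sharpe ratio and the study's 36-month rolling evaluation can be sketched in pure Python on monthly returns. The scaling adjustments the paper assesses are not reproduced here; this is only the baseline measure:

```python
def sharpe(returns, rf=0.0):
    """Classic Sharpe ratio: mean excess return over its sample
    standard deviation (first two moments only)."""
    excess = [r - rf for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((x - mean) ** 2 for x in excess) / (len(excess) - 1)
    return mean / var ** 0.5

def rolling_sharpe(returns, window=36, rf=0.0):
    """36-month rolling Sharpe, one value per window position."""
    return [sharpe(returns[i:i + window], rf)
            for i in range(len(returns) - window + 1)]
```

Because only the mean and variance enter, two funds with very different skewness and tail risk can share the same Sharpe ratio, which is exactly the limitation the study addresses.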


2019 ◽  
Vol 10 (1) ◽  
pp. 118 ◽  
Author(s):  
In-Ho Song ◽  
Jun-Woo Kim ◽  
Jeong-Seo Koo ◽  
Nam-Hyoung Lim

As the operating speed of trains increases, there is growing interest in reducing the damage caused by derailment and collision accidents. Because a collision with surrounding structures after a derailment causes severe damage, protective facilities such as barrier walls or derailment containment provisions (DCPs) are installed to mitigate secondary collisions. However, the criteria for designing a protective facility, such as its location and design loads, are unclear because post-derailment behavior is difficult to predict. In this paper, we derive a simplified frame model that can predict post-derailment behavior in the design phase of protective facilities. The proposed vehicle model can be simplified for various frame configurations to reduce computation time. Actual derailment tests were also conducted on a real test track to verify the reliability of the model. The simulation results of the proposed model showed reasonable agreement with the test results.


2014 ◽  
Vol 23 (2) ◽  
pp. 155-170
Author(s):  
Zedjiga Yacine ◽  
Dalil Ichalal ◽  
Naima Ait Oufroukh ◽  
Said Mammar ◽  
Said Djennoune

Abstract The present article deals with observer design for nonlinear vehicle lateral dynamics. The contributions of the article are that no force model is required and that the longitudinal velocity is treated as time varying, which is more realistic than assuming it constant. The vehicle model is represented by an exact Takagi–Sugeno (TS) model via the sector nonlinearity transformation. A proportional multiple integral (PMI) observer based on the TS model is designed to simultaneously estimate the state vector and the unknown inputs (lateral forces and road curvature). The convergence conditions of the estimation error are expressed as linear matrix inequalities (LMIs) using Lyapunov theory, which guarantees a bounded error. Simulations are carried out to compare the conventional PI observer, the enhanced PI observer, and the PMI observer. Finally, experimental results illustrate the performance of the proposed PMI observer.


2013 ◽  
Vol 26 (22) ◽  
pp. 9194-9205 ◽  
Author(s):  
Marc H. Taylor ◽  
Martin Losch ◽  
Manfred Wenzel ◽  
Jens Schröter

Abstract Empirical orthogonal function (EOF) analysis is commonly used in the climate sciences and elsewhere to describe, reconstruct, and predict high-dimensional data fields. When data contain a high percentage of missing values (i.e., are gappy), alternate approaches must be used in order to correctly derive EOFs. The aims of this paper are to assess the accuracy of several EOF approaches in the reconstruction and prediction of gappy data fields, using the Galapagos Archipelago as a case study. EOF approaches included least squares estimation via a covariance matrix decomposition [least squares EOF (LSEOF)], data interpolating empirical orthogonal functions (DINEOF), and a novel approach called recursively subtracted empirical orthogonal functions (RSEOF). Model-derived data of historical surface chlorophyll-a concentrations and sea surface temperature, combined with a mask of gaps from historical remote sensing estimates, allowed for the creation of true and observed fields by which to gauge the performance of the EOF approaches. Only DINEOF and RSEOF were found to be appropriate for gappy data reconstruction and prediction. DINEOF proved to be the superior approach in terms of accuracy, especially for noisy data with a high estimation error, although RSEOF may be preferred for larger data fields because of its relatively faster computation time.
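The iterative, truncated-SVD gap filling at the heart of DINEOF can be sketched as follows. This is a bare-bones illustration with a fixed mode count and iteration budget; it omits the cross-validation DINEOF uses to choose the number of retained modes:

```python
import numpy as np

def dineof_fill(X, n_modes=1, n_iter=200):
    """DINEOF-style gap filling: initialize gaps (NaNs) with the field
    mean, then repeatedly reconstruct them from a truncated SVD of the
    filled field until the gap values stabilize."""
    gaps = np.isnan(X)
    Xf = np.where(gaps, np.nanmean(X), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        recon = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        Xf[gaps] = recon[gaps]        # observed entries stay untouched
    return Xf

# Toy test: a rank-1 "true" field with one value knocked out.
true_field = np.outer([1.0, 2.0, 3.0, 4.0], [1.0, 0.5, 2.0])
observed = true_field.copy()
observed[1, 2] = np.nan
filled = dineof_fill(observed, n_modes=1)
```

On this rank-1 toy field the iteration recovers the missing value almost exactly; real gappy satellite fields need more modes and convergence checks.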


2013 ◽  
Vol 807-809 ◽  
pp. 991-997
Author(s):  
Hai Yan Zhang ◽  
Yun Fei Shao ◽  
Bing Jie Wang

This paper compares a firm's performance under proactive and reactive environmental technology innovation strategies using a formal model. By comparing the payoffs of the two strategies, it is proved that a firm obtains a better payoff when its manager interprets environmental issues as opportunities. In doing so, the firm benefits from better communication with the public, which offsets the associated increase in costs. In conclusion, shareholders should motivate managers to interpret environmental issues as opportunities in order to achieve better payoffs.


2013 ◽  
Vol 336-338 ◽  
pp. 361-366
Author(s):  
Chun Xiao Jian ◽  
Wei Yang ◽  
Pei Guo Liu

This paper addresses sensor control for exploiting the multi-object filtering capability of a sensor system. As in previous work, the proposed control algorithm is formulated in the framework of partially observed Markov decision processes, but it adopts a new reward function (RF). The multi-object miss-distance jointly captures detection and estimation error in a mathematically consistent manner and is commonly employed as the final performance measure for multi-object filtering; the predicted multi-object miss-distance is therefore a natural choice of RF. However, the predicted multi-object miss-distance generally has no analytical expression, so its computation is discussed in detail. Future work will concentrate on providing a complete comparison of different sensor control schemes.
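The multi-object miss-distance referred to here is typically the OSPA metric. A minimal sketch for scalar positions follows, using brute-force assignment and illustrative values for the cut-off c and order p (real implementations use an optimal-assignment solver such as the Hungarian algorithm):

```python
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """OSPA miss-distance between point sets X and Y: jointly penalizes
    localization error (capped at c) and cardinality mismatch."""
    if not X and not Y:
        return 0.0
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    n, m = len(X), len(Y)
    # Brute-force best assignment of the n points of X to points of Y.
    best = min(
        sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
        for perm in permutations(Y, n)
    )
    # Unassigned points of Y each incur the maximum penalty c.
    return ((best + c ** p * (m - n)) / m) ** (1 / p)
```

A missed or phantom object costs the full cut-off c, which is what lets a single number serve as the filtering performance measure the abstract describes.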


Symmetry ◽  
2020 ◽  
Vol 12 (10) ◽  
pp. 1710
Author(s):  
Mojisola Grace Asogbon ◽  
Oluwarotimi Williams Samuel ◽  
Yanbing Jiang ◽  
Lin Wang ◽  
Yanjuan Geng ◽  
...  

The constantly rising number of stroke survivors and limb amputees has motivated the development of intelligent prosthetic/rehabilitation devices for arm function restoration. Such a device often integrates a pattern recognition (PR) algorithm that decodes amputees' limb movement intent from electromyogram (EMG) signals, characterized by neural information and symmetric distribution. However, the control performance of the prostheses largely depends on the interrelations among multiple dynamic factors, namely the feature set, windowing parameters, and signal conditioning, which have rarely been jointly investigated to date. This study systematically investigated the interaction effects of these dynamic factors on the performance of an EMG-PR system, with the goal of constructing optimal parameters for accurate and robust movement intent decoding in the context of prosthetic control. In this regard, the interaction effects of various features across window lengths (50 ms~300 ms), window increments (50 ms~125 ms), and sensor channels (2 ch~6 ch), along with robustness to external interference, were examined using EMG signals obtained from twelve subjects through a symmetrical movement elicitation protocol. Compared to single features, multiple features consistently achieved a minimum decoding error below 10% at the optimal windowing parameters of 250 ms/100 ms. The multiple features also showed high robustness to additive noise, with obvious trade-offs between accuracy and computation time. Consequently, our findings may provide proper insight for appropriate parameter selection in the context of a robust PR-based control strategy for intelligent rehabilitation devices.
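The windowing parameters above can be made concrete with a small sliding-window sketch. The mean absolute value (MAV) feature and the 1 kHz sampling rate are common choices in the EMG literature but are assumptions here, not the study's exact pipeline:

```python
def mean_absolute_value(window):
    """MAV: a standard time-domain EMG feature."""
    return sum(abs(x) for x in window) / len(window)

def sliding_windows(signal, fs, length_ms=250, increment_ms=100):
    """Segment one EMG channel using the study's reported optimal
    windowing (250 ms length / 100 ms increment); one feature per window."""
    length = int(fs * length_ms / 1000)
    step = int(fs * increment_ms / 1000)
    return [mean_absolute_value(signal[i:i + length])
            for i in range(0, len(signal) - length + 1, step)]

# One second of (dummy) signal sampled at an assumed 1 kHz.
features = sliding_windows([1.0] * 1000, fs=1000)
```

Longer windows give more stable features but add decision latency; the increment sets how often the controller updates, which is the accuracy/computation trade-off the study quantifies.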


2020 ◽  
Vol 10 (13) ◽  
pp. 4630
Author(s):  
Kijung Park ◽  
Gayeon Kim ◽  
Heena No ◽  
Hyun Woo Jeon ◽  
Gül E. Okudan Kremer

Fused filament fabrication (FFF) has been proven to be an effective additive manufacturing technique for carbon fiber reinforced polyether–ether–ketone (CFR-PEEK) due to its practicality. However, the relationships between the process parameters and their trade-offs in manufacturing performance have not been extensively studied for CFR-PEEK, although they are essential to identify the optimal parameter settings. This study therefore investigates the impact of critical FFF parameters (i.e., layer thickness, build orientation, and printing speed) on the manufacturing performance (i.e., printing time, dimensional accuracy, and material cost) of CFR-PEEK outputs. A full factorial design of experiments was performed for each of the three sample designs to identify the optimal parameter combinations for each performance measure. In addition, multiple response optimization was used to derive optimal parameter settings for the overall performance. The results show that the optimal parameter settings depend on the performance measures regardless of the designs, and that the layer thickness plays a critical role in the performance trade-offs. In addition, lower layer thickness, horizontal orientation, and higher speed form the optimal settings to maximize the overall performance. The findings from this study indicate that FFF parameter settings for CFR-PEEK should be identified through multi-objective decision making that accounts for conflicts between the operational objectives.
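A full factorial design simply enumerates every combination of the factor levels, one experimental run per combination. The sketch below uses illustrative levels for the three FFF parameters; the paper's actual settings may differ:

```python
from itertools import product

# Assumed illustrative levels for the three FFF parameters.
factors = {
    "layer_thickness_mm": [0.1, 0.2, 0.3],
    "build_orientation": ["horizontal", "vertical"],
    "printing_speed_mm_s": [20, 40],
}

def full_factorial(factors):
    """Enumerate every combination of parameter levels as a run spec."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*factors.values())]

runs = full_factorial(factors)  # 3 x 2 x 2 = 12 runs
```

Each run would then be printed and measured on all three responses (printing time, dimensional accuracy, material cost), after which multiple response optimization picks the settings that balance them.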

