Quantifying the Model Risk Inherent in the Calibration and Recalibration of Option Pricing Models

Risks ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 13
Author(s):  
Yu Feng ◽  
Ralph Rudd ◽  
Christopher Baker ◽  
Qaphela Mashalaba ◽  
Melusi Mavuso ◽  
...  

We focus on two particular aspects of model risk: the inability of a chosen model to fit observed market prices at a given point in time (calibration error) and the model risk due to the recalibration of model parameters (in contradiction to the model assumptions). In this context, we use relative entropy as a pre-metric in order to quantify these two sources of model risk in a common framework, and consider the trade-offs between them when choosing a model and the frequency with which to recalibrate to the market. We illustrate this approach by applying it to the seminal Black/Scholes model and its extension to stochastic volatility, while using option data for Apple (AAPL) and Google (GOOG). We find that recalibrating a model more frequently simply shifts model risk from one type to another, without any substantial reduction of aggregate model risk. Furthermore, moving to a more complicated stochastic model is seen to be counterproductive if one requires a high degree of robustness, for example, as quantified by a 99% quantile of aggregate model risk.
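As a rough illustration of the entropy-based quantification described above, the sketch below computes the discrete relative entropy between the risk-neutral densities implied by two Black/Scholes calibrations (volatilities of 0.20 and 0.25). The strike grid, the lognormal densities and the reading of the divergence as recalibration risk are simplifying assumptions for illustration; the paper's framework is formulated on pricing measures more generally.

```python
import numpy as np

def relative_entropy(p, q):
    """Discrete relative entropy (Kullback-Leibler divergence) D(p || q)
    between two probability vectors on a common grid. Assumes q > 0
    wherever p > 0."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def bs_density(k, s0=100.0, sigma=0.2, t=0.5, r=0.0):
    """Black/Scholes (lognormal) risk-neutral density of the terminal price."""
    m = np.log(s0) + (r - 0.5 * sigma**2) * t
    v = sigma * np.sqrt(t)
    return np.exp(-(np.log(k) - m)**2 / (2 * v**2)) / (k * v * np.sqrt(2 * np.pi))

# Divergence between the densities implied by two calibrations
# (sigma = 0.20 before recalibration, 0.25 after), used here as a crude
# proxy for the model risk introduced by the recalibration step.
strikes = np.linspace(50.0, 200.0, 300)
print(relative_entropy(bs_density(strikes, sigma=0.20),
                       bs_density(strikes, sigma=0.25)))
```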

Author(s):  
G. T. Alckmin ◽  
L. Kooistra ◽  
A. Lucieer ◽  
R. Rawnsley

Abstract. Vegetation indices (VIs) have been extensively employed as features for dry matter (DM) estimation. More than a hundred vegetation indices have been proposed during the past five decades, so the selection of the optimal index or subset of indices is neither trivial nor obvious. This study, performed on a year-round observation of perennial ryegrass (n = 900), indicates that for this response variable (i.e. kg.DM.ha−1), more than 80% of indices present a high degree of collinearity (correlation > |0.8|). Additionally, the absence of an established workflow for feature selection and modelling is a handicap when trying to establish meaningful relations between spectral data and biophysical/biochemical features. Within this case study, an unsupervised and supervised filtering process is applied to an initial dataset of 97 VIs. This research analyses the effects of the proposed filtering and feature selection process on the overall stability of the final models. Consequently, this analysis provides a straightforward framework to filter and select VIs. This approach was able to provide a reduced feature set for a robust model and to quantify trade-offs between optimal models (i.e. lowest root mean square error, RMSE = 412.27 kg.DM.ha−1) and tolerable models (with a smaller number of features: 4 VIs within 10% of the lowest RMSE).
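A minimal sketch of the unsupervised (collinearity) filtering step is given below: a greedy Pearson-correlation filter that keeps a VI only if its absolute correlation with every already-kept VI stays at or below 0.8. The synthetic data, column names and greedy ordering are illustrative assumptions, not the study's actual indices or workflow.

```python
import numpy as np
import pandas as pd

def filter_collinear(vi_table: pd.DataFrame, threshold: float = 0.8) -> list[str]:
    """Greedily drop vegetation indices whose absolute pairwise Pearson
    correlation with an already-kept index exceeds the threshold."""
    corr = vi_table.corr().abs()
    kept: list[str] = []
    for col in corr.columns:
        if all(corr.loc[col, k] <= threshold for k in kept):
            kept.append(col)
    return kept

# Hypothetical usage: `vis` would hold the 97 candidate VIs as columns,
# one row per plot observation (n = 900 in the study).
rng = np.random.default_rng(0)
base = rng.normal(size=900)
vis = pd.DataFrame({
    "NDVI": base + 0.05 * rng.normal(size=900),
    "EVI": base + 0.05 * rng.normal(size=900),   # nearly collinear with NDVI
    "PRI": rng.normal(size=900),                 # independent
})
print(filter_collinear(vis, threshold=0.8))      # e.g. ['NDVI', 'PRI']
```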


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
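A minimal sketch of the fuzzification idea follows, assuming binary hazard footprints and a square neighbourhood whose half-width plays the role of the length-scale parameter; the paper's exact fuzzy-set membership functions and performance metrics may differ.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def fuzzy_hits(simulated: np.ndarray, observed: np.ndarray, length_scale: int):
    """Fuzzy comparison of two binary hazard footprints.

    A simulated-positive cell counts as a hit if an observed-positive cell
    exists within a square window of half-width `length_scale` (a crude
    distance-based membership function)."""
    window = 2 * length_scale + 1
    observed_dilated = maximum_filter(observed.astype(int), size=window)
    hits = np.logical_and(simulated, observed_dilated).sum()
    false_pos = np.logical_and(simulated, ~observed_dilated.astype(bool)).sum()
    return hits, false_pos

# Toy one-cell offset between simulated and observed footprints: a strict
# pixel-pair comparison scores zero hits, while length_scale = 1 recovers
# the hit at the cost of tolerating a small location error.
sim = np.zeros((5, 5), dtype=bool); sim[2, 2] = True
obs = np.zeros((5, 5), dtype=bool); obs[2, 3] = True
print(fuzzy_hits(sim, obs, length_scale=0))  # (0, 1)
print(fuzzy_hits(sim, obs, length_scale=1))  # (1, 0)
```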


2013 ◽  
Vol 860-863 ◽  
pp. 1725-1728
Author(s):  
Fan Biao Bao

This paper focuses on a car's dynamic performance characteristics. MATLAB offers many advantages for this task: it is intuitive, has clear physical meaning, requires little programming, and provides strong data visualization. Using a practical vehicle model as an example, and in light of its specific model parameters, the paper analyzes the driving-force and driving-resistance balance, the power balance, and the power factor with MATLAB's data analysis and graphics tools, and draws the corresponding diagrams. Based on these diagrams, a calculation and research method for graphing the car's overall dynamic performance is provided, offering new ideas for vehicle parameter selection and design that have practical value.
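A short Python sketch of the kind of driving-resistance and power-balance calculation the paper carries out in MATLAB is given below; all vehicle parameters are illustrative assumptions, not the paper's model data.

```python
import numpy as np

# Illustrative vehicle parameters (not from the paper)
m = 1500.0        # vehicle mass, kg
g = 9.81          # gravity, m/s^2
f = 0.015         # rolling-resistance coefficient
cd_a = 0.7        # drag coefficient times frontal area, m^2
rho = 1.2258      # air density, kg/m^3

v = np.linspace(1.0, 50.0, 200)          # speed, m/s
f_roll = m * g * f * np.ones_like(v)     # rolling resistance, N
f_air = 0.5 * rho * cd_a * v**2          # aerodynamic drag, N
resistance = f_roll + f_air              # total driving resistance, N
p_required = resistance * v / 1000.0     # power demand, kW

# Plotting these curves against the engine's tractive-force and power
# curves gives the driving-force and power-balance diagrams of the kind
# the paper produces in MATLAB.
print(f"Drag equals rolling resistance near v = "
      f"{v[np.argmin(np.abs(f_air - f_roll))]:.1f} m/s")
```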


Author(s):  
Tomáš Gedeon ◽  
Lisa Davis ◽  
Katelyn Weber ◽  
Jennifer Thorenson

In this paper, we study the limitations imposed on the transcription process by the presence of short ubiquitous pauses and crowding. These effects are especially pronounced in highly transcribed genes such as ribosomal genes (rrn) in fast growing bacteria. Our model indicates that the quantity and duration of pauses reported for protein-coding genes is incompatible with the average elongation rate observed in rrn genes. When maximal elongation rate is high, pause-induced traffic jams occur, increasing promoter occlusion, thereby lowering the initiation rate. This lowers average transcription rate and increases average transcription time. Increasing maximal elongation rate in the model is insufficient to match the experimentally observed average elongation rate in rrn genes. This suggests that there may be rrn-specific modifications to RNAP, which then experience fewer pauses, or pauses of shorter duration than those in protein-coding genes. We identify model parameter triples (maximal elongation rate, mean pause duration time, number of pauses) which are compatible with experimentally observed elongation rates. Average transcription time and average transcription rate are the model outputs investigated as proxies for cell fitness. These fitness functions are optimized for different parameter choices, opening up a possibility of differential control of these aspects of the elongation process, with potential evolutionary consequences. As an example, a gene’s average transcription time may be crucial to fitness when the surrounding medium is prone to abrupt changes. This paper demonstrates that a functional relationship among the model parameters can be estimated using a standard statistical analysis, and this functional relationship describes the various trade-offs that must be made in order for the gene to control the elongation process and achieve a desired average transcription time. It also demonstrates the robustness of the system when a range of maximal elongation rates can be balanced with transcriptional pause data in order to maintain a desired fitness.
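The incompatibility argument can be illustrated with a back-of-the-envelope calculation: even ignoring traffic jams and promoter occlusion (which the full model includes and which only worsen matters), pause statistics in the range reported for protein-coding genes already pull the average elongation rate well below the pause-free maximum. The numbers below are illustrative assumptions, not the paper's fitted parameters.

```python
def average_elongation_rate(gene_length_nt: float,
                            max_rate_nt_per_s: float,
                            n_pauses: float,
                            mean_pause_s: float) -> float:
    """Crude single-polymerase estimate (ignores congestion effects):
    total time is the pause-free elongation time plus the expected time
    spent paused."""
    total_time = gene_length_nt / max_rate_nt_per_s + n_pauses * mean_pause_s
    return gene_length_nt / total_time

# Illustrative numbers only: a 5 kb operon, a 90 nt/s maximal elongation
# rate, and ten pauses of 3 s mean duration.
print(average_elongation_rate(5000, 90, n_pauses=10, mean_pause_s=3.0))
# ≈ 58 nt/s, noticeably below the pause-free maximum of 90 nt/s, showing
# how pauses alone depress the average rate before congestion is considered.
```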


2019 ◽  
pp. 027836491985944 ◽  
Author(s):  
David Surovik ◽  
Kun Wang ◽  
Massimo Vespignani ◽  
Jonathan Bruce ◽  
Kostas E Bekris

Tensegrity robots, which are prototypical examples of hybrid soft–rigid robots, exhibit dynamical properties that provide ruggedness and adaptability. They also bring about, however, major challenges for locomotion control. Owing to high dimensionality and the complex evolution of contact states, data-driven approaches are appropriate for producing viable feedback policies for tensegrities. Guided policy search (GPS), a sample-efficient hybrid framework for optimization and reinforcement learning, has previously been applied to generate periodic, axis-constrained locomotion by an icosahedral tensegrity on flat ground. Varying environments and tasks, however, create a need for more adaptive and general locomotion control that actively utilizes an expanded space of robot states. This implies significantly higher needs in terms of sample data and setup effort. This work mitigates such requirements by proposing a new GPS-based reinforcement learning pipeline, which exploits the vehicle’s high degree of symmetry and appropriately learns contextual behaviors that are sustainable without periodicity. Newly achieved capabilities include axially unconstrained rolling, rough terrain traversal, and rough incline ascent. These tasks are evaluated for a small variety of key model parameters in simulation and tested on the NASA hardware prototype, SUPERball. Results confirm the utility of symmetry exploitation and the adaptability of the vehicle. They also shed light on numerous strengths and limitations of the GPS framework for policy design and transfer to real hybrid soft–rigid robots.
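A minimal sketch of what symmetry exploitation can look like in a locomotion policy is given below: the state is permuted into a canonical frame, a single policy trained only in that frame is queried, and its action is permuted back to the physical actuators. The data structures, matching criterion and demo values are illustrative assumptions, not the authors' GPS pipeline.

```python
import numpy as np

def act_with_symmetry(state, base_policy, symmetries, canonical_ref):
    """Apply a policy trained in one canonical frame across symmetric frames.

    `symmetries` is a list of (state_perm, action_perm) index arrays, one per
    rotation of the robot frame, defined so that canonical = physical[perm];
    `canonical_ref` is a nominal state in the training frame."""
    state = np.asarray(state)
    state_perm, action_perm = min(
        symmetries,
        key=lambda s: np.linalg.norm(state[s[0]] - canonical_ref))
    action = base_policy(state[state_perm])          # action in canonical order
    return np.asarray(action)[np.argsort(action_perm)]  # back to physical order

# Tiny demo with a 3-component state, an identity/reversal symmetry pair,
# and a dummy linear "policy" (all purely illustrative).
symmetries = [(np.array([0, 1, 2]), np.array([0, 1])),
              (np.array([2, 1, 0]), np.array([1, 0]))]
policy = lambda s: np.array([s[0], -s[0]])
print(act_with_symmetry([0.2, 0.0, 0.9], policy, symmetries,
                        canonical_ref=np.array([1.0, 0.0, 0.0])))
```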


Author(s):  
Amitabh Chandra ◽  
Craig Garthwaite

In this article, we develop an economic framework for Medicare reform that highlights trade-offs that reform proposals should grapple with, but often ignore. Central to our argument is a tension in administratively set prices, which may improve short-term efficiency but do so at the expense of dynamic efficiency (slowing innovations in new treatments). The smaller the Medicare program is relative to the commercial market, the less important this is; but in a world where there are no market prices or the private sector is very small, the task of setting prices that are dynamically correct becomes more complex. Reforming Medicare should focus on greater incentives to increase competition between Medicare Advantage plans, which necessitates a role for government in ensuring competition; premium support; less use of regulated prices; and less appetite for countless “pay for performance” schemes. We apply this framework to evaluate Medicare for All proposals.


1997 ◽  
Vol 20 (2) ◽  
pp. 279-303 ◽  
Author(s):  
Réjean Plamondon ◽  
Adel M. Alimi

This target article presents a critical survey of the scientific literature dealing with the speed/accuracy trade-offs in rapid-aimed movements. It highlights the numerous mathematical and theoretical interpretations that have been proposed in recent decades. Although the variety of points of view reflects the richness of the field and the high degree of interest that such basic phenomena attract in the understanding of human movements, it calls into question the ability of many models to explain the basic observations consistently reported in the field. This target article summarizes the kinematic theory of rapid human movements, proposed recently by R. Plamondon (1993b; 1993c; 1995a; 1995b), and analyzes its predictions in the context of speed/accuracy trade-offs. Data from human movement literature are reanalyzed and reinterpreted in the context of the new theory. It is shown that the various aspects of speed/accuracy trade-offs can be taken into account by considering the asymptotic behavior of a large number of coupled linear systems, from which a delta-lognormal law can be derived to describe the velocity profile of an end-effector driven by a neuromuscular synergy. This law not only describes velocity profiles almost perfectly, it also predicts the kinematic properties of simple rapid movements and provides a consistent framework for the analysis of different types of speed/accuracy trade-offs using a quadratic (or power) law that emerges from the model.
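The delta-lognormal law mentioned above expresses the speed profile as the difference of two lognormal impulse responses, one for the agonist and one for the antagonist component of the neuromuscular synergy. The sketch below evaluates such a profile; the parameter values are illustrative assumptions, not fits to any data discussed in the article.

```python
import numpy as np

def lognormal_impulse(t, t0, mu, sigma):
    """Lognormal time function Λ(t; t0, μ, σ), zero for t <= t0."""
    out = np.zeros_like(t)
    mask = t > t0
    dt = t[mask] - t0
    out[mask] = np.exp(-(np.log(dt) - mu)**2 / (2 * sigma**2)) \
        / (dt * sigma * np.sqrt(2 * np.pi))
    return out

def delta_lognormal_speed(t, d1, d2, t0, mu1, sigma1, mu2, sigma2):
    """Delta-lognormal speed profile |v(t)| = D1 Λ1(t) − D2 Λ2(t)."""
    return (d1 * lognormal_impulse(t, t0, mu1, sigma1)
            - d2 * lognormal_impulse(t, t0, mu2, sigma2))

# Illustrative parameter values only (amplitudes in arbitrary distance
# units, times in seconds).
t = np.linspace(0.0, 1.5, 500)
v = delta_lognormal_speed(t, d1=1.0, d2=0.15, t0=0.05,
                          mu1=-1.6, sigma1=0.35, mu2=-1.2, sigma2=0.35)
print(f"peak speed ≈ {v.max():.2f} at t ≈ {t[np.argmax(v)]:.2f} s")
```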


1988 ◽  
Vol 5 (1) ◽  
pp. 51-55
Author(s):  
Theodore E. Howard ◽  
John T. Walkowiak

Abstract Existing Christmas tree production cost and return models do not link the number and average price of merchantable trees to management practices. This model simulates number and price as functions of specific management practices accomplished within a rotation. The model allows evaluation of the trade-offs between additional management costs and resulting revenues. Using average production costs, investments under typical and intensive management scenarios in Northern New England were simulated. Real rates of return from 6 to 18% were obtained depending on the management regime and market prices. North. J. Appl. For. 5:51-55, March 1988.
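A hedged sketch of the kind of rate-of-return comparison the model supports is given below: the internal rate of return of a rotation's net cash flows is computed for a typical and an intensive management scenario. The rotation length and all cost and revenue figures are hypothetical, not taken from the paper.

```python
def real_irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection on net present value.
    `cash_flows[t]` is the net real cash flow in year t of the rotation."""
    npv = lambda r: sum(cf / (1 + r)**t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical 8-year rotation (all figures illustrative): year 0 planting
# cost, annual management costs, harvest revenue in year 8. The "intensive"
# scenario spends more on shearing and weed control but sells more and
# better-graded trees, mirroring the cost/revenue trade-off the model
# is built to evaluate.
typical = [-1200] + [-250] * 7 + [4500]
intensive = [-1200] + [-400] * 7 + [6500]
print(f"typical:   {real_irr(typical):.1%}")
print(f"intensive: {real_irr(intensive):.1%}")
```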


2006 ◽  
Vol 53 (12) ◽  
pp. 247-256 ◽  
Author(s):  
A. Marquot ◽  
A.-E. Stricker ◽  
Y. Racault

Activated sludge models, and ASM1 in particular, are well recognised and useful mathematical representations of the macroscopic processes involved in the biological degradation of the pollution carried by wastewater. Nevertheless, the use of these models through simulation software requires a careful methodology for their calibration (determination of the values of the model parameters) and for the validation step (verification with an independent data set). This paper presents the methodology and the results of dynamic calibration and validation tasks as preliminary work in a modelling project aimed at defining a reference guideline for French designers and operators. To reach these goals, a biological nutrient removal (BNR) wastewater treatment plant (WWTP) with intermittent aeration was selected and monitored for 2 years. Two sets of calibrated parameters are given and discussed. The results of the long-term validation task are presented through a 2-month simulation involving numerous operational changes. Finally, it is concluded that, even though calibrating ASM1 with a high degree of confidence using a single set of parameters was not possible, the results of the calibration are sufficient to obtain satisfactory results over long-term dynamic simulation. However, simulating long periods reveals specific calibration issues, such as variation of the nitrification capacity due to external events.
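A toy sketch of the calibration/validation workflow is given below, using a deliberately simplified one-parameter stand-in for ASM1: a single removal-rate constant is fitted to a calibration period by least squares and then checked against an independent validation period. The model, data and parameter are illustrative assumptions only, not the study's plant or parameter set.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_nh4(k, nh4_in, hrt_d):
    """Effluent ammonia for a completely mixed reactor with first-order
    removal: NH4_out = NH4_in / (1 + k * HRT)."""
    return nh4_in / (1.0 + k * hrt_d)

rng = np.random.default_rng(1)
nh4_in = rng.uniform(25.0, 45.0, size=60)      # influent NH4-N, mg/L
hrt = 0.8                                      # hydraulic retention time, d
observed = simulate_nh4(4.0, nh4_in, hrt) + rng.normal(0, 0.5, 60)

calib, valid = slice(0, 40), slice(40, 60)     # calibration vs. validation period
fit = least_squares(
    lambda k: simulate_nh4(k[0], nh4_in[calib], hrt) - observed[calib],
    x0=[1.0], bounds=(0.0, np.inf))
k_hat = fit.x[0]

rmse = np.sqrt(np.mean((simulate_nh4(k_hat, nh4_in[valid], hrt) - observed[valid])**2))
print(f"calibrated k = {k_hat:.2f} 1/d, validation RMSE = {rmse:.2f} mg/L")
```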

