Vega-Vanna-Volga Pricing of Claims on Exponential Quadratic Variation

2019 ◽ Author(s): Frido Rolloos
2013 ◽ Vol 20 (5) ◽ pp. 415-449 ◽ Author(s): S. T. Tse, P. A. Forsyth, J. S. Kennedy, H. Windcliff

2002 ◽ Vol 17 (5) ◽ pp. 457-477 ◽ Author(s): Ole E. Barndorff-Nielsen, Neil Shephard

2020 ◽ Vol 34 (04) ◽ pp. 5199-5206 ◽ Author(s): Siddharth Mitra, Aditya Gopalan

We study how to adapt to smoothly varying (‘easy’) environments in well-known online learning problems where acquiring information is expensive. For the problem of label efficient prediction, which is a budgeted version of prediction with expert advice, we present an online algorithm whose regret depends optimally on the number of labels allowed and on Q* (the quadratic variation of the losses of the best action in hindsight), along with a parameter-free counterpart whose regret depends optimally on Q (the quadratic variation of the losses of all the actions). These quantities can be significantly smaller than T (the total time horizon), yielding an improvement over existing, variation-independent results for the problem. We then extend our analysis to handle label efficient prediction with bandit (partial) feedback, i.e., label efficient bandits. Our work builds upon the framework of optimistic online mirror descent, and leverages second-order corrections along with a carefully designed hybrid regularizer that encodes the constrained information structure of the problem. We then consider revealing action partial monitoring games, a version of label efficient prediction with additive information costs, which in general are known to lie in the hard class of games having minimax regret of order T^{2/3}. We provide a strategy with an O((Q*T)^{1/3}) regret bound for revealing action games, along with one with an O((QT)^{1/3}) bound for the full class of hard partial monitoring games, both being strict improvements over current bounds.
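To make the mechanism concrete, below is a minimal Python sketch of the general recipe the abstract describes: optimistic online mirror descent with an entropic regularizer (i.e., optimistic exponential weights) run under a label budget. Labels are requested with probability query_prob each round, observed loss vectors are importance-weighted so the estimates stay unbiased, and the last observed loss vector doubles as the optimistic hint; when losses vary smoothly (small quadratic variation, roughly Σ_t ||ℓ_t − ℓ_{t−1}||²), that hint is accurate. All names here (label_efficient_omd, query_prob, eta) are hypothetical, and the sketch deliberately omits the paper's second-order corrections and hybrid regularizer, so it illustrates the structure of the approach rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def label_efficient_omd(loss_matrix, query_prob, eta):
    """Optimistic exponential weights under a label budget (illustrative only).

    loss_matrix: (T, K) array of losses in [0, 1] for K experts over T rounds.
    query_prob:  probability of requesting the loss vector in a given round.
    eta:         learning rate.
    """
    T, K = loss_matrix.shape
    cum_est = np.zeros(K)   # cumulative importance-weighted loss estimates
    hint = np.zeros(K)      # optimistic guess for the next loss vector
    total_loss, n_queries = 0.0, 0
    for t in range(T):
        # Play from weights that incorporate the optimistic hint.
        logits = -eta * (cum_est + hint)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        total_loss += float(loss_matrix[t] @ p)  # expected loss of the play
        # Request the labels only with probability query_prob.
        if rng.random() < query_prob:
            n_queries += 1
            cum_est += loss_matrix[t] / query_prob  # unbiased estimate
            hint = loss_matrix[t].copy()            # reuse as the next hint
    return total_loss, n_queries

# Usage on a slowly drifting ('easy') loss sequence.
T, K = 10_000, 5
drift = 0.1 * np.sin(np.linspace(0.0, 3.0, T))[:, None]
losses = np.clip(np.linspace(0.2, 0.8, K) + drift
                 + 0.01 * rng.standard_normal((T, K)), 0.0, 1.0)
loss, used = label_efficient_omd(losses, query_prob=0.1, eta=0.05)
print(f"cumulative loss {loss:.1f} using {used} of {T} labels")
```

The hint term is what makes the method optimistic: if the next loss vector resembles the last observed one, the weights are already well positioned, which is exactly the regime where the quadratic variation Q is small.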

