A Comparative Analysis of Parsimonious Yield Curve Models with Focus on the Nelson-Siegel, Svensson and Bliss Versions

Author(s):  
Ranik Raaen Wahlstrøm ◽  
Florentina Paraschiv ◽  
Michael Schürle

We shed light on computational challenges when fitting the Nelson-Siegel, Bliss and Svensson parsimonious yield curve models to observed US Treasury securities with maturities up to 30 years. As model parameters have a specific financial meaning, the stability of their estimated values over time becomes relevant when their dynamic behavior is interpreted in risk-return models. Our study is the first in the literature to compare the stability of estimated model parameters both across parsimonious models and across approaches for predefining initial parameter values. We find that the Nelson-Siegel parameter estimates are more stable and retain their intrinsic economic interpretation. Results additionally reveal patterns of confounding effects in the Svensson model. To obtain the most stable and intuitive parameter estimates over time, we recommend using the Nelson-Siegel model with initial parameter values derived from the observed yields. The implications of excluding Treasury bills, constraining parameters and reducing clusters across time to maturity are also investigated.
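For reference, the Nelson-Siegel model expresses the yield at time to maturity τ as y(τ) = β0 + β1·(1 − e^(−τ/λ))/(τ/λ) + β2·[(1 − e^(−τ/λ))/(τ/λ) − e^(−τ/λ)], where β0, β1 and β2 are commonly read as level, slope and curvature and λ is a decay parameter. Below is a minimal sketch of a least-squares fit in Python; the maturities, yields, bounds and starting values are illustrative stand-ins, not the paper's data or recommended settings.

```python
# A minimal sketch of fitting the Nelson-Siegel model with scipy.
# All data, bounds and starting values below are illustrative only.
import numpy as np
from scipy.optimize import least_squares

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield for time to maturity tau (in years)."""
    x = tau / lam
    term = (1 - np.exp(-x)) / x
    return beta0 + beta1 * term + beta2 * (term - np.exp(-x))

# Illustrative observed maturities (years) and yields (%).
maturities = np.array([0.25, 0.5, 1, 2, 5, 10, 30])
yields = np.array([0.9, 1.0, 1.2, 1.5, 2.0, 2.4, 2.8])

def residuals(p):
    return nelson_siegel(maturities, *p) - yields

# Initial values derived from observed yields, in the spirit of the paper's
# recommendation: level ~ longest yield, slope ~ short-minus-long spread.
p0 = [yields[-1], yields[0] - yields[-1], 1.0, 2.0]
fit = least_squares(residuals, p0, bounds=([0, -15, -30, 0.05], [15, 15, 30, 30]))
beta0, beta1, beta2, lam = fit.x
```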

2018 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

It has become a key goal of model-based neuroscience to estimate trial-by-trial fluctuations of cognitive model parameters in order to link these fluctuations to brain signals. However, previously developed methods were limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between the parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method allows distinguishing true variability in a parameter from noise and from variability in other parameters. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.
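To make the rationale concrete, here is a minimal Python sketch of LOTO on a toy one-parameter model: the model is refit once per omitted trial, and the shift in the estimate is rescaled into a trial-level value. The toy Gaussian model and the jackknife-style rescaling are illustrative assumptions, not the paper's own specification.

```python
# A minimal sketch of Leave-One-Trial-Out (LOTO): refit the model with one
# trial omitted and read the trial's parameter value off the resulting shift.
# The Gaussian toy model stands in for a concrete cognitive model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.5, size=200)  # stand-in for trial data

def fit(trials):
    """Estimate a single parameter by maximum likelihood (here: a mean)."""
    nll = lambda mu: 0.5 * np.sum((trials - mu) ** 2)
    return minimize_scalar(nll).x

n = len(data)
theta_all = fit(data)
loto = np.empty(n)
for i in range(n):
    theta_minus_i = fit(np.delete(data, i))
    # Rescale the omission-induced shift into a trial-level estimate; this
    # jackknife-style pseudo-value is one possible rescaling of the difference.
    loto[i] = theta_all + (n - 1) * (theta_all - theta_minus_i)
```

For this toy mean model the rescaled value recovers each trial's observation exactly, which illustrates why the shift caused by omitting a trial carries information about that trial's parameter.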


2021 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Esther Ulitzsch

The bivariate Stable Trait, AutoRegressive Trait, and State (STARTS) model provides a general approach for estimating reciprocal effects between constructs over time. However, previous research has shown that this model is difficult to estimate with the maximum likelihood (ML) method (e.g., estimation often fails to converge). In this article, we introduce a Bayesian approach for estimating the bivariate STARTS model and implement it in the software Stan. We discuss issues of model parameterization and show how appropriate prior distributions for model parameters can be selected. Specifically, we propose the four-parameter beta distribution as a flexible prior distribution for the autoregressive and cross-lagged effects. Using a simulation study, we show that the proposed Bayesian approach provides more accurate estimates than ML estimation in challenging data constellations. An example is presented to illustrate how the Bayesian approach can be used to stabilize the parameter estimates of the bivariate STARTS model.
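The four-parameter beta distribution is a Beta(a, b) density rescaled from [0, 1] to an arbitrary interval [lower, upper], which lets a prior concentrate mass on, say, the stationary range of an autoregressive effect. A quick Python sketch follows; the shape parameters and support are illustrative choices, not the values recommended in the article.

```python
# A quick sketch of a four-parameter beta prior for an autoregressive
# effect; the shape values and support below are illustrative only.
from scipy import stats

# Beta(a, b) rescaled from [0, 1] to [lower, upper] via loc/scale.
a, b = 3.0, 3.0           # shape parameters (hypothetical choice)
lower, upper = -1.0, 1.0  # support covering the stationary range
prior = stats.beta(a, b, loc=lower, scale=upper - lower)

samples = prior.rvs(size=5, random_state=0)  # draws from the prior
density_at_zero = prior.pdf(0.0)             # prior density at effect = 0
```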


Author(s):  
Alan N. Rechtschaffen

This chapter begins with a discussion of the purpose and goals of Treasury securities. Treasury securities are a type of debt instrument carrying minimal credit risk. U.S. Treasury bills, notes, and bonds are issued by the Treasury Department and represent direct obligations of the U.S. government. Treasury securities meet the needs of investors who wish to “loan” money to the federal government and in return receive a fixed or floating interest rate. The Treasury yield curve is a benchmark for fixed income securities across the spectrum of debt securities. The remainder of the chapter covers types of Treasury securities, pricing, bond auctions and their effect on price, interest rates, and STRIPS (Separate Trading of Registered Interest and Principal of Securities).
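On the pricing topic the chapter covers, a fixed-coupon Treasury note is priced as the present value of its coupon stream plus the present value of the face amount at maturity, discounted at the quoted yield. A minimal sketch follows; the coupon, maturity and yield are illustrative numbers, not market data or the chapter's examples.

```python
# A minimal sketch of fixed-coupon bond pricing from a quoted annual yield;
# the coupon, maturity and yield below are illustrative, not market data.
def bond_price(face, annual_coupon_rate, annual_yield, years, freq=2):
    """Present value of semiannual coupons plus face value at maturity."""
    coupon = face * annual_coupon_rate / freq
    periods = int(years * freq)
    y = annual_yield / freq
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, periods + 1))
    pv_face = face / (1 + y) ** periods
    return pv_coupons + pv_face

# A 10-year note with a 2.5% coupon priced at a 2.4% yield trades above par.
price = bond_price(face=100, annual_coupon_rate=0.025, annual_yield=0.024, years=10)
```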


2018 ◽  
Vol 141 (1) ◽  
Author(s):  
Alyssa T. Liem ◽  
J. Gregory McDaniel ◽  
Andrew S. Wixom

A method is presented to improve the estimates of material properties, dimensions, and other model parameters for linear vibrating systems. The method improves the estimates of a single model parameter of interest by finding parameter values that bring model predictions into agreement with experimental measurements. A truncated Neumann series is used to approximate the inverse of the dynamic stiffness matrix. This approximation avoids the need to directly solve the equations of motion for each parameter variation. The Neumann series is shown to be equivalent to a Taylor series expansion about nominal parameter values. A recursive scheme is presented for computing the associated derivatives, which are interpreted as sensitivities of displacements to parameter variations. The convergence of the Neumann series is studied in the context of vibrating systems, and it is found that the spectral radius is strongly dependent on system resonances. A homogeneous viscoelastic bar in longitudinal vibration is chosen as a test specimen, and the complex-valued Young's modulus is chosen as an uncertain parameter. The method is demonstrated on simulated experimental measurements computed from the model. These demonstrations show that parameter values estimated by the method agree with those used to simulate the experiment when enough terms are included in the Neumann series. Similar results are obtained for the case of an elastic plate with clamped boundary conditions. The method is also demonstrated on experimental data, where it produces improved parameter estimates that bring the model predictions into agreement with the measured response to within 1% at a point on the bar across a frequency range that includes three resonance frequencies.
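The identity behind the approximation: for a nominal matrix A and a perturbation ΔA, (A + ΔA)⁻¹ = Σ_{k≥0} (−A⁻¹ΔA)ᵏ A⁻¹, which converges when the spectral radius of A⁻¹ΔA is below one, consistent with the paper's observation that convergence degrades near resonances. A small numpy sketch with random stand-in matrices:

```python
# A minimal numpy sketch of approximating (A + dA)^{-1} with a truncated
# Neumann series, avoiding a fresh solve for each parameter variation.
# The matrices are random stand-ins for a dynamic stiffness matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = np.eye(n) * 5 + rng.normal(size=(n, n))  # nominal dynamic stiffness
dA = 0.1 * rng.normal(size=(n, n))           # parameter perturbation

A_inv = np.linalg.inv(A)
M = -A_inv @ dA  # series converges when the spectral radius of M is < 1
assert np.max(np.abs(np.linalg.eigvals(M))) < 1

approx = np.zeros_like(A)
term = A_inv.copy()
for _ in range(10):  # truncate the series after 10 terms
    approx += term
    term = M @ term

exact = np.linalg.inv(A + dA)
rel_error = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```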


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Gluth ◽  
Nachshon Meiran

A key goal of model-based cognitive neuroscience is to estimate the trial-by-trial fluctuations of cognitive model parameters in order to link these fluctuations to brain signals. However, previously developed methods are limited by being difficult to implement, time-consuming, or model-specific. Here, we propose an easy, efficient and general approach to estimating trial-wise changes in parameters: Leave-One-Trial-Out (LOTO). The rationale behind LOTO is that the difference between parameter estimates for the complete dataset and for the dataset with one omitted trial reflects the parameter value in the omitted trial. We show that LOTO is superior to estimating parameter values from single trials and compare it to previously proposed approaches. Furthermore, the method makes it possible to distinguish true variability in a parameter from noise and from other sources of variability. In our view, the practicability and generality of LOTO will advance research on tracking fluctuations in latent cognitive variables and linking them to neural data.


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Jim J. Xiao

The objectives were to review available pharmacokinetic (PK) models for saturable FcRn-mediated IgG disposition, and to explore an alternative semimechanistic model. Most available empirical and mechanistic PK models assumed equal IgG concentrations in plasma and endosome, in addition to other model-specific assumptions. These might have led to inappropriate parameter estimates and model interpretations. Some physiologically based PK (PBPK) models included FcRn-mediated IgG recycling. The nature of PBPK models requires borrowing parameter values from the literature, and subtle differences in the assumptions may produce dramatic changes in parameter estimates related to the IgG recycling kinetics. These models might have been unnecessarily complicated for addressing FcRn saturation and nonlinear IgG PK, especially in the intravenous immunoglobulin (IVIG) setting. A simple semimechanistic PK model (cutoff model) was developed that assumed a constant endogenous IgG production rate and a saturable FcRn-binding capacity. The FcRn-binding capacity was defined as MAX, and IgG concentrations exceeding MAX in the endosome resulted in lysosomal degradation. The model parameters were estimated using simulated data from previously published models. The cutoff model adequately described the rat and mouse IgG PK data simulated from published models and allowed reasonable estimation of endogenous IgG turnover rates.
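One heavily simplified one-compartment reading of the cutoff idea: IgG is produced at a constant rate, IgG up to the capacity MAX is protected (recycled), and only the excess is degraded. The sketch below encodes that reading in Python; the rate constants, capacity and dosing scenario are hypothetical, and the published model is more detailed.

```python
# A heavily simplified one-compartment reading of the cutoff model: constant
# endogenous IgG production, elimination only of the amount exceeding the
# FcRn-binding capacity MAX. All parameter values are hypothetical.
from scipy.integrate import solve_ivp

k_syn = 1.0  # endogenous IgG production rate (amount/day), hypothetical
k_deg = 0.5  # degradation rate of unprotected IgG (1/day), hypothetical
MAX = 10.0   # FcRn-binding (protection) capacity, hypothetical

def digg_dt(t, c):
    unprotected = max(c[0] - MAX, 0.0)  # only IgG above capacity is degraded
    return [k_syn - k_deg * unprotected]

# Start above capacity (e.g., after an IVIG dose) and integrate forward;
# the solution decays toward the steady state MAX + k_syn / k_deg.
sol = solve_ivp(digg_dt, (0, 60), [25.0])
steady_state = sol.y[0, -1]
```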


Author(s):  
Rafegh Aghamohammadi ◽  
Jorge Laval

This paper extends the Stochastic Method of Cuts (SMoC) to approximate the Macroscopic Fundamental Diagram (MFD) of urban networks and uses the Maximum Likelihood Estimation (MLE) method to estimate the model parameters based on empirical data from a corridor and 30 cities around the world. For the corridor case, the estimated values are in good agreement with the measured values of the parameters. For the network datasets, the results indicate that the method yields satisfactory parameter estimates and graphical fits for roughly 50% of the studied networks, where estimates fall within the expected range of parameter values. The satisfactory estimates are mostly for datasets that (i) cover a relatively wide range of densities and (ii) have average flow values at different densities that are approximately normally distributed, resembling the probability density function of the SMoC. The estimated parameter values are compared to the real or expected values, and any discrepancies and their potential causes are discussed in depth to identify the challenges in MFD estimation, both analytical and empirical. In particular, we find that the most important issues needing further investigation are: (i) the distribution of loop detectors within the links, (ii) the distribution of loop detectors across the network, and (iii) the treatment of unsignalized intersections and their impact on the block length.
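A generic sketch of the MLE step described above: flow observations at each density are treated as approximately normal around a model curve, and the curve's parameters are found by minimizing the negative log-likelihood. The parabolic (Greenshields-style) flow-density curve below is a placeholder for illustration, not the SMoC itself, and the data are synthetic.

```python
# A generic MLE sketch: fit a flow-density curve by maximizing a Gaussian
# likelihood. The parabolic curve is a placeholder, not the SMoC, and the
# density/flow data are synthetic.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
density = rng.uniform(1, 99, size=300)  # veh/km, illustrative
flow = 1.6 * density * (1 - density / 100) + rng.normal(scale=5, size=density.size)

def neg_log_lik(params):
    vf, kj, log_sigma = params  # free-flow speed, jam density, log noise sd
    sigma = np.exp(log_sigma)   # log-parameterized to keep sigma positive
    mean = vf * density * (1 - density / kj)
    return 0.5 * np.sum(((flow - mean) / sigma) ** 2) + flow.size * log_sigma

res = minimize(neg_log_lik, x0=[1.0, 120.0, np.log(10.0)], method="Nelder-Mead")
vf_hat, kj_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
```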


2017 ◽  
Vol 48 (1) ◽  
pp. 339-374
Author(s):  
Greg Taylor

The hierarchical credibility model was introduced, and extended, in the 1970s and early 1980s. It deals with the estimation of parameters that characterize the nodes of a tree structure. That model is limited, however, by the fact that its parameters are assumed fixed over time. This causes the model's parameter estimates to track the parameters poorly when the latter are subject to variation over time. This paper seeks to remove this limitation by assuming the parameters in question to follow a process akin to a random walk over time, producing an evolutionary hierarchical model. The specific form of the model is compatible with the use of the Kalman filter for parameter estimation and forecasting. The application of the Kalman filter is conceptually straightforward, but the tree structure of the model parameters can be extensive, and some effort is required to keep the updating algorithm organized. This is achieved by suitable manipulation of the graph associated with the tree. The graph matrix then appears in the matrix calculations inherent in the Kalman filter. A numerical example is included to illustrate the application of the filter to the model.
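For orientation, one Kalman predict/update cycle for parameters following a random walk looks like the sketch below. The matrices are generic placeholders; in particular, the paper's graph-based organization of the tree-structured parameters is omitted, and the values are illustrative.

```python
# A minimal sketch of the Kalman recursion for parameters that follow a
# random walk. Matrices are generic placeholders; the paper's graph-based
# structuring of the tree is omitted.
import numpy as np

def kalman_step(x, P, y, H, Q, R):
    """One predict/update cycle for state x with covariance P."""
    # Predict: a random walk leaves the state mean unchanged, inflates P.
    x_pred = x
    P_pred = P + Q
    # Update with observation y = H x + noise.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative two-node example.
x, P = np.zeros(2), np.eye(2)
H = np.eye(2)
Q, R = 0.01 * np.eye(2), 0.25 * np.eye(2)
for y in [np.array([0.1, 0.2]), np.array([0.15, 0.1])]:
    x, P = kalman_step(x, P, y, H, Q, R)
```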


2010 ◽  
Vol 31 (2) ◽  
pp. 68-73 ◽  
Author(s):  
María José Contreras ◽  
Víctor J. Rubio ◽  
Daniel Peña ◽  
José Santacreu

Individual differences in performance when solving spatial tasks can be partly explained by differences in the strategies used. Two main difficulties arise when studying such strategies: the identification of the strategy itself and the stability of the strategy over time. In the present study, strategies were separated into three categories: segmented (analytic), holistic-feedback dependent, and holistic-planned, according to the procedure described by Peña, Contreras, Shih, and Santacreu (2008). A group of individuals was evaluated twice on a 1-year test-retest basis. During the 1-year interval between tests, the participants were not able to prepare for the specific test used in this study or similar ones. It was found that 60% of the individuals kept the same strategy throughout the tests. When strategy changes did occur, they usually involved a shift to a better strategy. These results demonstrate the robustness of strategy-based procedures for studying individual differences in spatial tasks.

