ESTIMATION OF HERITABILITY OF AN INDEX

1978 ◽  
Vol 20 (4) ◽  
pp. 485-487 ◽  
Author(s):  
C. Y. Lin

Two alternative ways of calculating the heritability of an index are presented. The first method is the regression of the genetic index on the selection index; the second is the ordinary analysis of covariance among relatives, in which the selection index is treated as a single-trait measurement. It is shown that these two estimation methods are theoretically equivalent and yield similar estimates from experimental data.
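The regression view of index heritability can be sketched numerically. All matrices and weights below are hypothetical illustrations, not the paper's data; the only assumption carried over is that the heritability of an index I = b'x equals the regression of the genetic index on the selection index, which reduces to b'Gb / b'Pb for genetic covariance G and phenotypic covariance P:

```python
import numpy as np

# Hypothetical two-trait illustration, not the paper's data: with
# assumed genetic (G) and phenotypic (P) covariance matrices and index
# weights b, the heritability of the index I = b'x is b'Gb / b'Pb,
# i.e., the regression of the genetic index on the selection index.
G = np.array([[0.4, 0.1],
              [0.1, 0.3]])          # assumed genetic covariances
P = np.array([[1.0, 0.3],
              [0.3, 1.0]])          # assumed phenotypic covariances
b = np.array([0.6, 0.4])            # assumed index weights

h2_index = (b @ G @ b) / (b @ P @ b)
```

The second method of the paper, covariance analysis among relatives with the index as a single trait, would operate on records rather than on known G and P, but should converge to the same quantity.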

2015 ◽  
Vol 3 (1-2) ◽  
pp. 52-87 ◽  
Author(s):  
Nori Jacoby ◽  
Naftali Tishby ◽  
Bruno H. Repp ◽  
Merav Ahissar ◽  
Peter E. Keller

Linear models have been used in several contexts to study the mechanisms that underpin sensorimotor synchronization. Given that their parameters are often linked to psychological processes such as phase correction and period correction, fitting these parameters to experimental data is an important practical question. We present a unified method for parameter estimation of linear sensorimotor synchronization models that extends available techniques and enhances their usability. This method enables reliable and efficient analysis of experimental data for single-subject and multi-person synchronization. In a previous paper (Jacoby et al., 2015), we showed how to significantly reduce the estimation error and eliminate the bias of parameter estimation methods by adding a simple and empirically justified constraint on the parameter space. By applying this constraint in conjunction with the tools of matrix algebra, we here develop a novel method for estimating the parameters of most linear models described in the literature. Through extensive simulations, we demonstrate that our method reliably and efficiently recovers the parameters of two influential linear models, those of Vorberg and Wing (1996) and Schulze et al. (2005), together with their multi-person generalization to ensemble synchronization. We discuss how our method can be applied to the study of individual differences in sensorimotor synchronization ability, for example, in clinical populations and ensemble musicians.
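The kind of model this abstract describes can be illustrated with a toy first-order phase-correction process. This is a minimal sketch, not the paper's constrained estimator: the true correction gain, the noise scale, and the plain least-squares fit are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch, not the paper's estimator: a first-order linear
# phase-correction process in which each asynchrony is a fraction of
# the previous one plus timekeeper noise, A[n+1] = (1 - alpha)*A[n] + e[n].
alpha_true = 0.4
n = 5000
A = np.zeros(n)
for i in range(n - 1):
    A[i + 1] = (1 - alpha_true) * A[i] + rng.normal(0.0, 10.0)  # noise in ms

# Ordinary least squares: the slope of A[n+1] regressed on A[n]
# estimates (1 - alpha), so the phase-correction gain is 1 - slope.
slope, intercept = np.polyfit(A[:-1], A[1:], 1)
alpha_hat = 1.0 - slope
```

With correlated timekeeper and motor noise, as in the Vorberg-Wing model, plain OLS of this kind becomes biased, which is precisely the problem the paper's constrained method addresses.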


Author(s):  
Ziyue Zhang ◽  
A. Adam Ding ◽  
Yunsi Fei

Guessing entropy (GE) is a widely adopted metric that measures the average computational cost of a successful side-channel analysis (SCA). However, with current estimation methods, where the evaluator has to average the correct key rank over many independent side-channel leakage measurement sets, full-key GE estimation is impractical due to its prohibitive computing requirement. A recent estimation method based on posterior probabilities, although scalable, is not accurate. We propose a new guessing entropy estimation algorithm (GEEA) based on theoretical distributions of the ranking score vectors. By discovering the relationship of GE with pairwise success rates and utilizing it, GEEA uses a sum of many univariate Gaussian probabilities instead of multivariate Gaussian probabilities, significantly improving the computation efficiency. We show that GEEA is more accurate and efficient than all current GE estimations. To the best of our knowledge, it is the only GE estimation practical for full-key evaluation on experimental data sets the evaluator has access to. Moreover, it can accurately predict the GE for data sizes larger than the experimental sets, providing a comprehensive security evaluation.
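The baseline definition GEEA improves upon can be shown in a few lines. This is a toy Monte Carlo illustration, not GEEA itself: the score distribution, the bias given to the correct key, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration, not GEEA: guessing entropy as the average rank of
# the correct key over independent attack repetitions. The Gaussian
# score model and the correct key's bias are illustrative assumptions.
n_keys, n_trials = 256, 2000
correct = 37                                  # hypothetical correct subkey

scores = rng.normal(0.0, 1.0, (n_trials, n_keys))
scores[:, correct] += 1.5                     # leakage favors the right key

# Rank of the correct key in each trial (1 = ranked first), then average.
ranks = (scores > scores[:, [correct]]).sum(axis=1) + 1
GE = ranks.mean()
```

The abstract's point is that this direct averaging needs many independent measurement sets per subkey and does not scale to full-key GE, whereas GEEA replaces it with sums of univariate Gaussian probabilities.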


2006 ◽  
Vol 100 (3) ◽  
pp. 1049-1058 ◽  
Author(s):  
Olivier Bernard ◽  
Olivier Alata ◽  
Marc Francaux

Modeling the non-steady-state O2 uptake on-kinetics of high-intensity exercise in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated V̇o2 on-responses were generated to mimic the real time course of transitions from light- to high-intensity exercise, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant τ1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and τ1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant τ2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second-component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer, and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1, but better with SA for A2 and τ2.
Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, although a large inaccuracy remains in estimating the parameter values of the second exponential.
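The appeal of SA for discontinuous response functions can be sketched with a much-simplified version of the problem. This is a toy annealer fitting a single delayed exponential, not the paper's implementation or cooling schedule; the parameter values, step sizes, and temperature schedule are all illustrative assumptions.

```python
import math
import random

random.seed(0)

# Toy simulated-annealing fit, not the paper's implementation: a single
# delayed exponential y(t) = A*(1 - exp(-(t - td)/tau)) for t > td is
# fitted to noisy data by Metropolis-accepted random moves. Note the
# model is discontinuous in its derivative at t = td, the situation
# where gradient methods struggle.
A_true, td_true, tau_true = 2.0, 10.0, 25.0

def model(t, A, td, tau):
    return A * (1.0 - math.exp(-(t - td) / tau)) if t > td else 0.0

ts = [float(t) for t in range(120)]
ys = [model(t, A_true, td_true, tau_true) + random.gauss(0, 0.05) for t in ts]

def sse(p):
    return sum((y - model(t, *p)) ** 2 for t, y in zip(ts, ys))

params = [1.0, 5.0, 40.0]                 # deliberately poor start
cost = best_cost = sse(params)
best = params[:]
T = 1.0
steps = [0.05, 0.5, 1.0]                  # per-parameter move sizes
for _ in range(10000):
    T *= 0.9994                           # geometric cooling
    cand = [p + random.gauss(0, s) for p, s in zip(params, steps)]
    if cand[2] < 1e-3:                    # keep the time constant positive
        continue
    c = sse(cand)
    if c < cost or random.random() < math.exp(-(c - cost) / T):
        params, cost = cand, c
        if c < best_cost:
            best, best_cost = cand, c
```

Because worsening moves are occasionally accepted at high temperature, the search can cross the non-smooth region around the time delay instead of stalling on it.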


2012 ◽  
Vol 108 (7) ◽  
pp. 2069-2081 ◽  
Author(s):  
Sungho Hong ◽  
Quinten Robberechts ◽  
Erik De Schutter

The phase-response curve (PRC), relating the phase shift of an oscillator to an external perturbation, is an important tool for studying neurons and their population behavior. It can be experimentally estimated by measuring the phase changes caused by probe stimuli. These stimuli, usually short pulses or continuous noise, have a much wider frequency spectrum than that of neuronal dynamics. This makes the experimental data high dimensional, while the number of data samples tends to be small. Current PRC estimation methods have not been optimized for efficiently discovering the relevant degrees of freedom from such data. We propose a systematic and efficient approach based on a recently developed signal processing theory called compressive sensing (CS). CS is a framework for recovering sparsely constructed signals from undersampled data and is suitable for extracting information about the PRC from finite but high-dimensional experimental measurements. We illustrate how the CS algorithm can be translated into an estimation scheme and demonstrate that our CS method can produce good estimates of PRCs from simulated and experimental data, especially when the data size is so small that simple approaches such as naive averaging fail. The tradeoffs between degrees of freedom and goodness of fit were systematically analyzed, which helps us better understand which part of the data has the most predictive power. Our results illustrate that the finite size of neuroscientific data sets, compounded by their large dimensionality, can hamper studies of the neural code, and they suggest that CS is a good tool for overcoming this challenge.
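The core CS idea, recovering a sparse coefficient vector from fewer measurements than unknowns, can be sketched with orthogonal matching pursuit. This is a minimal illustration, not the paper's solver; the dimensions, sparsity, support, and Gaussian measurement matrix are all made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal compressive-sensing sketch (orthogonal matching pursuit, not
# the paper's algorithm): recover a k-sparse coefficient vector from
# m < n random linear measurements. All sizes and values are made up.
n, m, k = 128, 50, 4
support_true = [5, 17, 60, 99]
x_true = np.zeros(n)
x_true[support_true] = [1.0, -2.0, 1.5, 0.8]

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # measurement matrix
y = Phi @ x_true

# Greedily pick the column most correlated with the residual, then
# re-fit by least squares on all columns chosen so far.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

In the PRC setting, the columns of the measurement matrix would correspond to basis functions of the PRC (e.g., Fourier modes) evaluated on the probe stimuli, with only a few modes assumed active.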


2021 ◽  
Vol 2090 (1) ◽  
pp. 012143
Author(s):  
Corneliu Barbulescu ◽  
Toma-Leonida Dragomir

Abstract The behaviour of real capacitors in electric circuits deviates from that of the single ideal capacitance by which they are usually modelled. In order to find better compromises between precision and simplicity, different C-R-L models are used. In these models, C, R, and L are called equivalent parameters and take constant values. Under these assumptions, capacitors are modelled as lumped-parameter subsystems, although it is well known that real capacitors are essentially distributed-parameter systems. As highlighted in this paper, capacitors are also time-variant subsystems. To prove this, we use two types of experimental data: data measured during the capacitor's discharge process and data obtained from frequency characteristics. The article proposes two estimation methods for equivalent values of the model parameters C and R, based on their time variance as highlighted by the experimental data. The estimation methods use systems of equations associated with the capacitor's discharge and with the frequency characteristics, respectively, via polynomial regression. The experiments were carried out with an electrolytic polymer capacitor rated 220 μF, 25 V, 2.5 A rms, 85 °C, designed mainly for energy storage and filtering; the results were confirmed by experiments performed on other similar capacitors.
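The discharge-based branch of such an estimation can be sketched as a log-linear fit. This is a hedged illustration, not the paper's procedure: the known load resistor, the noise level, and the constant-C assumption are all illustrative; only the 220 μF / 25 V rating comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hedged sketch, not the paper's method: estimate the equivalent
# capacitance of a capacitor discharging through a known resistor,
# using V(t) = V0 * exp(-t / (R * C)) and a degree-1 polynomial fit
# on log V. R_load and the noise level are assumptions.
R_load = 100.0                       # ohms, assumed known
C_true = 220e-6                      # 220 uF, as in the rated example
V0 = 25.0

t = np.linspace(0.0, 0.1, 200)       # 100 ms of discharge
V = V0 * np.exp(-t / (R_load * C_true)) * (1 + rng.normal(0, 1e-3, t.size))

# log V = log V0 - t / (R * C): the fitted slope gives the time
# constant, and hence the equivalent capacitance.
slope = np.polyfit(t, np.log(V), 1)[0]
C_hat = -1.0 / (R_load * slope)
```

A time-variant C, the paper's actual subject, would show up as curvature in log V versus t, which is where the higher-order polynomial regression mentioned in the abstract comes in.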


2011 ◽  
Vol 23 (8) ◽  
pp. 1944-1966 ◽  
Author(s):  
Susanne Ditlevsen ◽  
Petr Lansky

A convenient and often used summary measure to quantify the firing variability in neurons is the coefficient of variation (CV), defined as the standard deviation divided by the mean. It is therefore important to find an estimator that gives reliable results from experimental data; that is, the estimator should be unbiased and have low estimation variance. When the CV is evaluated in the standard way (the empirical standard deviation of interspike intervals divided by their average), the estimator is biased, underestimating the true CV, especially if the distribution of the interspike intervals is positively skewed. Moreover, the estimator has a large variance for commonly used distributions. The aim of this letter is to quantify the bias and propose alternative estimation methods. If the distribution is assumed known or can be determined from data, parametric estimators are proposed, which not only remove the bias but also decrease the estimation errors. If no distribution is assumed and the data are very positively skewed, we propose to correct the standard estimator. In defining the corrected estimator, we exploit the fact that working on the log scale is more stable for positively skewed distributions. The estimators are evaluated through simulations and applied to experimental data from olfactory receptor neurons in rats.
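The underestimation and the parametric remedy can both be demonstrated in simulation. This is a toy version of the argument, not the letter's full analysis: the lognormal ISI model, sample size, and number of repeats are made-up choices; the lognormal identity CV = sqrt(exp(s^2) - 1) is standard.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy demonstration of the bias, not the letter's analysis: for
# lognormal interspike intervals with log-scale s.d. s, the true CV
# is sqrt(exp(s^2) - 1). Sample size and repeat count are made up.
s = 1.0
cv_true = np.sqrt(np.exp(s**2) - 1)           # about 1.31

n, reps = 20, 4000                            # small samples, many repeats
isi = rng.lognormal(mean=0.0, sigma=s, size=(reps, n))

# Standard estimator: empirical s.d. divided by the mean.
cv_std = isi.std(axis=1, ddof=1) / isi.mean(axis=1)

# Parametric log-scale estimator: plug the sample variance of the
# log-ISIs into the lognormal CV formula.
cv_par = np.sqrt(np.exp(np.log(isi).var(axis=1, ddof=1)) - 1)

bias_std = cv_std.mean() - cv_true            # clearly negative
bias_par = cv_par.mean() - cv_true            # much closer to zero
```

The standard estimator rarely samples the heavy right tail at n = 20 and so underestimates the CV; the log-scale estimator avoids this because the log-ISIs are symmetric under the lognormal assumption.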


2021 ◽  
Vol 2021 ◽  
pp. 1-18
Author(s):  
Jonna Tiainen ◽  
Ahti Jaatinen-Värri ◽  
Aki Grönman ◽  
Petri Sallinen ◽  
Juha Honkatukia ◽  
...  

The fast preliminary design and safe operation of turbomachines require a simple and accurate prediction of axial thrust. An underestimation of these forces may result in undersized bearings, which can easily overload and suffer damage. While large safety margins are used in bearing design to avoid overloading, this leads to costly oversizing. In this study, the accuracy of currently available axial thrust estimation methods is analyzed by comparing them to each other and to theoretical pressure distributions, numerical simulations, and new experimental data. Available methods tend to underestimate the maximum axial thrust and require data that are unavailable during the preliminary design of turbomachines. This paper presents a new, simple axial thrust estimation method that requires only a few preliminary design parameters as input and combines the advantages of previously published methods, resulting in a more accurate axial thrust estimation. The method is validated against previously published data from a radial pump and new experimental data from a centrifugal compressor, the latter measured at Lappeenranta-Lahti University of Technology LUT, Finland, and two gas turbines measured at Aurelia Turbines Oy, Finland. The maximum deviation between the axial thrust estimated with the hybrid method and the measured one is less than 13%, while the other methods deviate by tens of percent.


1977 ◽  
Vol 17 (01) ◽  
pp. 57-64 ◽  
Author(s):  
R.G. Bentsen ◽  
J. Anli

Abstract Previously reported techniques for converting basic centrifuge data into a capillary-pressure curve have one serious drawback: they all involve the graphical or numerical differentiation of experimental data. The problems associated with the differentiation of experimental data can be avoided by using the parameter estimation techniques proposed here. The purpose of this paper is to demonstrate the advantages of using parameter estimation techniques for obtaining a capillary-pressure curve from centrifuge data. Two parameter estimation methods for handling centrifuge data were investigated and compared with a modified form of Hassler's technique for interpreting such data. This investigation indicates that, while Hassler's method and the parameter estimation techniques were equally able to generate the capillary-pressure curve from centrifuge data, the latter procedures are preferable since they use various integration schemes and, hence, avoid the differentiation problems associated with previously reported methods of data interpretation. Moreover, if the parameter estimation techniques are used, the data can be smoothed and the irreducible water saturation, displacement pressure, and capillary-pressure normalizing parameter can be estimated.

Introduction The theory for converting experimental data obtained with a centrifuge into a capillary-pressure curve was developed by Hassler and Brunner. The basic equation used in the conversion is

P_ci S̄_n = [(1 + cos α) / 2] ∫₀^{P_ci} S_n(x) / √(1 − (x/P_ci) sin²α) dx, ....(1)

where

P_ci = (Δρ ω² / 2)(r_e² − r_i²), x = (Δρ ω² / 2)(r_e² − r²), and cos α = r_i / r_e.

Hassler and Brunner were unable to find an analytical solution to Eq. 1, but they demonstrated how it could be solved by the method of successive approximations. Since this method is very tedious in application, Hassler and Brunner preferred using a simplifying assumption that amounts to setting r_i equal to r_e. This assumption, which the authors considered to be reasonable provided the ratio r_i/r_e was greater than 0.7, resulted in the equation

P_ci S̄_n = ∫₀^{P_ci} S_n(x) dx,

from which it follows that

S_n(P_ci) = d(P_ci S̄_n)/dP_ci. ....(2)

Setting r_i equal to r_e assumes that the length of the core is negligible compared with the radius of rotation of the core. Hoffman has shown that this assumption is unnecessary and that Eq. 3 should be used to solve for the saturation at each speed level:

S_n(P_ci) = [2 cos α / (1 + cos α)] (S̄_n + P_ci dS̄_n/dP_ci). ....(3)

Eqs. 2 and 3 can be solved by taking slopes of graphs of S̄_n and P_ci S̄_n vs. P_ci.
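The simplified Hassler-Brunner relation (Eq. 2) can be verified numerically on made-up data. The saturation curve and pressure grid below are illustrative assumptions; the check is simply that differentiating P_ci S̄_n recovers S_n when the average saturation comes from the integral form.

```python
import numpy as np

# Numeric check of Eq. 2 on made-up data, in the limit r_i -> r_e:
# with an assumed saturation curve S_n, the average saturation obeys
# P_ci * Sbar_n = integral of S_n from 0 to P_ci, so S_n is recovered
# by differentiating P_ci * Sbar_n.
P = np.linspace(0.01, 10.0, 1000)
S_true = 0.2 + 0.8 * np.exp(-P / 3.0)          # hypothetical S_n(P_ci)

# Closed-form integral of S_n from 0 to P gives the average saturation.
Sbar = (0.2 * P + 2.4 * (1.0 - np.exp(-P / 3.0))) / P

# Eq. 2: S_n(P_ci) = d(P_ci * Sbar_n)/dP_ci, here by finite differences.
# On noisy measured data this differentiation amplifies noise, which is
# exactly the drawback the parameter estimation techniques avoid.
S_rec = np.gradient(P * Sbar, P)
```

On experimental (noisy) S̄_n data the same differentiation step magnifies measurement error, which motivates the paper's integration-based parameter estimation instead.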


2021 ◽  
Author(s):  
Jan Boelts ◽  
Jan-Matthis Lueckmann ◽  
Richard Gao ◽  
Jakob H. Macke

Identifying parameters of computational models that capture experimental data is a central task in cognitive neuroscience. Bayesian statistical inference aims not only to find a single configuration of best-fitting parameters but to recover all model parameters that are consistent with the data and prior knowledge. Statistical inference methods usually require the ability to evaluate the likelihood of the model; however, for many models of interest in cognitive neuroscience, the associated likelihoods cannot be computed efficiently. Simulation-based inference (SBI) offers a solution to this problem by requiring only access to simulations produced by the model. Here, we provide an efficient SBI method for models of decision-making. Our approach, Mixed Neural Likelihood Estimation (MNLE), trains neural density estimators on model simulations to emulate the simulator. The likelihoods of the emulator can then be used to perform Bayesian parameter inference on experimental data using standard approximate inference methods like Markov Chain Monte Carlo sampling. While most neural likelihood estimation methods target continuous data, MNLE works with mixed data types, as typically obtained in decision-making experiments (e.g., binary decisions and associated continuous reaction times). We demonstrate MNLE on the classical drift-diffusion model (DDM) and compare its performance to a recently proposed method for SBI on DDMs, called likelihood approximation networks (LAN; Fengler et al., 2021). We show that MNLE is substantially more efficient than LANs, requiring up to six orders of magnitude fewer model simulations to achieve comparable likelihood accuracy and evaluation time, while providing the same level of flexibility. We include an implementation of our algorithm in the user-friendly open-source package sbi.
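The "mixed data" the abstract refers to can be made concrete with a toy DDM simulator. This is an Euler-Maruyama sketch with illustrative drift, bound, and step size, not the paper's setup; it only shows the shape of the data (binary choice plus continuous reaction time) that MNLE is designed to emulate.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy drift-diffusion simulator (Euler-Maruyama; drift, bound, and dt
# are illustrative assumptions, not the paper's setup). Each trial
# yields the mixed data type MNLE targets: a binary choice plus a
# continuous reaction time. Trials hitting t_max are coded as choice 0.
def simulate_ddm(drift=1.0, bound=1.0, dt=1e-3, t_max=5.0):
    x, t = 0.0, 0.0
    while abs(x) < bound and t < t_max:
        x += drift * dt + np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x >= bound else 0), t        # (choice, reaction time)

trials = [simulate_ddm() for _ in range(200)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
```

An SBI workflow would feed many such (parameters, choice, RT) triples to a density estimator; the point of MNLE is handling the discrete choice and the continuous RT jointly.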

