A STATISTICAL APPROACH FOR RELIABLE MEASUREMENT WITH FAMILIARIZATION TRIALS

Author(s):  
Steven Kim
Christopher Essert

Accurate and reliable measurement is important in exercise science. Measurements tend to be less reliable when subjects are not professional athletes or are unfamiliar with a given task. These subjects need familiarization trials, but determining the number of familiarization trials is challenging because it may be individual-specific and task-specific, and some participants may be eliminated because their results deviate from arbitrary ad hoc rules. We treat these challenges as a statistical problem, and we propose model-averaging to measure a subject's familiarized performance without fixing the number of familiarization trials in advance. Model-averaging accounts for the uncertainty associated with the number of familiarization trials that a subject needs. Simulations show that model-averaging is useful when the familiarization phase is long or when familiarization occurs at a fast rate relative to the amount of noise in the data. An applet is available online, with a brief User's Guide included in the appendix to this article.

Keywords: familiarization; reliability; accuracy; model-averaging; Akaike Information Criterion
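
As a concrete illustration of the idea, the sketch below computes an AIC-weighted average of the post-familiarization mean across all candidate numbers of familiarization trials. The changepoint-style Gaussian model is our own illustrative choice, not necessarily the authors' specification:

```python
import numpy as np

def familiarized_estimate(y, max_k=None):
    """Model-average the post-familiarization mean over candidate numbers of
    familiarization trials k, weighting each candidate by its Akaike weight.
    A minimal sketch of the idea, not the authors' implementation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    max_k = n - 2 if max_k is None else max_k
    aics, means = [], []
    for k in range(max_k + 1):
        post = y[k:]                      # trials assumed already familiarized
        mu = post.mean()
        # each familiarization trial gets its own mean (zero residual there),
        # so only the post-changepoint residuals contribute to the fit
        sigma2 = ((post - mu) ** 2).sum() / n    # Gaussian MLE of variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        p = k + 2                         # k trial means + common mean + sigma
        aics.append(2 * p - 2 * loglik)
        means.append(mu)
    delta = np.array(aics) - min(aics)
    w = np.exp(-0.5 * delta)
    w /= w.sum()                          # Akaike weights
    return float(np.dot(w, means))

# e.g. a simulated learning curve that plateaus near 80:
rng = np.random.default_rng(1)
scores = 80 - 15 * np.exp(-0.7 * np.arange(10)) + rng.normal(0, 1.5, 10)
print(familiarized_estimate(scores))
```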

2021
Vol 41 (4)
pp. 476-484
Author(s):
Daniel Gallacher
Peter Kimani
Nigel Stallard

Previous work examined the suitability of relying on routine methods of model selection when extrapolating survival data in a health technology appraisal setting. Here we explore solutions to improve the reliability of restricted mean survival time (RMST) estimates from trial data by assessing model plausibility and implementing model averaging. We compare our previous methods of selecting a model for extrapolation using the Akaike information criterion (AIC) and Bayesian information criterion (BIC). Our methods of model averaging include equal weighting across models falling within established threshold ranges for AIC and BIC, and BIC-based weighted averages. We apply our plausibility assessment and implement model averaging on the output of our previous simulations, where 10,000 runs of 12 trial-based scenarios were examined. We demonstrate that removing implausible models from consideration reduces the mean squared error of the RMST estimate from each selection method and increases the percentage of RMST estimates within 10% of the RMST from the parameters of the sampling distribution. The averaging methods were superior to selecting a single optimal extrapolation, aside from some exponential scenarios where BIC already selected the exponential model. The averaging methods with wide criterion-based thresholds outperformed BIC-weighted averaging in the majority of scenarios. We conclude that model-averaging approaches should feature more widely in the appraisal of health technologies where extrapolation is influential and considerable uncertainty is present. Where data demonstrate complicated underlying hazard rates, funders should account for the additional uncertainty associated with these extrapolations in their decision making. Extended follow-up from trials should be encouraged and used to review prices of therapies to ensure a fair price is paid.
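
A minimal sketch of the two weighting schemes described above, assuming the parametric survival models have already been fitted elsewhere and supply their log-likelihoods, parameter counts and per-model RMST estimates (all numbers in the usage example are hypothetical):

```python
import numpy as np

def bic(loglik, n_params, n_obs):
    # Bayesian information criterion: k*ln(n) - 2*lnL
    return n_params * np.log(n_obs) - 2.0 * loglik

def averaged_rmst(rmsts, logliks, n_params, n_obs, threshold=None):
    """Model-averaged RMST.  threshold=None gives BIC-based weights
    w_i proportional to exp(-delta_BIC_i / 2); a numeric threshold instead
    gives equal weight to every model within that BIC distance of the best
    and drops the rest."""
    b = np.array([bic(l, p, n_obs) for l, p in zip(logliks, n_params)])
    delta = b - b.min()
    if threshold is None:
        w = np.exp(-0.5 * delta)                  # BIC weights
    else:
        w = (delta <= threshold).astype(float)    # plausible set, equal weights
    w /= w.sum()
    return float(np.dot(w, rmsts))

# hypothetical fits of exponential / Weibull / log-normal models:
print(averaged_rmst(rmsts=[24.1, 26.8, 27.5],
                    logliks=[-412.3, -408.9, -409.4],
                    n_params=[1, 2, 2], n_obs=300, threshold=6.0))
```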


Genetics
2001
Vol 157 (3)
pp. 1387-1395
Author(s):
Sudhir Kumar
Sudhindra R Gadagkar
Alan Filipski
Xun Gu

Genomic divergence between species can be quantified in terms of the number of chromosomal rearrangements that have occurred in the respective genomes following their divergence from a common ancestor. These rearrangements disrupt the structural similarity between genomes, with each rearrangement producing additional, albeit shorter, conserved segments. Here we propose a simple statistical approach, based on the distribution of the number of markers in contiguous sets of autosomal markers (CSAMs), to estimate the number of conserved segments. CSAM identification requires information on the relative locations of orthologous markers in one genome and only the chromosome number on which each marker resides in the other genome. We propose a simple mathematical model that can account for the effect of the nonuniformity of the breakpoints and markers on the observed distribution of the number of markers in different conserved segments. Computer simulations show that the number of CSAMs increases linearly with the number of chromosomal rearrangements under a variety of conditions. Using the CSAM approach, the estimated number of conserved segments between the human and mouse genomes is 529 ± 84, with a mean conserved segment length of 2.8 cM. This length is <40% of that currently accepted for the human and mouse genomes, and implies that the mouse and human genomes have diverged at a rate of ∼1.15 rearrangements per million years. By contrast, mouse and rat are diverging at a rate of only ∼0.74 rearrangements per million years.
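
The roughly linear growth of CSAM counts with rearrangements can be illustrated with a toy simulation. The block-transposition model below is a simplification for illustration, not the paper's simulator:

```python
import numpy as np

def count_csams(labels):
    """Number of maximal runs of markers sharing a chromosome label, i.e.
    contiguous sets of autosomal markers (CSAMs)."""
    return 1 + int(np.sum(labels[1:] != labels[:-1]))

def simulate(n_markers=1000, n_chrom=20, n_rearr=100, seed=0):
    """Toy model: genome B's chromosome assignment for each marker, laid out
    in genome A's marker order.  Each rearrangement transposes a random
    block, creating new segment boundaries."""
    rng = np.random.default_rng(seed)
    # start perfectly conserved: markers grouped by chromosome
    labels = np.repeat(np.arange(n_chrom), n_markers // n_chrom)
    counts = []
    for _ in range(n_rearr):
        i, j = sorted(rng.integers(0, len(labels), size=2))
        block = labels[i:j]
        rest = np.concatenate([labels[:i], labels[j:]])
        k = rng.integers(0, len(rest) + 1)
        labels = np.concatenate([rest[:k], block, rest[k:]])
        counts.append(count_csams(labels))
    return counts   # grows roughly linearly in the number of rearrangements

print(simulate()[::10])
```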


2005
Vol 127 (3)
pp. 679-684
Author(s):
S. Charles
O. Bonneau
J. Frêne

The characteristics of hydrostatic bearings can be influenced by the compensating device they use, for example a thin-walled orifice (diaphragm). The flow through the orifice is given by a law in which an ad hoc discharge coefficient appears, and, in order to guarantee the characteristics of the hydrostatic bearing, this coefficient must be calibrated. The aim of this work is to provide an accurate estimation of the discharge coefficient under specific conditions. To that end, an experimental bench was designed and a numerical model was developed. The results obtained by the experimental and numerical approaches were then compared with the values given in the literature. Finally, the influence of the discharge coefficient on the behavior of a thrust bearing is examined.
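
For reference, the thin-walled orifice law in question is the classical relation Q = Cd·A0·sqrt(2ΔP/ρ). The sketch below shows the forward law and a one-point calibration of Cd; the density value is an assumed placeholder, not a figure from the paper:

```python
import math

RHO = 870.0  # oil density, kg/m^3 -- assumed value for illustration

def orifice_flow(c_d, d_orifice, delta_p, rho=RHO):
    """Classical thin-walled orifice law: Q = Cd * A0 * sqrt(2*dP/rho),
    with A0 the orifice cross-section area."""
    area = math.pi * d_orifice ** 2 / 4.0
    return c_d * area * math.sqrt(2.0 * delta_p / rho)

def calibrate_cd(q_measured, d_orifice, delta_p, rho=RHO):
    """Back out the discharge coefficient from one measured operating point."""
    area = math.pi * d_orifice ** 2 / 4.0
    return q_measured / (area * math.sqrt(2.0 * delta_p / rho))

# e.g. a 1 mm orifice passing 0.02 L/s under 5 bar:
print(calibrate_cd(2e-5, 1e-3, 5e5))
```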


2020
Author(s):
Luis Valledor
Sara Guerrero
Lara García-Campa
Mónica Meijón

Bud maturation is a physiological process involving a set of morphophysiological changes that lead to the transition of growth patterns from young to mature. This transition defines tree growth and architecture and, in consequence, traits such as biomass production and wood quality. In Pinus pinaster, a conifer of great timber value, bud maturation is closely related to polycyclism (multiple growth periods per year). This process causes a lack of apical dominance and consequently increased branching, which reduces timber quality and value. However, despite its importance, little is known about bud maturation. In this work, proteomics and metabolomics were employed to study apical and basal sections of young and mature buds in P. pinaster. Proteins and metabolites in the samples were identified and quantified using (n)UPLC-LTQ-Orbitrap. The datasets were analyzed with an integrative statistical approach, which allowed the determination of the interactions between proteins and metabolites across the different bud sections and ages. Specific dynamics of proteins and metabolites such as HISTONE H3 and H4, RIBOSOMAL PROTEINS L15 and L12, CHAPERONIN TCP1, 14-3-3 PROTEIN GAMMA, gibberellins A1, A3 and A8, strigolactones and ABA, involved in epigenetic regulation, proteome remodeling, hormonal signaling and abiotic stress pathways, showed their potential roles during bud maturation. Candidates and pathways were validated using interaction databases and targeted transcriptomics. These results increase our understanding of the molecular processes behind bud maturation, a key step towards improving timber production and the management of natural pine forests in a future scenario of climate change. However, further studies using different P. pinaster populations with contrasting wood quality and stress tolerance are necessary to generalize the results.
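
One generic form such an integrative step can take, sketched below under the assumption of simple pairwise correlation screening (the actual pipeline in the study may differ), links protein and metabolite abundance profiles measured on the same samples:

```python
import numpy as np
from scipy.stats import spearmanr

def protein_metabolite_links(prot, metab, r_min=0.8, p_max=0.01):
    """Correlate every protein profile with every metabolite profile across
    the same samples and keep strong pairs as candidate interactions.
    prot and metab: 2-D arrays, rows = features, columns = samples."""
    links = []
    for i, p_row in enumerate(prot):
        for j, m_row in enumerate(metab):
            r, p = spearmanr(p_row, m_row)
            if abs(r) >= r_min and p <= p_max:
                links.append((i, j, r))   # candidate interaction + its sign
    return links
```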


Vestnik MGSU
2015
pp. 140-151
Author(s):
Aleksey Alekseevich Loktev
Daniil Alekseevich Loktev

In modern integrated monitoring systems and automated process-control systems, several essential algorithms and procedures obtain primary information about an object and its behavior. The primary information consists of characteristics of static and moving objects: distance, speed, position in space, etc. To obtain such information, the present work proposes using photo and video detectors that provide the system with high-resolution images of the object. Modern video-monitoring and automated control systems obtain primary data on the behaviour and state of the studied objects in several ways: a multisensor approach (stereovision), building an image perspective, the use of fixed cameras with additional lighting of the object, and a special calibration of the photo or video detector.

In the present paper the authors develop a method of determining distances to objects by analyzing a series of images with depth-from-defocus estimation. This method is based on the physical dependence of the distance to an object on its blur in the image for a given focal length or aperture of the lens. When the photodetector is focused on an object at a certain distance, other objects, both closer and farther than the focal point, form blur spots in the image whose size depends on the distance to them. Image blur can have different causes: motion of the object or the detector, the nature of the object's image boundaries, the object's aggregate state, as well as the settings of the photodetector (focal length, shutter speed and aperture).

When calculating the diameter of the blur spot it is assumed that blurring at a point occurs equally in all directions. For more precise estimation of the geometrical parameters describing the behavior and state of the object under study, a statistical approach is used to determine the individual parameters and estimate their accuracy. The statistical approach evaluates the deviation of the distance-versus-blur dependence from different types of standard functions (logarithmic, exponential, linear). It includes the method of least squares and the method of least modules (least absolute deviations), as well as Bayesian estimation, for which the risks must be minimized under different loss functions (quadratic, rectangular, linear) with known probability densities (normal, lognormal, Laplace and uniform distributions are considered).

As a result of the research it was established that the error variance of a function whose parameters are estimated by the least squares method is less than the error variance under the method of least modules; that is, the least squares estimate is more stable. The error estimates from the least squares method are also unbiased, whereas the mathematical expectation of the errors under the method of least modules is not zero, indicating biased error estimates. It is therefore advisable to use the least squares method when determining the parameters of the function.

To smooth out possible outliers, a Kalman filter is applied to the results of the initial observations, and the least squares and least modules estimates are evaluated for the functions after applying the filter with different coefficients.
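
A compact sketch of the two estimators being compared, fitted to a hypothetical distance-versus-blur calibration set with the logarithmic candidate function (all data values are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical calibration pairs: blur-spot diameter (px) vs known distance (m)
blur = np.array([2.1, 3.0, 4.2, 5.9, 8.4, 11.8])
dist = np.array([5.0, 3.5, 2.5, 1.8, 1.25, 0.9])

# logarithmic candidate model d = a + b*ln(blur), one of the standard forms
X = np.column_stack([np.ones_like(blur), np.log(blur)])

# least squares: closed-form solution of the normal equations
beta_ls, *_ = np.linalg.lstsq(X, dist, rcond=None)

# least modules (least absolute deviations): direct numeric minimization
obj = lambda b: np.abs(dist - X @ b).sum()
beta_lad = minimize(obj, x0=beta_ls, method="Nelder-Mead").x

print("least squares:", beta_ls, "least modules:", beta_lad)
```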


FLORESTA
2019
Vol 50 (1)
pp. 1063
Author(s):
João Everthon da Silva Ribeiro
Francisco Romário Andrade Figueiredo
Ester Dos Santos Coêlho
Walter Esfrain Pereira
Manoel Bandeira de Albuquerque

The determination of leaf area is of fundamental importance in studies involving ecological and ecophysiological aspects of forest species. The objective of this research was to adjust an equation to determine the leaf area of Ceiba glaziovii as a function of linear measurements of the leaves. Six hundred healthy leaf blades, with different shapes and sizes, were collected from different parent trees in the Mata do Pau-Ferro State Park, Areia, Paraíba state, Northeast Brazil. The maximum length (L), maximum width (W), product of length and width (L.W), and leaf area of the leaf blades were determined. The regression models used to construct equations were: linear, linear without intercept, quadratic, cubic, power and exponential. The criteria for choosing the best equation were the coefficient of determination (R²), Akaike information criterion (AIC), root mean square error (RMSE), Willmott concordance index (d) and BIAS index. All the proposed equations satisfactorily estimate the leaf area of C. glaziovii, given their high coefficients of determination (R² ≥ 0.851). The linear model without intercept, using the product of length and width (L.W), presented the best criteria for estimating the leaf area of the species, via the equation 0.4549·L.W.
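
A minimal sketch of the selected model, linear without intercept on the product L.W, with the closed-form least-squares slope and two of the reported criteria (R² and RMSE); variable names are our own:

```python
import numpy as np

def fit_no_intercept(lw, leaf_area):
    """Linear model without intercept, LA = b * (L*W): the form the study
    selected.  Closed-form least squares plus two of the fit criteria."""
    lw, leaf_area = np.asarray(lw, float), np.asarray(leaf_area, float)
    b = np.dot(lw, leaf_area) / np.dot(lw, lw)   # least-squares slope
    resid = leaf_area - b * lw
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((leaf_area - leaf_area.mean()) ** 2)
    return {"b": b,
            "R2": 1.0 - ss_res / ss_tot,
            "RMSE": np.sqrt(ss_res / len(lw))}

# usage: fit_no_intercept(length * width, measured_area)
# the study reports b = 0.4549 for C. glaziovii
```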


Complexity
2021
Vol 2021
pp. 1-8
Author(s):
Mohammad Reza Mahmoudi
Marzieh Rahmati
Zulkefli Mansor
Amirhosein Mosavi
Shahab S. Band

The productivity of researchers and the impact of their work are a preoccupation of universities, research funding agencies, and sometimes researchers themselves. The h-index (h) is the most popular of the various metrics used to measure these activities. This research presents a practical approach to modeling the h-index based on the total number of citations (NC) and the duration since the publication of the first article (D1). To determine the effect of each factor (NC and D1) on h, we applied a set of simple nonlinear regressions. The results indicated that both NC and D1 had a significant effect on h (p < 0.001). The coefficients of determination for these equations were 93.4% and 39.8%, respectively, verifying that the model based on NC had the better fit. Then, to capture the simultaneous effects of NC and D1 on h, multiple nonlinear regression was applied. The results indicated that NC and D1 had a significant effect on h (p < 0.001), and the coefficient of determination for this equation was 93.6%. Finally, to model and estimate the h-index as a function of NC and D1, multiple nonlinear quantile regression was used. The goodness of the fitted model was also assessed.
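
The sketch below illustrates simple and multiple nonlinear regression of h on NC and D1 using a power-law form; this functional family is our own assumption (the abstract does not state its exact equations), and the sample data are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical sample: citations, years since first paper, observed h-index
NC = np.array([120, 450, 900, 2300, 5100, 8800], dtype=float)
D1 = np.array([4, 7, 10, 15, 20, 25], dtype=float)
h  = np.array([6, 12, 17, 27, 41, 55], dtype=float)

def single(nc, a, b):
    # simple nonlinear regression on one factor: h = a * NC^b
    return a * nc ** b

def multiple(x, a, b, c):
    # joint model of both factors: h = a * NC^b * D1^c
    nc, d1 = x
    return a * nc ** b * d1 ** c

p1, _ = curve_fit(single, NC, h, p0=[1.0, 0.5])
p2, _ = curve_fit(multiple, (NC, D1), h, p0=[1.0, 0.5, 0.1])
print("single-factor fit:", p1, "two-factor fit:", p2)
```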


Author(s):  
Craig L. Symonds

‘An ad hoc navy: the Revolutionary War (1775–1783)’ describes the Patriots’ response to the British Royal Navy strongholds in Boston and New York and the role of armed vessels during the Revolutionary War. It begins with George Washington’s attempts to threaten the British supply line using boats. The Continental Navy was founded on October 13, 1775, but the new program could hardly challenge the Royal Navy. With the exception of John Paul Jones, the Continental Navy proved mostly disappointing. The United States won its independence largely because the determination of the Patriot forces outlasted the British willingness to fight—and to pay for—a war three thousand miles away.

