Transforming Parameter Estimates to a Specified Coordinate System

Author(s):  
Mark D. Reckase
1975 ◽  
Vol 26 ◽  
pp. 87-92
Author(s):  
P. L. Bender

Abstract. Five important geodynamical quantities which are closely linked are: 1) motions of points on the Earth’s surface; 2) polar motion; 3) changes in UT1-UTC; 4) nutation; and 5) motion of the geocenter. For each of these we expect to achieve measurements in the near future which have an accuracy of 1 to 3 cm or 0.3 to 1 milliarcsec.

From a metrological point of view, one can say simply: “Measure each quantity against whichever coordinate system you can make the most accurate measurements with respect to.” I believe that this statement should serve as a guiding principle for the recommendations of the colloquium. However, it is also important that the coordinate systems help to provide a clear separation between the different phenomena of interest and correspond closely to the conceptual definitions in terms of which geophysicists think about the phenomena.

In any discussion of angular motion in space, both a “body-fixed” system and a “space-fixed” system are used. Some relevant types of coordinate systems, reference directions, or reference points which have been considered are: 1) celestial systems based on optical star catalogs, distant galaxies, radio source catalogs, or the Moon and inner planets; 2) the Earth’s axis of rotation, which defines a line through the Earth as well as a celestial reference direction; 3) the geocenter; and 4) “quasi-Earth-fixed” coordinate systems.

When a geophysicist discusses UT1 and polar motion, he usually is thinking of the angular motion of the main part of the mantle with respect to an inertial frame and to the direction of the spin axis. Since the velocities of relative motion in most of the mantle are expected to be extremely small, even if “substantial” deep convection is occurring, the conceptual “quasi-Earth-fixed” reference frame seems well defined. Methods for realizing a close approximation to this frame fortunately exist. Hopefully, this colloquium will recommend procedures for establishing and maintaining such a system for use in geodynamics. Motion of points on the Earth’s surface and of the geocenter can be measured against such a system with the full accuracy of the new techniques.

The situation with respect to celestial reference frames is different. The various measurement techniques give changes in the orientation of the Earth relative to different systems, so we would like to know the relative motions of the systems in order to compare the results. However, there does not appear to be a need for defining any new system. Subjective figures of merit for the various systems depend on both the accuracy with which measurements can be made against them and the degree to which they can be related to inertial systems.

The main coordinate system requirement related to the five geodynamic quantities discussed in this talk is thus the establishment and maintenance of a “quasi-Earth-fixed” coordinate system which closely approximates the motion of the main part of the mantle. Changes in the orientation of this system with respect to the various celestial systems can be determined by both the new and the conventional techniques, provided that some knowledge of changes in the local vertical is available. Changes in the axis of rotation and in the geocenter with respect to this system can also be obtained, as well as measurements of nutation.
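To make the relation between a “quasi-Earth-fixed” frame and a celestial frame concrete, the sketch below composes the familiar rotation sequence: a polar-motion (wobble) rotation followed by the Earth-rotation angle. This is a deliberately simplified Python illustration, not a procedure from the talk: precession, nutation, and small correction terms are omitted, and all angles are hypothetical inputs.

```python
import numpy as np

def r1(a):
    """Frame rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def r2(a):
    """Frame rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

def r3(a):
    """Frame rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def terrestrial_to_celestial(x_p, y_p, gast):
    """First-order rotation from a quasi-Earth-fixed frame to a
    celestial frame: polar motion (x_p, y_p) followed by the Earth
    rotation angle gast (which depends on UT1). Precession and
    nutation are deliberately omitted in this sketch."""
    wobble = r2(x_p) @ r1(y_p)   # spin axis vs. Earth-fixed pole
    spin = r3(-gast)             # diurnal rotation of the Earth
    return spin @ wobble
```

The composition makes the abstract’s point tangible: measured changes in x_p, y_p, and UT1 are exactly the quantities that tie the “quasi-Earth-fixed” system to a celestial one.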


1975 ◽  
Vol 26 ◽  
pp. 21-26

An ideal definition of a reference coordinate system should meet the following general requirements:

1. It should be as conceptually simple as possible, so that its philosophy is well understood by the users.
2. It should imply as few physical assumptions as possible. Wherever they are necessary, such assumptions should be of a very general character and, in particular, should not depend upon detailed astronomical and geophysical theories.
3. It should suggest a materialization that is dynamically stable and accessible to observations with the required accuracy.


1999 ◽  
Vol 15 (2) ◽  
pp. 91-98 ◽  
Author(s):  
Lutz F. Hornke

Summary: Item parameters for several hundred items were estimated based on empirical data from several thousand subjects. Estimates from the one-parameter (1PL) and two-parameter (2PL) logistic models were evaluated. However, model-fit analyses showed that only a subset of the items fit sufficiently well; these were assembled into well-fitting item banks. In several simulation studies, 5000 simulated responses were generated along with person parameters in accordance with a computerized adaptive testing (CAT) procedure. A general reliability of .80 or, equivalently, a standard error of measurement of .44 was used as a stopping rule to end CAT testing. We also recorded how often each item was used across all simulees. Person-parameter estimates based on CAT correlated higher than .90 with the simulated true values. For all 1PL-fitting item banks, most simulees needed more than 20 but fewer than 30 items to reach the pre-set level of measurement error. In contrast, testing based on item banks that complied with the 2PL showed that, on average, only 10 items were sufficient to end testing at the same measurement-error level. Both results clearly demonstrate the precision and economy of computerized adaptive testing. Empirical evaluations from everyday use will show whether these trends hold up in practice. If so, CAT will become possible and reasonable with some 150 well-calibrated 2PL items.
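As a rough illustration of the adaptive procedure the summary describes, the following Python sketch simulates a 2PL-based CAT with maximum-information item selection and a stopping rule of SEM ≤ .44 (which corresponds to a reliability of about .80 when abilities are standard normal). The item bank, prior, and grid-based ability estimator are assumptions made for the example, not Hornke’s actual calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2PL item bank: discriminations a, difficulties b.
n_items = 150
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def run_cat(true_theta, sem_target=0.44, max_items=60):
    """Adaptive test: administer the most informative unused item,
    update theta via a posterior on a grid, and stop once the
    standard error of measurement falls below sem_target."""
    used = []
    grid = np.linspace(-4, 4, 161)
    post = np.exp(-0.5 * grid**2)          # N(0, 1) prior on theta
    theta_hat, sem = 0.0, np.inf
    while len(used) < max_items and sem > sem_target:
        info = item_info(theta_hat, a, b)
        info[used] = -np.inf               # do not reuse items
        j = int(np.argmax(info))
        u = rng.random() < p_correct(true_theta, a[j], b[j])
        used.append(j)
        pj = p_correct(grid, a[j], b[j])   # posterior update
        post = post * (pj if u else 1.0 - pj)
        post = post / post.sum()
        theta_hat = float(np.sum(grid * post))
        sem = float(np.sqrt(np.sum((grid - theta_hat)**2 * post)))
    return theta_hat, sem, len(used)

print(run_cat(true_theta=0.5))             # (estimate, SEM, items used)
```

Running this for many simulated examinees reproduces the qualitative pattern reported above: highly discriminating 2PL banks reach the SEM target with markedly fewer items.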


Methodology ◽  
2005 ◽  
Vol 1 (2) ◽  
pp. 81-85 ◽  
Author(s):  
Stefan C. Schmukle ◽  
Jochen Hardt

Abstract. Incremental fit indices (IFIs) are regularly used when assessing the fit of structural equation models. IFIs are based on comparing the fit of a target model with that of a null model. For maximum-likelihood estimation, IFIs are usually computed from the χ2 statistic of the maximum-likelihood fitting function (ML-χ2). However, LISREL recently changed the computation of IFIs: since version 8.52, the IFIs reported by LISREL are based on the χ2 statistic of the reweighted least squares fitting function (RLS-χ2). Although both functions lead to the same maximum-likelihood parameter estimates, the two χ2 statistics take different values. Because these differences are especially large for null models, IFIs are particularly affected. Consequently, RLS-χ2-based IFIs combined with conventional cut-off values established for ML-χ2-based IFIs may lead to the erroneous acceptance of models. We demonstrate this point with a confirmatory factor analysis in a sample of 2449 subjects.
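The mechanism the abstract describes can be seen directly in the standard CFI formula, CFI = 1 − max(χ²_t − df_t, 0) / max(χ²_0 − df_0, χ²_t − df_t, 0): holding the target model fixed, a larger null-model χ² mechanically pushes the index upward. The numbers in this Python sketch are invented for illustration and are not taken from the article’s analysis.

```python
def cfi(chi2_t, df_t, chi2_0, df_0):
    """Comparative Fit Index from target- and null-model chi-squares."""
    d_t = max(chi2_t - df_t, 0.0)
    d_0 = max(chi2_0 - df_0, d_t, 0.0)
    return 1.0 - d_t / d_0 if d_0 > 0 else 1.0

# Made-up numbers: the same target model evaluated against a
# null-model chi-square from two different fitting functions.
print(cfi(chi2_t=300, df_t=50, chi2_0=2000, df_0=66))  # ~0.871
print(cfi(chi2_t=300, df_t=50, chi2_0=4000, df_0=66))  # ~0.936
```

With the larger null-model χ², the identical target model crosses the conventional .90 cutoff, which is exactly the kind of wrong acceptance the authors warn about.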


Methodology ◽  
2015 ◽  
Vol 11 (3) ◽  
pp. 89-99 ◽  
Author(s):  
Leslie Rutkowski ◽  
Yan Zhou

Abstract. Given a consistent interest in comparing achievement across sub-populations in international assessments such as TIMSS, PIRLS, and PISA, it is critical that sub-population achievement is estimated reliably and with sufficient precision. To that end, we systematically examine the limitations of the estimation methods currently used by these programs. Using a simulation study along with empirical results from the 2007 cycle of TIMSS, we show that a combination of missing and misclassified data in the conditioning model induces biases in sub-population achievement estimates, the magnitude of which can be readily explained by data quality. Importantly, estimated biases in sub-population achievement are limited to the conditioning variable with poor-quality data, while other sub-population achievement estimates are unaffected. Findings are generally in line with theory on missing and error-prone covariates. The current research adds to a small body of literature that has noted some of the limitations of sub-population estimation.
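The attenuation mechanism at work is easy to see in miniature. The Python sketch below is a toy setup with made-up numbers, not the TIMSS conditioning model: it misclassifies an increasing share of a binary grouping variable and shows how the estimated sub-population gap shrinks even though the underlying scores are untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: two sub-populations whose true mean achievement
# differs by 0.5 SD; a fraction of group labels is recorded wrongly.
n = 100_000
g = rng.random(n) < 0.3                   # true membership (30% in group 1)
y = rng.normal(0.0, 1.0, n) + 0.5 * g     # achievement scores

for err in (0.0, 0.1, 0.2):               # misclassification rates
    flip = rng.random(n) < err
    g_obs = g ^ flip                       # observed, error-prone label
    gap = y[g_obs].mean() - y[~g_obs].mean()
    print(f"error rate {err:.0%}: estimated gap {gap:.3f} (true 0.500)")
```

As the error rate rises, the estimated gap attenuates toward zero, consistent with the theory on error-prone covariates cited above, while estimates that do not involve the corrupted label are unaffected.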


Marketing ZFP ◽  
2019 ◽  
Vol 41 (4) ◽  
pp. 21-32
Author(s):  
Dirk Temme ◽  
Sarah Jensen

Missing values are ubiquitous in empirical marketing research. If missing data are not dealt with properly, the result can be a loss of statistical power and distorted parameter estimates. While traditional approaches to handling missing data (e.g., listwise deletion) are still widely used, researchers can nowadays choose among various advanced techniques such as multiple imputation or full-information maximum likelihood estimation. Given the available software, applying these modern missing-data methods no longer poses a major obstacle. Still, their application requires a sound understanding of their prerequisites and limitations, as well as a deeper understanding of the processes that led to the missing values in an empirical study. This article is Part 1: it first introduces Rubin’s classical definition of missing-data mechanisms and an alternative, variable-based taxonomy that provides a graphical representation. Second, it presents a selection of visualization tools available in different R packages for describing and exploring missing-data structures.
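Rubin’s three mechanisms are easiest to grasp when generated directly. In this Python sketch (hypothetical data; the article itself works with R packages), y is made missing completely at random (MCAR), as a function of the observed x (MAR), and as a function of y itself (MNAR); comparing observed-case means previews why listwise deletion can distort estimates under MAR and MNAR.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two correlated variables: x is always observed, y may be missing.
n = 10_000
x = rng.normal(0.0, 1.0, n)
y = 0.6 * x + rng.normal(0.0, 0.8, n)

# MCAR: missingness is unrelated to any data.
mcar = rng.random(n) < 0.3

# MAR: missingness in y depends only on the observed x.
mar = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * x))

# MNAR: missingness in y depends on the unobserved y itself.
mnar = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * y))

for name, m in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(f"{name}: mean of observed y = {y[~m].mean():+.3f} "
          f"(complete-data mean {y.mean():+.3f})")
```

Only the MCAR column leaves the simple observed-case mean unbiased; under MAR the bias is repairable by conditioning on x (the basis of multiple imputation and FIML), while under MNAR it is not without further assumptions.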

