Differences in parameter estimates derived from various methods for the ORYZA (v3) Model

2022, Vol 21 (2), pp. 375-388
Author(s): Jun-wei TAN, Qing-yun DUAN, Wei GONG, Zhen-hua DI
1999, Vol 15 (2), pp. 91-98
Author(s): Lutz F. Hornke

Summary: Item parameters for several hundred items were estimated from empirical data on several thousand subjects. Estimates under the logistic one-parameter (1PL) and two-parameter (2PL) models were evaluated. Model fit showed that only a subset of items complied sufficiently; these well-fitting items were assembled into item banks. In several simulation studies, 5,000 simulated response records were generated, together with person parameters, according to a computerized adaptive testing (CAT) procedure. A general reliability of .80, corresponding to a standard error of measurement of .44, served as the stopping rule for CAT testing. We also recorded how often each item was used across all simulees. Person-parameter estimates based on CAT correlated higher than .90 with the simulated true values. For the 1PL-fitting item banks, most simulees needed more than 20 but fewer than 30 items to reach the pre-set level of measurement error. With item banks fitting the 2PL, however, an average of only 10 items was sufficient to end testing at the same error level. Both results clearly demonstrate the precision and economy of computerized adaptive testing. Empirical evaluations in everyday use will show whether these trends hold up in practice; if so, CAT becomes feasible and reasonable with some 150 well-calibrated 2PL items.
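To make the stopping rule concrete, here is a minimal Python sketch of such a CAT loop (not the authors' implementation): it administers maximum-information items from a hypothetical 2PL bank and stops once the standard error of the EAP ability estimate falls below .44. All item parameters, function names, and bank sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b, a=1.0):
    """2PL response probability; a = 1 reduces to the 1PL (Rasch) model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, b, a=1.0):
    p = p_correct(theta, b, a)
    return a**2 * p * (1.0 - p)

def simulate_cat(true_theta, bank_b, bank_a, sem_stop=0.44, max_items=60):
    """Administer maximum-information items until the posterior SEM
    drops below sem_stop (the stopping rule described in the summary)."""
    grid = np.linspace(-4, 4, 161)          # quadrature grid for EAP
    posterior = np.exp(-0.5 * grid**2)      # standard-normal prior
    available = list(range(len(bank_b)))
    used = []
    theta_hat, sem = 0.0, np.inf
    while available and len(used) < max_items and sem > sem_stop:
        # pick the unused item with maximum Fisher information at theta_hat
        infos = [item_information(theta_hat, bank_b[j], bank_a[j]) for j in available]
        j = available.pop(int(np.argmax(infos)))
        used.append(j)
        # simulate a response and update the posterior
        u = rng.random() < p_correct(true_theta, bank_b[j], bank_a[j])
        p = p_correct(grid, bank_b[j], bank_a[j])
        posterior *= p if u else (1.0 - p)
        posterior /= posterior.sum()
        theta_hat = float((grid * posterior).sum())
        sem = float(np.sqrt(((grid - theta_hat)**2 * posterior).sum()))
    return theta_hat, sem, len(used)

# hypothetical 150-item 2PL bank, the size suggested at the end of the summary
b = rng.normal(0.0, 1.0, 150)
a = rng.lognormal(0.0, 0.3, 150)
theta_hat, sem, n_items = simulate_cat(true_theta=0.5, bank_b=b, bank_a=a)
print(f"theta_hat={theta_hat:.2f}, SEM={sem:.2f}, items used={n_items}")
```

Because the 2PL's discrimination parameters let the selection step favor highly informative items, runs of this kind typically terminate with noticeably fewer items than a comparable 1PL bank, which is the pattern the summary reports.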


Methodology, 2005, Vol 1 (2), pp. 81-85
Author(s): Stefan C. Schmukle, Jochen Hardt

Abstract. Incremental fit indices (IFIs) are regularly used to assess the fit of structural equation models. IFIs are based on comparing the fit of a target model with that of a null model. For maximum-likelihood estimation, IFIs are usually computed from the χ² statistic of the maximum-likelihood fitting function (ML-χ²). However, LISREL recently changed how it computes IFIs: since version 8.52, the IFIs reported by LISREL are based on the χ² statistic of the reweighted least squares fitting function (RLS-χ²). Although both functions lead to the same maximum-likelihood parameter estimates, the two χ² statistics take different values, and because these differences are especially large for null models, IFIs are particularly affected. Consequently, RLS-χ²-based IFIs combined with the conventional cut-off values established for ML-χ²-based IFIs may lead to the erroneous acceptance of models. We demonstrate this point with a confirmatory factor analysis in a sample of 2,449 subjects.
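The mechanics are easy to see in code. Below is a minimal Python sketch of one common IFI, the comparative fit index (CFI), evaluated with hypothetical χ² values chosen purely to illustrate the effect: holding the target model fixed, a larger null-model χ² (as RLS-χ² tends to produce) pushes the index toward 1, so the same cut-off admits a worse model.

```python
def cfi(chi2_t, df_t, chi2_0, df_0):
    """Comparative fit index:
    CFI = 1 - max(chi2_t - df_t, 0) / max(chi2_0 - df_0, chi2_t - df_t, 0)."""
    d_t = max(chi2_t - df_t, 0.0)
    d_0 = max(chi2_0 - df_0, d_t, 0.0)
    return 1.0 - d_t / d_0 if d_0 > 0 else 1.0

# hypothetical values: identical target-model fit, two different
# null-model chi-square statistics (the divergence the abstract describes)
print(cfi(chi2_t=250.0, df_t=100, chi2_0=2000.0, df_0=120))  # ~0.92
print(cfi(chi2_t=250.0, df_t=100, chi2_0=8000.0, df_0=120))  # ~0.98
```

With a conventional cut-off of .95, the first (smaller null-model χ²) would reject the model while the second would accept it, even though the target model is identical in both cases.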


Methodology, 2015, Vol 11 (3), pp. 89-99
Author(s): Leslie Rutkowski, Yan Zhou

Abstract. Given a consistent interest in comparing achievement across sub-populations in international assessments such as TIMSS, PIRLS, and PISA, it is critical that sub-population achievement be estimated reliably and with sufficient precision. To that end, we systematically examine the limitations of the estimation methods currently used by these programs. Using a simulation study along with empirical results from the 2007 cycle of TIMSS, we show that a combination of missing and misclassified data in the conditioning model induces biases in sub-population achievement estimates, whose magnitude and direction can be readily explained by data quality. Importantly, the estimated biases are limited to the conditioning variable with poor-quality data; other sub-population achievement estimates are unaffected. The findings are generally in line with theory on missing and error-prone covariates. The current research adds to a small body of literature that has noted some of the limitations of sub-population estimation.
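The core attenuation effect is simple to reproduce. Here is a minimal Python sketch, not the authors' simulation design, with invented group means and a 15% misclassification rate: randomly flipping labels on the conditioning variable pulls both sub-population means toward the grand mean and shrinks the apparent achievement gap.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# hypothetical sub-populations with a true achievement gap of 40 points
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
score = rng.normal(500.0 + 40.0 * group, 100.0)  # true means: 500 vs 540

# misclassify 15% of the conditioning variable at random
flip = rng.random(n) < 0.15
observed = np.where(flip, 1 - group, group)

for g in (0, 1):
    true_mean = score[group == g].mean()
    obs_mean = score[observed == g].mean()
    print(f"group {g}: true mean {true_mean:.1f}, observed-label mean {obs_mean:.1f}")
# the misclassified labels attenuate the estimated gap, while estimates
# conditioned on a correctly measured grouping variable would be unaffected
```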


Marketing ZFP, 2019, Vol 41 (4), pp. 21-32
Author(s): Dirk Temme, Sarah Jensen

Missing values are ubiquitous in empirical marketing research. If missing data are not dealt with properly, the result can be a loss of statistical power and distorted parameter estimates. While traditional approaches to handling missing data (e.g., listwise deletion) are still widely used, researchers can nowadays choose among various advanced techniques such as multiple imputation or full-information maximum likelihood estimation. Given the available software, using these modern missing-data methods poses no major obstacle; their application does, however, require a sound understanding of the methods' prerequisites and limitations, as well as a deeper understanding of the processes that led to the missing values in an empirical study. This article, Part 1, first introduces Rubin's classical definition of missing-data mechanisms and an alternative, variable-based taxonomy that provides a graphical representation. Second, it presents a selection of visualization tools, available in different R packages, for describing and exploring missing-data structures.
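Rubin's three mechanisms are easy to state in code. The following minimal sketch (in Python rather than the R packages the article discusses; all variable names are invented) generates each mechanism and shows why listwise deletion only stays unbiased under MCAR:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.normal(size=n)                        # fully observed covariate
y = 0.6 * x + rng.normal(scale=0.8, size=n)   # variable that will have gaps

def apply_mask(values, mask):
    out = values.copy()
    out[mask] = np.nan
    return out

# MCAR: missingness is independent of both x and y
mcar = apply_mask(y, rng.random(n) < 0.3)

# MAR: missingness depends only on the observed covariate x
mar = apply_mask(y, rng.random(n) < 1 / (1 + np.exp(-2 * x)))

# MNAR: missingness depends on the unobserved value of y itself
mnar = apply_mask(y, rng.random(n) < 1 / (1 + np.exp(-2 * y)))

for name, v in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(f"{name}: share missing {np.isnan(v).mean():.2f}, "
          f"mean of observed y {np.nanmean(v):+.3f} (true mean ~0)")
# under MCAR the complete-case mean of y is unbiased; under MAR and MNAR
# it is biased, which is the failure mode of listwise deletion
```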


2020, pp. 34-43
Author(s): N. R. Memetov, A. V. Gerasimova, A. E. Kucherova, ...

The paper evaluates the effectiveness of graphene nanostructures for removing lead(II) ions from water in order to improve the ecological condition of water bodies. The mechanisms and characteristic parameters of the adsorption process were analyzed using empirical isotherm models at temperatures of 298, 303, 313, and 323 K; ranked by correlation coefficient, the models order as Langmuir (0.99) > Temkin (0.97) > Dubinin–Radushkevich (0.90). The maximum adsorption capacity of the material lies in the range of 230 to 260 mg/g. Estimates of the thermodynamic parameters of the equilibrium indicate that the process is spontaneous and endothermic and that the structure of the phenol-formaldehyde-resin-modified graphene changes during the adsorption of lead(II) ions, increasing the disorder of the system.
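Since the Langmuir model gave the best fit, a minimal Python sketch of how such an isotherm fit is typically performed may be useful; the equilibrium data below are invented for illustration and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# hypothetical equilibrium data: concentration (mg/L) vs uptake (mg/g)
c_e = np.array([5.0, 10, 25, 50, 100, 200, 400])
q_e = np.array([45.0, 80, 140, 190, 225, 245, 252])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=(250.0, 0.01))
ss_res = np.sum((q_e - langmuir(c_e, q_max, k_l)) ** 2)
ss_tot = np.sum((q_e - q_e.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"q_max = {q_max:.0f} mg/g, K_L = {k_l:.3f} L/mg, R^2 = {r2:.3f}")
```

The fitted q_max is the plateau uptake, the quantity the paper reports as 230-260 mg/g; repeating the fit at each temperature yields the K_L values from which thermodynamic parameters are estimated.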


2020
Author(s): Zhaokai Dong, Daniel Bain, Murat Akcakaya, Carla Ng

A high-quality parameter set is essential for reliable stormwater models, and model performance can be improved by optimizing initial parameter estimates. Parameter sensitivity analysis is a robust way to distinguish the influence of parameters on model output and to efficiently target the most important parameters for modification. This study evaluates the efficient construction of a sewershed model using relatively low-resolution data (e.g., a 30 m DEM) and explores model sensitivity to parameters and regional characteristics using the EPA's Storm Water Management Model (SWMM). A SWMM model was developed for a sewershed in the City of Pittsburgh, where stormwater management is a critical concern. We assumed uniform or log-normal distributions for parameters and used Monte Carlo simulations to explore and rank the influence of parameters on predicted surface runoff, peak flow, maximum pipe flow, and model performance, as measured by the Nash–Sutcliffe efficiency. By using the Thiessen polygon approach for sub-catchment delineation, we substantially simplified the parameterization of areas and hydraulic parameters. Despite this simplification, our approach agreed well with monitored pipe flow (Nash–Sutcliffe efficiency: 0.41-0.85). Total runoff and peak flow were very sensitive to the model discretization: the size of the polygons (modeled sub-catchment areas) and imperviousness had the most influence on both outputs. Imperviousness, infiltration, and Manning's roughness (in the pervious area) contributed strongly to the Nash–Sutcliffe efficiency (70%), as did pipe geometric parameters (92%). Parameter rankings for pairs of model elements were compared using kappa statistics to identify generalities. Within our relatively large (9.7 km²) sewershed, optimizing parameters for the highly impervious (>50%) areas and for the larger pipes lower in the network contributed most to improving Nash–Sutcliffe efficiency. The geometric parameters influence the water-quantity distribution and flow conveyance, while imperviousness determines the sub-catchment subdivision and influences surface-water generation. The Thiessen polygon approach can simplify the construction of large-scale urban stormwater models, but the model is sensitive to the sewer network configuration, and care must be taken when parameterizing areas (polygons) with heterogeneous land uses.
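The performance metric used throughout the study is straightforward to compute. A minimal Python sketch of the Nash–Sutcliffe efficiency, with invented flow values purely for illustration:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of observations.
    1.0 is a perfect fit; values <= 0 mean the model predicts no better
    than simply using the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

# hypothetical monitored vs modeled pipe flow (m^3/s), illustrative only
obs = [0.12, 0.45, 1.30, 0.90, 0.40, 0.18]
sim = [0.10, 0.50, 1.10, 0.95, 0.35, 0.20]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```

In a Monte Carlo sensitivity analysis of the kind described above, this metric is recomputed for each sampled parameter set, and the parameters whose variation moves the NSE most are ranked as most influential.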


1971, Vol 6 (1), pp. 249-272
Author(s): P.B. Melynk, J.D. Norman, A.W. Wilson

Abstract. It is postulated that the mixing conditions in a flow-through reactor can be characterized as completely mixed, as plug flow, or as some network of completely mixed and plug-flow component vessels. A frequency-response technique is used to obtain an experimental Bode plot for arbitrarily mixed vessels. The interpretation of the Bode plot is discussed and, in light of this interpretation, a network of plug-flow and completely mixed components is specified as a flow model. A Rosenbrock search routine is used to improve the parameter estimates of the model. To verify the model, a second-order reaction was run through the vessel and the experimentally measured conversion was compared with that predicted by the model. It is shown that the modeling technique, in addition to describing the mixing in the system, indicates inactive volume and measures the extent of any channeling or short-circuiting in the reactor.
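The frequency response of such a network has a standard closed form: n equal completely mixed tanks in series give G(s) = 1/(1 + τs)^n, and a plug-flow element contributes a pure time delay e^(-sT_d). A minimal Python sketch evaluating the Bode magnitude and phase for a hypothetical network (the tank count, time constant, and delay are invented, not the paper's fitted values):

```python
import numpy as np

def tanks_in_series_response(omega, n_tanks, tau, plug_delay=0.0):
    """Frequency response G(j*omega) of n equal completely mixed tanks
    in series followed by a plug-flow (pure time delay) element:
    G(s) = exp(-s * T_d) / (1 + tau * s)**n."""
    s = 1j * omega
    g = np.exp(-s * plug_delay) / (1.0 + tau * s) ** n_tanks
    # note: np.angle wraps phase to (-180, 180] degrees
    return 20 * np.log10(np.abs(g)), np.degrees(np.angle(g))

# hypothetical network: 3 tanks (tau = 2 min each) plus 1 min of plug flow
omega = np.logspace(-2, 1, 5)  # rad/min
mag_db, phase_deg = tanks_in_series_response(omega, n_tanks=3, tau=2.0, plug_delay=1.0)
for w, m, p in zip(omega, mag_db, phase_deg):
    print(f"omega={w:7.3f} rad/min  |G|={m:7.2f} dB  phase={p:8.1f} deg")
```

Fitting a model of this form to the experimental Bode plot, for instance by letting a search routine such as Rosenbrock's adjust n, τ, and T_d, is the parameter-estimation step the abstract describes; a fitted total volume smaller than the vessel volume indicates inactive volume or short-circuiting.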

