The Decomposition of Between and Within Effects in Contextual Models

2021 ◽  
Vol 12 ◽  
Author(s):  
Siwen Guo ◽  
Richard T. Houang ◽  
William H. Schmidt

In contextual studies, group compositions are often extracted from individual data in the sample in order to estimate group compositional effects [e.g., the school socioeconomic status (SES) effect] while controlling for interindividual differences in multilevel models. Because the same variable is used at both the group level and the individual level, an appropriate decomposition of between and within effects is key to providing a clearer picture of these organizational and individual processes. The current study developed a new approach with a within-group finite population correction (fpc). Its performance in decomposing between and within effects was compared with that of the manifest and latent aggregation approaches. Under a moderate within-group sampling ratio, the between-effect estimates from the new approach showed less bias and higher observed coverage rates than those from the manifest and latent aggregation approaches. A real data application was also used to illustrate the three analysis approaches.
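A minimal sketch of the manifest decomposition discussed above, on illustrative simulated data (not the authors' implementation): the observed group mean of x carries the between effect, and the deviation from that mean carries the within effect. With a finite within-group sample, the between estimate is attenuated, which is the bias the fpc approach targets.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per = 100, 20
group = np.repeat(np.arange(n_groups), n_per)

u = rng.normal(0.0, 1.0, n_groups)                 # latent group means of x
x = u[group] + rng.normal(0.0, 1.0, n_groups * n_per)
# True between effect 1.5, true within effect 0.5.
y = 1.5 * u[group] + 0.5 * (x - u[group]) + rng.normal(0.0, 0.5, n_groups * n_per)

# Manifest aggregation: use the observed sample group mean of x.
xbar = np.array([x[group == g].mean() for g in range(n_groups)])[group]
X = np.column_stack([np.ones_like(x), xbar, x - xbar])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"between effect ~ {beta[1]:.2f}, within effect ~ {beta[2]:.2f}")
```

With 20 observations per group, the between estimate lands slightly below the true 1.5 because the sample group mean is a noisy proxy for the latent group mean; the within estimate is essentially unbiased.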

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259960
Author(s):  
Sabz Ali ◽  
Said Ali Shah ◽  
Seema Zubair ◽  
Sundas Hussain

Multilevel models are widely used in organizational research, educational research, epidemiology, psychology, biology, and the medical fields. In this paper, we identify situations in which bootstrap procedures based on the Minimum Norm Quadratic Unbiased Estimator (MINQUE) can be considerably more useful than Restricted Maximum Likelihood (REML) in multilevel linear regression models. In our simulation study, the bootstrap by means of MINQUE is superior to REML in conditions where normality does not hold. Moreover, a real data application also supports our findings in terms of the accuracy of estimates and their standard errors.
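A minimal sketch of the bootstrap machinery involved, on simulated two-level data. A one-way ANOVA moment estimator of the random-intercept variance stands in for MINQUE (which solves a quadratic system and is not reproduced here); the key ingredient is the nonparametric cluster bootstrap, which resamples whole groups and makes no normality assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_per = 30, 10
# Balanced two-level data: random intercepts (variance 1) plus unit noise.
y = rng.normal(0.0, 1.0, (n_groups, n_per)) + rng.normal(0.0, 1.0, n_groups)[:, None]

def between_var(y):
    # Moment (ANOVA-type) estimator of the random-intercept variance,
    # a simple stand-in for MINQUE in this illustration.
    msb = n_per * y.mean(axis=1).var(ddof=1)
    msw = y.var(axis=1, ddof=1).mean()
    return max((msb - msw) / n_per, 0.0)

# Nonparametric cluster bootstrap: resample whole groups with replacement.
boot = [between_var(y[rng.integers(0, n_groups, n_groups)]) for _ in range(500)]
print(f"sigma_u^2 ~ {between_var(y):.2f}, bootstrap SE ~ {np.std(boot):.2f}")
```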


2019 ◽  
Author(s):  
Rumen Manolov

The lack of consensus regarding the most appropriate analytical techniques for single-case experimental design data requires justifying the choice of any specific analytical option. The current text reviews some of the arguments, provided by methodologists and statisticians, in favor of several analytical techniques. Additionally, a small-scale literature review is performed in order to explore whether and how applied researchers justify the analytical choices that they make. The review suggests that certain practices are not sufficiently explained. In order to improve reporting of data-analytic decisions, it is proposed to choose and justify the analytical approach prior to gathering the data. As a possible justification for the data analysis plan, we propose using the expected data pattern as a basis (specifically, expectations about an improving baseline trend and about the immediate or progressive nature of the intervention effect). Although there are multiple alternatives for single-case data analysis, the current text focuses on visual analysis and multilevel models and illustrates an application of these analytical options with real data. User-friendly software is also developed.
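A minimal sketch of the kind of model the expected data pattern motivates, on hypothetical AB single-case data (10 baseline, 10 intervention sessions): a piecewise regression separating baseline trend, immediate level change, and post-onset slope change. This is one common single-case specification, not the paper's software.

```python
import numpy as np

# Hypothetical AB single-case data: 10 baseline and 10 intervention sessions.
t = np.arange(20)
phase = (t >= 10).astype(float)        # 0 = baseline (A), 1 = intervention (B)
rng = np.random.default_rng(2)
y = 2.0 + 0.1 * t + 3.0 * phase + rng.normal(0, 0.5, 20)

# Design: intercept, baseline trend, immediate level change, slope change.
X = np.column_stack([np.ones(20), t, phase, phase * (t - 10)])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"baseline trend {b[1]:.2f}, level change {b[2]:.2f}, slope change {b[3]:.2f}")
```

Extending this to several cases with case-level random effects on the level- and slope-change terms gives the multilevel model the text refers to.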


2020 ◽  
Author(s):  
Keith Payne ◽  
Heidi A. Vuletich ◽  
Kristjen B. Lundberg

The Bias of Crowds model (Payne, Vuletich, & Lundberg, 2017) argues that implicit bias varies across individuals and across contexts. It is unreliable and weakly associated with behavior at the individual level. But when aggregated to measure context-level effects, the scores become stable and predictive of group-level outcomes. We concluded that the statistical benefits of aggregation are so powerful that researchers should reconceptualize implicit bias as a feature of contexts, and ask new questions about how implicit biases relate to systemic racism. Connor and Evers (2020) critiqued the model, but their critique simply restates its core claims. They agreed that implicit bias varies across individuals and across contexts; that it is unreliable and weakly associated with behavior at the individual level; and that aggregating scores to measure context-level effects makes them more stable and predictive of group-level outcomes. Connor and Evers concluded that implicit bias should be considered a noisily measured individual-level construct because the effects of aggregation are merely statistical. We respond to their specific arguments and then discuss what it means to really be a feature of persons versus situations, and multilevel measurement and theory in psychological science more broadly.
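The statistical point both sides agree on can be shown in a few lines. In this illustrative simulation (all numbers are assumptions, not the model's data), individual scores are mostly noise around a stable context-level signal: the individual-level correlation with a context-level outcome is weak, while the correlation of the aggregated (context-mean) score is strong.

```python
import numpy as np

rng = np.random.default_rng(3)
n_contexts, n_people = 50, 200
context_bias = rng.normal(0, 1, n_contexts)        # stable context-level signal
# Individual scores: mostly noise around the context mean (low reliability).
scores = context_bias[:, None] + rng.normal(0, 5, (n_contexts, n_people))
outcome = 2.0 * context_bias + rng.normal(0, 1, n_contexts)  # context outcome

# Individual-level association vs. association of the aggregated score.
r_ind = np.corrcoef(scores.ravel(), np.repeat(outcome, n_people))[0, 1]
r_agg = np.corrcoef(scores.mean(axis=1), outcome)[0, 1]
print(f"individual r ~ {r_ind:.2f}, aggregated r ~ {r_agg:.2f}")
```

Averaging 200 noisy scores shrinks the noise variance by a factor of 200, which is the aggregation benefit at issue; the debate is over whether that makes bias a feature of contexts or merely a noisily measured feature of persons.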


2021 ◽  
pp. 073563312110308
Author(s):  
Fan Ouyang ◽  
Si Chen ◽  
Yuqin Yang ◽  
Yunqing Chen

Group-level metacognitive scaffolding is critical for productive knowledge building. However, previous research has mainly focused on individual-level metacognitive scaffolding for helping learners improve knowledge building, and little effort has been made to develop group-level metacognitive scaffolding (GMS) for knowledge building. This research designed three group-level metacognitive scaffoldings—general, task-oriented, and idea-oriented—to facilitate in-service teachers' knowledge building in small groups. A mixed-methods approach was used to examine the effects of the GMSs on groups' knowledge building processes, performances, and perceptions. Results indicate that the effects of GMSs on knowledge building are mixed. The idea-oriented scaffolding has the potential to facilitate question-asking and perspective-proposing inquiry through peer interactions; the general scaffolding does not necessarily lessen teachers' idea-centered explanation and elaboration at the individual level; the task-oriented scaffolding is the least effective of the three. Pedagogical and research implications are discussed to foster knowledge building with the support of GMSs.


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 726
Author(s):  
Lamya A. Baharith ◽  
Wedad H. Aljuhani

This article presents a new method for generating distributions. The method combines two techniques, the transformed-transformer (T-X) and alpha power transformation approaches, allowing for tremendous flexibility in the resulting distributions. The new approach is applied to introduce the alpha power Weibull-exponential distribution. The density of this distribution can take asymmetric and near-symmetric shapes. Various shapes, such as decreasing, increasing, L-shaped, near-symmetrical, and right-skewed shapes, are observed for the related failure rate function, making it suitable for many modeling applications. Some significant mathematical features of the suggested distribution are determined. Estimates of the unknown parameters of the proposed distribution are obtained using the maximum likelihood method. Furthermore, some numerical studies were carried out in order to evaluate the estimation performance. Three practical datasets are considered to analyze the usefulness and flexibility of the introduced distribution. The proposed alpha power Weibull-exponential distribution can outperform other well-known distributions, showing its great adaptability in the context of real data analysis.
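A minimal sketch of the alpha power transformation step, which maps any base CDF F to (α^F(x) − 1)/(α − 1). The paper applies it to a Weibull-exponential base; since that CDF's exact parameterisation is not reproduced here, an exponential base stands in purely for illustration.

```python
import math

def alpha_power_cdf(base_cdf, alpha):
    # Alpha power transform of a base CDF F: (alpha**F(x) - 1) / (alpha - 1),
    # defined for alpha > 0, alpha != 1 (alpha -> 1 recovers the base CDF).
    def F(x):
        return (alpha ** base_cdf(x) - 1.0) / (alpha - 1.0)
    return F

# Illustrative base distribution: exponential with rate 0.5 (an assumption,
# standing in for the paper's Weibull-exponential base).
base = lambda x: 1.0 - math.exp(-0.5 * x)
F = alpha_power_cdf(base, alpha=2.0)
print(F(0.0), F(10.0))   # a valid CDF: 0 at the origin, approaching 1 in the tail
```

Because the transform is a monotone map of [0, 1] onto [0, 1], the result is always a proper CDF; varying α reshapes the density and failure rate without changing the support.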


Author(s):  
Alice R. Carter ◽  
Eleanor Sanderson ◽  
Gemma Hammerton ◽  
Rebecca C. Richmond ◽  
George Davey Smith ◽  
...  

Abstract Mediation analysis seeks to explain the pathway(s) through which an exposure affects an outcome. Traditional, non-instrumental variable methods for mediation analysis suffer a number of methodological difficulties, including bias due to confounding between the exposure, mediator and outcome, and bias due to measurement error. Mendelian randomisation (MR) can be used to improve causal inference for mediation analysis. We describe two approaches that can be used for mediation analysis with MR: multivariable MR (MVMR) and two-step MR. We outline the approaches and provide code to demonstrate how they can be used in mediation analysis. We review issues that can affect analyses, including confounding, measurement error, weak instrument bias, interactions between exposures and mediators, and analysis of multiple mediators. Description of the methods is supplemented by simulated and real data examples. Although MR relies on large sample sizes and strong assumptions, such as having strong instruments and no horizontally pleiotropic pathways, our simulations demonstrate that these methods are unaffected by confounders of the exposure or mediator and the outcome, and by non-differential measurement error of the exposure or mediator. Both MVMR and two-step MR can be implemented with individual-level and with summary-data MR. MR mediation methods require different assumptions to be made, compared with non-instrumental variable mediation methods. Where these assumptions are more plausible, MR can be used to improve causal inference in mediation analysis.
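A minimal sketch of the two-step MR logic on simulated data (effect sizes and instruments are assumptions for illustration, not the authors' code): instrument the exposure to estimate its effect on the mediator, instrument the mediator to estimate its effect on the outcome, and take the product as the indirect effect. An unmeasured confounder U affects exposure, mediator and outcome, yet the Wald-ratio estimates remain unbiased.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
g1, g2 = rng.binomial(2, 0.3, n), rng.binomial(2, 0.3, n)  # genetic instruments
U = rng.normal(0, 1, n)                                    # unmeasured confounder
X = 0.5 * g1 + U + rng.normal(0, 1, n)                     # exposure
M = 0.4 * X + 0.5 * g2 + U + rng.normal(0, 1, n)           # mediator
Y = 0.3 * M + U + rng.normal(0, 1, n)                      # outcome

# Wald ratio: effect of a on b, instrumented by g.
wald = lambda g, a, b: np.cov(g, b)[0, 1] / np.cov(g, a)[0, 1]
b_xm = wald(g1, X, M)   # step 1: exposure -> mediator
b_my = wald(g2, M, Y)   # step 2: mediator -> outcome
print(f"indirect effect ~ {b_xm * b_my:.3f}")
```

By construction the true indirect effect is 0.4 × 0.3 = 0.12; an ordinary regression of Y on M would instead be biased upward by U.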


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S175-S175
Author(s):  
Shannon Hunter ◽  
Diana Garbinsky ◽  
Elizabeth M La ◽  
Sara Poston ◽  
Cosmina Hogea

Abstract
Background: Previous studies on adult vaccination coverage found inter-state variability that persists after adjusting for individual demographic factors. Assessing the impact of state-level factors may help improve uptake strategies. This study aimed to:
• Update previous estimates of state-level, model-adjusted coverage rates for influenza; pneumococcal; tetanus, diphtheria, and acellular pertussis (Tdap); and herpes zoster (HZ) vaccines (individually and in compliance with all age-appropriate recommended vaccinations)
• Evaluate effects of individual and state-level factors on adult vaccination coverage using a multilevel modeling framework
Methods: Behavioral Risk Factor Surveillance System (BRFSS) survey data (2015–2017) were retrospectively analyzed. Multivariable logistic regression models estimated state vaccination coverage and compliance using predicted marginal proportions. BRFSS data were then combined with external state-level data to estimate multilevel models evaluating effects of state-level factors on coverage. Weighted odds ratios and measures of cluster variation were estimated.
Results: Adult vaccination coverage and compliance varied by state, even after adjusting for individual characteristics, with coverage ranging as follows:
• Influenza (2017): 35.1–48.1%
• Pneumococcal (2017): 68.2–80.8%
• Tdap (2016): 21.9–46.5%
• HZ (2017): 30.5–50.9%
Few state-level variables were retained in final multilevel models, and measures of cluster variation suggested substantial residual variation unexplained by individual and state-level variables. Key state-level variables positively associated with vaccination included health insurance coverage rates (influenza/HZ), pharmacists' vaccination authority (HZ), presence of childhood vaccination exemptions (pneumococcal/Tdap), and adult immunization information system participation (Tdap/HZ).
Conclusion: Adult vaccination coverage and compliance continue to show substantial variation by state even after adjusting for individual and state-level characteristics associated with vaccination. Further research is needed to assess additional state or local factors impacting vaccination disparities.
Funding: GlaxoSmithKline Biologicals SA (study identifier: HO-18-19794)
Disclosures: Shannon Hunter, MS, GSK (Other Financial or Material Support, Ms. Hunter is an employee of RTI Health Solutions, which received consultancy fees from GSK for conduct of the study. Ms. Hunter received no direct compensation from the Sponsor.) Diana Garbinsky, MS, GSK (Other Financial or Material Support, The study was conducted by RTI Health Solutions, which received consultancy fees from GSK. I am a salaried employee at RTI Health Solutions and received no direct compensation from GSK for the conduct of this study.) Elizabeth M. La, PhD, RTI Health Solutions (Employee) Sara Poston, PharmD, The GlaxoSmithKline group of companies (Employee, Shareholder) Cosmina Hogea, PhD, GlaxoSmithKline (Employee, Shareholder)


Biometrika ◽  
2021 ◽  
Author(s):  
Juhyun Park ◽  
Jeongyoun Ahn ◽  
Yongho Jeon

Abstract Functional linear discriminant analysis offers a simple yet efficient method for classification, with the possibility of achieving perfect classification. Several methods proposed in the literature mostly address the dimensionality of the problem. On the other hand, there is a growing interest in the interpretability of the analysis, which favors a simple and sparse solution. In this work, we propose a new approach that incorporates a type of sparsity that identifies nonzero sub-domains in the functional setting, offering a solution that is easier to interpret without compromising performance. With the need to embed additional constraints in the solution, we reformulate functional linear discriminant analysis as a regularization problem with an appropriate penalty. Inspired by the success of ℓ1-type regularization at inducing zero coefficients for scalar variables, we develop a new regularization method for functional linear discriminant analysis that incorporates an ℓ1-type penalty, ∫|f|, to induce zero regions. We demonstrate that our formulation has a well-defined solution that contains zero regions, achieving functional sparsity in the sense of domain selection. In addition, the misclassification probability of the regularized solution is shown to converge to the Bayes error if the data are Gaussian. Our method does not presume that the underlying function has zero regions in the domain, but produces a sparse estimator that consistently estimates the true function whether or not the latter is sparse. Numerical comparisons with existing methods demonstrate this property in finite samples with both simulated and real data examples.
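A toy illustration of how an ℓ1 (integrated absolute value) penalty induces zero sub-domains, not the paper's estimator: on a discretized grid, the penalty's proximal operator is soft-thresholding, which sets the discriminant function exactly to zero wherever its magnitude falls below the threshold.

```python
import numpy as np

# A discriminant function that is truly zero on [0, 0.6], observed with noise.
grid = np.linspace(0, 1, 200)
f_true = np.where(grid > 0.6, np.sin(2 * np.pi * (grid - 0.6)), 0.0)
rng = np.random.default_rng(5)
f_noisy = f_true + rng.normal(0, 0.05, grid.size)

# Proximal (soft-thresholding) step for the discretized penalty lam * sum|f_i|:
lam = 0.15
f_hat = np.sign(f_noisy) * np.maximum(np.abs(f_noisy) - lam, 0.0)
print(f"{np.mean(f_hat == 0):.0%} of the domain estimated as exactly zero")
```

Ridge-type (ℓ2) penalties shrink the noisy region toward zero but never reach it; the ℓ1 penalty produces exact zeros, which is the domain-selection property the abstract describes.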

