Ordinal Data Models for No-Opinion Responses in Attitude Surveys

2018 ◽  
Vol 49 (1) ◽  
pp. 250-276 ◽  
Author(s):  
Maria Iannario ◽  
Marica Manisera ◽  
Domenico Piccolo ◽  
Paola Zuccolotto

In analyzing data from attitude surveys, it is common to treat “don’t know” responses as missing values. In this article, we present a statistical model commonly used for analyzing responses expressed on Likert scales, extended to account for the presence of don’t know responses. The main objective is to offer an alternative to the usual practice of treating them as missing values, by instead regarding them as a source of uncertainty. The original proposal of this article is the introduction of relevant covariates in order to discriminate subpopulations that may behave differently when choosing between a substantive response and the don’t know option.
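
A minimal sketch of the general idea, assuming a CUB-style shifted-binomial/uniform mixture for the substantive ratings and a logistic component, driven by a single hypothetical covariate (age), for the choice of the don’t know option; the data are simulated and the parameterization is illustrative, not necessarily the one used in the article.

```python
# Sketch: joint likelihood for "don't know" (DK) vs. substantive Likert responses.
# All names and values are illustrative assumptions, not the authors' model code.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(42)
m = 7                                    # number of Likert categories
n = 500
age = rng.normal(size=n)                 # hypothetical covariate

# Simulate: P(DK) depends on the covariate via a logit link; substantive
# ratings follow a CUB mixture (shifted binomial + uniform)
p_dk = 1 / (1 + np.exp(-(-1.5 + 0.8 * age)))
is_dk = rng.random(n) < p_dk
shifted_binom = 1 + rng.binomial(m - 1, 0.3, size=n)
uniform = rng.integers(1, m + 1, size=n)
ratings = np.where(rng.random(n) < 0.8, shifted_binom, uniform)
y = np.where(is_dk, 0, ratings)          # 0 codes the don't know response

def neg_loglik(theta):
    a, b, pi, xi = theta
    p_dk = np.clip(1 / (1 + np.exp(-(a + b * age))), 1e-10, 1 - 1e-10)
    r = np.clip(y, 1, m)                 # placeholder where y == 0 (DK rows)
    p_cub = pi * binom.pmf(r - 1, m - 1, 1 - xi) + (1 - pi) / m
    ll = np.where(y == 0, np.log(p_dk), np.log(1 - p_dk) + np.log(p_cub))
    return -ll.sum()

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.5, 0.5],
               bounds=[(None, None), (None, None), (0.01, 0.99), (0.01, 0.99)])
print("logit(P(DK)) intercept/slope and CUB (pi, xi):", fit.x.round(3))
```

Coding don’t know as a separate value (here 0) keeps it inside the likelihood, rather than discarding it as missing, which is the alternative the abstract argues for.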

2020 ◽  
pp. 47-63
Author(s):  
Bendix Carstensen

This chapter examines prevalence data, using a dataset that contains the number of diabetes patients and the total number of persons in Denmark as of January 1, 2010, classified by age and sex. The prevalence of a disease condition in a population is simply the proportion of affected people. The chapter uses prevalence to illustrate core modelling concepts: the model itself, the likelihood, the maximum likelihood estimation principle, and the properties of the results, all of which underlie most modern epidemiological methods. It also explains the concept of a statistical model, leading to the distinction between empirical and theoretical prevalences. The chapter then turns to the task of comparing different models for the same data, models that describe the data in varying degrees of detail.
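
The core concepts can be sketched in a few lines: the empirical prevalence in each age-sex cell is the binomial ML estimate X/N, a logistic model in age and sex yields theoretical prevalences, and nested models are compared through their deviances. The counts below are invented for illustration and are not the Danish register data used in the chapter.

```python
# Sketch: empirical vs. model-based ("theoretical") prevalences and a
# likelihood-ratio comparison of two nested models.  Counts are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

df = pd.DataFrame({
    "age": [40, 50, 60, 70, 40, 50, 60, 70],
    "sex": [0, 0, 0, 0, 1, 1, 1, 1],     # 0 = men, 1 = women (illustrative coding)
    "X":   [120, 340, 780, 1100, 90, 260, 640, 950],     # diabetes patients
    "N":   [10000, 9500, 9000, 7000, 10200, 9800, 9300, 7600],  # population
})

# Empirical prevalence: the binomial ML estimate X / N in each cell
df["prev"] = df["X"] / df["N"]

# Theoretical prevalences: a logistic model for prevalence in age and sex
endog = np.column_stack([df["X"], df["N"] - df["X"]])    # [cases, non-cases]
X_full = sm.add_constant(df[["age", "sex"]])
X_red = sm.add_constant(df[["age"]])
m_full = sm.GLM(endog, X_full, family=sm.families.Binomial()).fit()
m_red = sm.GLM(endog, X_red, family=sm.families.Binomial()).fit()

# Compare the two models by the deviance (likelihood-ratio) difference
lr = m_red.deviance - m_full.deviance
print("fitted prevalences:", m_full.fittedvalues.round(4).tolist())
print("LR statistic:", round(lr, 2), "p =", round(chi2.sf(lr, df=1), 4))
```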


2009 ◽  
Vol 3 (0) ◽  
pp. 912-931
Author(s):  
Chuanwen Chen ◽  
Arthur Cohen ◽  
Harold B. Sackrowitz

2021 ◽  
pp. 004912412098617
Author(s):  
Maria Iannario ◽  
Claudia Tarantola

This contribution deals with effect measures for covariates in ordinal data models, addressing the interpretation of results on the extreme categories of the scales, the evaluation of possible response styles, and the motivation for collapsing extreme categories. It provides a simpler interpretation of the influence of covariates on the probability of the response categories, both in standard cumulative link models under the proportional odds assumption and in the more recent Combination of Uncertainty and Preference (CUP) models, mixture models introduced to account for respondents’ uncertainty in rating systems. By means of marginal effect measures, the article shows that the effects of covariates are underestimated when the uncertainty component is neglected. Visualization tools for covariate effects are proposed, and measures of relative size and partial effects based on rates of change are evaluated on real data sets.
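
A numerical sketch of the marginal-effect idea for the standard proportional-odds part only (the CUP mixture extension discussed in the article is not reproduced here): average marginal effects of a covariate on each response-category probability are obtained by finite differences on the predicted probabilities. The data, the covariate name, and the finite-difference approximation are illustrative assumptions, not the article’s own computations.

```python
# Sketch: average marginal effects (AME) of a covariate on P(Y = j) in a
# proportional-odds model, via small symmetric perturbations of the covariate.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 800
x = rng.normal(size=n)                       # hypothetical covariate
latent = 1.2 * x + rng.logistic(size=n)      # proportional-odds data generator
cuts = [-2.0, -0.5, 1.0, 2.5]
y = np.digitize(latent, cuts)                # ordinal response with 5 categories

exog = pd.DataFrame({"x": x})
res = OrderedModel(y, exog, distr="logit").fit(method="bfgs", disp=False)

# AME of x on each category probability, by finite differences
eps = 1e-4
p_hi = np.asarray(res.predict(exog=exog.assign(x=exog["x"] + eps)))
p_lo = np.asarray(res.predict(exog=exog.assign(x=exog["x"] - eps)))
ame = (p_hi - p_lo).mean(axis=0) / (2 * eps)
for j, effect in enumerate(ame):
    print(f"category {j}: AME of x = {effect:+.3f}")
```

The effects necessarily sum to zero across categories; the article’s point is that the same covariate shows larger marginal effects once the uncertainty component of the mixture is modelled rather than ignored.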


2014 ◽  
Vol 11 (3) ◽  
pp. 150-160
Author(s):  
David G. Moore

Summary: The author first discusses the employee attitude survey in general, describing the techniques commonly used and evaluating the ordinary questionnaire technique with its many drawbacks and limitations; these, however, can be (and have been) gradually corrected over time, and the questionnaire approach has been refined into an instrument called the SRA Employee Inventory. The rest of the article describes and assesses the Inventory, and finally reports the results and trends in employee attitudes it has yielded.


Methodology ◽  
2021 ◽  
Vol 17 (3) ◽  
pp. 205-230
Author(s):  
Kristian Kleinke ◽  
Markus Fritsch ◽  
Mark Stemmler ◽  
Jost Reinecke ◽  
Friedrich Lösel

Quantile regression (QR) is a valuable tool for data analysis and for multiple imputation (MI) of missing values, especially when standard parametric modelling assumptions are violated. Yet Monte Carlo simulations that systematically evaluate QR-based MI in a variety of practically relevant settings are still scarce. In this paper, we evaluate the method for the imputation of ordinal data and compare the results with other standard and robust imputation methods. We then apply QR-based MI to an empirical dataset in which we seek to identify risk factors for corporal punishment of children by their fathers, and compare the modelling results with previously published findings based on complete cases. Our Monte Carlo results highlight the advantages of QR-based MI over fully parametric imputation models: QR-based MI yields unbiased statistical inferences across large parts of the conditional distribution when parametric modelling assumptions, such as normal and homoscedastic error terms, are violated. Regarding risk factors for corporal punishment, our MI results support the previously published complete-case findings. Our empirical results indicate that the identified “missing at random” processes in the investigated dataset are negligible.
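
A deliberately simplified sketch of one QR-based imputation step, assuming a single incomplete ordinal variable, a grid of candidate quantiles, and rounding of predictions back to the observed categories; this is not the exact algorithm evaluated in the paper.

```python
# Sketch: stochastic quantile-regression imputation of an ordinal variable.
# Data, variable names, and the quantile grid are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)
y = np.clip(np.round(3 + 1.5 * x + rng.standard_t(df=3, size=n)), 1, 5)
df = pd.DataFrame({"x": x, "y": y})
df.loc[rng.random(n) < 0.25, "y"] = np.nan          # impose ~25% missingness

obs, mis = df["y"].notna(), df["y"].isna()
quantile_grid = np.arange(0.05, 1.0, 0.05)
fits = {q: smf.quantreg("y ~ x", df[obs]).fit(q=q) for q in quantile_grid}

# Predicted values for every missing case at every grid quantile
pred_grid = np.column_stack(
    [np.asarray(fits[q].predict(df.loc[mis])) for q in quantile_grid])

def impute_once(data):
    """One stochastic QR-based imputation; repeat m times for proper MI."""
    out = data.copy()
    u_idx = rng.integers(len(quantile_grid), size=mis.sum())  # random quantile per case
    preds = pred_grid[np.arange(mis.sum()), u_idx]
    out.loc[mis, "y"] = np.clip(np.round(preds), 1, 5)        # back to ordinal scale
    return out

imputations = [impute_once(df) for _ in range(5)]   # m = 5 completed data sets
print(imputations[0].loc[mis, "y"].value_counts().sort_index())
```

Repeating the stochastic step m times yields the multiply imputed data sets that are then analysed and pooled in the usual MI fashion.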


2016 ◽  
Vol 32 (2) ◽  
pp. 596 ◽  
Author(s):  
Urbano Lorenzo-Seva ◽  
Joost R. Van Ginkel

Researchers frequently have to analyze scales in which some participants have failed to respond to some items. In this paper we focus on the exploratory factor analysis of multidimensional scales (i.e., scales that consist of a number of subscales), where each subscale is made up of a number of Likert-type items and the aim of the analysis is to estimate participants’ scores on the corresponding latent traits. Our approach uses the following steps: (1) multiple imputation creates several copies of the data, in which the missing values are imputed; (2) each copy of the data is subjected to an independent factor analysis, with the same number of factors extracted from all copies; (3) all factor solutions are simultaneously rotated, orthogonally or obliquely, so that they are both (a) factorially simple and (b) as similar to one another as possible; (4) latent trait scores are estimated for the ordinal data in each copy; and (5) participants’ scores on the latent traits are estimated as the average of the estimates obtained across the copies. We applied the approach to a real dataset in which missing responses were artificially introduced following a real pattern of non-response, and to a simulation study based on artificial datasets. The results show that our approach was able to compute factor score estimates even for participants with missing data.
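
A simplified sketch of the five steps, treating the Likert-type items as continuous and aligning each copy’s solution to a reference copy with an orthogonal Procrustes rotation rather than the simultaneous rotation used by the authors; all data, dimensions, and the use of IterativeImputer are illustrative assumptions.

```python
# Sketch: multiple imputation -> factor analysis per copy -> align solutions ->
# factor scores per copy -> average the scores.  Data are simulated.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import FactorAnalysis
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
n, p, k, m = 300, 12, 2, 5                 # persons, items, factors, imputed copies
F = rng.normal(size=(n, k))
L = rng.normal(size=(k, p))
items = np.clip(np.round(3 + F @ L + rng.normal(scale=0.8, size=(n, p))), 1, 5)
items[rng.random((n, p)) < 0.15] = np.nan  # ~15% missing item responses

scores, ref_loadings = [], None
for copy in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=copy)
    X = imp.fit_transform(items)                               # step 1: imputed copy
    fa = FactorAnalysis(n_components=k, random_state=0).fit(X)  # step 2: EFA
    loadings = fa.components_.T                                # p x k loading matrix
    if ref_loadings is None:
        ref_loadings, R = loadings, np.eye(k)
    else:
        R, _ = orthogonal_procrustes(loadings, ref_loadings)   # step 3: align copies
    scores.append(fa.transform(X) @ R)                         # step 4: scores per copy

trait_estimates = np.mean(scores, axis=0)                      # step 5: average
print(trait_estimates[:5].round(2))
```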


Methodology ◽  
2008 ◽  
Vol 4 (3) ◽  
pp. 132-138 ◽  
Author(s):  
Michael Höfler

A standardized index of effect intensity, the translocation relative to range (TRR), is discussed. TRR is defined as the difference between the expectations of an outcome under two conditions (the absolute increment) divided by the maximum possible amount of that difference; it thus measures the shift caused by a factor relative to the maximum possible magnitude of that shift. For binary outcomes, TRR simply equals the risk difference, also known as the inverse of the number needed to treat. TRR ranges from −1 to 1 but, unlike a correlation coefficient, is a measure of effect intensity, because it does not rely on the variance parameters of a particular population, as effect size measures (e.g., correlations, Cohen’s d) do. However, the use of TRR is restricted to outcomes with fixed and meaningful endpoints, as given, for instance, by meaningful psychological questionnaires or Likert scales. The use of TRR vs. Cohen’s d is illustrated with three examples from Psychological Science 2006 (issues 5 through 8). It is argued that, whenever TRR applies, it should complement Cohen’s d to avoid the problems related to the latter. In any case, the absolute increment should complement d.
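
A short sketch of the two quantities contrasted in the article, computed for a hypothetical outcome on a bounded 1-5 Likert scale; the data are invented for illustration.

```python
# Sketch: translocation relative to range (TRR) vs. Cohen's d on a bounded scale.
import numpy as np

def trr(y1, y0, y_min, y_max):
    """Absolute increment divided by the maximum possible difference on the scale."""
    return (np.mean(y1) - np.mean(y0)) / (y_max - y_min)

def cohens_d(y1, y0):
    """Mean difference standardized by the pooled standard deviation."""
    n1, n0 = len(y1), len(y0)
    s_pooled = np.sqrt(((n1 - 1) * np.var(y1, ddof=1) + (n0 - 1) * np.var(y0, ddof=1))
                       / (n1 + n0 - 2))
    return (np.mean(y1) - np.mean(y0)) / s_pooled

rng = np.random.default_rng(0)
treated = np.clip(np.round(rng.normal(3.8, 0.9, 200)), 1, 5)  # hypothetical Likert data
control = np.clip(np.round(rng.normal(3.2, 0.9, 200)), 1, 5)
print("TRR:", round(trr(treated, control, 1, 5), 3))
print("Cohen's d:", round(cohens_d(treated, control), 3))
```

For a binary outcome coded 0/1, trr() with y_min=0 and y_max=1 reduces to the risk difference, as the abstract notes.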

