Social Desirability Bias and Polling Errors in the 2016 Presidential Election

2017 ◽  
Author(s):  
Andy Brownback ◽  
Aaron M. Novotny

The Forum ◽  
2016 ◽  
Vol 14 (4) ◽  
Author(s):  
Samara Klar ◽  
Christopher R. Weber ◽  
Yanna Krupnikov

Abstract: Partisanship is a stable trait, but expressions of partisan preferences can vary according to social context. When particular preferences become socially undesirable, some individuals refrain from expressing them in public, even in relatively anonymous settings such as surveys and polls. In this study, we rely on the psychological trait of self-monitoring to show that Americans who are more likely to adjust their behaviors to comply with social norms (i.e., high self-monitors) were less likely to express support for Donald Trump during the 2016 Presidential Election. In turn, as self-monitoring decreases, we find that the tendency to express support for Trump increases. This study suggests that – at least for some individuals – there may have been a tendency in 2016 to suppress expressed support for Donald Trump in order to mask socially undesirable attitudes.


2017 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexander Coppock

Abstract: Explanations for the failure to predict Donald Trump’s win in the 2016 Presidential election sometimes include the “Shy Trump Supporter” hypothesis, according to which some Trump supporters succumb to social desirability bias and hide their vote preference from pollsters. I evaluate this hypothesis by comparing direct question and list experimental estimates of Trump support in a nationally representative survey of 5290 American adults fielded from September 2 to September 13, 2016. Of these, 32.5% report supporting Trump’s candidacy. A list experiment conducted on the same respondents yields an estimate of 29.6%, suggesting that Trump’s poll numbers were not artificially deflated by social desirability bias, as the list experiment estimate is actually lower than the direct question estimate. I further investigate differences across measurement modes for relevant demographic and political subgroups and find no evidence in support of the “Shy Trump Supporter” hypothesis.
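The list experiment described in this abstract rests on a simple difference-in-means estimator: control respondents see only baseline items, treatment respondents see the same items plus the sensitive item, and the gap in mean item counts estimates the sensitive item's prevalence. A minimal sketch of that estimator, using made-up toy counts (not the study's data):

```python
# Hypothetical sketch of the standard list experiment (item count)
# difference-in-means estimator. The counts below are invented toy
# data, not figures from the survey in the abstract.

def list_experiment_estimate(control_counts, treatment_counts):
    """Prevalence of the sensitive item, estimated as the difference
    in mean item counts between treatment (baseline items + sensitive
    item) and control (baseline items only)."""
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Toy data: number of items each respondent endorsed.
control = [1, 2, 2, 3, 1, 2]    # respondents saw 3 non-sensitive items
treatment = [2, 2, 3, 3, 2, 2]  # same 3 items plus the sensitive item

estimate = list_experiment_estimate(control, treatment)
print(f"Estimated prevalence: {estimate:.1%}")
```

In the study, this design yields 29.6% against a 32.5% direct-question estimate; since respondents never reveal which items they endorsed, the treatment–control gap is what allows an estimate free of item-level social desirability pressure.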


Author(s):  
Mary Kay Gugerty ◽  
Dean Karlan

Without high-quality data, even the best-designed monitoring and evaluation systems will collapse. Chapter 7 introduces some of the basics of collecting high-quality data and discusses how to address challenges that frequently arise. High-quality data require a clearly defined indicator that validly and reliably measures the intended concept. The chapter then explains how to avoid common biases and measurement errors such as anchoring, social desirability bias, the experimenter demand effect, unclear wording, long recall periods, and translation context. It then guides organizations on how to find indicators, test data collection instruments, manage surveys, and train staff appropriately for data collection and entry.


2021 ◽  
pp. 1-18
Author(s):  
Endra Iraman ◽  
Yoshikuni Ono ◽  
Makoto Kakinaka

Abstract: Identifying taxpayers who engage in noncompliant behaviour is crucial for tax authorities to determine appropriate taxation schemes. However, because taxpayers have an incentive to conceal their true income, it is difficult for tax authorities to uncover such behaviour (social desirability bias). Our study mitigates the bias in responses to sensitive questions by employing the list experiment technique, which allows us to identify the characteristics of taxpayers who engage in tax evasion. Using a dataset obtained from a tax office in Jakarta, Indonesia, we conducted a computer-assisted telephone interviewing survey in 2019. Our results revealed that 13% of the taxpayers had reported lower income than their true income on their tax returns, and that evasion was concentrated among older taxpayers, men, corporate employees, and members of a certain ethnic group. These findings suggest that our research design can be a useful tool for understanding tax evasion and for developing effective taxation schemes that promote tax compliance.


2021 ◽  
pp. 073112142110019
Author(s):  
Emma Mishel ◽  
Tristan Bridges ◽  
Mónica L. Caudillo

It is difficult to gauge people’s acceptance of same-sex sexualities, as responses to questionnaires are prone to social desirability bias. We offer a new proxy for understanding popular concern surrounding same-sex sexualities: the prevalence of Google searches demonstrating concern over gay/lesbian sexual identities. Using Google Trends data, we find that Google searches about whether a specific person is gay or lesbian show a patterned bias toward masculine searches, in that such searches are much more frequently conducted about boys and men than about girls and women. We put these findings into context by comparing search frequencies with other popular Google searches, about sexuality and other topics. We argue that the patterned bias toward masculine searches illustrates the enduring relationship between masculinity and heterosexuality, and that it does so on a larger scale than previous research has been able to establish.

