Backfire Effect
Recently Published Documents

Total documents: 52 (last five years: 28)
H-index: 9 (last five years: 3)

2021 ◽ Vol 12 ◽ Author(s): Luc Rousseau

Neuromyths are misconceptions about the brain and learning, for instance the belief that tailoring instruction to students' preferred “learning styles” (e.g., visual, auditory, kinesthetic) promotes learning. Recent reviews indicate that the high prevalence of neuromyth beliefs among educators has not declined over the past decade. Potential adverse effects of neuromyth beliefs on teaching practices have prompted researchers to develop interventions to dispel these misconceptions in educational settings. This paper provides a critical review of current intervention approaches. The following questions are examined: Does neuroscience training protect against neuromyths? Are refutation-based interventions effective at dispelling neuromyths, and are corrective effects enduring over time? Why are refutation-based interventions not enough? Do reduced beliefs in neuromyths translate into the adoption of more evidence-based teaching practices? Are teacher professional development workshops and seminars on the neuroscience of learning effective at instilling neuroscience in the classroom? Challenges, issues, controversies, and research gaps in the field are highlighted, notably the so-called “backfire effect,” the social desirability bias, and the powerful intuitive thinking mode. Future directions are outlined.


PLoS ONE ◽ 2021 ◽ Vol 16 (9) ◽ pp. e0256922 ◽ Author(s): Xi Chen, Panayiotis Tsaparas, Jefrey Lijffijt, Tijl De Bie

The democratization of AI tools for content generation, combined with unrestricted access to mass media for all (e.g., through microblogging and social media), makes it increasingly hard for people to distinguish fact from fiction. This raises the question of how individual opinions evolve in such a networked environment without grounding in a known reality. The dominant approach to studying this problem takes simple models from the social sciences of how individuals change their opinions when exposed to their social neighborhood, and applies them to large social networks. We propose a novel model that incorporates two known social phenomena: (i) Biased Assimilation: the tendency of individuals to adopt others' opinions when those opinions are similar to their own; (ii) Backfire Effect: the fact that an opposing opinion may further entrench people in their stances, making their opinions more extreme rather than moderating them. To the best of our knowledge, this is the first DeGroot-type opinion formation model that captures the Backfire Effect. A thorough theoretical and empirical analysis of the proposed model reveals intuitive conditions for polarization and consensus to exist, as well as the properties of the resulting opinions.
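The dynamics described in this abstract can be sketched in a few lines. The following is an illustrative DeGroot-style simulation, not the authors' published model: the similarity threshold `tau`, the step size `eta`, the three-node network, and the clipping of opinions to [-1, 1] are all assumptions made for the example.

```python
# DeGroot-style update with biased assimilation and a backfire term.
# Each node is attracted toward neighbors with nearby opinions and
# repelled by neighbors whose opinions differ by more than `tau`.

def update(opinions, neighbors, tau=0.5, eta=0.1):
    """One synchronous update step; opinions lie in [-1, 1]."""
    new = []
    for i, x in enumerate(opinions):
        shift = 0.0
        for j in neighbors[i]:
            d = opinions[j] - x
            if abs(d) <= tau:      # similar opinion: assimilate
                shift += d
            else:                  # opposing opinion: backfire
                shift -= d
        x_new = x + eta * shift / max(len(neighbors[i]), 1)
        new.append(max(-1.0, min(1.0, x_new)))  # clip to opinion range
    return new

# Two like-minded nodes and one opponent on a fully connected triad:
# the opponent's distant view pushes the pair further toward their own
# extreme, and the pair pushes the opponent toward the other extreme.
ops = [0.4, 0.5, -0.9]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(50):
    ops = update(ops, nbrs)
```

Running this to convergence polarizes the triad, which matches the intuition the abstract gives: without the backfire term, a plain DeGroot average would instead pull all three nodes toward consensus.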


2021 ◽ Author(s): Briony Swire-Thompson, Nicholas Miklaucic, John Wihbey, David Lazer, Joseph DeGutis

The backfire effect occurs when a correction increases belief in the very misconception it is attempting to correct, and it is often cited as a reason not to correct misinformation. The current study aimed to test whether correcting misinformation increases belief more than a no-correction control. Furthermore, we aimed to examine whether item-level differences in backfire rates were associated with test-retest reliability or theoretically meaningful factors. These factors included worldview-related attributes, namely perceived importance and strength of pre-correction belief, and familiarity-related attributes, namely perceived novelty and the illusory truth effect. In two nearly identical experiments, we conducted a longitudinal pre/post design with N = 388 and 532 participants. Participants rated 21 misinformation items and were assigned to a correction condition or a test-retest control. We found that no items backfired more in the correction condition compared to the test-retest control or initial belief ratings. Item backfire rates were strongly negatively correlated with item reliability (ρ = −.61/−.73) and did not correlate with worldview-related attributes. Familiarity-related attributes were significantly correlated with backfire rate, though they did not consistently account for unique variance beyond reliability. While previous papers have highlighted the non-replicable nature of backfire effects, the current findings provide a potential mechanism for this poor replicability. It is crucial for future research into backfire effects to use reliable measures, report the reliability of those measures, and take reliability into account in analyses. Furthermore, fact-checkers and communicators should not avoid giving corrective information due to backfire concerns.
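The two item-level quantities at the center of this study can be illustrated with a toy computation. The ratings below are fabricated, and the definitions used (backfire rate as the share of participants whose belief rose after a correction; reliability as the pre/post correlation in a no-correction control group) are simplified stand-ins for the paper's actual measures.

```python
# Toy item-level backfire rate and test-retest reliability.

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def backfire_rate(pre, post):
    """Fraction of participants rating the item HIGHER after correction."""
    return sum(b > a for a, b in zip(pre, post)) / len(pre)

# One misinformation item, belief ratings on a 0-10 scale (invented data).
pre_correction  = [7, 6, 8, 5, 9, 6]
post_correction = [3, 2, 4, 6, 5, 2]   # one participant "backfired"
control_t1      = [7, 6, 8, 5, 9, 6]   # same item, no correction given
control_t2      = [6, 6, 7, 5, 8, 7]   # retest: measurement noise only

rate = backfire_rate(pre_correction, post_correction)   # 1/6 of raters
reliability = pearson(control_t1, control_t2)
```

The study's point is visible even in this toy: some apparent "backfiring" is indistinguishable from retest noise, so an item's backfire rate must be interpreted against its reliability.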


2021 ◽ pp. 001041402110242 ◽ Author(s): Justin Schon, David Leblang

What effect, if any, do physical barriers have on cross-border population movements? The foundational claim that barriers reduce migration flows remains unsupported. We conceptualize barriers as a tool of immigration enforcement, which we contend is one form of state repression. State repression could reduce mobilization (reduce immigration), have no effect on mobilization (barriers as symbolic political tools), or increase mobilization (backfire). We evaluate the relationship between barriers and cross-border population movements using a global directed dyad-year dataset for the 1990–2016 period covering all contiguous dyads and nearby non-contiguous dyads. Using instrumental variables, we find that physical barriers actually increase refugee flows, consistent with the “backfire effect” identified in research on United States immigration enforcement policies at its Mexican border. Furthermore, we find that state repression (immigration enforcement) creates this “backfire effect” via a “sunk costs” problem that reduces the movement of people while shifting the status of those who do move from migrant to refugee.
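The instrumental-variables logic invoked here can be sketched with a minimal two-stage least squares (2SLS) example on simulated data. Everything below is invented for illustration and has no connection to the authors' dyad-year dataset: `z` plays the role of an instrument that shifts the treatment `x` (barrier construction) but affects the outcome `y` (refugee flows) only through `x`, while `u` is an unobserved confounder.

```python
# Minimal 2SLS on simulated data: naive OLS is biased by the
# confounder u, while the IV estimate recovers the true effect (2.0).
import random

random.seed(1)

def ols_slope(x, y):
    """Slope of a simple OLS regression of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

n = 5000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi - 3.0 * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

naive = ols_slope(x, y)   # biased: confounder pushes it toward 1.0

# First stage: predict x from z (intercept omitted; with demeaned
# slopes it does not affect the second-stage estimate).
x_hat = [ols_slope(z, x) * zi for zi in z]
iv = ols_slope(x_hat, y)  # second stage: close to the true 2.0
```

The design choice that makes this work is the exclusion restriction: the instrument must move barrier construction without independently moving refugee flows, which is exactly the assumption the paper's IV strategy rests on.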


2021 ◽ Vol 118 (15) ◽ pp. e1912440117 ◽ Author(s): Brendan Nyhan

Previous research indicated that corrective information can sometimes provoke a so-called “backfire effect” in which respondents more strongly endorsed a misperception about a controversial political or scientific issue when their beliefs or predispositions were challenged. I show how subsequent research and media coverage seized on this finding, distorting its generality and exaggerating its role relative to other factors in explaining the durability of political misperceptions. To the contrary, an emerging research consensus finds that corrective information is typically at least somewhat effective at increasing belief accuracy when received by respondents. However, the research that I review suggests that the accuracy-increasing effects of corrective information like fact checks often do not last or accumulate; instead, they frequently seem to decay or be overwhelmed by cues from elites and the media promoting more congenial but less accurate claims. As a result, misperceptions typically persist in public opinion for years after they have been debunked. Given these realities, the primary challenge for scientific communication is not to prevent backfire effects but instead, to understand how to target corrective information better and to make it more effective. Ultimately, however, the best approach is to disrupt the formation of linkages between group identities and false claims and to reduce the flow of cues reinforcing those claims from elites and the media. Doing so will require a shift from a strategy focused on providing information to the public to one that considers the roles of intermediaries in forming and maintaining belief systems.


2021 ◽ Vol 8 (2) ◽ pp. 205316802110149 ◽ Author(s): Vignesh Chockalingam, Victor Wu, Nicolas Berlinski, Zoe Chandra, Amy Hu, ...

The spread of COVID-19 misinformation highlights the need to correct misperceptions about health and science. Research on climate change suggests that informing people about a scientific consensus can reduce misinformation endorsement, but these studies often fail to isolate the effects of consensus messaging and may not translate to other issues. We therefore conduct a survey experiment comparing standard corrections with those citing a scientific consensus for three issues: COVID-19 threat, climate change threat, and vaccine efficacy. We find that consensus corrections are never more effective than standard corrections at countering misperceptions and, with only one exception, generally fail to reduce them. We also find that consensus corrections endorsed by co-partisans do not reduce misperceptions relative to standard corrections, while those endorsed by opposition partisans are viewed as less credible and can potentially even provoke a backfire effect. These results indicate that corrections citing a scientific consensus, including corrective messages from partisans, are less effective than previous research suggests when compared with appropriate baseline messages.


2021 ◽ Author(s): Philip Warren Stirling Newall, Leonardo Weiss-Cohen, Henrik Singmann, Lukasz Walasek, Elliot Andrew Ludvig

Safer gambling messages are a common freedom-preserving method of protecting individuals from gambling-related harm. Yet there is a striking lack of independent and rigorous evidence attesting to the effectiveness of safer gambling messages. This study presents results from three large (N ≈ 3,000), preregistered, and incentivized experimental tests of the UK's commonly used “When the fun stops, stop” gambling message. Variants of this message were tested on gamblers' propensity to accept bets on soccer events (Experiment 1), behavior on a realistic online roulette table (Experiment 2), and betting patterns on an online soccer betting platform (Experiment 3). All three experiments find either no beneficial effect or a small backfire effect on gamblers' behavior. We conclude that independent evaluations should inform policy-makers' decisions on how best to implement improved safer gambling messages.

