New explainability method for BERT-based model in fake news detection

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mateusz Szczepański ◽  
Marek Pawlicki ◽  
Rafał Kozik ◽  
Michał Choraś

Abstract The ubiquity of social media and their deep integration in contemporary society has granted new ways to interact, exchange information, form groups, or earn money—all on a scale never seen before. Those possibilities, paired with their widespread popularity, contribute to the level of impact that social media display. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation—so-called 'Fake News'—either to make a profit or to influence the behaviour of society. To reduce the impact and spread of Fake News, a diverse array of countermeasures has been devised. These include linguistic-based approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model's high performance is no longer enough. The explainability of the system's decision is equally crucial in real-life scenarios. Therefore, the objective of this paper is to present a novel explainability approach for BERT-based fake news detectors. This approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, will be used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were presented in the authors' previous works.
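To illustrate the perturbation idea behind LIME described in the abstract, the following self-contained Python sketch masks each word of a headline in turn and ranks words by how much their removal lowers the predicted "fake" probability. The keyword-based classifier here is purely hypothetical, a toy stand-in for a fine-tuned BERT detector; a real deployment would pass the detector's prediction function to the lime library instead.

```python
import re

# Hypothetical toy classifier: scores the fraction of "fake" cue words
# in a short text (stands in for a BERT-based detector's probability).
FAKE_CUES = {"shocking", "miracle", "secret", "exposed"}

def fake_probability(text: str) -> float:
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FAKE_CUES)
    return hits / len(words)

def explain(text: str):
    """LIME-style attribution: drop each word and measure the score change."""
    words = text.split()
    base = fake_probability(text)
    weights = []
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])
        weights.append((w, base - fake_probability(perturbed)))
    # Words whose removal lowers the "fake" score most are the top evidence.
    return sorted(weights, key=lambda t: t[1], reverse=True)

headline = "Shocking secret cure exposed by doctors"
for word, weight in explain(headline)[:3]:
    print(f"{word:10s} {weight:+.2f}")
```

The positive-weight words are the ones the (toy) model leans on when calling the headline fake, which is exactly the kind of local, per-instance evidence LIME surfaces for an operator.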

2019 ◽  
Vol 11 (2) ◽  
pp. 373-392 ◽  
Author(s):  
Sam Gregory

Abstract Pessimism currently prevails around human rights globally, as well as about the impact of digital technology and social media in supporting rights. However, there have been key successes in the use of these tools for documentation and advocacy in the past decade, including greater participation, more documentation, and the growth of new fields around citizen evidence and fact-finding. Governments and others antagonistic to human rights have caught up in terms of weaponizing the affordances of the internet and pushing back on rights actors. Key challenges to be grappled with are consistent with ones that have existed for a decade but are exacerbated now—how to protect and enhance the safety of vulnerable people and provide agency over visibility and anonymity; how to ensure and improve the trust and credibility of human rights documentation and advocacy campaigning; and how to identify and use new strategies optimized for a climate of high media volume, declining trust in traditional sources, and active strategies of distraction and misinformation. All of these activities take place primarily within a set of platforms that are governed by commercial imperatives and attention-based algorithms, and that increasingly use unaccountable content moderation processes driven by artificial intelligence. The article argues for a pragmatic approach to harm reduction within the platforms and tools that are used by a diverse range of human rights defenders, and for proactive engagement on ensuring that an inclusive human rights perspective is centred in responses to new challenges at a global level within a multipolar world, as well as in specific areas of challenge and opportunity such as fake news and authenticity, deepfakes, the use of artificial intelligence to find and make sense of information, virtual reality, and how we ensure effective solidarity activism. Solutions and usages in these areas must avoid causing inadvertent as well as deliberate harms to already marginalized people.


2018 ◽  
Author(s):  
Andrea Pereira ◽  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris

Political misinformation, often called "fake news", represents a threat to our democracies because it impedes citizens from being appropriately informed. Evidence suggests that fake news spreads more rapidly than real news—especially when it contains political content. The present article tests three competing theoretical accounts that have been proposed to explain the rise and spread of political (fake) news: (1) the ideology hypothesis—people prefer news that bolsters their values and worldviews; (2) the confirmation bias hypothesis—people prefer news that fits their pre-existing stereotypical knowledge; and (3) the political identity hypothesis—people prefer news that allows their political in-group to fulfill certain social goals. We conducted three experiments in which American participants read news that concerned behaviors perpetrated by their political in-group or out-group and measured the extent to which they believed the news (Exp. 1, Exp. 2, Exp. 3), and were willing to share the news on social media (Exp. 2 and 3). Results revealed that Democrats and Republicans were both more likely to believe news about the value-upholding behavior of their in-group or the value-undermining behavior of their out-group, supporting the political identity hypothesis. However, although belief was positively correlated with willingness to share on social media in all conditions, we also found that Republicans were more likely to believe and want to share apolitical fake news. We discuss the implications for theoretical explanations of political beliefs and the application of these concepts in a polarized political system.


2021 ◽  
Vol 13 (7) ◽  
pp. 4043 ◽  
Author(s):  
Jesús López Baeza ◽  
Jens Bley ◽  
Kay Hartkopf ◽  
Martin Niggemann ◽  
James Arias ◽  
...  

The research presented in this paper describes an evaluation of the impact of spatial interventions in public spaces, measured by social media data. This contribution aims at observing the way a spatial intervention in an urban location can affect what people talk about on social media. The test site for our research is Domplatz in the center of Hamburg, Germany. In recent years, several actions have taken place there, intending to attract social activity and spotlight the square as a landmark of cultural discourse in the city of Hamburg. To evaluate the impact of this strategy, textual data from the social networks Twitter and Instagram (i.e., tweets and image captions) are collected and analyzed using Natural Language Processing (NLP) techniques. These analyses identify and track the cultural topic, or "people talking about culture", in the city of Hamburg. We observe the evolution of the cultural topic, and its potential correspondence in levels of activity, with certain intervention actions carried out in Domplatz. Two analytic methods of topic clustering and tracking are tested. The results show successful topic identification and tracking with both methods, with the second being more accurate. This means that it is possible to isolate and observe the evolution of the city's cultural discourse using NLP. However, it is shown that the effects of spatial interventions in our small test square have a limited local scale, rather than a city-wide relevance.


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn the attention of researchers seeking to provide a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was utilized by this research to assess the efficiency of the proposed model. The Bag of Words (BoW) model is utilized with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. Reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) achieves the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model's performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% compared with previous works on the same dataset.
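The TF-IDF term weighting named in this abstract can be sketched in plain Python. This is a minimal smoothed variant for illustration only; the study's actual pipeline, weighting schemes, and tokenisation for Arabic text may differ, and the example documents are invented.

```python
import math
from collections import Counter

def tfidf(docs):
    """Smoothed TF-IDF weights for a list of tokenised documents."""
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({
            # term frequency x smoothed inverse document frequency
            term: (count / len(doc)) * math.log((1 + n) / (1 + df[term]))
            for term, count in tf.items()
        })
    return weighted

docs = [
    ["breaking", "news", "celebrity", "scandal"],
    ["official", "news", "statement"],
    ["celebrity", "denies", "scandal"],
]
weights = tfidf(docs)
# "news" appears in two of three documents, so within the first document
# it is weighted lower than the rarer term "breaking".
print(weights[0]["breaking"] > weights[0]["news"])
```

The resulting per-document weight vectors are what a classifier such as Logistic Regression would consume, and what a wrapper-based selector like binary HHO would prune by toggling feature subsets.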


2021 ◽  
Vol 7 (2) ◽  
pp. 205630512110197
Author(s):  
Chesca Ka Po Wong ◽  
Runping Zhu ◽  
Richard Krever ◽  
Alfred Siu Choi

While the impact of fake news on viewers, particularly marginalized media users, has been a cause of growing concern, little attention has been paid to the phenomenon of deliberately "manipulated" news published on social media by mainstream news publishers. Using qualitative content analysis and quantitative survey research, this study showed that consciously biased animated news videos released in the midst of the Umbrella Movement protests in Hong Kong impacted both the attitudes of students and their participation in the protests. The findings raise concerns over potential use of the format by media owners to promote their preferred ideologies.


2021 ◽  
Author(s):  
Christopher Marshall ◽  
Kate Lanyi ◽  
Rhiannon Green ◽  
Georgie Wilkins ◽  
Fiona Pearson ◽  
...  

BACKGROUND There is an increasing need to explore the value of soft-intelligence, leveraged using the latest artificial intelligence (AI) and natural language processing (NLP) techniques, as a source of analysed evidence to support public health research activity and decision-making.
OBJECTIVE The aim of this study was to further explore the value of soft-intelligence analysed using AI through a case study, which examined a large collection of UK tweets relating to mental health during the COVID-19 pandemic.
METHODS A search strategy comprising a list of terms related to mental health, COVID-19, and lockdown restrictions was developed to prospectively collate relevant tweets via Twitter's advanced search application programming interface over a 24-week period. We deployed a specialist NLP platform to explore tweet frequency and sentiment across the UK and to identify key topics of discussion. A series of keyword filters were used to clean the initial data retrieved and also set up to track specific mental health problems. Qualitative document analysis was carried out to further explore and expand upon the results generated by the NLP platform. All collated tweets were anonymised.
RESULTS We identified and analysed 286,902 tweets posted from UK user accounts from 23 July 2020 to 6 January 2021. The average sentiment score was 50%, suggesting overall neutral sentiment across all tweets over the study period. Major fluctuations in volume and sentiment appeared to coincide with key changes to local and/or national social-distancing measures. Tweets around mental health were polarising, discussed with both positive and negative sentiment. Key topics of consistent discussion over the study period included the impact of the pandemic on people's mental health (both positively and negatively), fear and anxiety over lockdowns, and anger and mistrust toward the government.
CONCLUSIONS Through the primary use of an AI-based NLP platform, we were able to rapidly mine and analyse emerging health-related insights from UK tweets into how the pandemic may be impacting people’s mental health and well-being. This type of real-time analysed evidence could act as a useful intelligence source that agencies, local leaders, and health care decision makers can potentially draw from, particularly during a health crisis.
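The 0-100 sentiment scale described above, where 50 is neutral, can be sketched with a simple lexicon-based scorer. Everything here is hypothetical: the word lists, the tweets, and the scoring rule are toy stand-ins for the unnamed commercial NLP platform the study used.

```python
# Hypothetical opinion lexicons (a real platform uses far richer models).
POSITIVE = {"calm", "hope", "support", "grateful", "better"}
NEGATIVE = {"anxious", "fear", "angry", "lonely", "worse"}

def sentiment_score(tweet: str) -> float:
    """Map a tweet to [0, 100]; 50 means neutral (no opinion words)."""
    words = [w.strip(".,!?") for w in tweet.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 50.0
    return 100.0 * pos / (pos + neg)

tweets = [
    "Feeling anxious about another lockdown",
    "So grateful for the support from neighbours",
    "Schools reopen next week",
]
avg = sum(sentiment_score(t) for t in tweets) / len(tweets)
print(round(avg, 1))  # → 50.0
```

Averaging such per-tweet scores over time windows is what produces the kind of corpus-level "overall neutral" figure and the fluctuations the study tracked against policy changes.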


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Khudejah Ali ◽  
Cong Li ◽  
Khawaja Zain-ul-abdin ◽  
Muhammad Adeel Zaffar

Purpose: As the epidemic of online fake news is causing major concerns in contexts such as politics and public health, the current study aimed to elucidate the effect of certain "heuristic cues," or key contextual features, which may increase belief in the credibility and the subsequent sharing of online fake news.
Design/methodology/approach: This study employed a 2 (news veracity: real vs fake) × 2 (social endorsements: low Facebook "likes" vs high Facebook "likes") between-subjects experimental design (N = 239).
Findings: The analysis revealed that a high number of Facebook "likes" accompanying fake news increased the perceived credibility of the material compared to a low number of "likes." In addition, the mediation results indicated that increased perceptions of news credibility may create a situation in which readers feel that it is necessary to cognitively elaborate on the information present in the news, and this active processing finally leads to sharing.
Practical implications: The results from this study help explicate what drives increased belief and sharing of fake news and can aid in refining interventions aimed at combating fake news for both communities and organizations.
Originality/value: The current study expands upon existing literature, linking the use of social endorsements to perceived credibility of fake news and information, and sheds light on the causal mechanisms through which people make the decision to share news articles on social media.


Author(s):  
Anna Grazulis ◽  
Ryan Rogers

Beyond the spread of fake news itself, the term "fake news" has been used by people on social media and by members of the Trump administration to discredit reporting and signal disagreement with the content of a story. This study offers a series of defining traits of fake news and a corresponding experiment testing its impact. Overall, this study shows that fake news, or at least the labeling of news as fake, can impact the gratifications people derive from news. Further, this study provides evidence that the impact of fake news might, in some cases, depend on whether or not the fake news complies with preexisting beliefs.


2011 ◽  
pp. 24-36 ◽  
Author(s):  
Kimiz Dalkir

This chapter focuses on a method, social network analysis (SNA) that can be used to assess the quantity and quality of connection, communication and collaboration mediated by social tools in an organization. An organization, in the Canadian public sector, is used as a real-life case study to illustrate how SNA can be used in a pre-test/post-test evaluation design to conduct a comparative assessment of methods that can be used before, during and after the implementation of organizational change in work processes. The same evaluation method can be used to assess the impact of introducing new social media such as wikis, expertise locator systems, blogs, Twitter and so on. In other words, while traditional pre-test/post-test designs can be easily applied to social media, the social media tools themselves can be added to the assessment toolkit. Social network analysis in particular is a good candidate to analyze the connections between people and content as well as people with other people.
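The pre-test/post-test comparison this chapter describes can be illustrated with the most basic SNA measure, degree centrality, computed before and after a tool is introduced. The names and ties below are hypothetical, and a real analysis would use a dedicated library such as NetworkX.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalised degree centrality from an undirected edge list."""
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    # Each node's degree divided by the maximum possible degree (n - 1).
    return {node: len(links) / (n - 1) for node, links in neighbours.items()}

# Hypothetical communication ties before and after introducing a wiki.
pre = [("ana", "ben"), ("ben", "carl")]
post = [("ana", "ben"), ("ben", "carl"), ("ana", "carl"), ("ana", "dee")]

before, after = degree_centrality(pre), degree_centrality(post)
for node in sorted(after):
    print(node, round(after.get(node, 0) - before.get(node, 0), 2))
```

The per-person change in centrality is the kind of quantitative before/after evidence the chapter's evaluation design relies on: here the (invented) data would show "ana" becoming better connected after the change, while "ben" loses his position as the sole broker.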


Author(s):  
Shatakshi Singh ◽  
Kanika Gautam ◽  
Prachi Singhal ◽  
Sunil Kumar Jangir ◽  
Manish Kumar

Recent developments in artificial intelligence have been astounding in this decade, particularly in machine learning (ML), one of the core subareas of AI. The ML field is growing incessantly, with a corresponding rise in its demand and importance. It has transformed the way data is extracted, analyzed, and interpreted. Computers are trained in a self-training mode so that when new data is fed to them they can learn, grow, change, and develop without explicit programming. This helps to make useful predictions that can guide better decisions in real-life situations without human interference. Selecting an ML tool is always a challenging task, since choosing an appropriate one can save time and make it faster and easier to deliver a solution. This chapter provides a classification of various machine learning tools along the following aspects: tools for non-programmers, for model deployment, for computer vision, for natural language processing and audio, for reinforcement learning, and for data mining.

