Digital Deceit: Fake News, Artificial Intelligence, and Censorship in Educational Research

2020 ◽  
Vol 08 (07) ◽  
pp. 71-88
Author(s):  
Joanna Black ◽  
Cody Fullerton
2021 ◽  
Vol 13 (2) ◽  
pp. 1-12
Author(s):  
Sumit Das ◽  
Manas Kumar Sanyal ◽  
Sarbajyoti Mallik

A great deal of fake news circulates across various media, misleading people. This is a serious problem in today's intelligent-systems era, and solutions are needed. This article proposes an approach that analyzes fake and real news, focusing on a few characteristics of such news: sentiment, significance, and novelty. Representing news reports as numbers and metadata makes it possible to manipulate daily information mathematically and statistically. The objective of this article is to analyze and filter out the fake news that causes trouble. The proposed model is integrated into a web application through which users can retrieve both real and fake data. The authors use artificial-intelligence (AI) algorithms, specifically logistic regression and long short-term memory (LSTM) networks, so that the application performs well. The results of the proposed model are compared with those of existing models.
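The article does not reproduce its classification code, so the following is a minimal stdlib sketch of the logistic-regression half of such an approach, assuming hypothetical sentiment, significance, and novelty scores already extracted and scaled to [0, 1]; the feature values, labels, and hyperparameters below are illustrative, not the authors'.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with per-sample gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (fake) if P(fake) >= 0.5, else 0 (real)."""
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0

# Hypothetical feature vectors (sentiment, significance, novelty) in [0, 1];
# label 1 = fake, 0 = real.
X = [(0.9, 0.2, 0.8), (0.8, 0.3, 0.9), (0.2, 0.8, 0.1), (0.1, 0.9, 0.2)]
y = [1, 1, 0, 0]

w, b = train_logistic(X, y)
print([predict(w, b, x) for x in X])  # recovers the training labels here
```

In a full system, the same interface would sit behind the web application, with the numeric features replaced by scores computed from each incoming news item.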


2019 ◽  
Vol 30 (2) ◽  
pp. 205-235 ◽  
Author(s):  
Mutlu Cukurova ◽  
Rosemary Luckin ◽  
Carmel Kent

Artificial Intelligence (AI) is attracting a great deal of attention, and it is important to investigate public perceptions of AI and their impact on the perceived credibility of research evidence. There is evidence in the literature that people overweight research evidence when it is framed in terms of neuroscience findings. In this paper, we present the findings of the first investigation of the impact of an AI frame on the perceived credibility of educational research evidence. In an experimental study, we allocated 605 participants, including educators, to one of three conditions in which the same educational research evidence was framed within AI, neuroscience, or educational psychology. The results demonstrate that when educational research evidence is framed within AI research, it is considered less credible than when it is framed within neuroscience or educational psychology. The effect remains evident when the subjects' familiarity with the framing discipline is controlled for. Furthermore, our results indicate that the general public perceives AI to be less helpful in assisting us to understand how children learn, lacking in adherence to scientific methods, and less prestigious than neuroscience and educational psychology. Considering the increased use of AI technologies in educational settings, we argue that significant attempts should be made to counter the public image of AI as less scientifically robust and less prestigious than educational psychology and neuroscience. We conclude by suggesting that the AI in Education community should engage more actively with key stakeholders of AI and education to help mitigate such effects.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mateusz Szczepański ◽  
Marek Pawlicki ◽  
Rafał Kozik ◽  
Michał Choraś

The ubiquity of social media and their deep integration into contemporary society have granted new ways to interact, exchange information, form groups, and earn money, all on a scale never seen before. These possibilities, paired with their widespread popularity, contribute to the level of impact that social media have. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation, so-called 'fake news', either to make a profit or to influence the behaviour of society. To reduce the impact and spread of fake news, a diverse array of countermeasures has been devised, including linguistic approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model's high performance is no longer enough: the explainability of the system's decisions is equally crucial in real-life scenarios. The objective of this paper is therefore to present a novel explainability approach for BERT-based fake news detectors. The approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, are used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were the subject of the authors' previous work.
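The paper's detectors and exact LIME configuration are not reproduced here, so the following is a pure-Python sketch of the core LIME idea the paper applies: perturb a headline by masking words, query the black-box detector, and fit a proximity-weighted linear surrogate whose coefficients rank word importance. The keyword-based toy detector and all parameters below are illustrative stand-ins, not the authors' BERT model.

```python
import random

def toy_fake_prob(texts):
    """Stand-in for a BERT-based detector returning P(fake) per text;
    any real model exposing this interface could be plugged in."""
    return [0.9 if "hoax" in t.split() else 0.1 for t in texts]

def lime_style_explain(text, predict_fn, n_samples=500, lr=0.05, epochs=300, seed=0):
    """Rank words by influence on the black box, LIME-style: mask random
    word subsets, query predict_fn, and fit a proximity-weighted linear
    surrogate over the binary word-presence masks."""
    rng = random.Random(seed)
    words = text.split()
    masks = [[rng.randint(0, 1) for _ in words] for _ in range(n_samples)]
    perturbed = [" ".join(w for w, m in zip(words, mask) if m) for mask in masks]
    targets = predict_fn(perturbed)
    proximity = [sum(mask) / len(words) for mask in masks]  # share of words kept

    coef = [0.0] * len(words)
    bias = 0.0
    for _ in range(epochs):  # weighted least squares via gradient descent
        for mask, y, wgt in zip(masks, targets, proximity):
            err = (bias + sum(c * m for c, m in zip(coef, mask)) - y) * wgt
            coef = [c - lr * err * m for c, m in zip(coef, mask)]
            bias -= lr * err
    return sorted(zip(words, coef), key=lambda pair: -abs(pair[1]))

ranking = lime_style_explain("celebrity hoax cure shocks doctors", toy_fake_prob)
print(ranking[0][0])  # the word the toy detector keys on: 'hoax'
```

The production `lime` package wraps the same perturb-query-fit loop behind `LimeTextExplainer`; the sketch above only makes the mechanics visible for a single headline.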


2021 ◽  
Vol 6 ◽  
Author(s):  
Johannes Langguth ◽  
Konstantin Pogorelov ◽  
Stefan Brenner ◽  
Petra Filkuková ◽  
Daniel Thilo Schroeder

We review the phenomenon of deepfakes, a novel technology enabling inexpensive manipulation of video material through the use of artificial intelligence, in the context of today's wider discussion of fake news. We discuss the foundations and recent developments of the technology, its differences from earlier manipulation techniques, and technical countermeasures. While the threat of deepfake videos with substantial political impact has been widely discussed in recent years, the political impact of the technology has so far been limited. We investigate the reasons for this and extrapolate the types of deepfake videos we are likely to see in the future.

