Adjusting the Drafter for COVID19: Re-designing our society’s understanding of misinformation

2021 ◽  
pp. 37-47
Author(s):  
Ajay Agarwal

The COVID-19 pandemic illuminated the presence of our society’s cognition in a low-ceiling, uninhabitable room, with little to no illumination of truth. Such a low ceiling not only restricts the freedom of our cognition but also inhibits its healthy growth. Subsequently, our society feels a sense of being pushed down, one that is often exaggerated by the dark periods of misinformation, disinformation, and fake news. Hence, it becomes essential to rethink the interior design of our cognition – How can we look at these periods of misinformation through a different lens? Can we use them to our advantage to make our room look spacious enough for the growth of our cognition? Despite the limitations imposed on the ceiling height by our existing cognitive biases, there exist multiple unconventional, interdisciplinary approaches from the fields of epistemology, phenomenology, evolutionary psychology, and, finally, mathematics that we, as researchers, can leverage to broaden our understanding of the existing “misinfodemic” that presents as a ripple effect of COVID-19 on our society’s cognition. The aim of this paper is the same – to present a novel discourse regarding the “dark period of misinformation” – why misinformation is NOT a pandemic but a widely used misnomer, how the source of truthful information acts as a source of misinformation, why misinformation is needed for the development of a better cognitive heuristic framework for our society, and, finally, why such unconventional approaches fail to see the light of research. While existing approaches to dealing with misinformation spiral around machine-learning models competing with each other for better detection accuracy, this paper will take the reader right to the epicenter of the “misinfodemic” via a variety of routes.
Towards the end, the author shows how the mentioned approaches not only widen our understanding of the universal phenomenon of misinformation but can also be leveraged and scaled to irrational human behaviors like suicide, partisanship, and even student gun violence in the USA.

Keywords: misinformation; psychology; interdisciplinary research; society; evolution


Universitas ◽  
2021 ◽  
pp. 87-108
Author(s):  
Víctor Castillo-Riquelme ◽  
Patricio Hermosilla-Urrea ◽  
Juan P. Poblete-Tiznado ◽  
Christian Durán-Anabalón

The dissemination of fake news embodies a pressing problem for democracy that is exacerbated by the ubiquity of information available on the Internet and by the exploitation of those who, appealing to the emotionality of audiences, have capitalized on the injection of falsehoods into the social fabric. In this study, through a cross-sectional, correlational and non-experimental design, the relationship between credibility in the face of fake news and some types of dysfunctional thoughts was explored in a sample of Chilean university students. The results reveal that greater credibility in fake news is associated with higher scores of magical, esoteric and naively optimistic thinking, beliefs that would be the meeting point for a series of cognitive biases that operate in the processing of information. The highest correlation is found with the paranormal beliefs facet and, particularly, with ideas about the laws of mental attraction, telepathy and clairvoyance. Significant differences were also found in credibility in fake news as a function of the gender of the participants, with the female gender scoring higher on average than the male gender. These findings highlight the need to promote critical thinking, skepticism and scientific attitude in all segments of society.
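The study above reduces to one core statistic: the linear association between fake-news credibility scores and dysfunctional-thought scores. As a rough, library-free illustration (the abstract does not name the study’s instruments or analysis software, and the scores below are hypothetical), a Pearson correlation can be computed as:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical per-participant scores: fake-news credibility vs. paranormal beliefs
credibility = [2, 4, 5, 7, 8]
paranormal  = [1, 3, 4, 6, 9]
r = pearson(credibility, paranormal)
```

A value of r near +1 would correspond to the pattern the authors report, where higher credibility in fake news goes with higher paranormal-belief scores.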


2022 ◽  
pp. 181-194
Author(s):  
Bala Krishna Priya G. ◽  
Jabeen Sultana ◽  
Usha Rani M.

Mining Telugu news data and categorizing it by public sentiment is quite important, since a great deal of fake news has emerged with the rise of social media. This research work identifies whether news text is positive, negative, or neutral and then classifies the data into the areas it falls under, such as business, editorial, entertainment, nation, and sports. It proposes an efficient model that adopts machine learning classifiers to perform classification on Telugu news data. The results obtained by various machine-learning models are compared, an efficient model is identified, and the proposed model is observed to outperform the others with respect to accuracy, precision, recall, and F1-score.
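The model comparison described above rests on four standard metrics. As a minimal sketch (the paper’s own feature pipeline and classifiers are not given in this abstract, and the toy labels below are invented), per-class precision, recall, and F1 can be computed directly from true and predicted labels:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 from true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical sentiment labels for five Telugu headlines
true_labels = ["pos", "pos", "neg", "neu", "neg"]
predicted   = ["pos", "neg", "neg", "neu", "pos"]
p, r, f = precision_recall_f1(true_labels, predicted, "pos")
```

Computing these per class and averaging is how multi-class comparisons like the one described are typically scored.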


2020 ◽  
pp. 009365022092132
Author(s):  
Mufan Luo ◽  
Jeffrey T. Hancock ◽  
David M. Markowitz

This article focuses on message credibility and detection accuracy of fake and real news as represented on social media. We developed a deception detection paradigm for news headlines and conducted two online experiments to examine the extent to which people (1) perceive news headlines as credible, and (2) accurately distinguish fake and real news across three general topics (i.e., politics, science, and health). Both studies revealed that people often judged news headlines as fake, suggesting a deception-bias for news in social media. Across studies, we observed an average detection accuracy of approximately 51%, a level consistent with most research using this deception detection paradigm with equal lie-truth base-rates. Study 2 evaluated the effects of endorsement cues in social media (e.g., Facebook likes) on message credibility and detection accuracy. Results showed that headlines associated with a high number of Facebook likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. These studies introduce truth-default theory to the context of news credibility and advance our understanding of how biased processing of news information can impact detection accuracy with social media endorsement cues.
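Two quantities drive the analysis above: overall detection accuracy and the rate at which headlines are judged fake, which indexes the deception bias. A minimal sketch with hypothetical judgments (the paradigm uses equal real/fake base rates, mirrored in the toy data):

```python
def detection_stats(truth, judgments):
    """Overall accuracy and the proportion of headlines judged fake."""
    n = len(truth)
    accuracy = sum(t == j for t, j in zip(truth, judgments)) / n
    judged_fake_rate = sum(j == "fake" for j in judgments) / n
    return accuracy, judged_fake_rate

# Toy data: two real and two fake headlines, participant judgments alongside
truth     = ["real", "real", "fake", "fake"]
judgments = ["fake", "real", "fake", "fake"]
acc, fake_rate = detection_stats(truth, judgments)
```

A judged-fake rate well above 0.5 under equal base rates is the deception-bias pattern the studies report, while accuracy near 0.5 matches the roughly 51% detection level observed.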


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jian Xing ◽  
Shupeng Wang ◽  
Xiaoyu Zhang ◽  
Yu Ding

Fake news can cause widespread and tremendous political and social influence in the real world. The intentional misleading of fake news makes automatic detection an important and challenging problem, which is not well understood at present. Meanwhile, fake news can contain true evidence imitating true news and can present different degrees of falsity, which further aggravates the difficulty of detection. On the other hand, the speaker of fake news provides rich social behavior information, which offers unprecedented opportunities for advanced fake news detection. In this study, we propose a new hybrid deep model based on behavior information (HMBI), which uses the social behavior information of the speaker to detect fake news more accurately. Specifically, we model news content and social behavior information simultaneously to detect the degree of falsity of news. Experimental analysis on real-world data shows that the detection accuracy of HMBI is increased by 10.41% on average, the highest among existing models. The detection accuracy of fake news exceeds 50% for the first time.
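The abstract does not specify HMBI’s architecture, but its core idea, scoring news by jointly weighting content features and speaker-behavior features, can be caricatured as a single fused score (all feature names and weights below are hypothetical, not the paper’s actual model):

```python
import math

def hybrid_falsity_score(content, behavior, w_content, w_behavior, bias=0.0):
    """Fuse content and speaker-behavior features into one falsity score in [0, 1]."""
    z = sum(w * x for w, x in zip(w_content, content))
    z += sum(w * x for w, x in zip(w_behavior, behavior))
    z += bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

# Toy example: one content feature (clickbait-ness) and one behavior feature
# (speaker's historical false-statement rate), with invented weights
score = hybrid_falsity_score([1.0], [1.0], [2.0], [1.0])
```

In the actual model, the two feature groups would be learned representations rather than hand-picked scalars, but the fusion-before-decision structure is the point being illustrated.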


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Arvin Hansrajh ◽  
Timothy T. Adeliyi ◽  
Jeanette Wing

The exponential growth in fake news and its inherent threat to democracy, public trust, and justice has escalated the necessity for fake news detection and mitigation. Detecting fake news is a complex challenge as it is intentionally written to mislead and hoodwink. Humans are not good at identifying fake news: human detection is reported in the literature at a rate of 54%, with an additional 4% regarded as speculative. The significance of fighting fake news is exemplified during the present pandemic. Consequently, social networks are ramping up the usage of detection tools and educating the public in recognising fake news. In the literature, it was observed that several machine learning algorithms have been applied to the detection of fake news with limited and mixed success. However, several advanced machine learning models are not being applied, although recent studies demonstrate the efficacy of the ensemble machine learning approach; hence, the purpose of this study is to assist in the automated detection of fake news. An ensemble approach is adopted to help resolve the identified gap. This study proposes a blended machine learning ensemble model developed from logistic regression, support vector machine, linear discriminant analysis, stochastic gradient descent, and ridge regression, which is then used on a publicly available dataset to predict whether a news report is true or not. The proposed model is appraised against popular classical machine learning models, with performance metrics such as AUC, ROC, recall, accuracy, precision, and F1-score used to measure its performance. The results presented show that the proposed model outperformed other popular classical machine learning models.
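The blending step itself is conceptually simple: combine the base models’ predicted fake-probabilities and threshold the result. A sketch assuming uniform soft averaging (the paper may weight its five base learners differently; the probabilities below are invented):

```python
def blend_predict(base_probs, threshold=0.5):
    """Average per-item fake-probabilities across base models, then threshold."""
    n_models = len(base_probs)
    blended = [sum(ps) / n_models for ps in zip(*base_probs)]
    labels = ["fake" if p >= threshold else "real" for p in blended]
    return blended, labels

# Hypothetical fake-probabilities from three base learners on two news items
probs = [[0.9, 0.2],
         [0.7, 0.4],
         [0.8, 0.3]]
blended, labels = blend_predict(probs)
```

Averaging probabilities rather than hard votes is what lets a blended ensemble smooth out individual base learners’ errors, which is the rationale the abstract gives for the ensemble approach.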


2018 ◽  
Vol 7 (3.2) ◽  
pp. 778 ◽  
Author(s):  
SY. Yuliani ◽  
Shahrin Sahib ◽  
Mohd Faizal Abdollah ◽  
Mohammed Nasser Al-Mhiqani ◽  
Aldy Rialdy Atmadja

An email hoax is a form of attack in the cyber world in which an email account is sent fake news, often with the goal of gaining an advantage or raising the sales rating of a product. A hoax can affect many people by damaging the credibility of the image of a person or group. This hoax phenomenon causes anxiety in the community, and even worse effects, because of the potential power of wrong news or information. In this paper we review hoax detection systems, types of hoax, and the machine learning models that have been used to detect hoaxes. This work serves as a basis for further studies on hoax detection systems.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Andrés Abeliuk ◽  
Daniel M. Benjamin ◽  
Fred Morstatter ◽  
Aram Galstyan

Crowdsourcing human forecasts and machine learning models each show promise in predicting future geopolitical outcomes. Crowdsourcing increases accuracy by pooling knowledge, which mitigates individual errors. On the other hand, advances in machine learning have led to machine models that increase accuracy due to their ability to parameterize and adapt to changing environments. To capitalize on the unique advantages of each method, recent efforts have shown improvements by “hybridizing” forecasts—pairing human forecasters with machine models. This study analyzes the effectiveness of such a hybrid system. In a perfect world, independent reasoning by the forecasters combined with the analytic capabilities of the machine models should complement each other to arrive at an ultimately more accurate forecast. However, well-documented biases describe how humans often mistrust and under-utilize such models in their forecasts. In this work, we present a model that can be used to estimate the trust that humans assign to a machine. We use forecasts made in the absence of machine models as prior beliefs to quantify the weights placed on the models. Our model can be used to uncover other aspects of forecasters’ decision-making processes. We find that forecasters trust the model rarely, in a pattern that suggests they treat machine models similarly to expert advisors, but only the best forecasters trust the models when they can be expected to perform well. We also find that forecasters tend to choose models that conform to their prior beliefs as opposed to anchoring on the model forecast. Our results suggest machine models can improve the judgment of a human pool but highlight the importance of accounting for trust and cognitive biases involved in the human judgment process.
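The authors’ use of pre-machine forecasts as prior beliefs suggests a simple identification strategy: if a final forecast is modeled as a convex combination of the prior and the machine forecast, the implied trust weight can be recovered algebraically. This is a sketch of that idea under the stated assumption, not the paper’s actual estimator:

```python
def implied_trust_weight(prior, machine, final):
    """Recover w in final = w * machine + (1 - w) * prior, all probabilities in [0, 1]."""
    if machine == prior:
        return None  # weight is unidentifiable when the two forecasts coincide
    return (final - prior) / (machine - prior)

# A forecaster with prior 0.2 sees a machine forecast of 0.8 and reports 0.5:
# an implied weight of 0.5 means the two sources were trusted equally.
w = implied_trust_weight(0.2, 0.8, 0.5)
```

A weight near 0 recovers the abstract’s finding that forecasters rarely move toward the model, while a weight near 1 would mean full deference to it.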

