Ai4Truth: An In-depth Analysis on Misinformation using Machine Learning and Data Science

2021 ◽  
Author(s):  
Kevin Qu ◽  
Yu Sun

A number of social issues have grown due to the increasing amount of “fake news”. With inevitable exposure to this misinformation, it has become a real challenge for the public to discern accurate truth and knowledge. In this paper, we apply machine learning to investigate the correlations between information and the way people treat it. With enough data, we are able to predict reliably and accurately which groups are most vulnerable to misinformation. In addition, we found that the structure of the survey itself can inform future studies, and that both the way the news articles are presented and the articles themselves contribute to the result.

2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110267
Author(s):  
Sara Dahlman ◽  
Ib T Gulbrandsen ◽  
Sine N Just

Building on critical approaches that understand algorithms in terms of communication, culture and organization, this paper offers the supplementary conceptualization of algorithms as organizational figuration, defined as material and meaningful sociotechnical arrangements that develop in spatiotemporal processes and are shaped by multiple enactments of affordance–agency relations. We develop this conceptualization through a case study of a Danish fintech start-up that uses machine learning to create opportunities for sustainable pensions investments. By way of ethnographic and literary methodology, we provide an in-depth analysis of the dynamic trajectory in and through which the organization gives shape to and takes shape from its key algorithmic tool, mapping the shifting sociotechnical arrangements of the start-up, from its initial search for a viable business model through the development of the algorithm to the public launch of its product. On this basis, we argue that conceptualizing algorithms as organizational figuration enables us to detail not only what algorithms do but also what they are.


As the internet becomes part of our daily routine, online news reading has grown rapidly in popularity. Such news can become a major issue for the public and government bodies (especially politically) if it is fake, hence authentication is necessary. It is essential to flag fake news before it goes viral and misleads society. In this paper, various Natural Language Processing techniques, along with a number of classifiers, are used to assess news content for credibility. Further, this technique can be applied to tasks such as plagiarism checks and checking criminal records.
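The abstract above does not specify its classifiers, so as a minimal sketch of the general approach it describes (text classification for credibility), here is a multinomial Naive Bayes classifier in pure Python over a hypothetical toy headline corpus; the headlines, labels, and function names are all illustrative assumptions, not the paper's actual data or method.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy headlines; real studies use large labeled corpora.
TRAIN = [
    ("miracle cure discovered doctors hate it", "fake"),
    ("you will not believe this shocking secret", "fake"),
    ("celebrity endorses unproven remedy overnight", "fake"),
    ("government publishes annual budget report", "real"),
    ("researchers release peer reviewed study results", "real"),
    ("city council approves new transit plan", "real"),
]

def train_naive_bayes(examples):
    """Count word frequencies per class for a multinomial Naive Bayes model."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the class maximizing the Laplace-smoothed log-likelihood."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        n_label = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (n_label + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

word_counts, class_counts, vocab = train_naive_bayes(TRAIN)
print(classify("shocking miracle secret", word_counts, class_counts, vocab))  # → fake
```

In practice such a model would be trained on TF-IDF or n-gram features over a large labeled corpus; the toy version only illustrates the pipeline shape (tokenize, count per class, score with smoothing).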


2021 ◽  
Vol 09 (02) ◽  
pp. 536-556
Author(s):  
Panagiota Pampouktsi ◽  
Spyridon Avdimiotis ◽  
Manolis Μaragoudakis ◽  
Markos Avlonitis

2020 ◽  
Vol 65 (2) ◽  
pp. 21-39
Author(s):  
Ştefana Ciortea-Neamţiu

Fake news is a big concern for media, audiences and governments. Some journalists are engaged in finding and disclosing fake news. Fake news is also a concern for researchers and journalism professors, but they should not focus only on how fake news works or on how to teach future journalists about it; a bigger challenge is to teach audiences, the public, to make the right choices and identify fake news. Tackling this problem of popularizing science and educating the public should actually be one of the key concerns of journalism professors in Romania today. The purpose of this paper is to propose a list of criteria for identifying fake news through critical thinking, a list that could be easily explained to members of the public so they can make good choices. The core notion used here is quality. A broad discussion on quality in journalism arose at the end of the 1990s in Western Europe, but not in Romania. Therefore, it seems more than appropriate to start it now. Keywords: fake news, media, critical thinking, education, public, criteria.


In the real world, Twitter sentiment analysis (TSA) plays a major role in observing public opinion on the customer side. TSA is more complex than general sentiment analysis because of the pre-processing that Twitter text requires. The maximum number of characters allowed on Twitter is 280. In this article we discuss the influence of text pre-processing techniques on emotion-classification accuracy in two kinds of classification problems and summarize the performance of four pre-processing methods. This paper contributes to consumer-satisfaction sentiment classification and is useful for evaluating large volumes of tweets whose views are somewhat unstructured and either positive, negative, or somewhere in between. We first pre-processed the dataset, then extracted meaningful adjectives from it as a feature vector, selected the feature-vector list, and subsequently applied machine learning classification algorithms, namely Naive Bayes, Random Forest and SVM, along with WordNet-based semantic orientation, which extracts synonyms and similarity for content features. Experiments show that the accuracy (Acc) and average F1-measure (F1-M) of the classifier on Twitter are improved by the pre-processing methods of expanding acronyms and swapping negation, but barely by deleting numbers or stop words.
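The two pre-processing steps the abstract finds most helpful, acronym expansion and negation swapping, can be sketched in a few lines of pure Python. The lexicons below are tiny illustrative assumptions (real systems use much larger dictionaries), and the function name is hypothetical, not from the paper.

```python
import re

# Hypothetical acronym lexicon; production systems use far larger ones.
ACRONYMS = {"gr8": "great", "idk": "i do not know", "imo": "in my opinion"}
# Contracted negations expanded so "don't like" becomes "do not like".
NEGATIONS = {"don't": "do not", "can't": "can not", "won't": "will not", "isn't": "is not"}

def preprocess_tweet(text):
    """Lowercase, strip mentions and URLs, expand acronyms and negations."""
    text = text.lower()
    text = re.sub(r"@\w+|https?://\S+", "", text)  # drop mentions and links
    tokens = text.split()
    tokens = [NEGATIONS.get(t, ACRONYMS.get(t, t)) for t in tokens]
    return " ".join(tokens)

print(preprocess_tweet("@brand idk why ppl don't like it, gr8 product imo"))
# → i do not know why ppl do not like it, great product in my opinion
```

The cleaned string would then feed the feature-extraction and classification stages (e.g. Naive Bayes, Random Forest, SVM) the abstract describes.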


2020 ◽  
Vol 12 ◽  
pp. 270-279
Author(s):  
Robin Kabha ◽  
Ahmed Mostafa Kamel ◽  
Moataz Elbahi ◽  
Abdu Mohamed Dawood Hafiz ◽  
Wided Dafri

With the advent of the internet and the subsequent increase in its use and accessibility, social media networks have become particularly prominent in terms of news being shared online. However, this has drastically changed how real information is assessed and obtained. Hence, this paper aims to assess the impacts of fake news and myths regarding the novel Covid-19 pandemic. Through a systematic review of related studies, supported by relevant literature, the findings include various harmful impacts, ranging from smaller ones such as the spread of misinformation to more sinister ones such as the wrongful use of drugs to cure the disease. Moreover, the paper also discusses the various motives behind the spread of such false information, primarily fuelled by monetary benefits from digital marketing and the like. Overall, the study concludes that the impacts of the spread of fake news and myths are generally harmful to the public at large. In addition, some recommendations for future hedging against the phenomenon and for future studies have also been made.


Author(s):  
Lois Gander ◽  
Diane Rhyason

Universities can enhance the return on the public investment that they represent by collaborating with their natural allies in addressing pressing social issues. That work can be further enhanced by harnessing appropriate digital technologies. In this chapter, the authors profile a current example of a community-led, multi-layered partnership that was formed to strengthen the infrastructure of the charitable sector in Canada. In particular, the chapter demonstrates that the “habit of partnerships” combined with the “habit of technology” is a potent strategy for addressing community needs. The authors argue that no single partnership or technology will transform the academic enterprise, but rather that the widespread adoption of technologies among universities’ allies, competitors, students, and faculty that characterizes the electronically-defined era will compel universities to adopt both the habit of partnerships and the habit of technology. That, in turn, will transform the way universities do their business and those with whom they do it.


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Qiang Liu ◽  
Qiannan Liu

Data analysis and machine learning are the backbones of the current era. Human society has entered the age of machine learning and data science, which increases data capacity. It is widely acknowledged not only that the amount of information grows exponentially, but also that human information management and processing has shifted from manual to computerized, depending mainly on the transformation of information technology, including computers, networks, and communication. This paper aims at a solution to the lag in methods and means of volleyball technique prediction in China. Field visits found that the analysis of techniques and tactics in Chinese volleyball practice is relatively backward, which to a certain extent has hindered the rapid development of Chinese volleyball. Therefore, reforming the methods and means of volleyball technical and tactical analysis in China is a necessary and urgent task. The data analysis and prediction in this paper, based on machine learning and data-mining algorithms applied to volleyball, is an inevitable trend. The proposed model is applied to data produced at the edges of the systems and thoroughly analyzed. The Apriori algorithm is utilized to process the data and predict the strategies of a volleyball match, and is further optimized to perform better data analysis. The effectiveness of the proposed model is also highlighted.
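Since the abstract names Apriori but gives no details, here is a minimal pure-Python sketch of the standard Apriori bottom-up frequent-itemset search, applied to hypothetical rally records (the tactical action names, transactions, and support threshold are all illustrative assumptions, not the paper's data).

```python
# Hypothetical rally records: tactical actions observed in each point.
TRANSACTIONS = [
    {"jump_serve", "quick_attack", "block"},
    {"jump_serve", "quick_attack"},
    {"float_serve", "back_row_attack"},
    {"jump_serve", "quick_attack", "block"},
    {"float_serve", "quick_attack"},
]

def apriori(transactions, min_support):
    """Return all itemsets whose support meets min_support (bottom-up Apriori)."""
    n = len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, current = {}, set()
    for c in items:  # frequent 1-itemsets
        s = sum(1 for t in transactions if c <= t) / n
        if s >= min_support:
            frequent[c] = s
            current.add(c)
    k = 2
    while current:
        # Candidate generation: join frequent (k-1)-itemsets, keep size-k unions.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = set()
        for c in candidates:
            s = sum(1 for t in transactions if c <= t) / n
            if s >= min_support:
                frequent[c] = s
                current.add(c)
        k += 1
    return frequent

freq = apriori(TRANSACTIONS, min_support=0.6)
print(sorted(tuple(sorted(i)) for i in freq))
# → [('jump_serve',), ('jump_serve', 'quick_attack'), ('quick_attack',)]
```

A frequent pair such as {jump_serve, quick_attack} would then ground an association rule ("after a jump serve, a quick attack is likely"), which is the kind of tactical prediction the abstract describes.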


2021 ◽  
Author(s):  
Avi Choudhary

Abstract: The Covid-19 pandemic has taken a major toll on the health and state of our global population. With tough decisions for allocating resources (i.e. vaccines) [1] being made, forecasting through machine learning has become more important than ever. Moreover, as vaccines are brought to the public and cases go down, it is time to reflect on where the pandemic has taken the greatest toll, for the purpose of future reform. This research illustrates two different models and algorithms for COVID-19 forecasting: autoregressive models and Recurrent Neural Networks (RNNs). The results show the true potential of RNNs to work with sequential and time-series data to forecast future cases and deaths in different states. As the paper utilizes the tanh activation function and multiple LSTM layers, the research shows the importance of machine learning and its ability to help politicians make decisions about supporting states during the pandemic and future reform. The time-series data is also pre-processed using rolling statistics and cleaned for the autoregressive model and RNN layers. Thus, we show that, along with Recurrent Neural Network layers, activation functions play a crucial role in the accuracy of the forecast.
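The rolling-statistics pre-processing step mentioned above can be sketched with a trailing rolling mean in pure Python; the case counts and window size below are hypothetical, not the study's data.

```python
def rolling_mean(series, window):
    """Smooth a time series with a trailing rolling mean: a common
    pre-processing step before fitting autoregressive or LSTM models."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical daily case counts; a 3-day window damps day-to-day noise.
cases = [10, 12, 30, 14, 16, 40, 18]
print(rolling_mean(cases, 3))
```

Smoothing like this reduces reporting noise (weekend dips, batch corrections) so the model fits the underlying trend rather than daily artifacts; in practice one would use a library routine such as a pandas rolling window instead of hand-rolled loops.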

