General public’s attitude toward governments implementing digital contact tracing to curb COVID-19 – a study based on natural language processing

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Praveen S.V. ◽  
Rajesh Ittamalla

Purpose Governments worldwide are taking various measures to prevent the spread of the COVID-19 virus. One such effort is digital contact tracing. However, digital contact tracing has been met with criticism, as many view it as an attempt by governments to control people and a fundamental breach of privacy. Using machine learning techniques, this study aims to understand the general public’s emotions toward contact tracing and to determine whether the public’s attitude toward digital contact tracing changed over the months of the crisis. This study also analyzes the major concerns voiced by the general public regarding digital contact tracing. Design/methodology/approach For the analysis, data were collected from Reddit. Reddit posts discussing digital contact tracing during the COVID-19 crisis were collected from February 2020 to July 2020. A total of 5,025 original Reddit posts were used for this study. Natural language processing techniques were used to gauge the sentiments of the general public about contact tracing, and latent Dirichlet allocation was used to identify the major issues raised by the general public when discussing contact tracing. Findings This study was conducted in two parts. Study 1 shows that the percentage of the general public viewing contact tracing positively did not change over the study period (March 2020 to July 2020). However, compared to the initial month of the crisis, the later months saw a considerable increase in negative sentiment and a decrease in neutral sentiment regarding digital contact tracing. Study 2 finds that the major issues the public raises in its negative posts are violation of privacy, fear for safety and lack of trust in government. Originality/value Although numerous studies have examined how to implement contact tracing effectively, to the best of the authors’ knowledge, this is the first study conducted with the objective of understanding the general public’s perception of contact tracing.
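Neither the sentiment pipeline nor the topic model is specified in detail above, so the following is a minimal sketch, assuming NLTK's VADER analyzer for post-level sentiment labels and scikit-learn's latent Dirichlet allocation for themes in the negative posts; the sample posts are invented for illustration.

```python
# Hypothetical sketch: label Reddit posts as positive/negative/neutral with VADER,
# then surface recurring themes in the negative posts with LDA.
# NLTK and scikit-learn are assumptions; the paper does not name its libraries.
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "Contact tracing apps feel like government surveillance of my phone.",
    "Happy to install the app if it helps reopen businesses safely.",
    "Not sure the tracing app changes anything for me either way.",
]

# One-time setup: nltk.download("vader_lexicon")
sia = SentimentIntensityAnalyzer()

def label(post):
    score = sia.polarity_scores(post)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

labels = [label(p) for p in posts]

# Topic modelling over the negative posts to see which concerns dominate.
negative = [p for p, l in zip(posts, labels) if l == "negative"] or posts
vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(negative)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {top}")
```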

2020 ◽  
Vol 30 (2) ◽  
pp. 155-174
Author(s):  
Tim Hutchinson

Purpose This study aims to provide an overview of recent efforts relating to natural language processing (NLP) and machine learning applied to archival processing, particularly appraisal and sensitivity review, and to propose functional requirements and workflow considerations for transitioning from experimental to operational use of these tools. Design/methodology/approach The paper has four main sections: 1) a short overview of the NLP and machine learning concepts referenced in the paper; 2) a review of the literature reporting on NLP and machine learning applied to archival processes; 3) an overview and commentary on key existing and developing tools that use NLP or machine learning techniques for archives; and 4) a discussion, informed by this review and analysis, of functional requirements and workflow considerations for NLP and machine learning tools for archival processing. Findings Applications for processing e-mail have received the most attention so far, although most initiatives have been experimental or project based. It now seems feasible to branch out and develop more generalized tools for born-digital, unstructured records. Effective NLP and machine learning tools for archival processing should be usable, interoperable, flexible, iterative and configurable. Originality/value Most implementations of NLP for archives have been experimental or project based. The main exception that has moved into production is ePADD, which includes robust NLP features through its named entity recognition module. This paper takes a broader view, assessing the prospects and possible directions for integrating NLP tools and techniques into archival workflows.
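ePADD's named entity recognition is its own production module, so the snippet below is only an illustration of the general idea, assuming spaCy and its small English model: extracting people, organizations, places and dates from a born-digital record such as an e-mail body, the kind of signal an appraisal or sensitivity-review workflow could act on.

```python
# Illustrative sketch only; this is not ePADD's implementation. spaCy is used
# here to show the kind of entity extraction an archival workflow might apply
# to born-digital records such as e-mail bodies.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

email_body = (
    "Please forward the 1998 budget memo to Margaret Atwood at the "
    "University of Saskatchewan before the Ottawa meeting in March."
)

doc = nlp(email_body)
for ent in doc.ents:
    # Labels such as PERSON, ORG, GPE and DATE can drive appraisal rules,
    # e.g. flagging records that mention named individuals for sensitivity review.
    print(ent.text, ent.label_)
```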


Author(s):  
Rashida Ali ◽  
Ibrahim Rampurawala ◽  
Mayuri Wandhe ◽  
Ruchika Shrikhande ◽  
Arpita Bhatkar

The internet provides a medium to connect with individuals of similar or different interests, creating a hub. Since a large number of users participate on these platforms, a user can receive a high volume of messages from different individuals, creating chaos and unwanted messages. These messages sometimes contain true information and sometimes false, which creates confusion in the minds of users and marks the first step toward spam messaging. A spam message is an irrelevant and unsolicited message sent by a known or unknown user, which may lead to a sense of insecurity among users. In this paper, different machine learning algorithms were trained and tested with natural language processing (NLP) to classify messages as spam or ham.
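The paper compares several algorithms without reproducing code, so here is a minimal sketch of one common spam/ham pipeline, assuming a TF-IDF representation and a multinomial Naive Bayes classifier in scikit-learn; the inline messages and labels are toy data.

```python
# Minimal spam/ham sketch: TF-IDF features feeding a Naive Bayes classifier.
# The paper evaluates several algorithms; this shows only one assumed setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WINNER!! Claim your free prize now by clicking this link",
    "Are we still meeting for lunch tomorrow?",
    "Congratulations, you have been selected for a cash reward",
    "Can you send me the notes from today's lecture?",
]
labels = ["spam", "ham", "spam", "ham"]

model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Free entry: win cash now", "See you at the meeting"]))
# A real evaluation would use a train/test split and metrics such as
# precision and recall, since spam data sets are usually imbalanced.
```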


Author(s):  
Marina Sokolova ◽  
Stan Szpakowicz

This chapter presents applications of machine learning techniques to traditional problems in natural language processing, including part-of-speech tagging, entity recognition and word-sense disambiguation. People usually solve such problems without difficulty, or at least do a very good job. Linguistics may suggest labour-intensive ways of manually constructing rule-based systems. It is, however, the easy availability of large collections of texts that has made machine learning a method of choice for processing volumes of data well above the human capacity. One of the main purposes of text processing is all manner of information extraction and knowledge extraction from such large text collections. Machine learning methods discussed in this chapter have stimulated wide-ranging research in natural language processing and helped build applications with serious deployment potential.
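As a concrete illustration of two of the tasks named above, the sketch below uses NLTK (an assumption, not the chapter's own toolkit) for part-of-speech tagging and simplified-Lesk word-sense disambiguation.

```python
# A brief sketch of two classic NLP tasks, assuming NLTK as the toolkit:
# part-of-speech tagging and dictionary-based word-sense disambiguation.
import nltk
from nltk.wsd import lesk

# One-time downloads: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
# nltk.download("wordnet")
sentence = "The bank approved the loan after reviewing the river bank survey."
tokens = nltk.word_tokenize(sentence)

# Part-of-speech tagging assigns a grammatical category to each token.
print(nltk.pos_tag(tokens))

# Word-sense disambiguation: pick the WordNet sense of "bank" that best fits the context.
sense = lesk(tokens, "bank", pos="n")
print(sense, "-", sense.definition() if sense else "no sense found")
```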


10.2196/23957 ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. e23957
Author(s):  
Chengda Zheng ◽  
Jia Xue ◽  
Yumin Sun ◽  
Tingshao Zhu

Background During the COVID-19 pandemic in Canada, Prime Minister Justin Trudeau provided updates on the novel coronavirus and the government’s responses to the pandemic in his daily briefings from March 13 to May 22, 2020, delivered on the official Canadian Broadcasting Corporation (CBC) YouTube channel. Objective The aim of this study was to examine comments on Canadian Prime Minister Trudeau’s COVID-19 daily briefings by YouTube users and track these comments to extract the changing dynamics of the opinions and concerns of the public over time. Methods We used machine learning techniques to longitudinally analyze a total of 46,732 English YouTube comments that were retrieved from 57 videos of Prime Minister Trudeau’s COVID-19 daily briefings from March 13 to May 22, 2020. A natural language processing model, latent Dirichlet allocation, was used to choose salient topics among the sampled comments for each of the 57 videos. Thematic analysis was used to classify and summarize these salient topics into different prominent themes. Results We found 11 prominent themes, including strict border measures, public responses to Prime Minister Trudeau’s policies, essential work and frontline workers, individuals’ financial challenges, rental and mortgage subsidies, quarantine, government financial aid for enterprises and individuals, personal protective equipment, Canada and China’s relationship, vaccines, and reopening. Conclusions This study is the first to longitudinally investigate public discourse and concerns related to Prime Minister Trudeau’s daily COVID-19 briefings in Canada. This study contributes to establishing a real-time feedback loop between the public and public health officials on social media. Hearing and reacting to real concerns from the public can enhance trust between the government and the public to prepare for future health emergencies.
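The study's exact pipeline is not reproduced here; the following sketch assumes gensim for the per-video step, fitting a small LDA model to the comments under each briefing video so that the dominant topics can feed the subsequent thematic analysis. The comment strings and dates are invented.

```python
# Hypothetical per-video topic extraction: each briefing video's comments are
# modelled separately with LDA; gensim is an assumption, not the authors' tool.
from gensim import corpora, models
from gensim.utils import simple_preprocess

comments_by_video = {
    "2020-03-13": [
        "Close the border now, travellers keep bringing cases in",
        "Thank you for the financial aid for workers who lost their jobs",
    ],
    "2020-05-01": [
        "When will small businesses get the wage subsidy",
        "Frontline workers need masks and protective equipment",
    ],
}

for video, comments in comments_by_video.items():
    texts = [simple_preprocess(c) for c in comments]          # tokenize and lowercase
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]            # bag-of-words per comment
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                          passes=10, random_state=0)
    print(video)
    for topic_id, words in lda.print_topics(num_words=4):
        print("  topic", topic_id, ":", words)
```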


Author(s):  
Tamanna Sharma ◽  
Anu Bajaj ◽  
Om Prakash Sangwan

Sentiment analysis is the computational measurement of attitudes, opinions and emotions (e.g. positive/negative) with the help of text mining and natural language processing of words and phrases. Incorporating machine learning techniques with natural language processing helps in analysing and predicting sentiments more precisely. Sometimes, however, machine learning techniques are incapable of predicting sentiments due to the unavailability of labelled data. To overcome this problem, an advanced computational technique called deep learning comes into play. This chapter highlights the latest studies on the use of deep learning techniques, such as convolutional and recurrent neural networks, in sentiment analysis.
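As a concrete example of the recurrent models the chapter surveys, here is a compact Keras sketch (the framework, hyperparameters and toy data are all assumptions) of an embedding-plus-LSTM binary sentiment classifier.

```python
# Compact sketch of a recurrent sentiment model: word embeddings feed an LSTM,
# whose summary vector drives a binary positive/negative prediction.
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len = 5000, 50

model = models.Sequential([
    layers.Embedding(vocab_size, 64),          # word indices -> dense vectors
    layers.LSTM(32),                           # sequence -> fixed-size summary
    layers.Dense(1, activation="sigmoid"),     # positive vs negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in for tokenized, padded reviews and their labels.
x = np.random.randint(1, vocab_size, size=(8, max_len))
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2]))
```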


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Krishnadas Nanath ◽  
Supriya Kaitheri ◽  
Sonia Malik ◽  
Shahid Mustafa

Purpose The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling and linguistic features of news articles to predict the probability of fake news. Design/methodology/approach A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data efficiently. Lexicon-based emotion analysis identified eight kinds of emotions in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared. Findings The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content, such as references to illegal activities and crime, was associated with fake news. Articles whose title and text exhibited similar sentiments were found to have a lower chance of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly. Practical implications Several systems and social media platforms today are trying to implement fake news detection methods to filter content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors. Originality/value While several studies have explored fake news detection, this study uses a new perspective based on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
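The study's full feature set (eight emotions, five topics, linguistic features) is not reproduced here; the sketch below assumes a reduced stand-in, computing a title/text sentiment-resonance feature with NLTK's VADER plus simple length features, and fitting scikit-learn's logistic regression on toy articles.

```python
# Hedged sketch of the predictive step: hand-crafted features (sentiment
# resonance between title and text, plus length features) feed a logistic
# regression model. Feature choices and the toy data are illustrative only.
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.linear_model import LogisticRegression

sia = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon")

def features(title, text):
    t_sent = sia.polarity_scores(title)["compound"]
    x_sent = sia.polarity_scores(text)["compound"]
    resonance = 1.0 - abs(t_sent - x_sent)   # similar title/text sentiment -> high resonance
    return [resonance, len(title.split()), len(text.split()), x_sent]

articles = [
    ("SHOCKING crime wave hits city, police helpless", "Unverified reports claim chaos.", 1),
    ("Council approves new library budget", "The city council voted 7-2 to fund the library.", 0),
    ("Miracle cure banned by doctors!", "Sources say this one trick scares physicians.", 1),
    ("Local school wins science fair", "Students presented projects on renewable energy.", 0),
]

X = np.array([features(t, x) for t, x, _ in articles])
y = np.array([label for _, _, label in articles])  # 1 = fake, 0 = genuine

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))   # in-sample check only; a real study would cross-validate
print(clf.coef_)        # signs hint at how each feature shifts the fake-news odds
```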

