Texas Public Agencies’ Tweets and Public Engagement During the COVID-19 Pandemic: Natural Language Processing Approach (Preprint)

2020 ◽  
Author(s):  
Lu Tang ◽  
Wenlin Liu ◽  
Benjamin Thomas ◽  
Hong Thoai Nga Tran ◽  
Wenxue Zou ◽  
...  

BACKGROUND The ongoing COVID-19 pandemic is characterized by divergent morbidity and mortality rates across states, cities, rural areas, and neighborhoods. The absence of a national strategy for battling the pandemic also leaves state and local governments responsible for creating their own response strategies and policies. OBJECTIVE This study examines the content of COVID-19–related tweets posted by public health agencies in Texas and how content characteristics predict the level of public engagement. METHODS All COVID-19–related tweets (N=7269) posted by Texas public agencies during the first 6 months of 2020 were classified using natural language processing in terms of each tweet’s function (providing information, promoting action, or building community), the preventive measures mentioned, and the health beliefs discussed. Hierarchical linear regressions were conducted to explore how tweet content predicted public engagement. RESULTS The information function was the most prominent, followed by the action and community functions. Beliefs regarding susceptibility, severity, and benefits were the most frequently covered health beliefs. Tweets that served the information or action functions were more likely to be retweeted, while tweets that served the action and community functions were more likely to be liked. Tweets that provided susceptibility information generated the most public engagement in terms of both retweets and likes. CONCLUSIONS Public health agencies should continue to use Twitter to disseminate information, promote action, and build communities. They need to improve their strategies for designing social media messages about the benefits of disease prevention behaviors and audiences’ self-efficacy.
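A minimal sketch of the analysis pipeline this abstract describes: crude keyword rules stand in for the study's trained NLP classifier, the tweets and counts are invented, and the paper's hierarchical linear regression is simplified to plain OLS.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sample of agency tweets with engagement counts.
tweets = pd.DataFrame({
    "text": [
        "COVID-19 testing sites are open across the county",
        "Wear a mask and wash your hands to protect others",
        "Thank you to our frontline workers; we are in this together",
        "Daily case counts for the region are updated online",
        "Stay home if you feel sick and get tested",
        "Proud of our community volunteers at the food bank",
    ],
    "retweets": [42, 17, 88, 30, 21, 95],
})

# Stand-in for the study's trained NLP classifier: keyword rules
# that tag each tweet with one of the three functions.
def tweet_function(text: str) -> str:
    t = text.lower()
    if any(w in t for w in ("wear a mask", "wash your hands", "stay home", "get tested")):
        return "action"
    if any(w in t for w in ("thank", "together", "community", "volunteers")):
        return "community"
    return "information"

tweets["function"] = tweets["text"].apply(tweet_function)

# Regress engagement on the function labels (OLS shown for brevity).
model = smf.ols("retweets ~ C(function)", data=tweets).fit()
print(model.summary().tables[1])
```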

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Stefano Landi ◽  
Antonio Costantini ◽  
Marco Fasan ◽  
Michele Bonazzi

Purpose: The purpose of this exploratory study is to investigate why and how public health agencies employed social media during the coronavirus disease 2019 (COVID-19) outbreak to foster public engagement and dialogic accounting. Design/methodology/approach: The authors analysed the official Facebook pages of the leading public health agencies in Italy, the United Kingdom, and New Zealand, collecting data on the number of posts, popularity, commitment, and followers before and during the outbreak. The authors also performed a content analysis to identify the topics covered by the posts. Findings: Empirical results suggest that social media was used extensively as a public engagement tool in all three countries under analysis but, because of legitimacy threats and resource scarcity, was used as a dialogic accounting tool only in New Zealand. Findings also suggest that fake news developed more extensively in contexts where the public body did not foster dialogic accounting. Practical implications: Public agencies may be interested in knowing the pros and cons of using social media as a public engagement and dialogic accounting tool. They may also leverage dialogic accounting to limit fake news. Originality/value: This study is one of the first to examine the nature and role of social media as an accountability tool during public health crises. In many contexts, COVID-19 forced public health agencies, for the first time, to engage heavily with the public and to develop new skills, so this study paves the way for numerous future research ideas.
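A toy illustration of the before/during comparison of engagement metrics the study describes; the agencies' figures here are invented, and the real analysis also included followers and a content analysis of topics.

```python
import pandas as pd

# Hypothetical post-level metrics in the spirit of the study's
# Facebook data: posts, likes ("popularity"), comments ("commitment").
posts = pd.DataFrame({
    "agency": ["IT", "IT", "UK", "UK", "NZ", "NZ"],
    "period": ["before", "during"] * 3,
    "n_posts": [30, 95, 25, 80, 20, 110],
    "likes": [1200, 9800, 900, 7600, 700, 15000],
    "comments": [150, 2100, 90, 1500, 60, 4200],
})

# Engagement per post, before versus during the outbreak.
posts["likes_per_post"] = posts["likes"] / posts["n_posts"]
posts["comments_per_post"] = posts["comments"] / posts["n_posts"]
print(posts.pivot(index="agency", columns="period",
                  values=["likes_per_post", "comments_per_post"]))
```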


2011 ◽  
Vol 8 (s1) ◽  
pp. S116-S124 ◽  
Author(s):  
Jennifer Dill ◽  
Deborah Howe

Background: Research has established that built environments, including street networks, bicycle and pedestrian infrastructure, and land uses, can positively affect the frequency and duration of daily physical activity. Attention is now being given to policy frameworks, such as zoning codes, that set the standards and expectations for the built environment. Methods: We examined the adoption and implementation of mixed-use and related zoning provisions, with specific attention to the role physical activity serves as a motivation for such policies and the extent to which public health agencies influence the adoption process. Planning directors were surveyed from 53 communities with outstanding examples of mixed-use development and from 145 randomly selected midsized communities. Results: Physical activity is not a dominant motivator in master plans or zoning codes, and public health agencies played minor roles in policy adoption. However, physical activity as a motivation appears to have increased in recent years and is associated with higher levels of policy innovation. Conclusions: Recommendations include framing the importance of physical activity in terms of other dominant concerns such as livability, dynamic centers, and economic development. Health agencies are encouraged to work in coalitions to focus arguments on behalf of physical activity.


2021 ◽  
pp. 69-86
Author(s):  
Claudio Fuentes Bravo ◽  
Julián Goñi Jerez

Drawing on our experience with a large-scale public engagement exercise in Chile, we draw conclusions about how to adapt and improve the Critical Debate Model for an online format. We highlight the importance of epistemic opposition and structured annotation in the execution of debates, and we explore the possibilities of automated analysis using natural language processing. We conclude by describing how an online version of the Critical Debate Model could be implemented.


2020 ◽  
Author(s):  
Patrick James Ward ◽  
April M Young

BACKGROUND Public health surveillance is critical to detecting emerging population health threats and improvements. Surveillance data have increased in size and complexity, posing challenges to data management and analysis. Natural language processing (NLP) and machine learning (ML) are valuable tools for analyzing unstructured free-text data and have been used in innovative ways to examine a variety of health outcomes. OBJECTIVE Given the cross-disciplinary applications of NLP and ML, research on their applications in surveillance has been disseminated in a variety of outlets. As such, the aim of this narrative review was to describe the current state of NLP and ML use in surveillance science and to identify directions for future research. METHODS Information was abstracted from articles describing the use of natural language processing and machine learning in public health surveillance, identified through a PubMed search. RESULTS Twenty-two articles met the review criteria: 12 involved traditional surveillance data sources and 10 involved online media sources. The traditional sources analyzed with NLP and ML consisted primarily of death certificates (n=6) and hospital data (n=5), while the online media sources were primarily Twitter (n=8). CONCLUSIONS The reviewed articles demonstrate the potential of NLP and ML to enhance surveillance by improving the timeliness of surveillance, identifying cases in the absence of standardized case definitions, and enabling the mining of social media for public health surveillance.
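A minimal, hypothetical sketch of one pattern this review describes: training a classifier to flag cases of interest from free text (here, invented death-certificate snippets) where no standardized case definition exists.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented cause-of-death text standing in for death-certificate
# fields; labels mark an outcome of interest (1 = overdose death).
texts = [
    "acute opioid toxicity fentanyl overdose",
    "multiple drug intoxication heroin",
    "ischemic heart disease atherosclerosis",
    "metastatic lung carcinoma",
]
labels = [1, 1, 0, 0]

# TF-IDF features into a naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)

# Score a new, unseen certificate.
print(clf.predict(["accidental overdose of oxycodone"]))
```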


2015 ◽  
Vol 10 (5) ◽  
pp. 830-844 ◽  
Author(s):  
Kentaro Inui ◽  
Yotaro Watanabe ◽  
Kenshi Yamaguchi ◽  
Shingo Suzuki ◽  
...  

During times of disaster, local government departments and divisions need to communicate a broad range of disaster management information to share an understanding of the changing situation. This paper addresses two issues: how to effectively use a computer database system to communicate disaster management information, and how to apply natural language processing technology to reduce the human labor required to enter a vast amount of information into the database. The database schema was designed by analyzing a collection of real-life disaster management information and the specifications of existing standardized systems. Our data analysis reveals that the schema sufficiently covers the information exchanged within a local government during the Great East Japan Earthquake. Our prototype system is designed to allow local governments to introduce it at low cost: (i) the system’s user interface facilitates the operations for entering given information into the database, (ii) the system can be easily customized to each local municipality by simply replacing the dictionary and the sample data used to train the system, and (iii) the system can automatically adapt to each local municipality or each disaster incident through its capability to learn from the user’s corrections to its language processing outputs.
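A toy illustration of the dictionary-driven extraction the paper describes: mapping a free-text disaster report onto a flat database record, with the dictionary as the replaceable piece a municipality would customize. The field names, patterns, and dictionary entries are hypothetical.

```python
import re

# Hypothetical, replaceable dictionary mapping surface terms to
# normalized schema values; swapping this is how such a system
# would be customized to a given municipality.
FACILITY_DICT = {"elementary school": "shelter", "city hall": "government office"}
NEED_TERMS = ["water", "blankets", "food"]

def extract_record(report: str) -> dict:
    """Map a free-text report onto a flat database record."""
    record = {"facility_type": None, "needs": [], "count": None}
    low = report.lower()
    for term, norm in FACILITY_DICT.items():
        if term in low:
            record["facility_type"] = norm
    record["needs"] = [t for t in NEED_TERMS if t in low]
    m = re.search(r"(\d+)\s+(?:people|evacuees)", low)
    if m:
        record["count"] = int(m.group(1))
    return record

print(extract_record("About 120 evacuees at the elementary school need water and blankets."))
```

In the system itself, a learned model would replace these rules and improve from the user's corrections; the dictionary-swap customization point is the part this sketch preserves.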


Author(s):  
Anton Ninkov ◽  
Kamran Sedig

This paper reports on and describes VINCENT, a visual analytics system designed to help public health stakeholders (i.e., users) make sense of data from websites involved in the online debate about vaccines. VINCENT allows users to explore visualizations of data from a group of 37 vaccine-focused websites. These websites differ in their position on vaccines, their topics of focus, their geographic location, and their sentiment toward the efficacy and morality of vaccines, both specific vaccines and vaccines in general. By integrating webometrics, natural language processing of website text, data visualization, and human-data interaction, VINCENT helps users explore complex data that would be difficult, if not impossible, to understand and analyze without the aid of computational tools. The objectives of this paper are to explore (A) the feasibility of developing a visual analytics system that seamlessly integrates webometrics, natural language processing of website text, data visualization, and human-data interaction; (B) how a visual analytics system can help with the investigation of the online vaccine debate; and (C) what needs to be taken into consideration when developing such a system. This paper demonstrates that visual analytics systems can integrate different computational techniques; that such systems can help with the exploration of public health online debates distributed across a set of websites; and that care should go into the design of the different components of such systems.
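A minimal sketch of the kind of view such a system might offer: websites placed by stance and text sentiment, sized by a webometric measure. The site names and scores are invented, and the actual VINCENT visualizations are richer and interactive.

```python
import matplotlib.pyplot as plt

# Invented per-website summary scores; the real system derives
# such values from webometrics and NLP of site text.
sites = ["siteA", "siteB", "siteC", "siteD"]
stance = [-0.8, -0.3, 0.4, 0.9]      # -1 anti-vaccine .. +1 pro-vaccine
sentiment = [-0.5, -0.1, 0.2, 0.6]   # mean sentiment of site text
inlinks = [120, 45, 300, 80]         # webometric measure, sizes the markers

plt.scatter(stance, sentiment, s=inlinks)
for name, x, y in zip(sites, stance, sentiment):
    plt.annotate(name, (x, y))
plt.xlabel("Position on vaccines")
plt.ylabel("Mean text sentiment")
plt.title("Toy view of a VINCENT-style website map")
plt.show()
```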


2021 ◽  
Author(s):  
Jillian Ryan ◽  
Hamza Sellak ◽  
Emily Brindal

BACKGROUND Natural language processing is a machine learning technique that uses computer algorithms to detect patterns and themes in unstructured datasets, commonly those containing text data. Machine learning can aid in understanding the impacts of novel and disruptive events and therefore offers myriad public health applications. OBJECTIVE This study aims to explore community sentiment towards COVID-19 and the nature of the impacts that COVID-19 has had on people, using natural language processing on a linked research dataset. METHODS Stanford CoreNLP was used to detect sentiment in qualitative COVID-19 impact stories from 3,483 Australian adults. Common themes were categorised according to the Theoretical Life Domains framework, and a multinomial regression analysis was conducted to identify psychological and demographic predictors of sentiment. RESULTS About one-third of participants (33%) expressed negative sentiment towards COVID-19, while a further 44% expressed neutral sentiment and 23% expressed positive sentiment. Of the Theoretical Life Domains, behavioural regulation was by far the most commonly impacted, followed by environmental context and resources, emotion, and social influences. Negative sentiment was predicted by financial stress and lower subjective wellbeing. CONCLUSIONS COVID-19 and its containment measures have had dramatic impacts on Australian adults. The ability to regulate health and social behaviours was among the most common impacts, raising concerns about the effects of public health crises on chronic health and mental health conditions. Positive effects of COVID-19, related to greater flexibility in working arrangements and reductions in life ‘busyness’, were also documented.
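A minimal sketch of sentence-level sentiment detection with Stanford CoreNLP, accessed here through the stanza Python client. This assumes a local CoreNLP installation (with CORENLP_HOME set), and the example text is invented, not study data.

```python
from stanza.server import CoreNLPClient

text = "Lockdown was stressful. Working from home gave me more family time."

# The client starts and stops the Java CoreNLP server; the sentiment
# annotator requires the parser, hence "parse" in the annotator list.
with CoreNLPClient(annotators=["tokenize", "ssplit", "parse", "sentiment"],
                   memory="4G", be_quiet=True) as client:
    ann = client.annotate(text)
    for sentence in ann.sentence:
        words = " ".join(tok.word for tok in sentence.token)
        # sentence.sentiment is a label such as "Negative" or "Positive".
        print(sentence.sentiment, "--", words)
```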


2019 ◽  
Vol 18 ◽  
pp. 160940691988702 ◽  
Author(s):  
William Leeson ◽  
Adam Resnick ◽  
Daniel Alexander ◽  
John Rovers

Qualitative data-analysis methods provide thick, rich descriptions of subjects’ thoughts, feelings, and lived experiences but may be time-consuming, labor-intensive, or prone to bias. Natural language processing (NLP) is a machine learning technique from computer science that uses algorithms to analyze textual data. NLP allows the processing of large amounts of data almost instantaneously. As researchers become conversant with NLP, it is being employed more frequently outside computer science and shows promise as a tool for analyzing qualitative data in public health. This is a proof-of-concept paper evaluating the potential of NLP to analyze qualitative data. Specifically, we ask whether NLP can support conventional qualitative analysis and, if so, what its role is. We compared a qualitative method of open coding with two forms of NLP, topic modeling and Word2Vec, to analyze transcripts from interviews conducted in rural Belize querying men about their health needs. All three methods returned a series of terms that captured ideas and concepts in subjects’ responses to interview questions. Open coding returned 5–10 words or short phrases for each question. Topic modeling returned a series of word-probability pairs that quantified how well a word captured the topic of a response. Word2Vec returned a list of words for each interview question, ordered by which words were predicted to best capture the meaning of the passage. For most interview questions, all three methods returned conceptually similar results. NLP may be a useful adjunct to qualitative analysis. NLP may be performed after data have undergone open coding as a check on the accuracy of the codes. Alternatively, researchers can perform NLP prior to open coding and use the results to guide the creation of their codebook.
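A minimal sketch of the two NLP techniques named here, using gensim; the tokenized "responses" are invented stand-ins for interview transcripts, and a corpus this small yields only illustrative output.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec

# Tiny invented stand-in for tokenized interview responses.
docs = [
    ["clean", "water", "access", "village", "well"],
    ["clinic", "far", "transport", "cost", "doctor"],
    ["water", "boil", "sick", "children", "well"],
    ["doctor", "visit", "clinic", "medicine", "cost"],
]

# Topic modeling: returns word-probability pairs per topic,
# as described in the abstract.
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary,
               passes=20, random_state=0)
for topic in lda.show_topics(num_words=5):
    print(topic)

# Word2Vec: ranks words by similarity of their learned vectors.
w2v = Word2Vec(sentences=docs, vector_size=25, window=3,
               min_count=1, seed=0, epochs=50)
print(w2v.wv.most_similar("water", topn=3))
```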


2020 ◽  
Vol 16 (11) ◽  
pp. e1008277
Author(s):  
Auss Abbood ◽  
Alexander Ullrich ◽  
Rüdiger Busche ◽  
Stéphane Ghozzi

According to the World Health Organization (WHO), around 60% of all outbreaks are detected using informal sources. In many public health institutes, including the WHO and the Robert Koch Institute (RKI), dedicated groups of public health agents sift through numerous articles and newsletters to detect relevant events. This media screening is one important part of event-based surveillance (EBS). Reading the articles, discussing their relevance, and putting key information into a database is a time-consuming process. To support EBS, and also to gain insight into what makes an article and the event it describes relevant, we developed a natural language processing framework for automated information extraction and relevance scoring. First, we scraped sources relevant for EBS as done at the RKI (WHO Disease Outbreak News and ProMED) and automatically extracted each article’s key data: disease, country, date, and confirmed-case count. For this, we performed named entity recognition in two steps: EpiTator, an open-source epidemiological annotation tool, first suggested many candidates for each attribute. We then extracted the key country and disease using a heuristic, with good results, and trained a naive Bayes classifier to find the key date and confirmed-case count, using the RKI’s EBS database as labels, which performed modestly. Then, for relevance scoring, we defined two classes to which any article might belong: an article is relevant if it is in the EBS database and irrelevant otherwise. We compared the performance of different classifiers using bag-of-words, document embeddings, and word embeddings. The best classifier, a logistic regression, achieved a sensitivity of 0.82 and an index balanced accuracy of 0.61. Finally, we integrated these functionalities into a web application called EventEpi, where relevant sources are automatically analyzed and put into a database. The user can also provide any URL or text, which will be analyzed in the same way and added to the database. Each of these steps could be improved, in particular with larger labeled datasets and fine-tuning of the learning algorithms. The overall framework, however, already works well and can be used in production, promising improvements in EBS. The source code and data are publicly available under open licenses.
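The abstract names the best relevance-scoring setup (bag-of-words features with logistic regression); a minimal sketch of that step follows, with invented article snippets and labels rather than the authors' actual pipeline or data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented article snippets; 1 = relevant (would be in the EBS database).
articles = [
    "Cholera outbreak reported in coastal region, 140 confirmed cases",
    "Novel influenza cases rising sharply in two provinces",
    "Ministry announces new hospital construction budget",
    "Annual health conference opens with keynote on nutrition",
]
relevant = [1, 1, 0, 0]

# Bag-of-words into logistic regression; class_weight="balanced"
# because relevant articles are rare in a real EBS stream.
scorer = make_pipeline(CountVectorizer(),
                       LogisticRegression(class_weight="balanced"))
scorer.fit(articles, relevant)

new = ["Dozens of suspected measles cases reported near the border"]
print(scorer.predict_proba(new)[0, 1])  # relevance score in [0, 1]
```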

