A Pilot Study of Valley Fever Tweets

2020 ◽  
Vol 41 (S1) ◽  
pp. s101-s101
Author(s):  
Nana Li ◽  
Gondy Leroy ◽  
Fariba Donovan ◽  
John Galgiani ◽  
Katherine Ellingson

Background: Twitter is used by officials to distribute public health messages and by the public to post information about ongoing afflictions. Because tweets originate from geographically and socially diverse sources, scholars have used this social media data to analyze the spread of diseases like flu [Alessio Signorini 2011], asthma [Philip Harber 2019] and mental health disorders [Chandler McClellan, 2017]. To our knowledge, no Twitter analysis has been performed for Valley fever. Valley fever is a fungal infection caused by the Coccidioides organism, mostly found in Arizona and California. Objective: We analyzed tweets concerning Valley fever to evaluate content, location, and timing. Methods: We collected tweets using the Twitter search application programming interface using the terms “Valley fever,” “valleyfever,” “cocci,” or “Valleyfever” from August 6 to 16, 2019, and again from October 20 to 29, 2019. In total, 2,117 tweets were retrieved. Tweets not focused on Valley fever were filtered out, including a tweet about “Rift valley fever” and tweets where “valley” and “fever” were separate and not one phrase. We excluded tweets not written in English. In total, 1,533 tweets remained; we grouped them into 3 categories: original tweets, hereafter labeled “normal” (N = 497), retweets (N = 811), and replies (N = 225). We converted all terms to lowercase, removed white space and punctuation, and tokenized the tweets. Informal messaging conventions (eg, hashtag, @user, RT, links) and stop words were removed, and terms were lemmatized. Finally, we analyzed the frequency of tweets by season, state, and co-occurring terms. Results: Tweet frequency was 228.5 per week in summer and 113.4 per week in the fall. Users tweeted from 40 different states; the most common were California (N = 401, 10.1 per 100,000 population) and Arizona (N = 216, 30.1 per 100,000 population), followed by New York (N = 49), Florida (N = 21), and Washington, DC (N = 14).
Term frequency analysis showed that for normal tweets, the 5 most frequent terms were “awareness,” “Arizona,” “disease,” “California,” and “people.” For retweets, the most common terms were “Gunner” (a dog name), “vet,” “prayer,” “cough,” and “family.” For replies, they were “dog,” “lung,” “vet,” “day,” and “result.” Several symptoms were mentioned: “cough” (normal: 8, retweets: 104, replies: 7), “sick” (normal: 21, retweets: 42, replies: 7), “rash” (normal: 2, retweets: 6, replies: 1), and “headache” (normal: 1, retweets: 3, replies: 0). Conclusions: Valley fever tweets are potentially sufficient to track disease intensity, especially in Arizona and California. Data collection over longer intervals is needed to understand the utility of Twitter in this context. Disclosures: None. Funding: None.
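The preprocessing pipeline described in the Methods (lowercasing, stripping Twitter conventions and punctuation, tokenizing, and removing stop words) can be sketched roughly as follows. The stop-word list here is a tiny illustrative subset, and the study's lemmatization step is omitted (it would typically use a library such as NLTK or spaCy):

```python
import re
from collections import Counter

# Minimal stop-word list for illustration; the study presumably used a fuller one.
STOP_WORDS = {"the", "a", "an", "is", "of", "to", "in", "and", "for", "on"}

def preprocess(tweet: str) -> list:
    """Lowercase, strip messaging conventions and punctuation, tokenize, drop stop words."""
    text = tweet.lower()
    text = re.sub(r"\brt\b", " ", text)        # retweet marker
    text = re.sub(r"@\w+", " ", text)          # @user mentions
    text = re.sub(r"#", " ", text)             # hashtag symbol (keep the word itself)
    text = re.sub(r"https?://\S+", " ", text)  # links
    text = re.sub(r"[^a-z\s]", " ", text)      # punctuation and digits
    return [t for t in text.split() if t not in STOP_WORDS]

def term_frequencies(tweets: list) -> Counter:
    """Co-occurring term counts across a collection of tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(preprocess(tweet))
    return counts
```

A frequency table like the one in the Results can then be read off the returned `Counter` with `most_common(5)`.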

Author(s):  
Anne Hardy

Over the past twenty years, social media has changed the ways in which we plan, travel and reflect on our travels. Tourists use social media while travelling to stay in touch with friends and family, enhance their social status (Guo et al., 2015), and assist others with decision making (Xiang and Gretzel, 2010; Yoo and Gretzel, 2010). They also use it to report back to their friends and family where they are. This can be done using a geotag function that provides a location for where a post is made. While little is known about why tourists choose to geotag their social media posts, Chung and Lee (2016) suggest that geotags may be used in an altruistic manner by tourists, in order to provide information, and because they elicit a sense of anticipated reward. What is known, however, is that the function offers researchers the ability to understand where tourists travel. There are two types of geotagged social media data. The first of these is discussed in this chapter and may be defined as single point geo-referenced data – geotagged social media posts whose release is chosen by the user. This includes data gathered from social media apps such as Facebook, Instagram, Twitter and WeChat. The method of obtaining this data involves the collation of large numbers of discrete geotagged updates or photographs. Data can be collated via an application programming interface (API) provided by the app developer to researchers, by automated data scraping via computer programs, perhaps written in Python, or manually by researchers. The second type of data is continuous location-based data from applications that are designed to track movement constantly, such as Strava or MyFitnessPal. Tracking methods using this continuous location-based data are discussed in detail in the following chapter.
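Collating single point geo-referenced data typically means filtering a stream of posts down to those the user chose to geotag and extracting their coordinates. A minimal sketch, assuming posts are delivered as records with a GeoJSON-style point field (the record layout is hypothetical; real field names vary by platform API):

```python
# Hypothetical post records; actual field names differ across platform APIs.
posts = [
    {"id": 1, "text": "Sunset at the beach",
     "geo": {"type": "Point", "coordinates": [147.33, -42.88]}},
    {"id": 2, "text": "Hiking today", "geo": None},  # user chose not to geotag
    {"id": 3, "text": "Harbour views",
     "geo": {"type": "Point", "coordinates": [151.21, -33.87]}},
]

def extract_points(posts: list) -> list:
    """Keep only posts the user chose to geotag, returning (lon, lat) pairs."""
    points = []
    for post in posts:
        geo = post.get("geo")
        if geo and geo.get("type") == "Point":
            lon, lat = geo["coordinates"]
            points.append((lon, lat))
    return points
```

The resulting point set is what tourism researchers then aggregate or map to understand where tourists travel.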


Author(s):  
Amir Manzoor

Over the last decade, social media use has gained much attention from scholarly researchers. One specific reason for this interest is the use of social media for communication, a trend that is gaining tremendous popularity. Every social media platform has developed its own set of application programming interfaces (APIs). Through these APIs, the data available on a particular social media platform can be accessed. However, the data available is limited, and it is difficult to ascertain what conclusions can be drawn about society on the basis of this data. This chapter explores the ways social researchers and scientists can use social media data to support their research and analysis.


2020 ◽  
Vol 12 (10) ◽  
pp. 4200 ◽  
Author(s):  
Thanh-Long Giang ◽  
Dinh-Tri Vo ◽  
Quan-Hoang Vuong

Using data from the WHO’s Situation Report on the COVID-19 pandemic from 21 January 2020 to 30 March 2020, along with other health, demographic, and macroeconomic indicators from the WHO’s Application Programming Interface and the World Bank’s Development Indicators, this paper explores the death rates of infected persons and their possible associated factors. Through the panel analysis, we found consistent results that healthcare system conditions, particularly the number of hospital beds and medical staff, have played extremely important roles in reducing death rates of COVID-19 infected persons. In addition, both the mortality rates due to different non-communicable diseases (NCDs) and the rate of people aged 65 and over were significantly related to the death rates. We also found that controlling international and domestic travelling by air, along with increasingly popular anti-COVID-19 actions (i.e., quarantine and social distancing), would help reduce the death rates in all countries. We conducted tests for robustness and found that the Driscoll and Kraay (1998) method was the most suitable estimator with a finite sample, which helped confirm the robustness of our estimations. Based on the findings, we suggest that the preparedness of healthcare systems for aged populations needs more attention from the public and politicians, regardless of income level, when facing COVID-19-like pandemics.


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Casper W. Andersen ◽  
Rickard Armiento ◽  
Evgeny Blokhin ◽  
Gareth J. Conduit ◽  
Shyam Dwaraknath ◽  
...  

Abstract The Open Databases Integration for Materials Design (OPTIMADE) consortium has designed a universal application programming interface (API) to make materials databases accessible and interoperable. We outline the first stable release of the specification, v1.0, which is already supported by many leading databases and several software packages. We illustrate the advantages of the OPTIMADE API through worked examples on each of the public materials databases that support the full API specification.
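The universality of the OPTIMADE API means the same query works against any compliant database: clients hit a common `/v1/structures` route and pass a filter expression in the specification's filter language. A minimal sketch of building such a query URL (the base URL is a placeholder, not a real provider):

```python
from urllib.parse import urlencode

# Placeholder base URL; any OPTIMADE-compliant database exposes the same routes.
BASE = "https://example.org/optimade"

def structures_query(filter_expr: str, page_limit: int = 10) -> str:
    """Build an OPTIMADE /v1/structures query URL with a filter expression."""
    params = urlencode({"filter": filter_expr, "page_limit": page_limit})
    return f"{BASE}/v1/structures?{params}"

# Example: structures containing both silicon and oxygen.
url = structures_query('elements HAS ALL "Si","O"')
```

Because every supporting database understands this filter grammar, the same URL suffix can be swapped across providers, which is precisely the interoperability the consortium aims for.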


2021 ◽  
Vol 11 (1) ◽  
pp. 20
Author(s):  
Mete Ercan Pakdil ◽  
Rahmi Nurhan Çelik

Geospatial data and related technologies have become an increasingly important part of data analysis processes, playing a prominent role in most of them. The serverless paradigm has become one of the most popular and frequently used technologies within cloud computing. This paper reviews the serverless paradigm and examines how it can be leveraged for geospatial data processes using open standards from the geospatial community. We propose a system design and architecture to handle complex geospatial data processing jobs with minimal human intervention and resource consumption using serverless technologies. To define and execute workflows in the system, we also propose new models for both workflow and task definitions. Moreover, the proposed system offers web services based on the new Open Geospatial Consortium (OGC) API Processes specification to provide interoperability with other geospatial applications, with the anticipation that the specification will be more commonly used in the future. We implemented the proposed system on one of the public cloud providers as a proof of concept and evaluated it with sample geospatial workflows and cloud architecture best practices.
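Under the OGC API - Processes specification, a processing job is started by POSTing a JSON execute request to `/processes/{processId}/execution`, with inputs passed in an `inputs` object. A minimal sketch of assembling such a request body (the process inputs `data` and `bufferDistance` are hypothetical examples, not part of the standard itself):

```python
import json

def build_execute_request(inputs: dict) -> str:
    """JSON body for POST /processes/{id}/execution per OGC API - Processes."""
    return json.dumps({"inputs": inputs}, indent=2)

# Hypothetical inputs for an illustrative buffering process.
body = build_execute_request({
    "data": {"href": "https://example.org/data/parcels.geojson"},
    "bufferDistance": 100,
})
```

In a serverless deployment like the one the paper proposes, each such request can be routed to an ephemeral function, so compute resources are consumed only while a job runs.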


2020 ◽  
Vol 6 (3) ◽  
pp. 205630512094070 ◽  
Author(s):  
Moreno Mancosu ◽  
Federico Vegetti

In reaction to the Cambridge Analytica scandal, Facebook has restricted access to its Application Programming Interface (API). This new policy has limited independent researchers’ ability to study relevant topics in political and social behavior. Yet, much of the public information that researchers may be interested in is still available on Facebook, and can still be systematically collected through web scraping techniques. The goal of this article is twofold. First, we discuss some ethical and legal issues that researchers should consider as they plan their collection and possible publication of Facebook data. In particular, we discuss what kind of information can be ethically gathered about users (public information), what published data should look like to comply with privacy regulations (like the GDPR), and what consequences violating Facebook’s terms of service may entail for the researcher. Second, we present a scraping routine for public Facebook posts, and discuss some technical adjustments that can be performed for the data to be ethically and legally acceptable. The code employs screen scraping to collect the list of reactions to a Facebook public post, and performs a one-way cryptographic hash function on the users’ identifiers to pseudonymize their personal information, while still keeping them traceable within the data. This article contributes to the debate around freedom of internet research and the ethical concerns that might arise from scraping data from the social web.
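The pseudonymization step the article describes can be sketched with a standard one-way hash: the same identifier always maps to the same token, so a user remains traceable across records without their identifier appearing in the published data. This is a generic sketch of the technique, not the authors' exact routine; the salt is an added precaution that makes dictionary-style re-identification harder:

```python
import hashlib

def pseudonymize(user_id: str, salt: str) -> str:
    """One-way SHA-256 hash of a user identifier.

    The same (salt, user_id) pair always yields the same token, keeping the
    user traceable within the dataset, while the raw identifier never appears.
    """
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()
```

A project-specific secret salt should be kept out of the published data; without it, reversing the tokens by exhaustively hashing candidate identifiers becomes much harder.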


2020 ◽  
Author(s):  
Shubh Mohan Singh ◽  
Chaitanya Reddy

Abstract Objectives: A majority of patients suffering from acute COVID-19 are expected to recover symptomatically and functionally. However, there are reports that some people continue to experience symptoms even beyond the stage of acute infection. This phenomenon has been called longcovid. Study design: This study attempted to analyse symptoms reported by users on Twitter self-identifying as having longcovid. Methods: The search was carried out using the Twitter public streaming application programming interface using a relevant search term. Results: We could identify 89 users with usable data in the tweets posted by them. A majority of users described multiple symptoms, the most common of which were fatigue, shortness of breath, pain and brain fog/concentration difficulties. The most common course of symptoms was episodic. Conclusions: Given the public health importance of this issue, the study suggests that there is a need to better study post-acute COVID symptoms.
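Tallying which symptoms users mention, as in the Results above, amounts to keyword matching against symptom categories. A minimal sketch (the keyword lists are illustrative assumptions, not the study's actual coding scheme, which appears to have been manual):

```python
from collections import Counter

# Illustrative keyword lists; the study's own symptom coding is not published here.
SYMPTOM_TERMS = {
    "fatigue": ["fatigue", "tired", "exhausted"],
    "shortness of breath": ["breath", "breathless"],
    "pain": ["pain", "ache"],
    "brain fog": ["brainfog", "brain fog", "concentration"],
}

def tally_symptoms(tweets: list) -> Counter:
    """Count how many tweets mention each symptom (at most once per tweet)."""
    counts = Counter()
    for tweet in tweets:
        text = tweet.lower()
        for symptom, keywords in SYMPTOM_TERMS.items():
            if any(k in text for k in keywords):
                counts[symptom] += 1
    return counts
```

Keyword matching of this kind is a crude first pass; ambiguous wording and negation ("no pain today") are the usual reasons such studies also review tweets manually.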


2019 ◽  
Vol 35 (20) ◽  
pp. 4147-4155 ◽  
Author(s):  
Peter Selby ◽  
Rafael Abbeloos ◽  
Jan Erik Backlund ◽  
Martin Basterrechea Salido ◽  
Guillaume Bauchet ◽  
...  

Abstract Motivation Modern genomic breeding methods rely heavily on very large amounts of phenotyping and genotyping data, presenting new challenges in effective data management and integration. Recently, the size and complexity of datasets have increased significantly, with the result that data are often stored on multiple systems. As analyses of interest increasingly require aggregation of datasets from diverse sources, data exchange between disparate systems becomes a challenge. Results To facilitate interoperability among breeding applications, we present the public plant Breeding Application Programming Interface (BrAPI). BrAPI is a standardized web service API specification. The development of BrAPI is a collaborative, community-based initiative involving a growing global community of over a hundred participants representing several dozen institutions and companies. Such a standard is recognized as a foundational technology critical to a number of important large breeding system initiatives. The focus of the first version of the API is on providing services for connecting systems and retrieving basic breeding data including germplasm, study, observation, and marker data. A number of BrAPI-enabled applications, termed BrAPPs, have been written that take advantage of the emerging support of BrAPI by many databases. Availability and implementation More information on BrAPI, including links to the specification, test suites, BrAPPs, and sample implementations, is available at https://brapi.org/. The BrAPI specification and the developer tools are provided as free and open source.
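Because BrAPI standardizes both the routes and the response envelope, a client can parse germplasm records the same way regardless of which database served them. A sketch of parsing a trimmed-down response (the single record shown is illustrative, and the envelope shape here is an assumption based on the BrAPI convention of wrapping results in `metadata` and `result.data`):

```python
import json

# A trimmed, illustrative response in the BrAPI-style envelope.
response_text = json.dumps({
    "metadata": {"pagination": {"totalCount": 1}},
    "result": {"data": [
        {"germplasmDbId": "g001", "germplasmName": "IR64"},
    ]},
})

def germplasm_names(response_text: str) -> list:
    """Extract germplasm names from a BrAPI-style germplasm response."""
    payload = json.loads(response_text)
    return [rec["germplasmName"] for rec in payload["result"]["data"]]
```

A BrAPP written against this envelope works unchanged across any database implementing the specification, which is the interoperability the abstract describes.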


2019 ◽  
Vol 5 (2) ◽  
pp. 88-97
Author(s):  
M. Fuadi Aziz Muri ◽  
Hendrik Setyo Utomo ◽  
Rabini Sayyidati

An Application Programming Interface (API) is a set of functions that can be called by other programs. An API works as a link that unites applications across various platforms; APIs offered openly to outside developers are commonly known as public APIs. Public APIs have spread widely, but programmers who want to find them must browse through various channels, such as general search engines, repository documentation, or web articles. There is as yet no system dedicated to collecting public APIs, so users have difficulty searching for public API links. This problem can be solved by building a web framework with a search engine interface that provides dedicated public API search, so that users can find public APIs more easily. A web service is an API built to support interaction between two or more different applications over a network, and Representational State Transfer (ReST) is one set of rules for designing such services.
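The core of the search engine the abstract proposes is matching a user's query against a directory of public API records. A minimal in-memory sketch (the directory entries and field names are hypothetical; a real system would populate them by crawling repositories and articles, and expose this function behind a ReST endpoint):

```python
# Toy in-memory directory; a real system would crawl repositories and web articles.
PUBLIC_APIS = [
    {"name": "Weather API", "tags": ["weather", "forecast"],
     "url": "https://example.org/weather"},
    {"name": "Geocoding API", "tags": ["maps", "geocoding"],
     "url": "https://example.org/geo"},
]

def search_apis(query: str) -> list:
    """Return directory entries whose name or tags match the query."""
    q = query.lower()
    return [api for api in PUBLIC_APIS
            if q in api["name"].lower() or any(q in t for t in api["tags"])]
```

Served behind a ReST route such as `GET /apis?q=weather` (route name hypothetical), this gives users a single place to search for public API links instead of scattered web sources.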

