Inequality in Posting Behaviour Over Time

2019 ◽  
Vol 40 (s1) ◽  
pp. 31-49
Author(s):  
Anja Bechmann

Abstract This study investigates the Facebook posting behaviour of 922 posting users over a time span of seven years (from 2007 to 2014), using an innovative combination of survey data and private profile feed post counts obtained through the Facebook Application Programming Interface (API) prior to the changes in 2015. A digital inequality lens is applied to study the effect of socio-demographic characteristics as well as time on posting behaviour. The findings indicate differences, for example in terms of gender and age, but some of this inequality is becoming smaller over time. The data set also shows inequality in the poster ratio in different age groups. Across all the demographic groups, the results show an increase in posting frequency in the time period observed, and limited evidence is found that young age groups have posted less on Facebook in more recent years.
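The study's core measurement, combining survey demographics with per-user post counts, boils down to a longitudinal group aggregation. Below is a minimal sketch of that kind of analysis, assuming a hypothetical merged table with columns user_id, gender, age_group, year and post_count; it is illustrative only and not the authors' data or code.

```python
import pandas as pd

# Hypothetical merged data set: one row per user per year, combining
# survey demographics with API-derived post counts (illustrative values).
posts = pd.DataFrame({
    "user_id":    [1, 1, 2, 2, 3, 3],
    "gender":     ["f", "f", "m", "m", "f", "f"],
    "age_group":  ["18-24", "18-24", "35-44", "35-44", "55+", "55+"],
    "year":       [2007, 2014, 2007, 2014, 2007, 2014],
    "post_count": [12, 95, 4, 60, 0, 25],
})

# Mean posting frequency per demographic group per year.
freq = posts.groupby(["age_group", "gender", "year"])["post_count"].mean()

# Poster ratio per age group: share of users with at least one post.
poster_ratio = (
    posts.assign(is_poster=posts["post_count"] > 0)
         .groupby(["age_group", "year"])["is_poster"].mean()
)

print(freq, poster_ratio, sep="\n\n")
```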

2021 ◽  
Author(s):  
Tejas Desai ◽  
Arvind Conjeevaram

Abstract In Situation Report #3, 39 days before declaring COVID-19 a pandemic, the WHO declared a COVID-19 infodemic. The volume of coronavirus tweets was far too great for one to find accurate or reliable information. Healthcare workers were flooded with noise that drowned out valuable COVID-19 information. To combat the infodemic, physicians created healthcare-specific micro-communities to share scientific information with other providers. We analyzed the content of eight physician-created communities and categorized each message into one of five domains. We coded 1) an application programming interface to download tweets and their metadata in JavaScript Object Notation (JSON) and 2) a reading algorithm using Visual Basic for Applications in Excel to categorize the content. We superimposed the publication date of each tweet onto a timeline of key pandemic events. Finally, we created NephTwitterArchive.com to help healthcare workers find COVID-19-related signal tweets when treating patients. We collected 21071 tweets from the eight hashtags studied. Only 9051 tweets were considered signal: tweets categorized into both a domain and a subdomain. There was a trend towards fewer signal tweets as the pandemic progressed, with a daily median of 22% (IQR 0-42%). The most popular subdomain in Prevention was PPE (2448 signal tweets). In Therapeutics, Hydroxychloroquine/Chloroquine with or without Azithromycin and Mechanical Ventilation were the most popular subdomains. During the active Infodemic phase (Days 0 to 49), a total of 2021 searches were completed in NephTwitterArchive.com, a 26% increase from the same time period before the pandemic was declared (Days −50 to −1). The COVID-19 Infodemic indicates that future endeavors must be undertaken to eliminate noise and elevate signal in all aspects of scientific discourse on Twitter. In the absence of any algorithm-based strategy, healthcare providers will be left with the nearly impossible task of manually finding high-quality tweets amongst a tidal wave of noise.
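The two coded components, an API client that downloads tweets as JSON and a reading algorithm that assigns each tweet to a domain, can be approximated in a few lines. The sketch below is in Python rather than the authors' VBA/Excel pipeline, uses Twitter's v2 recent-search endpoint for illustration (the study spanned a longer window), and treats the hashtags, domain keyword lists and bearer token as placeholders.

```python
import requests

BEARER_TOKEN = "..."                      # placeholder credential
HASHTAGS = ["#NephJC", "#COVID19"]        # illustrative, not the study's list

# Keyword lists standing in for the study's five domains (illustrative).
DOMAINS = {
    "Prevention":   ["ppe", "mask", "hand hygiene"],
    "Therapeutics": ["hydroxychloroquine", "ventilation", "remdesivir"],
}

def download_tweets(hashtag):
    """Fetch recent tweets for a hashtag as JSON via Twitter's v2 search API."""
    resp = requests.get(
        "https://api.twitter.com/2/tweets/search/recent",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"query": hashtag, "max_results": 100,
                "tweet.fields": "created_at"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

def categorize(text):
    """Assign a tweet to the first domain whose keywords appear; else 'noise'."""
    lowered = text.lower()
    for domain, keywords in DOMAINS.items():
        if any(k in lowered for k in keywords):
            return domain
    return "noise"

signal = [(t["created_at"], categorize(t["text"]))
          for tag in HASHTAGS for t in download_tweets(tag)]
```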


Analysis of structured and consistent data has seen remarkable success in past decades, whereas the analysis of unstructured data in multimedia formats remains a challenging task. YouTube is one of the most popular and widely used social media tools. It reveals community feedback through comments on published videos, numbers of likes and dislikes, and the number of subscribers for a particular channel. The main objective of this work is to demonstrate, using Hadoop concepts, how data generated from YouTube can be mined and utilized to make targeted, real-time and informed decisions. In this paper, we analyze the data to identify the top categories in which the largest number of videos are uploaded. This YouTube data is publicly available, and the data set is described below under the heading Data Set Description. The dataset is fetched from Google using the YouTube API (Application Programming Interface) and stored in the Hadoop Distributed File System (HDFS). Using MapReduce, we analyze the dataset to identify the video categories in which the most videos are uploaded. The objective of this paper is to demonstrate Apache Hadoop framework concepts and how to make targeted, real-time and informed decisions using data gathered from YouTube.
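The category count described above maps naturally onto a word-count-style MapReduce job. Below is a minimal sketch using the mrjob library (which runs locally or against a Hadoop cluster via Hadoop Streaming); it assumes tab-delimited input in which the fourth field is the video category, so the field index would need to be adjusted to the actual layout of the data set.

```python
# category_count.py
# Run locally:      python category_count.py youtubedata.txt
# Run on a cluster: python category_count.py -r hadoop hdfs:///path/youtubedata.txt
from mrjob.job import MRJob

class CategoryCount(MRJob):
    """Count uploaded videos per YouTube category (assumed tab-delimited input)."""

    def mapper(self, _, line):
        fields = line.split("\t")
        if len(fields) > 3:
            yield fields[3].strip(), 1   # emit (category, 1)

    def reducer(self, category, counts):
        yield category, sum(counts)      # total videos per category

if __name__ == "__main__":
    CategoryCount.run()
```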


Author(s):  
Lei Zhu ◽  
Jacob R. Holden ◽  
Jeffrey D. Gonder

The green-routing strategy of directing a vehicle along a fuel-efficient route offers the current transportation system fuel-saving opportunities. This paper introduces a navigation application programming interface (API) route fuel-saving evaluation framework for estimating the fuel advantages of alternative API routes, based on large-scale, real-world travel data for conventional vehicles (CVs) and hybrid electric vehicles (HEVs). Navigation APIs, such as the Google Directions API, integrate traffic conditions and provide feasible alternative routes for origin–destination pairs. This paper develops two link-based fuel-consumption models stratified by link-level speed, road grade, and functional class (local/non-local), one for CVs and the other for HEVs. The link-based fuel-consumption models are built by assigning travel from many global positioning system driving traces to the links in TomTom MultiNet, with road grade data from the U.S. Geological Survey elevation data set. Fuel consumption on a link is computed by the proposed model. This paper envisions two kinds of applications: (1) identifying alternative routes that save fuel, and (2) quantifying the potential fuel savings for large amounts of travel. An experiment based on a large-scale California Household Travel Survey global positioning system trajectory data set is conducted. The fuel consumption and savings of CVs and HEVs are investigated, and the trade-off between fuel savings and travel time when choosing different routes is also examined for both powertrains.
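The evaluation loop, requesting alternative routes from a navigation API and summing a link-level fuel model over each route, can be sketched as follows. This is not the paper's model: the fuel-rate lookup, speed binning and fixed grade/functional-class values below are placeholders standing in for the stratified CV/HEV models built from TomTom MultiNet and USGS elevation data.

```python
import requests

API_KEY = "..."   # placeholder Google Maps API key

# Hypothetical fuel-rate lookup (litres per km) by speed bin, grade bin and
# functional class -- a stand-in for the paper's link-based models.
FUEL_RATE = {("low", "flat", "local"): 0.09,
             ("high", "flat", "non-local"): 0.07}

def alternative_routes(origin, destination):
    """Request alternative routes from the Google Directions API."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": origin, "destination": destination,
                "alternatives": "true", "key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["routes"]

def route_fuel(route):
    """Sum link-level fuel estimates over a route's steps (illustrative binning)."""
    total = 0.0
    for leg in route["legs"]:
        for step in leg["steps"]:
            km = step["distance"]["value"] / 1000.0
            hours = step["duration"]["value"] / 3600.0
            speed = km / hours if hours else 0.0
            speed_bin = "high" if speed > 60 else "low"
            # Grade and functional class would come from elevation and road
            # network lookups in the paper; fixed here for illustration.
            total += km * FUEL_RATE.get((speed_bin, "flat", "local"), 0.08)
    return total

routes = alternative_routes("Sacramento, CA", "San Jose, CA")
best = min(routes, key=route_fuel)   # the fuel-saving alternative
```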


2021 ◽  
pp. postgradmedj-2021-140685
Author(s):  
Robert Marcec ◽  
Robert Likic

Introduction A worldwide vaccination campaign is underway to bring an end to the SARS-CoV-2 pandemic; however, its success relies heavily on the actual willingness of individuals to get vaccinated. Social media platforms such as Twitter may prove to be a valuable source of information on the attitudes and sentiment towards SARS-CoV-2 vaccination that can be tracked almost instantaneously. Materials and methods The Twitter academic Application Programming Interface was used to retrieve all English-language tweets mentioning the AstraZeneca/Oxford, Pfizer/BioNTech and Moderna vaccines over the 4 months from 1 December 2020 to 31 March 2021. Sentiment analysis was performed using the AFINN lexicon to calculate the daily average sentiment of tweets, which was evaluated longitudinally and comparatively for each vaccine throughout the 4 months. Results A total of 701 891 tweets were retrieved and included in the daily sentiment analysis. The sentiment regarding the Pfizer and Moderna vaccines appeared positive and stable throughout the 4 months, with no significant differences in sentiment between the months. In contrast, the sentiment regarding the AstraZeneca/Oxford vaccine seems to be decreasing over time, with a significant decrease when comparing December with March (p<0.0000000001, mean difference=−0.746, 95% CI=−0.915 to −0.577). Conclusion Lexicon-based Twitter sentiment analysis is a valuable and easily implemented tool to track the sentiment regarding SARS-CoV-2 vaccines. It is worrisome that the sentiment regarding the AstraZeneca/Oxford vaccine appears to be turning negative over time, as this may boost hesitancy rates towards this specific SARS-CoV-2 vaccine.
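The AFINN-based daily sentiment measure is straightforward to reproduce. A minimal sketch using the Python afinn package is shown below (the study used the R/lexicon ecosystem is not stated, so this is simply one common implementation); the example tweets and dates are invented placeholders, not study data.

```python
from collections import defaultdict
from afinn import Afinn   # pip install afinn

afinn = Afinn()

# Illustrative tweets; the study retrieved ~700,000 via the academic API.
tweets = [
    ("2020-12-08", "Pfizer",      "Great news, the vaccine rollout begins!"),
    ("2021-03-15", "AstraZeneca", "Worried about the reported side effects."),
]

# Collect AFINN scores per (vaccine, day), then average them.
scores = defaultdict(list)
for date, vaccine, text in tweets:
    scores[(vaccine, date)].append(afinn.score(text))

daily_sentiment = {key: sum(vals) / len(vals) for key, vals in scores.items()}
print(daily_sentiment)
```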


2020 ◽  
pp. 245-253 ◽  
Author(s):  
Alex H. Wagner ◽  
Susanna Kiwala ◽  
Adam C. Coffman ◽  
Joshua F. McMichael ◽  
Kelsy C. Cotto ◽  
...  

PURPOSE Precision oncology depends on the matching of tumor variants to relevant knowledge describing the clinical significance of those variants. We recently developed the Clinical Interpretations for Variants in Cancer (CIViC; civicdb.org) crowd-sourced, expert-moderated, and open-access knowledgebase. CIViC provides a structured framework for evaluating genomic variants of various types (eg, fusions, single-nucleotide variants) for their therapeutic, prognostic, predisposing, diagnostic, or functional utility. CIViC has a documented application programming interface for accessing CIViC records: assertions, evidence, variants, and genes. Third-party tools that analyze or access the contents of this knowledgebase programmatically must leverage this application programming interface, often reimplementing redundant functionality in the pursuit of common analysis tasks that are beyond the scope of the CIViC Web application. METHODS To address this limitation, we developed CIViCpy (civicpy.org), a software development kit for extracting and analyzing the contents of the CIViC knowledgebase. CIViCpy enables users to query CIViC content as dynamic objects in Python. We assess the viability of CIViCpy as a tool for advancing individualized patient care by using it to systematically match CIViC evidence to observed variants in patient cancer samples. RESULTS We used CIViCpy to evaluate variants from 59,437 sequenced tumors of the American Association for Cancer Research Project GENIE data set. We demonstrate that CIViCpy enables annotation of > 1,200 variants per second, resulting in precise variant matches to CIViC level A (professional guideline) or B (clinical trial) evidence for 38.6% of tumors. CONCLUSION The clinical interpretation of genomic variants in cancers requires high-throughput tools for interoperability and analysis of variant interpretation knowledge. These needs are met by CIViCpy, a software development kit for downstream applications and rapid analysis. CIViCpy is fully documented, open-source, and available free online.
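As an illustration of querying CIViC content as Python objects, the sketch below matches a short list of hypothetical observed variants against the knowledgebase. Function and attribute names (load_cache, get_all_variants, variant.gene, variant.evidence_items) follow the civicpy documentation as of the paper's era and may differ in newer releases, where evidence hangs off molecular profiles; treat the details as assumptions and consult civicpy.org.

```python
from civicpy import civic   # pip install civicpy

# Load the local CIViC cache (downloaded on first use).
civic.load_cache()

# Observed tumour variants to interpret (hypothetical example input).
observed = [("BRAF", "V600E"), ("EGFR", "L858R")]

# Build a simple (gene, variant-name) lookup over all CIViC variants.
# Attribute names may vary across civicpy versions.
lookup = {}
for variant in civic.get_all_variants():
    lookup[(variant.gene.name, variant.name)] = variant

for gene, name in observed:
    variant = lookup.get((gene, name))
    if variant is None:
        continue
    # Report associated evidence items and their evidence levels (A, B, ...).
    for evidence in variant.evidence_items:
        print(gene, name, evidence.evidence_level, evidence.description[:80])
```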


2018 ◽  
Vol 9 (1) ◽  
pp. 24-31
Author(s):  
Rudianto Rudianto ◽  
Eko Budi Setiawan

Availability of the Application Programming Interface (API) for third-party applications on Android devices provides an opportunity for Android devices to monitor one another. This is used to create an application that helps parents supervise their children through the Android devices they own. In this study, a feature is added for classifying image content on Android devices with respect to negative content; for this, the researchers use the Clarifai API. The result of this research is a system that reports the image files stored on the target smartphone and can delete those files, receives browser-history reports whose entries can be visited directly from the application, and receives reports of the child's location so the child can be contacted directly via the application. This application works well on Android Lollipop (API Level 22). Index Terms— Application Programming Interface (API), Monitoring, Negative Content, Children, Parent.
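The negative-content check reduces to one call against Clarifai's prediction endpoint. A minimal sketch is shown below in Python for brevity (the paper's application runs on Android); the model identifier, API key, example URL and flagging threshold are assumptions to be replaced per Clarifai's current documentation.

```python
import requests

CLARIFAI_API_KEY = "..."     # placeholder key
MODEL_ID = "nsfw-v1.0"       # assumed moderation model ID; check Clarifai's docs

def classify_image(image_url):
    """Ask Clarifai to score an image for negative (NSFW) content."""
    resp = requests.post(
        f"https://api.clarifai.com/v2/models/{MODEL_ID}/outputs",
        headers={"Authorization": f"Key {CLARIFAI_API_KEY}",
                 "Content-Type": "application/json"},
        json={"inputs": [{"data": {"image": {"url": image_url}}}]},
        timeout=30,
    )
    resp.raise_for_status()
    concepts = resp.json()["outputs"][0]["data"]["concepts"]
    return {c["name"]: c["value"] for c in concepts}

scores = classify_image("https://example.com/photo.jpg")   # hypothetical image
if scores.get("nsfw", 0.0) > 0.85:                          # illustrative threshold
    print("flag for parental review")
```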

