The COVID-19 Infodemic (Preprint)

2021 ◽  
Author(s):  
Tejas Desai ◽  
Arvind Conjeevaram

BACKGROUND In Situation Report #13, 39 days before declaring COVID-19 a pandemic, the WHO declared a “COVID-19 infodemic”. The volume of coronavirus tweets was far too great for any one person to find accurate or reliable information. Healthcare workers were flooded with “noise” which drowned out the “signal” of valuable COVID-19 information. To combat the infodemic, physicians created healthcare-specific micro-communities to share scientific information with other providers. OBJECTIVE Our objective was to eliminate noise, elevate signal tweets related to COVID-19, and provide easy access to the most educational tweets for medical professionals searching for information. METHODS We analyzed the content of eight physician-created communities and categorized each message into one of five domains. We coded 1) an application programming interface to download tweets and their metadata in JavaScript Object Notation (JSON) and 2) a reading algorithm in Visual Basic for Applications (VBA) in Excel to categorize the content. We superimposed the publication date of each tweet onto a timeline of key pandemic events. Finally, we created NephTwitterArchive.com to help healthcare workers find COVID-19-related signal tweets when treating patients. RESULTS We collected 21,071 tweets from the eight hashtags studied. Only 9,051 tweets were considered signal: tweets categorized into both a domain and a subdomain. There was a trend towards fewer signal tweets as the pandemic progressed, with a daily median of 22% (IQR 0-42%). The most popular subdomain in Prevention was PPE (2,448 signal tweets). In Therapeutics, Hydroxychloroquine/chloroquine with or without Azithromycin and Mechanical Ventilation were the most popular subdomains. During the active Infodemic phase (Days 0 to 49), a total of 2,021 searches were completed in NephTwitterArchive.com, a 26% increase over the same time period before the pandemic was declared (Days −50 to −1). 
CONCLUSIONS The COVID-19 Infodemic indicates that future endeavors must be undertaken to eliminate noise and elevate signal in all aspects of scientific discourse on Twitter. In the absence of any algorithm-based strategy, healthcare providers will be left with the nearly impossible task of manually finding high-quality tweets from amongst a tidal wave of noise. CLINICALTRIAL not applicable
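The authors' reading algorithm was written in VBA; as a minimal sketch of the same idea in Python, a tweet counts as "signal" only when it matches both a domain and a subdomain. The keyword lists below are illustrative guesses, not the study's actual coding scheme.

```python
# Minimal sketch of the "reading algorithm" idea: a tweet is signal only
# if it maps to both a domain and a subdomain. Keyword lists are invented
# for illustration; the study's real scheme had five domains.
DOMAINS = {
    "Prevention": {"PPE": ["ppe", "mask", "n95"]},
    "Therapeutics": {
        "Hydroxychloroquine": ["hydroxychloroquine", "chloroquine"],
        "Mechanical Ventilation": ["ventilator", "intubation"],
    },
}

def categorize(text):
    """Return (domain, subdomain) if the tweet is signal, else None (noise)."""
    lowered = text.lower()
    for domain, subdomains in DOMAINS.items():
        for subdomain, keywords in subdomains.items():
            if any(k in lowered for k in keywords):
                return (domain, subdomain)
    return None

tweets = ["N95 mask reuse protocols for the ICU", "Good morning everyone!"]
labels = [categorize(t) for t in tweets]
```

In this sketch the signal rate is simply the fraction of tweets with a non-`None` label, mirroring how the study reported a daily median signal percentage.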



F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 563 ◽  
Author(s):  
Guillaume Brysbaert ◽  
Théo Mauri ◽  
Marc F. Lensink

Residue interaction networks (RINs) have been shown to be relevant representations of the tertiary or quaternary structures of proteins, in particular thanks to network centrality analyses. We recently developed the RINspector 1.0.0 Cytoscape app, which couples centrality analyses with backbone flexibility predictions. This combined approach permits the identification of crucial residues for the folding or function of the protein that can constitute good targets for mutagenesis experiments. Here we present an application programming interface (API) for RINspector 1.1.0 that enables interplay between Cytoscape, RINspector and external languages, such as R or Python. This API provides easy access to batch centrality calculations and flexibility predictions, and allows for the easy comparison of results between different structures. These comparisons can lead to the identification of specific and conserved central residues, and show the impact of mutations to these and other residues on the flexibility of the proteins. We give two use cases to demonstrate the interest of these functionalities and provide the corresponding scripts: the first concerns NMR conformers, the second focuses on mutations in a structure.
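The centrality analyses RINspector batches can be illustrated without Cytoscape at all: a RIN is just a graph of residue contacts. The toy network and residue names below are invented, and degree centrality stands in here for the richer centrality measures the app computes; this is not RINspector's actual API.

```python
# Toy residue interaction network (RIN): nodes are residues, edges are
# contacts. Degree centrality is used as a simple stand-in for the
# centrality analyses RINspector automates; residues are made up.
from collections import defaultdict

edges = [("ALA1", "GLY2"), ("GLY2", "SER3"), ("GLY2", "LYS4"), ("SER3", "LYS4")]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

n = len(adjacency)
# Normalized degree centrality: neighbors / (n - 1)
centrality = {res: len(nbrs) / (n - 1) for res, nbrs in adjacency.items()}
most_central = max(centrality, key=centrality.get)
```

Running the same computation over several structures and intersecting the top-ranked residues is the kind of cross-structure comparison the API is designed to make scriptable from R or Python.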




2018 ◽  
Author(s):  
Alex Nunes ◽  
Damian Lidgard ◽  
Franziska Broell

In 2015, as part of the Ocean Tracking Network’s bioprobe initiative, 20 grey seals (Halichoerus grypus) on Sable Island were tagged for 6 months with a high-resolution (>30 Hz) inertial tag, a depth-temperature satellite tag (0.1 Hz), and an acoustic transceiver. Comparable to similar large-scale studies in movement ecology, the unprecedented size of the data (gigabytes for a single seal) collected by these instruments raises new challenges in efficient database management. Here we propose the utility of Postgres and netCDF for storing the biotelemetry data and associated metadata. While it was possible to write the lower-resolution (acoustic and satellite) data to a Postgres database, netCDF was chosen as the format for the high-resolution movement (acceleration and inertial) records. Even without access to cluster computing, data could be efficiently (CPU time) recorded, as 920 million records were written in < 3 hours. ERDDAP was used to access and link the different datastreams with a user-friendly Application Programming Interface. This approach compresses the data to a fifth of its original size, and storing the data in a tree-like structure enables easy access and visualization for the end user.
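The study wrote its high-rate inertial records to netCDF. As a dependency-free illustration of the underlying idea (fixed-width binary records plus compression), the sketch below packs synthetic 30 Hz accelerometer samples and compresses them; the values and layout are invented, not the study's data or schema.

```python
# Stdlib-only sketch of storing high-rate sensor records compactly:
# pack fixed-width binary records (as a netCDF variable would), then
# compress. Sample values are synthetic.
import struct
import zlib

record = struct.Struct("<dfff")  # timestamp (f64) + 3-axis acceleration (f32)

raw = b"".join(
    record.pack(1000.0 + i / 30.0, 0.0, 0.0, 9.81)  # near-constant signal
    for i in range(30 * 60)  # one minute at 30 Hz
)
compressed = zlib.compress(raw, level=6)
ratio = len(compressed) / len(raw)  # well below 1.0 for repetitive data
```

Real netCDF adds the tree-like grouping, named dimensions, and metadata that make the archive self-describing, which is what lets ERDDAP expose it through a uniform API.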




2021 ◽  
Author(s):  
Liam Cresswell ◽  
Lisette Espín-Noboa ◽  
Malia Su-Qin Murphy ◽  
Serine Ramlawi ◽  
Mark C. Walker ◽  
...  

Background: Cannabis use has increased in Canada since its legalization in 2018, including among pregnant women who may be motivated to use cannabis to reduce symptoms of nausea and vomiting. However, a growing body of research suggests that cannabis use during pregnancy may harm the developing fetus. As a result, patients increasingly seek medical advice from online sources, but these platforms may also spread anecdotal descriptions or misinformation. Given the possible disconnect between online messaging and evidence-based research about the effects of cannabis use during pregnancy, there is a potential for advice taken from social media to cause harm. Objectives: To quantify the volume and tone of English-language posts related to cannabis use in pregnancy from January 2012 to July 2021. Methods: Modelling published frameworks for scoping reviews, we will collect publicly available posts from Twitter that mention cannabis use during pregnancy and employ the Twitter Application Programming Interface (API) for Academic Research to extract data from tweets, including public metrics such as the number of likes, retweets and quotes, as well as health effect mentions, sentiment, location and users' interests. These data will be used to quantify how cannabis use during pregnancy is discussed on Twitter and to build a qualitative profile of supportive and opposing posters. Results: The CHEO Research Ethics Board reviewed our project and granted an exemption in May 2021. As of September 2021, we have gained approval to use the Twitter API for Academic Research and have developed a preliminary search strategy that returns over 2 million unique tweets posted between 2012 and 2020. Conclusions: Understanding how Twitter is being used to discuss cannabis use during pregnancy will help public health agencies and healthcare providers assess the messaging patients may be receiving and develop communication strategies to counter misinformation, especially in geographical regions where legalization is recent or imminent. Most importantly, we foresee that our findings will assist expecting families in making informed choices about where they choose to access advice about using cannabis during pregnancy.
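A search against the Twitter API v2 full-archive endpoint (the one the Academic Research track exposes) is built from a query string plus time-window and field parameters. The sketch below only constructs such a request URL; the query terms are an illustrative guess at the protocol's search strategy, not the authors' actual one, and no request is sent (that would require a bearer token).

```python
# Build (but do not send) a Twitter API v2 full-archive search request.
# Query terms are invented for illustration; authentication is omitted.
from urllib.parse import urlencode

ENDPOINT = "https://api.twitter.com/2/tweets/search/all"

params = {
    "query": "(cannabis OR marijuana) (pregnant OR pregnancy) lang:en",
    "start_time": "2012-01-01T00:00:00Z",
    "end_time": "2021-07-31T23:59:59Z",
    "tweet.fields": "public_metrics,created_at,geo",  # likes/retweets/quotes
    "max_results": 500,
}
url = ENDPOINT + "?" + urlencode(params)
```

Requesting `public_metrics` is what returns the like, retweet, and quote counts the protocol plans to analyze, and paginating with the response's `next_token` would walk the full 2012-2021 window.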


2021 ◽  
Author(s):  
Daniel Santillan Pedrosa ◽  
Alexander Geiss ◽  
Isabell Krisch ◽  
Fabian Weiler ◽  
Peggy Fischer ◽  
...  

The VirES for Aeolus service (https://aeolus.services) has been successfully run by EOX since August 2018. The service provides easy access and analysis functions for the entire data archive of ESA's Aeolus Earth Explorer mission through a web browser.
This free and open service is being extended with a Virtual Research Environment (VRE). The VRE builds on the available data access capabilities of the service and provides a data access Application Programming Interface (API) as part of a development environment in the cloud, using JupyterHub and JupyterLab, for processing and exploitation of the Aeolus data. In collaboration with Aeolus DISC, user requirements are being collected, implemented and validated.
Jupyter Notebook templates, an extensive set of tutorials, and documentation are being made available to enable a quick start on how to use the VRE in projects. The VRE is intended to support and simplify the work of (citizen) scientists interested in Aeolus data by enabling them to quickly develop processes or algorithms that can be shared or used to create visualizations for publications. Having a unified, constant platform could also be very helpful for calibration and validation activities by allowing easier comparison of results.


Author(s):  
Thu T. Nguyen ◽  
Shaniece Criss ◽  
Pallavi Dwivedi ◽  
Dina Huang ◽  
Jessica Keralis ◽  
...  

Background: Anecdotal reports suggest a rise in anti-Asian racial attitudes and discrimination in response to COVID-19. Racism can have significant social, economic, and health impacts, but there has been little systematic investigation of increases in anti-Asian prejudice. Methods: We utilized Twitter’s Streaming Application Programming Interface (API) to collect 3,377,295 U.S. race-related tweets from November 2019–June 2020. Sentiment analysis was performed using a support vector machine (SVM), a supervised machine learning model. Accuracy for identifying negative sentiments, comparing the machine learning model to manually labeled tweets, was 91%. We investigated changes in racial sentiment before and following the emergence of COVID-19. Results: The proportion of negative tweets referencing Asians increased by 68.4% (from 9.79% in November to 16.49% in March). In contrast, the proportion of negative tweets referencing other racial/ethnic minorities (Blacks and Latinx) remained relatively stable during this time period, declining less than 1% for tweets referencing Blacks and increasing by 2% for tweets referencing Latinx. Common themes that emerged during the content analysis of a random subsample of 3,300 tweets included: racism and blame (20%), anti-racism (20%), and daily life impact (27%). Conclusion: Social media data can be used to provide timely information to investigate shifts in area-level racial sentiment.
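The supervised pipeline here (labeled tweets in, sentiment classifier out) would typically use an SVM over text features, as the study did. As a dependency-free sketch of the same supervised idea, a tiny perceptron over bag-of-words features stands in for the SVM below; the training examples are invented.

```python
# Supervised sentiment sketch: a perceptron over bag-of-words features
# as a lightweight stand-in for the study's SVM. Examples are invented.

def featurize(text):
    return set(text.lower().split())

def train(examples, epochs=10):
    """examples: list of (text, label) with label in {+1, -1}."""
    weights = {}
    for _ in range(epochs):
        for text, label in examples:
            feats = featurize(text)
            score = sum(weights.get(f, 0.0) for f in feats)
            if score * label <= 0:  # misclassified: nudge weights
                for f in feats:
                    weights[f] = weights.get(f, 0.0) + label
    return weights

def predict(weights, text):
    score = sum(weights.get(f, 0.0) for f in featurize(text))
    return 1 if score > 0 else -1

examples = [
    ("great supportive community", 1),
    ("hateful racist abuse", -1),
    ("kind helpful people", 1),
    ("awful hateful slurs", -1),
]
w = train(examples)
```

In practice, validating predictions against manually labeled tweets, as the authors did to reach 91% accuracy, is the step that makes such a classifier trustworthy at scale.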


2004 ◽  
Vol 37 (1) ◽  
pp. 174-178 ◽  
Author(s):  
Jay Painter ◽  
Ethan A Merritt

The Python Macromolecular Library (mmLib) is a software toolkit and library of routines for the analysis and manipulation of macromolecular structural models, implemented in the Python programming language. It is accessed via a layered object-oriented application programming interface, and provides a range of useful software components for parsing mmCIF, PDB and MTZ files, a library of atomic elements and monomers, an object-oriented data structure describing biological macromolecules, and an OpenGL molecular viewer. The mmLib data model is designed to provide easy access to the various levels of detail needed to implement high-level application programs for macromolecular crystallography, NMR, modeling and visualization. We describe here the establishment of mmLib as a collaborative open-source code base, and the use of mmLib to implement several simple illustrative application programs.
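The layered object model described above (structure, chain, residue, atom) can be illustrated with a stripped-down hierarchy in plain Python. The class and attribute names below are invented for illustration and are not mmLib's actual API.

```python
# Minimal illustration of a layered macromolecular data model like the
# one mmLib exposes. Names here are invented, not mmLib's real classes.
from dataclasses import dataclass, field

@dataclass
class Atom:
    name: str
    xyz: tuple

@dataclass
class Residue:
    name: str
    atoms: list = field(default_factory=list)

@dataclass
class Chain:
    chain_id: str
    residues: list = field(default_factory=list)

@dataclass
class Structure:
    chains: list = field(default_factory=list)

    def iter_atoms(self):
        """Traverse every level of the hierarchy down to atoms."""
        for chain in self.chains:
            for residue in chain.residues:
                yield from residue.atoms

gly = Residue("GLY", [Atom("N", (0.0, 0.0, 0.0)), Atom("CA", (1.5, 0.0, 0.0))])
model = Structure([Chain("A", [gly])])
names = [a.name for a in model.iter_atoms()]
```

Layering the model this way is what lets a crystallography application work chain-by-chain while a viewer iterates straight over atoms, which is the "various levels of detail" design goal the abstract describes.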


2019 ◽  
Vol 40 (s1) ◽  
pp. 31-49
Author(s):  
Anja Bechmann

Abstract: This study investigates the Facebook posting behaviour of 922 posting users over a time span of seven years (from 2007 to 2014), using an innovative combination of survey data and private profile feed post counts obtained through the Facebook Application Programming Interface (API) prior to the changes in 2015. A digital inequality lens is applied to study the effect of socio-demographic characteristics as well as time on posting behaviour. The findings indicate differences, for example in terms of gender and age, but some of this inequality is becoming smaller over time. The data set also shows inequality in the poster ratio in different age groups. Across all the demographic groups, the results show an increase in posting frequency in the time period observed, and limited evidence is found that young age groups have posted less on Facebook in more recent years.

