Studying Facebook and Instagram data: The Digital Footprints software

Author(s):  
Anja Bechmann ◽  
Peter Bjerregaard Vahlstrup

The aim of this article is to discuss methodological implications and challenges in different kinds of deep and big data studies of Facebook and Instagram through methods involving the use of Application Programming Interface (API) data. The article describes and discusses Digital Footprints (www.digitalfootprints.dk), a data extraction and analytics software that allows researchers to extract user data from Facebook and Instagram data sources: public streams as well as private data with user consent. Based on insights from the software design process and from data-driven studies, the article identifies three main challenges: data quality, data access and analysis, and legal and ethical considerations.
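A minimal sketch of the consent-based extraction pattern such a tool relies on, expressed as a Python call against the Facebook Graph API; the API version, access token, and field list are placeholders for illustration, not Digital Footprints' actual implementation.

```python
import requests

GRAPH_URL = "https://graph.facebook.com/v2.8"  # placeholder API version
ACCESS_TOKEN = "USER_CONSENTED_TOKEN"          # obtained with the user's consent

def fetch_user_posts(user_id, token):
    """Page through a user's feed via the Graph API, following paging cursors."""
    url = f"{GRAPH_URL}/{user_id}/feed"
    params = {"access_token": token, "fields": "message,created_time", "limit": 100}
    posts = []
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        data = resp.json()
        posts.extend(data.get("data", []))
        # The Graph API returns a 'paging.next' URL while more pages remain.
        url = data.get("paging", {}).get("next")
        params = {}  # the 'next' URL already embeds the query parameters
    return posts

posts = fetch_user_posts("me", ACCESS_TOKEN)
```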

Author(s):  
Amit Sharma

The paper describes the use of tools for data gathering and extraction that permit researchers to export data in standard file formats from various sections of the Facebook social networking service. Friendship networks, groups, and pages can thereby be analyzed quantitatively and qualitatively with respect to demographic, post-demographic, and relational characteristics. The paper gives an overview of the analytical directions opened up by the data made available, discusses platform-specific aspects of data extraction through the official Application Programming Interface, and briefly engages with the difficult ethical considerations attached to this kind of research.
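To illustrate the export step the paper describes, the sketch below writes a small friendship graph to GEXF, one standard file format understood by common network-analysis tools; the graph itself is fabricated, and the format choice is an assumption rather than the tool's documented output.

```python
import networkx as nx

# Fabricated example of a friendship network with demographic attributes.
G = nx.Graph()
G.add_node("alice", gender="f", locale="en_US")
G.add_node("bob", gender="m", locale="de_DE")
G.add_node("carol", gender="f", locale="en_GB")
G.add_edges_from([("alice", "bob"), ("bob", "carol")])

# GEXF preserves node attributes, so demographic and relational
# characteristics survive the round-trip into analysis software.
nx.write_gexf(G, "friendship_network.gexf")
```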


Author(s):  
Amir Hassanpour ◽  
Alexander Bigazzi ◽  
Don MacKenzie

Better understanding of the impacts of new mobility services (NMS) is needed to inform evidence-based policy, but cities and researchers are hindered by a lack of access to detailed system data. Application programming interface (API) services can be a medium for real-time data sharing and access, and have been used for data collection in the past, but the literature lacks a systematic examination of the potential value of publicly available API data for extracting policy-relevant information, specifically supply and demand, on NMS. The objectives of this study are: 1) to catalogue all the publicly available API data streams for NMS in three major cities known as the Cascadia Corridor (Vancouver, British Columbia; Seattle, Washington; and Portland, Oregon); 2) to create, apply, and share web data extraction tools (Python scripts) for each API; and 3) to assess the usefulness of the extracted data in quantifying supply and demand for each service. Results reveal some measures of supply and demand that can be extracted from API data and used in future analyses (mostly for bikeshare and carshare services, not ridesourcing). However, important information on the supply and demand of most NMS in these cities cannot be obtained through API data extraction. Stronger open data policies for mobility services are therefore needed if policymakers want to obtain useful and independent insights into the usage of these services.
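For bikeshare, much of what such studies extract flows through feeds following the open GBFS specification, so a polling script along these lines can build a supply time series; the feed URL is a placeholder, while the field names follow the public GBFS schema.

```python
import time
import requests

# Placeholder URL; each real system publishes its own GBFS endpoints.
STATION_STATUS_URL = "https://example.com/gbfs/en/station_status.json"

def snapshot_supply():
    """Return (timestamp, total bikes available) from a GBFS station_status feed."""
    resp = requests.get(STATION_STATUS_URL, timeout=10)
    resp.raise_for_status()
    stations = resp.json()["data"]["stations"]
    total_bikes = sum(s.get("num_bikes_available", 0) for s in stations)
    return time.time(), total_bikes

# Polling at fixed intervals builds the time series from which supply
# (and, via successive differences, a rough proxy for demand) is estimated.
if __name__ == "__main__":
    for _ in range(3):
        print(snapshot_supply())
        time.sleep(60)
```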


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Kenneth D. Mandl ◽  
Daniel Gottlieb ◽  
Joshua C. Mandel ◽  
Vladimir Ignatov ◽  
Raheel Sayeed ◽  
...  

The 21st Century Cures Act requires that certified health information technology have an application programming interface (API) giving access to all data elements of a patient’s electronic health record, “without special effort”. In the spring of 2020, the Office of the National Coordinator of Health Information Technology (ONC) published a rule—21st Century Cures Act Interoperability, Information Blocking, and the ONC Health IT Certification Program—regulating the API requirement along with protections against information blocking. The rule specifies the SMART/HL7 FHIR Bulk Data Access API, which enables access to patient-level data across a patient population, supporting myriad use cases across healthcare, research, and public health ecosystems. The API enables “push button population health” in that core data elements can readily and standardly be extracted from electronic health records, enabling local, regional, and national-scale data-driven innovation.
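A minimal sketch of the asynchronous "kick-off and poll" pattern the Bulk Data Access API defines: a $export request, a status URL taken from the Content-Location header, and a final manifest of NDJSON files; the server base URL and bearer token are placeholders.

```python
import time
import requests

FHIR_BASE = "https://ehr.example.com/fhir"   # placeholder server
HEADERS = {
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",               # bulk export runs asynchronously
    "Authorization": "Bearer PLACEHOLDER_TOKEN",
}

# Kick-off: request an export across the patient population. The server
# answers 202 Accepted and points to a status endpoint via Content-Location.
kickoff = requests.get(f"{FHIR_BASE}/Patient/$export", headers=HEADERS)
status_url = kickoff.headers["Content-Location"]

# Poll until the export completes; the final 200 response carries a manifest
# listing newline-delimited JSON (NDJSON) files, one set per resource type.
while True:
    status = requests.get(status_url,
                          headers={"Authorization": HEADERS["Authorization"]})
    if status.status_code == 200:
        manifest = status.json()
        break
    time.sleep(int(status.headers.get("Retry-After", 30)))

for entry in manifest["output"]:
    print(entry["type"], entry["url"])  # e.g. "Patient", "Observation"
```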


2020 ◽  
Vol 17 ◽  
pp. 326-331
Author(s):  
Kamil Siebyła ◽  
Maria Skublewska-Paszkowska

There are various methods for creating web applications, and each offers a different level of performance. This factor is measurable at every layer of the application. The performance of the frontend layer depends on the response times of the individual endpoints of the API (Application Programming Interface) in use. How data access is programmed at a specific endpoint therefore determines the performance of the entire application. There are many programming methods, and they are often time-consuming to implement. This article presents a comparison of the available methods of handling the persistence layer with respect to the efficiency of their implementation.
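The paper's concrete stack is not named here, so the sketch below only illustrates the kind of micro-benchmark such a comparison rests on: the same read issued row by row versus as a single set-based query, against an in-memory SQLite database standing in for the persistence layer; all names are illustrative.

```python
import sqlite3
import time

# Illustrative micro-benchmark: the same read issued two ways against
# an in-memory SQLite database (standing in for any persistence layer).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(100_000)])
conn.commit()

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.4f}s")

# One round-trip per row vs. a single set-based query: the access pattern
# chosen at the endpoint dominates the endpoint's response time.
timed("row-by-row", lambda: [conn.execute(
    "SELECT name FROM items WHERE id = ?", (i,)).fetchone()
    for i in range(1, 1001)])
timed("single query", lambda: conn.execute(
    "SELECT name FROM items WHERE id <= 1000").fetchall())
```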


2011 ◽  
Vol 14 (1) ◽  
pp. 1-12
Author(s):  
Norman L. Jones ◽  
Robert M. Wallace ◽  
Russell Jones ◽  
Cary Butler ◽  
Alan Zundel

This paper describes an Application Programming Interface (API) for managing multi-dimensional data produced for water resource computational modeling that is being developed by the US Army Engineer Research and Development Center (ERDC) in conjunction with Brigham Young University. This API, along with a corresponding data standard, is being implemented within ERDC computational models to facilitate rapid data access, enhanced data compression and data sharing, and cross-platform independence. The API and data standard are known as the eXtensible Model Data Format (XMDF), and version 1.3 is available for free download. This API is designed to manage geometric data associated with grids, meshes, riverine and coastal cross sections, and both static and transient array-based datasets. The inclusion of coordinate system data makes it possible to share data between models developed in different coordinate systems. XMDF is used to store the data-intensive components of a modeling study in a compressed binary format that is platform-independent. It also provides a standardized file format that enhances model linking and data sharing between models.
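Since XMDF stores its data in HDF5, a rough sense of the storage model can be conveyed with h5py: a transient, array-based dataset written with compressed chunks and a coordinate-system attribute. The group layout and attribute names below are illustrative, not the actual XMDF schema.

```python
import numpy as np
import h5py

# Illustrative layout, not the actual XMDF schema: a transient (time-varying)
# dataset stored compressed in HDF5, the container format XMDF builds on.
times = np.linspace(0.0, 3600.0, 25)            # seconds
values = np.random.rand(25, 5000).astype("f4")  # e.g. water depth per mesh node

with h5py.File("model_results.h5", "w") as f:
    grp = f.create_group("Datasets/WaterDepth")
    grp.attrs["coordinate_system"] = "UTM zone 12N"  # enables cross-model sharing
    grp.create_dataset("Times", data=times)
    # gzip-compressed chunked storage gives the compact, platform-independent
    # binary layout that makes the data-intensive components portable.
    grp.create_dataset("Values", data=values, compression="gzip", chunks=True)
```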


2021 ◽  
Author(s):  
Daniel Santillan Pedrosa ◽  
Alexander Geiss ◽  
Isabell Krisch ◽  
Fabian Weiler ◽  
Peggy Fischer ◽  
...  

The VirES for Aeolus service (https://aeolus.services) has been operated successfully by EOX since August 2018. The service provides easy access to, and analysis functions for, the entire data archive of ESA's Aeolus Earth Explorer mission through a web browser.

This free and open service is being extended with a Virtual Research Environment (VRE). The VRE builds on the service's existing data access capabilities and provides a data access Application Programming Interface (API) as part of a cloud-based development environment, using JupyterHub and JupyterLab, for processing and exploitation of Aeolus data. In collaboration with the Aeolus DISC, user requirements are being collected, implemented, and validated.

Jupyter Notebook templates, an extensive set of tutorials, and documentation are being made available to enable a quick start on how to use the VRE in projects. The VRE is intended to support and simplify the work of (citizen) scientists interested in Aeolus data by enabling them to quickly develop processes or algorithms that can be shared or used to create visualizations for publications. A unified, stable platform could also prove very helpful for calibration and validation activities by allowing easier comparison of results.
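Inside such a notebook environment, data access typically reduces to a few client calls; the sketch below is purely illustrative, using a hypothetical endpoint path and parameter names rather than the VRE's actual client interface (only the product identifier, an Aeolus Level-2B wind product, is a real name).

```python
import requests

# Hypothetical endpoint and parameter names, for illustration only; the
# actual VRE exposes its own data access API client inside JupyterLab.
API_URL = "https://aeolus.services/api/data"   # placeholder path
params = {
    "product": "ALD_U_N_2B",                   # Aeolus L2B wind product
    "start": "2020-06-01T00:00:00Z",
    "end": "2020-06-01T06:00:00Z",
}

resp = requests.get(API_URL, params=params, timeout=60)
resp.raise_for_status()
wind_profiles = resp.json()
print(len(wind_profiles), "records retrieved")
```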


Author(s):  
C. C. Fonte ◽  
J. Patriarca ◽  
J. Estima ◽  
J.-P. de Almeida ◽  
A. Cardoso

Volunteered geographical information (VGI) is an increasing source of data for many applications. In order to explore some of these sources of data, an algorithm was conceived and implemented in the ExploringVGI platform enabling the collection of georeferenced data from collaborative projects that provide an Application Programming Interface (API). This paper presents a preliminary study to evaluate the consistency and relevance of VGI extracted from the Flickr platform for emergency mitigation and municipal management. The study was based on data extraction and analysis with keywords related to emergency events (“Accident”, “Flood” and “Fire apartment”) and municipal management (“Graffiti” and “Homeless”) in four European cities (Frankfurt, Lisbon, London, and Rome). The proposed approach sets up a region of interest on a map, selects one or more keywords for the search, and carries out a search using the Flickr API. The data detected and extracted were then loaded into a database and further analysed to verify whether they were consistently obtained through consecutive searches at different locations. A statistical analysis performed on the data collected for each case provided: the total number of items collected for each keyword and location; their relevance with respect to the search goal; and the quality of the associated geolocation of each post. The results obtained illustrate the effectiveness of the approach when applied to different scenarios, which contributes to assessing the role that VGI available on the Web may play in different events, depending on the specific context of a geolocation/keyword(s) combination.
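A minimal sketch of the keyword-plus-region query the study issues against the Flickr REST API; the API key is a placeholder and the bounding box (roughly covering Lisbon) is an assumed example.

```python
import requests

FLICKR_REST = "https://www.flickr.com/services/rest/"
API_KEY = "PLACEHOLDER_KEY"  # obtained from Flickr's developer programme

def search_geotagged(keyword, bbox):
    """Search geotagged Flickr photos for a keyword within a bounding box."""
    params = {
        "method": "flickr.photos.search",
        "api_key": API_KEY,
        "text": keyword,
        "bbox": bbox,            # "min_lon,min_lat,max_lon,max_lat"
        "has_geo": 1,
        "extras": "geo,date_taken",
        "format": "json",
        "nojsoncallback": 1,
    }
    resp = requests.get(FLICKR_REST, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["photos"]["photo"]

# Assumed example: search for "Flood" in a box roughly covering Lisbon.
for photo in search_geotagged("Flood", "-9.25,38.68,-9.05,38.80"):
    print(photo["id"], photo.get("latitude"), photo.get("longitude"))
```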

