Impact of the persistence layer implementation methods on application performance

2020 ◽  
Vol 17 ◽  
pp. 326-331
Author(s):  
Kamil Siebyła ◽  
Maria Skublewska-Paszkowska

There are various methods for creating web applications, and each offers a different level of performance. This factor is measurable at every layer of the application. The performance of the frontend layer depends on the response times of the individual endpoints of the API (Application Programming Interface) being used. How data access is programmed at a specific endpoint therefore determines the performance of the entire application. Many of the available programming methods are time-consuming to implement. This article presents a comparison of the available methods of handling the persistence layer in relation to the efficiency of their implementation.
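The kind of persistence-layer difference the article measures can be illustrated with a small, self-contained sketch: the same endpoint result fetched two ways against an in-memory SQLite database. The table and column names here are invented for illustration; the point is only that per-row access and one set-based query return identical data at very different cost.

```python
import sqlite3
import time

# In-memory database standing in for the persistence layer behind an endpoint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

def fetch_per_row(ids):
    """N+1 style access: one query per record (slow path)."""
    return [conn.execute("SELECT total FROM orders WHERE id = ?",
                         (i,)).fetchone()[0]
            for i in ids]

def fetch_batched(ids):
    """Single set-based query (fast path)."""
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT total FROM orders WHERE id IN ({placeholders}) ORDER BY id",
        ids)
    return [r[0] for r in rows]

ids = list(range(1, 501))
t0 = time.perf_counter(); per_row = fetch_per_row(ids)
t1 = time.perf_counter(); batched = fetch_batched(ids)
t2 = time.perf_counter()
assert per_row == batched  # same result, different cost profile
```

An ORM typically hides which of these two shapes it emits, which is one reason the implementation method, not just the endpoint contract, determines measured performance.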

2016 ◽  
Vol 44 (3) ◽  
pp. 377-391 ◽  
Author(s):  
Azadeh Esfandyari ◽  
Matteo Zignani ◽  
Sabrina Gaito ◽  
Gian Paolo Rossi

To take advantage of the full range of services that online social networks (OSNs) offer, people commonly open several accounts on diverse OSNs where they leave lots of different types of profile information. The integration of these pieces of information from various sources can be achieved by identifying individuals across social networks. In this article, we address the problem of user identification by treating it as a classification task. Relying on common public attributes available through the official application programming interface (API) of social networks, we propose different methods for building negative instances that go beyond usual random selection so as to investigate the effectiveness of each method in training the classifier. Two test sets with different levels of discrimination are set up to evaluate the robustness of our different classifiers. The effectiveness of the approach is measured in real conditions by matching profiles gathered from Google+, Facebook and Twitter.
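The idea of building negative instances "beyond usual random selection" can be sketched as follows. This is not the authors' exact method: the profiles, the string-similarity measure (stdlib difflib), and the hard-negative heuristic are illustrative placeholders for the general idea of training on non-matching pairs that are deliberately hard to distinguish.

```python
import random
from difflib import SequenceMatcher

# Toy usernames standing in for public profile attributes from two OSNs.
net_b = ["anna.k", "john_smith", "maria_s", "guru_dev", "random99"]

def similarity(u, v):
    return SequenceMatcher(None, u, v).ratio()

def random_negatives(pos_pairs, candidates, k, seed=0):
    """Baseline: sample non-matching pairs uniformly at random."""
    rng = random.Random(seed)
    pool = [(a, b) for a, _ in pos_pairs for b in candidates
            if (a, b) not in pos_pairs]
    return rng.sample(pool, k)

def hard_negatives(pos_pairs, candidates, k):
    """Beyond random: keep the most similar non-matching pairs,
    forcing the classifier to learn finer distinctions."""
    scored = [(a, b, similarity(a, b)) for a, _ in pos_pairs
              for b in candidates if (a, b) not in pos_pairs]
    scored.sort(key=lambda t: t[2], reverse=True)
    return [(a, b) for a, b, _ in scored[:k]]

positives = [("anna_k", "anna.k"), ("jsmith", "john_smith"),
             ("maria.s", "maria_s")]
hard = hard_negatives(positives, net_b, 3)
rand = random_negatives(positives, net_b, 3)
```

Training one classifier per negative-sampling strategy, as the article does, then makes the effect of the construction method directly comparable on a common test set.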


Author(s):  
Anja Bechmann ◽  
Peter Bjerregaard Vahlstrup

The aim of this article is to discuss methodological implications and challenges in different kinds of deep and big data studies of Facebook and Instagram through methods involving the use of Application Programming Interface (API) data. The article describes and discusses Digital Footprints (www.digitalfootprints.dk), a data extraction and analytics software package that allows researchers to extract user data from Facebook and Instagram data sources: public streams as well as private data with user consent. Based on insights from the software design process and from data-driven studies, the article argues for three main challenges: data quality, data access and analysis, and legal and ethical considerations.


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Kenneth D. Mandl ◽  
Daniel Gottlieb ◽  
Joshua C. Mandel ◽  
Vladimir Ignatov ◽  
Raheel Sayeed ◽  
...  

The 21st Century Cures Act requires that certified health information technology have an application programming interface (API) giving access to all data elements of a patient’s electronic health record, “without special effort”. In the spring of 2020, the Office of the National Coordinator of Health Information Technology (ONC) published a rule—21st Century Cures Act Interoperability, Information Blocking, and the ONC Health IT Certification Program—regulating the API requirement along with protections against information blocking. The rule specifies the SMART/HL7 FHIR Bulk Data Access API, which enables access to patient-level data across a patient population, supporting myriad use cases across the healthcare, research, and public health ecosystems. The API enables “push button population health”: core data elements can be readily extracted from electronic health records in a standard way, enabling local, regional, and national-scale data-driven innovation.
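The Bulk Data Access flow begins with an asynchronous kick-off request. A minimal sketch of building such a request, following the published SMART/HL7 Bulk Data conventions (a `$export` operation, `Prefer: respond-async`, NDJSON output); the base URL is a placeholder and nothing is actually sent:

```python
from urllib.parse import urlencode

def bulk_export_kickoff(base_url, types=None):
    """Build the kick-off request for a FHIR Bulk Data $export call.
    The server answers 202 Accepted with a Content-Location to poll
    for the finished NDJSON files; base_url here is a placeholder."""
    params = {"_outputFormat": "application/fhir+ndjson"}
    if types:
        # Restrict the export to selected resource types.
        params["_type"] = ",".join(types)
    url = f"{base_url}/Patient/$export?{urlencode(params)}"
    headers = {
        "Accept": "application/fhir+json",
        "Prefer": "respond-async",  # kick-off is always asynchronous
    }
    return url, headers

url, headers = bulk_export_kickoff("https://ehr.example.com/fhir",
                                   types=["Patient", "Observation"])
```

The population-level scope comes from the path: `/Patient/$export` covers all patients, while a `Group/[id]/$export` form scopes the export to a defined cohort.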


2011 ◽  
Vol 14 (1) ◽  
pp. 1-12
Author(s):  
Norman L. Jones ◽  
Robert M. Wallace ◽  
Russell Jones ◽  
Cary Butler ◽  
Alan Zundel

This paper describes an Application Programming Interface (API) for managing multi-dimensional data produced for water resource computational modeling that is being developed by the US Army Engineer Research and Development Center (ERDC), in conjunction with Brigham Young University. This API, along with a corresponding data standard, is being implemented within ERDC computational models to facilitate rapid data access, enhanced data compression and data sharing, and cross-platform independence. The API and data standard are known as the eXtensible Model Data Format (XMDF), and version 1.3 is available for free download. This API is designed to manage geometric data associated with grids, meshes, riverine and coastal cross sections, and both static and transient array-based datasets. The inclusion of coordinate system data makes it possible to share data between models developed in different coordinate systems. XMDF is used to store the data-intensive components of a modeling study in a compressed binary format that is platform-independent. It also provides a standardized file format that enhances modeling linking and data sharing between models.
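XMDF itself sits on top of HDF5; as a stdlib-only illustration of the underlying idea (a transient array-based dataset stored in compressed, platform-independent binary with an explicit little-endian layout), one might write:

```python
import gzip
import os
import struct
import tempfile

def write_transient_dataset(path, timesteps):
    """Store (time, values[]) records with explicit little-endian ('<')
    layout, gzip-compressed, so the file reads identically anywhere."""
    with gzip.open(path, "wb") as f:
        f.write(struct.pack("<I", len(timesteps)))
        for t, values in timesteps:
            f.write(struct.pack("<dI", t, len(values)))
            f.write(struct.pack(f"<{len(values)}d", *values))

def read_transient_dataset(path):
    with gzip.open(path, "rb") as f:
        (n,) = struct.unpack("<I", f.read(4))
        out = []
        for _ in range(n):
            t, m = struct.unpack("<dI", f.read(12))
            out.append((t, list(struct.unpack(f"<{m}d", f.read(8 * m)))))
        return out

path = os.path.join(tempfile.gettempdir(), "transient_demo.bin.gz")
data = [(0.0, [1.0, 2.0, 3.0]), (0.5, [4.0, 5.0, 6.0])]
write_transient_dataset(path, data)
assert read_transient_dataset(path) == data
```

HDF5 supplies the same guarantees (plus chunked compression and hierarchical naming) as a mature library, which is why XMDF builds on it rather than on a hand-rolled format like this one.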


2021 ◽  
Author(s):  
Daniel Santillan Pedrosa ◽  
Alexander Geiss ◽  
Isabell Krisch ◽  
Fabian Weiler ◽  
Peggy Fischer ◽  
...  

The VirES for Aeolus service (https://aeolus.services) has been running successfully, operated by EOX, since August 2018. The service provides easy access and analysis functions for the entire data archive of ESA's Aeolus Earth Explorer mission through a web browser.

This free and open service is being extended with a Virtual Research Environment (VRE). The VRE builds on the service's existing data access capabilities and provides a data access Application Programming Interface (API) as part of a cloud development environment, using JupyterHub and JupyterLab, for processing and exploitation of the Aeolus data. In collaboration with Aeolus DISC, user requirements are being collected, implemented and validated.

Jupyter Notebook templates, an extensive set of tutorials, and documentation are being made available to enable a quick start on how to use the VRE in projects. The VRE is intended to support and simplify the work of (citizen) scientists interested in Aeolus data by enabling them to quickly develop processes or algorithms that can be shared or used to create visualizations for publications. A unified, stable platform could also prove very helpful for calibration and validation activities by allowing easier comparison of results.


2021 ◽  
Vol 6 (2) ◽  
pp. 30-39
Author(s):  
Guy Dobson

APIs (Application Programming Interfaces) provide the ability to do what needs to be done. The fact that FOLIO includes APIs among its building blocks makes it that much more attractive. When my library's administration decided to switch from a legacy ILS (Integrated Library System) to the FOLIO LSP (Library Services Platform), the first thing I looked at was the API. The lessons learned helped me to configure the system and massage the data from ILS output into FOLIO-friendly input. By building web applications and writing Perl scripts, our staff is able to get the job done even when it is impossible to accomplish the task through the user interface (UI).
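The "massage the data" step described here, rewriting legacy-ILS output into FOLIO-friendly input, might look like the following sketch (in Python rather than the author's Perl). The FOLIO-side field names follow the mod-users record conventions (`username`, `barcode`, `active`, `personal`); the legacy column names are invented placeholders for whatever the old system exports.

```python
import json

def legacy_to_folio_user(row):
    """Map one row of a legacy-ILS patron export onto a FOLIO-style
    user record ready to POST to the users endpoint."""
    return {
        "username": row["PATRON_ID"].lower(),
        "barcode": row["BARCODE"],
        "active": row["STATUS"] == "A",   # legacy flag -> boolean
        "personal": {
            "lastName": row["SURNAME"],
            "firstName": row["FORENAME"],
        },
    }

legacy_row = {"PATRON_ID": "JDOE", "BARCODE": "000123", "STATUS": "A",
              "SURNAME": "Doe", "FORENAME": "Jane"}
folio_user = legacy_to_folio_user(legacy_row)
payload = json.dumps(folio_user)  # body for an authenticated API call
```

In a real FOLIO deployment the request would also carry the Okapi tenant and token headers (`X-Okapi-Tenant`, `X-Okapi-Token`) that the platform's gateway requires.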


2014 ◽  
Vol 8 ◽  
Author(s):  
Nichols B. Nolan ◽  
Haselgrove Christian ◽  
Poline Jean-Baptiste ◽  
Ghosh Satrajit S.

2018 ◽  
Vol 2 ◽  
pp. e25560
Author(s):  
Dmitry Dmitriev

TaxonWorks (http://taxonworks.org) is an integrated workbench for taxonomists and biodiversity scientists. It is designed to capture, organize, and enrich data, to share and refine it with collaborators, and to package it for analysis and publication. It is built on PostgreSQL (the database) and Ruby on Rails (the programming language and web application framework; https://github.com/SpeciesFileGroup/taxonworks). The TaxonWorks community is built around an open software ecosystem that facilitates participation at many levels. TaxonWorks is designed to serve both researchers who create and curate the data and technical users, such as programmers and informatics specialists, who act as data consumers. It provides researchers with robust, user-friendly interfaces based on well-thought-out, customized workflows for efficient and validated data entry, and it gives technical users database access through an application programming interface (API) that serves data in JSON format. The data model covers nearly all classes of data recorded in modern taxonomic treatments and primary studies of biodiversity, including nomenclature, bibliography, specimens and collecting events, phylogenetic matrices, and species descriptions. The nomenclatural classes are based on the NOMEN ontology (https://github.com/SpeciesFileGroup/nomen).
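Consuming a JSON API of this kind might look like the sketch below. The response shown is fabricated for illustration, and the field names (`cached`, `rank`, `parent_id`) are placeholders rather than the exact TaxonWorks schema; the pattern of walking parent links to rebuild a classification is what matters.

```python
import json

# A response shaped like what a taxonomic-names endpoint might return.
sample = json.loads("""
[
  {"id": 42, "cached": "Aus bus", "rank": "species", "parent_id": 7},
  {"id": 7,  "cached": "Aus",     "rank": "genus",   "parent_id": null}
]
""")

by_id = {rec["id"]: rec for rec in sample}

def lineage(name_id):
    """Follow parent_id links upward to reconstruct a name's
    classification, most specific rank first."""
    chain = []
    while name_id is not None:
        rec = by_id[name_id]
        chain.append(rec["cached"])
        name_id = rec["parent_id"]
    return chain

assert lineage(42) == ["Aus bus", "Aus"]
```

Serving data in a self-describing format like this is what lets programmatic consumers build on curated taxonomic data without touching the curation interfaces.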

