Serverless Computing: An Investigation of Deployment Environments for Web APIs

Computers ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 50
Author(s):  
Ivan ◽  
Vasile ◽  
Dadarlat

Cloud vendors offer a variety of serverless technologies promising high availability and dynamic scaling while reducing operational and maintenance costs. One such technology, serverless computing, or function-as-a-service (FaaS), is advertised as a good candidate for web applications, data processing, or backend services, where users pay only for actual usage. Unlike virtual machines (VMs), FaaS deployments come with automatic resource provisioning and allocation, providing elastic and automatic scaling. We present the results of our investigation of a specific serverless candidate, a Web Application Programming Interface (Web API), deployed both on virtual machines and as function(s)-as-a-service. We contrast these deployments by varying the number of concurrent users and measuring response times and costs. We found no significant response time differences between deployments when the VMs are configured for the expected load and the test scenarios stay within the FaaS hardware limitations. Higher numbers of concurrent users or unexpected user growth are handled effortlessly by FaaS, whereas additional labor must be invested in VMs for equivalent results. We conclude that, despite the advantages serverless computing brings, there is no clear choice between serverless and virtual machines for a Web API application: one needs to carefully measure costs and factor in all components that come bundled with FaaS.
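For readers unfamiliar with the deployment model being compared, the sketch below shows what the FaaS side of such a Web API can look like, assuming an AWS Lambda-style Python handler behind an HTTP trigger; the payload shape and response are illustrative and not taken from the paper.

```python
import json

def handler(event, context):
    """Minimal FaaS-style Web API endpoint (AWS Lambda handler signature).

    The platform provisions and scales instances of this function
    automatically and bills per invocation and execution time, in
    contrast to an always-on VM hosting the same API.
    """
    # Illustrative payload: echo a query parameter back as JSON.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```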

Author(s):  
Raul Sierra-Alcocer ◽  
Christopher Stephens ◽  
Juan Barrios ◽  
Constantino González‐Salazar ◽  
Juan Carlos Salazar Carrillo ◽  
...  

SPECIES (Stephens et al. 2019) is a tool to explore spatial correlations in biodiversity occurrence databases. The main idea behind the SPECIES project is that the geographical correlations between the distributions of taxa records carry useful information. The problem, however, is that with thousands of species (Mexico's National System of Biodiversity Information has records of around 70,000 species) there are millions of potential associations, and exploring them is far from easy. Our goal with SPECIES is to facilitate the discovery and application of meaningful relations hiding in our data. The main variables in SPECIES are the geographical distributions of species occurrence records. Other types of variables, like the climatic variables from WorldClim (Hijmans et al. 2005), are explanatory data that serve for modeling.

The system offers two modes of analysis. In the first, the user defines a target species and a selection of species and abiotic variables; the system then computes the spatial correlations between the target species and each of the other species and abiotic variables. The request from the user can be as small as comparing one species to another, or as large as comparing one species to all the species in the database. A user may wonder, for example, which species are usual neighbors of the jaguar; this mode can help answer that question. The second mode of analysis gives a network perspective: the user defines two groups of taxa (and/or environmental variables), and the output is a correlation network in which the weight of a link between two nodes represents the spatial correlation between the variables those nodes represent. For example, one group of taxa could be hummingbirds (family Trochilidae) and the second flowers of the family Lamiaceae. This output would help the user analyze which pairs of hummingbird and flower species are highly correlated in the database.

The SPECIES data architecture is optimized to support fast hypothesis prototyping and testing over thousands of biotic and abiotic variables, and a visualization web interface presents descriptive results to the user at different levels of detail. The methodology in SPECIES is relatively simple: it partitions the geographical space with a regular grid and treats a species occurrence distribution as a present/not-present boolean variable over the cells. Given two species (or one species and one abiotic variable), it measures whether the number of co-occurrences between the two is more (or less) than expected. More co-occurrences than expected signal a positive relation, whereas fewer are evidence of disjoint distributions.

SPECIES provides an open web application programming interface (API) to request the computation of correlations and statistical dependencies between variables in the database. Users can create applications that consume this 'statistical web service' or use it directly to further analyze the results in frameworks like R or Python. The project includes an interactive web application that does exactly that: it requests analyses from the web service and lets the user experiment and visually explore the results. We believe this approach can, on one side, augment the services provided by data repositories and, on the other, facilitate the creation of specialized applications that are clients of these services.
This scheme supports big-data-driven research for users from a wide range of backgrounds, because end users need neither the technical know-how nor the infrastructure to handle large databases. Currently, SPECIES hosts all records from Mexico's National Biodiversity Information System (CONABIO 2018) and a subset of Global Biodiversity Information Facility data covering the contiguous USA (GBIF.org 2018b) and Colombia (GBIF.org 2018a). It also includes discretizations of environmental variables from WorldClim, from the Environmental Rasters for Ecological Modeling project (Title and Bemmels 2018), from CliMond (Kriticos et al. 2012), and topographic variables (USGS EROS Center 1997b, USGS EROS Center 1997a). The long-term plan, however, is to incrementally include more data, especially all data from the Global Biodiversity Information Facility. The code of the project is open source, and the repositories are available online (Front-end, Web Services Application Programming Interface, Database Building scripts). This presentation is a demonstration of SPECIES' functionality and its overall design.
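As a rough illustration of the grid-based co-occurrence test described above, the sketch below compares the observed co-occurrences of two presence/absence vectors (one entry per grid cell) against the count expected under independence; the normal approximation used here is an assumption for illustration, not necessarily the exact statistic implemented in SPECIES.

```python
import numpy as np

def cooccurrence_score(x, y):
    """Compare observed co-occurrences of two boolean presence vectors
    (one entry per grid cell) against the count expected if the two
    distributions were independent. Returns (observed, expected, z)."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    n = x.size                       # number of grid cells
    observed = int(np.sum(x & y))    # cells where both occur
    p_y = y.sum() / n
    expected = x.sum() * p_y         # expectation under independence
    # Normal approximation to the binomial for the deviation.
    std = np.sqrt(x.sum() * p_y * (1 - p_y))
    z = float((observed - expected) / std) if std > 0 else 0.0
    return observed, expected, z

# Toy example: two species on a 10-cell grid; a positive z suggests a
# positive spatial relation, a negative z suggests disjoint distributions.
jaguar  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
peccary = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(cooccurrence_score(jaguar, peccary))
```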


2018 ◽  
Vol 21 (2) ◽  
pp. 34-38
Author(s):  
Awal Kurniawan ◽  
Intan Sari Areni ◽  
Andani Achmad

Web technology has advanced considerably, from the web 1.0 era, which was still static, to web technologies capable of addressing device-level concerns such as storage, speech recognition, and geolocation. One of the web technologies available today is the progressive web application. This research aims to design a system that can cache website content files. The system is a progressive web application that leverages a service worker. The data source used as the object of this research is citizen complaint data in JSON form. An experimental method was used to design the application. Complaint data obtained from an API (Application Programming Interface) is displayed while the network connection is active, and during that time the service worker performs its caching task. Afterwards, the stored data can be accessed while the network is inactive. The result of this research is that the complaint system with an embedded service worker is able to cache up to 500 complaint records. Although the execution time needed to access the application is longer because of the service worker installation, the application loads faster when offline because the data is served from the service worker cache.


2020 ◽  
Author(s):  
Diane C. Saunders ◽  
James Messmer ◽  
Irina Kusmartseva ◽  
Maria L. Beery ◽  
Mingder Yang ◽  
...  

Summary: Human tissue phenotyping generates complex spatial information from numerous imaging modalities, yet images typically become static figures for publication and original data and metadata are rarely available. While comprehensive image maps exist for some organs, most resources have limited support for multiplexed imaging or have non-intuitive user interfaces. Therefore, we built a Pancreatlas™ resource that integrates several technologies into a novel interface, allowing users to access richly annotated web pages, drill down to individual images, and deeply explore data online. The current version of Pancreatlas contains over 800 unique images acquired by whole-slide scanning, confocal microscopy, and imaging mass cytometry, and is available at https://www.pancreatlas.org. To create this human pancreas-specific biological imaging resource, we developed a React-based web application and Python-based application programming interface, collectively called Flexible Framework for Integrating and Navigating Data (FFIND), which can be adapted beyond Pancreatlas to meet countless imaging or other structured data management needs.
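To make the FFIND division of labor concrete, here is a minimal sketch of a Python metadata API of the kind a React front end could consume, written with Flask; the route, fields, and data are illustrative assumptions, not the actual Pancreatlas/FFIND schema.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory catalogue of annotated images.
IMAGES = {
    "42": {"modality": "imaging mass cytometry", "organ": "pancreas",
           "markers": ["INS", "GCG", "SST"]},
}

@app.route("/api/images/<image_id>")
def get_image(image_id):
    """Return the metadata record for one image, or 404 if unknown."""
    meta = IMAGES.get(image_id)
    if meta is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": image_id, **meta})

if __name__ == "__main__":
    app.run()
```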


2017 ◽  
Author(s):  
Kelsy C. Cotto ◽  
Alex H. Wagner ◽  
Yang-Yang Feng ◽  
Susanna Kiwala ◽  
Adam C. Coffman ◽  
...  

Abstract: The Drug-Gene Interaction Database (DGIdb, www.dgidb.org) consolidates, organizes, and presents drug-gene interactions and gene druggability information from papers, databases, and web resources. DGIdb normalizes content from more than thirty disparate sources and allows for user-friendly advanced browsing, searching, and filtering for ease of access through an intuitive web user interface, application programming interface (API), and public cloud-based server image. DGIdb v3.0 represents a major update of the database. Nine of the previously included twenty-eight sources were updated, and six new resources were added, bringing the total number of sources to thirty-three. These updates and additions have cumulatively resulted in 56,309 interaction claims and have substantially expanded the comprehensive catalogue of druggable genes and antineoplastic drug-gene interactions included in DGIdb. Along with these content updates, v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times, and an upgraded underlying web application framework. In addition, the expanded API features new endpoints that allow users to extract more detailed information about queried drugs, genes, and drug-gene interactions, including listings of PubMed IDs (PMIDs), interaction types, and other interaction metadata.
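As an example of consuming such an API programmatically, the sketch below queries an interactions endpoint for two genes; the URL, query parameter, and field names are assumptions based on DGIdb's REST style around this release and should be checked against the current API documentation.

```python
import requests

# Assumed endpoint and response shape; consult dgidb.org for the current API.
URL = "https://dgidb.org/api/v2/interactions.json"

resp = requests.get(URL, params={"genes": "BRAF,EGFR"}, timeout=30)
resp.raise_for_status()
data = resp.json()

# Walk the matched genes and print one line per reported drug-gene interaction.
for term in data.get("matchedTerms", []):
    for interaction in term.get("interactions", []):
        print(term.get("geneName"),
              interaction.get("drugName"),
              interaction.get("interactionTypes"),
              interaction.get("pmids"))
```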


2013 ◽  
Vol 10 (4) ◽  
pp. 82-101 ◽  
Author(s):  
Buqing Cao ◽  
Jianxun Liu ◽  
Mingdong Tang ◽  
Zibin Zheng ◽  
Guangrong Wang

With the rapid development of Web 2.0 and its related technologies, Mashup services (i.e., Web applications created by combining two or more Web APIs) are becoming a hot research topic. The explosion of Mashup services, however, especially of functionally similar or equivalent services, makes service discovery more difficult than ever. In this paper, we present an approach to recommend Mashup services to users based on usage history and a service network. The approach first extracts users' interests from their Mashup service usage history and builds a service network based on social relationship information among Mashup services, Web application programming interfaces (APIs), and their tags. It then leverages the target user's interests and the service social relationships to perform Mashup service recommendation. Large-scale experiments based on a real-world Mashup service dataset show that the proposed approach can effectively recommend Mashup services to users with excellent performance. Moreover, a Mashup service recommendation prototype system has been developed.
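A toy sketch of the general idea follows: candidate Mashups are scored by how much their composing APIs and tags overlap with the user's usage history. The data and the scoring rule are illustrative placeholders, not the paper's dataset or algorithm.

```python
from collections import Counter

# Toy service network: each Mashup is described by the Web APIs it composes
# and the tags attached to it (illustrative data only).
MASHUPS = {
    "TravelMap":  {"apis": {"google-maps", "flickr"},     "tags": {"travel", "photo"}},
    "PhotoWall":  {"apis": {"flickr", "twitter"},         "tags": {"photo", "social"}},
    "CityEvents": {"apis": {"eventbrite", "google-maps"}, "tags": {"events", "travel"}},
}

def recommend(usage_history, top_n=2):
    """Rank unseen Mashups by how many APIs/tags they share with the history."""
    interest = Counter()
    for name in usage_history:
        interest.update(MASHUPS[name]["apis"] | MASHUPS[name]["tags"])
    scores = {}
    for name, svc in MASHUPS.items():
        if name in usage_history:
            continue
        scores[name] = sum(interest[f] for f in svc["apis"] | svc["tags"])
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["TravelMap"]))  # ranked list of unseen Mashups
```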


Author(s):  
Ricardo Santos ◽  
Ivo Pereira ◽  
Isabel Azevedo

Detailed documentation and software tests are key factors for the success of a web application programming interface (API). When designing an API, especially in a design-first approach, it is important to define a formal contract, known as the API specification. This document must contain all necessary information regarding the API's behavior. The specification can then be used to dynamically generate API components such as documentation, client and server code, and software tests, reducing development and maintenance costs. This chapter presents a study of the OpenAPI specification and its application in designing a new RESTful API for E-goi. It also presents a set of solutions for generating documentation, client code libraries, and test cases.
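For readers who have not seen one, a minimal OpenAPI 3.0 contract might look like the sketch below, expressed here as a Python dictionary and serialized to JSON for downstream generators; the paths and schemas are illustrative and unrelated to E-goi's actual API.

```python
import json

# In a design-first workflow this contract is written before any code and then
# fed to generators for documentation, client/server stubs, and test cases.
openapi_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Contacts API", "version": "1.0.0"},
    "paths": {
        "/contacts": {
            "get": {
                "summary": "List contacts",
                "responses": {
                    "200": {
                        "description": "A JSON array of contacts",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Contact"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Contact": {
                "type": "object",
                "properties": {
                    "id": {"type": "integer"},
                    "email": {"type": "string", "format": "email"},
                },
            }
        }
    },
}

# Serialize the contract for tooling (documentation or code generators).
print(json.dumps(openapi_spec, indent=2))
```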


2019 ◽  
Vol 26 (3) ◽  
pp. 1926-1951
Author(s):  
Cong Peng ◽  
Prashant Goswami ◽  
Guohua Bai

Health data integration enables a collaborative utilization of data across different systems. It not only provides a comprehensive view of a patient's health but can also potentially cope with challenges faced by the current healthcare system. In this literature review, we investigated the existing work on heterogeneous health data integration as well as the methods of utilizing the integrated health data. Our search was narrowed down to 32 articles for analysis. The integration approaches in the reviewed articles were grouped into three categories, and the utilization approaches into five. The topic of health data integration is still under debate, and its problems are far from being resolved. This review suggests the need for a more efficient way to invoke the various services for aggregating health data, as well as a more effective way to integrate the aggregated health data to support collaborative utilization. Based on our analysis of the review results, we found that the combination of Web Application Programming Interface and Semantic Web technologies has the potential to cope with these challenges.


2015 ◽  
Vol 30 (2) ◽  
pp. 220-236 ◽  
Author(s):  
Frances Buchanan ◽  
Niccolo Capanni ◽  
Horacio González-Vélez

Abstract: The sources of information on the Web relating to Fine Art, and in particular to Fine Artists, are numerous, heterogeneous, and distributed. Data relating to an artist's biography, images of their artworks, the locations of the artworks, and exhibition reviews invariably reside in distinct and seemingly unrelated, or at least unlinked, sources. While communication and exchange exist, there is a great deal of independence between major repositories, such as museums, often owing to their ownership or heritage, which increases the individuality of each repository's own processes and dissemination. It is currently necessary to browse numerous different websites to obtain information about any one artist, and at this time there is little aggregation of Fine Art Information. This is in contrast to the domain of books and music, where the aggregation and re-grouping of information (usually by author or artist/band name) has become the norm. A Museum API (Application Programming Interface), however, is a tool that can facilitate a similar information service for the domain of Fine Art, by allowing the retrieval and aggregation of Web-based Fine Art Information whilst at the same time increasing public access to the content of a museum's collection. In this paper, we present the case for a pragmatic solution to the problems of heterogeneity and distribution of Fine Art Data, a first step towards the comprehensive re-presentation of Fine Art Information in a more 'artist-centric' way via accessible Web applications. The paper examines the domain of Fine Art Information on the Web, putting forward the case for more Web services such as generic Museum APIs and highlighting this via a prototype Web application known as ArtBridge. The generic Museum API is the standardisation mechanism that enables interfacing with specific Museum APIs.
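The sketch below illustrates the aggregation pattern a generic Museum API would enable: query several collection endpoints for one artist and merge the results into a single artist-centric record. The URLs and field names are placeholders, not real museum APIs.

```python
import requests

# Hypothetical museum collection endpoints returning JSON.
MUSEUM_APIS = [
    "https://museum-a.example.org/api/artworks",
    "https://museum-b.example.org/api/objects",
]

def aggregate_by_artist(artist_name):
    """Query each museum API for one artist and merge the results into a
    single artist-centric record (artworks, locations, source repository)."""
    record = {"artist": artist_name, "artworks": []}
    for base in MUSEUM_APIS:
        resp = requests.get(base, params={"artist": artist_name}, timeout=30)
        resp.raise_for_status()
        for work in resp.json().get("results", []):
            record["artworks"].append({
                "title": work.get("title"),
                "location": work.get("location"),
                "source": base,
            })
    return record
```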


2020 ◽  
Vol 17 ◽  
pp. 326-331
Author(s):  
Kamil Siebyła ◽  
Maria Skublewska-Paszkowska

There are various methods for creating web applications, and each of them offers a different level of performance. This factor is measurable at every level of the application. The performance of the frontend layer depends on the response time of each individual endpoint of the API (Application Programming Interface) used. The way data access is programmed at a specific endpoint therefore determines the performance of the entire application. There are many programming methods available, and they are often time-consuming to implement. This article presents a comparison of the available methods of handling the persistence layer with respect to the efficiency of their implementation.
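As a small illustration of how the data-access style at an endpoint drives response time, the sketch below times an N+1 query pattern against a single set-based query on an in-memory SQLite database; the schema and row counts are arbitrary, and the article's own comparison covers different persistence-layer methods.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders (customer_id) VALUES (?)",
                 [(i % 100,) for i in range(50_000)])

def n_plus_one():
    # One query per customer: the pattern a naively coded endpoint often produces.
    return [conn.execute("SELECT COUNT(*) FROM orders WHERE customer_id = ?",
                         (c,)).fetchone()[0] for c in range(100)]

def set_based():
    # A single aggregated query returning the same information.
    return conn.execute(
        "SELECT customer_id, COUNT(*) FROM orders GROUP BY customer_id").fetchall()

for fn in (n_plus_one, set_based):
    start = time.perf_counter()
    fn()
    print(fn.__name__, f"{(time.perf_counter() - start) * 1000:.1f} ms")
```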


2021 ◽  
Vol 2078 (1) ◽  
pp. 012039
Author(s):  
Qi An

Abstract: Skin cancer has become a great concern for people's wellness. With the popularization of machine learning, a considerable amount of data about skin cancer has been created, yet applications on the market featuring skin cancer diagnosis have barely utilized it. In this paper, we have designed a web application to diagnose skin cancer using a CNN model and the Chatterbot API. First, the application allows the user to upload an image of the user's skin. Next, a CNN model, trained on a large set of previously collected images, predicts whether the skin is affected by skin cancer and, if so, which kind of skin cancer the uploaded image shows. Last, a chatbot built with the Chatterbot API and trained on hundreds of questions and answers collected from the internet interacts with the user and gives feedback based on the information provided by the CNN model. The application has achieved significant performance in making classifications and has acquired the ability to interact with users: the CNN model has reached an accuracy of 0.95, and the chatbot can answer more than 100 questions about skin cancer. We have also connected the backend, based on the CNN model and the Chatterbot API, with the frontend, based on the Vue JavaScript framework.
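A minimal sketch of the two backend pieces described above follows, assuming a Keras CNN and the ChatterBot library; the input size, class count, layer sizes, and training data are illustrative and not the paper's actual architecture or corpus.

```python
from tensorflow.keras import layers, models
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

# Small CNN classifier over RGB skin-lesion images (illustrative architecture).
NUM_CLASSES = 7  # e.g. lesion categories in a public dermatoscopic dataset

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)

# Companion chatbot trained on question/answer pairs with ChatterBot's ListTrainer.
bot = ChatBot("SkinCancerAssistant")
ListTrainer(bot).train([
    "What is melanoma?",
    "Melanoma is a type of skin cancer that develops from pigment-producing cells.",
])
print(bot.get_response("What is melanoma?"))
```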

