Dynamic Generation of Documentation, Code, and Tests for a Digital Marketing Platform's API

Author(s):  
Ricardo Santos ◽  
Ivo Pereira ◽  
Isabel Azevedo

Detailed documentation and software tests are key factors in the success of a web application programming interface (API). When designing an API, especially in a design-first approach, it is important to define a formal contract, known as the API specification. This document must contain all the information necessary to describe the API's behavior. The specification can then be used to dynamically generate API components such as documentation, client and server code, and software tests, reducing development and maintenance costs. This chapter presents a study of the OpenAPI specification and its application in the design of a new RESTful API for E-goi. It also presents a set of solutions for generating documentation, client code libraries, and test cases.
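The design-first workflow the chapter describes can be sketched with a minimal, machine-readable contract from which documentation is derived automatically. The endpoint and field names below are illustrative, not E-goi's actual API, and the generator is a toy stand-in for full-featured OpenAPI tooling.

```python
# A minimal OpenAPI 3 contract, written inline as a Python dict.
# Endpoint names are hypothetical, not E-goi's real API.
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Campaigns API", "version": "1.0.0"},
    "paths": {
        "/campaigns": {
            "get": {
                "summary": "List campaigns",
                "responses": {"200": {"description": "A list of campaigns"}},
            },
            "post": {
                "summary": "Create a campaign",
                "responses": {"201": {"description": "Campaign created"}},
            },
        }
    },
}

def generate_docs(spec):
    """Derive human-readable reference docs from the machine-readable contract."""
    lines = [f"# {spec['info']['title']} v{spec['info']['version']}"]
    for path, operations in spec["paths"].items():
        for method, op in operations.items():
            lines.append(f"## {method.upper()} {path} - {op['summary']}")
            for code, resp in op["responses"].items():
                lines.append(f"- {code}: {resp['description']}")
    return "\n".join(lines)

docs = generate_docs(spec)
```

Because the same `spec` object is the single source of truth, client/server stubs and test cases can be generated from it by the same principle, which is what keeps documentation and code from drifting apart.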

Author(s):  
Raul Sierra-Alcocer ◽  
Christopher Stephens ◽  
Juan Barrios ◽  
Constantino González‐Salazar ◽  
Juan Carlos Salazar Carrillo ◽  
...  

SPECIES (Stephens et al. 2019) is a tool to explore spatial correlations in biodiversity occurrence databases. The main idea behind the SPECIES project is that the geographical correlations between the distributions of taxa records carry useful information. The problem, however, is that with thousands of species (Mexico's National System of Biodiversity Information has records of around 70,000 species) there are millions of potential associations, and exploring them is far from easy. Our goal with SPECIES is to facilitate the discovery and application of meaningful relations hidden in our data. The main variables in SPECIES are the geographical distributions of species occurrence records. Other types of variables, like the climatic variables from WorldClim (Hijmans et al. 2005), are explanatory data that serve for modeling. The system offers two modes of analysis. In the first, the user defines a target species and a selection of species and abiotic variables; the system then computes the spatial correlations between the target species and each of the other species and abiotic variables. The request can be as small as comparing one species to another, or as large as comparing one species to all the species in the database. A user may wonder, for example, which species are usual neighbors of the jaguar; this mode can help answer that question. The second mode of analysis gives a network perspective: the user defines two groups of taxa (and/or environmental variables), and the output is a correlation network in which the weight of a link between two nodes represents the spatial correlation between the variables those nodes represent. For example, one group of taxa could be hummingbirds (family Trochilidae) and the second flowers of the family Lamiaceae. This output would help the user analyze which pairs of hummingbird and flower are highly correlated in the database.
The SPECIES data architecture is optimized to support fast hypothesis prototyping and testing across thousands of biotic and abiotic variables. A visualization web interface presents descriptive results to the user at different levels of detail. The methodology in SPECIES is relatively simple: it partitions the geographical space with a regular grid and treats a species' occurrence distribution as a present/absent boolean variable over the cells. Given two species (or one species and one abiotic variable), it measures whether the number of co-occurrences between the two is more (or less) than expected. More co-occurrences than expected signal a positive relation, whereas fewer would be evidence of disjoint distributions. SPECIES provides an open web application programming interface (API) to request the computation of correlations and statistical dependencies between variables in the database. Users can create applications that consume this 'statistical web service' or use it directly and further analyze the results in frameworks like R or Python. The project includes an interactive web application that does exactly that: it requests analyses from the web service and lets the user experiment and visually explore the results. We believe this approach can be used, on the one hand, to augment the services provided by data repositories and, on the other, to facilitate the creation of specialized applications that are clients of these services. This scheme supports big-data-driven research for users from a wide range of backgrounds, because end users need neither the technical know-how nor the infrastructure to handle large databases. Currently, SPECIES hosts all records from Mexico's National Biodiversity Information System (CONABIO 2018) and a subset of Global Biodiversity Information Facility data covering the contiguous USA (GBIF.org 2018b) and Colombia (GBIF.org 2018a).
It also includes discretizations of environmental variables from WorldClim, from the Environmental Rasters for Ecological Modeling project (Title and Bemmels 2018), from CliMond (Kriticos et al. 2012), and topographic variables (USGS EROS Center 1997a, 1997b). The long-term plan, however, is to incrementally include more data, especially all data from the Global Biodiversity Information Facility. The project's code is open source, and the repositories are available online (front-end, web services application programming interface, and database building scripts). This presentation is a demonstration of SPECIES' functionality and its overall design.
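The grid-based methodology can be sketched in a few lines: represent each variable as the set of occupied grid cells, then compare observed co-occurrences with the count expected under independence. This is a simplified illustration of the idea, not the exact statistic from Stephens et al. (2019); the normalization and toy data are assumptions.

```python
import math

def cooccurrence_score(cells_a, cells_b, n_cells):
    """Compare observed co-occurrences of two presence/absence variables
    against the count expected if they were independent.
    A positive score suggests association; a negative one, disjoint ranges."""
    observed = len(cells_a & cells_b)
    p_a = len(cells_a) / n_cells
    p_b = len(cells_b) / n_cells
    expected = n_cells * p_a * p_b
    # Normalize by the binomial standard deviation under independence.
    sd = math.sqrt(n_cells * p_a * p_b * (1 - p_a * p_b))
    return (observed - expected) / sd

# Toy example over a 100-cell grid: two species with overlapping ranges.
jaguar = set(range(0, 30))
neighbor = set(range(10, 40))
score = cooccurrence_score(jaguar, neighbor, 100)  # well above zero
```

Repeating this pairwise computation over two groups of variables yields exactly the correlation-network output described above, with each score as a link weight.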


2018 ◽  
Vol 21 (2) ◽  
pp. 34-38
Author(s):  
Awal Kurniawan ◽  
Intan Sari Areni ◽  
Andani Achmad

Web technology has advanced considerably, from the static web 1.0 era to web technologies able to address device-level concerns such as storage, speech recognition, and geolocation. One of the web technologies available today is the progressive web application. This study aims to design a system that can cache website content files. The system is a progressive web application that makes use of a service worker. The data source used as the object of this study is public complaint data in JSON form. An experimental method was used to design the application. Complaint data obtained from an API (application programming interface) is displayed while the network connection is active. While the connection is active, the service worker performs its caching task; the stored data can then be accessed when the network is inactive. The result of this study is that the complaint system with an embedded service worker was able to cache up to 500 complaint records. Although the time needed to access the application is longer because of the service worker installation, the application loads faster when offline because the data is served from the service worker's cache.
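The caching strategy the service worker implements (fetch from the network while online, fall back to the cache while offline) is language-neutral; the real implementation runs as JavaScript in the browser, but the logic can be sketched as follows. All names and the sample payload are illustrative.

```python
class CachingFetcher:
    """Sketch of a network-first, cache-fallback strategy: responses
    fetched while online are cached; offline, the cache serves them."""

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn  # function simulating a network request
        self.cache = {}

    def get(self, url):
        try:
            data = self.fetch_fn(url)  # network available
            self.cache[url] = data     # cache for offline use
            return data
        except ConnectionError:
            if url in self.cache:      # offline: serve the cached copy
                return self.cache[url]
            raise

def online(url):
    return {"url": url, "complaints": ["example complaint"]}

def offline(url):
    raise ConnectionError("network unavailable")

fetcher = CachingFetcher(online)
fetcher.get("/api/complaints")        # fetched and cached while online
fetcher.fetch_fn = offline            # simulate losing the connection
data = fetcher.get("/api/complaints") # served from the cache while offline
```

This mirrors the trade-off the abstract reports: caching adds work on the first (online) access, but makes subsequent offline access possible and fast.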


2021 ◽  
Vol 2069 (1) ◽  
pp. 012135
Author(s):  
N D Svane ◽  
A Pranskunas ◽  
L B Lindgren ◽  
R L Jensen

Abstract The architecture, engineering, and construction (AEC) industry experiences a growing need for building performance simulations (BPS) as facilitators in the design process. However, inconsistent modelling practice and the varying quality of export/import functions make interoperability with the IFC and gbXML data formats error-prone. Consequently, repeated manual modelling is still necessary. This paper presents a coupling module enabling semi-automated extraction of geometry data from the BIM software Revit and its further translation to a BPS input file using the Revit Application Programming Interface (API) and visual programming in Dynamo. The module is tested with three test cases, which show promising results for fast and structured semi-automatic geometry modelling designed to fit today's practice.


2021 ◽  
Vol 2078 (1) ◽  
pp. 012039
Author(s):  
Qi An

Abstract Skin cancer has become a great concern for people's wellness. With the popularization of machine learning, a considerable amount of data about skin cancer has been created. However, applications on the market featuring skin cancer diagnosis have barely utilized these data. In this paper, we design a web application to diagnose skin cancer with a CNN model and the ChatterBot API. First, the application allows the user to upload an image of the user's skin. Next, a CNN model trained on a large set of pre-taken images predicts whether the skin is affected by skin cancer and, if so, which kind of skin cancer the uploaded image should be classified as. Last, a chatbot built with the ChatterBot API and trained on hundreds of questions and answers from the internet interacts with the user and gives feedback based on the information provided by the CNN model. The application has achieved significant performance in making classifications and can interact with users: the CNN model reaches an accuracy of 0.95, and the chatbot can answer more than 100 questions about skin cancer. We also connected the backend, based on the CNN model and the ChatterBot API, with the frontend, based on the Vue JavaScript framework.


Author(s):  
Uwe Zdun

This chapter examines the use of patterns for reengineering legacy systems to the Web. Today, reengineering existing (legacy) systems to the Web is a typical software maintenance task. In such projects, developers integrate a Web representation with the legacy system's application programming interface (API) and its responses. Often, the same information is provided through channels other than HTTP and in formats other than HTML as well, and the old (legacy) interfaces are still supported. Add-on services such as security or logging are required. Performance and scalability of the Web application might be crucial. To resolve these issues, many different concepts and frameworks have to be well understood, especially legacy system wrapping, connection handling, remoting, service abstraction, adaptation techniques, dynamic content generation, and others. In this chapter, we present patterns from different sources that resolve these issues. We integrate them into a pattern language operating in the context of reengineering to the Web, and present pattern variants and examples in this context.
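The central idea of legacy system wrapping plus adaptation can be shown in miniature: a gateway object exposes a Web-friendly interface, translates the legacy response format, and layers on an add-on service without modifying the legacy code. All class and method names here are hypothetical, invented for illustration rather than taken from the chapter's pattern catalog.

```python
class LegacyOrderSystem:
    """Stand-in for a legacy API that returns data in its original,
    pipe-delimited format."""
    def lookup(self, order_id):
        return f"ORDER|{order_id}|SHIPPED"

class WebGateway:
    """Wrapper that adapts legacy responses for an HTTP channel and
    adds a cross-cutting service (logging) around each call."""
    def __init__(self, legacy, log):
        self.legacy = legacy
        self.log = log

    def get_order(self, order_id):
        self.log.append(f"GET /orders/{order_id}")  # add-on service: logging
        raw = self.legacy.lookup(order_id)          # legacy system wrapping
        _, oid, status = raw.split("|")             # adaptation of the format
        return {"id": oid, "status": status}        # JSON-friendly result

log = []
gateway = WebGateway(LegacyOrderSystem(), log)
order = gateway.get_order("42")
```

Because the legacy class is untouched, its old interfaces remain available to existing (non-Web) clients, which is exactly the coexistence requirement the chapter describes.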


2014 ◽  
Vol 518 ◽  
pp. 305-309
Author(s):  
Wen Tao Liu

Offline storage technology has many uses in web applications: it can store user status, cached data, temporary data, persistent data, and so on. This paper discusses several typical web client storage technologies, including the IE browser's proprietary UserData store, the localStorage and sessionStorage of HTML5, Web SQL Database, Indexed Database, and the classic Cookie. Their concrete usage is explained, and their individual strengths and differences are compared. Their respective application occasions, and some issues that need attention, are discussed. A general cross-browser offline storage method is presented that uses the same application programming interface across the different browsers' offline storage technologies.
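The cross-browser approach, one interface over several storage backends, is an adapter pattern. A minimal sketch (in Python rather than browser JavaScript, and with invented class names) shows how calling code can stay identical while the backend changes from session-scoped memory to persistent file storage:

```python
import json
import os
import tempfile

class MemoryStore:
    """Session-scoped storage, analogous to sessionStorage."""
    def __init__(self):
        self._data = {}
    def set_item(self, key, value):
        self._data[key] = value
    def get_item(self, key):
        return self._data.get(key)

class FileStore:
    """Persistent storage backed by a JSON file, analogous to localStorage."""
    def __init__(self, path):
        self.path = path
    def _load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}
    def set_item(self, key, value):
        data = self._load()
        data[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)
    def get_item(self, key):
        return self._load().get(key)

# The same calling code works against either backend.
stores = [MemoryStore(), FileStore(os.path.join(tempfile.mkdtemp(), "store.json"))]
for store in stores:
    store.set_item("user", "alice")
```

In the browser setting, each adapter would instead wrap UserData, Web Storage, Web SQL Database, Indexed Database, or cookies, with feature detection choosing the best available backend.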


Author(s):  
Xiang-Jun Lu

Abstract Sophisticated analysis and simplified visualization are crucial for understanding complicated structures of biomacromolecules. DSSR (Dissecting the Spatial Structure of RNA) is an integrated computational tool that has streamlined the analysis and annotation of 3D nucleic acid structures. The program creates schematic block representations in diverse styles that can be seamlessly integrated into PyMOL and complement its other popular visualization options. In addition to portraying individual base blocks, DSSR can draw Watson-Crick pairs as long blocks and highlight the minor-groove edges. Notably, DSSR can dramatically simplify the depiction of G-quadruplexes by automatically detecting G-tetrads and treating them as large square blocks. The DSSR-enabled innovative schematics with PyMOL are aesthetically pleasing and highly informative: the base identity, pairing geometry, stacking interactions, double-helical stems, and G-quadruplexes are immediately obvious. These features can be accessed via four interfaces: the command-line interface, the DSSR plugin for PyMOL, the web application, and the web application programming interface. The supplemental PDF serves as a practical guide, with complete and reproducible examples. Thus, even beginners or occasional users can get started quickly, especially via the web application at http://skmatic.x3dna.org.


2020 ◽  
pp. 245-253 ◽  
Author(s):  
Alex H. Wagner ◽  
Susanna Kiwala ◽  
Adam C. Coffman ◽  
Joshua F. McMichael ◽  
Kelsy C. Cotto ◽  
...  

PURPOSE Precision oncology depends on the matching of tumor variants to relevant knowledge describing the clinical significance of those variants. We recently developed the Clinical Interpretations for Variants in Cancer (CIViC; civicdb.org) crowd-sourced, expert-moderated, and open-access knowledgebase. CIViC provides a structured framework for evaluating genomic variants of various types (e.g., fusions, single-nucleotide variants) for their therapeutic, prognostic, predisposing, diagnostic, or functional utility. CIViC has a documented application programming interface for accessing CIViC records: assertions, evidence, variants, and genes. Third-party tools that analyze or access the contents of this knowledgebase programmatically must leverage this application programming interface, often reimplementing redundant functionality in the pursuit of common analysis tasks that are beyond the scope of the CIViC Web application. METHODS To address this limitation, we developed CIViCpy (civicpy.org), a software development kit for extracting and analyzing the contents of the CIViC knowledgebase. CIViCpy enables users to query CIViC content as dynamic objects in Python. We assess the viability of CIViCpy as a tool for advancing individualized patient care by using it to systematically match CIViC evidence to observed variants in patient cancer samples. RESULTS We used CIViCpy to evaluate variants from 59,437 sequenced tumors of the American Association for Cancer Research Project GENIE data set. We demonstrate that CIViCpy enables annotation of > 1,200 variants per second, resulting in precise variant matches to CIViC level A (professional guideline) or B (clinical trial) evidence for 38.6% of tumors. CONCLUSION The clinical interpretation of genomic variants in cancers requires high-throughput tools for interoperability and analysis of variant interpretation knowledge.
These needs are met by CIViCpy, a software development kit for downstream applications and rapid analysis. CIViCpy is fully documented, open-source, and available free online.
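The core matching task that CIViCpy automates, pairing observed tumor variants with evidence records at a given evidence level, can be illustrated with a stdlib-only sketch. The record structure, field names, and toy data below are invented for illustration and are not CIViCpy's actual object model or real CIViC content.

```python
# Toy evidence records; structure and values are illustrative only.
evidence = [
    {"gene": "BRAF", "variant": "V600E", "level": "A", "therapy": "vemurafenib"},
    {"gene": "EGFR", "variant": "L858R", "level": "B", "therapy": "erlotinib"},
    {"gene": "TP53", "variant": "R175H", "level": "C", "therapy": None},
]

def match_evidence(observed, evidence, levels=("A", "B")):
    """Return evidence items whose (gene, variant) pair appears among the
    observed variants, restricted to the requested evidence levels."""
    observed_set = {(v["gene"], v["variant"]) for v in observed}
    return [
        e for e in evidence
        if (e["gene"], e["variant"]) in observed_set and e["level"] in levels
    ]

tumor_variants = [{"gene": "BRAF", "variant": "V600E"}]
hits = match_evidence(tumor_variants, evidence)
```

Restricting to levels A and B mirrors the abstract's criterion for "precise variant matches"; scaling this lookup to tens of thousands of tumors is where a dedicated SDK earns its keep.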


2020 ◽  
Vol 23 (3) ◽  
pp. 29-40
Author(s):  
Michał Dzięcielski ◽  
Adam Radzimski ◽  
Marcin Woźniak

Bike-sharing systems, also known as public bicycles, are among the most dynamically developing mobility solutions in contemporary cities. In the past decade, numerous Polish cities hoping to increase the modal share of cycling have also adopted bike-sharing. Such systems continuously register user movements through installed sensors. The resulting database allows a highly detailed representation of this segment of urban mobility. This article illustrates how a database accessed via a Web API (Web Application Programming Interface) can be used to investigate the spatial distribution of trips, using the case study of Poznań, the fifth-largest city in Poland. Using geographical information systems, we identify the hot spots of bike-sharing as well as areas with low usage. The research procedure outlined in the paper provides knowledge that enables a better response to users' needs.
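The hot-spot identification step can be sketched as binning trip origins into a regular grid and ranking cells by trip count, a minimal stand-in for the GIS analysis in the paper. The cell size and sample coordinates are invented for illustration, not actual Poznań data.

```python
from collections import Counter

def trip_hot_spots(trips, cell_size=0.01, top=2):
    """Bin trip start coordinates (lat, lon) into a regular grid and
    return the `top` busiest cells with their trip counts."""
    counts = Counter(
        (int(lat / cell_size), int(lon / cell_size)) for lat, lon in trips
    )
    return counts.most_common(top)

# Hypothetical trip origins: three clustered starts and one outlier.
trips = [
    (52.401, 16.921),
    (52.402, 16.922),
    (52.405, 16.923),
    (52.471, 16.951),
]
hot = trip_hot_spots(trips)
```

In practice the trip records would be fetched from the system's Web API and the ranked cells mapped in a GIS, with the low-count cells highlighting underused areas.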

