Bike-sharing system in Poznan – what will Web API data tell us?

2020 · Vol 23 (3) · pp. 29–40
Author(s):  
Michał Dzięcielski ◽  
Adam Radzimski ◽  
Marcin Woźniak

Bike-sharing systems, also known as public bicycles, are among the most dynamically developing mobility solutions in contemporary cities. In the past decade, numerous Polish cities hoping to increase the modal share of cycling have also adopted bike-sharing. Such systems continuously register user movements through installed sensors, and the resulting database allows a highly detailed representation of this segment of urban mobility. This article illustrates how a database accessed via a Web API (Web Application Programming Interface) can be used to investigate the spatial distribution of trips, using the case study of Poznań, the fifth-largest city in Poland. Using geographical information systems, we identify bike-sharing hot spots as well as areas of low usage. The research procedure outlined in the paper yields knowledge that allows operators to respond better to users' needs.
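As an illustration of the kind of data collection involved, the sketch below polls a bike-share Web API and infers station-level departures from successive snapshots; the endpoint URL and JSON field names are hypothetical placeholders, not the Poznań system's actual API.

```python
# Minimal sketch: poll a bike-share Web API and aggregate per-station activity.
# The endpoint and field names below are hypothetical placeholders.
import time
from collections import Counter

import requests

API_URL = "https://example.com/bikeshare/stations"  # hypothetical endpoint

def snapshot():
    """Fetch the current number of bikes docked at each station."""
    payload = requests.get(API_URL, timeout=10).json()
    return {s["station_id"]: s["bikes_available"] for s in payload["stations"]}

def count_departures(poll_seconds=60, polls=10):
    """Infer departures as decreases in bike counts between snapshots."""
    departures = Counter()
    previous = snapshot()
    for _ in range(polls):
        time.sleep(poll_seconds)
        current = snapshot()
        for station, bikes in current.items():
            drop = previous.get(station, bikes) - bikes
            if drop > 0:
                departures[station] += drop
        previous = current
    return departures

if __name__ == "__main__":
    print("Busiest stations:", count_departures().most_common(10))
```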

Author(s):  
Raul Sierra-Alcocer ◽  
Christopher Stephens ◽  
Juan Barrios ◽  
Constantino González‐Salazar ◽  
Juan Carlos Salazar Carrillo ◽  
...  

SPECIES (Stephens et al. 2019) is a tool to explore spatial correlations in biodiversity occurrence databases. The main idea behind the SPECIES project is that the geographical correlations between the distributions of taxa records carry useful information. The problem, however, is that with thousands of species (Mexico's National System of Biodiversity Information has records of around 70,000 species) there are millions of potential associations, and exploring them is far from easy. Our goal with SPECIES is to facilitate the discovery and application of meaningful relations hiding in our data. The main variables in SPECIES are the geographical distributions of species occurrence records. Other types of variables, such as the climatic variables from WorldClim (Hijmans et al. 2005), are explanatory data that serve for modeling.

The system offers two modes of analysis. In the first, the user defines a target species and a selection of species and abiotic variables; the system then computes the spatial correlations between the target species and each of the other species and abiotic variables. The request can be as small as comparing one species to another, or as large as comparing one species to all the species in the database. A user may wonder, for example, which species are usual neighbors of the jaguar; this mode could help answer that question. The second mode gives a network perspective: the user defines two groups of taxa (and/or environmental variables), and the output is a correlation network in which the weight of a link between two nodes represents the spatial correlation between the variables those nodes represent. For example, one group of taxa could be hummingbirds (family Trochilidae) and the second flowers of the family Lamiaceae. This output would help the user analyze which pairs of hummingbird and flower species are highly correlated in the database.

The SPECIES data architecture is optimized to support fast hypothesis prototyping and testing with the analysis of thousands of biotic and abiotic variables. A visualization web interface presents descriptive results to the user at different levels of detail. The methodology in SPECIES is relatively simple: it partitions the geographical space with a regular grid and treats a species occurrence distribution as a present/absent Boolean variable over the cells. Given two species (or one species and one abiotic variable), it measures whether the number of co-occurrences between the two is higher (or lower) than expected. More co-occurrences than expected signal a positive relation, whereas fewer are evidence of disjoint distributions.

SPECIES provides an open web application programming interface (API) for requesting the computation of correlations and statistical dependencies between variables in the database. Users can create applications that consume this 'statistical web service' or use it directly and further analyze the results in frameworks like R or Python. The project includes an interactive web application that does exactly that: it requests analyses from the web service and lets the user experiment and visually explore the results. We believe this approach can, on the one hand, augment the services provided by data repositories and, on the other, facilitate the creation of specialized applications that are clients of these services.
This scheme supports big-data-driven research for users from a wide range of backgrounds, because end users need neither the technical know-how nor the infrastructure to handle large databases. Currently, SPECIES hosts all records from Mexico's National Biodiversity Information System (CONABIO 2018) and a subset of Global Biodiversity Information Facility data covering the contiguous USA (GBIF.org 2018b) and Colombia (GBIF.org 2018a). It also includes discretizations of environmental variables from WorldClim, from the Environmental Rasters for Ecological Modeling project (Title and Bemmels 2018), and from CliMond (Kriticos et al. 2012), as well as topographic variables (USGS EROS Center 1997a, USGS EROS Center 1997b). The long-term plan, however, is to incrementally include more data, especially all data from the Global Biodiversity Information Facility. The code of the project is open source, and the repositories are available online (front end, web services application programming interface, database-building scripts). This presentation is a demonstration of SPECIES' functionality and overall design.
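The grid-based co-occurrence test lends itself to a compact illustration. Below is a minimal sketch, assuming presence/absence of two variables over the same regular grid; the deviation score here (observed minus expected co-occurrences, in binomial standard deviations) follows the general idea described above, though the exact statistic SPECIES uses may differ.

```python
# Minimal sketch of a grid-based co-occurrence test for two presence/absence
# variables. Positive scores suggest overlapping distributions; negative
# scores suggest disjoint ones.
import numpy as np

def cooccurrence_signal(x, y):
    """x, y: boolean arrays of presence over the same grid cells."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    n = x.size
    observed = np.sum(x & y)             # cells where both occur
    p = (x.sum() / n) * (y.sum() / n)    # co-occurrence prob. under independence
    expected = n * p
    sd = np.sqrt(n * p * (1 - p))        # binomial standard deviation
    return (observed - expected) / sd if sd > 0 else 0.0

# Synthetic example: species b mostly co-occurs with species a.
rng = np.random.default_rng(0)
a = rng.random(1000) < 0.3
b = a & (rng.random(1000) < 0.8)
print(round(cooccurrence_signal(a, b), 2))  # strongly positive
```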


2020 · pp. 004912412092621
Author(s):  
C. Ben Gibson ◽  
Jeannette Sutton ◽  
Sarah K. Vos ◽  
Carter T. Butts

Microblogging sites have become important data sources for studying network dynamics and information transmission. Both areas of study, however, require accurate counts of indegree, or follower counts; unfortunately, collection of complete time series on follower counts can be limited by application programming interface constraints, system failures, or temporal constraints. In addition, there is almost always a time difference between the point at which follower counts are queried and the time a user posts a tweet. Here, we consider three classes of simple, easily implemented methods for follower imputation: polynomial functions, splines, and generalized linear models. We evaluate the performance of each method via a case study of accounts from 236 health organizations during the 2014 Ebola outbreak. We find that negative binomial regression, fitted separately for each account with time as an interval variable, accurately recovers missing values in both interpolation and extrapolation while retaining narrow prediction intervals.
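The winning model is straightforward to reproduce with standard tooling. Below is a minimal sketch of per-account imputation with negative binomial regression on synthetic data, using statsmodels; the paper's exact model specification (e.g., dispersion handling) may differ.

```python
# Minimal sketch: impute missing follower counts for one account with
# negative binomial regression on time. The data here are synthetic.
import numpy as np
import statsmodels.api as sm

# Daily follower counts with a gap (NaN = days the API was not queried).
t = np.arange(30, dtype=float)
followers = np.round(500 * np.exp(0.02 * t)).astype(float)
followers[10:15] = np.nan

observed = ~np.isnan(followers)
X = sm.add_constant(t)  # intercept + time as the sole predictor

# Fit the negative binomial GLM on observed days only.
fit = sm.GLM(followers[observed], X[observed],
             family=sm.families.NegativeBinomial()).fit()

# Impute the missing days (interpolation) from the fitted mean.
imputed = fit.predict(X[~observed])
print(np.round(imputed))
```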


Author(s):  
Manraj Singh Bains ◽  
Shriniwas S. Arkatkar ◽  
K. S. Anbumani ◽  
Siva Subramaniam

This study aimed to develop a microsimulation model for optimizing toll plaza operations with respect to operational cost and level of service for users. A well-calibrated and validated simulation model was developed in PTV Vissim, and several scenarios were simulated to test their efficacy in improving toll plaza operations. Data collected included classified entry traffic volume at the toll plaza, service time for different payment categories, percentage lane utilization, and travel time while crossing the toll plaza. For modeling lane selection, the PTV Vissim component object model (COM) application programming interface, which enables dynamic route choice, was used. The results showed that the simulation model accurately represented current operations at the toll plaza. Scenarios such as implementing number plate recognition technology and segregating lanes by vehicle type were then evaluated with the simulation model to assess their effect on level of service.
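For readers unfamiliar with the Vissim COM interface, a minimal sketch of how such a model is driven from Python follows; the network file name and the lane-choice rule are hypothetical, and the calls shown are generic COM entry points rather than the authors' actual scripts.

```python
# Minimal sketch: step a Vissim simulation via COM and inspect vehicles.
# Requires PTV Vissim installed on Windows; the network file is hypothetical.
import win32com.client as com

vissim = com.Dispatch("Vissim.Vissim")        # attach to the Vissim COM server
vissim.LoadNet(r"C:\models\toll_plaza.inpx")  # hypothetical network file

type_counts = {}
for step in range(3600):                      # one simulated hour, step by step
    vissim.Simulation.RunSingleStep()
    for veh in vissim.Net.Vehicles.GetAll():
        vtype = veh.AttValue("VehType")       # read vehicle attributes via COM
        type_counts[vtype] = type_counts.get(vtype, 0) + 1
        # A custom toll-lane choice rule would be applied here, e.g. routing
        # electronic-toll-collection vehicles to dedicated lanes (hypothetical).

print(type_counts)
```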


2018 · Vol 21 (2) · pp. 34–38
Author(s):  
Awal Kurniawan ◽  
Intan Sari Areni ◽  
Andani Achmad

Web technology has advanced considerably, from the static pages of the web 1.0 era to web technologies able to address device capabilities such as storage, speech recognition, and geolocation. One of the web technologies available today is the progressive web application. This study aims to design a system that can cache website content. The system uses a progressive web application built around a service worker. The data source used as the object of this study is citizen complaint data in JSON form. An experimental method was used to design the application. Complaint data obtained from an API (Application Programming Interface) is first displayed while the network connection is active; during this time, the service worker performs its caching task. The stored data can then be accessed while the network is inactive. The result of this study is that a complaint system with an embedded service worker can cache up to 500 complaint records. Although the application takes longer to load because of the service worker installation, it is accessed faster when offline because the data is loaded from the service worker cache.
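Service workers are browser JavaScript, so the caching logic itself cannot be shown in Python; as a language-neutral stand-in, the sketch below reproduces the same network-first, cache-fallback behavior for JSON complaint data. The endpoint URL is hypothetical.

```python
# Language-neutral sketch of the network-first, cache-fallback strategy a
# service worker implements: refresh the cache while online, serve the cached
# copy when the network is unavailable. The endpoint is hypothetical.
import json
import pathlib

import requests

CACHE = pathlib.Path("complaints_cache.json")
API_URL = "https://example.com/api/complaints"  # hypothetical complaints API

def fetch_complaints():
    """Fetch complaints from the network, falling back to the cache offline."""
    try:
        data = requests.get(API_URL, timeout=5).json()
        CACHE.write_text(json.dumps(data))      # update cache while online
        return data
    except requests.RequestException:
        if CACHE.exists():                      # offline: serve cached copy
            return json.loads(CACHE.read_text())
        raise

print(len(fetch_complaints()), "complaints loaded")
```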


2015 · Vol 6 (2)
Author(s):  
Stan Ruecker ◽  
Peter Hodges ◽  
Nayaab Lokhadwala ◽  
Szu-Ying Ching ◽  
Jennifer Windsor ◽  
...  

An Application Programming Interface (API) can serve as a mechanism for separating interface concerns on the one hand from data and processing on the other, allowing for easier implementation of alternative human-computer interfaces. The API can also be used as a sounding board for ideas about what an interface should and should not accomplish. Our discussion will take as its case study our recent work in designing experimental interfaces for the visual construction of Boolean queries, for a project we have previously called the Mandala Browser.
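As a concrete (and purely illustrative) example of such a separation, a visual query builder could assemble a Boolean expression tree and hand it to the backend API as JSON; the class and field names below are hypothetical, not the Mandala Browser's actual interface.

```python
# Minimal sketch: a visual interface builds a Boolean query as a small
# expression tree, then serializes it for an API that owns data and
# processing. Names are illustrative only.
import json
from dataclasses import dataclass

@dataclass
class Term:
    field: str
    value: str
    def to_json(self):
        return {"term": {self.field: self.value}}

@dataclass
class And:
    left: object
    right: object
    def to_json(self):
        return {"and": [self.left.to_json(), self.right.to_json()]}

@dataclass
class Or:
    left: object
    right: object
    def to_json(self):
        return {"or": [self.left.to_json(), self.right.to_json()]}

# (speaker = Hamlet AND form = verse) OR speaker = Ophelia
query = Or(And(Term("speaker", "Hamlet"), Term("form", "verse")),
           Term("speaker", "Ophelia"))
print(json.dumps(query.to_json(), indent=2))
```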


Author(s):  
John Anderson Gómez Múnera ◽  
Alejandro Giraldo Quintero

The considerable computational demands of optimal control problems often exceed the computing capacity available to handle complex systems in real time. For this reason, this article studies alternatives such as parallel computing, in which the problem is solved by distributing tasks among several processors to accelerate the computation; we analyze how the total calculation time decreases as the number of processors increases. We explore the use of these methods with a case study of a rolling mill process, making use of the strategy of updating the final-phase values to construct the final penalty matrix for the solution of the differential Riccati equation. In addition, the order of the problem is increased gradually to compare the improvements achieved in higher-dimensional models. Parallel computing alternatives are studied using multiple processing elements within a single machine or in a cluster via OpenMP, an application programming interface (API) that allows the creation of shared-memory programs.
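OpenMP itself targets C, C++, and Fortran, so the sketch below uses Python's multiprocessing module as a stand-in for the same idea: independent units of work distributed across several workers, with serial and parallel timings compared. The workload is a synthetic placeholder for the Riccati-equation computations, not the authors' model.

```python
# Stand-in sketch for shared-workload parallelism: distribute independent
# chunks of a synthetic numerical workload across worker processes and
# compare wall-clock time against the serial run.
import time
from multiprocessing import Pool

import numpy as np

def chunk_work(seed):
    """One unit of work: accumulate a few dense matrix products."""
    rng = np.random.default_rng(seed)
    a = rng.random((200, 200))
    total = np.zeros_like(a)
    for _ in range(50):
        total += a @ a
    return total.sum()

if __name__ == "__main__":
    seeds = list(range(16))

    start = time.perf_counter()
    serial = [chunk_work(s) for s in seeds]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool(processes=4) as pool:   # distribute tasks over 4 workers
        parallel = pool.map(chunk_work, seeds)
    t_parallel = time.perf_counter() - start

    print(f"serial: {t_serial:.2f}s, parallel: {t_parallel:.2f}s")
```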


2020 · Vol 2020 · pp. 1–13
Author(s):  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
David Ruano-Ordas ◽  
Jose R. Méndez

During the last years, big data analysis has become a popular means of taking advantage of multiple (initially valueless) sources to find relevant knowledge in real domains. However, a large number of big data sources provide textual unstructured data, and a proper analysis requires tools able to adequately combine big data and text-analysis techniques. With this in mind, we combined a pipelining framework (BDP4J, Big Data Pipelining For Java) with the implementation of a set of text preprocessing techniques to create NLPA (Natural Language Preprocessing Architecture), an extendable open-source plugin implementing preprocessing steps that can be easily combined to create a pipeline. Additionally, NLPA can generate datasets using either a classical token-based representation of the data or newer synset-based datasets that can be further processed using semantic information (i.e., using ontologies). This work presents a case study of NLPA operation covering the transformation of raw heterogeneous big data into different dataset representations (synsets and tokens) and the use of the Weka application programming interface (API) to launch two well-known classifiers.
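The token-versus-synset distinction is easy to illustrate. The sketch below uses NLTK's WordNet interface purely as an illustration (NLPA itself is a Java framework); it shows how a synset-based representation maps synonyms such as "physician" and "doctor" to one identifier.

```python
# Minimal sketch of token-based vs. synset-based text representations.
# Requires: pip install nltk, plus nltk.download('punkt') and
# nltk.download('wordnet') on first use.
from nltk.corpus import wordnet as wn
from nltk.tokenize import word_tokenize

text = "The physician examined the patient"

# Token-based representation: the surface words themselves.
tokens = [t.lower() for t in word_tokenize(text)]
print(tokens)

# Synset-based representation: each word mapped to a WordNet concept,
# so synonyms ("physician", "doctor") share the same identifier.
synsets = [wn.synsets(t)[0].name() for t in tokens if wn.synsets(t)]
print(synsets)  # e.g. 'doctor.n.01' for "physician"
```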


Author(s):  
Ricardo Santos ◽  
Ivo Pereira ◽  
Isabel Azevedo

Detailed documentation and software tests are key factors in the success of a web application programming interface (API). When designing an API, especially in a design-first approach, it is important to define a formal contract, known as the API specification. This document must contain all necessary information regarding the API's behavior. The specification can then be used to dynamically generate API components such as documentation, client and server code, and software tests, reducing development and maintenance costs. This chapter presents a study of the OpenAPI Specification and its application to designing a new RESTful API for E-goi. It also presents a set of solutions for generating documentation, client code libraries, and test cases.
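As an illustration of such a contract, a minimal OpenAPI 3.0 document for a hypothetical endpoint follows; paths and schema names are invented for the example and are not E-goi's actual API.

```yaml
# Minimal sketch of an OpenAPI 3.0 contract from which documentation,
# client code, and tests can be generated. All names are illustrative.
openapi: "3.0.3"
info:
  title: Example Contact API
  version: "1.0.0"
paths:
  /contacts/{contactId}:
    get:
      summary: Retrieve a single contact
      parameters:
        - name: contactId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested contact
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Contact"
        "404":
          description: Contact not found
components:
  schemas:
    Contact:
      type: object
      properties:
        id:
          type: string
        email:
          type: string
```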


2021 · Vol 2078 (1) · pp. 012039
Author(s):  
Qi An

Skin cancer has become a major public health concern. With the popularization of machine learning, a considerable amount of data about skin cancer has been generated, yet applications on the market featuring skin cancer diagnosis have barely utilized these data. In this paper, we design a web application that diagnoses skin cancer using a CNN model and the ChatterBot API. First, the application allows the user to upload an image of the user's skin. Next, a CNN model trained on a large set of previously collected images predicts whether the skin is affected by skin cancer and, if so, which kind of skin cancer the uploaded image shows. Last, a chatbot built with the ChatterBot API and trained on hundreds of questions and answers from the internet interacts with the user and gives feedback based on the information provided by the CNN model. The application achieves strong performance: the CNN model reaches a classification accuracy of 0.95, and the chatbot can answer more than 100 questions about skin cancer. The backend, based on the CNN model and the ChatterBot API, is connected to a frontend built with the Vue JavaScript framework.
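A minimal sketch of how the two backend pieces could fit together follows, using Keras for the CNN and the ChatterBot library; the model file, class labels, and training pairs are placeholders, not the paper's actual artifacts.

```python
# Minimal sketch: a trained Keras CNN classifies an uploaded skin image and a
# ChatterBot bot answers follow-up questions. Model file, labels, and the
# training pair below are hypothetical placeholders.
import numpy as np
from tensorflow import keras
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

CLASSES = ["benign", "melanoma", "basal cell carcinoma"]  # illustrative labels

model = keras.models.load_model("skin_cnn.h5")  # hypothetical trained model

def classify(image_path):
    """Return the predicted class label for one uploaded image."""
    img = keras.utils.load_img(image_path, target_size=(224, 224))
    x = keras.utils.img_to_array(img)[np.newaxis] / 255.0  # batch of one
    return CLASSES[int(np.argmax(model.predict(x)))]

bot = ChatBot("SkinCancerBot")
ListTrainer(bot).train([
    "What is melanoma?",
    "Melanoma is a serious form of skin cancer arising from melanocytes.",
])

label = classify("upload.jpg")
print(f"Prediction: {label}")
print(bot.get_response(f"What is {label}?"))
```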

