Web services for data warehouses: OMOP and PCORnet on i2b2

2018 ◽  
Vol 25 (10) ◽  
pp. 1331-1338 ◽  
Author(s):  
Jeffrey G Klann ◽  
Lori C Phillips ◽  
Christopher Herrick ◽  
Matthew A H Joss ◽  
Kavishwar B Wagholikar ◽  
...  

Abstract Objective Healthcare organizations use research data models supported by projects and tools that interest them, which often means organizations must support the same data in multiple models. The healthcare research ecosystem would benefit if tools and projects could be adopted independently from the underlying data model. Here, we introduce the concept of a reusable application programming interface (API) for healthcare and show that the i2b2 API can be adapted to support diverse patient-centric data models. Materials and Methods We develop methodology for extending i2b2’s pre-existing API to query additional data models, using i2b2’s recent “multi-fact-table querying” feature. Our method involves developing data-model-specific i2b2 ontologies and mapping these to query non-standard table structure. Results We implement this methodology to query OMOP and PCORnet models, which we validate with the i2b2 query tool. We implement the entire PCORnet data model and a five-domain subset of the OMOP model. We also demonstrate that additional, ancillary data model columns can be modeled and queried as i2b2 “modifiers.” Discussion i2b2’s REST API can be used to query multiple healthcare data models, enabling shared tooling to have a choice of backend data stores. This enables separation between data model and software tooling for some of the more popular open analytic data models in healthcare. Conclusion This methodology immediately allows querying OMOP and PCORnet using the i2b2 API. It is released as an open-source set of Docker images, and also on the i2b2 community wiki.
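A minimal sketch of the mapping idea the abstract describes: a data-model-specific i2b2 ontology entry can point a concept at a non-standard fact table so that i2b2's multi-fact-table querying resolves it against OMOP instead of the default observation_fact table. This is not the authors' released code; the table, column, and concept values below are illustrative assumptions.

```python
# Illustrative sketch only: building a hypothetical i2b2-style ontology row that maps
# an OMOP condition concept to the condition_occurrence table. Column and value choices
# are assumptions for this example, not the project's actual ontology content.

def omop_condition_ontology_row(concept_id: int, name: str) -> dict:
    """Assemble one ontology entry pointing a concept at an OMOP fact table."""
    path = f"\\OMOP\\Conditions\\{concept_id}\\"
    return {
        "c_fullname": path,
        "c_name": name,
        # Multi-fact-table querying: the fact table is named alongside its join column.
        "c_facttablecolumn": "condition_occurrence.condition_concept_id",
        "c_tablename": "concept_dimension",
        "c_columnname": "concept_path",
        "c_operator": "LIKE",
        "c_dimcode": path,
    }

if __name__ == "__main__":
    row = omop_condition_ontology_row(201826, "Type 2 diabetes mellitus")
    for key, value in row.items():
        print(f"{key}: {value}")
```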

Author(s):  
Adian Fatchur Rochim ◽  
Abda Rafi ◽  
Adnan Fauzi ◽  
Kurniawan Teguh Martono

The use of information technology today is very high; activities from business to education rely on it most of the time. Information technology uses computer networks for data integration and management. To avoid operational problems, the growing number of installed network devices requires a manageable network configuration for easier maintenance. Traditionally, each network device has to be configured manually by network administrators, a process that is time-consuming and inefficient. Network automation methods exist to overcome this repetitive work. The proposed design uses a web-based application to maintain and automate networking tasks. In this research, the network automation system is implemented as a controller application with a REST API (Representational State Transfer Application Programming Interface) architecture, built on the Django framework with the Python programming language; the design is named the As-RaD System. The network device used in this research is the Cisco CSR1000V, because it supports REST API communication for managing its network configuration and can also be deployed on a server. The As-RaD System performs 75% faster than Paramiko and 92% faster than NAPALM.
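A minimal sketch of the kind of call a controller such as As-RaD might make to a device that exposes a RESTCONF-style REST interface, as the CSR1000V does. This is not the As-RaD implementation; the address, credentials, and payload shape are assumptions made for illustration.

```python
# Minimal sketch, assuming a RESTCONF-style management interface on the router.
# The URL path, credentials, and payload are placeholders, not values from the paper.
import requests

DEVICE = "https://192.0.2.10"          # hypothetical management address
AUTH = ("admin", "admin")              # placeholder credentials
HEADERS = {
    "Content-Type": "application/yang-data+json",
    "Accept": "application/yang-data+json",
}

def set_hostname(new_hostname: str) -> int:
    """Send one idempotent PUT that sets the device hostname; return the HTTP status."""
    url = f"{DEVICE}/restconf/data/Cisco-IOS-XE-native:native/hostname"
    payload = {"Cisco-IOS-XE-native:hostname": new_hostname}
    response = requests.put(url, json=payload, headers=HEADERS,
                            auth=AUTH, verify=False, timeout=10)
    return response.status_code

if __name__ == "__main__":
    print(set_hostname("edge-router-01"))
```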


2020 ◽  
Vol 9 (4) ◽  
pp. 394-402
Author(s):  
Helmy ◽  
Athadhia Febyana ◽  
Agung Al Rasyid ◽  
Arif Nursyahid ◽  
Thomas Agung Setyawan ◽  
...  

Aquaponics combines aquaculture with hydroponics. One hydroponic technique is the drip system. Parameters that must be watched in aquaponic cultivation include the acidity of the nutrient solution (pH), water temperature, and nutrient concentration as indicated by the total dissolved solids (TDS) in the water. Plant nutrients come from fish waste, which contains nitrogen. Real-time monitoring of pH, TDS, and temperature, together with control of soil moisture around the plants, is therefore required so that the plants do not lack nutrients. The control process uses a Representational State Transfer Application Programming Interface (REST API) to receive threshold values set by the aquaponics farmer through a website and to send soil-moisture readings and fish-pond parameters (pH, temperature, and TDS) to the server. Data-loss and delay testing of this monitoring and control system is needed to determine the reliability of the device in sending and receiving data. In addition, an e-mail notification must be sent to the farmer whenever the soil-moisture value falls below the threshold. The test results show that the system can send such e-mail notifications; the average node-to-gateway monitoring delay was 6.01 seconds, the average gateway-to-server monitoring delay was 10.02 seconds, and the average server-to-gateway control delay was 92.55 seconds.
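A small sketch of the node-to-server exchange described above: post one set of sensor readings and pull the farmer-defined threshold back over REST. It is not the authors' firmware or server code; the base URL, paths, and JSON field names are assumptions.

```python
# Illustrative sketch, assuming a hypothetical REST server for the aquaponics system.
import requests

BASE_URL = "http://aquaponics.example.com/api"   # placeholder server address

def push_readings(soil_moisture: float, ph: float, temperature: float, tds: float) -> bool:
    """POST one set of readings; return True if the server accepted them."""
    payload = {"soil_moisture": soil_moisture, "ph": ph,
               "temperature": temperature, "tds": tds}
    response = requests.post(f"{BASE_URL}/readings", json=payload, timeout=5)
    return response.ok

def fetch_moisture_threshold() -> float:
    """GET the soil-moisture threshold the farmer configured on the website."""
    response = requests.get(f"{BASE_URL}/thresholds/soil_moisture", timeout=5)
    response.raise_for_status()
    return float(response.json()["value"])

if __name__ == "__main__":
    moisture = 41.5
    if push_readings(soil_moisture=moisture, ph=6.8, temperature=27.3, tds=850.0):
        threshold = fetch_moisture_threshold()
        print("below threshold" if moisture < threshold else "ok")
```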


2016 ◽  
Vol 23 (5) ◽  
pp. 899-908 ◽  
Author(s):  
Joshua C Mandel ◽  
David A Kreda ◽  
Kenneth D Mandl ◽  
Isaac S Kohane ◽  
Rachel B Ramoni

Abstract Objective In early 2010, Harvard Medical School and Boston Children’s Hospital began an interoperability project with the distinctive goal of developing a platform to enable medical applications to be written once and run unmodified across different healthcare IT systems. The project was called Substitutable Medical Applications and Reusable Technologies (SMART). Methods We adopted contemporary web standards for application programming interface transport, authorization, and user interface, and standard medical terminologies for coded data. In our initial design, we created our own openly licensed clinical data models to enforce consistency and simplicity. During the second half of 2013, we updated SMART to take advantage of the clinical data models and the application-programming interface described in a new, openly licensed Health Level Seven draft standard called Fast Health Interoperability Resources (FHIR). Signaling our adoption of the emerging FHIR standard, we called the new platform SMART on FHIR. Results We introduced the SMART on FHIR platform with a demonstration that included several commercial healthcare IT vendors and app developers showcasing prototypes at the Health Information Management Systems Society conference in February 2014. This established the feasibility of SMART on FHIR, while highlighting the need for commonly accepted pragmatic constraints on the base FHIR specification. Conclusion In this paper, we describe the creation of SMART on FHIR, relate the experience of the vendors and developers who built SMART on FHIR prototypes, and discuss some challenges in going from early industry prototyping to industry-wide production use.
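A hedged sketch of the pattern SMART on FHIR enables: once an app has completed its standards-based authorization, it reads clinical data through the FHIR REST API the same way on any conforming server. This is not code from the SMART project; the server URL, patient identifier, and token are placeholders.

```python
# Minimal sketch, assuming a FHIR server and an access token already obtained through
# the SMART authorization flow. All concrete values are placeholders.
import requests

FHIR_BASE = "https://fhir.example.org"     # hypothetical FHIR server
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"        # placeholder bearer token

def fetch_patient_observations(patient_id: str) -> list:
    """Return the patient's Observation resources from the standard FHIR search API."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()                       # a FHIR Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for obs in fetch_patient_observations("example-patient-id"):
        print(obs.get("code", {}).get("text"), obs.get("valueQuantity"))
```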


2015 ◽  
Vol 87 (11-12) ◽  
pp. 1127-1137
Author(s):  
Stuart J. Chalk

Abstract This paper details an approach to re-purposing scientific data as presented on a web page, for the sole purpose of making the data more available for searching and integration into other websites. Data ‘scraping’ is used to extract metadata from a set of pages on the National Institute of Standards and Technology (NIST) website and to clean, organize, and store the metadata in a MySQL database. The metadata is then used to create a new website at the author's institution, built with the CakePHP framework and exposing a representational state transfer (REST) style application program interface (API). The processes used for website analysis, schema development, database construction, metadata scraping, REST API development, and remote data integration are discussed. Lessons learned, along with tips on how to get the most out of the process, are also included.
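A brief sketch of the scrape-clean-store pattern the abstract describes, not the author's actual pipeline: fetch a page, pull a few labelled fields out of the HTML, and return a record ready for database insertion. The URL and tag selectors are assumptions.

```python
# Illustrative scraping sketch, assuming pages with simple <th>/<td> metadata tables.
import requests
from bs4 import BeautifulSoup

def scrape_metadata(url: str) -> dict:
    """Extract the page title and any header/value table rows into a flat record."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    record = {"source_url": url,
              "title": soup.title.get_text(strip=True) if soup.title else None}
    for row in soup.select("table tr"):
        header, cell = row.find("th"), row.find("td")
        if header and cell:
            record[header.get_text(strip=True)] = cell.get_text(strip=True)
    return record

if __name__ == "__main__":
    # Hypothetical page; in the paper the sources are NIST web pages.
    print(scrape_metadata("https://example.org/spectrum/12345"))
```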


2021 ◽  
Vol 8 (2) ◽  
pp. 180-185
Author(s):  
Anna Tolwinska

This article aims to explain the key metadata elements listed in Participation Reports, why it’s important to check them regularly, and how Crossref members can improve their scores. Crossref members register a lot of metadata in Crossref. That metadata is machine-readable, standardized, and then shared across discovery services and author tools. This is important because richer metadata makes content more discoverable and useful to the scholarly community. It’s not always easy to know what metadata Crossref members register in Crossref. This is why Crossref created an easy-to-use tool called Participation Reports to show editors and researchers the key metadata elements Crossref members register to make their content more useful. The key metadata elements include references and whether they are set to open, ORCID iDs, funding information, Crossmark metadata, licenses, full-text URLs for text mining, and Similarity Check indexing, as well as abstracts. ROR IDs (Research Organization Registry identifiers), which identify institutions, will be added in the future. This data was always available through the Crossref REST API (Representational State Transfer Application Programming Interface) but is now visualized in Participation Reports. To improve scores, editors should encourage authors to submit ORCIDs in their manuscripts, and publishers should register as much metadata as possible to help drive research further.
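The same coverage questions that Participation Reports visualize can be asked directly of the public Crossref REST API at https://api.crossref.org. The sketch below samples a member's registered works and counts a few of the key elements mentioned above; it is not Crossref's own tooling, and the member ID is a placeholder.

```python
# Hedged sketch against the public Crossref REST API; member ID is a placeholder.
import requests

def metadata_coverage(member_id: str, sample_size: int = 100) -> dict:
    """Fetch a sample of a member's works and count a few key metadata elements."""
    response = requests.get(
        f"https://api.crossref.org/members/{member_id}/works",
        params={"rows": sample_size},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return {
        "works_sampled": len(items),
        "with_references": sum(1 for w in items if "reference" in w),
        "with_abstracts": sum(1 for w in items if "abstract" in w),
        "with_orcids": sum(1 for w in items
                           if any("ORCID" in a for a in w.get("author", []))),
    }

if __name__ == "__main__":
    print(metadata_coverage("98"))   # placeholder member ID
```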


2020 ◽  
Vol 1 (4) ◽  
pp. 127-132
Author(s):  
Irfan Kurniawan ◽  
Humaira ◽  
Fazrol Rozi

Previously, the sale and offering of services was done only in person, which tends to make consumers less interested because they must spend effort travelling to wherever the service provider operates. To address this problem, a service-transaction system was developed with an Application Programming Interface (API) as the backend, implemented in an Android mobile application as the frontend. This final project produces an API-based system with a REST architecture on the backend to simplify the service-transaction process, applied in an Android application as the user interface.
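A minimal sketch of the kind of REST backend described above. The abstract does not name a framework or its endpoints, so the Flask routes, fields, and in-memory storage here are purely illustrative assumptions.

```python
# Hypothetical service-transaction backend sketch; not the project's actual code.
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = []   # in-memory stand-in for a real database

@app.post("/api/orders")
def create_order():
    """The Android frontend submits a service order as JSON."""
    data = request.get_json(force=True)
    order = {"id": len(orders) + 1,
             "service": data.get("service"),
             "customer": data.get("customer"),
             "status": "pending"}
    orders.append(order)
    return jsonify(order), 201

@app.get("/api/orders/<int:order_id>")
def get_order(order_id: int):
    """The frontend polls the status of an existing order."""
    match = next((o for o in orders if o["id"] == order_id), None)
    return (jsonify(match), 200) if match else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(debug=True)
```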


Data Science ◽  
2021 ◽  
pp. 1-15
Author(s):  
Jörg Schad ◽  
Rajiv Sambasivan ◽  
Christopher Woodward

Experimenting with different models, documenting results and findings, and repeating these tasks are day-to-day activities for machine learning engineers and data scientists. There is a need to keep control of the machine-learning pipeline and its metadata. This allows users to iterate quickly through experiments and retrieve key findings and observations from historical activity. This is the need that Arangopipe serves. Arangopipe is an open-source tool that provides a data model capturing the essential components of any machine learning life cycle. Arangopipe provides an application programming interface that permits machine-learning engineers to record the details of the salient steps in building their machine learning models. The components of the data model and an overview of the application programming interface are presented. Illustrative examples of basic and advanced machine learning workflows are provided. Arangopipe is useful not only for users developing machine learning models but also for those deploying and maintaining them.
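A generic sketch of the idea rather than the Arangopipe API itself: gather the salient components of one training run (dataset, features, parameters, metrics) into a single metadata record that can be stored and queried later. All names and values below are illustrative.

```python
# Generic lineage-record sketch; not Arangopipe's API or data model.
import json
import time

def record_run(dataset: str, features: list, params: dict, metrics: dict) -> dict:
    """Assemble one metadata record describing a model-building run."""
    return {
        "run_id": f"run-{int(time.time())}",
        "dataset": dataset,
        "featureset": features,
        "model_params": params,
        "metrics": metrics,
    }

if __name__ == "__main__":
    run = record_run(
        dataset="california_housing.csv",
        features=["median_income", "house_age", "rooms_per_household"],
        params={"model": "LinearRegression", "fit_intercept": True},
        metrics={"rmse": 0.72, "r2": 0.61},
    )
    # In Arangopipe such components are kept in a graph database; here the record is
    # simply serialized so it can be retrieved and compared later.
    print(json.dumps(run, indent=2))
```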


Paleobiology ◽  
2015 ◽  
Vol 42 (1) ◽  
pp. 1-7 ◽  
Author(s):  
Shanan E. Peters ◽  
Michael McClennen

Abstract The Paleobiology Database (PBDB; https://paleobiodb.org) consists of geographically and temporally explicit, taxonomically identified fossil occurrence data. The taxonomy utilized by the PBDB is not static, but is instead dynamically generated using an algorithm applied to separately managed taxonomic authority and opinion data. The PBDB owes its existence to many individuals, some of whom have entered more than 1.26 million fossil occurrences and over 570,000 taxonomic opinions, and some of whom have developed and maintained supporting infrastructure and analysis tools. Here, we provide an overview of the data model currently used by the PBDB and then briefly describe how this model is exposed via an Application Programming Interface (API). Our objective is to outline how PBDB data can now be accessed within individual scientific workflows, used to develop independently managed educational and scientific applications, and accessed to forge dynamic, near real-time connections to other data resources.
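A sketch of pulling occurrence records into a scientific workflow through the public data service at paleobiodb.org. The general pattern is real, but treat the exact parameter and field names used here as assumptions to verify against the API documentation.

```python
# Hedged sketch of querying the PBDB data service for fossil occurrences.
import requests

def fossil_occurrences(taxon: str, interval: str) -> list:
    """Fetch occurrence records for a taxon within a geologic time interval."""
    response = requests.get(
        "https://paleobiodb.org/data1.2/occs/list.json",
        params={"base_name": taxon, "interval": interval, "vocab": "pbdb"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("records", [])

if __name__ == "__main__":
    records = fossil_occurrences("Canidae", "Miocene")
    print(f"{len(records)} occurrences returned")
    for rec in records[:5]:
        print(rec.get("accepted_name"), rec.get("max_ma"), rec.get("min_ma"))
```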


Competitive ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. 87-94
Author(s):  
Supono Syafiq ◽  
Sari Armiati

Data is an essential asset in the era of information-technology transformation, and communication is no longer limited by the kind of device used, so information can be accessed easily. The Institute for Research and Community Service (LPPM) of Politeknik Pos Indonesia already has a web-based repository for managing research, community-service, publication, and intellectual-property (HaKI) data. Access to the APTIMAS data is currently centralized and is not possible from any application other than APTIMAS itself, so a middleware or web service is needed that lets other applications access APTIMAS data, for example for mobile-application development, dashboards in other applications, and other data needs. A web service with a Representational State Transfer (REST) Application Programming Interface (API) architecture was therefore built to act as a bridge that provides data-communication services. With this web-service middleware in place, APTIMAS, as the provider of research, community-service, publication, and HaKI data within Politeknik Pos Indonesia, is expected to serve as the reference data source for developing other applications and meeting data needs across the Politeknik Pos Indonesia campus, without direct access to the APTIMAS database.
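A short sketch of how another campus application might consume the kind of read-only middleware endpoint described above instead of reaching into the APTIMAS database directly. The base URL, path, and field names are assumptions, not the system's actual interface.

```python
# Illustrative consumer of a hypothetical APTIMAS middleware endpoint.
import requests

MIDDLEWARE_BASE = "http://aptimas-api.example.ac.id/api/v1"   # placeholder

def list_publications(year: int) -> list:
    """Fetch the publication records the middleware exposes for one year."""
    response = requests.get(f"{MIDDLEWARE_BASE}/publications",
                            params={"year": year}, timeout=10)
    response.raise_for_status()
    return response.json().get("data", [])

if __name__ == "__main__":
    for pub in list_publications(2021):
        print(pub.get("title"), "-", pub.get("author"))
```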

