Representational State Transfer
Recently Published Documents


TOTAL DOCUMENTS

101
(FIVE YEARS 46)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Vol 4 (3) ◽  
pp. 153-176
Author(s):  
Yongtak Park ◽  
Doyoung Kim

This study designs a reference model of a Defense REST API server, based on the representational state transfer (REST) architectural style, to present efficient, stable, and sustainable technical criteria for real-time service integration of defense information systems in Korea. Its purpose is to provide evidence that can be stipulated in the Korean Defense Ministry's instructions and regulations, such as the Defense Interoperability Management Directive and the Interoperability Guide, and to support the development of national defense interworking technology and interoperability. As defense information systems were subdivided and developed by the army, navy, and air force, or by business function, interworking between them has become one of the most important concerns. However, despite the need for advanced service integration and interworking, the various interconnection service modules based on enterprise application integration (EAI), EAI hubs, and spokes were developed only to meet local requirements (simple data transmission), without specific criteria for each network or information system. As a result, most interconnection modules currently in operation lack a coherent technical basis and cannot meet the military's growing demands for real-time interconnection and service integration. This study therefore addresses these problems by integrating the defense information systems into one service, presenting a reference model of the defense REST API server that meets various real-time interworking requirements, analyzing its technical basis, and pursuing a model that fits military reality.


2021 ◽  
Vol 7 (2) ◽  
pp. 108-112
Author(s):  
Syahrul Usman ◽  
Jeffry Jeffry ◽  
Firman Aziz

Since the World Health Organization (WHO) declared it a global pandemic, the Coronavirus Disease (COVID-19) outbreak has loomed over every corner of the world. WHO has established various standard procedures to break the chain of transmission. Through Regional Secretary circular No. 800/1919/VI/BKPSDM/2020 of 4 June 2020 concerning the working arrangements of civil servants (ASN) under the new normal, the Bone Regency government requires employee attendance to be recorded manually rather than with fingerprint attendance machines. This naturally affects the performance records of each civil servant, since attendance data is already linked to the e-kinerja application deployed across Bone Regency. The aim of this research is to build an Android-based online attendance application as an alternative way to record attendance, using a web service with the Representational State Transfer (REST) data-communication method over the HTTP protocol, the JavaScript Object Notation (JSON) format, and Java as the mobile application programming language. The result of this research is a mobile attendance application whose web service performance has been tested with Apache JMeter to ensure the application is ready for simultaneous use by many civil servants.
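The REST-plus-JSON exchange the abstract describes can be sketched as a client building an HTTP POST that carries an attendance record. The endpoint, field names, and values below are illustrative assumptions only; the paper does not publish its API.

```python
import json
from urllib import request

# Hypothetical base URL -- a placeholder, not the real service.
BASE_URL = "https://example.local/api"

def build_checkin_request(employee_id: str, timestamp: str, lat: float, lng: float):
    """Build an HTTP POST carrying a JSON attendance record, as an Android
    client might send to a REST web service."""
    payload = json.dumps({
        "employee_id": employee_id,          # hypothetical field names
        "timestamp": timestamp,
        "location": {"lat": lat, "lng": lng},
    }).encode("utf-8")
    return request.Request(
        BASE_URL + "/attendance",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_checkin_request("198701012010011001", "2020-06-04T07:55:00+08:00",
                            -4.54, 120.33)
print(req.method, req.full_url)  # POST https://example.local/api/attendance
```

On Android the same record would typically be serialized with a JSON library and sent via an HTTP client; the essential shape of the exchange is the same.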


2021 ◽  
Vol 5 (2) ◽  
pp. 252-260
Author(s):  
Ariyan Zubaidi ◽  
Rhomy Idris Sardi ◽  
Andy Hidayat Jatmika ◽  
...  

Data confidentiality and resource limitations are challenges for the Internet of Things. Cryptography can provide good security for IoT systems, but it requires an effective encryption algorithm that does not demand many resources. The purpose of this study is to secure an IoT system by implementing an algorithm that succeeds in maintaining the confidentiality of transmitted data. This research uses an experimental approach, creating an IoT system for agriculture and adding an encryption algorithm to it. The system uses the NodeMCU as its microcontroller. Because the NodeMCU has limited resources, the algorithm implemented on it must be efficient. One algorithm with good performance in a desktop computing environment is the Advanced Encryption Standard (AES). The algorithm is tested in an IoT computing environment with a data exchange architecture using a REST (Representational State Transfer) web service, resulting in an agricultural IoT system with cryptography implemented in it. In the tests carried out, encrypting 128 and 256 bits of plain text took 266.31 and 274.31 microseconds, while the memory used was 16% and 17% of total memory, respectively. This shows that the encryption time is fast and the memory usage relatively small.
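The microsecond-level timing methodology can be illustrated in a few lines. Python's standard library has no AES, so this sketch times a stand-in XOR stream "cipher" purely to show how per-plaintext-size encryption latency could be measured; it is not secure and is not the AES implementation the study evaluates.

```python
import time

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy XOR cipher standing in for AES -- illustration only, NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

def time_encryption_us(plaintext: bytes, key: bytes) -> float:
    """Measure one encryption call and return the elapsed time in microseconds."""
    start = time.perf_counter()
    xor_encrypt(plaintext, key)
    return (time.perf_counter() - start) * 1_000_000

key = bytes(16)                  # 128-bit key placeholder
for bits in (128, 256):          # the plaintext sizes measured in the paper
    pt = bytes(bits // 8)        # 16- or 32-byte plaintext
    print(f"{bits}-bit plaintext: {time_encryption_us(pt, key):.2f} us")
```

On the NodeMCU the measurement would use the microcontroller's microsecond timer around the real AES call, but the harness structure is the same.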


Competitive ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. 87-94
Author(s):  
Supono Syafiq ◽  
Sari Armiati

Data has become a vital asset in the era of information-technology transformation; communication is no longer limited by differences between the devices used, making information easily accessible. The Institute for Research and Community Service (LPPM) of Politeknik Pos Indonesia already operates a web-based data repository for managing research, community service, publication, and intellectual property (HaKI) data. At present, access to APTIMAS data is centralized and cannot be reached by any application other than APTIMAS itself, so a middleware, or web service, is needed that lets other applications access APTIMAS data, for example for mobile application development, dashboards in other applications, and other data needs. A web service was therefore built with a Representational State Transfer (REST) Application Programming Interface (API) architecture, serving as a bridge that provides data-communication services. With this middleware web service in place, APTIMAS, as the provider of research, community service, publication, and HaKI data within Politeknik Pos Indonesia, is expected to serve as the reference data source for the development of other applications and for data needs across the Politeknik Pos Indonesia campus, without direct access to the APTIMAS database.
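A middleware of this kind can be sketched as a thin routing layer between HTTP paths and data lookups, so that client applications never touch the database directly. The resource names and records below are hypothetical, not the actual APTIMAS schema.

```python
import json

# Stand-in for the APTIMAS database -- hypothetical resources and records.
FAKE_DB = {
    "penelitian": [{"id": 1, "title": "Contoh Penelitian", "year": 2021}],
    "publikasi":  [{"id": 7, "title": "Contoh Publikasi", "year": 2020}],
}

def handle_get(path: str) -> tuple[int, str]:
    """Map a GET on /api/<resource> to (HTTP status, JSON body).
    Clients see only this REST surface, never the database itself."""
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "api" and parts[1] in FAKE_DB:
        return 200, json.dumps(FAKE_DB[parts[1]])
    return 404, json.dumps({"error": "resource not found"})

status, body = handle_get("/api/penelitian")
print(status, body)
```

In production the routing function would be mounted in a web framework and the lookup would query the real database, but the bridge pattern is the same.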


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1227
Author(s):  
Emmanuel Baldwin Mbaya ◽  
Babatunde Alao ◽  
Philip Ewejobi ◽  
Innocent Nwokolo ◽  
Victoria Oguntosin ◽  
...  

Background: In this work, a COVID-19 Application Programming Interface (API) was built using the Representational State Transfer (REST) architecture; it is designed to fetch data daily from the Nigeria Centre for Disease Control (NCDC) website. Methods: The API is developed with the ASP.NET Core Web API framework in the C# programming language, using Visual Studio 2019 as the Integrated Development Environment (IDE). The application is deployed to Microsoft Azure as the cloud hosting platform, and Hangfire schedules a job that runs daily at 12:30 pm (GMT+1) to fetch new data from the NCDC website and load it into our database. Various API endpoints are defined to interact with the system and retrieve data as needed: data can be fetched for a single state by name, for all states on a particular day, or over a range of days. Results: The data showed that Lagos and the Abuja FCT were the hardest-hit states in terms of total confirmed cases, while Lagos and Edo states had the highest death tolls, with 465 and 186 deaths respectively as of August 2020. This analysis, and many more, can easily be made with this API, which warehouses all COVID-19 data published by the NCDC since the first recorded case on February 29, 2020. The system was tested on the BlazeMeter platform, averaging 11 hits/s with a response time of 2,905 milliseconds. Conclusions: The extension of NaijaCovidAPI over existing COVID-19 APIs for Nigeria is the access and retrieval of historical data. Our contribution to the body of knowledge is the creation of a data hub for Nigeria's COVID-19 incidence from February 29, 2020, to date.
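The three access patterns the abstract names (one state by name, all states on a day, a date range) translate directly into REST request URLs. The exact routes are not given in the abstract, so the base URL and paths below are illustrative assumptions, not the published NaijaCovidAPI routes.

```python
from urllib.parse import urlencode

# Placeholder host -- not the real deployment URL.
BASE = "https://example-naijacovid.azurewebsites.net/api"

def state_by_name(state: str) -> str:
    """Cases for a single state, identified by name (hypothetical route)."""
    return f"{BASE}/states/{state}"

def states_by_range(start: str, end: str) -> str:
    """All states over a date range, as query parameters (hypothetical route)."""
    return f"{BASE}/states?{urlencode({'from': start, 'to': end})}"

print(state_by_name("Lagos"))
print(states_by_range("2020-02-29", "2020-08-31"))
```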


Author(s):  
Alexandros Ioannidis-Pantopikos ◽  
Donat Agosti

In the landscape of general-purpose repositories, Zenodo was built at the European Laboratory for Particle Physics' (CERN) data center to facilitate the sharing and preservation of the long tail of research across all disciplines and scientific domains. Despite Zenodo's long tradition of making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable), challenges remain in applying these principles effectively to the needs of specific research domains. Plazi's biodiversity taxonomic literature processing pipeline liberates data from publications, making it FAIR via extensive metadata, the minting of a DataCite Digital Object Identifier (DOI), a licence, and both human- and machine-readable output provided by Zenodo, and accessible via the Biodiversity Literature Repository community at Zenodo. The deposits (e.g., taxonomic treatments, figures) are an example of how local networks of information can be formally linked to explicit resources in the broader context of other platforms like GBIF (Global Biodiversity Information Facility). In the context of biodiversity taxonomic literature data workflows, a general-purpose repository's traditional submission approach is not enough to preserve rich metadata and to capture highly interlinked objects, such as taxonomic treatments and digital specimens. As a prerequisite to serving these use cases and ensuring that the artifacts remain FAIR, Zenodo introduced the concept of custom metadata, which allows enhancing submissions such as figures or taxonomic treatments (see as an example the treatment of Eurygyrus peloponnesius) with custom keywords based on terms from common biodiversity vocabularies like Darwin Core and Audubon Core, each with an explicit link to the respective vocabulary term.
The aforementioned pipelines and features are designed to be served first and foremost using public Representational State Transfer Application Programming Interfaces (REST APIs) and open web technologies like webhooks. This approach allows researchers and platforms to integrate existing and new automated workflows into Zenodo and thus empowers research communities to create self-sustained cross-platform ecosystems. The BiCIKL project (Biodiversity Community Integrated Knowledge Library) exemplifies how repositories and tools can become building blocks for broader adoption of the FAIR principles. Starting with the literature processing pipeline above, the underlying concepts and the resulting FAIR data will be explained, with a focus on the custom metadata used to enhance the deposits.


2021 ◽  
Vol 8 (2) ◽  
pp. 180-185
Author(s):  
Anna Tolwinska

This article aims to explain the key metadata elements listed in Participation Reports, why it's important to check them regularly, and how Crossref members can improve their scores. Crossref members register a lot of metadata in Crossref. That metadata is machine-readable, standardized, and shared across discovery services and author tools. This matters because richer metadata makes content more discoverable and useful to the scholarly community. It's not always easy to know what metadata Crossref members register in Crossref. This is why Crossref created an easy-to-use tool called Participation Reports, which shows editors and researchers the key metadata elements Crossref members register to make their content more useful. The key metadata elements include references and whether they are set to open, ORCID iDs, funding information, Crossmark metadata, licenses, full-text URLs for text mining, Similarity Check indexing, and abstracts. ROR IDs (Research Organization Registry identifiers), which identify institutions, will be added in the future. This data was always available through Crossref's REST API (Representational State Transfer Application Programming Interface) but is now visualized in Participation Reports. To improve scores, editors should encourage authors to submit ORCID iDs in their manuscripts, and publishers should register as much metadata as possible to help drive research further.
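The Crossref REST API behind Participation Reports is public, so a member's registered metadata can be inspected directly. A minimal sketch that only constructs the request URLs (the member ID 1234 is a placeholder):

```python
from urllib.parse import urlencode

API = "https://api.crossref.org"  # public Crossref REST API

def member_url(member_id: int) -> str:
    """Member record, which includes coverage of key metadata elements such as
    references, ORCID iDs, abstracts, and licenses."""
    return f"{API}/members/{member_id}"

def works_with_abstracts_url(member_id: int, rows: int = 5) -> str:
    """Sample of a member's works filtered to those registered with abstracts."""
    query = urlencode({"filter": "has-abstract:true", "rows": rows})
    return f"{API}/members/{member_id}/works?{query}"

print(member_url(1234))
print(works_with_abstracts_url(1234))
```

Fetching these URLs with any HTTP client returns JSON; the member record's coverage fields are the raw numbers that Participation Reports visualizes.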


Author(s):  
I Kadek Dendy Senapartha

Single Sign-On (SSO) is a technology that supports user convenience in accessing a system. With SSO, a user only needs to authenticate once to gain access to a system. OAuth 2.0 is one of the protocols that can be implemented in an SSO system. Many Application Service Providers (ASPs) now support the OAuth 2.0 protocol, making it easier to develop a more standardized SSO system. Google Identity is a service provided by Google that can be used to build SSO systems on the OAuth 2.0 protocol. Applying the request and response methods defined in the OAuth 2.0 protocol specification, together with a Representational State Transfer (REST) architecture, can also make SSO systems more secure. An agile development methodology based on the Scrum framework was used in the implementation to increase speed and flexibility. The results of this research show that using Google Identity, REST, and OAuth 2.0 can provide easy user access, guarantee access validity, accelerate client-server data exchange, and simplify the SSO implementation process.
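The first request of the OAuth 2.0 authorization-code flow against Google Identity is a redirect to Google's consent screen. A minimal sketch of building that URL; the client ID and redirect URI are placeholders for values issued in the Google Cloud console.

```python
import secrets
from urllib.parse import urlencode

GOOGLE_AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id: str, redirect_uri: str) -> tuple[str, str]:
    """Return (authorization URL, state). The state token is checked on the
    callback to guard against CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",         # authorization-code grant
        "scope": "openid email profile", # OpenID Connect identity scopes
        "state": state,
    }
    return f"{GOOGLE_AUTH_ENDPOINT}?{urlencode(params)}", state

url, state = build_auth_url("my-app.apps.googleusercontent.com",
                            "https://example.local/callback")
print(url)
```

After the user consents, Google redirects back with a short-lived `code` that the server exchanges for tokens over a REST call; the `state` returned must match the one generated here.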


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5375
Author(s):  
Ovidiu Baniaș ◽  
Diana Florea ◽  
Robert Gyalai ◽  
Daniel-Ioan Curiac

Nowadays, REpresentational State Transfer Application Programming Interfaces (REST APIs) are widely used in web applications, and hence a plethora of test cases are developed to validate API calls. We propose a solution that automates the generation of test cases for REST APIs based on their specifications. In our approach, beyond fully automatic generation, we provide an option for the user to influence the test case generation process. By adding user interaction, we aim to augment the automatic generation of API test cases with human testing expertise and specific context. We use the latest version of OpenAPI 3.x and a wide range of coverage metrics to analyze the functionality and performance of the generated test cases, and non-functional metrics to analyze the performance of the APIs. The experiments proved the effectiveness and practicability of our method.
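The core idea of specification-driven generation can be sketched by walking an OpenAPI 3.x document and emitting one nominal test case per operation. This is a toy illustration of the technique, not the generation algorithm from the paper.

```python
# Minimal OpenAPI 3.x fragment used as input (illustrative only).
SPEC = {
    "openapi": "3.0.3",
    "paths": {
        "/pets": {
            "get": {"parameters": [{"name": "limit", "in": "query",
                                    "schema": {"type": "integer"}}]},
            "post": {},
        },
        "/pets/{petId}": {
            "get": {"parameters": [{"name": "petId", "in": "path",
                                    "schema": {"type": "string"}}]},
        },
    },
}

SAMPLE = {"integer": 1, "string": "abc", "boolean": True}  # nominal values

def generate_test_cases(spec: dict) -> list[dict]:
    """Emit one nominal test case (method, path, parameter values) per
    operation declared in the spec's paths object."""
    cases = []
    for path, ops in spec["paths"].items():
        for method, op in ops.items():
            params = {p["name"]: SAMPLE.get(p["schema"]["type"])
                      for p in op.get("parameters", [])}
            cases.append({"method": method.upper(), "path": path,
                          "params": params})
    return cases

for case in generate_test_cases(SPEC):
    print(case)
```

A real generator would also draw values from schema constraints, cover error classes, and let the user override the sampled values, which is where the paper's interactive step fits in.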


Author(s):  
Sarah Hunt ◽  
Benjamin Moore ◽  
M. Amode ◽  
Irina Armean ◽  
Diana Lemos ◽  
...  

The Ensembl Variant Effect Predictor (VEP) is a freely available, open source tool for the annotation and filtering of genomic variants. It predicts variant molecular consequence using the Ensembl/GENCODE or RefSeq gene sets. It also reports phenotype associations from databases such as ClinVar, allele frequencies from studies including gnomAD, and predictions of deleteriousness from tools such as SIFT and CADD. Ensembl VEP includes filtering options to customise variant prioritisation. It is well supported and updated roughly quarterly to incorporate the latest gene, variant and phenotype association information. Ensembl VEP analysis can be performed using a highly configurable, extensible command-line tool, a Representational State Transfer (REST) application programming interface (API) and a user-friendly web interface. These access methods are designed to suit different levels of bioinformatics experience and meet different needs in terms of data size, visualisation and flexibility. In this tutorial, we will describe performing variant annotation using the Ensembl VEP web tool, which enables sophisticated analysis through a simple interface.
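The REST access method mentioned above is served from rest.ensembl.org. A minimal sketch that only constructs the URL for annotating a single variant given in HGVS notation; the endpoint follows the Ensembl REST documentation, and the variant string is an arbitrary example.

```python
from urllib.parse import quote

ENSEMBL_REST = "https://rest.ensembl.org"

def vep_hgvs_url(species: str, hgvs: str) -> str:
    """Build a VEP request URL for one HGVS-notated variant, asking for a
    JSON response via the content-type query parameter."""
    return (f"{ENSEMBL_REST}/vep/{species}/hgvs/{quote(hgvs)}"
            "?content-type=application/json")

print(vep_hgvs_url("human", "ENST00000366667:c.803C>T"))
```

Fetching the URL with any HTTP client returns the predicted molecular consequences as JSON; the web tool and command-line tool expose the same annotation through different interfaces.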

