A formal pattern of information system design

2021 ◽  
Vol 2094 (3) ◽  
pp. 032045
Author(s):  
A Y Unger

Abstract A new design pattern intended for distributed cloud-based information systems is proposed. The pattern is based on the traditional client-server architecture. The server side is divided into three principal components: data storage, application server, and cache server. Each component can host parts of several independent information systems, thus realizing a shared-resource approach. A strategy for separating competencies between the client and the server is proposed: the client side is responsible for application logic, while the server side is responsible for data storage consistency and data access control. Data protection is ensured by two complementary approaches, at the entity level and at the transaction level. The application programming interface for data access is presented at the level of identified transaction descriptors.
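As a rough illustration of what an interface "at the level of identified transaction descriptors" might look like, the sketch below models a client that holds only opaque descriptors while the application server enforces entity- and transaction-level protection. All names (TransactionDescriptor, DataAccessClient, and the method set) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a transaction-descriptor-based data access API.
# The client never touches storage directly; it only passes descriptors
# to the application server, which enforces access control.
import uuid
from dataclasses import dataclass, field

@dataclass
class TransactionDescriptor:
    """Identifies one server-side transaction; the client holds only this token."""
    txn_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class DataAccessClient:
    def __init__(self, server_url: str):
        self.server_url = server_url   # application-server endpoint

    def open_transaction(self) -> TransactionDescriptor:
        # The server would validate access rights and register the transaction.
        return TransactionDescriptor()

    def read(self, txn: TransactionDescriptor, entity: str, key: str):
        # Entity-level protection: the server checks rights per entity.
        raise NotImplementedError("server call goes here")

    def commit(self, txn: TransactionDescriptor):
        # Transaction-level protection: consistency checked before applying writes.
        raise NotImplementedError("server call goes here")
```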

2020 ◽  
Vol 39 (3) ◽  
pp. 3297-3314
Author(s):  
Keshav Sinha ◽  
Annu Priya ◽  
Partha Paul

The cloud has become one of the most in-demand services for data storage; on the other hand, securing that data is one of the most challenging tasks for a Cloud Service Provider (CSP). Cryptography is one way of securing stored data. It is not a new approach; rather, the efficient utilization of cryptographic algorithms is what is greatly needed. In this work, we propose a Secure Hidden Layer (SHL) and an Application Programming Interface (API) for data encryption. The SHL consists of two major modules, (i) a Key Management Server (KMS) and (ii) a Share Holder Server (SHS), which are used for storing and sharing cryptographic keys. For this purpose, we propose a server-side encryption algorithm based on asymmetric primitives (RSA and CRT) that provides end-to-end security for multimedia data. Experimental results on text and video show that file size is not significantly affected by encryption and that files are stored effectively at the Cloud Storage Server (CSS). Ciphertext size, encryption time, and throughput are the parameters considered in the performance evaluation of the proposed encryption technique.
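For readers unfamiliar with the RSA-plus-CRT combination the abstract names, the sketch below shows textbook RSA encryption with decryption accelerated by the Chinese Remainder Theorem. It uses toy primes for readability and is not the authors' actual SHL implementation.

```python
# Textbook RSA with CRT-accelerated decryption (Garner's formula).
# Toy key sizes for illustration only; real deployments use >= 2048-bit moduli.
p, q = 61, 53                        # toy primes
n = p * q
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt_crt(c: int) -> int:
    # CRT splits one large modular exponentiation into two smaller ones,
    # roughly quartering the decryption cost for balanced primes.
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)
    m1, m2 = pow(c, dp, p), pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q

assert decrypt_crt(encrypt(42)) == 42
```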


2009 ◽  
pp. 1204-1225
Author(s):  
Wen-Chen Hu ◽  
Chyuan-Huei Thomas Yang ◽  
Jyh-haw Yeh ◽  
Weihong Hu

The emergence of wireless and mobile networks has made possible the introduction of electronic commerce to a new application and research subject: mobile commerce. Understanding or constructing a mobile or an electronic commerce system is an arduous task because the system involves a wide variety of disciplines and technologies, and those technologies are constantly changing. To facilitate understanding and construction of such a system, this article divides it into six components: (i) applications, (ii) client computers or devices, (iii) mobile middleware, (iv) wireless networks, (v) wired networks, and (vi) host computers. Elements in these components specifically related to the subject are described in detail, and lists of current technologies for component construction are discussed. Another important and complicated issue related to the subject is mobile or electronic commerce application programming, which comprises two types, client-side and server-side programming; both are also introduced.
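A minimal didactic illustration of the two programming types the article introduces is sketched below: server-side code on a host computer exposing data over HTTP, and client-side code a device would run to consume it. The names and data are invented for the example; the article itself surveys technologies rather than prescribing code.

```python
# Server-side programming: a host computer exposing a tiny catalog endpoint.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"items": ["phone", "tablet"]}')

# Client-side programming: what a (mobile) client would run to fetch the data.
def fetch_catalog() -> str:
    import urllib.request
    with urllib.request.urlopen("http://localhost:8000/") as resp:
        return resp.read().decode()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CatalogHandler).serve_forever()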


Author(s):  
Anja Bechmann ◽  
Peter Bjerregaard Vahlstrup

The aim of this article is to discuss methodological implications and challenges in different kinds of deep and big data studies of Facebook and Instagram through methods involving the use of Application Programming Interface (API) data. The article describes and discusses Digital Footprints (www.digitalfootprints.dk), a data extraction and analytics software that allows researchers to extract user data from Facebook and Instagram data sources: public streams as well as private data with user consent. Based on insights from the software design process and data-driven studies, the article argues for three main challenges: data quality, data access and analysis, and legal and ethical considerations.
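To make the API-based extraction concrete, here is a hedged sketch of the kind of call such a tool issues: fetching a consenting user's posts through the Facebook Graph API. The endpoint version, field names, and required permissions change over time, so treat the URL and parameters as illustrative rather than as the tool's actual requests.

```python
# Illustrative Graph API call for consented data extraction.
import requests

ACCESS_TOKEN = "user-consented-token"   # obtained through an OAuth consent flow

resp = requests.get(
    "https://graph.facebook.com/v12.0/me/posts",      # version is illustrative
    params={"fields": "message,created_time", "access_token": ACCESS_TOKEN},
    timeout=30,
)
resp.raise_for_status()
for post in resp.json().get("data", []):
    print(post.get("created_time"), post.get("message", ""))
```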


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jin Li ◽  
Songqi Wu ◽  
Yundan Yang ◽  
Fenghui Duan ◽  
Hui Lu ◽  
...  

In the process of sharing data, the costless replication of electric energy data leads to uncontrolled data and to difficulty in third-party access verification. This paper proposes a controlled data-sharing mechanism based on the consortium blockchain. The data flow range is controlled through the data isolation mechanism between channels provided by the consortium blockchain: a data storage consortium chain is constructed to achieve trusted data storage; attribute-based encryption is combined with it to complete data access control and meet the demands of granular data access control and secure sharing; and a data flow transfer ledger is built to record the life cycle of the original data and effectively record the transfer process of each data controller. Taking electric energy data sharing as the application scenario, the scheme is designed and simulated on Linux and Hyperledger Fabric. Experimental results verify that the mechanism can effectively control the scope of access to electric energy data and give the data owner control over the data.
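The access decision at the heart of the scheme can be pictured as below. Real attribute-based encryption enforces the policy cryptographically at decryption time; this plain-Python check only illustrates the policy logic, and all names are hypothetical.

```python
# Conceptual attribute-based access check: a requester's attributes must
# satisfy every clause of the record's policy before a key share is released.
RECORD_POLICY = {"org": {"GridCo"}, "role": {"analyst", "auditor"}}

def satisfies(policy: dict, attributes: dict) -> bool:
    """True iff the requester's attributes satisfy every policy clause."""
    return all(attributes.get(k) in allowed for k, allowed in policy.items())

requester = {"org": "GridCo", "role": "analyst"}
print(satisfies(RECORD_POLICY, requester))   # True -> decryption may proceed
```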


2021 ◽  
Author(s):  
Philipp S. Sommer ◽  
Viktoria Wichert ◽  
Daniel Eggert ◽  
Tilman Dinter ◽  
Klaus Getzlaff ◽  
...  

A common challenge for projects with multiple involved research institutes is a well-defined and productive collaboration. All parties measure and analyze different aspects, depend on each other, share common methods, and exchange the latest results, findings, and data. Today this exchange is often impeded by a lack of ready access to shared computing and storage resources. In our talk, we present a new and innovative remote procedure call (RPC) framework. We focus on a distributed setup, where project partners do not necessarily work at the same institute and do not have access to each other's resources.

We present the prototype of an application programming interface (API) developed in Python that enables scientists to collaboratively explore and analyze sets of distributed data. It offers the functionality to request remote data through a comfortable interface, and to share and invoke single computational methods or even entire analytical workflows and their results. The prototype enables researchers to make their methods accessible as a backend module running on their own infrastructure. Researchers from other institutes may then apply the available methods through a lightweight Python or JavaScript API that transforms standard Python calls into requests to the backend process on the remote server. In the end, the overhead for both the backend developer and the remote user is very low: the effort of implementing the necessary workflow and API usage equals that of writing code in a non-distributed setup. Moreover, data do not have to be downloaded locally; the analysis can be executed "close to the data" on the institutional infrastructure where the eligible data set is stored.

With our prototype, we demonstrate distributed data access and analysis workflows across institutional borders to enable effective scientific collaboration, thus deepening our understanding of the Earth system.

This framework has been developed in a joint effort of the DataHub and Digital Earth initiatives within the Research Centers of the Helmholtz-Gemeinschaft Deutscher Forschungszentren e.V. (Helmholtz Association of German Research Centres, HGF).
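The core idea, turning an ordinary Python call into a request against a remote backend module, can be sketched as follows. This is a minimal stand-in, not the framework's actual API; the class name, URL, and method are invented for illustration.

```python
# Minimal RPC-style proxy: attribute access on the proxy becomes an HTTP
# request to a backend module hosted at a partner institute.
import requests

class RemoteBackend:
    def __init__(self, base_url: str):
        self.base_url = base_url

    def __getattr__(self, method_name):
        def call(**kwargs):
            # Each method call is serialized and executed "close to the data".
            resp = requests.post(f"{self.base_url}/{method_name}",
                                 json=kwargs, timeout=60)
            resp.raise_for_status()
            return resp.json()
        return call

# Hypothetical usage: invoke a partner institute's analysis without
# downloading their data set locally.
ocean = RemoteBackend("https://data.example-institute.org/api")
result = ocean.mean_temperature(region="North Atlantic", year=2020)
```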


Cloud computing is an efficient technology for storing huge amounts of data files with security. However, the content owner cannot control data access by unauthorized clients, nor control data storage and usage. Some previous approaches address data access control and data de-duplication concurrently for cloud storage systems, but encrypted data is not handled effectively by current industrial de-duplication solutions: the de-duplication is unguarded against brute-force attacks and fails to support data access control. Data de-duplication, a commonly used data-confining technique that eliminates multiple copies of redundant data, reduces the space needed to store the data and thus saves bandwidth. To overcome the above problems, an efficient content discovery and preserving de-duplication (ECDPD) algorithm was proposed that detects client file ranges and block ranges for de-duplication when storing data files in the cloud storage system. Data access control is actively supported by ECDPD. In experimental evaluations, the proposed ECDPD method reduces Data Uploading Time (DUT) by 3.802 milliseconds and Data Downloading Time (DDT) by 3.318 milliseconds compared with existing approaches.
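The block-range idea underlying such schemes can be illustrated with fixed-size block fingerprinting: a block is uploaded only if its hash has not been seen before. This is a generic de-duplication sketch, not the ECDPD algorithm itself, which additionally detects file ranges and enforces access control.

```python
# Generic block-level de-duplication: duplicate blocks are stored once.
import hashlib

BLOCK_SIZE = 4096
stored_blocks = {}   # fingerprint -> block bytes (stands in for the CSS)

def store_file(data: bytes) -> list:
    """Store a file block by block; return the fingerprint list ("recipe")
    needed to reconstruct it."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in stored_blocks:      # duplicate blocks are never re-stored
            stored_blocks[fp] = block
        recipe.append(fp)
    return recipe
```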


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Kenneth D. Mandl ◽  
Daniel Gottlieb ◽  
Joshua C. Mandel ◽  
Vladimir Ignatov ◽  
Raheel Sayeed ◽  
...  

Abstract The 21st Century Cures Act requires that certified health information technology have an application programming interface (API) giving access to all data elements of a patient’s electronic health record, “without special effort”. In the spring of 2020, the Office of the National Coordinator of Health Information Technology (ONC) published a rule—21st Century Cures Act Interoperability, Information Blocking, and the ONC Health IT Certification Program—regulating the API requirement along with protections against information blocking. The rule specifies the SMART/HL7 FHIR Bulk Data Access API, which enables access to patient-level data across a patient population, supporting myriad use cases across healthcare, research, and public health ecosystems. The API enables “push button population health” in that core data elements can readily and standardly be extracted from electronic health records, enabling local, regional, and national-scale data-driven innovation.
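The Bulk Data Access API is initiated with an asynchronous "kick-off" request, as in the sketch below. The base URL and token are placeholders; per the specification, a server accepts the request with HTTP 202 and returns a Content-Location header for polling the export status.

```python
# FHIR Bulk Data Access kick-off request for a population-level export.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"        # placeholder endpoint
headers = {
    "Accept": "application/fhir+json",
    "Prefer": "respond-async",                     # required for bulk export
    "Authorization": "Bearer <access-token>",      # SMART backend-services token
}
resp = requests.get(f"{FHIR_BASE}/Patient/$export", headers=headers, timeout=30)
if resp.status_code == 202:
    print("Poll export status at:", resp.headers["Content-Location"])
```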


2020 ◽  
Vol 17 ◽  
pp. 326-331
Author(s):  
Kamil Siebyła ◽  
Maria Skublewska-Paszkowska

There are various methods for creating web applications, and each differs in performance. This factor is measurable at every level of the application. The performance of the frontend layer depends on the response time of each endpoint of the API (Application Programming Interface) used; how data access is programmed at a specific endpoint therefore determines the performance of the entire application. Many of the available programming methods are time-consuming to implement. This article presents a comparison of the available methods of handling the persistence layer in relation to the efficiency of their implementation.
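The kind of endpoint-level difference at issue can be shown with a tiny micro-benchmark: two ways of programming the same persistence operation with very different costs. This uses SQLite for self-containment and is illustrative only; the article's actual comparison covers the persistence-layer methods of its own stack.

```python
# Micro-benchmark: bulk insert versus per-row insert on the same table.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

t0 = time.perf_counter()
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item{i}",) for i in range(10_000)])   # one bulk statement
bulk = time.perf_counter() - t0

t0 = time.perf_counter()
for i in range(10_000):
    conn.execute("INSERT INTO items (name) VALUES (?)", (f"item{i}",))
per_row = time.perf_counter() - t0

print(f"bulk insert: {bulk:.3f}s, per-row insert: {per_row:.3f}s")
```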

