Building a Secure Biomedical Data Sharing Decentralized App (DApp): Tutorial (Preprint)

Author(s):  
Matthew Johnson ◽  
Michael Jones ◽  
Mark Shervey ◽  
Joel T Dudley ◽  
Noah Zimmerman

Decentralized applications (DApps) are computer programs that run on a distributed computing system, such as a blockchain network. Unlike the client-server architecture that powers most internet applications, DApps that are integrated with a blockchain network can execute application logic that is guaranteed to be transparent, verifiable, and immutable. This new paradigm has a number of unique properties that are attractive to the biomedical and health care communities. However, instructional resources are scarcely available for biomedical software developers to begin building DApps on a blockchain. Such applications require new ways of thinking about how to build, maintain, and deploy software. This tutorial serves as a complete working prototype of a DApp, motivated by a real use case in biomedical research requiring data privacy. We describe the architecture of a DApp, the implementation details of a smart contract, a sample iPhone operating system (iOS) DApp that interacts with the smart contract, and the development tools and libraries necessary to get started. The code necessary to recreate the application is publicly available.

10.2196/13601 ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. e13601 ◽  
Author(s):  
Matthew Johnson ◽  
Michael Jones ◽  
Mark Shervey ◽  
Joel T Dudley ◽  
Noah Zimmerman

Decentralized apps (DApps) are computer programs that run on a distributed computing system, such as a blockchain network. Unlike the client-server architecture that powers most internet apps, DApps that are integrated with a blockchain network can execute app logic that is guaranteed to be transparent, verifiable, and immutable. This new paradigm has a number of unique properties that are attractive to the biomedical and health care communities. However, instructional resources are scarcely available for biomedical software developers to begin building DApps on a blockchain. Such apps require new ways of thinking about how to build, maintain, and deploy software. This tutorial serves as a complete working prototype of a DApp, motivated by a real use case in biomedical research requiring data privacy. We describe the architecture of a DApp, the implementation details of a smart contract, a sample iPhone operating system (iOS) DApp that interacts with the smart contract, and the development tools and libraries necessary to get started. The code necessary to recreate the app is publicly available.
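To make the client-contract interaction concrete, the sketch below shows how a client application could read from and write to a deployed Ethereum smart contract using Python's web3.py library. The tutorial itself pairs a Solidity contract with an iOS (Swift) client; this Python sketch is only illustrative, and the contract address, ABI, and function names (hasAccess, grantAccess) are hypothetical placeholders rather than the authors' API.

    # Minimal sketch (not the tutorial's implementation): a client calling a
    # hypothetical data-sharing smart contract over web3.py. The contract
    # address, ABI, and function names are illustrative placeholders.
    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local development node

    CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
    CONTRACT_ABI = []  # paste the ABI emitted by the Solidity compiler here

    contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
    account = w3.eth.accounts[0]  # an unlocked account on the dev node

    # Read-only call: no transaction, no gas, returns contract state.
    granted = contract.functions.hasAccess(account).call()

    # State-changing call: submits a transaction that the network mines into a block.
    tx_hash = contract.functions.grantAccess(account).transact({"from": account})
    receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
    print("consent recorded in block", receipt.blockNumber)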


10.2196/13046 ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. e13046 ◽  
Author(s):  
Mengchun Gong ◽  
Shuang Wang ◽  
Lezi Wang ◽  
Chao Liu ◽  
Jianyang Wang ◽  
...  

Background: Patient privacy is a ubiquitous problem around the world. Many existing studies have demonstrated the potential privacy risks associated with sharing of biomedical data. Owing to the increasing need for data sharing and analysis, health care data privacy is drawing more attention. However, to better protect biomedical data privacy, it is essential to assess the privacy risk in the first place.
Objective: In China, there is no clear regulation for health systems to deidentify data. It is also not known whether a mechanism such as the Health Insurance Portability and Accountability Act (HIPAA) Safe Harbor policy would achieve sufficient protection. This study aimed to conduct a pilot study using patient data from Chinese hospitals to understand and quantify the privacy risks of Chinese patients.
Methods: We used g-distinct analysis to evaluate the reidentification risks of the HIPAA Safe Harbor approach when applied to Chinese patients' data. More specifically, we estimated the risks under the HIPAA Safe Harbor and Limited Dataset policies by assuming that an attacker has background knowledge of the patient from the public domain.
Results: The experiments were conducted on 0.83 million patients (with data fields of date of birth, gender, and surrogate ZIP codes generated from home addresses) across 33 provincial-level administrative divisions in China. Under the Limited Dataset policy, 19.58% (163,262/833,235) of the population could be uniquely identified under the g-distinct metric (ie, 1-distinct). In contrast, the Safe Harbor policy significantly reduces the privacy risk: only 0.072% (601/833,235) of individuals are uniquely identifiable, and the majority of the population is 3000-indistinguishable (ie, an individual is expected to share common attributes with 3000 or fewer people).
Conclusions: Through experiments based on real-world patient data, this work illustrates that the results of g-distinct analysis of Chinese patients' privacy risk are similar to those of a previous US study, in which data from different organizations or regions can be vulnerable to different reidentification risks under different policies. This work provides a reference for Chinese health care entities to estimate patients' privacy risk during data sharing and lays the foundation for future studies of the privacy risk in Chinese patients' data.
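A minimal sketch of the g-distinct idea follows, assuming the quasi-identifiers (date of birth, gender, ZIP code) are loaded into a pandas DataFrame. The column names and toy records are illustrative, not the study's data or code. A record is g-distinct when its combination of quasi-identifiers is shared by at most g records.

    # Illustrative g-distinct computation (not the authors' code): the fraction of
    # records whose quasi-identifier combination occurs in at most g records.
    import pandas as pd

    def g_distinct_fraction(df: pd.DataFrame, quasi_ids: list, g: int) -> float:
        group_sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
        return float((group_sizes <= g).mean())

    # Toy records standing in for the real patient table.
    patients = pd.DataFrame({
        "date_of_birth": ["1980-01-02", "1980-01-02", "1975-06-30"],
        "gender":        ["F",          "F",          "M"],
        "zip_code":      ["100000",     "100000",     "200000"],
    })

    quasi = ["date_of_birth", "gender", "zip_code"]
    print("uniquely identifiable (1-distinct):", g_distinct_fraction(patients, quasi, g=1))
    print("3000-indistinguishable (group size <= 3000):", g_distinct_fraction(patients, quasi, g=3000))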


Author(s):  
Steve Sawyer ◽  
William Gibbons

This teaching case describes the efforts of one department in a large organization to migrate from an internally developed, mainframe-based computing system to a system based on purchased software running on a client/server architecture. The case highlights issues with large-scale software implementations such as those demanded by enterprise resource planning (ERP) package installations. Often, the ERP selected by an organization does not have all the required functionality, which demands purchasing and installing additional packages (known colloquially as “bolt-ons”) to provide the needed functionality. These implementations lead to issues regarding oversight of the technical architecture, both project and technology governance, and the user department's capability for managing the installation of new systems.


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Qingsong Zhao ◽  
Qingkai Zeng ◽  
Ximeng Liu

Functional encryption (FE) is a new encryption paradigm that allows tremendous flexibility in accessing encrypted data. In an FE scheme, a user holding a restricted functional key can learn a specific function of the encrypted messages and nothing else about them. Besides the standard notion of data privacy, an FE scheme should also protect the privacy of the function itself, which is crucial for practical applications. In this paper, we construct a secret key FE scheme for the inner product functionality using asymmetric bilinear pairing groups of prime order. Compared with existing similar schemes, our construction reduces both the necessary storage and the computational complexity by a factor of 2 or more. It achieves simulation-based security, which is stronger than indistinguishability-based security, against adversaries who make an unbounded number of ciphertext queries and adaptive secret key queries, under the External Decisional Linear (XDLIN) assumption in the standard model. In addition, we implement the secret key inner product scheme and compare its performance with that of similar schemes.
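To make the inner-product functionality concrete, the toy mock below illustrates the Setup/KeyGen/Encrypt/Decrypt interface: a functional key issued for a vector y lets its holder learn only the inner product <x, y> from a ciphertext for x. This mock provides no security whatsoever and is not the paper's construction (which relies on asymmetric bilinear pairings); it only shows what the functionality computes.

    # Toy mock of the inner-product FE interface. NO cryptographic security:
    # it only illustrates that a key for y reveals <x, y> and, by design, nothing else.
    from dataclasses import dataclass

    @dataclass
    class FunctionalKey:
        y: list

    @dataclass
    class Ciphertext:
        x: list  # a real scheme stores group elements hiding x, not x itself

    def setup():
        # A real scheme samples a master secret key and public parameters here.
        return object()

    def keygen(msk, y):
        return FunctionalKey(y=y)

    def encrypt(msk, x):
        return Ciphertext(x=x)

    def decrypt(sk, ct):
        # The decryptor learns only the inner product <x, y>.
        return sum(xi * yi for xi, yi in zip(ct.x, sk.y))

    msk = setup()
    ct = encrypt(msk, [3, 1, 4])
    sk_y = keygen(msk, [2, 0, 5])
    print(decrypt(sk_y, ct))  # 3*2 + 1*0 + 4*5 = 26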


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4375 ◽  
Author(s):  
Yuxuan Wang ◽  
Jun Yang ◽  
Xiye Guo ◽  
Zhi Qu

As one of the future development directions of the information industry, the Internet of Things (IoT) has been widely used. In order to reduce the pressure on the network caused by the long distance between the processing platform and the terminal, edge computing provides a new paradigm for IoT applications. In many scenarios, the IoT devices are distributed in remote areas or extreme terrain and cannot be accessed directly through the terrestrial network, so data transmission can only be achieved via satellite. However, traditional satellites are highly customized, and on-board resources are designed for specific applications rather than universal computing. Therefore, we propose to transform the traditional satellite into a space edge computing node. It can dynamically load software in orbit, flexibly share on-board resources, and provide services coordinated with the cloud. The corresponding hardware structure and software architecture of the satellite are presented. Through modeling analysis and simulation experiments of the application scenarios, the results show that the space edge computing system takes less time and consumes less energy than the traditional satellite constellation. The quality of service is mainly related to the number of satellites, satellite performance, and the task offloading strategy.
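The following back-of-the-envelope model, with entirely invented parameters, illustrates the kind of comparison the paper makes: relaying raw sensor data through a traditional bent-pipe satellite to a ground cloud versus processing it on board a space edge computing node and downlinking only the result. It is not the paper's simulation.

    # Illustrative latency model (all numbers are assumptions, not the paper's results).

    def relay_to_cloud(data_mb, uplink_mbps, downlink_mbps, cloud_gflops, task_gflop):
        # device -> satellite uplink, raw-data downlink to ground, compute in the cloud
        return (data_mb * 8 / uplink_mbps) + (data_mb * 8 / downlink_mbps) + (task_gflop / cloud_gflops)

    def process_on_board(data_mb, result_mb, uplink_mbps, downlink_mbps, sat_gflops, task_gflop):
        # device -> satellite uplink, on-board compute, downlink of the small result only
        return (data_mb * 8 / uplink_mbps) + (task_gflop / sat_gflops) + (result_mb * 8 / downlink_mbps)

    common = dict(data_mb=50.0, uplink_mbps=2.0, downlink_mbps=10.0, task_gflop=20.0)
    t_relay = relay_to_cloud(cloud_gflops=500.0, **common)
    t_edge = process_on_board(result_mb=0.5, sat_gflops=50.0, **common)
    print(f"relay to ground cloud: {t_relay:.1f} s, process on board: {t_edge:.1f} s")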


2019 ◽  
Vol 11 (4) ◽  
pp. 83 ◽  
Author(s):  
Antonio Celesti ◽  
Maria Fazio ◽  
Massimo Villari

Presently, we are observing an explosion of data that need to be stored and processed over the Internet and that are characterized by large volume, velocity, and variety. For this reason, software developers have begun to look at NoSQL solutions for data storage. However, operations that are trivial in traditional Relational DataBase Management Systems (DBMSs) can become very complex in NoSQL DBMSs. This is the case of the join operation, which establishes a connection between two or more DB structures and whose construct is not explicitly available in many NoSQL databases. As a consequence, the data model has to be changed or a set of operations has to be performed to address particular queries on the data. Thus, open questions are: how do NoSQL solutions work when they have to perform join operations that are not natively supported? What is the quality of NoSQL solutions in such cases? In this paper, we deal with these issues, specifically considering one of the major NoSQL document-oriented DBs available on the market: MongoDB. In particular, we discuss an approach to perform join operations at the application layer in MongoDB that allows us to preserve data models. We analyse the performance of the proposed approach, discussing the overhead introduced in comparison with SQL-like DBs.
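One common way to perform such a join at the application layer is sketched below with pymongo: fetch one collection, collect the foreign-key values, query the second collection with $in, and stitch the documents together in code. The collection and field names are illustrative; the paper's exact approach may differ.

    # Application-layer "join" in MongoDB (illustrative collection/field names).
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["demo"]

    # Side 1: fetch the orders with the fields we need.
    orders = list(db.orders.find({}, {"_id": 1, "customer_id": 1, "total": 1}))

    # Side 2: one $in query for all referenced customers, indexed by _id.
    customer_ids = list({o["customer_id"] for o in orders})
    customers = {c["_id"]: c for c in db.customers.find({"_id": {"$in": customer_ids}})}

    # Stitch the two sides together in application code.
    joined = [{**o, "customer": customers.get(o["customer_id"])} for o in orders]
    for row in joined:
        name = row["customer"]["name"] if row["customer"] else None
        print(row["_id"], row["total"], name)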


Author(s):  
B. A. Malin ◽  
K. E. Emam ◽  
C. M. O'Keefe

2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Ying-Chih Lin ◽  
Chin-Sheng Yu ◽  
Yen-Jen Lin

Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-size data bring serious challenges to storage and computing technologies. Cloud computing is an alternative to crack this nut because it addresses storage and high-performance computing on large-scale data together. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making the vast amount of diverse data meaningful and usable.


2020 ◽  
Author(s):  
Hyeong-Joon Kim ◽  
Hye Hyun Kim ◽  
Hosuk Ku ◽  
Kyung Don Yoo ◽  
Suehyun Lee ◽  
...  

BACKGROUND: The Health Avatar Platform (HAP) provides a mobile health environment with interconnected patient Avatars, physician apps, and intelligent agents (IoA3) for data privacy and participatory medicine. However, its fully decentralized architecture has come at the expense of decentralized data management and data provenance.
OBJECTIVE: The introduction of blockchain and smart contract (SC) technologies to the HAP legacy platform, with a clinical metadata registry (MDR), remarkably strengthens decentralized health data integrity and immutable transaction traceability at the data-element level in a privacy-preserving fashion. A crypto-economy ecosystem was built to facilitate secure and traceable exchanges of sensitive health data.
METHODS: HAP decentralizes patient data to appropriate locations with no central storage, ie, on patients' smartphones and on physicians' smart devices. We implemented an Ethereum-based hash chain for all transactions and SC-based processes to guarantee decentralized data integrity and to generate block data containing transaction metadata on-chain. The parameters of all types of data communications were enumerated and incorporated into three SCs: a health data transaction manager, a transaction status manager, and an API transaction manager. The actual decentralized health data are managed off-chain on the appropriate smart devices and authenticated by hashed metadata on-chain.
RESULTS: The metadata of each data transaction are captured in a HAP blockchain node by the SCs. We provide workflow diagrams for each of the three use cases: data push (from a physician app or an intelligent agent to a patient Avatar), data pull (requested from a patient Avatar by other entities), and data backup transactions. Each transaction can be finely managed at the data-element level rather than at the resource or document level. Hash-chained metadata support data element-level verification of data integrity in subsequent transactions. SCs can incentivize transactions for data sharing and intelligent digital health care services.
CONCLUSIONS: HAP and IoA3 provide a decentralized blockchain ecosystem for health data that enables trusted and finely tuned data sharing and facilitates health value-creating transactions through SCs.
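A minimal sketch of the general pattern described above, not the HAP code: the health data element stays off-chain on the patient's device while its hash is anchored on-chain, and later recipients verify integrity by recomputing the hash. The on-chain metadata registry is mocked with a plain dictionary, and the field names are illustrative.

    # Off-chain data, on-chain hash: integrity verification sketch (registry is mocked).
    import hashlib
    import json

    def element_hash(element: dict) -> str:
        # Canonical JSON so sender and verifier hash identical bytes.
        canonical = json.dumps(element, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    # Off-chain: a data element held on the patient's smartphone (illustrative fields).
    element = {"element_id": "bp-2020-01-15", "systolic": 128, "diastolic": 82}

    # "On-chain": in HAP a smart contract transaction would record this metadata;
    # here a dictionary stands in for the blockchain-backed registry.
    onchain_registry = {element["element_id"]: element_hash(element)}

    def verify(candidate: dict, registry: dict) -> bool:
        return registry.get(candidate["element_id"]) == element_hash(candidate)

    print(verify(element, onchain_registry))                       # True
    print(verify({**element, "systolic": 160}, onchain_registry))  # False (tampered)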

