An Adaptive Biomedical Data Managing Scheme Based on the Blockchain Technique

2019 ◽  
Vol 9 (12) ◽  
pp. 2494 ◽  
Author(s):  
Ahmed Faeq Hussein ◽  
Abbas K. ALZubaidi ◽  
Qais Ahmed Habash ◽  
Mustafa Musa Jaber

Personal biomedical data plays a crucial role in maintaining proficient access to health records for both patients and health professionals. However, it is difficult to get a unified view of health data scattered across various health centers and hospital sections: health records are distributed across many places and cannot be integrated easily. In recent years, blockchain has emerged as a promising solution for sharing individual biomedical information securely, with the added benefit of privacy preservation owing to its immutability. This research puts forward a blockchain-based managing scheme that helps establish interpretation improvements in electronic biomedical systems. In this scheme, two blockchains form the base: the second blockchain algorithm generates a secure sequence for the hash key produced by the first blockchain algorithm. This adaptive feature enables the algorithm to handle multiple data types and to combine various biomedical images and text records. All data, including keywords, digital records, and patient identities, are encrypted with a private key and equipped with a keyword-search function, so as to maintain data privacy, access control, and protected search. The obtained results, which show low latency (less than 750 ms) at 400 requests/second, indicate its potential for use in several health care units such as hospitals and clinics.
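The two-chain construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block fields, the JSON record format, and the way the second chain consumes the first chain's hash keys are all assumptions made for the example.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Block:
    def __init__(self, payload: str, prev_hash: str):
        self.payload = payload
        self.prev_hash = prev_hash
        # Each block's hash chains its payload to the previous block.
        self.hash = sha256((payload + prev_hash).encode())

class Chain:
    def __init__(self):
        self.blocks = [Block("genesis", "0" * 64)]

    def append(self, payload: str) -> Block:
        block = Block(payload, self.blocks[-1].hash)
        self.blocks.append(block)
        return block

# First chain: stores hashes of mixed-type biomedical records (images, text).
records_chain = Chain()
record = json.dumps({"patient": "anon-01", "type": "image",
                     "digest": sha256(b"scan-bytes")})
rec_block = records_chain.append(record)

# Second chain: consumes the first chain's hash key to derive a secure sequence.
key_chain = Chain()
key_block = key_chain.append(rec_block.hash)

print(len(key_block.hash))  # 64 hex characters
```

Because the second chain's blocks are themselves hash-linked, tampering with a record hash in the first chain invalidates the derived sequence.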


Author(s):  
Yuancheng Li ◽  
Jiawen Yu

Background: In the power Internet of Things (IoT), power consumption data faces the risk of privacy leakage. Traditional privacy-preserving schemes cannot ensure data privacy across the system, because the secret key pairs are shared among all interior nodes, so a single leak compromises them all. In addition, general schemes support only summation algorithms, which limits extensibility.

Objective: To preserve the privacy of power consumption data, ensure the privacy of secret keys, and support multiple data processing methods, we propose an improved power consumption data privacy-preserving scheme.

Method: First, we establish a power IoT architecture based on edge computing. The data is then encrypted with a multi-key fully homomorphic algorithm, enabling operations on ciphertext without restrictions on the calculation type. Through the improved decryption algorithm, ciphertext that can be decrypted separately at cloud nodes is generated, which reduces communication costs and prevents data leakage.

Results: The experimental results show that our scheme is more efficient than traditional privacy-preservation schemes. According to the variance calculation, the proposed scheme meets the application standard in terms of computational cost and is feasible for practical operation.

Discussion: In the future, we plan to adopt a secure multi-party computation based scheme so that data can be managed locally with homomorphic encryption, ensuring data privacy.
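The core idea of computing on encrypted meter readings can be illustrated with a much simpler scheme than the multi-key fully homomorphic encryption the abstract describes. The sketch below uses a toy single-key Paillier cryptosystem, which is only additively homomorphic; it shows how a cloud node can sum readings it cannot read. The tiny primes are for illustration only.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic): Enc(a) * Enc(b) = Enc(a + b).
# Real deployments use >= 2048-bit moduli; these parameters are purely didactic.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Edge nodes encrypt individual meter readings; the cloud multiplies the
# ciphertexts, which sums the plaintexts without seeing any reading.
readings = [12, 30, 7]
ciphertexts = [encrypt(m) for m in readings]
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % n2

print(decrypt(aggregate))  # 49
```

A fully homomorphic scheme, as used in the paper, would additionally support multiplication on ciphertexts and hence arbitrary computation, not just sums.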


Author(s):  
Shalin Eliabeth S. ◽  
Sarju S.

Big data privacy preservation is one of the most pressing issues in industry today. Privacy problems are sometimes never identified before input data are published to a cloud environment. Data privacy preservation in Hadoop involves hiding and publishing the input dataset to the distributed environment. This paper investigates the problem of big data anonymization for privacy preservation from the perspectives of scalability and execution time. At present, many cloud applications that anonymize big data face the same kinds of problems. To address them, a data anonymization algorithm called Two-Phase Top-Down Specialization (TPTDS) is introduced and implemented in Hadoop. For the anonymization, 45,222 adult records with 15 attribute values were taken as the input big data. The proposed TPTDS algorithm was implemented with multidimensional anonymization in the MapReduce framework on Hadoop, which increases the efficiency of the big data processing system. Experiments running TPTDS in both one-dimensional and multidimensional MapReduce frameworks on Hadoop showed that multidimensional anonymization of the input Adult dataset gave the better result. Datasets are generalized in a top-down manner, and the multidimensional MapReduce framework produced the better information gain per privacy loss (IGPL) values. The anonymization was performed with specialization operations on a taxonomy tree. The experiments show that the solution improves the IGPL values and the anonymity parameter, and decreases the execution time of big data privacy preservation compared to the existing algorithm. These results suggest broad applicability in distributed environments.
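Top-down specialization works by starting from the most general taxonomy value and repeatedly specializing while the anonymity constraint still holds. The single-attribute sketch below illustrates that core loop; the taxonomy, the records, and k = 2 are invented for the example and are not the paper's Adult dataset or its two-phase MapReduce implementation.

```python
# Toy single-attribute top-down specialization: specialize a taxonomy node
# only while every resulting non-empty class keeps at least k records.
taxonomy = {
    "Any": ["Technical", "Non-technical"],
    "Technical": ["Engineer", "Scientist"],
    "Non-technical": ["Sales", "Clerk"],
}

records = ["Engineer", "Engineer", "Scientist", "Sales", "Clerk", "Clerk"]
k = 2

def covers(node: str, value: str) -> bool:
    """True if `value` lies in the subtree rooted at `node`."""
    if node == value:
        return True
    return any(covers(c, value) for c in taxonomy.get(node, []))

def specialize(node: str, recs: list):
    children = taxonomy.get(node)
    if not children:
        return node  # leaf value: fully specialized
    groups = {c: [r for r in recs if covers(c, r)] for c in children}
    # Specialize only if every non-empty group still satisfies k-anonymity.
    if all(len(g) >= k for g in groups.values() if g):
        return {c: specialize(c, g) for c, g in groups.items() if g}
    return node  # stop: further specialization would break k-anonymity

generalized = specialize("Any", records)
print(generalized)
```

Here both branches stop at the category level: splitting "Technical" would leave a singleton "Scientist" class, violating k = 2. The full TPTDS algorithm additionally scores each candidate specialization by its information gain per privacy loss (IGPL) and runs the search as two MapReduce phases.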


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 910
Author(s):  
Tong-Yuen Chai ◽  
Bok-Min Goi ◽  
Wun-She Yap

Biometric template protection (BTP) schemes have been implemented in recent years to increase public confidence in biometric systems with regard to data privacy and security. The introduction of BTP naturally incurs a loss of information for the sake of security, which degrades performance at the matching stage. Although extended work on some iris BTP schemes has tried to improve their recognition performance, a generalized solution to this problem is still lacking. In this paper, a trainable approach that requires no further modification of the protected iris biometric templates is proposed. The approach consists of two strategies for generating a confidence matrix that reduces the performance degradation of iris BTP schemes. The proposed binary confidence matrix performed better on noisy iris data, whereas the probability confidence matrix performed better on iris databases with higher image quality. In addition, the proposed scheme takes into consideration the potential effects on recognition performance caused by database-associated noise masks and by the variation in biometric data types produced by different iris BTP schemes. The scheme shows remarkable improvement in our experiments on various publicly available iris research databases.
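One way a confidence matrix can enter the matching stage is as per-bit weights in the fractional Hamming distance between iris codes, so that reliable bits count more than noisy ones. The sketch below shows that weighting under invented data: the codes, occlusion mask, and confidence weights are synthetic stand-ins, not the paper's trained matrices.

```python
import numpy as np

# Confidence-weighted fractional Hamming distance between an enrolled and a
# query iris code. Synthetic data: 512 bits, ~10% bit flips, ~5% occlusion.
rng = np.random.default_rng(0)
bits = 512

enrolled = rng.integers(0, 2, bits)
flips = rng.random(bits) < 0.10        # query-time noise flips ~10% of bits
query = enrolled ^ flips

mask = rng.random(bits) < 0.05         # occlusion mask: ignore these bits
confidence = rng.random(bits)          # per-bit reliability weights (trained
                                       # in the paper; random here)

valid = ~mask
disagree = (enrolled != query) & valid
# Weighted disagreement, normalized by total confidence over valid bits.
weighted_hd = confidence[disagree].sum() / confidence[valid].sum()

print(round(float(weighted_hd), 3))
```

With ~10% of bits flipped, the weighted distance stays well below the usual genuine/impostor decision region around 0.5, so the comparison still accepts despite the noise.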


Author(s):  
Dhamanpreet Kaur ◽  
Matthew Sobiesk ◽  
Shubham Patil ◽  
Jin Liu ◽  
Puran Bhagat ◽  
...  

Abstract

Objective: This study seeks to develop a fully automated method of generating synthetic data from a real dataset that could be employed by medical organizations to distribute health data to researchers, reducing the need for access to real data. We hypothesize that the application of Bayesian networks will improve upon the predominant existing method, medBGAN, in handling the complexity and dimensionality of healthcare data.

Materials and Methods: We employed Bayesian networks to learn probabilistic graphical structures and simulated synthetic patient records from the learned structure. We used the University of California Irvine (UCI) heart disease and diabetes datasets as well as the MIMIC-III diagnoses database. We evaluated our method through statistical tests, machine learning tasks, preservation of rare events, disclosure risk, and the ability of a machine learning classifier to discriminate between the real and synthetic data.

Results: Our Bayesian network model outperformed or equaled medBGAN in all key metrics. Notable improvement was achieved in capturing rare variables and preserving association rules.

Discussion: Bayesian networks generated data sufficiently similar to the original data with minimal risk of disclosure, while offering additional transparency, computational efficiency, and capacity to handle more data types in comparison to existing methods. We hope this method will allow healthcare organizations to efficiently disseminate synthetic health data to researchers, enabling them to generate hypotheses and develop analytical tools.

Conclusion: We conclude that the application of Bayesian networks is a promising option for generating realistic synthetic health data that preserves the features of the original data without compromising data privacy.
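The generative idea can be shown at miniature scale: fit the conditional probability tables of a fixed two-node network (diagnosis → test_result) from "real" records by counting, then sample synthetic records from those distributions. The toy data and the hand-fixed structure are illustrative assumptions; the study also learns the graph structure itself and works on far larger datasets.

```python
import random
from collections import Counter, defaultdict

random.seed(42)

# "Real" patient records: (diagnosis, test_result) pairs.
real = [("flu", "pos"), ("flu", "pos"), ("flu", "neg"),
        ("cold", "neg"), ("cold", "neg"), ("cold", "pos")]

# Learn P(diagnosis) and P(test_result | diagnosis) by counting.
p_diag = Counter(d for d, _ in real)
cond = defaultdict(Counter)
for d, t in real:
    cond[d][t] += 1

def sample():
    # Ancestral sampling: parent first, then child given parent.
    d = random.choices(list(p_diag), weights=p_diag.values())[0]
    t = random.choices(list(cond[d]), weights=cond[d].values())[0]
    return d, t

synthetic = [sample() for _ in range(1000)]
flu_share = sum(1 for d, _ in synthetic if d == "flu") / len(synthetic)
print(abs(flu_share - 0.5) < 0.1)  # synthetic marginal tracks the real data
```

Because only the counted distributions are released, no individual real record appears in the synthetic output, which is the disclosure-risk property the study evaluates.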


2016 ◽  
Vol 206 (1) ◽  
pp. 605-629 ◽  
Author(s):  
T. Bodin ◽  
J. Leiva ◽  
B. Romanowicz ◽  
V. Maupin ◽  
H. Yuan

2021 ◽  
Vol 11 (12) ◽  
pp. 2928-2936
Author(s):  
S. Vairaprakash ◽  
A. Shenbagavalli ◽  
S. Rajagopal

Biomedical image processing is an important aspect of modern medicine and has an immense influence on the modern world. Automatic, device-assisted systems are immensely useful for diagnosing biomedical images easily, accurately, and effectively. Remote health care systems allow medical professionals and patients to work from different locations; in addition, expert advice on a patient can be obtained within a prescribed period of time from a specialist in a foreign country or a remote area. In remote healthcare systems, digital biomedical images must be transmitted over the network, and this delivery entails many security challenges. Patient privacy must be protected by securing images from unwanted access, and the images must be maintained effectively so that nothing alters their content; in certain instances, data manipulation can have dramatic effects. This work proposes a biomedical image safety method. The suggested method first constructs a binary pixel encoding matrix and then adjusts the matrix using a decimation-mutation DNA watermarking principle. The privacy of the sub-key pair is then protected through the use of chaotic tent maps, and the security of the chaotic (C-function) development was investigated with respect to both transmission and uncertainty. Depending on the initial conditions, different random-number sequences were generated for each of the chaotic maps. A multi-scale grasshopper optimization algorithm with a correlation-coefficient fitness function and PSNR was proposed for choosing the system's optimal public and secret keys from the random numbers. The validation of the optimization shows the novel model to be more stable than the conventional approach. In conclusion, the suggested method was compared with current protection approaches and appears to be highly successful.
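The tent map mentioned above is a standard chaotic generator: sensitivity to initial conditions makes its iterates usable as key-stream material. The sketch below shows that behavior; the parameter mu = 1.99, the byte quantization, and the seeds are illustrative choices, not the paper's exact construction.

```python
# Chaotic key-stream generation with a tent map: x -> mu*x for x < 0.5,
# else mu*(1 - x). For mu near 2 the orbit is chaotic on (0, 1).
def tent_map_stream(x0: float, mu: float, n: int) -> list:
    """Return n pseudo-random bytes from successive tent-map iterations."""
    x = x0
    out = []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the state to one byte
    return out

# Seeds differing by 1e-6 diverge quickly: sensitivity to initial conditions.
a = tent_map_stream(0.400000, 1.99, 16)
b = tent_map_stream(0.400001, 1.99, 16)
print(a != b)  # True
```

This sensitivity is what lets each choice of initial condition yield a distinct random-number sequence per map, from which the optimization step then selects the key material.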

