Secure KNN Classification Scheme Based on Homomorphic Encryption for Cyberspace

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jiasen Liu ◽  
Chao Wang ◽  
Zheng Tu ◽  
Xu An Wang ◽  
Chuan Lin ◽  
...  

With the advent of the intelligent era, artificial intelligence algorithms are increasingly widely used, and large amounts of user data are collected on cloud servers for sharing and analysis, but the security risk of private data breaches is growing at the same time. CKKS homomorphic encryption has become a research focus in cryptography because it supports homomorphic computation on floating-point numbers with comparable computational efficiency. Based on CKKS homomorphic encryption, this paper implements a secure KNN classification scheme on cloud servers for Cyberspace (CKKSKNNC) that supports batch calculation. The CKKS scheme is used to encrypt user data samples, and the Euclidean distance, Pearson similarity, and cosine similarity are then computed between ciphertext data samples. Finally, secure classification of the samples is realized by voting rules. The experiments use the IRIS data set, a classification benchmark commonly used in machine learning. The results show that the classification accuracy on the IRIS data is around 97% for each similarity measure except the Pearson correlation coefficient, almost identical to the accuracy on plaintext, which demonstrates the effectiveness of this scheme. Comparative experiments further demonstrate the efficiency of the scheme.
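The classification step described above can be sketched in plain Python. This is a hypothetical plaintext sketch only: in the actual CKKSKNNC scheme the distances are computed homomorphically over CKKS ciphertexts, whereas here encryption is abstracted away to show the k-nearest-neighbour voting rule on its own; the function names are illustrative, not from the paper.

```python
import math
from collections import Counter

def euclidean(a, b):
    # One of the three similarity/distance measures used in the scheme.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(query, samples, labels, k=3):
    # Rank training samples by distance to the query, then apply the
    # majority-voting rule over the k nearest neighbours.
    ranked = sorted(range(len(samples)), key=lambda i: euclidean(query, samples[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]
```

In the encrypted setting, only the distance computation runs over ciphertexts; the vote itself operates on decrypted (or securely compared) scores.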

2021 ◽  
Vol 11 (18) ◽  
pp. 8757
Author(s):  
Mikail Mohammed Salim ◽  
Inyeung Kim ◽  
Umarov Doniyor ◽  
Changhoon Lee ◽  
Jong Hyuk Park

Healthcare applications store private user data on cloud servers and perform computation operations that support several patient diagnoses. Growing cyber-attacks on hospital systems result in user data being held for ransom. Furthermore, mathematical operations on data stored in the Cloud are exposed to untrusted external entities that sell private data for financial gain. In this paper, we propose a privacy-preserving scheme that uses homomorphic encryption to protect medical plaintext data from attackers. Secret sharing distributes computations across several virtual nodes at the edge and masks all arithmetic operations, preventing untrusted cloud servers from learning the tasks performed on the encrypted patient data. Virtual edge nodes benefit from cloud computing resources to carry out compute-intensive mathematical functions and reduce latency in device–edge node data transmission. A comparative analysis with existing studies demonstrates that homomorphically encrypted data stored at the edge preserves data privacy and integrity. Furthermore, secret-sharing-based multi-node computation using virtual nodes ensures data confidentiality against untrusted cloud networks.
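The secret-sharing idea above can be illustrated with additive secret sharing, where each virtual edge node holds one share and no single node learns the underlying value. This is a minimal sketch under assumed parameters (the prime modulus and function names are illustrative, not from the paper):

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(value, n):
    # Split `value` into n additive shares that sum to value mod P;
    # any subset of fewer than n shares reveals nothing about value.
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

def add_shared(shares_a, shares_b):
    # Each node adds its own shares locally; reconstructing the result
    # yields the sum without any node seeing either input in the clear.
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]
```

Addition of two patient values thus proceeds node-by-node, and only the final total is ever reconstructed.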


Author(s):  
Liu Jiasen ◽  
Wang Xu An ◽  
Chen Bowei ◽  
Tu Zheng ◽  
Zhao Kaiyang

With the improving performance of cloud servers, face recognition applications are becoming increasingly popular, but they also raise security problems such as leakage of users' private data. This article proposes a face recognition scheme based on homomorphic encryption in a cloud environment. The article first uses the MTCNN algorithm to detect and align faces and then extracts face feature vectors with the FaceNet algorithm. The facial features are encrypted with the CKKS homomorphic encryption scheme, and a database of the encrypted facial features is built on the cloud server. Recognition proceeds as follows: compute the distances between the encrypted feature vectors, take the maximum value of the ciphertext results, decrypt it, and compare it against the threshold to decide whether the probe matches a known person. Experimental results on the LFW data set show that, with a threshold of 1.1236, the recognition accuracy on ciphertext is 94.8837%, which demonstrates the reliability of the proposed scheme.
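The final matching step can be sketched as follows. This is a plaintext sketch under assumptions not stated in the abstract: it assumes the 1.1236 threshold applies to the squared Euclidean distance between FaceNet embeddings and uses a minimum-distance rule, whereas the actual scheme performs the search over CKKS-encrypted vectors and decrypts only the final comparison value.

```python
THRESHOLD = 1.1236  # threshold reported for the LFW-based experiments

def squared_distance(u, v):
    # Squared Euclidean distance between two face embeddings.
    return sum((a - b) ** 2 for a, b in zip(u, v))

def is_known_person(probe, gallery):
    # Accept the probe if its closest gallery embedding is within the
    # threshold (assumed minimum-distance matching rule).
    best = min(squared_distance(probe, g) for g in gallery)
    return best < THRESHOLD
```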


2003 ◽  
Vol 42 (05) ◽  
pp. 564-571 ◽  
Author(s):  
M. Schumacher ◽  
E. Graf ◽  
T. Gerds

Summary Objectives: There is a lack of generally applicable tools for the assessment of predictions for survival data. Prediction error curves based on the Brier score, which have been suggested as a sensible approach, are illustrated by means of a case study. Methods: The concept of predictions made in terms of conditional survival probabilities given the patient's covariates is introduced. Such predictions are derived from various statistical models for survival data, including artificial neural networks. How the prediction error of a prognostic classification scheme can be followed over time is illustrated with data from two studies on the prognosis of node-positive breast cancer patients, one of them serving as an independent test data set. Results and Conclusions: The Brier score as a function of time is shown to be a valuable tool for assessing the predictive performance of prognostic classification schemes for survival data incorporating censored observations. Comparison with the prediction based on the pooled Kaplan-Meier estimator yields a benchmark value for any classification scheme incorporating patients' covariate measurements. The problem of an overoptimistic assessment of prediction error caused by data-driven modelling, as occurs, for example, with artificial neural nets, can be circumvented by assessment in an independent test data set.
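The time-dependent Brier score underlying the prediction error curves can be sketched for the uncensored case: at each time t it averages the squared difference between the observed 0/1 survival status and the model's predicted survival probability. This sketch deliberately omits the inverse-probability-of-censoring weighting needed for censored observations; the function name is illustrative.

```python
def brier_score_at(t, survival_times, predicted_surv_probs):
    # Empirical Brier score BS(t) for uncensored survival data:
    # mean over patients of (1{T_i > t} - S_hat(t | x_i))^2,
    # where S_hat is the model's conditional survival probability.
    n = len(survival_times)
    return sum((float(T > t) - p) ** 2
               for T, p in zip(survival_times, predicted_surv_probs)) / n
```

Plotting BS(t) over a grid of t values yields the prediction error curve; the pooled Kaplan-Meier estimate plugged in as `predicted_surv_probs` gives the covariate-free benchmark.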


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management, as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing removes unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, the query is optimised using the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storage and retrieval of information.
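The Bhattacharyya distance used for query matching is a standard measure between two discrete distributions, e.g. normalised term-frequency vectors for a query and a document. A minimal sketch (the representation of queries and documents as distributions is an assumption for illustration):

```python
import math

def bhattacharyya_distance(p, q):
    # D_B(p, q) = -ln( sum_i sqrt(p_i * q_i) ) for discrete
    # distributions p and q; 0 when identical, larger when dissimilar.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(bc)
```

A smaller distance indicates a closer match, so documents would be ranked by increasing `bhattacharyya_distance` to the query distribution.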


Cryptography ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 2
Author(s):  
Tushar Kanti Saha ◽  
Takeshi Koshiba

Conjunctive queries play a key role in retrieving data from a database. In a database, a query containing many conditions in its predicate, connected by an “and/&/∧” operator, is called a conjunctive query. Retrieving the outcome of a conjunctive query from thousands of records is a heavy computational task. Private data access to an outsourced database is required to keep the database secure from adversaries; thus, private conjunctive queries (PCQs) are indispensable. Cheon, Kim, and Kim (CKK) proposed a PCQ protocol using search-and-compute circuits, relying on somewhat homomorphic encryption (SwHE) for security. As their protocol is far from practical, we propose a practical batch private conjunctive query (BPCQ) protocol that applies a batching technique to process conjunctive queries over an outsourced database in which both the database and the queries are encoded in binary format. As the main technique in our protocol, we develop a new data-packing method that packs many data items into a single polynomial using the batch technique. We further improve the performance of the binary-encoded BPCQ protocol by replacing the binary encoding with N-ary encoding. Finally, we compare the performance of the binary-encoded and N-ary-encoded BPCQ protocols.
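The predicate a conjunctive query evaluates is simply a logical AND over per-field equality conditions. The plaintext sketch below shows the semantics that the BPCQ protocol evaluates over encrypted, batch-packed records; the record representation and function names are illustrative, not from the paper:

```python
def conjunctive_match(record, conditions):
    # A conjunctive query matches a record only if EVERY (field, value)
    # condition holds -- the "and/&/∧" connective of the predicate.
    return all(record.get(field) == value for field, value in conditions)

def run_query(records, conditions):
    # Linear scan over the table; the private protocol performs the
    # equivalent comparison homomorphically on packed ciphertexts.
    return [r for r in records if conjunctive_match(r, conditions)]
```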


Animals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 50
Author(s):  
Jennifer Salau ◽  
Jan Henning Haas ◽  
Wolfgang Junge ◽  
Georg Thaller

Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and is especially challenging when it comes to partitioning objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record freely walking Holstein Friesian cows from both sides at three different camera positions. For the determination of the body parts head, rump, back, legs and udder, five properties of the pixels in the depth maps (row index, column index, depth value, variance, mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and subsequently compared. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for neural networks) and could be considered successful regarding the classification of pixels to body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with the camera position. Precision and recall values associated with individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. Once trained, however, kNN classification incurs higher runtime costs in computational time and memory than the neural networks. The cost vs. accuracy ratio of each method needs to be taken into account when deciding which method should be implemented in the application.
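The Hamming loss reported above is, for this single-label pixel classification task, the fraction of pixels whose predicted body part disagrees with the ground truth. A minimal sketch (label names are illustrative):

```python
def hamming_loss(y_true, y_pred):
    # Fraction of label assignments that disagree; for pixel-wise
    # body-part classification this is the share of misclassified pixels.
    assert len(y_true) == len(y_pred)
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)
```

A loss of 0.007 thus means roughly 7 misclassified pixels per 1000.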


2011 ◽  
Vol 44 (1) ◽  
pp. 14271-14276 ◽  
Author(s):  
H. Chang ◽  
A. Astolfi
Keyword(s):  
Data Set ◽  

2018 ◽  
Vol 10 (9) ◽  
pp. 136
Author(s):  
Rakibul Islam ◽  
Mohammad Emdad Hossain ◽  
Mohammad Nazmul Hoq ◽  
Md. Morshedul Alam

Working capital management plays a central role in enhancing operational efficiency and, ultimately, profitability. Financial managers worldwide have been searching for the proper way to utilize working capital components so as to sustain profitability. The purpose of this study is to assess the impact of working capital components on profitability indicators of selected pharmaceutical firms in Bangladesh. The paper used financial data of 9 pharmaceutical firms listed on the Dhaka Stock Exchange (DSE) covering 2011-2015. Two methods were used to analyse the data set. First, a Pearson correlation matrix was used to measure the relationships between the selected variables. Second, multiple regression analysis was used to investigate the impact of working capital components on the profitability of the selected firms. The study also conducted a Durbin-Watson test to assess autocorrelation among the selected variables. The correlation matrix identified a negative correlation between working capital components and profitability, whereas the regression analysis found that the number of days accounts receivable (AR) had a significant positive impact and that the current ratio (CR) and debt ratio (DR) had a significant negative impact on profitability.
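The Pearson correlation coefficient behind the correlation matrix is the covariance of two series divided by the product of their standard deviations; a value near -1 indicates the negative relationship the study reports. A minimal sketch:

```python
import math

def pearson_r(x, y):
    # r = cov(x, y) / (sd(x) * sd(y)), computed from raw sums.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```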


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Ruoshui Liu ◽  
Jianghui Liu ◽  
Jingjie Zhang ◽  
Moli Zhang

Cloud computing is a new way of storing data, where users tend to upload video data to cloud servers without retaining local copies. However, this keeps the data out of the hands of the users who would conventionally control and manage it. A key issue is therefore how to ensure the integrity and reliability of the video data stored in the cloud for the provision of video streaming services to end users. This paper details verification methods for the integrity of video data encrypted using fully homomorphic cryptosystems in the context of cloud computing. Specifically, we apply dynamic operations to video data stored in the cloud with the method of block tags, so that the integrity of the data can be successfully verified. The whole process is based on an analysis of existing Remote Data Integrity Checking (RDIC) methods.
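The block-tag idea can be illustrated with a simplified keyed-hash sketch: each data block gets a tag, and a verifier spot-checks randomly chosen blocks by recomputing their tags. This is NOT the homomorphic tag construction from the paper, only an assumed RDIC-style baseline for illustration:

```python
import hashlib

def make_tags(blocks, key):
    # One keyed SHA-256 tag per block, stored alongside the data.
    return [hashlib.sha256(key + b).hexdigest() for b in blocks]

def challenge(blocks, tags, key, indices):
    # RDIC-style spot check: recompute the tag of each sampled block
    # and compare it with the stored tag; any mismatch means the
    # block was modified or lost.
    return all(hashlib.sha256(key + blocks[i]).hexdigest() == tags[i]
               for i in indices)
```

Homomorphic tag schemes improve on this by letting the server combine many blocks into one short proof instead of returning the blocks themselves.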

