storage overhead
Recently Published Documents

Total documents: 63 (last five years: 26)
H-index: 7 (last five years: 2)

2021 · Vol 11 (21) · pp. 10332
Author(s): Zong-Wu Zhu, Ru-Wei Huang

Aiming at the large ciphertext size and low efficiency of current secure multi-party computation (SMC) protocols based on fully homomorphic encryption (FHE), the paper proves that the multi-bit FHE scheme proposed by Chen Li et al. satisfies key homomorphism. Based on this scheme and threshold decryption, a three-round, interactive, leveled SMC protocol under the Common Random String (CRS) model is designed. The protocol is proven secure under the semi-honest and semi-malicious models, and, with the addition of non-interactive zero-knowledge proofs, under the malicious model as well. Its security reduces to the Decisional Learning With Errors (DLWE) problem and a variant of it (some-are-errorless LWE). Compared with existing FHE-based SMC protocols under the CRS model, this protocol has smaller ciphertexts, higher efficiency, lower storage overhead, and better overall performance.
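
The key-homomorphic, threshold-decryption flavor the protocol builds on can be illustrated with a toy LWE-style scheme; this sketch is our illustration, not the paper's construction, and all parameters are toy values:

```python
# Toy LWE-style scheme showing key homomorphism + threshold decryption:
# the joint secret key is the sum of the parties' shares, and decryption
# combines one noisy partial decryption per party.
import numpy as np

q = 2**15                      # ciphertext modulus (toy value)
n = 64                         # LWE dimension (toy value)
parties = 3
rng = np.random.default_rng(0)

# Additive key shares; the joint key is s = sum(s_i) mod q.
shares = [rng.integers(0, q, n) for _ in range(parties)]
s = sum(shares) % q

def encrypt(bit):
    a = rng.integers(0, q, n)
    e = int(rng.integers(-4, 5))                 # small noise
    return a, (int(a @ s) + e + bit * (q // 2)) % q

def partial_decrypt(a, share):
    # Each party reveals a partial decryption smudged with fresh small noise.
    return (int(a @ share) + int(rng.integers(-4, 5))) % q

def combine(b, partials):
    phase = (b - sum(partials)) % q
    return int(abs(phase - q // 2) < q // 4)     # round to the nearest bit

a, b = encrypt(1)
assert combine(b, [partial_decrypt(a, sh) for sh in shares]) == 1
```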


2021 · Vol 9 (4) · pp. 0-0

In location-aware services, past mobile-device cache invalidation-replacement policies become ineffective when the client's travel route varies rapidly; they also incur high storage overhead. These limitations motivate this work. The paper describes models that address these challenges with two separate approaches to predicting the user's future path. In the first approach, the prevalent Sequential Pattern Mining & Clustering (SPMC) technique pre-processes the user's movement trajectory and extracts frequently occurring patterns. In the second approach, the frequent patterns are fed into the Mobility Markov Chain & Matrix (MMCM) algorithm, which reduces the size of the candidate sets and thereby makes sequence-pattern mining more efficient. Analytical results show significant caching performance improvement over previous caching policies.
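
As a hint of how Markov-chain path prediction can drive such caching decisions, here is a minimal first-order sketch (our illustration; the MMCM algorithm additionally exploits frequent patterns to prune candidate sets):

```python
# First-order Mobility Markov Chain: count transitions between consecutive
# locations in the trajectory, then predict the most likely next location.
from collections import Counter, defaultdict

trajectory = ["home", "cafe", "office", "cafe", "office", "gym",
              "home", "cafe", "office", "gym", "home"]

transitions = defaultdict(Counter)
for src, dst in zip(trajectory, trajectory[1:]):
    transitions[src][dst] += 1

def predict_next(location):
    """Most likely next location, or None for an unseen one."""
    if location not in transitions:
        return None
    return transitions[location].most_common(1)[0][0]

# Prefetch/retain cached items for the predicted cell instead of evicting them.
print(predict_next("cafe"))    # -> "office"
```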


2021 · Vol 2021 · pp. 1-17
Author(s): Yongli Tang, Feifei Xia, Qing Ye, Mengyao Wang, Ruijie Mu, ...

Although most existing lattice-based linkable ring signature schemes can effectively resist quantum attacks, they still incur excessive time and storage overhead. This paper constructs an identity-based linkable ring signature (LRS) scheme over the NTRU lattice by employing trapdoor generation and rejection sampling. The security of the scheme relies on the small integer solution (SIS) problem on the NTRU lattice. We prove that the scheme has unconditional anonymity, unforgeability, and linkability in the random oracle model (ROM). The performance analysis shows that the scheme has shorter public/private keys and, when the number of ring members is small (such as N ≤ 8), a shorter signature size than other recent lattice-based LRS schemes. The computational efficiency of signing is also further improved, since it involves only multiplication in the polynomial ring and modular operations on small integers. Finally, we implemented our scheme and other similar schemes; the time for signature generation and verification of our scheme decreases by roughly 44.951% and 33.503%, respectively.
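
Rejection sampling of the kind used in such lattice signatures can be sketched in one dimension (an illustrative toy, not the paper's polynomial-ring instantiation; sigma and M are made-up values):

```python
# One-dimensional toy of the rejection sampling used in lattice signatures:
# the candidate z = y + (secret-dependent term) is kept with a probability
# that makes z's distribution independent of the secret.
import math, random

sigma = 100.0                  # Gaussian parameter (toy value)
M = 3.0                        # expected number of repetitions (toy value)

def rho(x, center=0.0):
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sample_z(secret_term):
    while True:
        y = random.gauss(0, sigma)             # masking term
        z = y + secret_term                    # candidate signature component
        if random.random() < min(1.0, rho(z) / (M * rho(z, secret_term))):
            return z                           # accepted: distributed ~ rho(.)

print(sample_z(secret_term=7.5))
```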


2021
Author(s): Fuan Xiao, Lai Wang, Yimo Chen, Jiaming Hong, Honglai Zhang, ...

Traditional Chinese medicine (TCM) safety issues have attracted attention from both industry and academia. Counterfeit drugs not only seriously harm human health but also cause heavy economic loss to the industry, so establishing a reliable drug traceability system to prevent fake drugs is necessary. Most traditional drug traceability systems are centralized, which raises problems of data privacy, transparency, and data tampering. Blockchain, a promising technology offering decentralization, tamper resistance, and privacy preservation, provides a basis for complete drug traceability. However, blockchain also has issues such as high computation overhead, storage overhead, limited scalability, and low transaction throughput. Sharding has emerged as a promising approach to overcoming these issues: it divides the network into multiple smaller groups that can work in parallel on disjoint transactions. In this paper, we propose a lightweight sharding-based blockchain architecture for the TCM traceability system (LBS-TCM) that addresses the above issues. Our architecture consists of two blockchains: a leader-shard blockchain, which saves all states and processes cross-shard transactions, and sub-shard blockchains, which handle transactions within shards. LBS-TCM also supports state sharding, transaction sharding, and ledger pruning. Our empirical evaluations suggest that the proposed architecture can process more than 1500 tx/sec in a network of 2048 nodes and 64 shards, while the storage overhead of each node decreases linearly.
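
The division of labor between the leader shard and the sub shards can be pictured with a hash-based routing sketch (the shard count, account names, and routing rule are our illustration, not LBS-TCM's actual protocol):

```python
# Hash-based routing between sub shards and the leader shard: intra-shard
# transactions stay local; cross-shard ones are escalated to the leader.
import hashlib

NUM_SHARDS = 64

def shard_of(account: str) -> int:
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def route(tx):
    src, dst = shard_of(tx["sender"]), shard_of(tx["receiver"])
    if src == dst:
        return ("sub-shard", src)       # intra-shard: handled locally
    return ("leader-shard", None)       # cross-shard: leader shard resolves

print(route({"sender": "manufacturer-001", "receiver": "pharmacy-042"}))
```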


Author(s): Frederik Armknecht, Jens-Matthias Bohli, Ghassan O. Karame, Wenting Li

2021 · Vol 2021 · pp. 1-18
Author(s): Yixiao Zhu, Wenjie Ma, Jiangtao Cui, Xiaofang Xia, Yanguo Peng, ...

Contact tracing is a critical tool in containing epidemics such as COVID-19, and researchers have carried out a great deal of work on it. However, almost all existing works assume that clients and authorities have large storage space and powerful computation capability, so that clients can implement contact tracing on their own mobile devices such as phones, tablets, and wearables. With widespread outbreaks, these approaches lose robustness on larger datasets when clients are resource-constrained. To address this limitation, we propose a publicly verifiable contact tracing algorithm in cloud computing (PvCT), which utilizes cloud services to provide the storage and computation capability needed for contact tracing. To guarantee the integrity and accuracy of contact tracing results, PvCT applies a novel set-accumulator-based authenticated data structure whose computation is outsourced, so the client can check whether the returned results are valid. Furthermore, we provide rigorous security proof of our algorithm based on the q-Strong Bilinear Diffie–Hellman assumption. A detailed experimental evaluation on three real-world datasets shows that our algorithm requires only milliseconds of client CPU time and significantly reduces the client's storage overhead from the size of the dataset to a constant 128 bytes.
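
The verify-against-a-constant-size-digest workflow can be sketched with a Merkle tree as a stand-in for the paper's pairing-based set accumulator (our simplification; PvCT's proofs and 128-byte digests come from the accumulator, not from hashing):

```python
# Merkle-tree stand-in for the accumulator: the client keeps a constant-size
# root; the cloud returns query answers plus membership proofs to verify.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):                  # power-of-two leaf count assumed
    level = [H(l.encode()) for l in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def prove(tree, idx):
    proof = []
    for level in tree[:-1]:
        proof.append((level[idx ^ 1], idx % 2))   # sibling + position bit
        idx //= 2
    return proof

def verify(root, leaf, proof):
    node = H(leaf.encode())
    for sibling, node_is_right in proof:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

records = ["u1@cell3", "u2@cell3", "u5@cell9", "u8@cell9"]
tree = build_tree(records)
root = tree[-1][0]                       # all the client has to store
assert verify(root, "u2@cell3", prove(tree, 1))
```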


Author(s): K. V. Uma Maheswari, Dr. Dhanaraj Cheelu

Cloud computing is recognized as an alternative to traditional information technology due to its intrinsic resource sharing and low maintenance characteristics, and it provides an economical and efficient solution for sharing group resources among cloud users. Unfortunately, sharing data in a group while preserving data and identity privacy is still a challenging issue due to frequent changes in membership. To overcome this problem, a secure data sharing scheme for dynamic groups is proposed, so that any user within a group can share data securely by leveraging both group signature and dynamic broadcast encryption techniques. The scheme enables any cloud user to anonymously share data with others within the group and supports efficient member revocation. The storage overhead and encryption computation cost depend on the number of revoked users.
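
To make the last claim concrete, here is a deliberately naive revocation-list sketch (purely our illustration, not the proposed scheme) in which ciphertext size tracks the number of revoked members:

```python
# Deliberately naive revocation-list sharing: the ciphertext header carries
# one entry per revoked member, so its size tracks the revoked-user count.
revoked = ["alice", "mallory"]

def share(data_key: bytes):
    header = [("revoked", member) for member in revoked]  # grows with |revoked|
    return {"header": header, "payload": data_key}

def can_decrypt(member: str, ct) -> bool:
    return ("revoked", member) not in ct["header"]

ct = share(b"group-data-key")
assert can_decrypt("bob", ct) and not can_decrypt("alice", ct)
```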


2021 · Vol 2021 · pp. 1-10
Author(s): Hongtao Li, Feng Guo, Lili Wang, Jie Wang, Bo Wang, ...

Cloud storage provides a way to effectively store and manage big data. However, because data ownership and management are separated, it is difficult for users to check the integrity of their data in the traditional way, which has led to the introduction of auditing techniques. This paper proposes a public auditing protocol with a self-certified public key system using blockchain technology. The user's operation records and the file's metadata are formed into a block after being verified by the checking nodes and are then put into the blockchain; the chain structure of the blocks ensures the security of the auditing data source. The security analysis shows that in the presented scheme attackers can derive neither the user's secret key nor the user's data from the collected auditing information, and that the scheme effectively resists both signature-forging and proof-forging attacks. Compared with other public auditing schemes, our scheme based on the self-certified public key system improves storage overhead, communication bandwidth, and verification efficiency.
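
The tamper-evident audit trail can be sketched as hash-chained blocks of operation records (field names are our illustration, not the paper's block format):

```python
# Hash-chained blocks of audit records: altering any earlier block breaks
# every later "prev" link, making the auditing data source tamper-evident.
import hashlib, json

def make_block(prev_hash, records):
    header = {"prev": prev_hash, "records": records}
    body = json.dumps(header, sort_keys=True).encode()
    return {"header": header, "hash": hashlib.sha256(body).hexdigest()}

genesis = make_block("0" * 64, [])
b1 = make_block(genesis["hash"], [{"op": "upload", "file": "f1"}])
b2 = make_block(b1["hash"], [{"op": "audit", "file": "f1", "passed": True}])

# A verifier replays the chain and checks each hash link.
for prev, blk in [(genesis, b1), (b1, b2)]:
    assert blk["header"]["prev"] == prev["hash"]
```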


Sensors · 2021 · Vol 21 (3) · pp. 896
Author(s): Jeongsoo Park, Jungrae Kim, Jong Hwan Ko

Due to the limited resources of Internet of Things (IoT) edge devices, deep neural network (DNN) inference requires collaboration with cloud server platforms, where the DNN is partitioned and partly offloaded to high-performance servers to reduce end-to-end latency. Because the data-intensive intermediate feature space at the partitioned layer must be transmitted to the servers, efficient compression of the feature space is imperative for high-throughput inference. However, the feature space at deeper layers has different characteristics than natural images, limiting the compression performance of conventional preprocessing and encoding techniques. To tackle this limitation, we introduce a new method for compressing the DNN intermediate feature space using a specialized autoencoder, called auto-tiler. The proposed auto-tiler incorporates the tiling process and provides multiple input/output dimensions to support various partitioned layers and compression ratios. The results show that auto-tiler achieves 18 to 67 percentage points higher accuracy than existing methods at the same bitrate while reducing the processing latency by 73% to 81%. The dimension variability of auto-tiler also reduces the storage overhead by 62% with negligible accuracy loss.
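
The tiling step itself (rearranging a channel stack into one image-like mosaic that an autoencoder or codec can compress) can be sketched as follows; shapes are illustrative, and auto-tiler learns this process jointly rather than applying a fixed rearrangement:

```python
# Rearranging a (C, H, W) feature tensor into one 2-D mosaic of channel
# tiles, so an image-style autoencoder/codec can compress it.
import numpy as np

def tile(features, cols):
    c, h, w = features.shape
    rows = c // cols                             # assumes cols divides C
    return (features.reshape(rows, cols, h, w)
                    .transpose(0, 2, 1, 3)       # row, height, col, width
                    .reshape(rows * h, cols * w))

feat = np.random.rand(64, 28, 28).astype(np.float32)  # partitioned-layer output
mosaic = tile(feat, cols=8)
assert mosaic.shape == (224, 224)                # 8*28 x 8*28 mosaic
```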


Author(s): Kavita Srivastava

The steep rise of autonomous systems and the internet of things in recent years has influenced the way computation is performed. With AI (artificial intelligence) built into IoT and cyber-physical systems, a need for high-performance computing has emerged. Cloud computing is no longer sufficient for sensor-driven systems that continuously collect data from the environment. Sensor-based systems such as autonomous vehicles require real-time data analysis and prediction, which the centralized cloud alone cannot provide. This scenario has given rise to a new computing paradigm called edge computing, in which data storage, analysis, and prediction are performed at the network edge rather than on a cloud server, enabling quick responses and lower storage overhead. The intelligence at the edge can be obtained through deep learning. This chapter surveys deep learning frameworks, hardware, and systems for edge computing and gives examples of deep neural network training using the Caffe 2 framework.

