Groups and Individuals to See Increased Data Auditing

ASA Monitor ◽  
2021 ◽  
Vol 85 (7) ◽  
pp. 37-38
Author(s):  
Matthew T. Popovich ◽  
DeLaine Schmitz

Author(s):  
Cheng Zhang ◽  
Yang Xu ◽  
Yupeng Hu ◽  
J. Wu ◽  
Ju Ren ◽  
...  

Author(s):  
Rokesh Kumar Yarava ◽  
Ponnuru Sowjanya ◽  
Sowmya Gudipati ◽  
G. Charles Babu ◽  
Srisailapu D Vara Prasad

2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Jian Mao ◽  
Wenqian Tian ◽  
Yan Zhang ◽  
Jian Cui ◽  
Hanjun Ma ◽  
...  

With the increasing demand for ubiquitous connectivity, wireless technology has significantly improved our daily lives. Together with cloud-computing technology (e.g., cloud storage services and big data processing), new wireless networking technology has become the foundational infrastructure of emerging communication networks. In particular, cloud storage is widely used for services such as data outsourcing and resource sharing across heterogeneous wireless environments because of its convenience, low cost, and flexibility. However, users/clients lose physical control of their data after outsourcing it. Consequently, ensuring the integrity of outsourced data becomes an important security requirement for cloud storage applications. In this paper, we present Co-Check, a collaborative multicloud data integrity auditing scheme based on BLS (Boneh-Lynn-Shacham) signatures and homomorphic tags. Under the proposed scheme, clients can audit their outsourced data in a one-round challenge-response interaction with low performance overhead. Our scheme also supports dynamic data maintenance. Theoretical analysis and experimental results show that our scheme is provably secure and efficient.
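The one-round challenge-response interaction with homomorphic tags can be sketched as follows. This is a toy illustration only: the abstract's actual construction uses BLS signatures over pairing groups, whereas this sketch substitutes simple discrete-log tags t_i = g^(m_i) mod p so the homomorphic aggregation step is visible in plain Python; all names and parameters here are illustrative.

```python
# Toy homomorphic-tag auditing sketch (NOT actual BLS; a discrete-log
# stand-in chosen so the aggregation is easy to follow).
import random

P = 2**127 - 1          # a Mersenne prime, adequate for a toy group
G = 3                   # generator chosen for illustration

def tag(block: int) -> int:
    """Client side: homomorphic tag for one data block."""
    return pow(G, block, P)

def prove(blocks, tags, challenge):
    """Server side: one-round response to an (index, coefficient) challenge."""
    mu = sum(c * blocks[i] for i, c in challenge) % (P - 1)
    sigma = 1
    for i, c in challenge:
        sigma = sigma * pow(tags[i], c, P) % P
    return mu, sigma

def verify(challenge, mu, sigma):
    """Auditor: check g^mu == prod(tags_i^c_i) without seeing the blocks."""
    return pow(G, mu, P) == sigma

blocks = [random.randrange(P - 1) for _ in range(8)]
tags = [tag(m) for m in blocks]
challenge = [(i, random.randrange(1, P - 1)) for i in random.sample(range(8), 4)]
mu, sigma = prove(blocks, tags, challenge)
assert verify(challenge, mu, sigma)
```

The key property is that the auditor needs only the small (mu, sigma) pair, not the blocks themselves, and any tampering with a challenged block breaks the check with overwhelming probability.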


2018 ◽  
Vol 11 (1) ◽  
pp. 90
Author(s):  
Sara Alomari ◽  
Mona Alghamdi ◽  
Fahd S. Alotaibi

Auditing services for outsourced data, especially big data, have been an active research area recently, and many remote data auditing (RDA) schemes have been proposed. The two categories of RDA, Provable Data Possession (PDP) and Proof of Retrievability (PoR), provide the core schemes from which most researchers derive new schemes supporting additional capabilities such as batch and dynamic auditing. In this paper, we investigate the most popular PDP schemes, since many PDP techniques have been further improved to achieve efficient integrity verification. We first review the literature to establish the required background on auditing services and related schemes. Second, we specify a methodology for attaining the research goals. We then define each selected PDP scheme and the auditing properties used to compare the chosen schemes. On this basis, we determine, where possible, which scheme is optimal for handling big data auditing.
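A common building block in the PDP/PoR literature the survey covers is spot-checking: the client keeps only a small digest, then challenges the server on random blocks. A minimal sketch using a Merkle tree (illustrative only; this is not any specific surveyed scheme, and the names below are assumptions):

```python
# Minimal spot-check sketch: client keeps only the Merkle root,
# then verifies any challenged block against an authentication path.
import hashlib, random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Build all tree levels; assumes a power-of-two block count."""
    level = [h(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                          # tree[-1][0] is the root

def auth_path(tree, idx):
    """Server side: sibling hashes (plus which side each sits on)."""
    path = []
    for level in tree[:-1]:
        sib = idx ^ 1
        path.append((level[sib], sib > idx))
        idx //= 2
    return path

def verify_block(root, block, path):
    """Auditor: recompute the root from the block and its path."""
    node = h(block)
    for sib_hash, sib_on_right in path:
        node = h(node + sib_hash) if sib_on_right else h(sib_hash + node)
    return node == root

blocks = [f"block-{i}".encode() for i in range(8)]
tree = build_tree(blocks)
root = tree[-1][0]                       # the client's only stored state
idx = random.randrange(8)
assert verify_block(root, blocks[idx], auth_path(tree, idx))
```

The trade-offs the survey compares (proof size, client storage, support for dynamic updates) show up even here: the root is constant-size, but each proof costs log-many hashes.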


Author(s):  
VINITHA S P ◽  
GURUPRASAD E

Cloud computing has been envisioned as the next-generation architecture of the IT enterprise. It moves application software and databases to centralized large data centers, where the management of data and services may not be fully trustworthy. This paradigm brings many new security challenges, such as maintaining the correctness and integrity of data in the cloud. The integrity of cloud data may be lost through unauthorized access, modification, or deletion. Availability may also suffer because cloud service providers (CSPs), seeking to increase their profit margin by reducing costs, may discard rarely accessed data without timely detection. To overcome these issues, flexible distributed storage, token utilization, and signature creation are used to ensure data integrity; an auditing mechanism helps maintain the correctness of data and locate the exact server on which data has been corrupted; and dependability and availability are achieved through distributed storage of data in the cloud. Further, to ensure authorized access to cloud data, an admin module was proposed in our previous conference paper, which prevents unauthorized users from accessing data, along with a selective storage scheme based on different parameters of cloud servers for efficient storage of data in the cloud. Building on that work, this paper adds support for dynamic data operations such as updating, deleting, and adding data.
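The combination described above, per-block tokens that both detect corruption and locate the offending server, plus dynamic update/delete/add operations, can be sketched as follows. This is an illustrative simplification, not the paper's exact construction; all class and key names are assumptions.

```python
# Illustrative token-based store: replicated blocks, client-held HMAC
# tokens, and the dynamic operations (append, update, delete).
import hashlib, hmac

KEY = b"client-secret"                  # assumed client-held key

def token(data: bytes) -> bytes:
    # A full scheme would also bind the block index into the token
    # to prevent block-swapping; omitted here for brevity.
    return hmac.new(KEY, data, hashlib.sha256).digest()

class CloudStore:
    def __init__(self, servers=3):
        self.replicas = [[] for _ in range(servers)]  # same blocks per server
        self.tokens = []                              # client-side tokens

    def append(self, data: bytes):
        for r in self.replicas:
            r.append(data)
        self.tokens.append(token(data))

    def update(self, i: int, data: bytes):
        for r in self.replicas:
            r[i] = data
        self.tokens[i] = token(data)

    def delete(self, i: int):
        for r in self.replicas:
            del r[i]
        del self.tokens[i]

    def audit(self):
        """Return (server, index) pairs whose block fails its token."""
        bad = []
        for s, r in enumerate(self.replicas):
            for i, data in enumerate(r):
                if not hmac.compare_digest(token(data), self.tokens[i]):
                    bad.append((s, i))
        return bad
```

Because every replica is checked against the same client-held token, the audit does not just detect corruption but pinpoints which server and which block failed, matching the locating property the abstract describes.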


2021 ◽  
Author(s):  
Yilin Yuan ◽  
Jianbiao Zhang ◽  
Wanshan Xu ◽  
Xiao Wang ◽  
Yanhui Liu

Abstract Under the shared big data environment, most existing data auditing schemes rarely consider the authorization management of group users. Meanwhile, how to handle the integrity of shared data is a problem that needs consideration. In this paper, we therefore propose a novel remote data possession checking scheme that achieves group authority management while completing public auditing. To perform authority management, we introduce a trusted entity, the group manager. We formalize a new algebraic structure named the authorization invisible authenticator (AIA) and provide two versions of the AIA scheme: a basic AIA scheme and a standard AIA scheme. The standard AIA scheme is constructed from the basic AIA scheme and a user information table (UIT), with stronger security and wider applicable scenarios. By virtue of the standard AIA scheme, the group manager can easily carry out authority management, including enrolling, revoking, and updating users. On this basis, we further design a public auditing scheme for non-revoked users' shared data. The scheme is based on identity-based encryption (IBE), which greatly reduces the necessary certificate management cost. Detailed security analysis and performance evaluation demonstrate that the scheme is secure and feasible.
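The group manager's authority-management role via a user information table can be sketched as below. The AIA algebra and the IBE-based auditing are abstracted away entirely; every class, field, and method name here is an illustrative assumption, not the paper's construction.

```python
# Sketch of UIT-style authority management: enroll, revoke, and
# key-update group users; audits then cover only non-revoked users.
import secrets

class GroupManager:
    def __init__(self):
        self.uit = {}                    # user id -> authorization record

    def enroll(self, uid: str):
        self.uit[uid] = {"auth_key": secrets.token_hex(16), "revoked": False}

    def revoke(self, uid: str):
        self.uit[uid]["revoked"] = True  # revoked users drop out of audits

    def update(self, uid: str):
        self.uit[uid]["auth_key"] = secrets.token_hex(16)   # key rotation

    def authorized(self, uid: str) -> bool:
        rec = self.uit.get(uid)
        return rec is not None and not rec["revoked"]

    def audit_set(self):
        """Public auditing covers only non-revoked users' shared data."""
        return [u for u in self.uit if self.authorized(u)]
```

The design point this captures is centralization: because one trusted entity owns the table, enrollment, revocation, and key updates never require re-tagging other users' shared data.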


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Guangjun Liu ◽  
Wangmei Guo ◽  
Ximeng Liu ◽  
Jinbo Xiong

Enabling remote data integrity checking with failure recovery is exceedingly critical in distributed cloud systems. With the property of lower repair bandwidth while preserving fault tolerance, regenerating coding and network coding (NC) have received much attention in the coding-based storage field. Recently, a notable outsourced auditing scheme named NC-Audit was proposed for regenerating-coding-based distributed storage. The scheme claimed to effectively achieve lightweight privacy-preserving remote data verification for these networked distributed systems. However, our algebraic analysis shows that NC-Audit can easily be broken due to a defect in its design: an adversarial cloud server can forge illegal blocks to cheat the auditor with high probability when the coding field is large. From the perspective of algebraic security, we propose a remote data integrity checking scheme, RNC-Audit, that hides partial critical information from the server without compromising system performance. Our evaluation shows that the proposed scheme has significantly lower overhead than state-of-the-art schemes for distributed remote data auditing.
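The algebraic idea behind auditing network-coded storage is that a linear (homomorphic) MAC survives the random linear combinations a regenerating code applies to blocks. A toy sketch over a prime field (illustrative only; RNC-Audit's actual construction differs, and all parameters here are assumptions):

```python
# Linear MAC over GF(p): the tag of any linear combination of blocks
# equals the same linear combination of the blocks' tags.
import random

P = 2**31 - 1                        # prime field for the toy example
M = 4                                # symbols per block

# Auditor's secret key; nonzero entries so tampering always changes the MAC.
key = [random.randrange(1, P) for _ in range(M)]

def mac(block):
    return sum(k * x for k, x in zip(key, block)) % P

def combine(blocks, coeffs):
    """Network coding: the server stores random linear combinations."""
    return [sum(c * b[j] for c, b in zip(coeffs, blocks)) % P
            for j in range(M)]

blocks = [[random.randrange(P) for _ in range(M)] for _ in range(3)]
tags = [mac(b) for b in blocks]

coeffs = [random.randrange(P) for _ in range(3)]
coded = combine(blocks, coeffs)
coded_tag = sum(c * t for c, t in zip(coeffs, tags)) % P

# Linearity: the MAC verifies on any combination the code produces.
assert mac(coded) == coded_tag
```

This also makes the paper's security point concrete: if the server ever learns `key`, it can forge a valid tag for any block, which is why hiding that critical information from the server matters.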


Author(s):  
Yong Yu ◽  
Yannan Li ◽  
Man Ho Au ◽  
Willy Susilo ◽  
Kim-Kwang Raymond Choo ◽  
...  

2021 ◽  
pp. 1-6
Author(s):  
Jennifer Schuette ◽  
Hayden Zaccagni ◽  
Janet Donohue ◽  
Julie Bushnell ◽  
Kelly Veneziale ◽  
...  

Abstract Background: The Pediatric Cardiac Critical Care Consortium (PC4) is a multi-institutional quality improvement registry focused on care delivered in the cardiac ICU to patients with CHD and acquired heart disease. To assess data quality, a rigorous data auditing procedure has been in place since the inception of the consortium. Materials and methods: This report describes the data auditing process and quantifies the results of the initial 39 audits that took place after the transition from version one to version two of the registry's database. Results: In total, 2219 encounters were audited, for an average of 57 encounters per site. The overall data accuracy rate across all sites was 99.4%, with a major discrepancy rate of 0.52%. A passing score requires overall accuracy of >97% (achieved by all sites) and a major discrepancy rate of <1.5% (achieved by 38 of 39 sites, with 35 of 39 sites having a major discrepancy rate of <1%). Fields with the highest discrepancy rates included arrhythmia type, cardiac arrest count, and current surgical status. Conclusions: The extensive PC4 auditing process, including initial and routinely scheduled follow-up audits of every participating site, demonstrates an extremely high level of accuracy across a broad array of audited fields and supports the continued use of consortium data to identify best practices in paediatric cardiac critical care.
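The pass criteria above reduce to two simple thresholds; a minimal sketch (function name is illustrative, thresholds are those stated in the report):

```python
# PC4 pass criteria: overall accuracy > 97% and major discrepancy < 1.5%.
def audit_passes(accuracy: float, major_discrepancy: float) -> bool:
    return accuracy > 0.97 and major_discrepancy < 0.015

# The consortium-wide averages (99.4% accuracy, 0.52% major discrepancies)
# clear both thresholds comfortably.
assert audit_passes(0.994, 0.0052)
```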

