Research on optimization of real-time efficient storage algorithm in data information serialization

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0260697
Author(s):  
Bin Huang ◽  
You Tang

Background: With the vigorous development of Internet technology, devices of all types have gained more functions and network communication has become easier and more diverse; at the same time, data volumes have grown enormously. Under network bandwidth limitations, a long data stream must be cut into many pieces and transferred one by one, which multiplies transfer times and causes information delays. Results: Aiming at the problems of poor data integrity, low storage efficiency, and poor serialization efficiency in traditional data storage, this article introduces Protobuf technology to study the serialization of data storage information. The serpentine gap method is used to allocate the intervals of the sequence nodes, so that the working state and the resting state always maintain a dynamic balance. According to the first-level rules, the storage data of the completed target node is obtained, the grammatical structure and semantics of the target data are analyzed, the corresponding mappings are established, and the data storage information is serialized. To verify the effectiveness of Protobuf's serialization method for data storage information, a comparative experiment is designed. Serializing JSON data with three methods, HDVM, Redis, and Protobuf, the comparative analysis shows that HDVM has the longest processing time and Protobuf the shortest, while data integrity is unaffected. The simulation data show that the Protobuf serialization method has a short conversion time, high space utilization, and obvious advantages in correctness and integrity. It is very suitable for serializing JSON data when bandwidth is limited.
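The bandwidth argument rests on a simple fact: schema-based binary formats like Protobuf omit field names and text encoding from the wire payload. As a rough sketch of that size difference (the record fields are illustrative assumptions, and `struct` stands in for Protobuf's generated serializers, which require a compiled `.proto` schema):

```python
import json
import struct

# Hypothetical sensor record; the field names and values are illustrative.
record = {"node_id": 12345, "temp": 23.5, "humidity": 0.62, "seq": 7}

# Text serialization (JSON): human-readable, but field names travel on the wire.
json_bytes = json.dumps(record).encode("utf-8")

# Schema-based binary serialization, Protobuf-style: fields are packed by
# position, so only the raw values are sent.
# Layout: unsigned int, 32-bit float, 32-bit float, unsigned int.
bin_bytes = struct.pack("<IffI", record["node_id"], record["temp"],
                        record["humidity"], record["seq"])

print(len(json_bytes), len(bin_bytes))  # the binary payload is several times smaller
```

Under a fixed bandwidth, the smaller payload translates directly into fewer transfer rounds, which is the effect the comparative experiment measures.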

2018 ◽  
Vol 8 (11) ◽  
pp. 2216
Author(s):  
Jiahui Jin ◽  
Qi An ◽  
Wei Zhou ◽  
Jiakai Tang ◽  
Runqun Xiong

Network bandwidth is a scarce resource in big data environments, so data locality is a fundamental problem for data-parallel frameworks such as Hadoop and Spark. This problem is exacerbated in multicore server-based clusters, where multiple tasks running on the same server compete for the server’s network bandwidth. Existing approaches solve this problem by scheduling computational tasks near the input data and considering the server’s free time, data placements, and data transfer costs. However, such approaches usually set identical values for data transfer costs, even though a multicore server’s data transfer cost increases with the number of data-remote tasks. As a result, they minimize data-processing time ineffectively. As a solution, we propose DynDL (Dynamic Data Locality), a novel data-locality-aware task-scheduling model that handles dynamic data transfer costs for multicore servers. DynDL offers greater flexibility than existing approaches by using a set of non-decreasing functions to evaluate dynamic data transfer costs. We also propose online and offline algorithms (based on DynDL) that minimize data-processing time and adaptively adjust data locality. Although DynDL is NP-complete (nondeterministic polynomial-complete), we prove that the offline algorithm runs in quadratic time and generates optimal results for DynDL’s specific uses. Using a series of simulations and real-world executions, we show that our algorithms are 30% better, in terms of data-processing time, than algorithms that do not consider dynamic data transfer costs. Moreover, they can adaptively adjust data localities based on the server’s free time, data placement, and network bandwidth, and schedule tens of thousands of tasks within subseconds or seconds.
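The core idea, that a data-remote task's transfer cost should grow with the number of remote tasks already contending for the server, can be sketched with a toy greedy scheduler. The cost function, numbers, and greedy rule below are illustrative assumptions, not DynDL's actual online or offline algorithms:

```python
# Toy data-locality-aware scheduler: remote transfer cost is a
# non-decreasing function of how many data-remote tasks a server already
# hosts, instead of a fixed constant.

def transfer_cost(n_remote):
    # non-decreasing: each additional remote task adds more contention
    return 1.0 + 0.5 * n_remote

def schedule(tasks, servers):
    """tasks: list of (task_id, server_holding_its_data);
    servers: dict server -> current load (queued tasks)."""
    remote_counts = {s: 0 for s in servers}
    assignment = {}
    for task_id, local in tasks:
        best, best_cost = None, float("inf")
        for s, load in servers.items():
            # data-local placement pays no transfer cost
            cost = load if s == local else load + transfer_cost(remote_counts[s])
            if cost < best_cost:
                best, best_cost = s, cost
        if best != local:
            remote_counts[best] += 1
        servers[best] += 1  # one more task queued on the chosen server
        assignment[task_id] = best
    return assignment

# all three tasks have their data on server A; the third goes remote
# only once A's queue makes the remote transfer cheaper than waiting
assignment = schedule([("t1", "A"), ("t2", "A"), ("t3", "A")], {"A": 0, "B": 0})
print(assignment)
```

With a constant transfer cost, the break-even point for going remote never moves; making the cost a function of `remote_counts` is what lets the scheduler back off as a server's bandwidth saturates.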


Author(s):  
Tarik Chafiq ◽  
Mohammed Ouadoud ◽  
Hassane Jarar Oulidi ◽  
Ahmed Fekri

The aim of this research work is to ensure the integrity and correctness of a geotechnical database that contains anomalies. These anomalies occurred mainly during the input and/or transfer of data. The algorithm created in the framework of this paper was tested on a dataset of 70 core drillings. It is based on a multi-criteria analysis that qualifies geotechnical data integrity using a sequential approach. The implementation of this algorithm has produced a relevant set of output values, which minimizes processing time and manual verification. The methodology used in this paper could be useful for defining the type of foundation adapted to the nature of the subsoil and, thus, for estimating an adequate budget.
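A sequential multi-criteria check of this kind can be sketched as a pipeline that stops at the first violated criterion per record. The field names, thresholds, and criteria below are illustrative assumptions, not the paper's actual rules:

```python
# Toy sequential multi-criteria integrity check on core drilling records.

def check_record(rec):
    """Apply criteria in sequence; return the first failed criterion, or None."""
    criteria = [
        ("depth_positive", lambda r: r["depth_m"] > 0),
        ("depth_plausible", lambda r: r["depth_m"] <= 100),
        ("spt_in_range", lambda r: 0 <= r["spt_n"] <= 100),  # SPT blow count
        ("coords_present", lambda r: r["x"] is not None and r["y"] is not None),
    ]
    for name, test in criteria:
        if not test(rec):
            return name  # sequential approach: stop at the first anomaly
    return None

boreholes = [
    {"depth_m": 12.5, "spt_n": 30, "x": 500.1, "y": 300.2},
    {"depth_m": -3.0, "spt_n": 15, "x": 501.0, "y": 301.0},  # input error
]
anomalies = {i: check_record(r) for i, r in enumerate(boreholes) if check_record(r)}
print(anomalies)
```

Flagging each record with the first violated criterion is what narrows manual verification to the anomalous entries only.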


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Ruoshui Liu ◽  
Jianghui Liu ◽  
Jingjie Zhang ◽  
Moli Zhang

Cloud computing is a new way of storing data, in which users tend to upload video data to cloud servers without keeping redundant local copies. However, it takes the data out of the hands of the users, who would conventionally control and manage it. Therefore, a key issue is how to ensure the integrity and reliability of the video data stored in the cloud for the provision of video streaming services to end users. This paper details verification methods for the integrity of video data encrypted with fully homomorphic cryptosystems in the context of cloud computing. Specifically, we apply dynamic operations to video data stored in the cloud using the method of block tags, so that the integrity of the data can be successfully verified. The whole process is based on an analysis of existing Remote Data Integrity Checking (RDIC) methods.
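The block-tag idea can be sketched as follows: the client tags each data block under a secret key before upload, then audits by challenging randomly chosen block indices. HMAC stands in here for the homomorphic tags the paper uses, and the block size and names are illustrative assumptions:

```python
import hashlib
import hmac

# Toy block-tag remote data integrity check (RDIC).
KEY = b"client-secret-key"
BLOCK = 4  # tiny block size, for the demo only

def make_tags(data):
    """Split data into blocks and tag each block, bound to its index."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    tags = [hmac.new(KEY, bytes([i]) + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return blocks, tags

def verify(blocks, tags, challenge):
    """Recompute tags for the challenged block indices and compare."""
    return all(hmac.compare_digest(
        hmac.new(KEY, bytes([i]) + blocks[i], hashlib.sha256).digest(), tags[i])
        for i in challenge)

data = b"video frame bytes..."
blocks, tags = make_tags(data)
ok_before = verify(blocks, tags, [0, 2])   # intact data passes the audit
blocks[2] = b"XXXX"                        # the server tampers with a block
ok_after = verify(blocks, tags, [0, 2])    # the challenge detects it
print(ok_before, ok_after)
```

Binding each tag to its block index is what allows the dynamic operations (insert, delete, modify) the abstract mentions: only the affected blocks need re-tagging, not the whole file.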


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Shan-Shan Li ◽  
Jian Zhou ◽  
Xuan Wang

Aiming at the shortcomings of traditional broadcast transmitter noise test methods, such as low efficiency, inconvenient data storage, and high demands on testers, a dynamic online test method for transmitter noise is proposed. The principles of the system composition and the test method are given. Transmitter noise changes in real time, and the Voice Activity Detection (VAD) noise estimation algorithm cannot track this change in real time. This paper therefore proposes a combined noise estimation algorithm using VAD and dynamic estimation. By setting the threshold of the double-threshold VAD detection low, the algorithm can accurately detect silent segments, which are used as noise signals for noise estimation. For the nonsilent segments detected by the VAD, a minimum-value-search dynamic spectrum estimation algorithm based on speech presence probability (IMCRA) is used for noise estimation. Transmitter noise is measured by calculating the noise figure (NF). The test method collects the input and output data of the transmitter in real time, giving better accuracy and real-time performance, and the feasibility of the method is verified by experimental simulation.
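The split the abstract describes can be sketched with an energy-based double-threshold classifier: a deliberately low threshold marks frames as silence (safe for direct noise estimation), a high one marks clear speech, and the band in between is left to an IMCRA-style minimum-statistics estimator. The frame size, thresholds, and test signal below are illustrative assumptions:

```python
import math

def frame_energy(samples):
    return sum(s * s for s in samples) / len(samples)

def classify_frames(signal, frame=8, low=0.01, high=0.1):
    """Double-threshold VAD over fixed-size frames."""
    labels = []
    for i in range(0, len(signal) - frame + 1, frame):
        e = frame_energy(signal[i:i + frame])
        if e < low:
            labels.append("silence")     # use directly for noise estimation
        elif e > high:
            labels.append("speech")      # handled by the IMCRA-style estimator
        else:
            labels.append("uncertain")
    return labels

silence = [0.01 * math.sin(i) for i in range(32)]      # near-silent segment
speech = [0.8 * math.sin(0.3 * i) for i in range(32)]  # active segment
sig = silence + speech
labels = classify_frames(sig)

# noise power estimated from the silent frames only
noise = [frame_energy(sig[i:i + 8])
         for i, lab in zip(range(0, len(sig), 8), labels) if lab == "silence"]
noise_power = sum(noise) / len(noise)
print(labels, noise_power)
```

Keeping the silence threshold low trades missed silent frames for a very low false-positive rate, which is what keeps speech energy out of the noise estimate.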


2014 ◽  
Vol 508 ◽  
pp. 192-195
Author(s):  
Zhi Dong Wang ◽  
Yu Guo ◽  
Heng Ye ◽  
Bing Yang

By utilizing the VISA functions provided by LabVIEW, we can easily realize serial communication between the cardiechema sensor and the monitoring system, and the collected data can be written into and read from Access through database functions. The combination of Internet technology and the virtual instrument provides a promising platform for realizing virtual instrument networking. This paper mainly introduces the construction of a phonocardiogram monitoring system with functions such as real-time monitoring, data storage, signal preprocessing, playback, characteristic signal extraction, and remote monitoring.
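The acquire-then-store loop at the heart of such a system can be sketched in a few lines. SQLite stands in for the Access database, the serial read is simulated rather than performed through VISA, and the table and column names are illustrative assumptions:

```python
import sqlite3
import time

# In-memory database standing in for the Access data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phonocardiogram (ts REAL, amplitude REAL)")

def read_sensor():
    # placeholder for a VISA/serial read of one heart-sound sample
    return 0.42

# acquisition loop: read a sample, timestamp it, store it
for _ in range(3):
    conn.execute("INSERT INTO phonocardiogram VALUES (?, ?)",
                 (time.time(), read_sensor()))
conn.commit()

# stored samples can later be read back for playback or feature extraction
rows = conn.execute("SELECT amplitude FROM phonocardiogram").fetchall()
print(rows)
```

Timestamping each sample at insert time is what makes playback and later characteristic-signal extraction possible from the stored data alone.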


2014 ◽  
Vol 556-562 ◽  
pp. 5395-5399
Author(s):  
Jian Hong Zhang ◽  
Wen Jing Tang

Data integrity is one of the biggest concerns with cloud data storage for cloud users. Moreover, cloud users’ constrained computing capabilities make the task of data integrity auditing expensive and even formidable. Recently, a proof-of-retrievability scheme proposed by Yuan et al. addressed this issue, and a security proof of the scheme was provided. Unfortunately, in this work we show that the scheme is insecure. Namely, a cloud server that maliciously modifies the data file can pass the verification, and the client who executes the cloud storage auditing can recover the whole data file through the interactive process. Furthermore, we show that the protocol is vulnerable to an efficient active attack, meaning that an active attacker is able to arbitrarily modify the cloud data without being detected by the auditor during the auditing process. After giving the corresponding attacks on Yuan et al.’s scheme, we suggest a solution to fix the problems.
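One generic way such an undetected-modification attack arises is through proof replay: if an audit challenge is predictable or reused, a server that recorded an earlier proof can pass the audit after modifying the data. The sketch below illustrates only this general replay risk, not Yuan et al.'s actual construction or the specific attack in this paper:

```python
import hashlib
import os

def proof(data, nonce):
    # server's response to an audit challenge: hash of nonce || data
    return hashlib.sha256(nonce + data).digest()

original = b"original file block"
fixed_nonce = b"reused-challenge"
expected = proof(original, fixed_nonce)   # auditor's precomputed check value

tampered = b"maliciously modified block"
cached = proof(original, fixed_nonce)     # server recorded this exchange earlier
replay_passes = (cached == expected)      # reused challenge: tampering undetected

fresh_nonce = os.urandom(16)              # unpredictable fresh challenge
fresh_detects = (proof(tampered, fresh_nonce) != proof(original, fresh_nonce))
print(replay_passes, fresh_detects)
```

A fresh, unpredictable challenge forces the server to compute over the data it actually holds, which is the property a sound auditing protocol must guarantee.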

