A Method of Solving Data Consistency of Disk Array Cache

2012 ◽  
Vol 532-533 ◽  
pp. 1172-1176
Author(s):  
Wei Wang ◽  
Shi Qun Yin

To speed up reads from disk array storage, engineers have introduced cache technology into the disk array. While caching solved the read-efficiency problem, after countless write operations in the disk cache the data consistency problem becomes prominent. Consistency is especially difficult to guarantee under power failure or abnormal machine failure. In this paper, we adopt Non-Volatile RAM (NVRAM) devices so that data in the disk array cache is not lost after a power failure, and we design a new cache organizational structure. We first introduce a cache structure with two tables (a real-time mapping table and a backup mapping table) and a cache backup block. At the macroscopic level, data can be recovered by copying between the two tables; at the microscopic level, the cache backup block protects cached data against write failures. Under power failure or system breakdown, this technology ensures that data is not easily lost and that the original data can be recovered after a crash, thereby ensuring the consistency of the data cache.
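As a rough illustration of the two-table idea, the Python sketch below keeps a real-time mapping table, a backup mapping table, and a single backup block. The class and field names are hypothetical, and the NVRAM persistence itself is abstracted away; this is a sketch of the recovery logic, not the authors' implementation.

```python
import copy

class NVRAMCache:
    """Illustrative sketch of the two-table recovery scheme: a real-time
    mapping table tracks live cache-block-to-disk mappings, a backup
    mapping table holds the last known-good copy, and a backup block
    shields the one write in progress."""

    def __init__(self):
        self.realtime_table = {}   # block id -> disk address (live)
        self.backup_table = {}     # block id -> disk address (known good)
        self.backup_block = None   # (block_id, old_data) for a write in flight

    def write(self, block_id, addr, old_data):
        # Preserve the pre-write state so a failed write can be undone.
        self.backup_block = (block_id, old_data)
        self.realtime_table[block_id] = addr
        # Commit: once the write is durable, refresh the backup copy.
        self.backup_table = copy.deepcopy(self.realtime_table)
        self.backup_block = None

    def recover(self):
        # After a crash or power failure: restore the mapping from the
        # backup table (macroscopic recovery) and report any half-finished
        # write so the caller can restore its old data (microscopic recovery).
        self.realtime_table = copy.deepcopy(self.backup_table)
        return self.backup_block
```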

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
E. Bertino ◽  
M. R. Jahanshahi ◽  
A. Singla ◽  
R.-T. Wu

Abstract This paper addresses the problem of efficient and effective data collection and analytics for applications such as civil infrastructure monitoring and emergency management. Such a problem requires the development of techniques by which data acquisition devices, such as IoT devices, can: (a) perform local analysis of collected data; and (b) based on the results of such analysis, autonomously decide on further data acquisition. The ability to perform local analysis is critical to reducing transmission costs and latency, as the results of an analysis are usually smaller in size than the original data. For example, under strict real-time requirements, the analysis results can be transmitted in real time, whereas the actual collected data can be uploaded later. The ability to autonomously decide on further data acquisition enhances scalability and reduces the need for real-time human involvement in data acquisition processes, especially in contexts with critical real-time requirements. The paper focuses on deep neural networks and discusses techniques for supporting transfer learning and pruning, so as to reduce both the time needed to train the networks and the size of the networks deployed at IoT devices. We also discuss approaches based on reinforcement learning techniques that enhance the autonomy of IoT devices.
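To make the pruning step concrete, here is a minimal PyTorch sketch of magnitude-based pruning on a stand-in network; the architecture and the 50% sparsity level are placeholders, not values from the paper.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in network; the paper's actual architectures are not
# reproduced here.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Magnitude-based unstructured pruning: zero out the 50% of weights with
# the smallest absolute value in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

zeros = sum((p == 0).sum().item() for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"pruned weights: {zeros}/{total}")
```

A pruned model of this kind can then be sparsified or quantized further before deployment to the device, which is the size reduction the paper is after.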


2021 ◽  
Vol 11 (11) ◽  
pp. 4874
Author(s):  
Milan Brankovic ◽  
Eduardo Gildin ◽  
Richard L. Gibson ◽  
Mark E. Everett

Seismic data provides essential information in geophysical exploration, both for locating hydrocarbon-rich areas and for fracture monitoring during well stimulation. Because of its high-frequency acquisition rate and dense spatial sampling, distributed acoustic sensing (DAS) has seen increasing application in microseismic monitoring. Given the large volumes of data to be analyzed in real time and the impractical memory and storage requirements, fast compression and accurate interpretation methods are necessary for real-time monitoring campaigns using DAS. In response to these developments in data acquisition, we have created shifted-matrix decomposition (SMD), which compresses seismic data by storing it as pairs of singular vectors coupled with shift vectors. This is achieved by shifting the columns of a matrix of seismic data before applying singular value decomposition (SVD) to extract a pair of singular vectors. SMD serves both denoising and compression, as reconstructing seismic data from its compressed form yields a denoised version of the original data. By analyzing the data in its compressed form, we can also run signal detection and velocity estimation. The developed algorithm can therefore simultaneously compress and denoise seismic data while also analyzing the compressed data to estimate signal presence and wave velocities. To show its efficiency, we compare SMD to local SVD and structure-oriented SVD, similar SVD-based methods used only for denoising seismic data. While the development of SMD is motivated by the increasing use of DAS, SMD can be applied to any seismic data obtained from a large number of receivers. For example, here we present initial applications of SMD to readily available marine seismic data.
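The core SMD step described above can be sketched in a few lines of NumPy: align the columns (traces) with per-column shifts, keep the leading singular pair as the compressed form, and invert the shifts on reconstruction. How the shift vector is estimated is the paper's contribution and is taken as given here; circular shifts are an assumption of this sketch.

```python
import numpy as np

def smd_rank1(data, shifts):
    """One SMD-style step: align the columns of a seismic data matrix
    with per-column shifts, then keep the leading singular pair of the
    aligned matrix as the compressed representation."""
    aligned = np.column_stack(
        [np.roll(data[:, j], -shifts[j]) for j in range(data.shape[1])]
    )
    u, s, vt = np.linalg.svd(aligned, full_matrices=False)
    return u[:, 0] * s[0], vt[0], shifts   # singular pair + shift vector

def smd_reconstruct(u_scaled, v, shifts):
    """Undo the alignment to obtain a denoised reconstruction."""
    approx = np.outer(u_scaled, v)
    return np.column_stack(
        [np.roll(approx[:, j], shifts[j]) for j in range(approx.shape[1])]
    )
```

Storing only the two singular vectors and the shift vector, rather than the full matrix, is what yields the compression; the rank-1 reconstruction is what yields the denoising.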


2020 ◽  
Vol 91 (4) ◽  
pp. 2127-2140 ◽  
Author(s):  
Glenn Thompson ◽  
John A. Power ◽  
Jochen Braunmiller ◽  
Andrew B. Lockhart ◽  
Lloyd Lynch ◽  
...  

Abstract An eruption of the Soufrière Hills Volcano (SHV) on the eastern Caribbean island of Montserrat began on 18 July 1995 and continued until February 2010. Within nine days of the eruption onset, an existing four-station analog seismic network (ASN) was expanded to 10 sites. Telemetered data from this network were recorded, processed, and archived locally using a system developed by scientists from the U.S. Geological Survey (USGS) Volcano Disaster Assistance Program (VDAP). In October 1996, a digital seismic network (DSN) was deployed with the ability to capture larger amplitude signals across a broader frequency range. These two networks operated in parallel until December 2004, with separate telemetry and acquisition systems (analysis systems were merged in March 2001). Although the DSN provided better quality data for research, the ASN featured superior real-time monitoring tools and captured valuable data, including the only seismic data from the first 15 months of the eruption. These successes of the ASN have been rather overlooked. This article documents the evolution of the ASN, the VDAP system, the original data captured, and the recovery and conversion of more than 230,000 seismic events from legacy SUDS, Hypo71, and Seislog formats into a Seisan database with waveform data in miniSEED format. No digital catalog existed for these events, but students at the University of South Florida have classified two-thirds of the 40,000 events that were captured between July 1995 and October 1996. Locations and magnitudes were recovered for ∼10,000 of these events. Real-time seismic amplitude measurement, seismic spectral amplitude measurement, and tiltmeter data were also captured. The result is that the ASN seismic dataset is now more discoverable, accessible, and reusable, in accordance with FAIR data principles. These efforts could catalyze new research on the 1995–2010 SHV eruption. Furthermore, many observatories have data in these same legacy data formats and might benefit from the procedures and codes documented here.


Author(s):  
B. Shameedha Begum ◽  
N. Ramasubramanian

Embedded systems are designed for a variety of applications ranging from hard real-time applications to mobile computing, which demands various types of cache designs for better performance. Since real-time applications place stringent requirements on performance, the role of the cache subsystem assumes significance. Reconfigurable caches meet performance requirements in this context. Existing reconfigurable caches tend to use associativity and size to maximize cache performance. This article proposes a novel approach: a reconfigurable and intelligent data cache (L1) based on replacement algorithms. An intelligent embedded data cache and a dynamically reconfigurable intelligent embedded data cache have been implemented in Verilog 2001 and tested for cache performance. Data collected by running the cache with two different replacement strategies show that the hit rate improves by 40% compared to LRU and 21% compared to MRU for sequential applications, which will significantly improve the performance of embedded real-time applications.
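As a behavioral illustration of policy-level reconfigurability (the article's implementation is in Verilog; this only mirrors the policy choice, not the hardware), the Python sketch below models a cache whose replacement strategy can be switched between LRU and MRU at run time. The interface is hypothetical.

```python
from collections import OrderedDict

class ReconfigurableCache:
    """Behavioral sketch of a cache with a switchable replacement policy."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy          # "LRU" or "MRU", selectable at run time
        self.lines = OrderedDict()    # key -> data, ordered oldest-first

    def access(self, key):
        if key in self.lines:                     # hit
            self.lines.move_to_end(key)           # mark as most recently used
            return True
        if len(self.lines) >= self.capacity:      # miss on a full cache
            # LRU evicts the least recently used line, MRU the most recent.
            self.lines.popitem(last=(self.policy == "MRU"))
        self.lines[key] = object()                # fill from the next level
        return False
```

MRU eviction can beat LRU on sequential scans larger than the cache, where the least recently used line is exactly the one about to be re-referenced, which is consistent with the sequential-workload gains reported above.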


2017 ◽  
Vol 21 (2) ◽  
Author(s):  
Edgar Garcia ◽  
Ivan Amaya ◽  
Rodrigo Correa

This work considers the real-time prediction of physicochemical parameters of a sample heated in a uniform electromagnetic field. The thermal conductivity (K) and the combined density and heat capacity term (ρc) were estimated as a demonstrative example. The sample (with known geometry) was subjected to electromagnetic radiation, generating a uniform, time-constant volumetric heat flow within it. The real temperature profile was simulated by adding white Gaussian noise to the original data obtained from the theoretical model. To minimize the objective function, simulated annealing and genetic algorithms, along with the traditional Levenberg-Marquardt method, were used for comparative purposes. Results show similar findings for all algorithms across three simulation scenarios, as long as the signal-to-noise ratio is at least 30 dB. For practical purposes, this means that the estimation procedure presented here requires both a good experimental design and correctly specified electronic instrumentation. If both requirements are satisfied simultaneously, it is possible to estimate these types of parameters on-line, without the need for an additional experimental setup.
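A minimal sketch of the estimation procedure, assuming a placeholder lumped heating model rather than the paper's actual one: simulate a temperature profile, add white Gaussian noise at roughly 30 dB SNR, and recover K and ρc with SciPy's Levenberg-Marquardt solver. All numeric values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def temperature(t, K, pc, q=1e5, T0=300.0):
    # Placeholder lumped model of a uniformly heated sample: an exponential
    # rise governed by K and the product rho*c.  It stands in for the
    # paper's theoretical model, which is not reproduced here.
    return T0 + (q / K) * (1.0 - np.exp(-K * t / pc))

t = np.linspace(0.0, 200.0, 400)
K_true, pc_true = 0.6, 2.4e6
clean = temperature(t, K_true, pc_true)

# "Measured" profile: model output plus white Gaussian noise at ~30 dB SNR.
noise_std = clean.std() / 10**(30 / 20)
measured = clean + rng.normal(0.0, noise_std, t.size)

residuals = lambda p: temperature(t, *p) - measured
fit = least_squares(residuals, x0=[1.0, 1e6], method="lm")
print("estimated K, rho*c:", fit.x)
```

Swapping `least_squares` for a simulated annealing or genetic optimizer over the same residual function reproduces the comparison the abstract describes.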


Author(s):  
Neng Huang ◽  
Junxing Zhu ◽  
Chaonian Guo ◽  
Shuhan Cheng ◽  
Xiaoyong Li

With the rapid development of the mobile Internet, there is higher demand for the real-time performance, reliability, and availability of information systems, and to prevent possible systemic risks, various business continuity standards and regulatory guidelines have been published, such as the Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Some current research focuses on the standards, methods, management tools, and technical frameworks of business continuity, while other work studies data consistency algorithms in the settings of big data, cloud computing, and distributed storage. However, few researchers have studied how to monitor the data consistency and RPO of production-disaster recovery, or what architecture and technology should be applied in such monitoring. Moreover, in some information systems, due to the complex structure and distribution of data, it is difficult for traditional methods to quickly detect and accurately locate the first erroneous data. Besides, because the production data center (PDC) and the disaster recovery data center (DRDC) are separate, it is difficult to calculate the data difference and RPO between the two centers. This paper first discusses an architecture of remote distributed DRDCs. The architecture keeps the disaster recovery (DR) system always online and the data always readable, and supports real-time monitoring of data availability, consistency, and other related indicators, making the DRDC out-of-the-box in disasters. Second, inspired by blockchain, this paper proposes a method to realize real-time monitoring of data consistency and RPO by building hash chains for the PDC and DRDC. Third, this paper evaluates the hash chain operations in terms of algorithmic time complexity, data consistency, and the validity of the RPO monitoring algorithms. Since a DR system is in fact a kind of distributed system, the proposed approach can also be applied to data consistency detection and data difference monitoring in other distributed systems.
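To illustrate the hash-chain idea, the sketch below chains block hashes for each center and walks the two chains to find the first divergent block; the block granularity and the SHA-256 choice are assumptions of this sketch, not details from the paper.

```python
import hashlib

def hash_chain(blocks):
    """Chain each block's hash to its predecessor's, so any change to an
    earlier block alters every later link (the blockchain-inspired idea)."""
    chain, prev = [], b""
    for block in blocks:
        prev = hashlib.sha256(prev + block).digest()
        chain.append(prev)
    return chain

def first_divergence(chain_pdc, chain_drdc):
    """Compare the two centers' chains link by link; the first mismatch
    locates the first inconsistent block, and the span of mismatched
    links bounds the data difference between PDC and DRDC."""
    for i, (a, b) in enumerate(zip(chain_pdc, chain_drdc)):
        if a != b:
            return i
    return None  # consistent up to the shorter chain's length
```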


2020 ◽  
Vol 16 (3) ◽  
pp. 1-16
Author(s):  
Hong He

In recent years, peer-to-peer (P2P) systems have become a promising paradigm for providing efficient storage service in distributed environments. Although their effectiveness has been proven in many areas, the data consistency problem in P2P systems is still an open issue. This article proposes a novel data consistency model, virtual-peer-based data consistency (VPDC), which introduces a set of virtual peers to provide guaranteed data consistency in decentralized and unstructured P2P systems. The VPDC model can be easily implemented in any P2P system without interfering with data retrieval. A theoretical analysis of VPDC is presented to establish its effectiveness and efficiency, and extensive experiments are conducted to evaluate the performance of the VPDC model in a real-world P2P system. The results indicate that it can significantly improve the data consistency of P2P systems and outperform many similar approaches in various experimental settings.


2014 ◽  
Vol 513-517 ◽  
pp. 1072-1076
Author(s):  
Qiang Gao ◽  
Yuan Li Gu ◽  
Teng Hua Zhang

The identification and correction of real-time traffic data is a basic and critical part of intelligent transportation systems. Through research on a large body of data, the original data is divided into correct data, irregular time-point data, inaccurate detection data, missing data, and event data. The Etkin interpolation algorithm estimates specified missing values by a successive approximation method using high-order polynomials, implemented as successive approximations of multiple linear combinations. This paper selects an improved Etkin interpolation algorithm to correct the traffic data and, as an example, uses data from detector 2001, located 728 meters north of the DongZhiMen Bridge. The algorithm not only considers practicability in engineering practice but also improves the accuracy of real-time data.
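The successive-approximation flavor of the method can be sketched as follows; this mirrors the general scheme of iterated high-order polynomial fits, not the paper's exact improved Etkin algorithm, and the degree and iteration count are placeholders.

```python
import numpy as np

def fill_missing(series, degree=3, iters=20):
    """Successive-approximation gap filling: seed missing points with the
    mean of the known samples, then repeatedly refit a polynomial over the
    series and update the missing values until the estimates settle."""
    y = np.asarray(series, dtype=float)
    x = np.arange(y.size)
    missing = np.isnan(y)
    y[missing] = np.nanmean(y)            # initial guess from known data
    for _ in range(iters):
        coeffs = np.polyfit(x, y, degree) # high-order polynomial fit
        estimate = np.polyval(coeffs, x)
        if np.allclose(y[missing], estimate[missing], atol=1e-6):
            break                         # successive estimates converged
        y[missing] = estimate[missing]    # refine only the gaps
    return y
```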

