Research and Implementation of a Data Storage and Management Method Based on Embedded MCU Data Flash

2013 ◽  
Vol 756-759 ◽  
pp. 1984-1988
Author(s):  
Jian Hui Ma ◽  
Zhi Xue Wang ◽  
Gang Wang ◽  
Yuan Yang Liu ◽  
Yan Qiang Li

This paper presents a method for non-volatile data storage using the MCU's internal data Flash. A designated data Flash sector is divided into multiple data partitions; different partitions store copies of the data from different points in time, and the current partition holds the latest copy. On a read, the method first computes the Flash location of the latest data copy and then reads that address directly. On a write, it first checks whether the target position has already been erased: if not, the new data is written to the next partition and the remaining data in the current partition is copied over to it; if the position has been erased, the data is written directly to the current partition. The method resembles EEPROM-style reads and writes, is easy to operate, provides a simple application interface, and avoids frequent sector erase operations, improving storage efficiency while extending the service life of the MCU's internal data Flash.
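
The read/write logic described above can be sketched as a small simulation. The following Python model is a hedged illustration of the partition-rotation idea, not the paper's implementation; the class and parameter names, partition counts, and the immediate erase are all assumptions (in real flash the erase would be deferred so sector erases stay rare).

```python
ERASED = 0xFF  # value of an erased flash cell; writes may only program erased cells

class FlashEmulator:
    """One data Flash sector split into fixed-size partitions; the current
    partition always holds the latest copy of every data item."""

    def __init__(self, num_partitions=4, items_per_partition=8):
        self.parts = [[ERASED] * items_per_partition
                      for _ in range(num_partitions)]
        self.current = 0  # partition holding the latest copies

    def read(self, index):
        # Read: the latest copy is, by construction, in the current
        # partition, so its address can be computed and read directly.
        return self.parts[self.current][index]

    def write(self, index, value):
        part = self.parts[self.current]
        if part[index] == ERASED:
            # Target cell still erased: program it in place.
            part[index] = value
        else:
            # Cell already programmed: write into the next partition and
            # copy the other live data over, making it the current one.
            nxt = (self.current + 1) % len(self.parts)
            self.parts[nxt] = part.copy()
            self.parts[nxt][index] = value
            # Simplification: erase the old partition immediately. A real
            # implementation would defer this so sector erases stay rare.
            self.parts[self.current] = [ERASED] * len(part)
            self.current = nxt

flash = FlashEmulator()
flash.write(0, 0x12)       # cell erased -> programmed in place
flash.write(0, 0x34)       # cell in use -> rotate to next partition
print(hex(flash.read(0)))  # 0x34
```

Because writes rotate through partitions instead of erasing in place, erase cycles are spread across the sector, which is where the wear-leveling benefit comes from.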

2020 ◽  
Vol 245 ◽  
pp. 04027
Author(s):  
X. Espinal ◽  
S. Jezequel ◽  
M. Schulz ◽  
A. Sciabà ◽  
I. Vukotic ◽  
...  

HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economic. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model, with respect to resource needs, costs and operational complexity. To better understand these models in depth, analyses of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, evaluations of the required technologies took place, in testbed and production environments at small and large scale. We will give an overview of the activities and results of the working group, describe the models, and summarise the results of the technology evaluation, focusing on the impact of storage consolidation in the form of Data Lakes, where the use of streaming caches has emerged as a successful approach to reduce the impact of latency and bandwidth limitations. We will describe the experience and evaluation of these approaches in different environments and usage scenarios. In addition, we will present the results of the analysis and modelling efforts based on data access traces of the experiments.


Data ◽  
2019 ◽  
Vol 4 (3) ◽  
pp. 94 ◽  
Author(s):  
Steve Kopp ◽  
Peter Becker ◽  
Abhijit Doshi ◽  
Dawn J. Wright ◽  
Kaixi Zhang ◽  
...  

Earth observation imagery has traditionally been expensive, difficult to find and access, and has required specialized skills and software to transform it into actionable information. This has limited adoption by the broader science community. Changes in the cost of imagery and in computing technology over the last decade have enabled a new approach to organizing, analyzing, and sharing Earth observation imagery, broadly referred to as a data cube. The vision and promise of image data cubes is to lower these hurdles and expand the user community by making analysis-ready data readily accessible and by providing modern approaches to more easily analyze and visualize the data, empowering a larger community of users to improve their knowledge of place and make better-informed decisions. Image data cubes are large collections of temporal, multivariate datasets, typically consisting of analysis-ready multispectral Earth observation data. Several flavors and variations of data cubes have emerged. To simplify access for end users, we developed a flexible approach that supports multiple data cube styles, referencing images in their existing structure and storage location and enabling fast access, visualization, and analysis from a wide variety of web and desktop applications. We provide here an overview of that approach and three case studies.
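
As a concrete illustration of the data cube concept (not of the authors' system), the sketch below builds a toy multispectral cube in Python/NumPy and slices it along its time and band axes; the shape, band order, and the NDVI computation are illustrative assumptions.

```python
import numpy as np

# Toy cube: 12 monthly scenes x 4 spectral bands x 256 x 256 pixels.
cube = np.random.rand(12, 4, 256, 256).astype(np.float32)

scene_june = cube[5]                 # one timestep, all bands
red, nir = cube[:, 2], cube[:, 3]    # per-band time series (assumed band order)

# Per-pixel NDVI computed across the whole temporal stack at once.
ndvi = (nir - red) / (nir + red + 1e-9)
print(ndvi.shape)  # (12, 256, 256)
```

The point of the structure is that temporal and spectral analysis becomes array slicing rather than per-file bookkeeping.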


2006 ◽  
Vol 07 (02) ◽  
pp. 257-293 ◽  
Author(s):  
MITCHELL D. THEYS ◽  
NOAH B. BECK ◽  
HOWARD JAY SIEGEL ◽  
MICHAEL JURCZYK

The data staging problem involves positioning data within a distributed heterogeneous computing environment so that programs can access the requested data faster. The problem exists because applications constantly need up-to-date information to enable users to make decisions, and these requests for information normally occur over an oversubscribed network. In such a heterogeneous distributed computing environment, each data storage location and intermediate node may differ in the data available, storage limitations, and communication links. Sites in the heterogeneous network request data items, and each request has an associated deadline and priority. This work extends the research presented in [ThT00], where a basic version of the data staging problem with static information was presented. It introduces three new cost criteria and two new bounds on performance, designed taking into account results from [ThT00]. A subset of the possible procedure/cost criterion combinations is evaluated in simulation studies that consider a different priority weighting scheme, a different average number of links used to satisfy each data request, and different network loadings than were considered in [ThT00]. This paper also introduces a variable-time, variable-accuracy approach for using data items with "more desirable" and "less desirable" versions.
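
The abstract does not spell out its cost criteria, so the sketch below only illustrates the general shape of the problem: requests carrying a deadline and a priority, dispatched in order of a guessed cost function that trades the two off. The weighting w, the cost formula, and all request names are hypothetical, not the paper's criteria.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class DataRequest:
    cost: float                        # lower cost = dispatched first
    item: str = field(compare=False)
    deadline: float = field(compare=False)
    priority: int = field(compare=False)

def make_request(item, deadline, priority, now=0.0, w=10.0):
    # Guessed criterion: weight priority by deadline urgency. The closer
    # the deadline and the higher the priority, the lower (better) the cost.
    urgency = 1.0 / max(deadline - now, 1e-6)
    return DataRequest(cost=-w * priority * urgency,
                       item=item, deadline=deadline, priority=priority)

queue = []
heapq.heappush(queue, make_request("sensor_feed", deadline=5, priority=3))
heapq.heappush(queue, make_request("archive_sync", deadline=30, priority=3))
heapq.heappush(queue, make_request("status_log", deadline=5, priority=1))

while queue:
    print(heapq.heappop(queue).item)
# -> sensor_feed, status_log, archive_sync: an urgent low-priority request
#    can overtake a high-priority one with a distant deadline.
```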


2019 ◽  
Vol 8 (3) ◽  
pp. 8124-8126

Provision of highly efficient storage for dynamically growing data is an open problem in data mining. A few research works have been designed for big data storage analytics. However, the storage efficiency of conventional techniques was insufficient, as the problems of data duplication and storage overhead were not addressed. To overcome these limitations, the Tanimoto Regressive Decision Support Based Blake2 Hashing Space Efficient Quotient Data Structure (TRDS-BHSEQDS) model is proposed. Initially, the TRDS-BHSEQDS technique takes a large number of data items as input. It then computes a 512-bit Blake2 hash value for each data item to be stored. Next, it applies the Tanimoto Regressive Decision Support Model (TRDSM), which carries out regression analysis using the Tanimoto similarity coefficient. During this process, the technique identifies relationships between the hash values of data items by computing their Tanimoto similarity coefficient. If the similarity value is '+1', the TRDS-BHSEQDS technique considers the input data to be already stored in BHSEQF memory and does not store it again. TRDS-BHSEQDS thereby enhances the storage efficiency of big data compared with state-of-the-art works. The performance of the TRDS-BHSEQDS technique is measured in terms of storage efficiency, time complexity, space complexity, and storage overhead with respect to different numbers of input big data items.
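
Two of the building blocks named above, 512-bit Blake2 hashing and the Tanimoto similarity test, can be sketched directly in Python; the regression/decision-support layer and the quotient data structure are omitted, and the plain dict standing in for BHSEQF memory is an assumption.

```python
import hashlib

def blake2_512(data: bytes) -> int:
    # 512-bit Blake2b digest, returned as an integer for bitwise operations.
    return int.from_bytes(hashlib.blake2b(data, digest_size=64).digest(), "big")

def tanimoto(a: int, b: int) -> float:
    # Tanimoto coefficient over the set bits of two digests; equals 1.0
    # exactly when the digests are identical.
    common = bin(a & b).count("1")
    return common / (bin(a).count("1") + bin(b).count("1") - common)

stored = {}  # digest -> data; a plain dict standing in for BHSEQF memory

def store(data: bytes) -> bool:
    h = blake2_512(data)
    if any(tanimoto(h, d) == 1.0 for d in stored):
        return False  # similarity is +1: duplicate, do not store again
    stored[h] = data
    return True

print(store(b"record-1"))  # True  (new data, stored)
print(store(b"record-1"))  # False (duplicate detected)
```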


2021 ◽  
Author(s):  
Hans Olav Hygen ◽  
Abigail Louise Aller ◽  
Anette Lauen Borg ◽  
Line Båserud ◽  
Louise Oram ◽  
...  

The meteorological observation networks are in rapid change. Among other trends, these changes include increased frequency of observations, increased spatial resolution of observations, and increased heterogeneity of observation platforms. These changes challenge the current data storage and quality control. MET Norway has implemented a new data store, ODA, to be able to receive significantly more data.

A significant challenge is that the current quality control system does not scale to this new world of observations. It was not built to be modular and thus requires significant work to integrate improvements.

MET Norway is rising to the challenge of the new observation structure and storage by renewing both the handling of observational networks and the quality control system. Previously there were strict criteria on how MET Norway should handle data from an observational station; this is changing with the emergence of new, cheap observational platforms. To accommodate this, we are structuring station handling as a hierarchy, where some stations have fully populated metadata and are treated at the highest level, while others carry less information, down to unknown stations with unknown setups, e.g. Netatmo.

The new quality control system will be modular, to ensure that different parts can be changed and upgraded. One major module of the system is an in-house developed library for spatial quality control, Titan (presented at EMS 2019).

Unlike the present quality control, which is an entity separate from the data storage, CONFIDENT will be built to use ODA as its data store, ensuring the best information is available to users and to CONFIDENT at all times. We are also working on integrating other software that performs quality control of the data, e.g. for assimilation.

The project is planned to start in the autumn of 2021 and run for three years. Spring 2021 was used to map relevant activities and modules as the foundation for the planned development of the new system. The plan is not to replace the current quality system in one go, but to start implementing the different modules in 2022 and phase out the current system over the three-year project period.


Author(s):  
Igor Boyarshin ◽  
Anna Doroshenko ◽  
Pavlo Rehida

The article describes a new method for improving the efficiency of systems that store and provide access to data shared by many users, by utilizing replication. Existing methods of load balancing in data storage systems are described, namely RR (round-robin) and WRR (weighted round-robin). A new method of balancing requests among multiple data storage nodes is proposed that can adjust to the intensity of the input request stream in real time while utilizing disk space efficiently.
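
For context, here is a minimal sketch of the classic, static weighted round-robin baseline mentioned above; the proposed method differs in that it adapts weights to request intensity in real time, which this sketch does not attempt. Node names and weights are hypothetical.

```python
import itertools

def weighted_round_robin(nodes):
    # Classic static WRR: each node appears in the dispatch cycle a number
    # of times proportional to its weight.
    return itertools.cycle([name for name, w in nodes for _ in range(w)])

# Hypothetical storage nodes with capacity-based weights.
dispatch = weighted_round_robin([("node-a", 3), ("node-b", 2), ("node-c", 1)])
print([next(dispatch) for _ in range(6)])
# ['node-a', 'node-a', 'node-a', 'node-b', 'node-b', 'node-c']
```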


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Haiyan Zhao ◽  
Shuangxi Li

To enhance load balance in the big data storage process and improve storage efficiency, an intelligent classification method for low-occupancy big data based on a grid index is studied. A low-occupancy big data classification platform was built: the infrastructure layer was designed using grid technology, with grid basic services provided through grid system management nodes and grid public service nodes, and grid application services provided by local resource servers and enterprise grid application services. On top of the server nodes in the infrastructure layer, the basic management layer provides load forecasting, image backup, and other functional services. The application interface layer includes the interfaces required to connect the platform with each server node, and the advanced access layer provides the human-computer interaction interface for operating the platform. Finally, based on the obtained main structure, a deep belief network (DBN) is constructed by stacking several RBM layers; new samples are expanded by adding adjacent values to obtain their means, and the DBN is used to classify them. The experimental results show that the load of different virtual machines during low-occupancy big data storage is less than 40%, and the load of each virtual machine is basically the same, indicating that this method can enhance load balance in the data storage process and improve storage efficiency.
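
The sample-expansion step is described only briefly, so the sketch below shows one plausible reading: appending the means of adjacent feature values to each sample before classification. Both this interpretation and the function name are assumptions.

```python
import numpy as np

def expand_with_neighbor_means(x: np.ndarray) -> np.ndarray:
    # Append to each feature vector the means of adjacent feature pairs,
    # one plausible reading of "adding adjacent values to obtain the mean".
    neighbor_means = (x[:, :-1] + x[:, 1:]) / 2.0
    return np.hstack([x, neighbor_means])

samples = np.array([[1.0, 3.0, 5.0],
                    [2.0, 4.0, 8.0]])
print(expand_with_neighbor_means(samples))
# [[1. 3. 5. 2. 4.]
#  [2. 4. 8. 3. 6.]]
```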


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xiaoyue Qin ◽  
Ruwei Huang ◽  
Huifeng Fan

Fully homomorphic encryption (FHE) supports arbitrary computations on ciphertexts without decryption, protecting users' privacy. However, current research on FHE still has shortcomings. For example, the NTRU-based FHE scheme constructed using the approximate-eigenvector method requires complex matrix multiplications, and the power-of-two cyclotomic ring cannot prevent subfield attacks. To address these problems, this paper proposes an NTRU-based FHE scheme constructed over a power-of-prime cyclotomic ring, with the following improvements: (1) the power-of-prime cyclotomic ring is immune to subfield attacks; (2) complex matrix multiplications are replaced with matrix-vector multiplications, modifying the ciphertext form and decryption structure so as to gain advantages in storage, transmission, and computation; (3) single instruction multiple data (SIMD) technology is introduced, and homomorphic operations are executed through the Chinese remainder theorem, further improving the scheme's computation and storage efficiency. The ciphertext of the scheme is a vector, and no key switching is required for homomorphic operations. In addition, the scheme can eliminate the decisional small polynomial ratio (DSPR) assumption under certain conditions, relying only on the ring learning with errors (RLWE) assumption. The scheme is provably secure against chosen-plaintext attacks (IND-CPA) in the standard model. Compared with similar schemes, the proposed scheme improves efficiency by a factor of at least lφ(x)/d + 1 and quadratically decreases the noise growth rate.
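
The CRT-based SIMD batching in improvement (3) can be illustrated over plain integers: several plaintext slots are packed into one value so that a single arithmetic operation acts slot-wise. Real schemes work over cyclotomic rings and on ciphertexts, but the slot arithmetic is analogous; the moduli below are toy values, not the scheme's parameters.

```python
from math import prod

def crt_pack(slots, moduli):
    # Pack one value per slot into a single integer via the CRT formula
    # x = sum(s_i * M_i * (M_i^-1 mod m_i)) mod M, with M_i = M / m_i.
    M = prod(moduli)
    x = 0
    for s, m in zip(slots, moduli):
        Mi = M // m
        x += s * Mi * pow(Mi, -1, m)
    return x % M

def crt_unpack(x, moduli):
    return [x % m for m in moduli]

moduli = [5, 7, 11]        # pairwise coprime toy "slots"
a = crt_pack([1, 2, 3], moduli)
b = crt_pack([4, 5, 6], moduli)

# One integer multiplication acts slot-wise on all packed values (SIMD):
print(crt_unpack(a * b, moduli))
# [4, 3, 7] = [1*4 mod 5, 2*5 mod 7, 3*6 mod 11]
```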

