Diverse Mobile System for Location-Based Mobile Data

2018 ◽  
Vol 2018 ◽  
pp. 1-17
Author(s):  
Qing Liao ◽  
Haoyu Tan ◽  
Wuman Luo ◽  
Ye Ding

The value of large amounts of location-based mobile data has received wide attention in many research fields, including human behavior analysis, urban transportation planning, and various location-based services. Nowadays, both scientific and industrial communities are encouraged to collect as much location-based mobile data as possible, which brings two challenges: (1) how to efficiently process queries over big location-based mobile data and (2) how to reduce the cost of storage services, because it is too expensive to store several exact data replicas for fault tolerance. So far, several dedicated storage systems have been proposed to address these issues. However, they do not work well when the ranges of queries vary widely. In this work, we design a storage system based on a diverse replica scheme which can not only improve query processing efficiency but also reduce the cost of storage space. To the best of our knowledge, this is the first work to investigate data storage and processing in the context of big location-based mobile data. Specifically, we conduct in-depth theoretical and empirical analysis of the trade-offs between different spatial-temporal partitioning and data encoding schemes. Moreover, we propose an effective approach to select an appropriate set of diverse replicas, which is optimized for the expected query loads while conforming to the given storage space budget. The experimental results show that using diverse replicas can significantly improve overall query performance and that the proposed algorithms for the replica selection problem are both effective and efficient.
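The abstract does not give the selection algorithm itself, but the described problem (choosing replicas to maximize expected query benefit under a storage budget) is a knapsack-style optimization. A minimal greedy sketch, with illustrative configuration names and an assumed benefit model:

```python
# Hypothetical sketch: greedily pick diverse replica configurations that
# maximize expected query benefit per byte under a storage budget.
# The candidate names and benefit values are assumptions for illustration,
# not the paper's actual algorithm or data.

def select_replicas(candidates, budget):
    """candidates: list of (name, size_bytes, expected_benefit) tuples."""
    chosen, used = [], 0
    # Consider configurations in order of benefit density (benefit per byte).
    for name, size, benefit in sorted(
            candidates, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen, used

configs = [
    ("time-partitioned, delta-encoded", 40, 120.0),
    ("space-partitioned, raw",          90, 150.0),
    ("grid-partitioned, compressed",    30,  60.0),
]
print(select_replicas(configs, budget=100))
```

A greedy density heuristic like this is not optimal for general knapsack instances, which is presumably why the paper analyzes the selection problem in depth.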

2014 ◽  
Vol 556-562 ◽  
pp. 6179-6183
Author(s):  
Zhi Gang Chai ◽  
Ming Zhao ◽  
Xiao Yu

With the rapid development of information technology, the extensive use of cloud computing is promoting technological change in the IT industry. Cloud storage is also one solution to the problem of storing data volumes that are traditionally large and highly redundant. Using cloud computing in a storage system connects users' data with network clients via the Internet. That is to say, it not only satisfies large data storage space requirements, but also greatly reduces the cost of the storage system. But in the application of cloud storage there are still many problems to be solved, some of which have, to an extent, hindered the development of cloud storage. Among these issues, the most pressing one is cloud storage security. The following passages discuss the problem and propose a solution to it.


Author(s):  
Igor Boyarshin ◽  
Anna Doroshenko ◽  
Pavlo Rehida

The article describes a new method of improving the efficiency of systems that store and provide access to data shared by many users by utilizing replication. Existing methods of load balancing in data storage systems are described, namely round-robin (RR) and weighted round-robin (WRR). A new method of balancing requests among multiple data storage nodes is proposed that can adjust to the intensity of the input request stream in real time while utilizing disk space efficiently.
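For reference, the WRR baseline the article compares against can be sketched in a few lines: each node is dispatched requests in proportion to its weight. Node names and weights below are illustrative assumptions.

```python
# Minimal sketch of weighted round-robin (WRR) dispatch across storage
# nodes. Each node appears in the rotation as many times as its weight,
# so it receives a proportional share of requests.
from itertools import cycle

def wrr_schedule(nodes):
    """nodes: list of (name, weight); returns an infinite dispatch iterator."""
    expanded = [name for name, weight in nodes for _ in range(weight)]
    return cycle(expanded)

sched = wrr_schedule([("node-a", 3), ("node-b", 1)])
print([next(sched) for _ in range(8)])
# node-a receives 3 of every 4 requests
```

The limitation the article targets is visible here: the weights are static, whereas the proposed method adapts the shares to the observed request intensity at runtime.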


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 288-301
Author(s):  
G. Sujatha ◽  
Dr. Jeberson Retna Raj

Data storage is one of the significant cloud services available to cloud users. Since the volume of outsourced information is growing extremely fast, there is a need to implement data deduplication techniques in cloud storage for efficient utilization. Cloud storage supports all kinds of digital data, such as text, audio, video, and images. In a hash-based deduplication system, a cryptographic hash value is calculated for all data, irrespective of type, and stored in memory for future reference. Using these hash values alone, duplicate copies can be identified. The problem in this existing scenario is the size of the hash table: to find a duplicate copy, all hash values must be checked in the worst case, irrespective of data type. At the same time, not all kinds of digital data suit the same hash table structure. In this study, we propose an approach that maintains multiple hash tables for different kinds of digital data. Having a dedicated hash table for each digital data type improves the search time for duplicate data.
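The per-type idea can be sketched directly: keep one hash table per data type so a duplicate lookup only touches hashes of the matching type. The type labels and in-memory dictionaries below are simplifying assumptions; the paper's actual table structures may differ per type.

```python
# Sketch of per-type hash tables for deduplication: each digital data
# type gets its own table, so a lookup scans only hashes of that type
# instead of one global table covering all types.
import hashlib

tables = {"text": {}, "image": {}, "audio": {}, "video": {}}

def store(data: bytes, dtype: str) -> bool:
    """Return True if data was new, False if it was a duplicate."""
    digest = hashlib.sha256(data).hexdigest()
    table = tables[dtype]
    if digest in table:
        return False            # duplicate: reference the existing copy
    table[digest] = data        # in practice, a pointer to the stored blob
    return True

print(store(b"hello", "text"))   # True  (new)
print(store(b"hello", "text"))   # False (duplicate found in text table)
```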


2020 ◽  
Author(s):  
James L. Banal ◽  
Tyson R. Shepherd ◽  
Joseph Berleant ◽  
Hellen Huang ◽  
Miguel Reyes ◽  
...  

DNA is an ultra-high-density storage medium that could meet exponentially growing worldwide demand for archival data storage if DNA synthesis costs declined sufficiently and random access of files within exabyte-to-yottabyte-scale DNA data pools were feasible. To overcome the second barrier, here we encapsulate data-encoding DNA file sequences within impervious silica capsules that are surface-labeled with single-stranded DNA barcodes. Barcodes are chosen to represent file metadata, enabling efficient and direct selection of sets of files with Boolean logic. We demonstrate random access of image files from an image database using fluorescence sorting with a selection sensitivity of 1 in 10^6 files, which thereby enables 1 in 10^(6N) per N optical channels. Our strategy thereby offers retrieval of random file subsets from exabyte and larger-scale long-term DNA file storage databases, offering a scalable solution for random access of archival files in massive molecular datasets.
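The Boolean selection over barcode metadata is conceptually simple even though the physical realization (fluorescence sorting) is not. A toy sketch of the logic, with file names and barcode labels invented purely for illustration:

```python
# Conceptual sketch of barcode-based file selection with Boolean logic:
# each encapsulated file carries a set of metadata barcodes, and a query
# selects files whose barcodes contain all required labels (AND) and
# none of the excluded ones (NOT).

pool = {
    "file1": {"cat", "outdoor"},
    "file2": {"cat", "indoor"},
    "file3": {"dog", "outdoor"},
}

def select(pool, must=(), must_not=()):
    return sorted(
        name for name, codes in pool.items()
        if set(must) <= codes and not (set(must_not) & codes)
    )

print(select(pool, must=["cat"], must_not=["indoor"]))  # ['file1']
```

In the paper this predicate is evaluated physically, by sorting capsules on the fluorescent probes hybridized to their surface barcodes, rather than in software.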


2011 ◽  
Vol 14 (3) ◽  
pp. 412-422 ◽  
Author(s):  
SUJIN YANG ◽  
HWAJIN YANG ◽  
BARBARA LUST

This study investigated whether especially efficient early utilization of executive functioning in young bilinguals would transcend potential cultural benefits. To dissociate potential cultural effects from bilingualism, four-year-old U.S. Korean–English bilingual children were compared to three monolingual groups: English and Korean monolinguals in the U.S.A. and another Korean monolingual group in Korea. Overall, bilinguals were the most accurate and fastest among all groups. The bilingual advantage was stronger than that of culture in the speed of attention processing, in inverse processing efficiency independent of possible speed–accuracy trade-offs, and in the network of executive control for conflict resolution. A culture advantage favoring Korean monolinguals from Korea was found in accuracy, but at the cost of longer response times.


2020 ◽  
Vol 245 ◽  
pp. 04008
Author(s):  
Andreas-Joachim Peters ◽  
Michal Kamil Simon ◽  
Elvin Alin Sindrilaru

The storage group of CERN IT operates more than 20 individual EOS [1] storage services with a raw data storage volume of more than 340 PB. Storage space is a major cost factor in HEP computing, and the planned future LHC Runs 3 and 4 will increase storage space demands by at least an order of magnitude. A cost-effective storage model providing durability is erasure coding (EC) [2]. The decommissioning of CERN's remote computer center (Wigner/Budapest) allows a reconsideration of the currently configured dual-replica strategy, where EOS provides one replica in each computer center. EOS allows one to configure EC on a per-file basis and exposes four different redundancy levels with single, dual, triple, and fourfold parity to select different qualities of service at variable cost. This paper highlights tests which have been performed to migrate files on a production instance from dual replica to various EC profiles. It discusses performance and operational impact, and highlights various policy scenarios to select the best file layout with respect to I/O patterns, file age, and file size. We conclude with the current status and future optimizations, an evaluation of cost savings, and a discussion of an erasure-coded EOS setup as a possible tape storage replacement.
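The cost argument for EC over dual replica comes down to the ratio of raw to usable bytes. A back-of-the-envelope sketch; the stripe counts below are illustrative examples, not EOS's exact layouts:

```python
# Storage overhead of dual replica vs erasure coding with k data stripes
# and p parity stripes: an EC layout stores (k + p) / k raw bytes per
# usable byte, versus 2x for a dual replica.

def overhead(data_stripes: int, parity_stripes: int) -> float:
    """Raw bytes stored per usable byte."""
    return (data_stripes + parity_stripes) / data_stripes

print(f"dual replica : {overhead(1, 1):.2f}x")   # 2.00x
print(f"EC 10+2      : {overhead(10, 2):.2f}x")  # 1.20x
print(f"EC 10+4      : {overhead(10, 4):.2f}x")  # 1.40x
```

At the 340 PB scale quoted in the paper, moving from 2.0x to around 1.2x-1.4x overhead translates into savings on the order of a hundred petabytes of raw capacity, at the price of reconstruction I/O on reads after failures.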


2018 ◽  
Vol 10 (1) ◽  
pp. 34-42
Author(s):  
Muzafar Ahmad Bhat ◽  
Amit Jain

The Internet of Things (IoT) has emerged as a novel prospect in recent years. This has presented the notion that all devices, such as smartphones, public services, transportation facilities, and home appliances, can be viewed as data-creating devices. The cloud computing framework for IoT highlighted in this work has the potential to act as a data storage system supporting IoT devices, used to improve data processing efficiency and offer a significant competitive advantage to IoT applications. The purpose of this study is to examine the work done on IoT using big data as well as data mining methods, in order to identify areas that must be explored further. Some focus is placed on data mining technologies integrated with IoT technologies for decision-making support and system optimization, as data mining involves discovering novel, interesting, and potentially useful patterns from data and applying algorithms to extract hidden information.


2013 ◽  
Vol 811 ◽  
pp. 520-524
Author(s):  
Zhu Min Chen

With the growing demand for mass data storage, cloud storage has become an inevitable trend in the development of storage. In order to improve the efficiency and stability of cloud storage systems, this paper presents an optimization algorithm for cloud storage. Nodes are partitioned into idle zones and a view of node resource usage is maintained, while node-level and global scheduling methods are integrated, which can improve the cost-effectiveness and stability of a cloud storage system.
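The abstract gives no pseudocode, but the combination it describes (prefer nodes in the idle zone, fall back to a global pick by resource usage) can be sketched as follows. The threshold and the usage metric are assumptions, not taken from the paper.

```python
# Hedged sketch of idle-zone-first node selection: choose among nodes
# below an idleness threshold when any exist, otherwise fall back to a
# global pick by lowest resource usage.

def pick_node(nodes, idle_threshold=0.3):
    """nodes: list of (name, usage in [0, 1]); returns the chosen name."""
    idle = [n for n in nodes if n[1] < idle_threshold]
    candidates = idle if idle else nodes   # idle zone first, else global
    return min(candidates, key=lambda n: n[1])[0]

print(pick_node([("n1", 0.9), ("n2", 0.2), ("n3", 0.5)]))  # n2
```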


2020 ◽  
Vol 15 (1) ◽  
pp. 143-156
Author(s):  
Jean-François Biasse ◽  
Benjamin Pring

In this paper we provide a framework for applying classical search and preprocessing to quantum oracles for use with Grover's quantum search algorithm, in order to lower the quantum circuit-complexity of Grover's algorithm for single-target search problems. This has the effect (for certain problems) of reducing a portion of the polynomial overhead contributed by the implementation cost of quantum oracles, and can be used to provide either strict improvements or advantageous trade-offs in circuit-complexity. Our results indicate that it is possible for quantum oracles for certain single-target preimage search problems to reduce the quantum circuit-size from $O\left(2^{n/2}\cdot mC\right)$ (where C originates from the cost of implementing the quantum oracle) to $O\left(2^{n/2}\cdot m\sqrt{C}\right)$ without the use of quantum RAM, whilst also slightly reducing the number of required qubits. This framework captures a previous optimisation of Grover's algorithm using preprocessing [21] applied to cryptanalysis, providing new asymptotic analysis. We additionally provide insights and asymptotic improvements on recent cryptanalysis [16] of SIKE [14] via Grover's algorithm, demonstrating that the speedup applies to this attack and impacting upon the quantum security estimates [16] incorporated into the SIKE specification [14].
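The headline saving can be summarized in one line, using only the notation already in the abstract (the framing of classical preprocessing absorbing part of the oracle cost is a paraphrase, not the paper's derivation):

```latex
% Grover circuit-size for single-target preimage search, before and
% after classical search and preprocessing are applied to the oracle:
\[
  O\!\left(2^{n/2}\cdot m\,C\right)
  \;\longrightarrow\;
  O\!\left(2^{n/2}\cdot m\,\sqrt{C}\right)
\]
% i.e. the oracle-implementation factor C in the polynomial overhead is
% reduced to \sqrt{C}, without quantum RAM, while the 2^{n/2} query
% count of Grover's algorithm is unchanged.
```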

