data store
Recently Published Documents

TOTAL DOCUMENTS: 286 (FIVE YEARS: 84)
H-INDEX: 11 (FIVE YEARS: 3)

2021 ◽  
Vol 4 ◽  
pp. 1-5
Author(s):  
Bárbara Cubillos ◽  
Ángela Ortíz ◽  
Germán Aguilera ◽  
Sergio Rozas ◽  
Claudio Reyes ◽  
...  

Abstract. The digital cartographic coverage at 1:25,000 that the Military Geographic Institute is producing follows international standards, so that it constitutes a standardized, interoperable tool for the various areas of activity in Chile. In this context, the ISO TC 211 standards and the TDS (Topographic Data Store) data model developed by the National Geospatial-Intelligence Agency (NGA) are being used. Beyond adopting these standards, efforts have focused from an early stage on determining the quality of this product, starting with a study of a methodology to measure positional accuracy. The method defined conforms to the NSSDA test: checkpoints measured in the field specifically for this control are used, and out-of-range points are eliminated under the Chauvenet criterion. Finally, the positional accuracy is declared in the metadata.
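The abstract does not give the underlying formulas, but both steps have standard textbook forms. The Python sketch below applies the Chauvenet criterion to radial checkpoint errors and then computes the NSSDA horizontal accuracy statistic (1.7308 × RMSE_r at 95% confidence, assuming comparable RMSE in x and y); treat it as an illustration of the method described, not the Institute's actual implementation.

```python
import math

def chauvenet_filter(errors):
    """Discard observations whose deviation from the mean is improbable
    under the Chauvenet criterion (expected count of such deviations < 0.5)."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    kept = []
    for e in errors:
        z = abs(e - mean) / std
        p = math.erfc(z / math.sqrt(2))  # two-sided tail probability
        if n * p >= 0.5:
            kept.append(e)
    return kept

def nssda_horizontal_accuracy(pairs):
    """pairs: [(dx, dy), ...] map-minus-checkpoint coordinate differences.
    Returns NSSDA horizontal accuracy at 95% confidence."""
    radial_sq = [dx * dx + dy * dy for dx, dy in pairs]
    rmse_r = math.sqrt(sum(radial_sq) / len(radial_sq))
    return 1.7308 * rmse_r  # NSSDA factor when RMSE_x ~= RMSE_y
```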


2021 ◽  
Author(s):  
Rafał Skowroński ◽  
Jerzy Brzeziński

Abstract. Decentralized, open-access blockchain systems opened up new, exciting possibilities, all without reliance on trusted third parties. Regardless of the employed consensus protocol, the overall security, decentralization and effectiveness of such systems largely depend on properly structured incentives. Indeed, as previously observed by Babaioff et al., Bitcoin-like systems often lack some of these. Specifically, current blockchain systems fail to incentivize one of their crucial aspects: the underlying data exchange. As we argue, proper incentivization of that layer could lead to lower transaction confirmation times and improved finalization guarantees, while discouraging malicious behaviours such as block-withholding attacks. Indeed, incentivizing the data-exchange layer allows the system to remain operational when all agents, including routing nodes, are assumed to be rational. In this work, focusing on the problem of Sybil-proof data exchange, we revisit previous approaches, showcase their shortcomings, and put forward the first information-exchange framework with integrated routing and reward-function mechanics that is provably secure in thwarting Sybil nodes in 1-connected or eclipsed networks. The framework neither requires nor assumes any constraints on the network's topology (the network is modelled as a randomly connected graph) and rewards information propagators through a system-intrinsic virtual asset maintained by the decentralized state machine. The proposal, while storage- and transmission-efficient, is suitable for rewarding not only consensus-related datagrams (both data blocks and transactions) but also consensus-extrinsic information, thus facilitating a universal Sybil-proof data-exchange apparatus, provably valid under the assumption of a data store whose non-malleability emerges as time approaches infinity. Our research was conducted under two scenarios: with the round leader known, and unknown, in advance of each transactional round.
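For intuition on why propagation rewards must be designed against Sybil insertion, consider the toy scheme below, loosely in the spirit of the red-balloons construction of Babaioff et al. rather than the framework proposed in this paper: a relay chain is compensated only if its length stays within a cap, so padding the chain with fake identities risks forfeiting the whole reward. All names and parameters are invented for illustration.

```python
H = 3          # maximum compensated chain length (hypothetical parameter)
PER_HOP = 1.0  # reward per relayer (hypothetical)

def chain_reward(chain):
    """Return {node: reward} for a propagation chain, or {} if too long."""
    if len(chain) > H:
        return {}
    return {node: PER_HOP for node in chain}

honest = ["A", "B", "C"]
sybil = ["A", "B", "B'", "C"]    # B inserts a fake identity B'
print(chain_reward(honest))      # every relayer is paid
print(chain_reward(sybil))       # chain exceeds H: nobody is paid
```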


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-31
Author(s):  
Nouraldin Jaber ◽  
Christopher Wagner ◽  
Swen Jacobs ◽  
Milind Kulkarni ◽  
Roopsha Samanta

The last decade has sparked several valiant efforts in deductive verification of distributed agreement protocols such as consensus and leader election. Oddly, there have been far fewer verification efforts that go beyond the core protocols and target applications that are built on top of agreement protocols. This is unfortunate, as agreement-based distributed services such as data stores, locks, and ledgers are ubiquitous and potentially permit modular, scalable verification approaches that mimic their modular design. We address this need for verification of distributed agreement-based systems through our novel modeling and verification framework, QuickSilver, that is not only modular, but also fully automated. The key enabling feature of QuickSilver is our encoding of abstractions of verified agreement protocols that facilitates modular, decidable, and scalable automated verification. We demonstrate the potential of QuickSilver by modeling and efficiently verifying a series of tricky case studies, adapted from real-world applications, such as a data store, a lock service, a surveillance system, a pathfinding algorithm for mobile robots, and more.
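The modular idea can be caricatured in a few lines: the application is written against an abstract agreement interface whose guarantees stand in for an already-verified protocol, so only the application logic remains to be checked. The Python sketch below is a hypothetical illustration of that layering, not QuickSilver's actual modeling language.

```python
from abc import ABC, abstractmethod

class Agreement(ABC):
    """Abstraction of a verified agreement protocol: every replica is
    assumed to observe the same totally ordered sequence of decisions."""
    @abstractmethod
    def decide(self, proposal): ...

class SingleLeader(Agreement):
    def decide(self, proposal):
        return proposal  # trivial stand-in for a verified protocol

class KeyValueStore:
    """A replicated data store built on the agreement abstraction; its
    safety argument relies only on the interface's ordering guarantee."""
    def __init__(self, agreement: Agreement):
        self.agreement = agreement
        self.state = {}

    def put(self, key, value):
        op, k, v = self.agreement.decide(("put", key, value))
        self.state[k] = v

store = KeyValueStore(SingleLeader())
store.put("x", 1)
```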


2021 ◽  
Vol 8 (3) ◽  
pp. 1-25
Author(s):  
Soheil Behnezhad ◽  
Laxman Dhulipala ◽  
Hossein Esfandiari ◽  
Jakub Łącki ◽  
Vahab Mirrokni ◽  
...  

We introduce the Adaptive Massively Parallel Computation (AMPC) model, which is an extension of the Massively Parallel Computation (MPC) model. At a high level, the AMPC model strengthens the MPC model by storing all messages sent within a round in a distributed data store. In the following round, all machines are provided with random read access to the data store, subject to the same constraints on the total amount of communication as in the MPC model. Our model is inspired by the previous empirical studies of distributed graph algorithms [8, 30] using MapReduce and a distributed hash table service [17]. This extension allows us to give new graph algorithms with much lower round complexities compared to the best-known solutions in the MPC model. In particular, in the AMPC model we show how to solve maximal independent set in O(1) rounds and connectivity/minimum spanning tree in O(log log_{m/n} n) rounds, both using O(n^δ) space per machine for constant δ < 1. In the same memory regime for MPC, the best-known algorithms for these problems require poly(log n) rounds. Our results imply that the 2-Cycle conjecture, which is widely believed to hold in the MPC model, does not hold in the AMPC model.
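A toy simulation may help make the adaptivity concrete: writes from round r are buffered into a shared store, and in round r+1 every machine may issue point reads whose addresses depend on earlier reads in the same round, which plain MPC forbids. The structure and names below are illustrative only.

```python
class AMPCStore:
    def __init__(self):
        self.current, self.previous = {}, {}

    def write(self, key, value):
        self.current[key] = value        # buffered for the next round

    def read(self, key):
        return self.previous.get(key)    # random access to last round's data

    def end_round(self):
        self.previous, self.current = self.current, {}

# Example: pointer chasing, where each read depends on the previous one,
# exactly the within-round adaptivity that MPC disallows.
store = AMPCStore()
for node, parent in {1: 2, 2: 3, 3: 3}.items():
    store.write(node, parent)
store.end_round()
node = 1
while store.read(node) != node:
    node = store.read(node)
print(node)  # root: 3
```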


2021 ◽  
Author(s):  
Mark Jessell ◽  
Jiateng Guo ◽  
Yunqiang Li ◽  
Mark Lindsay ◽  
Richard Scalzo ◽  
...  

Abstract. Unlike some other well-known challenges such as facial recognition, where machine-learning and inversion algorithms are widely developed, the geosciences suffer from a lack of large, labelled datasets that can be used to validate or train robust machine-learning and inversion schemes. Publicly available 3D geological models are far too restricted, in both number and the range of geological scenarios, to serve these purposes. For inverting geophysical data this problem is further exacerbated: in most cases real geophysical observations result from unknown 3D geology, and synthetic test datasets are often neither particularly geological nor geologically diverse. To overcome these limitations, we have used the Noddy modelling platform to generate one million models, which represent the first publicly accessible massive training set of 3D geology and the resulting gravity and magnetic datasets. This model suite can be used to train machine-learning systems and to provide comprehensive test suites for geophysical inversion. We describe the methodology for producing the model suite and discuss the opportunities such a suite affords, as well as its limitations and how this resource can be grown and accessed.


2021 ◽  
Author(s):  
Éder Porto Ferreira Alves ◽  
Paul R. Burley ◽  
João Alexandre Peschanski

This chapter provides a step-by-step process for large-scale contributions of articles from scholarly publications to Wikidata, a collaborative data-store project of the Wikimedia Foundation. Tools and processes in Wikidata, Zotero, and Google Sheets in particular are described; they relate both to the Wikidata platform and to standard spreadsheet programs. The case of the Brazilian journal Anais do Museu Paulista is used to illustrate a process that can then be replicated with other publications and in other contexts.
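The chapter's toolchain is Zotero and Google Sheets; one common final step for bulk uploads (an assumption here, not named in the abstract) is to emit QuickStatements commands that create one Wikidata item per article. A minimal Python sketch, using real Wikidata identifiers but a hypothetical CSV layout and a caller-supplied journal QID:

```python
import csv

# Real Wikidata identifiers used below: P31 = instance of,
# Q13442814 = scholarly article, P1476 = title (monolingual, "pt" here),
# P577 = publication date (year precision /9), P1433 = published in.
def rows_to_quickstatements(csv_path, journal_qid):
    """Emit QuickStatements v1 commands, one new item per article row.
    Expects 'title' and 'year' columns; adapt to your sheet's layout."""
    out = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            out.append("CREATE")
            out.append("LAST\tP31\tQ13442814")
            out.append('LAST\tP1476\tpt:"%s"' % row["title"])
            out.append("LAST\tP577\t+%s-00-00T00:00:00Z/9" % row["year"])
            out.append("LAST\tP1433\t%s" % journal_qid)
    return "\n".join(out)
```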


Author(s):  
Mohammed M. Sultan ◽  
Amer T. Saeed ◽  
Ahmed M. Sana

Securing property plays a crucial role in human life. Therefore, an adaptive multilevel wireless security system (ML-WSS) based on the Internet of Things (IoT) is proposed to monitor and secure a given place. ML-WSS consists of hardware and software components: a set of sensors, a Wi-Fi module, and an operation and monitoring mobile application (OMM). The OMM application is designed to remotely monitor and control the proposed system over the Internet, using the ThingSpeak cloud as a data store. The proposed scheme divides the secured zone into three regions (levels): a low-risk region (LRR) as level 1, a moderate-risk region (MRR) as level 2, and a high-risk region (HRR) as level 3. Each level may contain one or more sensors; the number of sensors, their placement, and the level each is assigned to are specified according to the security requirements. When a breach occurs, the system carries out actions according to the affected level. A mathematical model and pseudocode illustrate the mechanism of the proposed system. The results show that the proposed system was implemented successfully and that the number of breaches occurring in the level-3 area was reduced by 50% compared to level 1.
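A minimal sketch of the level-based escalation might look as follows; the sensor names, level mapping, and actions are assumptions, while the ThingSpeak update endpoint shown is the service's standard one.

```python
import urllib.parse
import urllib.request

# Hypothetical mapping of sensors to risk levels (not from the paper).
LEVELS = {"gate_motion": 1, "window_contact": 2, "safe_vibration": 3}

def handle_breach(sensor, api_key):
    """Escalate according to the breached level and log it to ThingSpeak."""
    level = LEVELS.get(sensor, 1)
    if level == 3:
        action = "siren + notify owner and authorities"
    elif level == 2:
        action = "notify owner via OMM app"
    else:
        action = "log event"
    # ThingSpeak's public channel-update endpoint, one call per breach.
    params = urllib.parse.urlencode({"api_key": api_key, "field1": level})
    urllib.request.urlopen("https://api.thingspeak.com/update?" + params)
    return action
```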


2021 ◽  
Author(s):  
Peter Bradbury ◽  
Terry Casstevens ◽  
Sarah E Jensen ◽  
Lynn C Johnson ◽  
Zachary R Miller ◽  
...  

Motivation: Pangenomes provide novel insights for population and quantitative genetics, genomics, and breeding that are not available from studying a single reference genome; a species is better represented by a pangenome, a collection of genomes. Unfortunately, managing and using pangenomes for genomically diverse species is computationally and practically challenging. We developed a trellis graph representation, anchored to the reference genome, that represents most pangenomes well and can be used to impute complete genomes from low-density sequence or variant data. Results: The Practical Haplotype Graph (PHG) is a pangenome pipeline, database (PostgreSQL and SQLite), data model (Java, Kotlin, or R), and Breeding API (BrAPI) web service. The PHG has already accurately represented diversity in four major crops, including maize, one of the most genomically diverse species, with up to 1000-fold data compression. Using simulated data, we show that, at even 0.1X coverage, with appropriate reads and sequence alignment, imputation results in extremely accurate haplotype reconstruction. The PHG is a platform and environment for understanding and applying genomic diversity. Availability: All resources listed here are freely available. The PHG Docker image used to generate the simulation results is available on Docker Hub (https://hub.docker.com/) as maizegenetics/phg:0.0.27. PHG source code is at https://bitbucket.org/bucklerlab/practicalhaplotypegraph/src/master/. The code used for the analysis of simulated data is at https://bitbucket.org/bucklerlab/phg-manuscript/src/master/. The PHG database of NAM parent haplotypes is in the CyVerse data store (https://de.cyverse.org/de/), named /iplant/home/shared/panzea/panGenome/PHG_db_maize/phg_v5Assemblies_20200608.db.
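As a conceptual toy (not the PHG API, whose pipeline performs graph-based path finding over the database), imputation can be pictured as choosing, for each reference range, the haplotype best supported by the low-coverage reads, then chaining the choices into a path:

```python
def impute_path(ranges, read_hits):
    """ranges: {range_id: [haplotype_ids]};
    read_hits: {(range_id, haplotype_id): count of matching reads}.
    Greedily pick the best-supported haplotype per reference range."""
    return {r: max(haps, key=lambda h: read_hits.get((r, h), 0))
            for r, haps in ranges.items()}

ranges = {"chr1:0-10kb": ["B73", "Mo17"], "chr1:10-20kb": ["B73", "Mo17"]}
hits = {("chr1:0-10kb", "Mo17"): 4, ("chr1:10-20kb", "B73"): 3}
print(impute_path(ranges, hits))
# {'chr1:0-10kb': 'Mo17', 'chr1:10-20kb': 'B73'}
```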


Author(s):  
Yu Guo ◽  
Shenling Wang ◽  
Jianhui Huang

Abstract. The explosive growth of big data is pushing forward the paradigm of cloud-based data stores. Among others, distributed storage systems are widely adopted due to their superior performance and continuous availability. However, given the potentially wide attack surface of the public cloud, outsourcing data storage inevitably raises new concerns about user privacy exposure and unauthorized data access. Moreover, directly introducing a centralized third-party authority for query-authorization management does not work, because it too can be compromised. In this paper, we propose a blockchain-assisted framework that can support trustworthy data-sharing services. In particular, data owners can outsource their sensitive data to distributed systems in encrypted form. By leveraging blockchain smart contracts, a data owner can distribute secret keys to authorized users, without extra rounds of interaction, to generate the permitted search tokens. Meanwhile, such a blockchain-assisted framework naturally resolves the trust issues of query authorization. In addition, we devise a secure local-index framework that supports encrypted keyword search with forward privacy and mitigates blockchain overhead. To validate our design, we implemented a prototype and deployed it on Amazon Cloud. Extensive experiments demonstrate the security, efficiency, and effectiveness of the blockchain-assisted design.
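A minimal sketch of one common forward-private index construction (the paper's exact scheme may differ): each keyword keeps a client-side counter, and the address of its c-th entry is derived as HMAC(K_w, c), so a previously issued search token cannot locate entries added later.

```python
import hashlib
import hmac
import os

class ForwardPrivateIndex:
    def __init__(self):
        self.store = {}     # server-side index (here: address -> doc id)
        self.counters = {}  # client-side per-keyword counters
        self.keys = {}      # client-side per-keyword secret keys

    def _key(self, word):
        return self.keys.setdefault(word, os.urandom(32))

    def add(self, word, doc_id):
        c = self.counters.get(word, 0)
        addr = hmac.new(self._key(word), str(c).encode(), hashlib.sha256).digest()
        self.store[addr] = doc_id  # in practice an encrypted identifier
        self.counters[word] = c + 1

    def search(self, word):
        # The token (key, counter) lets the server derive all current
        # addresses, but none created after the token was issued.
        k, c = self._key(word), self.counters.get(word, 0)
        return [self.store[hmac.new(k, str(i).encode(), hashlib.sha256).digest()]
                for i in range(c)]
```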


Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 5129
Author(s):  
Muhammad Junaid ◽  
Asadullah Shaikh ◽  
Mahmood Ul Hassan ◽  
Abdullah Alghamdi ◽  
Khairan Rajab ◽  
...  

This research proposes a generic smart cloud-based system to accommodate multiple scenarios in which agricultural farms using the Internet of Things (IoT) need to be monitored remotely. The real-time and stored data are analyzed by specialists and farmers. The cloud acts as a central digital data store where information is collected from diverse sources in huge volume and variety, such as audio, video, images, text, and digital maps. Artificial Intelligence (AI) based machine-learning models, such as the Support Vector Machine (SVM) classifier, are used to classify the data accurately. The classified data are assigned to virtual machines, where they are processed and finally made available to end users via the underlying datacenters. This processed digital information is then used by farmers to improve their farming practices and to support pre-disaster recovery planning for smart agri-food. Furthermore, it provides general and specific information about international markets relating to their crops. The proposed system demonstrates the feasibility of the developed digital agri-farm using an IoT-based cloud and provides solutions to its problems. Overall, the approach works well, improving execution time by 14%, throughput by 5%, overhead by 9%, and energy efficiency by 13.2% relative to competing smart-farming baselines.
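As an illustration of the classification step, a small SVM over invented farm-sensor features; the features, labels, and values below are hypothetical, not taken from the paper.

```python
from sklearn.svm import SVC

# Toy sensor features: [air temperature (°C), soil moisture (fraction)].
X = [[31.0, 0.12], [24.5, 0.40], [35.2, 0.08], [22.0, 0.45]]
y = ["irrigate", "ok", "irrigate", "ok"]  # hypothetical class labels

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
print(clf.predict([[33.0, 0.10]]))  # -> ['irrigate']
```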

