eventual consistency
Recently Published Documents

TOTAL DOCUMENTS: 76 (five years: 15)
H-INDEX: 14 (five years: 2)

Author(s):  
Emin Karayel ◽  
Edgar Gonzàlez

Abstract
Commutative Replicated Data Types (CRDTs) are a promising new class of data structures for large-scale shared mutable content in applications that only require eventual consistency. The WithOut Operational Transforms (WOOT) framework is the first CRDT for collaborative text editing, introduced by Oster et al. (In: Conference on Computer Supported Cooperative Work (CSCW). ACM, New York, pp 259–268, 2006a). Its eventual consistency property had, to date, been verified only for a bounded model. While the consistency of many other previously published CRDTs was established alongside their publication, the property for WOOT remained open for 14 years. We use a novel approach, identifying a previously unknown sort-key based protocol that simulates the WOOT framework, to show its consistency. We formalize the proof in the Isabelle/HOL proof assistant to machine-check its correctness.
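The sort-key idea underlying the proof can be illustrated with a toy sequence CRDT (a hedged sketch, not the WOOT framework itself; `SortKeyText` and its methods are hypothetical names): every character carries a globally unique, totally ordered key, so each replica renders the same string by sorting its set of (key, char) pairs, regardless of the order in which inserts arrive.

```python
from fractions import Fraction

class SortKeyText:
    """Toy sequence CRDT: each character carries a unique, totally
    ordered sort key (position, site_id). Replicas merge by set union
    and render by sorting, so any delivery order converges."""

    def __init__(self):
        self.chars = {}  # sort key -> character

    def insert(self, left_key, right_key, char, site_id):
        # Pick a fresh position strictly between the two neighbours;
        # the site id breaks ties between concurrent inserts.
        lo = left_key[0] if left_key else Fraction(0)
        hi = right_key[0] if right_key else Fraction(1)
        key = ((lo + hi) / 2, site_id)
        self.chars[key] = char
        return key

    def apply(self, key, char):
        # Integrating a remote insert is just adding the pair.
        self.chars[key] = char

    def render(self):
        return "".join(c for _, c in sorted(self.chars.items()))
```

Because rendering is a sort over a set, two replicas that have seen the same inserts agree on the text even when the operations were delivered in different orders.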


2021 ◽  
Vol 25 (2) ◽  
pp. 435-468
Author(s):  
Dániel Balázs Rátai ◽  
Zoltán Horváth ◽  
Zoltán Porkoláb ◽  
Melinda Tóth

Atomicity, consistency, isolation, and durability are essential properties of many distributed systems; they are often abbreviated as the ACID properties. Ensuring ACID comes at a price: it requires extra computing and network capacity to guarantee that atomic operations either complete perfectly or are rolled back. When performance requirements are higher, we must give up the ACID properties entirely or settle for eventual consistency. Because of the ambiguity in the order of events, such algorithms can become very complicated, since they must be prepared for every possible contingency. The Traquest model is an attempt to create a general concurrency model that provides the ACID properties without sacrificing too much performance.
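The trade-off between coordinated ACID-style updates and eventual consistency can be sketched with a last-writer-wins register (an illustrative construct, not part of the Traquest model): replicas accept writes locally without any locking or coordination, and converge later through a deterministic merge.

```python
class LWWRegister:
    """Last-writer-wins register: each replica accepts writes locally
    (no coordination) and replicas converge by keeping the value with
    the highest (timestamp, replica_id) pair."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.value = None
        self.stamp = (0, replica_id)

    def write(self, value, timestamp):
        self.value = value
        self.stamp = (timestamp, self.replica_id)

    def merge(self, other):
        # Deterministic winner: both sides pick the same value, so the
        # order of merges does not matter (eventual consistency).
        if other.stamp > self.stamp:
            self.value, self.stamp = other.value, other.stamp
```

The price of skipping coordination is visible here: one of two concurrent writes is silently discarded, which is exactly the kind of contingency an eventually consistent algorithm must be prepared for.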


2021 ◽  
pp. 187-225
Author(s):  
Hugo Filipe Oliveira Rocha

2019 ◽  
Vol 25 (1) ◽  
Author(s):  
Duong Nguyen ◽  
Aleksey Charapko ◽  
Sandeep S. Kulkarni ◽  
Murat Demirbas

Abstract
Consistency properties provided by most key-value stores can be classified into sequential consistency and eventual consistency. The former is easier to program with but suffers from lower performance, whereas the latter offers higher performance at the cost of potential anomalies. We focus on the problem of what a designer should do when an algorithm that works correctly under sequential consistency must run on an underlying key-value store that provides a weaker (e.g., eventual or causal) consistency. We propose a detect-rollback based approach: the designer identifies a correctness predicate, say P, and continues to run the protocol while our system monitors P. If P is violated (because the underlying key-value store provides a weaker consistency), the system rolls back and resumes the computation at a state where P holds.

We evaluate this approach with graph-based applications running on the Voldemort key-value store. Our experiments with deployments on Amazon AWS EC2 instances show that using eventual consistency with monitoring can provide a 50–80% increase in throughput compared with sequential consistency. We also observe that the overhead of the monitoring itself is low (typically less than 4%) and the latency of detecting violations is small. In particular, in a scenario designed to intentionally cause a large number of violations, more than 99.9% of violations were detected in less than 50 ms in regional networks (all clients and servers in the same Amazon AWS region) and in less than 3 s in global networks.

We find that for some applications, frequent rollback can cause a program using eventual consistency to effectively stall. We propose alternative mechanisms for dealing with recurring rollbacks. Overall, for the applications considered in this paper, we find that even with rollback, eventual consistency provides better performance than sequential consistency.
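The detect-rollback pattern can be sketched in a few lines (a simplified single-process illustration with hypothetical names, not the authors' Voldemort-based system): the computation runs optimistically, a monitor checks the predicate P after each step, and on a violation the state is restored to the last checkpoint where P held.

```python
def run_with_rollback(initial, steps, predicate):
    """Run `steps` (functions that mutate the state dict) optimistically.
    Checkpoint whenever the correctness predicate P holds; when the
    monitor detects a violation, roll back to the last good checkpoint
    and resume from there."""
    state = dict(initial)
    checkpoint = dict(state)
    for step in steps:
        step(state)                    # run optimistically
        if predicate(state):
            checkpoint = dict(state)   # P holds: advance the checkpoint
        else:
            state = dict(checkpoint)   # P violated: roll back and resume
    return state
```

In the real system the violations stem from weak-consistency anomalies in the underlying store, so they are transient; here a deterministic bad step simply gets discarded, which mirrors the rollback behaviour.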


Much of the data produced nowadays is big data, which is unstructured, semi-structured, or structured in nature. It is difficult for SQL to handle such large amounts of data in such varied forms, so NoSQL was introduced, which offers many advantages over SQL. NoSQL is a schema-less database approach that allows horizontal scaling and a distributed framework. SQL is based on the ACID properties (Atomicity, Consistency, Isolation, Durability), whereas NoSQL is based on BASE (Basic Availability, Soft state, Eventual consistency). This paper introduces the concepts of NoSQL, the advantages of NoSQL over SQL, and the different types of NoSQL databases, with special reference to the document-store database MongoDB. It explains MongoDB in detail and then experimentally evaluates the performance of queries executed in SQL and NoSQL (MongoDB). The experiments show that NoSQL queries execute faster than SQL queries and can also handle huge amounts of unstructured data effortlessly.
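The difference in query style between the two models can be sketched with a tiny in-memory matcher (illustrative only: `matches`, the sample documents, and the field values are hypothetical, and this is not MongoDB's actual query engine). A SQL `SELECT ... WHERE` clause maps to a MongoDB filter document:

```python
def matches(doc, query):
    """Evaluate a MongoDB-style filter document against a dict.
    Supports plain equality and the $gt operator (a small
    illustrative subset of the real operator set)."""
    for field, cond in query.items():
        if isinstance(cond, dict):          # operator form, e.g. {"$gt": 30}
            for op, arg in cond.items():
                if op == "$gt" and not (field in doc and doc[field] > arg):
                    return False
        elif doc.get(field) != cond:        # plain equality
            return False
    return True

# SQL:     SELECT * FROM users WHERE age > 30 AND city = 'Pune'
# MongoDB: db.users.find({"age": {"$gt": 30}, "city": "Pune"})
query = {"age": {"$gt": 30}, "city": "Pune"}
users = [
    {"name": "A", "age": 35, "city": "Pune"},
    {"name": "B", "age": 25, "city": "Pune"},
    {"name": "C", "age": 40, "city": "Delhi"},
]
result = [u for u in users if matches(u, query)]
```

The schema-less nature of the document model shows up here too: each document is just a dict, and fields absent from a document simply fail the filter rather than violating a table schema.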


Information ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 141 ◽  
Author(s):  
Pwint Phyu Khine ◽  
Zhaoshun Wang

The inevitability of the relationship between big data and distributed systems is indicated by the fact that big data's characteristics cannot easily be handled by a standalone, centralized approach. Among the different concepts of distributed systems, the CAP theorem (Consistency, Availability, and Partition tolerance) points out the prominent use of the eventual consistency property in distributed systems. This has prompted the need for different types of databases beyond SQL (Structured Query Language) that have the properties of scalability and availability. NoSQL (Not-Only SQL) databases, mostly based on BASE (Basically Available, Soft state, and Eventual consistency), are gaining ground in the big data era, while SQL databases are left trying to keep up with this paradigm shift. However, none of these databases is perfect, as no single model fits all the requirements of data-intensive systems. Polyglot persistence, i.e., using different databases as appropriate for the different components within a single system, is becoming prevalent in data-intensive big data systems, as they are distributed and parallel by nature. This paper reflects on the characteristics of these databases from a conceptual point of view and describes a potential solution for a distributed system: the adoption of polyglot persistence in data-intensive systems in the big data era.
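In practice, polyglot persistence is often implemented behind a facade that routes each category of data to the store best suited for it. A minimal sketch (all names are hypothetical, and plain dicts stand in for the real databases):

```python
class PolyglotStore:
    """Facade routing each data category to a dedicated backend,
    e.g. a key-value store for sessions, a document store for catalog
    entries, a graph store for relationships. Here each backend is
    just an in-memory dict standing in for the real database."""

    def __init__(self):
        self.backends = {
            "session": {},   # would be a key-value store such as Redis
            "catalog": {},   # would be a document store such as MongoDB
            "social":  {},   # would be a graph store such as Neo4j
        }

    def put(self, category, key, value):
        self.backends[category][key] = value

    def get(self, category, key):
        return self.backends[category].get(key)
```

The design choice is that the rest of the system talks only to the facade, so an individual store can be swapped without touching application code.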


2019 ◽  
Vol 11 (2) ◽  
pp. 43 ◽  
Author(s):  
Miguel Diogo ◽  
Bruno Cabral ◽  
Jorge Bernardino

The Internet has become so widespread that the most popular websites are accessed by hundreds of millions of people on a daily basis. Monolithic architectures, which were frequently used in the past, were mostly composed of traditional relational database management systems, but they quickly became incapable of sustaining the high data traffic that is common these days. Meanwhile, NoSQL databases have emerged to provide properties missing from relational databases, such as schema-less design, horizontal scaling, and eventual consistency. This paper analyzes and compares the consistency model implementations of five popular NoSQL databases: Redis, Cassandra, MongoDB, Neo4j, and OrientDB. All of them offer at least eventual consistency, and some have the option of supporting strong consistency. However, imposing strong consistency results in lower availability during network partition events.
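The strong-versus-eventual trade-off that some of these databases expose as tunable consistency can be sketched with quorum arithmetic (an illustrative model, not any database's actual implementation): with N replicas, choosing a read quorum R and write quorum W such that R + W > N guarantees that every read overlaps the latest write; smaller quorums improve availability but give only eventual consistency.

```python
def is_strongly_consistent(n, r, w):
    """Quorum rule: a read of R replicas always intersects a write
    acknowledged by W replicas iff R + W > N."""
    return r + w > n

class QuorumStore:
    """N versioned replicas: writes are acknowledged by W of them,
    reads consult R of them and return the freshest version seen."""

    def __init__(self, n):
        self.replicas = [(0, None)] * n  # (version, value) per replica
        self.version = 0

    def write(self, value, w):
        self.version += 1
        for i in range(w):                  # only W replicas see the write
            self.replicas[i] = (self.version, value)

    def read(self, r):
        return max(self.replicas[-r:])[1]   # freshest among R replicas
```

With N = 3, the combination R = 2, W = 2 satisfies the rule and always returns the latest value, while R = 1 may read a stale replica, which is the availability-for-consistency trade the abstract describes.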

