Checking Robustness Between Weak Transactional Consistency Models

Author(s):  
Sidi Mohamed Beillahi ◽  
Ahmed Bouajjani ◽  
Constantin Enea

Abstract Concurrent accesses to databases are typically encapsulated in transactions in order to enable isolation from other concurrent computations and resilience to failures. Modern databases provide transactions with various semantics corresponding to different trade-offs between consistency and availability. Since a weaker consistency model provides better performance, an important issue is determining the weakest level of consistency needed by a given program (to satisfy its specification). As a way of dealing with this issue, we investigate the problem of checking whether a given program has the same set of behaviors when a consistency model is replaced with a weaker one. This property, known as robustness, generally implies that any specification of the program is preserved when weakening the consistency. We focus on the robustness problem for consistency models that are weaker than standard serializability, namely causal consistency, prefix consistency, and snapshot isolation. We show that checking robustness between these models is polynomial-time reducible to a state reachability problem under serializability. We use this reduction to also derive a pragmatic proof technique, based on Lipton's reduction theory, that allows us to prove programs robust. We have applied our techniques to several challenging applications drawn from the literature on distributed systems and databases.
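
As a concrete illustration of what robustness means, the following Python sketch replays the classic "write skew" pattern: two transactions whose serial executions satisfy an invariant produce an extra final state when the model is weakened to snapshot isolation, so the program is not robust against that weakening. The scenario, variable names, and simplified snapshot-isolation semantics are illustrative assumptions, not the paper's reduction to state reachability.

```python
# Hypothetical illustration of (non-)robustness: the "write skew" pattern
# yields a final state under snapshot isolation (SI) that no serial
# execution can produce, so this program is NOT robust when
# serializability is weakened to SI.

def t1(snapshot):
    # T1: doctor 1 goes off call if at least two doctors are on call.
    writes = {}
    if snapshot["x"] + snapshot["y"] >= 2:
        writes["x"] = 0
    return writes

def t2(snapshot):
    # T2: doctor 2 goes off call under the same condition.
    writes = {}
    if snapshot["x"] + snapshot["y"] >= 2:
        writes["y"] = 0
    return writes

def run_serial(store, order):
    for txn in order:
        store.update(txn(dict(store)))   # each txn sees all prior commits
    return dict(store)

def run_si_concurrent(store, txns):
    snapshot = dict(store)               # both txns read the same snapshot
    for txn in txns:
        store.update(txn(snapshot))      # disjoint write sets: both commit
    return dict(store)

init = {"x": 1, "y": 1}
serial_states = {tuple(sorted(run_serial(dict(init), o).items()))
                 for o in ([t1, t2], [t2, t1])}
si_state = tuple(sorted(run_si_concurrent(dict(init), [t1, t2]).items()))

print("serial states:", serial_states)   # x=0,y=1 and x=1,y=0
print("SI state:     ", si_state)        # x=0,y=0 -> not serially reachable
print("robust?       ", si_state in serial_states)
```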

Author(s):  
Chris Köcher

Abstract We study the reachability problem for queue automata and lossy queue automata. Concretely, we consider the sets of queue contents that are forwards resp. backwards reachable from a given set of queue contents. Here, we prove that regularity is preserved if the queue automaton loops through certain special sets of transformation sequences. This generalizes the results of Boigelot et al. and Abdulla et al. on queue automata looping through a single sequence of transformations. We also prove that our construction is possible in polynomial time.
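
As a rough illustration of forward reachability under iterated transformation sequences, the Python sketch below enumerates (up to a length bound) the queue contents reachable by looping through a single sequence such as '?a !b' (read an a, then write a b). This brute-force picture only shows what sets are being computed; the construction described above instead represents them symbolically as regular languages and runs in polynomial time.

```python
# Bounded enumeration of queue contents reachable by iterating one
# transformation sequence of a (reliable) queue automaton.

def apply_op(queue, op):
    kind, letter = op[0], op[1]
    if kind == "!":                       # write: enqueue at the tail
        return queue + letter
    if kind == "?" and queue.startswith(letter):
        return queue[1:]                  # read: dequeue from the head
    return None                           # read not enabled: run is stuck

def apply_sequence(queue, sequence):
    for op in sequence:
        queue = apply_op(queue, op)
        if queue is None:
            return None
    return queue

def post_star(initial_contents, sequence, max_len=8):
    """Contents reachable by iterating `sequence` zero or more times (bounded)."""
    seen, frontier = set(initial_contents), list(initial_contents)
    while frontier:
        q = frontier.pop()
        nxt = apply_sequence(q, sequence)
        if nxt is not None and nxt not in seen and len(nxt) <= max_len:
            seen.add(nxt)
            frontier.append(nxt)
    return seen

# Iterating '?a !b' from 'aaa' rewrites a's at the head into b's at the tail.
print(sorted(post_star({"aaa"}, ["?a", "!b"])))   # ['aaa', 'aab', 'abb', 'bbb']
```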


2019 ◽  
Vol 25 (1) ◽  
Author(s):  
Duong Nguyen ◽  
Aleksey Charapko ◽  
Sandeep S. Kulkarni ◽  
Murat Demirbas

Abstract Consistency properties provided by most key-value stores can be classified into sequential consistency and eventual consistency. The former is easier to program with but suffers from lower performance, whereas the latter suffers from potential anomalies while providing higher performance. We focus on the problem of what a designer should do when they have an algorithm that works correctly with sequential consistency but must run on an underlying key-value store that provides a weaker (e.g., eventual or causal) consistency. We propose a detect-rollback based approach: the designer identifies a correctness predicate, say P, and continues to run the protocol while our system monitors P. If P is violated (because the underlying key-value store provides a weaker consistency), the system rolls back and resumes the computation at a state where P holds.

We evaluate this approach with graph-based applications running on the Voldemort key-value store. Our experiments with deployment on Amazon AWS EC2 instances show that using eventual consistency with monitoring can provide a 50–80% increase in throughput compared with sequential consistency. We also observe that the overhead of the monitoring itself was low (typically less than 4%) and the latency of detecting violations was small. In particular, in a scenario designed to intentionally cause a large number of violations, more than 99.9% of violations were detected in less than 50 ms in regional networks (all clients and servers in the same Amazon AWS region) and in less than 3 s in global networks.

We find that for some applications, frequent rollback can cause the program using eventual consistency to effectively stall. We propose alternate mechanisms for dealing with recurring rollbacks. Overall, for the applications considered in this paper, we find that even with rollback, eventual consistency provides better performance than sequential consistency.
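
A minimal sketch of the detect-rollback idea, assuming a hypothetical in-memory key-value store rather than the authors' Voldemort-based system: the runner checkpoints the store after every step where the correctness predicate P holds and restores the last such checkpoint whenever P is violated.

```python
import copy

class DetectRollbackRunner:
    def __init__(self, store, predicate):
        self.store = store                 # dict acting as the key-value store
        self.predicate = predicate         # P: store -> bool
        self.checkpoint = copy.deepcopy(store)
        self.rollbacks = 0

    def run_step(self, step):
        step(self.store)                   # protocol step mutates the store
        if self.predicate(self.store):
            self.checkpoint = copy.deepcopy(self.store)   # P holds: commit
        else:
            self.store.clear()                            # P violated: roll back
            self.store.update(copy.deepcopy(self.checkpoint))
            self.rollbacks += 1

# Example: P requires counters 'a' and 'b' to stay within 1 of each other.
store = {"a": 0, "b": 0}
runner = DetectRollbackRunner(store, lambda s: abs(s["a"] - s["b"]) <= 1)
runner.run_step(lambda s: s.update(a=s["a"] + 1))    # P holds, checkpointed
runner.run_step(lambda s: s.update(a=s["a"] + 5))    # P violated, rolled back
print(store, "rollbacks:", runner.rollbacks)         # {'a': 1, 'b': 0} rollbacks: 1
```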


2020 ◽  
Vol 105 ◽  
pp. 259-274
Author(s):  
Hesam Nejati Sharif Aldin ◽  
Hossein Deldari ◽  
Mohammad Hossein Moattar ◽  
Mostafa Razavi Ghods

Author(s):  
Rafael Pereira Pires ◽  
Pascal Felber ◽  
Marcelo Pasin

This extended abstract summarises my PhD thesis, which explores design strategies for distributed systems that leverage trusted execution environments (TEEs). We aim to achieve better security and privacy guarantees while maintaining or improving performance compared with existing equivalent approaches. To that end, we propose several original systems that take advantage of TEEs. Using prototypes built with Intel Software Guard Extensions (SGX) and deployed on real hardware, we evaluate their limitations and discuss the outcomes of this emerging technology.


2018 ◽  
Vol 8 (2) ◽  
Author(s):  
Francisco J. Torres-Rojas ◽  
Esteban Meneses

Given a distributed system with several shared objects and many processes concurrently updating and reading them, it is desirable that the system achieve convergence on the values of these objects. Whether such a property can be guaranteed depends on the consistency model employed. Causal Consistency is a weak consistency model that is easy and cheap to implement. However, because it lacks real-time considerations, this model cannot guarantee convergence. One way to overcome this problem is to include timing aspects in the framework of the model; this is the aim of Timed Causal Consistency.
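
To illustrate why adding time helps, the sketch below merges causally concurrent writes with a deterministic (timestamp, replica id) last-writer-wins rule, so replicas that have received the same writes converge to the same value regardless of delivery order. The classes and tiebreak rule are illustrative assumptions, not the formal Timed Causal Consistency model described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Write:
    key: str
    value: str
    timestamp: float      # real-time timestamp of the write
    replica_id: int       # used only to break exact timestamp ties

class Replica:
    def __init__(self):
        self.state = {}   # key -> currently winning Write

    def apply(self, w: Write):
        cur = self.state.get(w.key)
        # Last-writer-wins on (timestamp, replica_id): deterministic for
        # concurrent writes, so the delivery order does not matter.
        if cur is None or (w.timestamp, w.replica_id) > (cur.timestamp, cur.replica_id):
            self.state[w.key] = w

    def read(self, key):
        w = self.state.get(key)
        return w.value if w else None

# Two replicas receive the same concurrent writes in different orders ...
w1 = Write("x", "from_r1", timestamp=10.0, replica_id=1)
w2 = Write("x", "from_r2", timestamp=10.5, replica_id=2)
r1, r2 = Replica(), Replica()
for w in (w1, w2): r1.apply(w)
for w in (w2, w1): r2.apply(w)
print(r1.read("x"), r2.read("x"))   # ... yet both converge to 'from_r2'
```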

