Byzantine Failures
Recently Published Documents

Total documents: 38 (five years: 7)
H-index: 7 (five years: 1)

2021 ◽  
Author(s):  
Dmitrijs Rjazanovs ◽  
Ernests Petersons ◽  
Aleksandrs Ipatovs ◽  
Loreta Juskaite ◽  
Roman Yeryomin

2021 ◽  
Vol 14 (11) ◽  
pp. 2230-2243
Author(s):  
Jelle Hellings ◽  
Mohammad Sadoghi

The emergence of blockchains has fueled the development of resilient systems that can deal with Byzantine failures due to crashes, bugs, or even malicious behavior. Recently, we have also seen the exploration of sharding in these resilient systems to provide the scalability required by very large data-based applications. Unfortunately, current sharded resilient systems all use system-specific, specialized approaches toward sharding that do not provide the flexibility of traditional sharded data management systems. To improve on this situation, we take a fundamental look at the design of sharded resilient systems. We do so by introducing BYSHARD, a unifying framework for the study of sharded resilient systems. Within this framework, we show how two-phase commit and two-phase locking, two techniques central to providing atomicity and isolation in traditional sharded databases, can be implemented efficiently in a Byzantine environment with minimal usage of costly Byzantine-resilient primitives. Based on these techniques, we propose eighteen multi-shard transaction processing protocols. Finally, we evaluate these protocols in practice and show that each protocol supports high transaction throughput and provides scalability while striking its own trade-off between throughput, isolation level, latency, and abort rate. As such, our work provides a strong foundation for the development of ACID-compliant, general-purpose, and flexible sharded resilient data management systems.
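The combination described above (classical two-phase commit on top of per-shard Byzantine agreement) can be illustrated with a minimal sketch. This is not the BYSHARD protocol itself; the class names, the quorum rule (n ≥ 3f + 1 replicas per shard, a vote certified by n − f matching replies), and the coordination logic are all illustrative assumptions.

```python
# Hypothetical sketch: cross-shard two-phase commit where each shard's
# commit/abort vote must first be certified by a Byzantine quorum of its
# replicas. All names and parameters are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Shard:
    n: int                               # replicas in this shard
    f: int                               # tolerated Byzantine replicas
    votes: list = field(default_factory=list)

    def quorum(self) -> int:
        # A vote counts as certified once n - f replicas agree on it.
        return self.n - self.f

    def certified_vote(self):
        for decision in ("commit", "abort"):
            if self.votes.count(decision) >= self.quorum():
                return decision
        return None                      # no certified quorum yet

def two_phase_commit(shards: list) -> str:
    # Phase 1: collect a certified vote from every involved shard.
    decisions = [s.certified_vote() for s in shards]
    if any(d is None for d in decisions):
        return "abort"                   # an uncertified shard blocks commit
    # Phase 2: commit only if every shard certified "commit".
    return "commit" if all(d == "commit" for d in decisions) else "abort"

s1 = Shard(n=4, f=1, votes=["commit"] * 3 + ["abort"])  # certified commit
s2 = Shard(n=4, f=1, votes=["commit"] * 4)              # certified commit
print(two_phase_commit([s1, s2]))  # -> commit
```

The point of the sketch is that the expensive Byzantine-resilient step (quorum certification) happens once per shard, after which the cross-shard decision is ordinary two-phase commit logic.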


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7239
Author(s):  
Shlomi Hacohen ◽  
Oded Medina ◽  
Tal Grinshpoun ◽  
Nir Shvalb

Many tasks performed by swarms of unmanned aerial vehicles require localization. In many cases, the sensors that take part in the localization process suffer from inherent measurement errors. This problem is amplified when disruptions are added, either endogenously through Byzantine failures of agents within the swarm, or exogenously by some external source, such as a GNSS jammer. In this paper, we first introduce an improved localization method based on distance observation. Then, we devise schemes for detecting Byzantine agents, in scenarios of endogenous disruptions, and for detecting a disrupted area, in case the source of the problem is exogenous. Finally, we apply pool-testing techniques to reduce the communication traffic and the computation time of our schemes. The optimal pool size should be chosen carefully, as very small or very large pools may impair the ability to identify the source(s) of disruption. A set of simulated experiments demonstrates the effectiveness of our proposed methods, which enable reliable error estimation even amid disruptions. This work is the first, to the best of our knowledge, that embeds identification of endogenous and exogenous disruptions into the localization process.
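The pool-testing idea mentioned above can be sketched as follows: rather than running an expensive per-agent test on every swarm member, agents are tested in groups, and only members of a flagged group are examined individually. The predicate `is_disrupted` below is a stand-in assumption for the paper's actual statistical test, and the threshold-based toy data is invented for illustration.

```python
# Hedged sketch of pool testing for locating Byzantine agents.
def find_byzantine(agents, is_disrupted, pool_size):
    suspects = []
    for i in range(0, len(agents), pool_size):
        pool = agents[i:i + pool_size]
        if is_disrupted(pool):                 # one cheap test per pool
            # Only members of a disrupted pool pay the per-agent cost.
            suspects += [a for a in pool if is_disrupted([a])]
    return suspects

# Toy example: agents are (id, measurement_error) pairs; an agent or pool
# counts as disrupted when any error exceeds a fixed (assumed) threshold.
agents = [("a", 0.1), ("b", 0.2), ("c", 5.0), ("d", 0.1), ("e", 0.3), ("f", 0.2)]
disrupted = lambda pool: any(err > 1.0 for _, err in pool)
print(find_byzantine(agents, disrupted, pool_size=3))  # -> [('c', 5.0)]
```

With one disrupted agent among six, `pool_size=3` needs 2 pool tests plus 3 individual tests instead of 6 individual tests; the abstract's caveat about pool size corresponds to this trade-off degenerating at either extreme.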


Author(s):  
Andreas Bolfing

Chapter 5 considers distributed systems by their properties. The first section studies the classification of software systems, which usually distinguishes centralized, decentralized, and distributed systems. It studies the differences between these three major approaches, showing that there is a rather multidimensional classification instead of a linear one. The most important case is that of distributed systems, which enable spreading computational tasks across several autonomous, independently acting computational entities. A very important result for this case is the CAP theorem, which considers the trade-off between consistency, availability, and partition tolerance. The last section deals with the possibility of reaching consensus in distributed systems, discussing how fault-tolerant consensus mechanisms enable mutual agreement among the individual entities in the presence of failures. One very special case is that of so-called Byzantine failures, which are discussed in great detail. The main result is the so-called FLP impossibility result, which states that there is no deterministic algorithm that guarantees a solution to the consensus problem in the asynchronous case. The chapter concludes by considering practical solutions that circumvent the impossibility result in order to reach consensus.
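A concrete number behind Byzantine fault-tolerant consensus (the standard bound used by PBFT-style protocols, which are among the practical solutions that circumvent FLP by assuming partial synchrony) is that tolerating f Byzantine replicas requires n ≥ 3f + 1 replicas with quorums of 2f + 1. A quick sanity check of those figures:

```python
# Sanity-check the standard BFT replica and quorum sizes.
def bft_sizes(f: int):
    n = 3 * f + 1        # minimum replicas to tolerate f Byzantine faults
    quorum = 2 * f + 1   # any two quorums intersect in 2q - n = f + 1
                         # replicas, so at least one honest replica is
                         # guaranteed to be in both
    return n, quorum

for f in range(1, 4):
    n, q = bft_sizes(f)
    assert 2 * q - n == f + 1   # quorum intersection exceeds f
    print(f"f={f}: n={n}, quorum={q}")
# f=1: n=4, quorum=3
# f=2: n=7, quorum=5
# f=3: n=10, quorum=7
```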


2020 ◽  
Vol 107 ◽  
pp. 54-71
Author(s):  
Ernesto Jiménez ◽  
José Luis López-Presa ◽  
Javier Martín-Rueda

2017 ◽  
Vol 15 (2) ◽  
pp. 61-72
Author(s):  
D O ABORISADE ◽  
A S SODIYA ◽  
A A ODUMOSU ◽  
O Y ALOWOSILE ◽  
A A ADEDEJI

Distributed database systems have been very useful technologies in making a wide range of information available to users across the world. However, there are now growing security concerns arising from the use of distributed systems, particularly those attached to critical systems. More than ever before, data in distributed databases are susceptible to attacks, failures, or accidents owing to advanced knowledge explosions in network and database technologies. The imperfection of existing security mechanisms, coupled with heightened and growing concerns about intrusion, attack, compromise, or even failure owing to Byzantine failure, are also contributing factors. The importance of survivable distributed databases in the face of Byzantine failure to other emerging technologies is the motivation for this research. Furthermore, it has been observed that most existing works on distributed databases dwell only on maintaining data integrity and availability in the face of attack; few exist on the availability or survivability of distributed databases affected by internal factors such as internal sabotage or storage defects. In this paper, an architecture for entrenching the survivability of distributed databases occasioned by Byzantine failures is proposed. The proposed architecture is based on re-creating data on a failing database server according to a set threshold value. The proposed architecture was tested and found to be capable of improving the probability of survivability in the distributed database where it is implemented from 99.2% to 99.6%.
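The threshold idea described above can be sketched minimally: each server carries a health score, and data hosted on a server whose score has fallen below a set threshold is re-created on a healthy server. The scoring scheme, the threshold value, and the placement rule below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: re-create data off servers whose health score drops
# below a set threshold (all values here are assumed for illustration).
THRESHOLD = 0.5  # assumed health threshold

def rebalance(servers, data):
    """Return a new item -> server placement, moving items off servers
    whose health score is below THRESHOLD."""
    healthy = [s for s, score in servers.items() if score >= THRESHOLD]
    if not healthy:
        raise RuntimeError("no healthy server to re-create data on")
    placement = {}
    for item, server in data.items():
        if servers[server] < THRESHOLD:
            # Re-create the item on the healthiest remaining server.
            server = max(healthy, key=lambda s: servers[s])
        placement[item] = server
    return placement

servers = {"db1": 0.9, "db2": 0.3, "db3": 0.7}   # db2 is failing
data = {"orders": "db2", "users": "db1"}
print(rebalance(servers, data))  # -> {'orders': 'db1', 'users': 'db1'}
```

In this toy run, the item hosted on the failing server (`db2`, score 0.3 < 0.5) is re-created on the healthiest server, while data on healthy servers is left in place.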


2016 ◽  
Vol 340-341 ◽  
pp. 27-40 ◽  
Author(s):  
Gustavo Sousa Pavani ◽  
Anderson de França Queiroz ◽  
Jerônimo Cordoni Pellegrini
