Distributed Computing Column 81

2021 ◽  
Vol 52 (1) ◽  
pp. 70-70
Author(s):  
Dan Alistarh

Overview. In this edition of the column, we have an exciting contribution from Shir Cohen, Idit Keidar, and Oded Naor (Technion), who provide an in-depth perspective on communication-efficient Byzantine Agreement. With the huge popularity of blockchains, the classic area of Byzantine Agreement has seen a surge of interest, and, given the large-scale and widely-distributed deployments of such mechanisms, communication efficiency is a chief concern. This column provides a gentle introduction to the area, starting from the early work in the 80s. This provides the necessary context for the recent exciting work on communication reduction, from the King & Saia algorithm to Algorand. One very useful feature of this column's contribution is that it provides a unified view of these results, along with the mathematical background needed to understand and differentiate the underlying results. Many thanks to Shir, Idit and Oded for their contribution!

2013 ◽  
Vol 765-767 ◽  
pp. 1087-1091
Author(s):  
Hong Lin ◽  
Shou Gang Chen ◽  
Bao Hui Wang

Recently, with the development of the Internet and the emergence of new application modes, data storage has acquired new characteristics and new requirements. In this paper, a Distributed Computing Framework Mass Small File storage System (Dnet FS for short), based on Windows Communication Foundation on the .Net platform, is presented. The system is lightweight, easily extensible, runs on inexpensive hardware, supports large-scale concurrent access, and provides a degree of fault tolerance. The framework of the system is analyzed, and its performance is tested and compared. The results show that the system meets these requirements.


2021 ◽  
Vol 36 (10) ◽  
pp. 2150070
Author(s):  
Maria Grigorieva ◽  
Dmitry Grin

Large-scale distributed computing infrastructures ensure the operation and maintenance of scientific experiments at the LHC: more than 160 computing centers all over the world execute tens of millions of computing jobs per day. ATLAS, the largest experiment at the LHC, creates an enormous flow of data which has to be recorded and analyzed by a complex, heterogeneous, and distributed computing environment. Statistically, about 10–12% of computing jobs end in failure: network faults, service failures, authorization failures, and other error conditions trigger error messages that provide detailed information about the issue and can be used for diagnosis and proactive fault handling. However, this analysis is complicated by the sheer scale of the textual log data, and is often exacerbated by the lack of a well-defined structure: human experts have to interpret the detected messages and create parsing rules manually, which is time-consuming and does not allow previously unknown error conditions to be identified without further human intervention. This paper describes a pipeline of methods for the unsupervised clustering of multi-source error messages. The pipeline is data-driven, based on machine learning algorithms, and executed fully automatically, allowing error messages to be categorized according to textual patterns and meaning.
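
As an illustration of this kind of pipeline, the following is a minimal Python sketch of unsupervised clustering of error messages using TF-IDF features and k-means. The normalization rules, vectorizer settings, and cluster count are illustrative assumptions, not the actual ATLAS pipeline described in the paper.

```python
# Minimal sketch of unsupervised clustering of error messages (illustrative only;
# the actual pipeline may differ in tokenization, vectorization, and clustering).
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def normalize(message: str) -> str:
    """Mask volatile tokens (hex IDs, numbers, paths) so messages sharing a
    template fall into the same cluster."""
    message = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    message = re.sub(r"\d+", "<NUM>", message)
    message = re.sub(r"/\S+", "<PATH>", message)
    return message.lower()

def cluster_errors(messages, n_clusters=8):
    """Vectorize normalized messages with TF-IDF and group them with k-means."""
    normalized = [normalize(m) for m in messages]
    features = TfidfVectorizer(token_pattern=r"[a-z<>_]+").fit_transform(normalized)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

if __name__ == "__main__":
    sample = [
        "connection to host 192.168.1.7 timed out after 30 s",
        "connection to host 10.0.0.3 timed out after 30 s",
        "authorization failed for user 4217",
        "authorization failed for user 9001",
    ]
    print(cluster_errors(sample, n_clusters=2))  # two textual patterns, two clusters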


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the client. In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive. With the improvement of cloud computing, more and more applications are migrated into the cloud. A significant element of this computing model is pay-as-you-go pricing: the cloud provides strong computational capacity to the general public at reduced cost, enabling clients with limited computational resources to outsource their large computational workloads to the cloud and to economically enjoy massive computing power, bandwidth, storage, and even suitable software on a pay-per-use basis. A major obstacle that prevents the wide adoption of this computing model is that clients' confidential data may be exposed during the computing process. Secure problem solving is thus a mechanism for reaching this practical goal: it must both solve the problem and protect against malicious practices. In this paper, we examine secure outsourcing of large-scale systems of linear equations, which are among the most common problems in various engineering disciplines. Linear programming (LP) is an operations research technique in which the customer's private data are formulated as a set of matrices and vectors; we develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into an arbitrary-looking one while protecting sensitive input/output information. We identify that solving the LP problem in the cloud component is efficient, with extra cost placed on the cloud server. In this paper we utilize a homomorphic encryption system to increase performance and time efficiency.
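
The following is a minimal NumPy sketch of one well-known way to transform a system of linear equations before outsourcing it: the client masks A and b with random invertible matrices so the cloud never sees the original problem or its solution. This is a sketch under those assumptions; it does not reproduce the paper's exact transformation techniques or its homomorphic encryption step.

```python
# Minimal sketch of privacy-preserving outsourcing of a linear system Ax = b.
# The cloud solves only the masked system (MAN)y = Mb and never sees A, b, or x.
import numpy as np

def mask_problem(A, b, rng):
    """Client side: hide A and b behind random invertible matrices M and N."""
    n = A.shape[0]
    M = rng.standard_normal((n, n))  # random square matrix, invertible with probability 1
    N = rng.standard_normal((n, n))
    return M @ A @ N, M @ b, N

def cloud_solve(A_masked, b_masked):
    """Cloud side: solve the masked system without learning the original problem."""
    return np.linalg.solve(A_masked, b_masked)

def unmask_solution(y, N):
    """Client side: since (MAN)y = Mb implies A(Ny) = b, the true solution is x = Ny."""
    return N @ y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    x_true = rng.standard_normal(4)
    b = A @ x_true
    A_m, b_m, N = mask_problem(A, b, rng)
    x = unmask_solution(cloud_solve(A_m, b_m), N)
    print(np.allclose(x, x_true))  # True: the client recovers the original solution
```

The heavy solve runs on the cloud side, while the client performs only matrix multiplications to mask the problem and unmask the answer, which is the cost asymmetry the abstract refers to.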


2018 ◽  
Vol 7 (4.6) ◽  
pp. 13
Author(s):  
Mekala Sandhya ◽  
Ashish Ladda ◽  
Dr. Uma N Dulhare

In this generation of the Internet, information and data grow continuously through a wide variety of Internet services and applications, and the amount of information is increasing rapidly: hundreds of billions, even trillions, of web indexes exist. Such large data brings people a mass of information while at the same time making it more difficult to discover useful knowledge within it. Cloud computing can provide the infrastructure for large data. Cloud computing has two significant characteristics of distributed computing: scalability and high availability. Scalability means the system can seamlessly extend to large-scale clusters; high availability means that cloud computing can tolerate node errors, so node failures do not affect the correct execution of a program. Cloud computing combined with data mining performs significant data processing on high-performance machines. Mass data storage and distributed computing provide a new method for mass data mining and become an effective solution for distributed storage and efficient computing in data mining. A minimal sketch of this idea follows below.
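
As a minimal illustration of distributed computing applied to data mining, the sketch below runs a simple word-frequency job in PySpark; the dataset, application name, and job are hypothetical examples and are not taken from the text above.

```python
# Minimal sketch of distributed data mining: a word-frequency job in PySpark.
# Illustrative only; a real deployment would read a large dataset from distributed
# storage (e.g., HDFS) rather than a small in-memory list.
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("mass-data-mining-sketch").getOrCreate()
    sc = spark.sparkContext

    documents = [
        "cloud computing provides scalable storage",
        "distributed computing tolerates node failures",
        "data mining discovers useful knowledge in large data",
    ]

    counts = (
        sc.parallelize(documents)           # partitions are spread across the cluster
          .flatMap(lambda line: line.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)  # aggregation runs in parallel; lost partitions
                                            # are recomputed from lineage if a node fails
          .collect()
    )
    for word, count in sorted(counts, key=lambda kv: -kv[1]):
        print(word, count)
    spark.stop()
```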


Symmetry ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 911 ◽  
Author(s):  
Md Azher Uddin ◽  
Aftab Alam ◽  
Nguyen Anh Tu ◽  
Md Siyamul Islam ◽  
Young-Koo Lee

In recent years, the number of intelligent CCTV cameras installed in public places for surveillance has increased enormously, and as a result a large amount of video data is produced every moment. Due to this situation, there is increasing demand for the distributed processing of large-scale video data. In an intelligent video analytics platform, a submitted unstructured video goes through several multidisciplinary algorithms with the aim of extracting insights and making them searchable and understandable for both humans and machines. Video analytics has applications ranging from surveillance to video content management. In this context, various industrial and scholarly solutions exist. However, most of the existing solutions rely on a traditional client/server framework to perform face and object recognition while lacking support for more complex application scenarios. Furthermore, these frameworks are rarely scaled out using distributed computing. Besides, existing works do not provide any support for low-level distributed video processing APIs (Application Programming Interfaces). They also fail to address a complete service-oriented ecosystem that meets the growing demands of consumers, researchers, and developers. In order to overcome these issues, in this paper we propose a distributed video analytics framework for intelligent video surveillance known as SIAT. The proposed framework is able to perform both real-time video stream processing and batch video analytics. Each real-time stream also corresponds to batch-processing data; hence, this work relates to the symmetry concept. Furthermore, we introduce a distributed video processing library on top of Spark. SIAT exploits state-of-the-art distributed computing technologies with the aim of ensuring scalability, effectiveness, and fault tolerance. Lastly, we implement and evaluate the proposed framework to validate our claims.
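
Since SIAT's own APIs are not reproduced here, the following is a generic PySpark sketch of how batch video analytics can be distributed: video file paths are parallelized across executors and a per-frame OpenCV detector runs on each. The file paths, application name, and Haar-cascade face detector are illustrative assumptions, not the framework's actual interface.

```python
# Generic sketch of distributed batch video processing on Spark (not SIAT's API).
import cv2
from pyspark.sql import SparkSession

def count_faces(video_path):
    """Run a simple per-frame face detector over one video and return a summary."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    capture = cv2.VideoCapture(video_path)
    frames, faces = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces += len(cascade.detectMultiScale(gray))
    capture.release()
    return video_path, frames, faces

if __name__ == "__main__":
    spark = SparkSession.builder.appName("video-batch-sketch").getOrCreate()
    # Hypothetical video files; in practice these would live on shared storage such as HDFS.
    videos = ["/data/cam01.mp4", "/data/cam02.mp4", "/data/cam03.mp4"]
    results = spark.sparkContext.parallelize(videos).map(count_faces).collect()
    for path, frames, faces in results:
        print(f"{path}: {frames} frames, {faces} face detections")
    spark.stop()
```

Each executor processes whole videos independently, which is the simplest way to spread such a workload; a finer-grained design could instead distribute segments or individual frames.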


Author(s):  
Steve Sawyer ◽  
William Gibbons

This teaching case describes the efforts of one department in a large organization to migrate from an internally developed, mainframe-based computing system to a system based on purchased software running on a client/server architecture. The case highlights issues with large-scale software implementations such as those demanded by enterprise resource planning (ERP) installations. Often, the ERP package selected by an organization does not have all the required functionality, which demands purchasing and installing additional packages (known colloquially as "bolt-ons") to provide it. These implementations lead to issues regarding oversight of the technical architecture, both project and technology governance, and the user department's capability to manage the installation of new systems.

