Key systems in large systems implementing distributed data processing and storage technologies

Author(s):  
V.G. Belenkov ◽  
V.I. Korolev ◽  
V.I. Budzko ◽  
D.A. Melnikov

The article discusses the features of using cryptographic information protection means (CIPM) in the environment of distributed processing and storage of data of large information and telecommunication systems (LITS). A brief characterization is given of the properties of the cryptographic protection control subsystem, the key system (KS). A description is given of symmetric and asymmetric cryptographic systems, as required to describe the problem of using a KS in a LITS. Functional and structural models of the use of KS and CIPM in LITS are described. Generalized information about the features of using KS in LITS is given. The obtained results form the basis for further work on the development of the architecture and principles of KS construction in LITS that implement distributed data processing and storage technologies. They can be used both as a methodological guide and when carrying out specific work on the creation and development of systems that implement these technologies, as well as when forming technical specifications for work on the creation of such systems.
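
As a concrete illustration of the symmetric and asymmetric primitives the abstract contrasts, the following is a minimal sketch in Python using the third-party cryptography package. It shows only the basic encrypt/decrypt round trips, not the key-distribution logic of an actual KS; all variable names are illustrative, not taken from the article.

```python
# Minimal sketch: symmetric vs. asymmetric encryption round trips,
# using the "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"protected record"

# Symmetric: one shared secret key both encrypts and decrypts,
# so the KS must distribute that secret to every party.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
assert f.decrypt(f.encrypt(message)) == message

# Asymmetric: the public key encrypts, only the private key decrypts,
# so the KS only has to distribute authentic public keys.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(message, oaep)
assert private_key.decrypt(ciphertext, oaep) == message
```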

2019 ◽  
Vol 214 ◽  
pp. 04010
Author(s):  
Álvaro Fernández Casaní ◽  
Dario Barberis ◽  
Javier Sánchez ◽  
Carlos García Montoro ◽  
Santiago González de la Hoz ◽  
...  

The ATLAS EventIndex currently runs in production in order to build a complete catalogue of events for experiments with large amounts of data. The current approach is to index all final produced data files at CERN Tier0, and at hundreds of grid sites, with a distributed data collection architecture that uses Object Stores to temporarily hold the conveyed information, while references to it are sent through a Messaging System. The final backend of all the indexed data is a central Hadoop infrastructure at CERN; an Oracle relational database is used for faster access to a subset of this information. In the future of ATLAS, the event, rather than the file, should be the atomic information unit for metadata, in order to accommodate future data processing and storage technologies. Files will no longer be static quantities: they may aggregate data dynamically and allow event-level granularity processing in heavily parallel computing environments. This also simplifies the handling of loss and/or extension of data. In this sense the EventIndex may evolve towards a generalized whiteboard, with the ability to build collections and virtual datasets for end users. This paper describes the current Distributed Data Collection Architecture of the ATLAS EventIndex project, with details of the Producer, Consumer and Supervisor entities, and of the protocol and the information temporarily stored in the ObjectStore. It also shows the data flow rates and performance achieved since the new approach, with the Object Store as a temporary store, was put in production in July 2017. We review the challenges imposed by the expected increasing rates, which will reach 35 billion new real events per year in Run 3 and 100 billion new real events per year in Run 4. For simulated events the numbers are even higher, with 100 billion events per year in Run 3 and 300 billion events per year in Run 4. We also outline the challenges we face in order to accommodate future use cases in the EventIndex.
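
To make the described data flow concrete, here is a schematic Python sketch of the producer/consumer pattern the abstract outlines, with an in-memory dictionary standing in for the Object Store and a queue standing in for the Messaging System. This illustrates the pattern only, not the actual EventIndex implementation; every name in it is hypothetical.

```python
import json
import queue
import uuid

# Stand-ins for illustration: in the real system the temporary store is
# an Object Store service and the references travel over a message broker.
object_store = {}            # key -> serialized batch of event records
message_bus = queue.Queue()  # carries only references (keys), not data

def produce(events):
    """Producer: park an indexed batch in the object store and publish
    a reference to it on the message bus."""
    key = str(uuid.uuid4())
    object_store[key] = json.dumps(events)
    message_bus.put(key)

def consume(backend):
    """Consumer: resolve references from the bus, pull each batch from
    the temporary store, and load it into the permanent backend."""
    while not message_bus.empty():
        key = message_bus.get()
        backend.extend(json.loads(object_store.pop(key)))

hadoop_backend = []  # stand-in for the central Hadoop infrastructure
produce([{"run": 1, "event": 1}, {"run": 1, "event": 2}])
consume(hadoop_backend)
print(hadoop_backend)
```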


1982 ◽  
Vol 65 (5) ◽  
pp. 1279-1282
Author(s):  
Thomas J Birkel ◽  
Laurence R Dusold

Abstract Distributed data processing has been accomplished by a computer system in which laboratory instrument data are collected on a PEAK-11 system for preliminary processing and generation of initial reports. When further processing is required, or when archival storage of raw or processed data is desired, data are transferred over telephone lines to an IBM 3033; an IBM 7406 Device Coupler is used to handle protocol conversion and "handshaking." User-written programs, in APL.SV on the IBM machine and in Assembly Language on the PEAK-11 system, effect the bidirectional transfer of data. The distributed processing approach allows efficient use of expensive peripherals while maintaining short response times.


BMC Genomics ◽  
2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Onur Yukselen ◽  
Osman Turkyilmaz ◽  
Ahmet Rasit Ozturk ◽  
Manuel Garber ◽  
Alper Kucukural

1979 ◽  
Vol 21 (2) ◽  
Author(s):  
L. J. Heinrich

The article explains the author's understanding of the term "Computerleistung am Arbeitsplatz" ("computing power at the workplace") as a catchphrase for a progressive design philosophy for computer-supported information systems. This philosophy implies both the application of modern hardware and software technologies, as they will shape the 1980s, and an increasing emphasis on user needs, both those determined by the work task and subjective ones. It thereby combines "Distributed Data Processing" as a technological concept with "user orientation". The design areas of user orientation (working tools and working environment, the human-computer interaction interface, and work organization) are explained. Design measures are given by way of example, and reference is made to the further literature, foremost among it the book "Computerleistung am Arbeitsplatz - benutzerorientiertes Distributed Data Processing", published by Oldenbourg-Verlag.

