Novel, highly-parallel software for the online storage system of the ATLAS experiment at CERN: Design and performances

Author(s):  
Tommaso Colombo ◽  
Wainer Vandelli
Author(s):  
Anjani Tiwari ◽  
Prof. Jatin Patel ◽  
Dr. Priyanka Sharma

Cloud solutions provide a powerful computing platform that enables individuals and organisations to perform a wide range of tasks, such as using an online storage system, running business applications, developing customised software, and establishing a realistic network environment. The number of people using cloud services has increased dramatically in recent years, and a massive amount of data is now stored in cloud computing environments. As a result, data breaches in cloud services are increasing year after year, driven by hackers who constantly attempt to exploit security vulnerabilities in cloud architectures. In this paper, we investigate and analyse real-world cloud attacks in order to demonstrate the techniques used by hackers against cloud computing systems and how to prevent such malicious activities.


2015 ◽  
Vol 77 (5) ◽  
Author(s):  
Fadhilah Mat Yamin ◽  
Wan Hussain Wan Ishak

This paper discusses the use of online storage for document sharing to support teaching and learning. To date, online storage has become one of the most important tools for document storage and management. It has reduced the dependency on physical storage devices, which are bound by size limits, cost and risk. Apart from its storage capability, online storage can be used to share documents by allowing others to access an individual document or a group of documents. In this study, an online storage service, namely Dropbox, was used to share digital media such as notes, presentation materials and handouts with students, thereby reducing the use of printed materials. In addition, the documents can be safely kept and accessed at any time and from any location connected to the internet. This study adapted the Unified Theory of Acceptance and Use of Technology (UTAUT) to assess students’ perception and continued use of Dropbox for document sharing. The findings revealed that students have a positive perception of Dropbox. Furthermore, students indicated that they are keen to continue using Dropbox to support their learning.


2021 ◽  
Vol 251 ◽  
pp. 02006
Author(s):  
Mikhail Borodin ◽  
Alessandro Di Girolamo ◽  
Edward Karavakis ◽  
Alexei Klimentov ◽  
Tatiana Korchuganova ◽  
...  

The High Luminosity upgrade to the LHC, which aims for a tenfold increase in the luminosity of proton-proton collisions at an energy of 14 TeV, is expected to start operation in 2028/29 and will deliver an unprecedented volume of scientific data at the multi-exabyte scale. This amount of data has to be stored, and the corresponding storage system must ensure fast and reliable data delivery for processing by scientific groups distributed all over the world. The present LHC computing and data management model will not be able to provide the required infrastructure growth, even taking into account the expected hardware technology evolution. To address this challenge, the Data Carousel R&D project was launched by the ATLAS experiment in the fall of 2018. State-of-the-art data and workflow management technologies are under active development, and their current status is presented here.
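The carousel idea described above can be illustrated with a toy staging loop. This is a hedged sketch only, not the ATLAS implementation: a bounded window of datasets is staged from tape to disk, processed, and released, so the disk buffer stays small relative to the multi-exabyte archive. All function and parameter names are invented for the example.

```python
# Toy model of a "data carousel": keep at most `window_size` datasets
# staged on disk at any time, cycling the rest through tape.
from collections import deque

def carousel(datasets, window_size, stage, process, release):
    """Process `datasets` while keeping at most `window_size` staged on disk."""
    staged = deque()
    for ds in datasets:
        stage(ds)                  # request a tape -> disk copy
        staged.append(ds)
        if len(staged) >= window_size:
            done = staged.popleft()
            process(done)          # run jobs against the staged replica
            release(done)          # free the disk buffer for the next input
    while staged:                  # drain the remaining window
        done = staged.popleft()
        process(done)
        release(done)
```

The point of the pattern is that disk usage is bounded by `window_size` regardless of how many datasets are cycled through, at the cost of keeping the tape system continuously busy.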


2018 ◽  
Vol 174 ◽  
pp. 07003
Author(s):  
Michele Quinto ◽  
Francesco S. Cafagna ◽  
Adrian Fiergolski ◽  
Emilio Radicioni

The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and to study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics programme approved for the LHC’s Run 2 phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ∼24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC’s Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.
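The zero-suppression mentioned above runs in the Scalable Readout System hardware; as a purely illustrative software model of the logic (with invented sample values, not TOTEM data), it amounts to discarding channels whose readout value sits below a pedestal threshold:

```python
# Illustrative zero-suppression: keep only (channel index, value) pairs
# above a pedestal threshold, so mostly-empty frames shrink dramatically.
def zero_suppress(samples, threshold):
    """Return the sparse representation of a readout frame."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

frame = [0, 0, 7, 0, 0, 0, 12, 0]
print(zero_suppress(frame, threshold=3))  # → [(2, 7), (6, 12)]
```

Shrinking the per-event payload this way is what allows the same fixed output bandwidth to sustain a higher trigger rate.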


2008 ◽  
Vol 55 (1) ◽  
pp. 278-283 ◽  
Author(s):  
Sai Suman Cherukuwada ◽  
Niko Neufeld

2019 ◽  
Vol 214 ◽  
pp. 04013
Author(s):  
Jan Erik Sundermann ◽  
Jolanta Bubeliene ◽  
Ludmilla Obholz ◽  
Andreas Petzold

The computing centre GridKa serves the ALICE, ATLAS, CMS and LHCb experiments with compute and storage resources as one of the biggest WLCG Tier-1 centres worldwide. It is operated by the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology in Germany. In April 2017 a new online storage system was put into operation. In its current stage of expansion it offers the HEP experiments a capacity of 34 PB of online storage. The whole storage is partitioned into a few large file systems, one per experiment, using IBM Spectrum Scale as the software-defined-storage base layer. The system offers a combined read-write performance of 100 GB/s. It can be scaled transparently in both size and performance, allowing it to fulfil the growing needs of the LHC experiments for online storage in the coming years. In this article we discuss the general architecture of the storage system and present first experiences with its performance in production use.


Author(s):  
Y. Kokubo ◽  
W. H. Hardy ◽  
J. Dance ◽  
K. Jones

Color-coded digital image processing is accomplished by using a JEM100CX TEM SCAN and ORTEC’s LSI-11 computer-based multi-channel analyzer (EEDS-II System III) for image analysis and display. Color coding of the recorded image enables enhanced visualization using mathematical techniques such as compression, gray-scale expansion, gamma processing and filtering, without subjecting the sample to further electron-beam irradiation once the images have been stored in memory. The powerful combination of a scanning electron microscope and a computer is starting to be widely used for image processing and particle analysis. In scanning electron microscopy in particular, it is possible to capture all the information resulting from the interactions between the electron beam and the specimen by using different detectors for signals such as secondary electrons, backscattered electrons, elastically scattered electrons, inelastically scattered electrons, unscattered electrons and X-rays. Since each of these signals carries specific information arising from its physical origin, the study of a wide range of effects becomes possible.
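Two of the techniques named above, gamma processing and a color-coding lookup, can be sketched in plain Python. This is a minimal illustration only: the function names and the three-band color scheme are invented for the example, and the original system applied such operations to images held in the analyzer’s memory rather than in software like this.

```python
# Gamma processing of a stored 8-bit grayscale image via a lookup table,
# followed by a simple pseudocolor mapping of gray levels to RGB bands.
def gamma_process(image, gamma, max_val=255):
    """Apply gamma correction to an 8-bit grayscale image (nested lists)."""
    lut = [round(max_val * (v / max_val) ** gamma) for v in range(max_val + 1)]
    return [[lut[v] for v in row] for row in image]

def pseudocolor(image):
    """Color-code gray levels: dark -> blue, mid -> green, bright -> red."""
    def code(v):
        if v < 85:
            return (0, 0, 255)
        if v < 170:
            return (0, 255, 0)
        return (255, 0, 0)
    return [[code(v) for v in row] for row in image]
```

A gamma below 1 brightens mid-tones (for example, gray level 64 maps to 128 at gamma 0.5), which is one way such processing enhances detail in dark regions without re-exposing the sample to the beam.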

