atlas software
Recently Published Documents

TOTAL DOCUMENTS: 33 (five years: 7)
H-INDEX: 5 (five years: 1)

2021 ◽  
pp. 108591
Author(s):  
Shisheng Chen ◽  
Nyuk Hien Wong ◽  
Marcel Ignatius ◽  
Wen Zhang ◽  
Yang He ◽  
...  

2021 ◽  
Vol 251 ◽  
pp. 02017
Author(s):  
Nurcan Ozturk ◽  
Alex Undrus ◽  
Marcelo Vogel ◽  
Alessandra Forti

The ATLAS experiment’s software production and distribution on the grid benefit from a semi-automated infrastructure that provides up-to-date information about software usability and availability through the CVMFS distribution service for all relevant systems. The software development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation steps. For opportunistic sites that cannot access CVMFS, containerized releases are needed. These standalone containers are currently created manually to support Monte-Carlo data production at such sites. In this paper we describe an automated procedure for the containerization of ATLAS software releases within the existing software development infrastructure, its motivation, and its integration and testing in the distributed computing system.
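The automation described above could be sketched as a small generator that turns a release tag into a container definition. This is a hypothetical illustration, not the actual ATLAS build infrastructure: the base image, RPM package naming and install step are all assumptions.

```python
# Hypothetical sketch: generate a Singularity definition for a standalone
# release container. The package name derived from the release tag and the
# yum-based install step are placeholders, not the real ATLAS packaging.

def singularity_definition(release: str, base_image: str = "centos:7") -> str:
    """Return a Singularity definition text that installs one release.

    `release` is a tag such as "AthSimulation/21.0.15" (illustrative).
    """
    project, version = release.split("/")
    return "\n".join([
        "Bootstrap: docker",
        f"From: {base_image}",
        "",
        "%post",
        "    # placeholder install step; a real pipeline would pull the",
        "    # release packages produced by the CI packaging stage",
        f"    yum install -y {project}_{version.replace('.', '_')}",
        "",
        "%environment",
        f"    export ATLAS_RELEASE={release}",
    ])
```

In a CI pipeline, a step like this would run after the packaging stage, feeding the generated definition to a container build service so that every published release automatically gets a matching standalone image.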


Author(s):  
Agnes-Kriszta Szabo ◽  
Zalan Raduly ◽  
Ors-Krisztian Patakfalvi ◽  
Csaba Sulyok ◽  
Karoly Simon

2019 ◽  
Vol 14 (4) ◽  
pp. 116
Author(s):  
Maura Campra ◽  
Silvana Secinaro ◽  
Valerio Brescia

The network is a model that may be able to respond to public needs by overcoming some limitations of other approaches. In the literature, a generalizable model is often absent and rarely applicable to more than one productive sector. The case study uses the "Torino Model" to highlight the most frequent features and measurable elements of the network through a bottom-up coding approach with ATLAS software. The case is analyzed through interviews, document analysis and observation of the functioning of the network. Sustainability, management and the main network outcomes are the elements that the study examines in the case study. The analysis responds to the gap identified in the literature concerning application to a system composed of institutions. The essential elements linked to know-how, the exchange of training and information, and therefore the growth of intangible value constitute the basis for the establishment of a successful network, as the case study also highlights. The case study shows how the network between institutions reduces costs by eliminating the duplication of services offered, and increases effectiveness and efficiency through other factors such as the professional ability to respond to needs by immediately putting institutions and professionals in communication. The model confirms the ability to overcome the gap related to the network between institutions and between public and private actors, increasing the well-being of the local system.


2019 ◽  
Vol 72 (suppl 1) ◽  
pp. 189-196
Author(s):  
Elaine Cristina Novatzki Forte ◽  
Denise Elvira Pires de Pires ◽  
Maria Manuela Ferreira Pereira da Silva Martins ◽  
Maria Itayra Coelho de Souza Padilha ◽  
Dulcinéia Ghizoni Schneider ◽  
...  

ABSTRACT Objective: To analyze the nursing errors reported by the journalistic media and to interpret the main implications of this communication for the visibility of this problem. Method: Documental research, qualitative, descriptive and exploratory, with data collected from news reports in Brazil and Portugal, analyzed through hermeneutics with the resources of Atlas Software. Results: We analyzed 112 news items published between 2012 and 2016, which resulted in six categories: Year - highest occurrence in 2012; Age group of the patient - children; Professional category - nurses; Type of error - medication; Outcome - death; Possible attributed cause - occupational conditions. Final considerations: Nursing errors are a challenge for the profession, and the way they are communicated by the media is not very explanatory, contributing to negative visibility for the profession and to making society feel insecure. Improving the way they are reported in the media would contribute to the visibility of the problem without damaging the professional image.


2019 ◽  
Vol 214 ◽  
pp. 03040 ◽  
Author(s):  
Alexander Undrus

PowerPC and high-performance computers (HPC) are important resources for computing in the ATLAS experiment. Future LHC data processing will require more resources than Grid computing, which currently uses approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, joining hundreds of thousands of CPUs, but their architectures use different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, and emulation of these chipsets results in a huge performance loss. This paper describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS simulation release code, with 0.7 million lines of C++ and Python, as well as the source code of more than 50 external packages, such as ROOT and Geant4, followed by compilation and rigorous unit and integration testing. The paper reports the application of this procedure on the Titan HPC and Summit PowerPC systems at the Oak Ridge Leadership Computing Facility (OLCF).
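The core of a source build on a non-x86 machine is choosing architecture-specific compiler options before configuring the build. The following is a minimal sketch of that idea only; the flags, helper name and release layout are illustrative assumptions, not the real ATLAS build procedure.

```python
# Illustrative, architecture-aware configure step for rebuilding a large
# C++ release from source on an HPC system. The tuning flags below are
# examples; a real build would take them from a site configuration.

def cmake_configure_args(arch: str, ncores: int) -> list:
    """Return CMake arguments tuned for the given CPU architecture."""
    args = ["cmake", "-DCMAKE_BUILD_TYPE=Release"]
    if arch.startswith("ppc64"):
        # PowerPC (e.g. Summit): x86 binaries cannot run natively, so the
        # whole stack is rebuilt with POWER-specific tuning flags.
        args.append("-DCMAKE_CXX_FLAGS=-mcpu=power9 -mtune=power9")
    else:
        args.append("-DCMAKE_CXX_FLAGS=-march=x86-64")
    # Parallel build level sized to the allocated cores on the build node.
    args.append(f"-DCMAKE_BUILD_PARALLEL_LEVEL={ncores}")
    return args
```

The same driver would then invoke the build and the unit/integration test suite for each of the external packages, so that one configuration function covers both the x86 and PowerPC targets.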


2019 ◽  
Vol 214 ◽  
pp. 07005
Author(s):  
Douglas Benjamin ◽  
Taylor Childers ◽  
David Lesny ◽  
Danila Oleynik ◽  
Sergey Panitkin ◽  
...  

The HPC environment presents several challenges to the ATLAS experiment in running its automated computational workflows smoothly and efficiently, in particular regarding software distribution and I/O load. CVMFS, a vital component of the LHC Computing Grid, is not always available in HPC environments. ATLAS computing has experimented with all-inclusive containers, and later developed an environment to produce such containers for both Shifter and Singularity. The all-inclusive containers include most of the recent ATLAS software releases, database releases, and other tools extracted from CVMFS. This has helped ATLAS to distribute software automatically to HPC centres in an environment identical to that in CVMFS. It also significantly reduces the metadata I/O load on HPC shared file systems. Production operation at NERSC has shown that with this type of container we can fit transparently into the previously developed ATLAS operation methods and, at the same time, scale up to run many more jobs.
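Building an all-inclusive container amounts to selecting which parts of the CVMFS tree to copy into the image: the requested software releases plus the shared directories (database releases, tools) every job needs. The sketch below illustrates that selection step only; the directory names under the repository root are invented for the example, not the real CVMFS layout.

```python
# Minimal sketch of a bundle manifest for an all-inclusive container:
# which CVMFS subtrees to copy into the image. Directory names below
# ("sw/database", "sw/tools", "sw/software") are illustrative only.
import os

def bundle_manifest(cvmfs_root: str, releases: list) -> list:
    """Return the directories under `cvmfs_root` to copy into the image.

    Only the requested releases are bundled, keeping the image far
    smaller than a full repository copy, while the shared database and
    tools directories are always included.
    """
    shared = [os.path.join(cvmfs_root, d) for d in ("sw/database", "sw/tools")]
    rels = [os.path.join(cvmfs_root, "sw/software", r) for r in releases]
    return shared + rels
```

Because every file the jobs touch is baked into the image, lookups that would otherwise hit the shared file system's metadata servers are served from the local image instead, which is the source of the I/O-load reduction mentioned above.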


2018 ◽  
Vol 1085 ◽  
pp. 032033
Author(s):  
E Ritsch ◽  
G Gaycken ◽  
C Gumpert ◽  
A Krasznahorkay ◽  
W Lampl ◽  
...  

2018 ◽  
Vol 1085 ◽  
pp. 032036
Author(s):  
A J Gamel ◽  
U Schnoor ◽  
K Meier ◽  
F Bührer ◽  
M Schumacher

2017 ◽  
Vol 898 ◽  
pp. 072009
Author(s):  
J Elmsheuser ◽  
A Krasznahorkay ◽  
E Obreshkov ◽  
A Undrus ◽  
