Computing activities at the Spanish Tier-1 and Tier-2s for the ATLAS experiment towards the LHC Run3 and High-Luminosity periods

2020 ◽  
Vol 245 ◽  
pp. 07027
Author(s):  
Santiago González de la Hoz ◽  
Carles Acosta-Silva ◽  
Javier Aparisi Pozo ◽  
Jose del Peso ◽  
Álvaro Fernández Casani ◽  
...  

The ATLAS Spanish Tier-1 and Tier-2s have more than 15 years of experience in the deployment, development and successful operation of LHC computing components. The sites are already actively participating in, and even coordinating, emerging R&D computing activities and the development of the new computing models needed for the Run 3 and High-Luminosity LHC periods. In this contribution, we present details on the integration of new components, such as High Performance Computing resources to execute ATLAS simulation workflows. The development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations, is also shown. Improvements in data organization, management and access through storage consolidations ("data lakes"), the use of data caches, and improved experiment data catalogs such as the Event Index are explained. The design and deployment of new analysis facilities using GPUs together with CPUs, and techniques such as Machine Learning, are also presented. Tier-1 and Tier-2 sites are, and will continue to be, contributing significant R&D in computing, evaluating different models for improving the performance of computing and data storage capacity in the High-Luminosity LHC era.
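
As an illustration of the kind of opportunistic HPC integration described above, the following sketch shows a toy dispatcher that routes simulation payloads either to conventional grid queues or to an HPC backfill queue. The queue names, limits and scoring rule are hypothetical assumptions for illustration, not the ATLAS production workload-management logic.

```python
# Illustrative sketch only: route CPU-heavy simulation tasks to HPC backfill when
# it fits, otherwise fall back to regular grid queues.
from dataclasses import dataclass

@dataclass
class Queue:
    name: str
    kind: str            # "grid" or "hpc"
    free_slots: int
    walltime_limit_h: int

def pick_queue(queues, cores_needed, est_walltime_h):
    """Prefer HPC backfill for short, CPU-heavy simulation tasks; fall back to grid."""
    candidates = [q for q in queues
                  if q.free_slots >= cores_needed and q.walltime_limit_h >= est_walltime_h]
    if not candidates:
        return None
    # Simple heuristic: HPC first (cheapest opportunistic cycles), then most free slots.
    return sorted(candidates, key=lambda q: (q.kind != "hpc", -q.free_slots))[0]

queues = [Queue("ES-T1-grid", "grid", 400, 96), Queue("HPC-backfill", "hpc", 2048, 12)]
print(pick_queue(queues, cores_needed=64, est_walltime_h=8).name)   # -> HPC-backfill
```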

2018 ◽  
Vol 1 (1) ◽  
pp. 16
Author(s):  
Rahmat Fadhli ◽  
Rifqi Zaeni Achmad Syam

Data management covers all activities related to data other than its direct use, including data organization, data backup, data archiving, data sharing and publishing, ensuring the security of confidential data, and data synchronization. Data management is an important activity carried out by individuals or organizations so that data is easy to access, secure, and available to its users. Data management activities in the ASEAN Youth Friendship Network (AYFN) are carried out by the Project Officer, who is directly involved in the data management process: storing and processing data to obtain metadata. The data management process carried out by AYFN consists of five stages: planning, collecting, processing, data organization, and presentation and delivery.
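
To make the five stages concrete, here is a minimal Python sketch of such a pipeline; the record fields, file names and grouping rule are hypothetical illustrations, not AYFN's actual workflow.

```python
# Toy five-stage data management flow: plan -> collect -> process -> organize -> present.
import csv, io
from collections import defaultdict

def plan():
    # Planning: decide which fields to keep and where the raw data comes from.
    return {"fields": ["participant", "country"], "source": "registration_form.csv"}

def collect(raw_csv, spec):
    # Collecting: read raw records and keep only the planned fields.
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [{f: row[f] for f in spec["fields"]} for row in reader]

def process(rows):
    # Processing: basic cleaning/normalization.
    return [{k: v.strip().title() for k, v in row.items()} for row in rows]

def organize(rows):
    # Organizing: group participants by country.
    by_country = defaultdict(list)
    for row in rows:
        by_country[row["country"]].append(row["participant"])
    return dict(by_country)

def present(grouped):
    # Presentation: simple summary report.
    for country, people in sorted(grouped.items()):
        print(f"{country}: {len(people)} participant(s)")

raw = "participant,country\nrani,indonesia\nsomsak,thailand\n"
present(organize(process(collect(raw, plan()))))
```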


2021 ◽  
Author(s):  
Herman Cheung

The intent of this thesis research is to develop a new methodology to improve existing nuclear processes in an efficient, precise, and cost-effective way. This thesis presents three new designs: Secure Trilateral Access Control (STAC), Network-Integrated Nuclear Operation (NINO), and network-driven Condition-Based Nuclear Maintenance (CBNM). The STAC design has three tiers: Tier-1 ensures security controls on external accesses to the new nuclear network; Tier-2 ensures qualification controls for carrying out nuclear operations; Tier-3 ensures qualification controls for nuclear maintenance. The NINO design increases the efficiency of conducting nuclear operations and ensures the correctness of executing targeted operations. The CBNM design increases efficiency and cost savings for nuclear maintenance by scheduling maintenance based on equipment conditions, avoiding extremely expensive forced outages. The feasibility and practicality of these new designs are illustrated analytically and numerically in the thesis. The significance of these designs is substantial, resulting in large nuclear operation cost savings.
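
A minimal sketch of the three-tier idea summarized above (an external access check, then separate qualification checks for operations and maintenance) might look as follows; the network ranges, users and qualification tables are illustrative assumptions, not the thesis design itself.

```python
import ipaddress

# Tier-1: external network access control (toy rule).
ALLOWED_NETWORKS = {"10.20.0.0/16"}
# Tier-2 / Tier-3: qualification tables for operations and maintenance (toy data).
OPERATOR_QUALIFICATIONS = {"alice": {"reactor-ops"}}
MAINTAINER_QUALIFICATIONS = {"bob": {"pump-maintenance"}}

def tier1_network_ok(src_ip):
    return any(ipaddress.ip_address(src_ip) in ipaddress.ip_network(net)
               for net in ALLOWED_NETWORKS)

def authorize(user, src_ip, action, task):
    """Tier-1 gate first, then the action-specific qualification check."""
    if not tier1_network_ok(src_ip):
        return False
    if action == "operate":                       # Tier-2
        return task in OPERATOR_QUALIFICATIONS.get(user, set())
    if action == "maintain":                      # Tier-3
        return task in MAINTAINER_QUALIFICATIONS.get(user, set())
    return False

print(authorize("alice", "10.20.3.7", "operate", "reactor-ops"))        # True
print(authorize("bob", "192.168.1.5", "maintain", "pump-maintenance"))  # False: fails Tier-1
```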


2020 ◽  
Vol 245 ◽  
pp. 04045
Author(s):  
Claire Adam Bourdarios ◽  
Jean-Claude Chevaleyre ◽  
Frédérique Chollet ◽  
Sabine Crépé-Renaudin ◽  
Christine Gondrand ◽  
...  

With the increase of storage needs at the High-Luminosity LHC horizon, data management and access will be very challenging. The evaluation of possible solutions within the WLCG Data Organization, Management and Access (DOMA) activity is essential to select the most suitable ones from the experiment and site points of view. Four teams hosting Tier-2s for ATLAS, with storage based on DPM technology, have pooled their expertise and computing infrastructures to build a testbed hosting a DPM federated storage called FR-ALPES. This note describes the infrastructure put in place and its integration within the ATLAS Grid infrastructure, and presents the first results.
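
A basic functional probe against such a federated endpoint could look like the sketch below, which times a copy of a test file to the federation. The endpoint URL and local file are placeholders, and the use of the gfal-copy command-line tool is an assumption about the available tooling, not a description of the FR-ALPES test procedure.

```python
# Illustrative sketch: time uploads of a test file to a (placeholder) federated endpoint.
import subprocess, time

ENDPOINTS = [
    "davs://fr-alpes-federation.example.org:443/dpm/example.org/home/atlas/test/",
]
LOCAL_TEST_FILE = "/tmp/1gb_test_file.bin"   # hypothetical pre-generated test file

def probe(endpoint):
    dest = endpoint + "probe_" + str(int(time.time()))
    start = time.time()
    result = subprocess.run(["gfal-copy", LOCAL_TEST_FILE, dest],
                            capture_output=True, text=True)
    elapsed = time.time() - start
    return result.returncode == 0, elapsed

for ep in ENDPOINTS:
    ok, secs = probe(ep)
    print(f"{ep} -> {'OK' if ok else 'FAILED'} in {secs:.1f}s")
```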


2020 ◽  
Vol 17 (9) ◽  
pp. 4411-4418
Author(s):  
S. Jagannatha ◽  
B. N. Tulasimala

In the world of information and communication technology (ICT), Cloud Computing has been the buzzword. Its definition keeps shifting with the way technologists use it in different environments, and as a term it remains contentious: definitions are tied to particular applications, with no unanimous formulation, making the concept altogether elusive. In spite of this, it is this technology that is revolutionizing the traditional use of computer hardware, software, data storage media and processing mechanisms, with many benefits to the stakeholders. In the past, the use of autonomous computers, and of nodes interconnected into computer networks with shared software resources, had minimized the cost of hardware and, to a certain extent, of software. Evolutionary changes in computing technology over a few decades have thus brought changes of platform and environment in machine architecture, operating systems, network connectivity and application workload, and have made the commercial use of the technology predominant. Instead of centralized systems, parallel and distributed systems are increasingly preferred for solving computational problems in the business domain; such hardware is ideal for solving large-scale problems over the internet, and this computing model is data-intensive and network-centric. Many organizations using ICT found storing huge volumes of data, maintaining and processing them, and communicating over the internet to automate the entire process a challenge. In this paper we explore the growth of cloud computing technology over several years: how high-performance computing and high-throughput computing systems enhance computational performance, and how cloud computing, according to various experts, the scientific community and the service providers, is going to be more cost effective across different dimensions of business.


2020 ◽  
Vol 15 (1) ◽  
pp. 15
Author(s):  
Felix Bach ◽  
Björn Schembera ◽  
Jos Van Wezel

Research data, as a truly valuable good in science, must be saved and subsequently kept findable, accessible and reusable for several years for reasons of proper scientific conduct. However, managing long-term storage of research data is a burden for institutes and researchers: because of the sheer size and the required retention time, suitable storage providers are hard to find. Aiming to solve this puzzle, the bwDataArchive project started development of a long-term research data archive that is reliable, cost effective and able to store multiple petabytes of data. The hardware consists of data storage on magnetic tape, interfaced with disk caches and nodes for data movement and access. On the software side, the High Performance Storage System (HPSS) was chosen for its proven ability to reliably store huge amounts of data; however, the implementation of bwDataArchive is not dependent on HPSS. For authentication, bwDataArchive is integrated into the federated identity management for educational institutions in the State of Baden-Württemberg in Germany. The archive features data protection by means of a dual copy at two distinct locations on different tape technologies, data accessibility through common storage protocols, data retention assurance for more than ten years, data preservation with checksums, and data management capabilities supported by a flexible directory structure allowing sharing and publication. As of September 2019, the bwDataArchive holds over 9 PB and 90 million files and sees a constant increase in usage and users from many communities.
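
The "data preservation with checksums" feature suggests an integrity check of the following kind; the manifest layout and the choice of SHA-256 are assumptions for illustration, not the bwDataArchive internals.

```python
# Minimal sketch: verify archived files against a checksum manifest.
import hashlib, pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so arbitrarily large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path):
    """Manifest lines: '<hex digest>  <relative path>' (same layout as sha256sum output)."""
    base = pathlib.Path(manifest_path).parent
    bad = []
    for line in pathlib.Path(manifest_path).read_text().splitlines():
        digest, rel = line.split(maxsplit=1)
        if sha256_of(base / rel) != digest:
            bad.append(rel)
    return bad

# Example (hypothetical path): an empty list means every listed file is intact.
# bad = verify("/archive/project42/MANIFEST.sha256")
```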


2021 ◽  
Vol 251 ◽  
pp. 02014
Author(s):  
Haykuhi Musheghyan ◽  
Samuel Ambroj Pérez ◽  
Andreas Petzold ◽  
Doris Ressmann ◽  
Jan Erik Sundermann

Tape storage remains the most cost-effective system for safe long-term storage of petabytes of data and for reliably accessing it on demand. It has long been widely used by Tier-1 centers in WLCG. GridKa uses tape storage systems for LHC and non-LHC HEP experiments. The performance requirements on the tape storage systems increase every year, creating a growing number of challenges in providing a scalable and reliable system. Providing high-performance, scalable and reliable tape storage is therefore a top priority for Tier-1 centers in WLCG. At GridKa, various performance tests were recently carried out to investigate possible bottlenecks in the tape storage setup. As a result, several bottlenecks were identified and resolved, leading to a significant improvement in the overall tape storage performance. These results were achieved in a test environment; bringing them into the production environment required a great effort, and among many other things new software had to be developed to interact with the tape management software. This contribution provides detailed information on the latest improvements and changes to the GridKa tape storage setup.
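
A simple concurrent-write benchmark of the kind one might use to look for bottlenecks in front of a tape buffer is sketched below; the target directory, file size and stream count are made-up parameters, and this is not the GridKa test suite.

```python
# Illustrative sketch: measure aggregate write throughput with several parallel streams.
import os, time, concurrent.futures

TARGET_DIR = "/mnt/tape_buffer/benchmark"   # hypothetical disk-buffer mount point
FILE_SIZE = 4 * 1024**3                     # 4 GiB per stream
STREAMS = 8

def write_stream(i, block=64 * 1024**2):
    """Write FILE_SIZE bytes of random data in 64 MiB blocks; return bytes written."""
    path = os.path.join(TARGET_DIR, f"stream_{i}.dat")
    written = 0
    with open(path, "wb") as f:
        while written < FILE_SIZE:
            f.write(os.urandom(block))
            written += block
    return written

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=STREAMS) as pool:
    total = sum(pool.map(write_stream, range(STREAMS)))
elapsed = time.time() - start
print(f"aggregate write throughput: {total / elapsed / 1024**3:.2f} GiB/s over {STREAMS} streams")
```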


TAPPI Journal ◽  
2018 ◽  
Vol 17 (09) ◽  
pp. 507-515 ◽  
Author(s):  
David Skuse ◽  
Mark Windebank ◽  
Tafadzwa Motsi ◽  
Guillaume Tellier

When pulp and minerals are co-processed in aqueous suspension, the mineral acts as a grinding aid, facilitating the cost-effective production of fibrils. Furthermore, this processing allows the use of robust industrial milling equipment. There are 40,000 dry metric tons of mineral/microfibrillated cellulose (MFC) composite production capacity in operation across three continents. These mineral/MFC products have been cleared by the FDA for use as a dry and wet strength agent in coated and uncoated food-contact paper and paperboard applications. We have previously reported that the use of these mineral/MFC composite materials in fiber-based applications generally improves wet and dry mechanical properties, with concomitant opportunities for cost savings, property improvements, or grade developments, and that the materials can be prepared using a range of fibers and minerals. Here, we: (1) report the development of new products that offer improved performance, (2) compare the performance of these new materials with that of a range of other nanocellulosic material types, (3) illustrate the performance of these new materials in reinforcement (paper and board) and viscosification applications, and (4) discuss product form requirements for different applications.


2011 ◽  
Vol 39 (3) ◽  
pp. 193-209 ◽  
Author(s):  
H. Surendranath ◽  
M. Dunbar

Abstract Over the last few decades, finite element analysis has become an integral part of the overall tire design process. Engineers need to perform a number of different simulations to evaluate new designs and study the effect of proposed design changes. However, tires pose formidable simulation challenges due to the presence of highly nonlinear rubber compounds, embedded reinforcements, complex tread geometries, rolling contact, and large deformations. Accurate simulation requires careful consideration of these factors, resulting in extensive turnaround times that often prolong the design cycle. It is therefore critical to explore means of reducing the turnaround time while producing reliable results. Compute clusters have recently become a cost-effective means to perform high performance computing (HPC), and distributed-memory parallel solvers designed to take advantage of compute clusters have become increasingly popular. In this paper, we examine the use of HPC for various tire simulations and demonstrate how it can significantly reduce simulation turnaround time. Abaqus/Standard is used for routine tire simulations like footprint and steady-state rolling; Abaqus/Explicit is used for transient rolling and hydroplaning simulations. The run times and scaling data corresponding to models of various sizes and complexity are presented.
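
As a worked example of how such run times translate into the scaling figures typically reported, the short script below computes speedup and parallel efficiency from wall-clock times; the numbers are illustrative, not the paper's data.

```python
# Illustrative scaling calculation: speedup and parallel efficiency from wall-clock times.
run_times_s = {1: 8200.0, 8: 1150.0, 32: 340.0, 128: 110.0}   # cores -> wall time (s)

baseline_cores = min(run_times_s)
baseline_time = run_times_s[baseline_cores]

for cores, t in sorted(run_times_s.items()):
    speedup = baseline_time / t
    efficiency = speedup / (cores / baseline_cores)
    print(f"{cores:4d} cores: speedup {speedup:6.1f}x, parallel efficiency {efficiency:5.1%}")
```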

