The INFN scientific computing infrastructure: present status and future evolution

2019 ◽  
Vol 214 ◽  
pp. 03001
Author(s):  
T. Boccali ◽  
G. Carlino ◽  
L. dell’Agnello

The INFN scientific computing infrastructure is composed of more than 30 sites, ranging from CNAF (Tier-1 for LHC and main data center for nearly 30 other experiments) and nine LHC Tier-2s, to ∼ 20 smaller sites, including LHC Tier-3s and non-LHC experiment farms. A comprehensive review of the installed resources, together with plans for the near future, was collected during the second half of 2017 and provides a general view of the infrastructure, its costs and its potential for expansion; it also shows the general trends in the software and hardware solutions adopted across a complex organisation such as INFN. As of the end of 2017, the total installed CPU power exceeded 800 kHS06 (∼ 80,000 cores), while the total net storage capacity was over 57 PB on disk and 97 PB on tape; the vast majority of resources (95% of cores and 95% of storage) is concentrated in the 16 largest centers. Future evolution points towards consolidation into large centers; this has required a rethinking of access policies and protocols in order to enable diverse scientific communities beyond the LHC to fruitfully exploit INFN resources. On top of that, the infrastructure will be used beyond INFN experiments and will be part of the wider Italian infrastructure, comprising other research institutes, universities and HPC centers.
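As a back-of-the-envelope illustration (not taken from the paper), a short Python sketch of the arithmetic implied by the figures above: the average HS06 rating per core and the resources concentrated in the 16 largest centers.

```python
# Aggregate figures quoted in the abstract (end of 2017).
total_hs06 = 800_000        # total installed CPU power, HS06
total_cores = 80_000        # approximate core count
disk_pb, tape_pb = 57, 97   # net storage capacity, PB

# Average benchmark rating per core implied by the totals.
hs06_per_core = total_hs06 / total_cores
print(f"~{hs06_per_core:.0f} HS06 per core")  # ~10 HS06 per core

# Share of resources in the 16 largest centers, per the 95% figure.
big_center_share = 0.95
print(f"cores in top 16 sites: ~{big_center_share * total_cores:,.0f}")
print(f"disk in top 16 sites:  ~{big_center_share * disk_pb:.1f} PB")
```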

Author(s):  
Abraham Pouliakis ◽  
Stavros Archondakis ◽  
Efrossyni Karakitsou ◽  
Petros Karakitsos

Cloud computing is changing the way enterprises, institutions, and people understand, perceive, and use current software systems. It is the concept of creating a computing grid over the Internet, aimed at the shared use of resources such as software and hardware. Cloud-based system architectures provide many advantages in terms of scalability, maintainability, and massive data processing. By means of cloud computing technology, cytopathologists can efficiently manage imaging units, using the latest software and hardware without prohibitive costs. Cloud computing systems used by cytopathology departments can follow public, private, hybrid, or community deployment models. Using cloud applications, infrastructure, storage services, and processing power, cytopathology laboratories can avoid heavy spending on the maintenance of costly applications and on image storage and sharing. Cloud computing allows imaging flexibility and may be used to create a virtual mobile office. Security and privacy issues have to be addressed in order to ensure the wide implementation of cloud computing in the near future. Cloud computing is not yet widely used for the various tasks related to cytopathology, but there are numerous fields to which it can be applied. The envisioned advantages for everyday laboratory workflow, and eventually for patients, are significant. This chapter explores them.


2021 ◽  
Vol 13 (9) ◽  
pp. 229
Author(s):  
David P. Anderson

Volunteer computing uses millions of consumer computing devices (desktop and laptop computers, tablets, phones, appliances, and cars) to do high-throughput scientific computing. It can provide exascale capacity, and it is a scalable and sustainable alternative to data-center computing. Currently, about 30 science projects use volunteer computing in areas ranging from biomedicine to cosmology. Each project has application programs with particular hardware and software requirements (memory, GPUs, VM support, and so on). Each volunteered device has specific hardware and software capabilities, and each device owner has preferences for which science areas they want to support. This leads to a scheduling problem: how to dynamically assign devices to projects in a way that satisfies various constraints and balances various goals. We describe the scheduling policy used in Science United, a global manager for volunteer computing.
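Science United's actual policy is described in the paper, not the abstract; the sketch below is a hypothetical greedy matcher in Python that illustrates only the stated problem: devices advertise capabilities, projects declare requirements, and owners restrict which science areas they support.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    area: str                                 # science area, e.g. "biomedicine"
    needs: set = field(default_factory=set)   # required capabilities, e.g. {"gpu"}
    share: float = 0.0                        # capacity received so far

@dataclass
class Device:
    owner: str
    caps: set     # hardware/software capabilities of the device
    areas: set    # science areas its owner is willing to support

def assign(device: Device, projects: list[Project]) -> Project | None:
    """Among projects whose requirements the device meets and whose
    area the owner allows, pick the one furthest below its fair share
    (greedy, illustrative only -- not Science United's policy)."""
    eligible = [p for p in projects
                if p.needs <= device.caps and p.area in device.areas]
    return min(eligible, key=lambda p: p.share, default=None)

projects = [Project("Rosetta@home", "biomedicine", {"vm"}, share=0.6),
            Project("Einstein@Home", "astrophysics", {"gpu"}, share=0.4)]
device = Device("alice", caps={"gpu", "vm"}, areas={"astrophysics"})
print(assign(device, projects).name)   # -> Einstein@Home
```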


Author(s):  
Gemma Marfany

Can humans control the future evolution of our species? Based on current knowledge in genetics, one can infer and extrapolate what may happen in the near future. After all, if we are to predict the future, we must first understand the foundations of our present. To answer the opening question, I will briefly present what we know about our genome and whether we have enough data to infer who we are (the genotype–phenotype correlation), and then discuss new technological advances and their potential impact on our evolution.


Author(s):  
Septian Sony Hermawan ◽  
RD Rohmat Saedudin

CV Media Smart is a company involved in the procurement of IT equipment for schools and offices. With its wide coverage of schools and companies, CV Media Smart wants to add business processes, so a data center is needed to support both the existing and the added processes. This research focuses on the cooling system and airflow. The research method is the NDLC (Network Development Life Cycle), chosen because it is oriented to development processes such as business-process and infrastructure design. The standard used in this research is TIA-942. The result of this research is a data center design that meets the TIA-942 Tier 1 standard.
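The paper's actual cooling figures are not given in the abstract; as a hedged illustration of the kind of airflow sizing such a design involves, the following Python sketch applies the standard sensible-heat relation Q = ρ·cp·V̇·ΔT with assumed loads (all values are illustrative, not from the paper).

```python
# Minimal airflow sizing sketch (illustrative assumptions only).
RHO = 1.2      # air density, kg/m^3
CP = 1.006     # specific heat of air, kJ/(kg*K)

def required_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to remove a sensible heat load:
    Q = rho * cp * V * dT  =>  V = Q / (rho * cp * dT)."""
    return it_load_kw / (RHO * CP * delta_t_k)

# Example: a small Tier 1 room with 20 kW of IT load and a 12 K
# temperature rise across the racks (assumed values).
flow = required_airflow_m3s(20, 12)
print(f"{flow:.2f} m^3/s  (~{flow * 3600:.0f} m^3/h)")
```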


Author(s):  
Frans Fernando Asali ◽  
Irawan Afrianto

Puslitbang XYZ is a research and development center attentive to the use of information technology (IT). Puslitbang XYZ has an IT division whose resources are housed in a facility called a data center. The Puslitbang XYZ data center currently has a small computer room, room construction that is not yet ideal, inadequate labelling and documentation, and servers that occasionally restart. These conditions disrupt the research activities of Puslitbang XYZ. To address these problems, the data center needs to be standardised so that it meets the criteria of a good data center: availability, scalability/flexibility, and security. The research uses gap analysis as its reference method and TIA-942 as the standard for the data center design. The results of this research are complete documentation of the current state of the data center infrastructure, a comparison of the current condition and the proposed design against the TIA-942 Tier 1 requirements, and a data center design based on the TIA-942 Tier 1 approach.
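As a hypothetical sketch of the gap-analysis step described above (the criteria and values below are illustrative assumptions, not taken from TIA-942 or from the paper):

```python
# Hypothetical gap analysis: compare current conditions with Tier 1
# targets (criteria and values are illustrative assumptions).
tier1_target = {
    "dedicated_computer_room": True,
    "labelled_cabling": True,       # labelling and documentation in place
    "ups_installed": True,          # N (non-redundant) capacity suffices
}
current = {
    "dedicated_computer_room": True,
    "labelled_cabling": False,      # labelling/documentation incomplete
    "ups_installed": False,         # servers occasionally restart
}

gaps = {k: (current[k], v) for k, v in tier1_target.items()
        if current[k] != v}
for criterion, (now, target) in gaps.items():
    print(f"GAP: {criterion}: current={now}, target={target}")
```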


2008 ◽  
Vol 07 (02) ◽  
pp. C03
Author(s):  
Stefano Cozzini

My intention is to analyze how, where, and whether grid computing technology is truly enabling a new way of doing science (so-called ‘e-science’). I will base my views on the experiences accumulated thus far in a number of scientific communities which we have provided with the opportunity of using grid computing. I shall first define some basic terms and concepts, and then discuss a number of specific cases in which the use of grid computing has actually made possible a new method for doing science. I will then present a case in which this did not result in a change in research methods, try to identify the reasons for this failure, and analyze the future evolution of grid computing. I will conclude by introducing and commenting on the concept of ‘cloud computing’, the approach offered by major industrial actors (Google/IBM and Amazon being among the most important), and what impact this technology might have on the world of research.


2013 ◽  
Vol 6 (1) ◽  
pp. 719-726
Author(s):  
Fatemeh Binesh ◽  
Saravanan Muthaiyah

Abstract: Nowadays, ICT-sector activities, and data centers in particular, are recognised as an important environmental hazard. With the increasing popularity of the Internet and cloud computing, this threat seems set to worsen in the near future. Despite this growing importance, little has yet been done about the environmental effects of data centers, and in particular about measuring their level of green compliance across all three Rs of waste management (Reduce, Reuse and Recycle). This paper introduces a dashboard for evaluating a data center's level of green compliance regardless of its tier. Although the dashboard is proposed on the basis of conditions in Malaysia's data centers, it can still be of benefit to data center managers in other parts of the world and to researchers, opening up new research possibilities.
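The abstract does not enumerate the dashboard's metrics; the following is a minimal sketch of how a 3R compliance score might be aggregated, with weights and indicator values as pure assumptions rather than the paper's method.

```python
# Hypothetical 3R (Reduce, Reuse, Recycle) compliance score;
# each indicator is rated 0..1 and the weights are assumptions.
WEIGHTS = {"reduce": 0.4, "reuse": 0.3, "recycle": 0.3}

def green_score(indicators: dict[str, float]) -> float:
    """Weighted average of the three R scores, in [0, 1]."""
    return sum(WEIGHTS[r] * indicators[r] for r in WEIGHTS)

# Example self-assessment for one data center (made-up values).
dc = {"reduce": 0.7,    # e.g. virtualisation, efficient cooling
      "reuse": 0.5,     # e.g. redeploying decommissioned servers
      "recycle": 0.8}   # e.g. certified e-waste recycling
print(f"green compliance score: {green_score(dc):.2f}")  # 0.67
```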


2019 ◽  
Vol 214 ◽  
pp. 07030
Author(s):  
Marco Aldinucci ◽  
Stefano Bagnasco ◽  
Matteo Concas ◽  
Stefano Lusso ◽  
Sergio Rabellino ◽  
...  

Obtaining CPU cycles on an HPC cluster is nowadays relatively simple, and sometimes even cheap, for academic institutions. However, in most cases providers of HPC services do not allow changes to the configuration, the implementation of special features, or lower-level control of the computing infrastructure, for example for testing experimental configurations. The variety of use cases proposed by several departments of the University of Torino, including solid-state chemistry, computational biology, genomics and many others, called for different and sometimes conflicting configurations; furthermore, several R&D activities in the field of scientific computing, with topics ranging from GPU acceleration to Cloud Computing technologies, needed a platform on which to be carried out. The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multi-purpose, flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Torino branch of the Istituto Nazionale di Fisica Nucleare. It aims to provide a flexible and reconfigurable infrastructure catering to a wide range of scientific computing needs, as well as a platform for R&D activities on computational technologies themselves. We describe some of the use cases that prompted the design and construction of the system, its architecture, and a first characterisation of its performance using synthetic benchmark tools and a few realistic use-case tests.
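The abstract does not name the benchmark tools used on OCCAM; as a flavour of such synthetic tests, here is a minimal NumPy sketch of a STREAM-like triad bandwidth measurement (array size, repeat policy, and byte accounting are illustrative assumptions).

```python
import time
import numpy as np

# STREAM-like "triad" microbenchmark: a = b + s * c.
# Array size is an illustrative assumption (~160 MB per array).
N = 20_000_000
b = np.random.rand(N)
c = np.random.rand(N)
s = 3.0

start = time.perf_counter()
a = b + s * c                 # read b, read c, write a
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8       # three float64 arrays touched
print(f"triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```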

