ctapipe: A Low-level Data Processing Framework for the Cherenkov Telescope Array

2019 ◽  
Author(s):  
Michele Peresano ◽  
Karl Kosack ◽  
2021 ◽  
Vol 251 ◽  
pp. 02029
Author(s):  
Luisa Arrabito ◽  
Johan Bregeon ◽  
Patrick Maeght ◽  
Michèle Sanguillon ◽  
Andrei Tsaregorodtsev ◽  
...  

The Cherenkov Telescope Array (CTA) is the next-generation instrument in the very-high-energy gamma-ray astronomy domain. It will consist of tens of Cherenkov telescopes deployed in two arrays, at La Palma (Spain) and at the ESO site at Paranal (Chile). Currently under construction, CTA will start operations around 2023 for a duration of about 30 years. During operations CTA is expected to produce about 2 PB of raw data per year, plus 5-20 PB of Monte Carlo data. The global data volume to be managed by the CTA archive, including all versions and copies, is of the order of 100 PB, with a smoothly growing profile. The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure during the preparatory phase (2010-2017) and the current construction phase. To handle these productions and the future data processing, we have developed a production system based on the DIRAC framework. The current system is the result of several years of hardware infrastructure upgrades, software development and integration of services such as CVMFS and FTS. In this paper we present the current status of the CTA production system and its exploitation during the latest large-scale Monte Carlo campaigns.
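The production system described here builds on the DIRAC workload management framework. As a rough illustration of how a single simulation job could reach the grid through DIRAC's Python job API, the sketch below defines and submits one placeholder job; the script name, sandbox files, job name and CPU-time request are illustrative assumptions, not the actual CTA production configuration.

```python
# Minimal sketch of submitting one Monte Carlo job via the DIRAC Python API.
# All names below (script, config, job name) are placeholders for illustration.
from DIRAC.Core.Base import Script
from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

Script.parseCommandLine()  # initialise the DIRAC client environment

job = Job()
job.setName("cta_mc_simulation_example")                 # hypothetical job name
job.setExecutable("run_simtel.sh", arguments="--config prod.cfg")  # hypothetical script
job.setInputSandbox(["run_simtel.sh", "prod.cfg"])        # files shipped with the job
job.setOutputSandbox(["*.log"])                           # retrieve logs on completion
job.setCPUTime(86400)                                     # requested CPU time in seconds

result = Dirac().submitJob(job)
print(result)  # S_OK/S_ERROR structure, with the job ID on success
```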


2017 ◽  
Author(s):  
Julien Lefaucheur ◽  
Catherine Boisson ◽  
Zeljka Bosnjak ◽  
Matteo Cerruti ◽  
Christoph Deil ◽  
...  

2019 ◽  
Vol 214 ◽  
pp. 03052
Author(s):  
Luisa Arrabito ◽  
Konrad Bernlöhr ◽  
Johan Bregeon ◽  
Paolo Cumani ◽  
Tarek Hassan ◽  
...  

The Cherenkov Telescope Array (CTA) is the next-generation instrument in the field of very-high-energy gamma-ray astronomy. It will be composed of two arrays of Imaging Atmospheric Cherenkov Telescopes, located at La Palma (Spain) and Paranal (Chile). The construction of CTA has just started with the installation of the first telescope on site at La Palma, and the first data are expected by the end of 2018. Scientific operations should begin in 2022 for a duration of about 30 years. The overall amount of data produced during these operations is around 27 PB per year. The associated computing power for data processing and Monte Carlo (MC) simulations is of the order of hundreds of millions of CPU HS06 hours per year. In order to cope with these high computing requirements, we have developed a production system prototype based on the DIRAC framework, which we have exploited intensively during the past six years to handle massive MC simulations on the grid for the CTA design and prototyping phases. CTA workflows are composed of several inter-dependent steps, which we have so far handled separately within our production system. In order to fully automate the execution of whole workflows, we have partially revised the production system, further enhancing its data-driven behaviour and extending the use of metadata to link together the different steps of a workflow. In this contribution we present the application of the production system to the MC campaigns of the last years, as well as the recent evolution of the production system, intended to achieve fully data-driven and automated workflow execution for efficient processing of real telescope data.
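To illustrate the data-driven, metadata-linked chaining of workflow steps described above, the following toy sketch (plain Python, not the actual DIRAC Transformation System or CTA bookkeeping) registers the outputs of one step with metadata and lets the next step select its inputs purely through a metadata query; all file names and metadata keys are invented for the example.

```python
# Toy illustration of metadata-driven workflow chaining.
catalog = []  # stands in for a file catalog with user metadata

def register(lfn, **metadata):
    """Record a logical file name together with its metadata."""
    catalog.append({"lfn": lfn, **metadata})

def query(**metadata):
    """Return all logical file names matching the given metadata."""
    return [e["lfn"] for e in catalog
            if all(e.get(k) == v for k, v in metadata.items())]

# Step 1: the simulation step registers its outputs with metadata.
register("/cta/prod/sim_000.simtel", campaign="prodX", step="simulation", particle="gamma")
register("/cta/prod/sim_001.simtel", campaign="prodX", step="simulation", particle="gamma")

# Step 2: reconstruction is driven purely by a metadata query, so new
# simulation outputs are picked up automatically as they appear.
for lfn in query(campaign="prodX", step="simulation", particle="gamma"):
    register(lfn.replace(".simtel", ".dl1.h5"),
             campaign="prodX", step="reconstruction", particle="gamma")

print(query(campaign="prodX", step="reconstruction"))
```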


2020 ◽  
Vol 501 (1) ◽  
pp. 337-346
Author(s):  
E Mestre ◽  
E de Oña Wilhelmi ◽  
D Khangulyan ◽  
R Zanin ◽  
F Acero ◽  
...  

ABSTRACT Since 2009, several rapid and bright flares have been observed at high energies (>100 MeV) from the direction of the Crab nebula. Several hypotheses have been put forward to explain this phenomenon, but its origin is still unclear. The detection of counterparts at higher energies with the next generation of Cherenkov telescopes will be decisive in constraining the underlying emission mechanisms. We aim to study the capability of the Cherenkov Telescope Array (CTA) to explore the physics behind the flares, by performing simulations of the Crab nebula spectral energy distribution, both in the flaring and the steady state, for different parameters related to the physical conditions in the nebula. In particular, we explore the data recorded by Fermi during two particular flares that occurred in 2011 and 2013. The expected GeV and TeV gamma-ray emission is derived using different radiation models. The resulting emission is convolved with the CTA response and tested for detection, yielding an exclusion region in the parameter space that governs the different flare emission models. Our simulations show different scenarios that may be favourable for detecting the Crab flares with CTA in different energy regimes. In particular, we find that observations with telescopes reaching a low, sub-100 GeV energy threshold could provide the most model-constraining results.
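As a back-of-the-envelope illustration of the kind of detectability test described above, the sketch below folds a hypothetical power-law flare spectrum with an assumed effective area and observation time, and evaluates the Li & Ma (1983) significance of the resulting excess. The spectral parameters, effective area, background counts and on/off ratio are placeholder values, not CTA instrument response functions.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance, Li & Ma (1983), Eq. 17."""
    n_tot = n_on + n_off
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * np.log((1 + alpha) * n_off / n_tot)
    return np.sqrt(2.0 * (term_on + term_off))

# --- all numbers below are illustrative placeholders, not real CTA IRFs ---
k, e0, index = 1e-10, 0.1, 2.3   # flare spectrum dN/dE = k (E/e0)^-index [cm^-2 s^-1 TeV^-1]
e_lo, e_hi = 0.03, 0.3           # energy band in TeV (sub-100 GeV threshold case)
a_eff = 5e8                      # assumed mean effective area in the band, cm^2
t_obs = 5 * 3600.0               # 5 h of observation
alpha = 0.2                      # on/off exposure ratio
n_off = 400                      # assumed background counts in the off region

# Integral of the power law over the band, times area and time -> expected excess counts.
flux = k * e0 / (1 - index) * ((e_hi / e0) ** (1 - index) - (e_lo / e0) ** (1 - index))
n_excess = flux * a_eff * t_obs
n_on = n_excess + alpha * n_off

print(f"expected excess: {n_excess:.0f} counts, "
      f"significance: {li_ma_significance(n_on, n_off, alpha):.1f} sigma")
```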


2014 ◽  
Vol 42 (1) ◽  
pp. 671-686 ◽  
Author(s):  
Benjamin P. Wood ◽  
Luis Ceze ◽  
Dan Grossman

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Sonia Bansal ◽  
Vineet Mehan

Abstract Objectives The key challenge in Content-Based Medical Image Retrieval (CBMIR) frameworks for MRI (Magnetic Resonance Imaging) images is the semantic gap between the low-level visual data captured by the MRI machine and the high-level information perceived by the human evaluator. Methods Conventional feature extraction strategies focus only on low-level or high-level features and use handcrafted features to reduce this gap. It is necessary to design a feature extraction framework that reduces this gap without handcrafted features, by encoding and combining low-level and high-level features. Fuzzy clustering, a soft clustering technique, is applied here for feature description, and an SVM (Support Vector Machine) is applied for classification. Since the predefinition of the number of clusters and of the membership matrix is still an open issue, a new predefinition step is proposed in this paper, and accordingly a new CBMIR procedure is proposed and validated. Results SVM and FCM (Fuzzy C-Means) are applied to the extracted features, so that the resulting feature vector captures all the objects of the image. Retrieval of an image relies on the distance between the query and the database images, i.e. a similarity measure. Conclusions Tests are performed on a database of 200 images, and the experimental results are evaluated in terms of recall and precision.
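A minimal sketch of the kind of FCM-plus-SVM pipeline outlined in the abstract is given below: fuzzy c-means memberships are appended to the raw descriptors as an additional encoding, and an SVM is trained on the combined vectors. The synthetic data, the number of clusters and the SVM settings are assumptions made for illustration only, not the authors' configuration.

```python
# Toy FCM + SVM retrieval-style pipeline on synthetic descriptors.
import numpy as np
from sklearn.svm import SVC

def fuzzy_c_means(x, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Return (centers, memberships) for standard fuzzy c-means."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))                # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Fake low-level descriptors for 200 "images" with binary labels (placeholders).
rng = np.random.default_rng(1)
features = rng.random((200, 16))
labels = rng.integers(0, 2, 200)

_, memberships = fuzzy_c_means(features)              # soft cluster assignments
encoded = np.hstack([features, memberships])          # combine low- and higher-level cues

clf = SVC(kernel="rbf", probability=True).fit(encoded[:150], labels[:150])
print("held-out accuracy:", clf.score(encoded[150:], labels[150:]))
```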


2014 ◽  
Vol 49 (4) ◽  
pp. 671-686 ◽  
Author(s):  
Benjamin P. Wood ◽  
Luis Ceze ◽  
Dan Grossman

2013 ◽  
Vol 43 ◽  
pp. 189-214 ◽  
Author(s):  
M. Doro ◽  
J. Conrad ◽  
D. Emmanoulopoulos ◽  
M.A. Sànchez-Conde ◽  
J.A. Barrio ◽  
...  

2016 ◽  
Author(s):  
J. L. Dournaux ◽  
A. Abchiche ◽  
D. Allan ◽  
J. P. Amans ◽  
T. P. Armstrong ◽  
...  
