Recent experiments with nuclear emulsions

It is unnecessary to stress the many significant contributions made during the past 20 years to nuclear and high-energy physics by means of the nuclear emulsion technique. One needs only to recall the new particles and decay modes that were first observed with it. With the development of other powerful techniques, however, such as the spark chamber and bubble chamber, readily adaptable to automatic methods of analysis and data handling, nuclear emulsion has inevitably tended to fall into the position of a supplementary method. Nevertheless, there are still important experiments for which it is the most convenient, indeed in some cases the only, technique available, and this paper will discuss such experiments, either recently carried out or proposed for the future, using beams of particles from high-energy accelerators. Nuclear emulsion possesses one most significant advantage over all other techniques, namely, the extraordinarily high spatial resolution of which it is capable. Other techniques can resolve events separated by tenths of millimetres; nuclear emulsion can resolve events separated by tenths of micrometres. This high spatial resolution has made possible the measurement of the lifetime of the π⁰-meson (ca. 10⁻¹⁶ s) and is the basis of our confidence that there are no other commonly occurring unstable particles with lifetimes in the range 10⁻¹¹ to 10⁻¹⁶ s. Most of the experiments described in this paper are particularly suited to the nuclear emulsion technique because they make use of this characteristic feature.

2019, Vol. 2019, pp. 1-14
Author(s): Naveed Mahmud, Esam El-Araby

The high resolution of multidimensional space-time measurements and the enormous data readout counts in applications such as particle tracking in high-energy physics (HEP) are nowadays becoming a major challenge. In this work, we propose combining dimension reduction techniques with quantum information processing for application in domains that generate large volumes of data, such as HEP. More specifically, we propose using the quantum wavelet transform (QWT) to reduce the dimensionality of high-spatial-resolution data. The quantum wavelet transform takes advantage of the principles of quantum mechanics to achieve reductions in computation time while processing exponentially larger amounts of information. We develop simpler and more highly optimized emulation architectures than previously reported to perform the quantum wavelet transform on high-resolution data. We also implement the inverse quantum wavelet transform (IQWT) to reconstruct the data accurately and without losses. The algorithms are prototyped on an FPGA-based quantum emulator that supports double-precision floating-point computations. Experimental work has been performed using high-resolution image data on a state-of-the-art multinode high-performance reconfigurable computer. The experimental results show that the proposed concepts are a feasible approach to reducing the dimensionality of high-spatial-resolution data generated by applications such as particle tracking in high-energy physics.
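The dimension-reduction idea in this abstract can be illustrated classically: one Haar wavelet step splits a signal into coarse averages and details, and keeping only the averages halves the data volume per level. The sketch below is a minimal classical analogue in Python, not the quantum circuit of the paper, and the function names are ours:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise (normalized)
    averages are the approximation, pairwise differences the detail."""
    evens, odds = signal[0::2], signal[1::2]
    approx = (evens + odds) / np.sqrt(2.0)
    detail = (evens - odds) / np.sqrt(2.0)
    return approx, detail

def reduce_dimension(signal, levels):
    """Keep only the coarse approximation coefficients,
    halving the data volume at each level."""
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, _ = haar_step(approx)
    return approx

data = np.arange(16, dtype=float)        # stand-in for a high-resolution readout row
coarse = reduce_dimension(data, levels=2)
print(coarse.shape)                      # (4,)
```

The QWT applies the analogous transform to the amplitudes of a quantum state, where the exponentially large state space is what yields the claimed processing advantage.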


2018, Vol. 182, pp. 02052
Author(s): Asma Hadef

The Higgs boson was discovered on the 4th of July 2012, with a mass of around 125 GeV, by the ATLAS and CMS experiments at the LHC. Determining the Higgs boson's properties (production and decay modes, couplings, ...) is an important part of the high-energy physics programme in this decade. This paper summarizes a search for Higgs boson production in association with a top quark pair (tt̄H) at ATLAS [1] at the previously unexplored center-of-mass energy of 13 TeV, which could allow a first direct measurement of the top quark Yukawa coupling and could reveal new physics. The tt̄H analysis in ATLAS is divided into three channels according to the Higgs decay modes: H → hadrons, H → leptons and H → photons. The best-fit value of the ratio of the observed and Standard Model cross sections of the tt̄H production process, using 2015-2016 data and combining all tt̄H final states, is 1.8 ± 0.7, corresponding to an observed (expected) significance of 2.8σ (1.8σ).


I feel I should begin by pointing out that in at least two respects I am not qualified to give this talk. The first is that our machine at Liverpool is of course a 400 MeV machine, which only counts as a low-energy one these days, and I have not worked at C.E.R.N., where the real high-energy physics in Europe is now being done; I can only speak about it at second hand. I have, however, been making rather frequent visits to C.E.R.N. recently, thanks to an invitation from Professor Weisskopf, so that I can give some description of the counter experiments on the proton synchrotron there. The description is necessarily from a spectator's point of view, and to that extent superficial. The second lack of qualification comes from the fact that Professor Weisskopf has explained all the easy part about the significance of the most interesting counter experiments, so that I have to try and go a little further. Now, that necessarily involves me in the extremely sophisticated and conjectural ideas of the Regge pole analysis, which are not easy to explain to non-specialists. I shall try to convey the spirit if not the substance of that analysis. However, I should like to begin with a description of a different experiment, bearing on the elementary-particle spectroscopy to which Professor Weisskopf drew your attention this morning. The main details of elementary-particle spectroscopy have of course come to us from bubble-chamber experiments, and, on the whole, the counter programme has not made a great contribution to it. One experiment, however, that is unusually clear is the counter experiment of Caldwell et al. on the production of associated bosons from peripheral collisions. Figure 37 shows the sort of process that is sought in this experiment. A high-energy pion beam is directed at a nucleon and glancing collisions are sought; in other words, collisions that take place at long range and are probably associated with the exchange of one particle.
Of course, the range of the interaction is longer when the mass of the exchanged particle is small, so a single pion is most likely to be exchanged. The nucleon emits this pion and may itself break up into a number of particles, which the experiment does not investigate any further. At the other vertex the exchanged particle joins the beam pion and, hopefully, makes a compound particle which later breaks up into associated bosons, either two pions or two kaons. If the particle exchanged is a pion, of course, this short-lived compound particle has strangeness zero, and therefore it can only break up into two pions or two kaons, but not into a kaon and a pion.
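The strangeness argument in the last sentence can be made mechanical with a little quantum-number bookkeeping. A minimal sketch (the particle labels and helper function are ours, for illustration only):

```python
# Strangeness bookkeeping for the one-pion-exchange argument above.
# Pions carry S = 0; K+ and K0 carry S = +1, their antiparticles S = -1.
STRANGENESS = {"pi+": 0, "pi-": 0, "pi0": 0,
               "K+": +1, "K-": -1, "K0": +1, "K0bar": -1}

def allowed(initial_strangeness, final_state):
    """Strong decays conserve strangeness: a compound system formed from a
    beam pion and an exchanged pion has S = 0, so the final state must too."""
    return initial_strangeness == sum(STRANGENESS[p] for p in final_state)

print(allowed(0, ["pi+", "pi-"]))   # True: two pions
print(allowed(0, ["K+", "K-"]))     # True: a kaon pair
print(allowed(0, ["K+", "pi-"]))    # False: kaon + pion is forbidden
```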


A summary of the work carried out at the Institute for High-Energy Physics, Serpukhov, U. S. S. R., on proton-proton interactions at energies between 10 and 70 GeV is given. The experiments comprise studies of small angle elastic scattering, of total cross-sections and of interactions in a hydrogen bubble chamber.


2021, Vol. 4
Author(s): Zhihua Dong, Heather Gray, Charles Leggett, Meifeng Lin, Vincent R. Pascuzzi, et al.

High Energy Physics (HEP) experiments, such as those at the Large Hadron Collider (LHC), traditionally consume large amounts of CPU cycles for detector simulations and data analysis, but rarely use compute accelerators such as GPUs. As the LHC is upgraded to allow for higher luminosity, resulting in much higher data rates, relying purely on CPUs may not provide enough computing power to support the simulation and data analysis needs. As a proof of concept, we investigate the feasibility of porting an HEP parametrized calorimeter simulation code to GPUs. We have chosen FastCaloSim, the ATLAS fast parametrized calorimeter simulation. While FastCaloSim is sufficiently fast that it does not impose a bottleneck in detector simulations overall, significant speed-ups in the processing of large samples can be achieved from GPU parallelization at both the particle (intra-event) and event levels; this is especially beneficial in the conditions expected at the high-luminosity LHC, where extremely high per-event particle multiplicities will result from the many simultaneous proton-proton collisions. We report our experience with porting FastCaloSim to NVIDIA GPUs using CUDA. A preliminary Kokkos implementation of FastCaloSim for portability to other parallel architectures is also described.
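The intra-event parallelism described above can be sketched with a classical analogue: applying the same parameterized response to every particle of an event at once instead of in a scalar loop. The sketch below uses NumPy vectorization as a stand-in for a CUDA kernel mapping particles onto threads; the toy response function is our assumption, not FastCaloSim's actual parameterization:

```python
import numpy as np

def deposit_loop(energies, sampling_fraction=0.9):
    """Scalar loop: one particle's parameterized deposit at a time."""
    return [e * sampling_fraction for e in energies]

def deposit_vectorized(energies, sampling_fraction=0.9):
    """Intra-event parallelism: all particles of one event at once --
    the access pattern a GPU kernel would map onto threads."""
    return np.asarray(energies) * sampling_fraction

rng = np.random.default_rng(0)
energies = rng.uniform(1.0, 100.0, size=10_000)  # high per-event multiplicity
assert np.allclose(deposit_vectorized(energies), deposit_loop(energies))
```

Event-level parallelism then batches many such independent events, which is what makes the approach attractive for the large samples mentioned in the abstract.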


2021, Vol. 9
Author(s): Gabriele Giacomini

Low-Gain Avalanche Diodes (LGADs) are a recently developed class of silicon sensors. Characterized by a moderate internal gain that enhances the signal amplitude, and, when built on thin silicon substrates of a few tens of microns, they feature fast signals and exhibit excellent timing performance. Thanks to their fast timing, they are planned for use in timing detectors in high-energy physics experiments, for example in the upgrades of the ATLAS and CMS detectors at the High Luminosity Large Hadron Collider (HL-LHC) at CERN. However, achieving spatially uniform multiplication requires a large pixel pitch, which prevents a fine spatial resolution. To overcome this limitation, the AC-coupled LGAD (AC-LGAD) approach was introduced. In this type of device, metal electrodes are placed over an insulator at a fine pitch, and signals are capacitively induced on these electrodes. The fabrication technology is similar for the two LGAD families, although a few process parameters need careful tuning. Other R&D efforts based on the LGAD concept, towards detectors that simultaneously provide good time and spatial resolution, are under way. These efforts also aim to mitigate the loss of performance at high irradiation fluences due to acceptor removal within the gain layer. In this paper we describe the main steps in the fabrication of LGADs and AC-LGADs in a clean room. We also discuss novel efforts on related topics.
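The timing benefit of internal gain can be illustrated with the standard leading-order jitter estimate, σ_t ≈ t_rise / (S/N): a larger signal-to-noise ratio for the same rise time shrinks the time jitter proportionally. A minimal sketch with purely illustrative numbers, not measured LGAD parameters:

```python
def jitter_ps(rise_time_ps, signal, noise):
    """Leading-order electronics jitter: sigma_t ~ t_rise / (S/N)."""
    return rise_time_ps / (signal / noise)

# Illustrative numbers only: an internal gain of ~20 raising S/N
# from 10 to 200 at the same noise floor and rise time.
no_gain = jitter_ps(rise_time_ps=500.0, signal=10.0, noise=1.0)
with_gain = jitter_ps(rise_time_ps=500.0, signal=200.0, noise=1.0)
print(no_gain, with_gain)   # 50.0 ps vs 2.5 ps
```

Landau fluctuations in the deposited charge add a further term in thin sensors, which is one reason substrate thickness is part of the design trade-off described above.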


2019, Vol. 204, pp. 05010
Author(s): Nizami Abdulov, Artem Lipatov, Gennady Lykasov, Maxim Malyshev

Unintegrated (or transverse-momentum-dependent, TMD) parton distributions in a proton are important in high-energy physics. Using the latest LHC data on hadron production in pp collisions, we determine the parameters of the initial TMD gluon density, derived in the framework of the quark-gluon string model at the low scale μ₀ ~ 1-2 GeV, and refine its large-x behavior using data on tt̄ production at √s = 13 TeV. Then, using the Catani-Ciafaloni-Fiorani-Marchesini (CCFM) evolution equation, we extend the obtained TMD gluon density to the whole kinematic region. We tested the proposed TMD gluon density against inclusive Higgs production in different decay modes, t-channel single top production at the LHC, and the proton structure functions F₂ᶜ(x, Q²) and F₂ᵇ(x, Q²) in a wide region of x and Q². Good agreement with the latest LHC and HERA data is achieved.
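The generic shape of such a TMD gluon density can be illustrated with a toy parametrization: a power law rising towards small x multiplied by a Gaussian fall-off in transverse momentum squared. This is a sketch under assumed parameter values, not the fitted quark-gluon string model density of the paper:

```python
import numpy as np

def tmd_gluon_toy(x, kt2, lam=0.3, mean_kt2=1.0):
    """Toy unintegrated gluon density: rises towards small x (x**-lam)
    and falls off in transverse momentum squared (Gaussian in kt2).
    lam and mean_kt2 are illustrative values, not fitted ones."""
    return x**(-lam) * np.exp(-kt2 / mean_kt2) / mean_kt2

# Integrating over kt2 recovers a collinear-like x dependence
# (up to normalization), which is how TMD and ordinary densities connect.
kt2 = np.linspace(0.0, 20.0, 2001)
dk = kt2[1] - kt2[0]
integral = np.sum(tmd_gluon_toy(0.01, kt2)) * dk   # ~ 0.01**(-0.3)
```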

