Scholarly journals: LHCb detector performance

2015 ◽  
Vol 30 (07) ◽  
pp. 1530022 ◽  
Author(s):  
LHCb Collaboration

The LHCb detector is a forward spectrometer at the Large Hadron Collider (LHC) at CERN. The experiment is designed for precision measurements of CP violation and rare decays of beauty and charm hadrons. In this paper the performance of the various LHCb sub-detectors and the trigger system are described, using data taken from 2010 to 2012. It is shown that the design criteria of the experiment have been met. The excellent performance of the detector has allowed the LHCb collaboration to publish a wide range of physics results, demonstrating LHCb's unique role, both as a heavy flavour experiment and as a general purpose detector in the forward region.

2016 ◽  
Vol 31 (23) ◽  
pp. 1630034 ◽  
Author(s):  
Brigitte Vachon

The ATLAS and CMS collaborations have performed studies of a wide range of Standard Model processes using data collected at the Large Hadron Collider at center-of-mass energies of 7, 8 and 13 TeV. These measurements are used to explore the Standard Model in a new kinematic regime, perform precision tests of the model, determine some of its fundamental parameters, constrain the proton parton distribution functions, and study new rare processes observed for the first time. Examples of recent Standard Model measurements performed by the ATLAS and CMS collaborations are summarized in this report. The measurements presented span a wide range of event final states including jets, photons, W/Z bosons, top quarks, and Higgs bosons.


2021 ◽  
Vol 46 (1) ◽  
Author(s):  
I. Belyaev ◽  
G. Carboni ◽  
N. Harnew ◽  
C. Matteuzzi ◽  
F. Teubert

Abstract
In this paper, we describe the history of the LHCb experiment over the last three decades, and its remarkable successes and achievements. LHCb was conceived primarily as a b-physics experiment, dedicated to CP violation studies and measurements of very rare b decays; however, the tremendous potential for c-physics was also clear. At first data taking, the versatility of the experiment as a general-purpose detector in the forward region also became evident, with measurements achievable such as electroweak physics, jets and new particle searches in open states. These were facilitated by the excellent capability of the detector to identify muons and to reconstruct decay vertices close to the primary pp interaction region. By the end of LHC Run 2 in 2018, before the accelerator paused for its second long shutdown, LHCb had measured the CKM quark-mixing matrix elements and CP violation parameters to world-leading precision in the heavy-quark systems. The experiment had also measured many rare decays of b and c quark mesons and baryons to below their Standard Model expectations, some down to branching ratios of order 10⁻⁹. In addition, world knowledge of b and c spectroscopy had improved significantly through discoveries of many new resonances already anticipated in the quark model, as well as of new exotic four- and five-quark states. The paper describes the evolution of the LHCb detector, from conception to its operation at the present time. The authors' subjective summary of the experiment's important contributions is then presented, demonstrating the wide domain of successful physics measurements that have been achieved over the years.


2020 ◽  
Vol 35 (10) ◽  
pp. 2050052 ◽  
Author(s):  
Takuya Mizoguchi ◽  
Minoru Biyajima

The Bose–Einstein correlation (BEC) in the forward region, measured at 7 TeV at the Large Hadron Collider (LHC) by the LHCb collaboration, is analyzed using two conventional formulas of different types, named CF_I and CF_II. The first formula, CF_I, is well known and contains the degree of coherence and the exchange function from BE statistics. The second is an extended formula, CF_II, which contains a second degree of coherence and a second exchange function in addition to the ingredients of CF_I. To examine the physical meaning of the parameters estimated by CF_II, we analyze the LHCb BEC data using a stochastic approach based on the three-negative binomial distribution and the three-generalized Glauber–Lachs formula. Our results reveal that the BEC at 7 TeV, analyzed in three activity intervals defined by the multiplicity ([8, 18], [19, 35], and [36, 96]), can be well explained by CF_II.
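
The paper's formulas are not reproduced in this abstract. Purely for orientation, a standard conventional BEC parametrization of this type, with a degree of coherence λ, a Gaussian exchange function and a long-range term, reads as follows; the exact CF_I and CF_II expressions used in the paper may differ:

    \[
    \mathrm{CF}_{\mathrm{I}}(Q) = C\,\bigl[1 + \lambda\,E_{\mathrm{BE}}^{2}(Q)\bigr]\,(1 + \delta\,Q),
    \qquad
    E_{\mathrm{BE}}(Q) = \exp\!\bigl(-\tfrac{1}{2}R^{2}Q^{2}\bigr),
    \]

where Q is the four-momentum difference of the identical-pion pair, R is the effective source radius, C is a normalization and δ parametrizes long-range correlations; CF_II adds a second (coherence, exchange-function) term of the same kind.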


Author(s):  
John Campbell ◽  
Joey Huston ◽  
Frank Krauss

At the core of any theoretical description of hadron collider physics is a fixed-order perturbative treatment of a hard scattering process. This chapter is devoted to a survey of fixed-order predictions for a wide range of Standard Model processes. These range from high cross-section processes such as jet production to much more elusive reactions, such as the production of Higgs bosons. Process by process, these sections illustrate how the techniques developed in Chapter 3 are applied to more complex final states and provide a summary of the fixed-order state-of-the-art. In each case, key theoretical predictions and ideas are identified that will be the subject of a detailed comparison with data in Chapters 8 and 9.
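
For context, such fixed-order predictions rest on the collinear factorization of the hadronic cross-section into parton distribution functions and a partonic cross-section expanded in the strong coupling (standard form, not specific to this chapter's notation):

    \[
    \sigma_{pp \to X} = \sum_{a,b} \int_{0}^{1} \mathrm{d}x_a\, \mathrm{d}x_b\;
    f_a(x_a,\mu_F)\, f_b(x_b,\mu_F)\;
    \hat{\sigma}_{ab \to X}(x_a, x_b; \mu_R, \mu_F),
    \qquad
    \hat{\sigma}_{ab \to X} = \sum_{k \ge 0} \alpha_s^{\,n+k}(\mu_R)\; \hat{\sigma}^{(k)}_{ab \to X},
    \]

where f_{a,b} are the proton PDFs, μ_R and μ_F are the renormalization and factorization scales, n is the power of α_s at leading order for the process in question, and truncating the series at k = 0, 1, 2 yields the leading-, next-to-leading- and next-to-next-to-leading-order predictions whose status is surveyed here.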


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being increasingly used in the scientific community as a consequence of the high computational capacity of current systems and the growing amount of data available as a result of the digitalisation of society in general and of the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and to act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the possibility of integrating deep learning models into a wide range of micro-controllers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the deployment of deep learning architectures. The experiments herein show that the proposed system is competitive when compared with other commercial systems.
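
As an illustration of the off-device half of such a workflow (a minimal sketch under our own assumptions, not the authors' pipeline; the toy model, shapes and file name are invented), a trained Keras model can be converted to a fully int8-quantized TensorFlow Lite flatbuffer, the artefact that would then be embedded, for example as a C array, in an Mbed OS application running the TensorFlow Lite Micro interpreter:

    # Minimal sketch: convert a (toy) Keras model into a fully integer-quantized
    # TensorFlow Lite flatbuffer suitable for microcontroller deployment.
    import numpy as np
    import tensorflow as tf

    inputs = tf.keras.Input(shape=(32,))                       # hypothetical sensor feature vector
    hidden = tf.keras.layers.Dense(16, activation="relu")(inputs)
    outputs = tf.keras.layers.Dense(4, activation="softmax")(hidden)
    model = tf.keras.Model(inputs, outputs)                    # training omitted for brevity

    def representative_data():
        # Calibration samples for full-integer quantization (random here, for illustration).
        for _ in range(100):
            yield [np.random.rand(1, 32).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model.tflite", "wb") as f:
        f.write(converter.convert())                           # flatbuffer to embed on the device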


BMJ Open ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. e047007
Author(s):  
Mari Terada ◽  
Hiroshi Ohtsu ◽  
Sho Saito ◽  
Kayoko Hayakawa ◽  
Shinya Tsuzuki ◽  
...  

Objectives: To investigate the risk factors contributing to severity on admission. Additionally, risk factors for worst severity and fatality were studied. Moreover, factors were compared based on three points: early severity, worst severity and fatality. Design: An observational cohort study using data entered in a Japan nationwide COVID-19 inpatient registry, COVIREGI-JP. Setting: As of 28 September 2020, 10,480 cases from 802 facilities have been registered. Participating facilities cover a wide range of hospitals where patients with COVID-19 are admitted in Japan. Participants: Participants who had a positive test result on any applicable SARS-CoV-2 diagnostic test and were admitted to participating healthcare facilities. A total of 3829 cases were identified from 16 January to 31 May 2020, of which 3376 cases were included in this study. Primary and secondary outcome measures: The primary outcome was severe or non-severe status on admission, determined by the requirement for mechanical ventilation or oxygen therapy, SpO2, or respiratory rate. The secondary outcome was the worst severity during hospitalisation, judged by the requirement for oxygen and/or invasive mechanical ventilation/extracorporeal membrane oxygenation. Results: Risk factors for severity on admission were older age, male sex, cardiovascular disease, chronic respiratory disease, diabetes, obesity and hypertension. Cerebrovascular disease, liver disease, renal disease or dialysis, solid tumour and hyperlipidaemia did not influence severity on admission; however, they did influence worst severity. Fatality rates for obesity, hypertension and hyperlipidaemia were relatively lower. Conclusions: This study segregated the comorbidities influencing severity and death. It is possible that risk factors for severity on admission, worst severity and fatality are not consistent and may be driven by different factors. Specifically, while hypertension, hyperlipidaemia and obesity had a major effect on worst severity, their impact was mild on fatality in the Japanese population. Some studies contradict our results; therefore, detailed analyses, considering in-hospital treatments, are needed for validation. Trial registration number: UMIN000039873. https://upload.umin.ac.jp/cgi-open-bin/ctr_e/ctr_view.cgi?recptno=R000045453


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seyed Hossein Jafari ◽  
Amir Mahdi Abdolhosseini-Qomi ◽  
Masoud Asadpour ◽  
Maseud Rahgozar ◽  
Naser Yazdani

Abstract
The entities of real-world networks are connected via different types of connections (i.e., layers). The task of link prediction in multiplex networks is to find missing connections based on both intra-layer and inter-layer correlations. Our observations confirm that in a wide range of real-world multiplex networks, from social to biological and technological, a positive correlation exists between the connection probability in one layer and similarity in other layers. Accordingly, a similarity-based automatic general-purpose multiplex link prediction method, SimBins, is devised that quantifies the amount of connection uncertainty based on observed inter-layer correlations in a multiplex network. Moreover, SimBins enhances the prediction quality in the target layer by incorporating the effect of link overlap across layers. Applying SimBins to various datasets from diverse domains, our findings indicate that SimBins outperforms the compared methods (both baseline and state-of-the-art) in most instances when predicting links. Furthermore, SimBins imposes only minor computational overhead on the base similarity measures, making it a potentially fast method suitable for large-scale multiplex networks.
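
SimBins itself is not reproduced here; the sketch below only illustrates the underlying intuition under our own simplifications: unlinked pairs in a target layer are scored by an intra-layer similarity (common neighbours) plus a bonus when the pair is already connected in another layer, i.e. the inter-layer correlation the paper exploits. Function names and the weight alpha are invented for illustration.

    # Toy similarity-plus-overlap scorer for multiplex link prediction (illustrative only).
    from itertools import combinations

    def predict_links(target_adj, other_layers, alpha=1.0):
        """target_adj and each element of other_layers map node -> set of neighbours."""
        scores = {}
        for u, v in combinations(sorted(target_adj), 2):
            if v in target_adj[u]:
                continue                                     # already linked in the target layer
            common = len(target_adj[u] & target_adj[v])      # intra-layer similarity
            overlap = sum(1 for layer in other_layers if v in layer.get(u, set()))
            scores[(u, v)] = common + alpha * overlap        # bonus from inter-layer correlation
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Two-layer toy multiplex network over the same node set.
    layer1 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    layer2 = {"a": {"d"}, "b": set(), "c": {"d"}, "d": {"a", "c"}}
    print(predict_links(layer1, [layer2]))                   # the pair ("a", "d") ranks first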


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Simuck F. Yuk ◽  
Krishna Chaitanya Pitike ◽  
Serge M. Nakhmanson ◽  
Markus Eisenbach ◽  
Ying Wai Li ◽  
...  

Abstract
Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describe a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting accuracy similar to that of the modified Perdew–Burke–Ernzerhof functional designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like that of PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes.
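
For reference (standard definitions, not taken from the paper), the reduced density gradient s and the gradient-expansion limit of the exchange enhancement factor, which PBEsol builds in via μ = 10/81 and which, as noted above, C09 exchange also tends toward at small s, are

    \[
    s = \frac{|\nabla n|}{2\,(3\pi^{2})^{1/3}\,n^{4/3}},
    \qquad
    F_{x}(s) \simeq 1 + \mu_{\mathrm{GEA}}\, s^{2} \quad (s \to 0),
    \qquad
    \mu_{\mathrm{GEA}} = \tfrac{10}{81} \approx 0.123,
    \]

so the shared gradient-expansion behaviour at small reduced gradients is what links the two functionals' comparable accuracy for dense bulk oxides.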


2010 ◽  
Vol 20 (02) ◽  
pp. 103-121 ◽  
Author(s):  
MOSTAFA I. SOLIMAN ◽  
ABDULMAJID F. Al-JUNAID

Technological advances in IC manufacturing provide the capability to integrate more and more functionality into a single chip. Today's processors have nearly one billion transistors on a single chip. With the increasing complexity of today's systems, designs have to be modeled at a high level of abstraction before partitioning into hardware and software components for final implementation. This paper explains in detail the implementation and performance evaluation of a matrix processor called Mat-Core using SystemC (a system-level modeling language). Mat-Core is a research processor aimed at exploiting the increasing number of transistors per IC to improve the performance of a wide range of applications. It extends a general-purpose scalar processor with a matrix unit. To hide memory latency, the extended matrix unit is decoupled into two components, address generation and data computation, which communicate through data queues. Like vector architectures, the data computation unit is organized in parallel lanes. On these parallel lanes, Mat-Core can execute matrix-scalar, matrix-vector, and matrix-matrix instructions in addition to vector-scalar and vector-vector instructions. To control the execution of vector/matrix instructions on the matrix core, this paper extends the well-known scoreboard technique. Furthermore, the performance of Mat-Core is evaluated on vector and matrix kernels. Our results show that a four-lane Mat-Core with matrix registers of 4 × 4 (16) elements each, a queue size of 10, a start-up time of 6 clock cycles, and a memory latency of 10 clock cycles achieves about 0.94, 1.3, 2.3, 1.6, 2.3, and 5.5 FLOPs per clock cycle on scalar-vector multiplication, SAXPY, Givens, rank-1 update, vector-matrix multiplication, and matrix-matrix multiplication, respectively.
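
As a rough cross-check of those figures (our own assumption: each of the four lanes retires one multiply-add, i.e. 2 FLOPs, per clock cycle, giving an 8 FLOPs/cycle peak; the paper may define peak throughput differently), the reported rates correspond to the following fractions of that assumed peak:

    # Back-of-envelope lane utilization under the assumption of one multiply-add
    # (2 FLOPs) per lane per clock cycle; illustrative only.
    lanes = 4
    peak = lanes * 2                       # assumed FLOPs per clock cycle

    reported = {                           # FLOPs/cycle quoted in the abstract
        "scalar-vector multiplication": 0.94,
        "SAXPY": 1.3,
        "Givens": 2.3,
        "rank-1 update": 1.6,
        "vector-matrix multiplication": 2.3,
        "matrix-matrix multiplication": 5.5,
    }
    for kernel, rate in reported.items():
        print(f"{kernel:30s} {rate:4.2f} FLOPs/cycle ({rate / peak:.0%} of assumed peak)")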


1989 ◽  
Vol 21 (8-9) ◽  
pp. 889-897 ◽  
Author(s):  
J. M. Lopez-Real ◽  
E. Witter ◽  
F. N. Midmer ◽  
B. A. O. Hewett

Collaborative research between Southern Water and Wye College, University of London, has led to the development of a static aerated pile composting process for the treatment of dewatered activated sludge cake/straw mixtures. The process reduces the bulk volume of the sludge, producing an environmentally acceptable, stabilised, odour- and pathogen-free product. Characteristics of the compost make it a suitable general-purpose medium for container-grown plants, provided the salt concentration is reduced by washing the compost prior to planting. Compared with peat, the compost has a higher bulk density, a lower water-holding capacity, a lower cation exchange capacity, a higher content of soluble salts, and a higher content of plant nutrients. A compost mixture containing equal quantities of compost, Sphagnum peat, and horticultural vermiculite was successfully developed in the growing trials. The compost has been used successfully to grow a wide range of plants. Plants grown in mixtures based on the compost were in general similar to those grown in peat-based growing media. The compost is a valuable soil conditioner and slow-release fertilizer.

