Boosted jet techniques for a supersymmetric scenario with gravitino LSP

2020 ◽  
Vol 2020 (10) ◽  
Author(s):  
Akanksha Bhardwaj ◽  
Juhi Dutta ◽  
Partha Konar ◽  
Biswarup Mukhopadhyaya ◽  
Santosh Kumar Rai

Abstract The search for compressed supersymmetry at the multi-TeV scale, in the presence of a light gravitino dark matter, can receive a sizable uplift by looking at the associated fat-jets with missing transverse momentum as a signature of the boson produced in the decay of the much heavier next-to-lightest sparticle. We focus on the hadronic decay of the ensuing Higgs and/or Z boson, giving rise to at least two fat-jets and $E_T^{\mathrm{miss}}$ in the final state. We perform a detailed background study and adopt a multivariate analysis using a boosted decision tree to explore the discovery potential of such a signal at the 14 TeV LHC, considering different benchmark points that satisfy all theoretical and experimental constraints. This channel provides the best discovery prospects, with most of the benchmarks discoverable within an integrated luminosity of $\mathcal{L} = 200\,\mathrm{fb}^{-1}$. Kinematic observables are investigated in order to distinguish between compressed and uncompressed spectra having similar event yields.
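A minimal sketch of the multivariate step this abstract describes, using a gradient-boosted decision tree from scikit-learn. The kinematic features (two fat-jet masses and missing transverse momentum) and the synthetic data are placeholders, not the analysis's actual inputs or tuning:

```python
# Sketch: BDT separating signal from background on event-level kinematics.
# Feature choices and distributions below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: fat-jet masses peaking near the Higgs/Z mass for
# signal, with harder missing E_T than the background.
sig = np.column_stack([rng.normal(110, 15, n), rng.normal(95, 15, n),
                       rng.exponential(400, n)])
bkg = np.column_stack([rng.exponential(80, n), rng.exponential(80, n),
                       rng.exponential(150, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
```

The BDT output can then be cut on (or binned) to define the signal region, replacing a sequence of rectangular cuts on the individual variables.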

2010 ◽  
Vol 25 (11n12) ◽  
pp. 969-975
Author(s):  
CHUAN-REN CHEN ◽  
SOURAV K. MANDAL ◽  
FUMINOBU TAKAHASHI

The anomalies in the positron fraction observed by PAMELA and in the total flux of electrons and positrons reported by Fermi can be explained by decaying dark matter. The agreement between the astrophysical background and the observed antiproton data indicates a leptophilic dark matter and constrains the hadronic decay branching ratios. In this work, we study the constraints on the decay rates of dark matter into various two-body final states using the Fermi and HESS gamma-ray data. We find that a µ+µ− or τ+τ− final state is preferred to simultaneously explain the excesses and satisfy the gamma-ray constraints.
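For reference, the gamma-ray flux from dark-matter decay that such fits constrain takes the standard form (a textbook expression, not quoted from this paper; ρ is the DM density profile, dN_γ/dE the photon spectrum per decay, and the integral runs along the line of sight):

$$ \frac{d\Phi_\gamma}{dE\,d\Omega} = \frac{1}{4\pi\, m_{\mathrm{DM}}\, \tau_{\mathrm{DM}}}\, \frac{dN_\gamma}{dE} \int_{\mathrm{l.o.s.}} \rho_{\mathrm{DM}}(\ell)\, d\ell $$

Bounds on the lifetime τ_DM for each two-body mode follow from requiring this flux not to exceed the Fermi and HESS measurements.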


2021 ◽  
Vol 2021 (3) ◽  
Author(s):  
Vincenzo Cirigliano ◽  
Kaori Fuyuto ◽  
Christopher Lee ◽  
Emanuele Mereghetti ◽  
Bin Yan

Abstract We present a comprehensive analysis of the potential sensitivity of the Electron-Ion Collider (EIC) to charged lepton flavor violation (CLFV) in the channel ep→τX, within the model-independent framework of the Standard Model Effective Field Theory (SMEFT). We compute the relevant cross sections to leading order in QCD and electroweak corrections and perform simulations of signal and SM background events in various τ decay channels, suggesting simple cuts to enhance the associated estimated efficiencies. To assess the discovery potential of the EIC in τ-e transitions, we study the sensitivity of other probes of this physics across a broad range of energy scales, from pp→eτX at the Large Hadron Collider to decays of B mesons and τ leptons, such as τ→eγ, τ→eℓ+ℓ−, and crucially the hadronic modes τ→eY with Y ∈ {π, K, ππ, Kπ, …}. We find that electroweak dipole and four-fermion semi-leptonic operators involving light quarks are already strongly constrained by τ decays, while operators involving the c and b quarks present more promising discovery potential for the EIC. An analysis of three models of leptoquarks confirms the expectations based on the SMEFT results. We also identify future directions needed to maximize the reach of the EIC in CLFV searches: these include an optimization of the τ tagger in hadronic channels, an exploration of background suppression through tagging b and c jets in the final state, and a global fit by turning on all SMEFT couplings, which will likely reveal new discovery windows for the EIC.
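As a representative example of the four-fermion semi-leptonic operator class referred to above, one Warsaw-basis SMEFT structure that can mediate ep→τX is shown below; the flavor assignment (τ-e transition with first-generation quarks) and the coefficient normalization are illustrative choices, not the paper's complete operator list:

$$ \mathcal{L}_{\mathrm{SMEFT}} \supset \frac{\big[C_{\ell q}^{(1)}\big]_{\tau e 1 1}}{\Lambda^2}\, \big(\bar{\ell}_\tau \gamma^\mu \ell_e\big)\big(\bar{q}_1 \gamma_\mu q_1\big) + \mathrm{h.c.} $$

The same Wilson coefficient also feeds the low-energy hadronic decays τ→eY, which is why those modes already constrain the light-quark entries so strongly.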


2021 ◽  
Vol 2021 (2) ◽  
Author(s):  
Anna Mullin ◽  
Stuart Nicholls ◽  
Holly Pacey ◽  
Michael Parker ◽  
Martin White ◽  
...  

Abstract We present a novel technique for the analysis of proton-proton collision events from the ATLAS and CMS experiments at the Large Hadron Collider. For a given final state and choice of kinematic variables, we build a graph network in which the individual events appear as weighted nodes, with edges between events defined by their distance in kinematic space. We then show that it is possible to calculate local metrics of the network that serve as event-by-event variables for separating signal and background processes, and we evaluate these for a number of different networks that are derived from different distance metrics. Using supersymmetric electroweakino and stop production as examples, we construct prototype analyses that take account of the fact that the number of simulated Monte Carlo events used in an LHC analysis may differ from the number of events expected in the LHC dataset, allowing an accurate background estimate for a particle search at the LHC to be derived. For the electroweakino example, we show that the use of network variables outperforms both cut-and-count analyses that use the original variables and a boosted decision tree trained on the original variables. The stop example, deliberately chosen to be difficult to exclude due to its kinematic similarity with the top background, demonstrates that network variables are not automatically sensitive to BSM physics. Nevertheless, we identify local network metrics that show promise if their robustness under certain assumptions of node-weighted networks can be confirmed.
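A sketch of the event-network idea described here: events become nodes, edges connect events that are close in a chosen kinematic space, and a local network metric is evaluated per node as an event-by-event discriminating variable. The distance metric, edge threshold, weighting, and features below are illustrative assumptions, not the paper's exact construction:

```python
# Sketch: graph of events with edges defined by kinematic distance, plus
# per-node local metrics (weighted degree, clustering) as event variables.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# Hypothetical standardized per-event kinematics (e.g. MET, leading-jet pT).
events = rng.normal(size=(300, 2))
events[:60] += 2.0  # a "signal-like" cluster shifted in kinematic space

dist = squareform(pdist(events, metric="euclidean"))
threshold = 0.5  # illustrative edge-formation cut

G = nx.Graph()
G.add_nodes_from(range(len(events)))
for i in range(len(events)):
    for j in range(i + 1, len(events)):
        if dist[i, j] < threshold:
            # Edge weight falls with kinematic distance.
            G.add_edge(i, j, weight=1.0 / (1.0 + dist[i, j]))

# Local metrics, one value per event, usable like any analysis variable.
degree = dict(G.degree(weight="weight"))
clustering = nx.clustering(G, weight="weight")
print("event 0: weighted degree =", degree[0], " clustering =", clustering[0])
```

Signal events sitting in a dense region of kinematic space acquire, for example, a high weighted degree, which is what makes such local metrics usable as separating variables.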


2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Wolfgang Kilian ◽  
Sichun Sun ◽  
Qi-Shu Yan ◽  
Xiaoran Zhao ◽  
Zhijie Zhao

Abstract We study the observability of new interactions which modify Higgs-pair production via vector-boson fusion processes at the LHC and at future proton-proton colliders. In an effective-Lagrangian approach, we explore in particular the effect of the operator $h^2 W_{\mu\nu}^a W^{a,\mu\nu}$, which describes the interaction of the Higgs boson with transverse vector-boson polarization modes. By tagging highly boosted Higgs bosons in the final state, we determine projected bounds for the coefficient of this operator at the LHC and at a future 27 TeV or 100 TeV collider. Taking into account unitarity constraints, we estimate the new-physics discovery potential of Higgs-pair production in this channel.
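In an effective-Lagrangian notation consistent with the abstract, this operator would enter as a dimension-six term such as (the coefficient name and normalization below are illustrative choices, not necessarily the paper's conventions):

$$ \mathcal{L}_{\mathrm{eff}} \supset \frac{c_{hWW}}{\Lambda^2}\, h^2\, W_{\mu\nu}^a W^{a,\mu\nu} $$

The projected bounds quoted in the paper then translate into allowed ranges for the ratio c_hWW/Λ², with unitarity limiting the energy range over which the effective description can be trusted.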


2021 ◽  
Author(s):  
Thomas Weripuo Gyeera

The National Institute of Standards and Technology defines the fundamental characteristics of cloud computing as: on-demand computing, offered via the network, using pooled resources, with rapid elastic scaling and metered charging. The rapid dynamic allocation and release of resources on demand to meet heterogeneous computing needs is particularly challenging for data centres, which process a huge amount of data characterised by its high volume, velocity, variety and veracity (the 4Vs model). Data centres seek to regulate this by monitoring and adaptation, typically reacting to service failures after the fact. We present a real cloud test bed with the capability of proactively monitoring and gathering cloud resource information for making predictions and forecasts, in contrast with the state-of-the-art reactive monitoring of cloud data centres. We argue that the behavioural patterns and Key Performance Indicators (KPIs) characterizing virtualized servers, networks, and database applications can best be studied and analysed with predictive models. Specifically, we applied the boosted decision tree machine learning algorithm to make future predictions on the KPIs of a cloud server and virtual infrastructure network, yielding an R-squared of 0.9991 at a learning rate of 0.2. This predictive framework is beneficial for making short- and long-term predictions for cloud resources.
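A minimal sketch of the KPI-forecasting step described above, using a gradient-boosted decision tree regressor with the 0.2 learning rate mentioned in the abstract and scored by R-squared. The feature names and the synthetic data stand in for the paper's actual cloud telemetry:

```python
# Sketch: boosted-decision-tree regression for cloud KPI prediction.
# Inputs and target below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 5_000
# Hypothetical monitored inputs: CPU load, memory use, network throughput.
X = rng.uniform(0, 1, size=(n, 3))
# Hypothetical KPI target (e.g. response latency) with mild noise.
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] + rng.normal(0, 0.01, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(learning_rate=0.2, n_estimators=300, max_depth=3)
model.fit(X_tr, y_tr)
print("R^2:", r2_score(y_te, model.predict(X_te)))
```

In a live deployment the same model would be refit periodically on a sliding window of telemetry so that short- and long-term forecasts track the changing workload.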


2018 ◽  
Vol 20 (3) ◽  
pp. 298-105 ◽  
Author(s):  
Shrawan Kumar Trivedi ◽  
Prabin Kumar Panigrahi

Purpose: Email spam classification is becoming a challenging area in the domain of text classification. Precise and robust classifiers are judged not only by classification accuracy but also by sensitivity (correctly classified legitimate emails) and specificity (correctly classified unsolicited emails), captured by the false positive and false negative rates. This paper aims to present a comparative study of various decision tree classifiers (AD tree, decision stump and REP tree) with and without different boosting algorithms (bagging, boosting with re-sampling and AdaBoost).

Design/methodology/approach: Artificial intelligence and text mining approaches are incorporated in this study. Each decision tree classifier is tested on informative words/features selected from two publicly available data sets (SpamAssassin and LingSpam) using a greedy stepwise feature search method.

Findings: Without boosting, the REP tree provides the highest accuracy, with the AD tree ranking second; the decision stump is the under-performing classifier of this study. With boosting, the combination of REP tree and AdaBoost compares favourably with the other classification models. When the false positive rate and accuracy are taken together, both AD tree and REP tree with AdaBoost carry out an effective classification task. Greedy stepwise search proves its worth by selecting a subset of valuable features to identify the correct class of emails.

Research limitations/implications: This research is focussed on the classification of email spam written in the English language only. The proposed models work with the content (words/features) of email data, mostly found in the body of the mail. Image spam was not included, nor were other message types such as short message service or multimedia messaging service.

Practical implications: The boosted decision tree approach proposed and used here to classify email spam and ham files is found to be highly effective in comparison with other state-of-the-art models used in other studies. This classifier may be tested on different applications and may provide new insights for developers and researchers.

Originality/value: A comparison of decision tree classifiers with and without ensembles is presented for spam classification.
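An illustrative sketch of one configuration from this comparison: AdaBoost over a depth-one decision tree (a decision stump) evaluated by sensitivity and specificity. The synthetic feature matrix stands in for the SpamAssassin/LingSpam word features, whose preprocessing and greedy stepwise selection are not shown:

```python
# Sketch: AdaBoost with a decision-stump base learner for spam/ham labels.
# make_classification stands in for selected word/feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=4000, n_features=50, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stump = DecisionTreeClassifier(max_depth=1)  # the "decision stump"
clf = AdaBoostClassifier(estimator=stump, n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
```

Swapping the base learner (e.g. a deeper tree in place of the stump) reproduces the kind of classifier-by-classifier comparison the study reports.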


Author(s):  
Zeyi Wen ◽  
Bingsheng He ◽  
Ramamohanarao Kotagiri ◽  
Shengliang Lu ◽  
Jiashuai Shi

2019 ◽  
Vol 28 (13) ◽  
pp. 1941011 ◽  
Author(s):  
K. M. Belotsky ◽  
E. A. Esipova ◽  
A. Kh. Kamaletdinov ◽  
E. S. Shlepkina ◽  
M. L. Solovyov

Here, we briefly review possible indirect effects of dark matter (DM) in the universe. These include effects in cosmic rays (CR): first of all, the positron excess at ∼500 GeV and a possible electron-positron excess at 1-1.5 TeV. We note that the main and least model-dependent constraint on such an interpretation of the CR effects comes from the gamma-ray background. Even the ordinary $e^+e^-$ mode of DM decay or annihilation produces so many prompt photons (FSR) that it leads to contradiction with data on cosmic gamma-rays. We present our attempts to avoid the gamma-ray constraint. They concern peculiarities of both the spatial distribution of DM and its physics. The latter involves complications of the decay/annihilation modes of DM, modifications of the Lagrangian of the DM-ordinary matter interaction, and inclusion of a mode with identical fermions in the final state. No possibility of suppression was found in this way except, possibly, the mode with identical fermions, while varying the spatial distribution allows achieving consistency between different data. We also consider a stable form of DM which can interact with baryons, and show what constraints such a DM candidate receives from the damping effect in plasma during large-scale structure (LSS) formation, in comparison with other existing constraints.

