object tagging
Recently Published Documents

TOTAL DOCUMENTS: 7 (last five years: 3)
H-INDEX: 2 (last five years: 1)

Author(s):  
Matthew Klimek

Abstract: We propose the study of the time substructure of jets, motivated by the fact that the next generation of detectors at particle colliders will resolve the time scale over which jet constituents arrive. This effect is directly related to the fragmentation and hadronization process, which transforms partons into massive hadrons with a distribution of velocities. We review the basic predictions for the velocity distribution of jet hadrons, and suggest an application for this information in the context of boosted object tagging. By noting that the velocity distribution is determined by the properties of the color string which ends on the parton that initiates the jet, we observe that jets originating from boosted color singlets, such as Standard Model electroweak bosons, will exhibit velocity distributions that are boosted relative to QCD jets of similar jet energy. We find that by performing a simple cut on the corresponding distribution of charged hadron arrival times at the detector, we can discriminate against QCD jets that would otherwise give a false positive under a traditional spatial-substructure-based boosted object tagger.
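
The abstract does not specify the exact timing observable or cut, so the following is only a minimal sketch of the kind of quantity involved: the arrival-time delay of each charged hadron relative to a massless particle, computed from its four-momentum, followed by an illustrative cut on the median delay. The detector radius, the threshold, and the helper names are hypothetical, not the paper's actual tagger.

```python
import numpy as np

C = 0.2998   # speed of light in m/ns
R = 1.2      # hypothetical radius (m) of the timing layer where arrival times are measured

def arrival_delays(px, py, pz, E, radius=R):
    """Delay (ns) of each charged hadron relative to a massless particle
    travelling the same straight-line distance to `radius`.
    Inputs are arrays of four-momentum components in GeV (natural units, c = 1)."""
    p = np.sqrt(px**2 + py**2 + pz**2)
    beta = p / E                       # hadron velocity in units of c
    t_hadron = radius / (beta * C)     # straight-line flight time in ns
    t_massless = radius / C
    return t_hadron - t_massless

def passes_time_cut(delays, max_median_delay_ns=0.05):
    """Toy tagger: accept the jet if the median charged-hadron delay is small,
    as expected for a jet from a boosted colour singlet; the threshold is illustrative."""
    return np.median(delays) < max_median_delay_ns
```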


2019
Vol. 7 (3)
Author(s):
Liam Moore, Karl Nordström, Sreedevi Varma, Malcolm Fairbairn

We compare the performance of a convolutional neural network (CNN) trained on jet images with dense neural networks (DNNs) trained on N-subjettiness variables to study the distinguishing power of these two techniques applied to top quark decays. We find that they perform almost identically and are highly correlated once jet mass information is included, which suggests they access the same underlying information; this information can be intuitively understood as being contained in 4-, 5-, 6-, and 8-body kinematic phase spaces, depending on the sample. Both methods are therefore highly useful for heavy object tagging, and the comparison provides a tentative answer to the question of what the image network is actually learning.
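
As a rough illustration of the DNN side of this comparison (the jet-image CNN is not sketched), the snippet below trains a small dense classifier on a table of per-jet N-subjettiness-style variables plus the jet mass. The feature layout, network size, and the placeholder random data are assumptions for the sketch, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: one row per jet, columns are N-subjettiness-style variables plus the jet
# mass (which the abstract notes is needed for the two methods to line up);
# y: 1 for top jets, 0 for QCD jets.  Placeholder random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 9))        # 8 substructure inputs + jet mass (illustrative)
y = rng.integers(0, 2, size=10_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Small dense network, analogous in spirit to the DNN baseline in the abstract.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
dnn.fit(X_tr, y_tr)

scores = dnn.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, scores))   # ~0.5 on the random placeholder data
```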


2017
Vol. 10 (13), pp. 292
Author(s):
Ankush Rai, Jagadeesh Kannan R

In comparison with standard RGB or gray-scale images, multispectral images (MSI) are intended to convey a high-definition, authentic representation of real-world scenes, significantly enhancing the performance of tasks such as computer vision, image segmentation, object extraction, and object tagging. In practice, however, MSI acquired from satellites are corrupted by various sources of noise during acquisition. Finding a good mathematical description of a learning-based denoising model is a difficult research question, and many different approaches have been reported in the literature. Many works apply neural networks that learn a sparse dictionary of noisy patches; this approach allows the algorithm to optimize itself for the task at hand through machine learning. In this survey, we study past techniques for denoising noise-affected MSI, outlining each technique and its advantages relative to the others.
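
As a hedged illustration of the sparse-dictionary, patch-based denoising idea surveyed here, the sketch below uses scikit-learn to learn a dictionary from the noisy patches of one spectral band and reconstructs that band from its sparse codes, then applies this band by band to an MSI cube. The patch size, dictionary size, sparsity level, and the independent per-band treatment are illustrative choices, not a specific method from the survey.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def denoise_band(band, patch_size=(7, 7), n_atoms=100, n_nonzero=2):
    """Denoise one spectral band with a dictionary learned from its own noisy patches."""
    patches = extract_patches_2d(band, patch_size)
    data = patches.reshape(patches.shape[0], -1)
    mean = data.mean(axis=0)
    data = data - mean

    # Learn an overcomplete dictionary of patch atoms from the noisy data itself.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       max_iter=100, random_state=0)
    dico.fit(data)

    # Sparse-code every patch with orthogonal matching pursuit and rebuild the band.
    dico.set_params(transform_algorithm="omp", transform_n_nonzero_coefs=n_nonzero)
    code = dico.transform(data)
    recon = (code @ dico.components_) + mean
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), band.shape)

def denoise_msi(cube):
    """Treat the MSI cube (H, W, bands) as a stack of bands denoised independently;
    joint spatial-spectral methods discussed in the literature are more involved."""
    return np.stack([denoise_band(cube[..., b]) for b in range(cube.shape[-1])], axis=-1)
```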

