A Machine Learning Approach for the Tune Estimation in the LHC

Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 197
Author(s):  
Leander Grech ◽  
Gianluca Valentino ◽  
Diogo Alves

The betatron tune in the Large Hadron Collider (LHC) is measured using a Base-Band Tune (BBQ) system. The processing of these BBQ signals is often perturbed by 50 Hz noise harmonics present in the beam. This causes the tune measurement algorithm, currently based on peak detection, to provide incorrect tune estimates during the acceleration cycle with values that oscillate between neighbouring harmonics. The LHC tune feedback (QFB) cannot be used to its full extent in these conditions as it relies on stable and reliable tune estimates. In this work, we propose new tune estimation algorithms, designed to mitigate this problem through different techniques. As ground-truth of the real tune measurement does not exist, we developed a surrogate model, which allowed us to perform a comparative analysis of a simple weighted moving average, Gaussian Processes and different deep learning techniques. The simulated dataset used to train the deep models was also improved using a variant of Generative Adversarial Networks (GANs) called SimGAN. In addition, we demonstrate how these methods perform with respect to the present tune estimation algorithm.
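To make the comparison above concrete, the following is a minimal sketch, not the authors' implementation, of the two simplest baselines mentioned in the abstract: a causal weighted moving average and a Gaussian Process regression applied to a noisy tune trace. The synthetic signal, window length and kernel settings are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): smoothing a noisy betatron-tune trace
# with a causal weighted moving average and a Gaussian Process regression
# baseline. The synthetic "raw tune" signal and all constants are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)                    # normalised time through the ramp
true_tune = 0.31 + 0.02 * np.sin(2 * np.pi * t)   # hypothetical slowly drifting tune
raw_tune = true_tune + 0.004 * rng.standard_normal(t.size)  # noisy per-spectrum estimate

def weighted_moving_average(x, window=15):
    """Causal weighted moving average: newer samples get larger weights."""
    w = np.arange(1, window + 1, dtype=float)
    out = np.empty_like(x)
    for i in range(x.size):
        seg = x[max(0, i - window + 1):i + 1]
        ww = w[-seg.size:]
        out[i] = np.dot(seg, ww) / ww.sum()
    return out

wma = weighted_moving_average(raw_tune)

# Gaussian Process regression with an RBF + white-noise kernel, fit on a
# subsample of the trace to keep the example cheap to run.
kernel = RBF(length_scale=0.05) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[::5, None], raw_tune[::5])
gp_mean = gp.predict(t[:, None])

print("WMA RMSE:", np.sqrt(np.mean((wma - true_tune) ** 2)))
print("GP  RMSE:", np.sqrt(np.mean((gp_mean - true_tune) ** 2)))
```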


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and other fields. Alongside these useful applications, an alarming concern has arisen regarding physical infrastructure security, safety and privacy, owing to their potential use in malicious activities. To address this problem, we propose a novel solution that automates drone detection and identification using a drone’s acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the implementation of an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique, the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of deep learning techniques for drone detection and identification, while confirming our hypothesis that GANs can generate realistic drone audio clips that enhance the detection of new and unfamiliar drones.
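As a rough illustration of the CNN branch of such a pipeline (not the authors' architecture), the sketch below defines a small convolutional classifier over mel-spectrogram patches; the input shape, layer widths and two-class setup are assumptions for illustration only.

```python
# Minimal sketch (not the paper's architecture): a small CNN that classifies
# mel-spectrogram patches as drone / no-drone. Input shape, layer widths and
# the two-class setup are illustrative assumptions.
import torch
import torch.nn as nn

class DroneCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> fixed-size vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mels, n_frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = DroneCNN()
dummy = torch.randn(8, 1, 64, 128)             # batch of 8 spectrogram patches
logits = model(dummy)
print(logits.shape)                            # torch.Size([8, 2])
```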


2021 ◽  
Vol 251 ◽  
pp. 03043
Author(s):  
Fedor Ratnikov ◽  
Alexander Rogachev

Simulation is one of the key components in high energy physics. Historically, it has relied on Monte Carlo methods, which require a tremendous amount of computing resources. These methods may struggle to meet the demands expected at the High Luminosity Large Hadron Collider, so the experiment urgently needs new fast simulation techniques. The application of Generative Adversarial Networks is a promising way to speed up the simulation while providing the necessary physics performance. In this paper we propose the Self-Attention Generative Adversarial Network as a possible improvement of the network architecture. The approach is demonstrated by generating responses of an LHCb-type electromagnetic calorimeter.
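The self-attention idea referenced above can be illustrated with a SAGAN-style attention block that could sit inside a convolutional generator; this is a generic sketch under assumed channel counts, not the network used for the LHCb calorimeter study.

```python
# Minimal sketch (not the authors' network): a SAGAN-style self-attention block
# that could be inserted into a convolutional generator for calorimeter images.
# The channel counts and the zero-initialised blending weight gamma follow the
# usual SAGAN recipe; everything else here is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))   # attention is blended in gradually

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.key(x).flatten(2)                     # (b, c/8, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)      # (b, hw, hw) attention map
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

x = torch.randn(4, 64, 16, 16)                         # e.g. generator feature maps
print(SelfAttention2d(64)(x).shape)                    # torch.Size([4, 64, 16, 16])
```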


2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Weili Xiong ◽  
Wei Fan ◽  
Rui Ding

This paper studies least-squares parameter estimation algorithms for input nonlinear systems, including the input nonlinear controlled autoregressive (IN-CAR) model and the input nonlinear controlled autoregressive autoregressive moving average (IN-CARARMA) model. The basic idea is to obtain linear-in-parameters models by overparameterizing such nonlinear systems and to use the least-squares algorithm to estimate the unknown parameter vectors. It is proved that the parameter estimates consistently converge to their true values under the persistent excitation condition. A simulation example is provided.
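A minimal worked sketch of the overparameterization idea, on a toy Hammerstein-type system rather than the paper's exact IN-CAR/IN-CARARMA setup, is given below; the system coefficients, basis functions and noise level are illustrative.

```python
# Minimal worked sketch (not the paper's published algorithm): estimating an
# input-nonlinear CAR-type model by overparameterisation + least squares.
# The toy system, basis choice and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 500
u = rng.uniform(-1.0, 1.0, N)              # persistently exciting input
a1, b1, c1, c2 = 0.5, 1.2, 1.0, 0.4        # "true" parameters of the toy system
f = lambda x: c1 * x + c2 * x ** 2         # static input nonlinearity

y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * f(u[k - 1]) + 0.01 * rng.standard_normal()

# Overparameterised linear-in-parameters form:
#   y(k) = theta1 * (-y(k-1)) + theta2 * u(k-1) + theta3 * u(k-1)^2
# with theta = [a1, b1*c1, b1*c2].
Phi = np.column_stack([-y[:-1], u[:-1], u[:-1] ** 2])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

print("estimated [a1, b1*c1, b1*c2]:", theta)
print("true      [a1, b1*c1, b1*c2]:", [a1, b1 * c1, b1 * c2])
```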


2021 ◽  
Author(s):  
Amin Heyrani Nobari ◽  
Muhammad Fathy Rashad ◽  
Faez Ahmed

Modern machine learning techniques, such as deep neural networks, are transforming many disciplines ranging from image recognition to language understanding, by uncovering patterns in big data and making accurate predictions. They have also shown promising results for synthesizing new designs, which is crucial for creating products and enabling innovation. Generative models, including generative adversarial networks (GANs), have proven to be effective for design synthesis with applications ranging from product design to metamaterial design. These automated computational design methods can support human designers, who typically create designs by a time-consuming process of iteratively exploring ideas using experience and heuristics. However, challenges remain in automatically synthesizing ‘creative’ designs: GAN models are not capable of generating unique designs, a key to innovation and a major gap in AI-based design automation applications. This paper proposes an automated method, named CreativeGAN, for generating novel designs. It does so by identifying components that make a design unique and modifying a GAN model such that it becomes more likely to generate designs with those unique components. The method combines state-of-the-art novelty detection, segmentation, novelty localization, rewriting, and generative models for creative design synthesis. Using a dataset of bicycle designs, we demonstrate that the method can create new bicycle designs with unique frames and handles, and generalize rare novelties to a broad set of designs. Our automated method requires no human intervention and demonstrates a way to rethink creative design synthesis and exploration. For details and code used in this paper please refer to http://decode.mit.edu/projects/creativegan/.
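The full CreativeGAN pipeline combines several components; the sketch below only illustrates the novelty-scoring step, using autoencoder reconstruction error as a stand-in novelty detector rather than the authors' method. The architecture, threshold and data are placeholders.

```python
# Minimal sketch: the paper combines novelty detection, segmentation and GAN
# rewriting; this only illustrates one ingredient, scoring how "unusual" a
# design image is, using autoencoder reconstruction error as a stand-in
# novelty detector. The architecture and input sizes are illustrative.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def novelty_score(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-image mean squared reconstruction error: higher = more novel."""
    with torch.no_grad():
        recon = model(x)
    return ((recon - x) ** 2).mean(dim=(1, 2, 3))

model = TinyAutoencoder()                    # would be trained on "typical" designs
designs = torch.rand(4, 1, 64, 64)           # dummy batch of design images
print(novelty_score(model, designs))
```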


2020 ◽  
Vol 17 (169) ◽  
pp. 20200267
Author(s):  
Arghavan Arafati ◽  
Daisuke Morisawa ◽  
Michael R. Avendi ◽  
M. Reza Amini ◽  
Ramin A. Assadi ◽  
...  

A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and verifies a novel, generalizable and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel classification training, a machine learning approach not previously used for cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared our method's performance with a state-of-the-art method on our dataset in addition to an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved a Dice metric of 92.1%, 86.3%, 89.6% and 91.4% for the LV, RV, LA and RA, respectively. Correlations between automatic and manual LV volumes were 0.94 and 0.93 for end-diastolic and end-systolic volume, respectively. Excellent agreement with the chambers’ reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel classification training can effectively yield generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
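The basic wiring of adversarial training for segmentation, where a discriminator judges image-mask pairs and its feedback is added to the supervised loss, can be sketched as follows; the stand-in networks, channel counts and loss weighting are assumptions and not the study's actual model.

```python
# Minimal sketch (not the study's network): adversarial training for
# segmentation, where a discriminator judges (image, mask) pairs and its loss
# is added to the usual supervised segmentation loss. Shapes, channel counts
# and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Sequential(                 # stand-in for a fully convolutional net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 1),                   # 4 output channels: LV, RV, LA, RA
)
discriminator = nn.Sequential(             # judges image + mask stacked on channels
    nn.Conv2d(1 + 4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

image = torch.randn(2, 1, 64, 64)                      # dummy echo frames
target = torch.randint(0, 4, (2, 64, 64))              # dummy ground-truth labels

pred_logits = segmenter(image)                         # (2, 4, 64, 64)
pred_soft = torch.softmax(pred_logits, dim=1)
real_mask = F.one_hot(target, 4).permute(0, 3, 1, 2).float()

# Segmenter loss: supervised term + "fool the discriminator" term.
d_fake = discriminator(torch.cat([image, pred_soft], dim=1))
g_loss = ce(pred_logits, target) + 0.1 * bce(d_fake, torch.ones_like(d_fake))

# Discriminator loss: real pairs labelled 1, predicted pairs labelled 0.
d_real = discriminator(torch.cat([image, real_mask], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + \
         bce(discriminator(torch.cat([image, pred_soft.detach()], dim=1)),
             torch.zeros_like(d_fake))
print(g_loss.item(), d_loss.item())
```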


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3075 ◽  
Author(s):  
Baoqing Guo ◽  
Gan Geng ◽  
Liqiang Zhu ◽  
Hongmei Shi ◽  
Zujun Yu

Foreign object intrusion is a great threat to the safe operation of high-speed railways, so accurate detection of intruding foreign objects is particularly important. Because samples of intruding foreign objects are scarce during the operational period, artificially generated samples can greatly benefit the development of detection methods. In this paper, we propose a novel method to generate railway intruding object images based on an improved conditional deep convolutional generative adversarial network (C-DCGAN). It consists of a generator and multi-scale discriminators, and the loss function is also improved so as to generate samples of high quality and authenticity. The generator is extracted in order to generate foreign object images from input semantic labels, and the generated objects are synthesized into the railway scene. To give the generated objects a realistic scale at different positions in the railway scene, a scale estimation algorithm based on the gauge constant is proposed. Experimental results on the railway intruding object dataset show that the proposed C-DCGAN model outperforms several state-of-the-art methods and achieves higher quality (pixel-wise accuracy, mean intersection-over-union (mIoU), and mean average precision (mAP) of 80.46%, 0.65, and 0.69, respectively) and diversity (a Fréchet Inception Distance (FID) score of 26.87) of the generated samples. The mIoU of real-generated pedestrian pairs reaches 0.85, indicating accurate scaling of the generated intruding objects in the railway scene.
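The gauge-based scaling idea can be illustrated with a few lines of arithmetic: the known track gauge gives a pixels-per-metre factor at the insertion point, which then scales the generated object. This is an illustrative sketch, not the paper's algorithm; only the 1.435 m standard-gauge value is a real constant, the other numbers are made up.

```python
# Minimal sketch (illustrative, not the paper's algorithm): scaling a generated
# intruding object using the rail gauge as a known physical reference. The
# standard-gauge value of 1.435 m is real; the pixel measurements and the
# pedestrian height below are made-up example numbers.
GAUGE_M = 1.435                     # standard track gauge in metres

def object_height_px(gauge_px_at_position: float, real_height_m: float) -> float:
    """Pixels per metre at that image row, times the object's real height."""
    px_per_metre = gauge_px_at_position / GAUGE_M
    return real_height_m * px_per_metre

# Example: at the insertion point the rails are 120 px apart; a 1.7 m pedestrian
# should then be drawn roughly 142 px tall before being pasted into the scene.
print(round(object_height_px(gauge_px_at_position=120.0, real_height_m=1.7)))
```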


Image colorization is the process of taking an input grayscale (black and white) image and producing an output colorized image that represents the semantic color tones of the input. Over the past few years, automatic image colorization has attracted significant interest and considerable progress has been made in the field by various researchers. Image colorization finds application in many domains, including medical imaging, restoration of historical documents, and more. Different approaches to this problem have used Convolutional Neural Networks as well as Generative Adversarial Networks. These colorization networks are not only based on different architectures but are also tested on varied datasets. This paper aims to cover some of these proposed approaches and the techniques they employ. The results of the generative models and traditional deep neural networks are compared, along with their current limitations. The paper presents a summarized view of past and current advances in the field of image colorization contributed by different authors and researchers.
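One common CNN formulation of the problem, predicting the chrominance channels of a Lab image from its lightness channel, can be sketched as follows; this is a generic illustration rather than any specific model surveyed here, and the layer sizes are arbitrary.

```python
# Minimal sketch (one common formulation, not any specific surveyed model):
# colorization as predicting the two chrominance (ab) channels of a Lab image
# from its lightness (L) channel with a small CNN and an L1 loss. The layer
# sizes and the use of Lab colour space are illustrative assumptions.
import torch
import torch.nn as nn

colorizer = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),   # ab channels scaled to [-1, 1]
)

L = torch.rand(4, 1, 64, 64)                     # dummy lightness channel, batch of 4
ab_true = torch.rand(4, 2, 64, 64) * 2 - 1       # dummy target chrominance
ab_pred = colorizer(L)
loss = nn.functional.l1_loss(ab_pred, ab_true)
loss.backward()                                  # one supervised training step's gradient
print(ab_pred.shape, loss.item())
```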


2018 ◽  
Author(s):  
Matthias Häring ◽  
Jörg Großhans ◽  
Fred Wolf ◽  
Stephan Eule

A central problem in biomedical imaging is the automated segmentation of images for further quantitative analysis. Recently, fully convolutional neural networks, such as the U-Net, have been applied successfully in a variety of segmentation tasks. A downside of this approach is the requirement for a large amount of well-prepared training samples, consisting of image/ground-truth mask pairs. Since training data must be created by hand for each experiment, this task can be very costly and time-consuming. Here, we present a segmentation method based on cycle-consistent generative adversarial networks, which can be trained even in the absence of prepared image/mask pairs. We show that it successfully performs image segmentation tasks on samples with substantial defects and even generalizes well to different tissue types.
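The cycle-consistency term that allows training without paired image/mask examples can be sketched as below, with simple stand-in generators; the shapes and networks are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' model): the cycle-consistency term that lets
# segmentation be learned without paired image/mask examples. G maps images to
# masks, F maps masks back to images; both are stand-in single convolutions
# here and all shapes are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)      # image -> mask (stand-in generator)
F = nn.Conv2d(1, 1, 3, padding=1)      # mask -> image (stand-in generator)
l1 = nn.L1Loss()

images = torch.rand(4, 1, 64, 64)      # unpaired microscopy-style images
masks = torch.rand(4, 1, 64, 64)       # unpaired mask-style images

# Forward cycle: image -> mask -> image; backward cycle: mask -> image -> mask.
cycle_loss = l1(F(G(images)), images) + l1(G(F(masks)), masks)
# In the full method this term is added to the two adversarial (GAN) losses.
cycle_loss.backward()
print(cycle_loss.item())
```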

