Using Spectator Matter for Centrality Determination in Nucleus-Nucleus Collisions

Particles ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 227-235
Author(s):  
Aleksandr Svetlichnyi ◽  
Roman Nepeyvoda ◽  
Igor Pshenichnov

One of the common methods to measure the centrality of nucleus-nucleus collision events consists of detecting forward spectator neutrons. Because of the non-monotonic dependence of neutron numbers on centrality, other characteristics of spectator matter in 197Au–197Au collisions at NICA must be considered to improve the centrality determination. The numbers of spectator deuterons and α-particles and the forward–backward asymmetry of the numbers of free spectator nucleons were calculated with the Abrasion–Ablation Monte Carlo for Colliders (AAMCC) model as functions of event centrality. It was shown that the number of charged fragments per spectator nucleon decreases monotonically with an increase of the impact parameter and thus can be used to estimate the collision centrality. The conditional probabilities that a given event with specific spectator characteristics belongs to a certain centrality class were calculated by means of AAMCC. Such probabilities can be used as input to Bayesian or other machine-learning approaches to centrality determination in 197Au–197Au collisions.
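The Bayesian use of such conditional probabilities can be sketched as follows. This is a minimal illustration of Bayes' rule over centrality classes; the class labels, priors, and likelihood values below are placeholders, not AAMCC output.

```python
# Sketch of Bayesian centrality assignment: given conditional probabilities
# P(observed spectator signature | centrality class) from a model such as
# AAMCC, invert them with Bayes' rule to obtain P(class | observation).
# All numbers are illustrative placeholders, not AAMCC results.

# Prior P(class): geometric weight of each centrality class (here uniform).
priors = {"0-10%": 0.25, "10-30%": 0.25, "30-50%": 0.25, "50-100%": 0.25}

# Likelihood P(observation | class), e.g. read off model histograms.
likelihoods = {"0-10%": 0.05, "10-30%": 0.30, "30-50%": 0.50, "50-100%": 0.15}

def posterior(priors, likelihoods):
    """Return P(class | observation) via Bayes' rule."""
    joint = {c: priors[c] * likelihoods[c] for c in priors}
    norm = sum(joint.values())
    return {c: joint[c] / norm for c in joint}

post = posterior(priors, likelihoods)
best = max(post, key=post.get)  # most probable centrality class
```

With uniform priors the posterior simply renormalises the likelihoods, so the most probable class here is the one with the largest likelihood.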

2014 ◽  
Vol 100 ◽  
pp. 57-67 ◽  
Author(s):  
Annalaura Mancia ◽  
James C. Ryan ◽  
Frances M. Van Dolah ◽  
John R. Kucklick ◽  
Teresa K. Rowles ◽  
...  

Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2764
Author(s):  
Xin Yu Liew ◽  
Nazia Hameed ◽  
Jeremie Clos

A computer-aided diagnosis (CAD) expert system is a powerful tool for efficiently assisting a pathologist in achieving an early diagnosis of breast cancer. This process identifies the presence of cancer in breast tissue samples and the distinct stages of the cancer. In a standard CAD system, the main process involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details together with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to examine the deep learning methods that outperform conventional ones, and to provide a summary for future researchers to analyse and improve the existing techniques. Lastly, we discuss the research gaps in existing machine learning approaches and propose direction guidelines for upcoming researchers.
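The standard CAD stages listed above can be sketched as a chain of functions. Every stage below is a deliberately toy stand-in (threshold segmentation, two hand-picked features, a rule-based classifier) meant only to show how the stages compose, not any method from the reviewed literature.

```python
# Toy sketch of the standard CAD pipeline named in the review:
# pre-processing -> segmentation -> feature extraction -> classification.

def preprocess(image):
    """Normalise pixel intensities to [0, 1]."""
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo) for p in image] if hi > lo else image

def segment(image, threshold=0.5):
    """Crude threshold segmentation: keep candidate tissue pixels."""
    return [p for p in image if p >= threshold]

def extract_features(region):
    """Toy features: mean intensity and region size."""
    return {"mean": sum(region) / len(region), "size": len(region)}

def classify(features, size_cutoff=3):
    """Toy rule-based classifier standing in for an ML model."""
    return "suspicious" if features["size"] >= size_cutoff else "benign"

# Run the pipeline on a flattened toy "image" of raw intensities.
image = [10, 200, 220, 230, 15, 240, 20]
label = classify(extract_features(segment(preprocess(image))))
```

In a real CAD system each stand-in would be replaced by the conventional or deep learning method chosen for that stage, while the overall composition stays the same.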


2020 ◽  
Vol 245 ◽  
pp. 02002
Author(s):  
Sean Gasiorowski ◽  
Heather Gray

The ATLAS physics program at the LHC relies on very large samples of simulated events. Most of these samples are produced with Geant4, which provides a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes at a high cost in CPU time, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future as datasets grow. To solve this problem, sophisticated fast simulation tools have been developed, and they will become the default tools in ATLAS production in Run 3 and beyond. The slowest component is the simulation of calorimeter showers, which is replaced by a new parametrised description of the longitudinal and lateral energy deposits, including machine learning approaches, achieving a fast but accurate description. In this talk we describe the new tool for fast calorimeter simulation that has been developed by ATLAS, review its technical and physics performance, and demonstrate its potential to transform physics analyses.
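The parametrisation idea can be illustrated with the classic Gamma-function (Longo–Sestili) form for a longitudinal shower profile: instead of tracking every shower particle as Geant4 does, energy deposits are drawn from a fitted distribution. The parameter values below are illustrative, not ATLAS's fitted ones.

```python
import random

# Sketch of parametrised calorimeter simulation: sample shower depths from a
# Gamma-distributed longitudinal profile (Longo-Sestili form) and histogram
# the deposited energy per radiation length. Parameters are illustrative.

def sample_longitudinal_profile(total_energy, alpha=3.0, beta=0.5,
                                n=10000, seed=1):
    """Bin energy deposits whose depths follow Gamma(alpha, scale=1/beta)."""
    rng = random.Random(seed)
    depths = [rng.gammavariate(alpha, 1.0 / beta) for _ in range(n)]
    quantum = total_energy / n  # equal energy quantum per sampled deposit
    profile = {}
    for t in depths:
        layer = int(t)  # bin index in radiation lengths
        profile[layer] = profile.get(layer, 0.0) + quantum
    return profile

profile = sample_longitudinal_profile(total_energy=100.0)  # e.g. 100 GeV shower
```

Sampling a few thousand deposits this way is orders of magnitude cheaper than full particle transport, which is the essence of the speed-up.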


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Alireza Davoudi ◽  
Mohsen Ahmadi ◽  
Abbas Sharifi ◽  
Roshina Hassantabar ◽  
Narges Najafi ◽  
...  

Statins may aid the treatment of COVID-19 patients because of their involvement with angiotensin-converting enzyme 2. The main objective of this study is to evaluate the impact of statins on COVID-19 severity for people who had been taking statins before COVID-19 infection. The study patients had taken one of three statins: Atorvastatin, Simvastatin, or Rosuvastatin. The case study includes 561 patients admitted to the Razi Hospital in Ghaemshahr, Iran, during February and March 2020. Illness severity was encoded, based on respiratory rate, oxygen saturation, and systolic and diastolic pressure, into five categories: mild, medium, severe, critical, and death. Since 69.23% of participants were in the mild severity category, the results showed a positive effect of Simvastatin on COVID-19 severity for people who took Simvastatin before being infected with COVID-19. Also, the systolic pressure for this group is 137.31 mmHg, which is higher than that of the total patient population. Another result of this study is that Simvastatin takers have an average O2Sat of 95.77%, whereas the average O2Sat over the entire case study is 92.42%, corresponding to medium severity. In the rest of this paper, we use machine learning approaches to diagnose the severity of COVID-19 patients based on clinical features. Results indicate that the decision tree method can predict patients' illness severity with 87.9% accuracy. Other methods, including the K-nearest neighbors (KNN) algorithm, support vector machine (SVM), Naïve Bayes classifier, and discriminant analysis, showed accuracy levels of 80%, 68.8%, 61.1%, and 85.1%, respectively.
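A decision tree over the clinical features named in the study can be sketched by hand; the thresholds below are illustrative clinical-style cutoffs, not the tree fitted in the paper.

```python
# Toy hand-written decision tree for severity triage over the study's
# clinical features (respiratory rate, O2 saturation, systolic pressure).
# Thresholds are illustrative only, not fitted to the paper's data.

def triage(resp_rate, o2_sat, systolic):
    """Return one of the study's severity labels for a single patient."""
    if o2_sat >= 94:
        # Good oxygenation: mild unless breathing is fast.
        return "mild" if resp_rate < 24 else "medium"
    if o2_sat >= 90:
        # Borderline oxygenation: low blood pressure escalates severity.
        return "severe" if systolic < 100 else "medium"
    # Poor oxygenation.
    return "critical"

label = triage(resp_rate=18, o2_sat=96, systolic=130)
```

A fitted tree (such as the one reaching 87.9% accuracy in the paper) learns such thresholds from data rather than having them written by hand.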


2021 ◽  
Author(s):  
Thiago Abdo ◽  
Fabiano Silva

The purpose of this paper is to analyze the use of different machine learning approaches and algorithms to be integrated as automated assistance in a tool that aids the creation of new annotated datasets. We evaluate how they scale in an environment without dedicated machine learning hardware. In particular, we study their impact on a dataset with few examples and on one that is still being constructed. We experiment with a deep learning algorithm (BERT) and with classical learning algorithms that have a lower computational cost (Word2Vec and GloVe combined with RF and SVM). Our experiments show that the deep learning algorithm has a performance advantage over the classical techniques. However, its high computational cost makes it inadequate for an environment with reduced hardware resources. We conduct simulations using active and iterative machine learning techniques to assist the creation of new datasets, and for these simulations we use the classical learning algorithms because of their lower computational cost. The knowledge gathered with our experimental evaluation aims to support the creation of a tool for building new text datasets.
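The active learning simulation can be sketched as pool-based uncertainty sampling; the scorer below is a toy stand-in for the classical classifiers (text length as a fake probability), used only to show the selection loop.

```python
# Sketch of pool-based active learning with uncertainty sampling: at each
# round, the examples the current model is least sure about are sent to a
# human annotator. The scorer is a toy placeholder, not W2V/GloVe + SVM.

def uncertainty(prob_positive):
    """0.0 for a confident prediction, 1.0 at maximal uncertainty (p = 0.5)."""
    return 1.0 - abs(prob_positive - 0.5) * 2.0

def select_for_annotation(pool, scorer, batch_size=2):
    """Pick the unlabeled examples with the highest model uncertainty."""
    ranked = sorted(pool, key=lambda x: uncertainty(scorer(x)), reverse=True)
    return ranked[:batch_size]

# Toy "model": probability grows with text length (placeholder classifier).
scorer = lambda text: min(len(text) / 20.0, 1.0)

pool = ["ok", "a somewhat longer sentence here", "medium text", "tiny"]
to_label = select_for_annotation(pool, scorer)
```

After annotation the newly labeled examples are added to the training set, the model is retrained, and the loop repeats, which is why low-cost classical models are attractive here.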


2018 ◽  
Vol 7 (2) ◽  
pp. 917
Author(s):  
S Venkata Suryanarayana ◽  
G N. Balaji ◽  
G Venkateswara Rao

With the extensive use of credit cards, fraud has emerged as a major issue in the credit card business. It is hard to obtain figures on the impact of fraud, since companies and banks are reluctant to disclose the amount of losses due to fraud. At the same time, public data are scarcely available because of confidentiality issues, leaving many questions about the best strategy unanswered. Another problem in estimating credit card fraud losses is that only detected frauds can be measured; it is not possible to assess the size of unreported or undetected fraud. Fraud patterns are changing rapidly, so fraud detection needs to be re-evaluated, moving from a reactive to a proactive approach. In recent years, machine learning has gained a lot of popularity in image analysis, natural language processing, and speech recognition. In this regard, implementing efficient fraud detection algorithms using machine learning techniques is key to reducing these losses and to assisting fraud investigators. In this paper, a logistic regression based machine learning approach is utilized to detect credit card fraud. The results show that the logistic regression based approach achieves the highest accuracy and can be used effectively to support fraud investigators.
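A from-scratch logistic regression sketch of the approach, on fabricated toy transaction features; the features, data, and hyperparameters are illustrative, not the paper's.

```python
import math

# Minimal logistic regression trained by gradient descent on toy transaction
# features (normalised amount, normalised hour of day); label 1 = fraud.
# Everything below is fabricated for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify as fraud when the predicted probability reaches 0.5."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Toy data: two legitimate and two fraudulent transactions.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
w, b = train(X, y)
```

In practice the model would be trained on engineered transaction features and evaluated with class-imbalance-aware metrics, since fraud cases are rare.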


2020 ◽  
Vol 15 (1) ◽  
Author(s):  
Julie Chih-yu Chen ◽  
Andrea D. Tyler

Abstract Background The advent of metagenomic sequencing provides microbial abundance patterns that can be leveraged for sample origin prediction. Supervised machine learning classification approaches have been reported to predict sample origin accurately when the origin has been previously sampled. Using metagenomic datasets provided by the 2019 CAMDA challenge, we evaluated the influence of different technical, analytical and machine learning approaches on result interpretation and novel source prediction. Results Comparison between 16S rRNA amplicon and shotgun sequencing approaches, as well as between metagenomic analytical tools, showed differences in normalized microbial abundance, especially for organisms present at low abundance. Shotgun sequence data analyzed using Kraken2 and Bracken for taxonomic annotation had higher detection sensitivity. As classification models are limited to labeling pre-trained origins, we took an alternative approach, using Lasso-regularized multivariate regression to predict geographic coordinates for comparison. In both models, the prediction errors were much higher in leave-one-city-out than in 10-fold cross-validation, with the former realistically reflecting the increased difficulty of accurately predicting samples from new origins. This challenge was further confirmed when applying the models to a set of samples obtained from new origins. Overall, the prediction performance of the regression and classification models, as measured by mean squared error, was comparable on mystery samples. Because of the higher prediction error rates for samples from new origins, we provide an additional strategy, based on prediction ambiguity, to infer whether a sample is from a new origin. Lastly, we report increased prediction error when data from different sequencing protocols were included as training data.
Conclusions Herein, we highlight the capacity to predict sample origin accurately from pre-trained origins and the challenge of predicting new origins through both regression and classification models. Overall, this work summarizes the impact of sequencing technique, protocol, taxonomic analytical approach, and machine learning approach on the use of metagenomics for prediction of sample origin.
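The Lasso-regularized regression of geographic coordinates on microbial abundances can be sketched with a generic proximal-gradient (ISTA) solver; the toy abundances and the solver choice are assumptions for illustration, not the paper's data or implementation.

```python
# Sketch of Lasso-regularised linear regression predicting one geographic
# coordinate (e.g. latitude) from microbial abundances, solved with ISTA
# (proximal gradient descent). Data are fabricated for illustration.

def soft_threshold(v, t):
    """Proximal operator of the L1 penalty: shrink toward zero."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def lasso_ista(X, y, lam=0.1, lr=0.01, iters=5000):
    """Minimise (1/2n)||Xw - y||^2 + lam * ||w||_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        # Gradient step on the smooth least-squares part...
        resid = [sum(w[j] * X[i][j] for j in range(d)) - y[i] for i in range(n)]
        grad = [sum(resid[i] * X[i][j] for i in range(n)) / n for j in range(d)]
        # ...followed by the L1 proximal (soft-threshold) step.
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(d)]
    return w

# Toy abundances (3 taxa); latitude depends only on the first taxon.
X = [[1.0, 0.2, 0.1], [2.0, 0.1, 0.3], [3.0, 0.3, 0.2], [4.0, 0.2, 0.1]]
y = [10.0, 20.0, 30.0, 40.0]
w = lasso_ista(X, y)
```

The L1 penalty zeroes out uninformative taxa, which is why a Lasso model over thousands of microbial features remains interpretable.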


Energies ◽  
2020 ◽  
Vol 13 (23) ◽  
pp. 6308
Author(s):  
Carlos Ruiz ◽  
Carlos M. Alaíz ◽  
José R. Dorronsoro

Given the impact of renewable sources on overall energy production, accurate predictions are becoming essential, and machine learning has become a very important tool in this context. In many situations, the prediction problem can be divided into several tasks, more or less related to one another but each with its own particularities. Multitask learning (MTL) aims to exploit this structure, training several models at the same time to improve on the results achievable either by a common model or by task-specific models. In this paper, we show how an MTL approach based on support vector regression can be applied to the prediction of photovoltaic and wind energy, problems where tasks can be defined according to different criteria. As shown experimentally on three different datasets, the MTL approach clearly outperforms both the common and the task-specific models for photovoltaic energy, and is at the very least quite competitive for wind energy.
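One simple way to realise the shared-plus-specific structure MTL exploits (not the paper's MTL-SVR formulation) is to write each task's weight as w_t = w + v_t and regularise the deviations v_t toward zero; the toy one-feature "energy" tasks below are fabricated for illustration.

```python
# Toy multitask least squares: a shared weight w plus per-task deviations
# v_t, with the deviations regularised toward zero so related tasks share
# information. One feature per task; data are fabricated.

def mtl_fit(tasks, lam=0.5, lr=0.05, iters=3000):
    """Gradient descent on sum_t MSE_t + (lam/2) * sum_t v_t^2."""
    w = 0.0
    v = [0.0] * len(tasks)
    for _ in range(iters):
        gw = 0.0
        for t, (xs, ys) in enumerate(tasks):
            n = len(xs)
            # Gradient of task t's mean squared error w.r.t. its weight.
            g = sum(((w + v[t]) * x - y) * x for x, y in zip(xs, ys)) / n
            gw += g                      # shared weight sees every task
            v[t] -= lr * (g + lam * v[t])  # deviation pulled toward zero
        w -= lr * gw
    return w, v

# Two related tasks with true slopes 2.0 and 2.4 (e.g. two wind farms).
tasks = [([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]),
         ([1.0, 2.0, 3.0], [2.4, 4.8, 7.2])]
w, v = mtl_fit(tasks)
```

The shared weight settles between the two tasks while the small regularised deviations capture each task's particularities, which is the trade-off between a single common model and fully task-specific ones.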

