Noise Mitigation
Recently Published Documents

Total documents: 633 (five years: 194)
H-index: 27 (five years: 6)

2022 · Vol. 17(01) · pp. P01018
Author(s): R. Acciarri, B. Baller, V. Basque, C. Bromberg, F. Cavanna, ...

Abstract: The liquid argon time projection chamber (LArTPC) detector technology has an excellent capability to measure properties of low-energy neutrinos produced by the sun and supernovae and to look for exotic physics at very low energies. In order to achieve those physics goals, it is crucial to identify and reconstruct signals in the waveforms recorded on each TPC wire. In this paper, we report on a novel algorithm based on a one-dimensional convolutional neural network (CNN) that finds regions of interest (ROIs) in raw waveforms. We test this algorithm on data from the ArgoNeuT experiment, in conjunction with an improved noise mitigation procedure and a more realistic, data-driven noise model for simulated events. This deep-learning ROI finder shows promising performance in extracting small signals and achieves roughly twice the efficiency of the traditional algorithm in the low-energy region of ∼0.03–0.1 MeV. The method offers great potential for exploring low-energy physics with LArTPCs.
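A minimal sketch of the core idea, per-tick ROI scoring of a raw waveform with a one-dimensional CNN, is given below. The layer count, channel widths and kernel sizes are illustrative assumptions, not the architecture used in the ArgoNeuT analysis; PyTorch is used for the example.

```python
import torch
import torch.nn as nn

# Sketch of a 1D-CNN region-of-interest (ROI) finder for raw TPC waveforms.
# All layer sizes and kernel widths are illustrative assumptions.
class ROIFinder1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # local feature extraction
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),               # per-tick ROI score
            nn.Sigmoid(),                                  # probability that a tick lies inside an ROI
        )

    def forward(self, waveform):
        # waveform: (batch, 1, n_ticks) of noise-filtered ADC values
        return self.net(waveform)

# Example: score a batch of 4 waveforms with 6400 ticks each.
model = ROIFinder1D()
scores = model(torch.randn(4, 1, 6400))   # (4, 1, 6400) per-tick probabilities
```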


2021 · Vol. 13(1) · pp. 2
Author(s): Jirat Bhanpato, Tejas G. Puranik, Dimitri N. Mavris

The mitigation of aviation environmental effects is one of the key requirements for sustainable aviation growth. Among various mitigation strategies, Noise Abatement Departure Procedures (NADPs) are a popular and effective measure adopted by several operators. However, a large variation in departure procedures is observed in real operations. This study demonstrates the use of OpenSky ADS-B departure data to compare and quantify the differences in trajectories, and in the resulting community noise impact, between real-world operations and NADPs. Trajectories are compared to gain insight into the similarity between NADPs and real-world procedures. Clustering algorithms are employed to identify representative departure procedures, enabling efficient high-fidelity noise modeling. Finally, noise results are compared to quantify the difference in environmental impact arising from variability in real-world trajectories. The developed methodology enables more efficient and accurate environmental analyses, laying the foundation for future impact assessment and mitigation efforts.
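As a rough illustration of the clustering step (not the paper's exact pipeline), the sketch below groups synthetic departure altitude profiles, resampled to a fixed number of points, with k-means; the feature choice, number of clusters and synthetic data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed representation: each departure trajectory resampled to a fixed
# number of points, with altitude as the feature at each point.
rng = np.random.default_rng(0)
n_flights, n_points = 200, 50
altitude_profiles = np.cumsum(rng.uniform(50, 150, size=(n_flights, n_points)), axis=1)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(altitude_profiles)

# Cluster centroids serve as representative departure profiles; only these
# need to be run through the high-fidelity noise model.
representative_profiles = kmeans.cluster_centers_   # (4, n_points)
labels = kmeans.labels_                             # cluster id per flight
```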


Noise Mapping · 2021 · Vol. 9(1) · pp. 10-22
Author(s): Francesco D’Alessandro, Paola Di Mascio, Lorenzo Lombardi, Benedetta Ridolfi

Abstract: The aim of the paper is to define a method, based on multi-criteria analysis, for evaluating infrastructural interventions that mitigate road traffic noise. The method considers a set of parameters (environmental, social, economic and health) intended to give broader evaluations than economic convenience alone. The research develops a guideline based on a methodology already known and applied in other fields, multi-criteria analysis, adapted here to this topic. The decision to use this method originates from an in-depth study of the state of the art on noise pollution from transport infrastructures in Italy and at the European level; multi-criteria analysis proved to be the best solution in terms of both completeness and versatility. In particular, the developed methodology uses the Analytic Hierarchy Process (AHP) as the multi-criteria analysis method. Through its hierarchical structure, AHP offers a comparison not only between possible interventions but also between the criteria considered when choosing the best intervention. The model was validated by analyzing a real noise mitigation project on an Italian main road. The results show that the model can provide valid support to decision-making processes.
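For context, the sketch below shows the standard Analytic Hierarchy Process step of deriving criterion weights from a pairwise comparison matrix via its principal eigenvector, with Saaty's consistency check; the 4×4 matrix values are illustrative assumptions, not those used in the paper.

```python
import numpy as np

# Illustrative pairwise comparison matrix for four criteria
# (environmental, social, economic, health); values are assumptions.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized criterion weights

# Consistency ratio: values below ~0.1 indicate acceptable judgments.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90                            # 0.90 = random index for n = 4
print(weights, cr)
```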


Author(s): Bassant Selim, Md Sahabul Alam, Georges Kaddoum, Basile L. Agba

2021 · Vol. 11(22) · pp. 11040
Author(s): Quoc Nguyen, Tomoaki Shikina, Daichi Teruya, Seiji Hotta, Huy-Dung Han, ...

In training-based machine learning applications, the training data are frequently labeled by non-experts and exhibit substantial label noise, which greatly degrades the trained models. In this work, a novel method for reducing the effect of label noise is introduced. Rules created from expert knowledge identify incorrect non-expert training data. Using the gradient descent algorithm, the violating data samples are weighted less to mitigate their effect during model training. The proposed method is applied to image classification using the Manga109 and CIFAR-10 datasets. The experiments show that, at noise levels of up to 50%, the proposed method significantly increases the accuracy of the model compared to conventional learning methods.
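The weighting idea can be sketched as a per-sample weighted cross-entropy loss in which rule-violating samples contribute less to the gradient; the rule mask, weight value and training-step structure below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def weighted_training_step(model, optimizer, images, labels, violates_rule,
                           down_weight=0.2):
    """One gradient-descent step with rule-based down-weighting (sketch).

    violates_rule: boolean tensor marking samples flagged by the expert rules
    (hypothetical input; how the rules are evaluated is not shown here).
    """
    logits = model(images)
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    weights = torch.where(violates_rule,
                          torch.full_like(per_sample_loss, down_weight),
                          torch.ones_like(per_sample_loss))
    loss = (weights * per_sample_loss).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```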


Author(s): Neeraj Kumar Rajak, Neha Kondedan, Husna Jan, Muhammed Dilshah U, Navya S. D., ...

Abstract: We present high-resolution thermal expansion measurement data obtained with a high relative sensitivity of ΔL/L = 10⁻⁹ and an accuracy of ±2% using closed-cycle refrigerators and two different dilatometers. Experimental details of the set-up, in which a multi-function probe is integrated with the cold head of two kinds of closed-cycle refrigerators, namely pulse tube and Gifford-McMahon cryocoolers, are described. The design decouples the bottom sample puck and takes connections from the top of the multi-function probe to mitigate the vibrational noise arising from the cold heads, which allows smooth, high-quality thermal expansion data to be obtained. Dilatometer #2 was found to mitigate noise better than dilatometer #1 because of the constrained movement of its spring. This was confirmed by finite element method simulations of the spring movement in each dilatometer, which were used to study the effect of different forces/pressures and vibrations on the displacement of the spring. The linear thermal expansion coefficient α obtained with both dilatometers was evaluated from the derivative of a polynomial fit. The resulting α obtained with dilatometer #2 and either of the closed-cycle cryostats on the standard metals silver and aluminium shows excellent agreement with published values obtained using wet cryostats. Finally, thermal expansion measurements along the c-axis are reported for single crystals of two high-temperature superconductors, YBa2Cu3-xAlxO6+δ and Bi2Sr2CaCu2O8+x, with very good agreement with published data obtained earlier using wet, liquid-helium-based cryostats.
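The evaluation of α from the derivative of a polynomial fit can be sketched as follows; the synthetic ΔL/L data and the polynomial order are assumptions for illustration only.

```python
import numpy as np

# Sketch: fit the measured relative length change dL/L against temperature,
# then differentiate the fit to obtain alpha(T) = d(dL/L)/dT.
T = np.linspace(4.0, 300.0, 300)                 # temperature (K)
dL_over_L = 1e-6 * (0.02 * T + 5e-5 * T**2)      # synthetic dL/L data (illustrative)

p = np.polyfit(T, dL_over_L, deg=6)              # polynomial fit (order assumed)
alpha = np.polyval(np.polyder(p), T)             # linear thermal expansion coefficient (1/K)
```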


2021
Author(s): Juan Rada-Vilela

Particle Swarm Optimization (PSO) is a metaheuristic in which a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions deteriorates significantly. The literature has attributed this deterioration to particles suffering from inaccurate memories and from incorrect selection of their neighborhood best solutions. For both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often lack empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is not even evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored. The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results in order to develop more efficient noise mitigation mechanisms.

Based on how noise mitigation mechanisms allocate function evaluations, we distinguish three groups of PSO algorithms: single-evaluation algorithms, which sacrifice the accuracy of the objective values in order to perform more iterations; resampling-based algorithms, which sacrifice iterations in order to better estimate the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms.

Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed, highlighting the importance of noise mitigation mechanisms. The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception. The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness: the first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group, as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation.

The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. Overall, our research presents a thorough study to design, evaluate and tune PSO algorithms that address optimization problems subject to noise.
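As a concrete illustration of the resampling-based group (not one of the thesis's specific algorithms), the sketch below runs a basic global-best PSO on a sphere function with multiplicative Gaussian noise, averaging several re-evaluations of each position before updating the best solutions; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_sphere(x):
    # Sphere objective corrupted by multiplicative Gaussian noise.
    return np.sum(x**2) * (1.0 + 0.3 * rng.standard_normal())

def estimate(x, resamples=5):
    # Resampling: trade function evaluations for a better objective estimate.
    return np.mean([noisy_sphere(x) for _ in range(resamples)])

dim, swarm_size, iters = 10, 20, 100
w, c1, c2 = 0.72, 1.49, 1.49                       # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (swarm_size, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([estimate(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((swarm_size, dim)), rng.random((swarm_size, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([estimate(p) for p in pos])    # resampled objective estimates
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```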

