Computation of Engine Noise Propagation and Scattering off an Aircraft

2002 ◽  
Vol 1 (4) ◽  
pp. 403-420 ◽  
Author(s):  
D. Stanescu ◽  
J. Xu ◽  
M.Y. Hussaini ◽  
F. Farassat

The purpose of this paper is to demonstrate the feasibility of computing the fan inlet noise field around a real twin-engine aircraft, which includes the radiation of the main spinning modes from the engine as well as the reflection and scattering by the fuselage and the wing. This first-cut large-scale computation is based on time-domain and frequency-domain approaches that employ spectral element methods for spatial discretization. The numerical algorithms are designed to exploit high-performance computers such as the IBM SP4. Although the simulations could not match the exact conditions of the only available experimental data set, they are able to predict the trends of the measured noise field fairly well.
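To make the spatial discretization concrete, here is a minimal, hypothetical sketch of the Legendre-Gauss-Lobatto nodes and quadrature weights that spectral element methods typically build on; it is a Python illustration only, not the authors' solver.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes_weights(p):
    """Legendre-Gauss-Lobatto nodes and weights on [-1, 1] for polynomial
    order p (p + 1 nodes per element edge). Illustrative only; real spectral
    element codes add differentiation matrices and element mappings on top."""
    coeffs = np.zeros(p + 1)
    coeffs[p] = 1.0                                   # coefficients of P_p
    interior = legendre.legroots(legendre.legder(coeffs))
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (p * (p + 1) * legendre.legval(x, coeffs) ** 2)
    return x, w

# Example: order-4 elements use 5 nodes per edge.
nodes, weights = lgl_nodes_weights(4)
```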

2020 ◽  
Vol 643 ◽  
pp. A42 ◽  
Author(s):  
Y. Akrami ◽  
K. J. Andersen ◽  
M. Ashdown ◽  
C. Baccigalupi ◽  
...  

We present the NPIPE processing pipeline, which produces calibrated frequency maps in temperature and polarization from data from the Planck Low Frequency Instrument (LFI) and High Frequency Instrument (HFI) using high-performance computers. NPIPE represents a natural evolution of previous Planck analysis efforts, and combines some of the most powerful features of the separate LFI and HFI analysis pipelines. For example, following the LFI 2018 processing procedure, NPIPE uses foreground polarization priors during the calibration stage in order to break scanning-induced degeneracies. Similarly, NPIPE employs the HFI 2018 time-domain processing methodology to correct for bandpass mismatch at all frequencies. In addition, NPIPE introduces several improvements, including, but not limited to: inclusion of the 8% of data collected during repointing manoeuvres; smoothing of the LFI reference load data streams; in-flight estimation of detector polarization parameters; and construction of maximally independent detector-set split maps. For component-separation purposes, important improvements include: maps that retain the CMB Solar dipole, allowing for high-precision relative calibration in higher-level analyses; well-defined single-detector maps, allowing for robust CO extraction; and HFI temperature maps between 217 and 857 GHz that are binned into 0.9 arcmin pixels (Nside = 4096), ensuring that the full angular information in the data is represented in the maps even at the highest Planck resolutions. The net effect of these improvements is lower levels of noise and systematics in both frequency and component maps at essentially all angular scales, as well as notably improved internal consistency between the various frequency channels. Based on the NPIPE maps, we present the first estimate of the Solar dipole determined through component separation across all nine Planck frequencies. The amplitude is (3366.6 ± 2.7) μK, consistent with, albeit slightly higher than, earlier estimates. From the large-scale polarization data, we derive an updated estimate of the optical depth of reionization of τ = 0.051 ± 0.006, which appears robust with respect to data and sky cuts. The release includes 600 complete signal, noise, and systematics simulations of the full-frequency and detector-set maps. As a Planck first, these simulations include full time-domain processing of the beam-convolved CMB anisotropies. The release of NPIPE maps and simulations is accompanied by a complete suite of raw and processed time-ordered data and the software, scripts, auxiliary data, and parameter files needed to improve further on the analysis and to run matching simulations.
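As a rough illustration of what binning into Nside = 4096 pixels involves, the following hedged sketch averages time-ordered samples into a HEALPix map using the healpy package; it is a toy stand-in for map-making, not the NPIPE pipeline.

```python
import numpy as np
import healpy as hp

def bin_tod_to_map(theta, phi, tod, nside=4096):
    """Average time-ordered samples into HEALPix pixels (toy map-making).
    At nside = 4096 the pixel size is roughly 0.9 arcmin."""
    npix = hp.nside2npix(nside)            # 12 * nside**2 pixels
    pix = hp.ang2pix(nside, theta, phi)    # pointing -> pixel index
    signal = np.bincount(pix, weights=tod, minlength=npix)
    hits = np.bincount(pix, minlength=npix)
    m = np.full(npix, hp.UNSEEN)           # UNSEEN marks unobserved pixels
    seen = hits > 0
    m[seen] = signal[seen] / hits[seen]
    return m
```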


Author(s):  
Jack Dongarra ◽  
Laura Grigori ◽  
Nicholas J. Higham

A number of features of today’s high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
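One concrete instance of the multi-precision theme is mixed-precision iterative refinement: do the expensive solve in low precision, then correct the residual in high precision. The sketch below is a hypothetical NumPy illustration of that pattern, not code from the article.

```python
import numpy as np

def mixed_precision_refine(A, b, iters=5):
    """Solve Ax = b with the costly solve in float32 and the residual
    accumulated in float64 (toy iterative refinement). A production code
    would factor the float32 matrix once and reuse the factors."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32))  # correction in float32
        x += dx.astype(np.float64)
    return x
```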


1996 ◽  
Vol 07 (03) ◽  
pp. 295-303 ◽  
Author(s):  
P. D. CODDINGTON

Large-scale Monte Carlo simulations require high-quality random number generators to ensure correct results. The contrapositive of this statement is also true: the quality of random number generators can be tested by using them in large-scale Monte Carlo simulations. We have tested many commonly used random number generators with high-precision Monte Carlo simulations of the 2-d Ising model using the Metropolis, Swendsen-Wang, and Wolff algorithms. This work is being extended to the testing of random number generators for parallel computers. The results of these tests are presented, along with recommendations for random number generators for high-performance computers, particularly for lattice Monte Carlo simulations.
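The kind of test kernel described can be sketched in a few lines: a Metropolis sweep of the 2-d Ising model whose acceptance decisions are driven by the generator under test, so generator bias shows up as deviations of measured observables from exact results. This is a hedged Python illustration, not the author's implementation.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2-d Ising lattice with periodic boundaries.
    The quality of `rng` directly affects the sampled observables."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

# Example: run at the critical coupling and compare the measured energy
# against the exact 2-d Ising result to expose generator bias.
rng = np.random.default_rng(12345)               # generator under test
spins = rng.choice(np.array([-1, 1]), size=(32, 32))
for _ in range(200):
    metropolis_sweep(spins, beta=0.4406868, rng=rng)
```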


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. E47-E57 ◽  
Author(s):  
Douglas W. Oldenburg ◽  
Eldad Haber ◽  
Roman Shekhtman

We present a 3D inversion methodology for multisource time-domain electromagnetic data. The forward model consists of Maxwell’s equations in time, where the permeability is fixed but the electrical conductivity can be highly discontinuous. The goal of the inversion is to recover the conductivity, given measurements of the electric and/or magnetic fields. The availability of matrix-factorization software and high-performance computing has allowed us to solve the 3D time-domain EM problem using direct solvers. This is particularly advantageous when data from many transmitters and over many decades are available. We first formulate Maxwell’s equations in terms of the magnetic field, H. The problem is then discretized using a finite-volume technique in space and backward Euler in time. The forward operator is symmetric positive definite, and a Cholesky decomposition can be performed with the work distributed over an array of processors. The forward modeling is quickly carried out using the factored operator. Time savings are considerable, and they make 3D inversion of large ground or airborne data sets feasible. This is illustrated by using synthetic examples and by inverting a multisource UTEM field data set acquired at San Nicolás, a massive sulfide deposit in Mexico.
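The computational pattern described, backward Euler in time with a direct factorization reused across time steps and transmitters, can be sketched as follows. This is a hedged SciPy illustration (with a sparse LU standing in for the Cholesky factorization), not the authors' code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def backward_euler_march(A, M, h0, dt, nsteps):
    """March the field h with backward Euler: (M + dt*A) h^{n+1} = M h^n.
    The system matrix is factored once; every time step (and every
    transmitter sharing the same dt) reuses the stored factors."""
    lu = splu((M + dt * A).tocsc())      # factor once
    h = h0.copy()
    fields = [h.copy()]
    for _ in range(nsteps):
        h = lu.solve(M @ h)              # cheap solve with stored factors
        fields.append(h.copy())
    return fields

# Toy usage with stand-in operators (the real A would come from a
# finite-volume discretization of Maxwell's equations).
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
M = sp.identity(n)
fields = backward_euler_march(A, M, np.ones(n), dt=1e-4, nsteps=10)
```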


2020 ◽  
Author(s):  
Markus Wiedemann ◽  
Bernhard S.A. Schuberth ◽  
Lorenzo Colli ◽  
Hans-Peter Bunge ◽  
Dieter Kranzlmüller

Precise knowledge of the forces acting at the base of tectonic plates is of fundamental importance, but models of mantle dynamics are still often only qualitative in nature. One particular problem is that we cannot access the deep interior of our planet and can therefore not make direct in situ measurements of the relevant physical parameters. Fortunately, modern software and powerful high-performance computing infrastructures allow us to generate complex three-dimensional models of the time evolution of mantle flow through large-scale numerical simulations.

In this project, we aim to visualize the resulting convective patterns that occur thousands of kilometres below our feet and to make them "accessible" using high-end virtual reality techniques.

Models with several hundred million grid cells are nowadays possible using modern supercomputing facilities, such as those available at the Leibniz Supercomputing Centre. These models provide quantitative estimates of the inaccessible parameters, such as buoyancy and temperature, as well as predictions of the associated gravity field and seismic wavefield that can be tested against Earth observations.

3-D visualizations of the computed physical parameters allow us to inspect the models as if one were actually travelling down into the Earth. This way, convective processes that occur thousands of kilometres below our feet become virtually accessible by combining the simulations with high-end VR techniques.

The large data set used here poses severe challenges for real-time visualization, because it cannot fit into graphics memory while requiring rendering with strict deadlines. This raises the need to balance the amount of displayed data against the time needed to render it.

As a solution, we introduce a rendering framework and describe our workflow for visualizing this geoscientific dataset. Our example exceeds 16 TByte in size, which is beyond the capabilities of most visualization tools. To display this dataset in real time, we reduce and declutter it through isosurfacing and mesh optimization techniques.

Our rendering framework relies on multithreading and data-decoupling mechanisms that allow us to upload data to graphics memory while maintaining high frame rates. The final visualization application can be executed in a CAVE installation as well as on head-mounted displays such as the HTC Vive or Oculus Rift. The latter devices will allow for viewing our example on-site at the EGU conference.
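The reduce-then-render idea (extract an isosurface from the volume and send only the resulting mesh to the GPU) can be illustrated with a short, hedged sketch using scikit-image; it is an illustration of the principle, not the rendering framework itself.

```python
import numpy as np
from skimage import measure

def temperature_isosurface(field, level):
    """Extract an isosurface (vertices, faces) from a 3-D scalar field.
    Rendering this mesh instead of the full volume is one way to keep a
    multi-TByte dataset within graphics-memory and frame-rate budgets."""
    verts, faces, normals, values = measure.marching_cubes(field, level=level)
    return verts, faces

# Toy usage on a small synthetic block (the real model has hundreds of
# millions of grid cells).
field = np.random.default_rng(1).random((64, 64, 64))
verts, faces = temperature_isosurface(field, level=0.5)
```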


2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Hanjing Jiang ◽  
Yabing Huang

Abstract Background Drug-disease associations (DDAs) can provide important information for exploring the potential efficacy of drugs. However, up to now, few DDAs have been verified by experiments. Previous evidence indicates that combining multiple sources of information is conducive to the discovery of new DDAs. How to integrate different biological data sources and identify the most effective drugs for a certain disease based on drug-disease coupled mechanisms is still a challenging problem. Results In this paper, we propose a novel computational model for DDA prediction based on graph representation learning over a multi-biomolecular network (GRLMN). More specifically, we first constructed a large-scale molecular association network (MAN) by integrating the associations among drugs, diseases, proteins, miRNAs, and lncRNAs. Then, a graph embedding model was used to learn vector representations for all drugs and diseases in the MAN. Finally, the combined features were fed to a random forest (RF) model to predict new DDAs. The proposed model was evaluated on the SCMFDD-S data set using five-fold cross-validation. Experimental results showed that the GRLMN model was highly accurate, with an area under the ROC curve (AUC) of 87.9%, outperforming all previous works on this benchmark in terms of both accuracy and AUC. To further verify the performance of GRLMN, we carried out two case studies for two common diseases. In the ranking of drugs predicted to be related to certain diseases (such as kidney disease and fever), 15 of the top 20 drugs have been experimentally confirmed. Conclusions The experimental results show that our model performs well in the prediction of DDAs. GRLMN is an effective prioritization tool for screening reliable DDAs for follow-up studies concerning their participation in drug repositioning.
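The final classification stage can be sketched as follows: concatenate the learned drug and disease embedding vectors for each candidate pair and score the pair with a random forest under five-fold cross-validation. The arrays below are hypothetical placeholders, not the GRLMN data or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: one embedding vector per drug and per disease,
# plus labelled candidate (drug, disease) pairs (1 = known association).
drug_emb = rng.random((200, 64))
disease_emb = rng.random((100, 64))
pairs = rng.integers(0, [200, 100], size=(1000, 2))
labels = rng.integers(0, 2, size=1000)

# Pair feature = concatenation of the drug and disease embeddings.
X = np.hstack([drug_emb[pairs[:, 0]], disease_emb[pairs[:, 1]]])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
print("mean five-fold AUC:", auc.mean())
```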


2020 ◽  
Author(s):  
Charlotte Coosje Tanis ◽  
Nina Leach ◽  
Sandra Jeanette Geiger ◽  
Floor H Nauta ◽  
Fabian Dablander ◽  
...  

In the absence of a vaccine, social distancing behaviour is pivotal to mitigating the spread of COVID-19. In this large-scale behavioural experiment, we gathered data during Smart Distance Lab: The Art Fair (n = 787) between August 28 and 30, 2020 in Amsterdam, the Netherlands. We varied walking directions (bidirectional, unidirectional, and no directions) and supplementary interventions (face masks and a buzzer alerting visitors within 1.5 metres of each other). We captured visitors' movements using cameras, registered their contacts (defined as within 1.5 metres) using wearable sensors, and assessed their attitudes toward COVID-19 as well as their experience during the event using questionnaires. We also registered environmental measures (e.g., humidity). In this paper, we describe this unprecedented, multi-modal experimental data set on social distancing, including psychological, behavioural, and environmental measures. The data set is available on Figshare and in a MySQL database. It can be used to gain insight into (attitudes toward) behavioural interventions promoting social distancing, to calibrate pedestrian models, and to inform new studies on behavioural interventions.
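As an example of how the movement data can be turned into contact counts for calibrating pedestrian models, here is a hedged sketch that flags visitor pairs within 1.5 metres in a single frame; the coordinate format is an assumption, not the Figshare schema.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def count_contacts(positions, threshold=1.5):
    """Count visitor pairs closer than `threshold` metres in one frame.
    `positions` is an (n_visitors, 2) array of x, y coordinates in metres."""
    d = squareform(pdist(positions))        # pairwise distance matrix
    close = (d > 0.0) & (d < threshold)     # exclude self-distances
    return int(close.sum() // 2)            # each pair counted once

# Toy usage with made-up coordinates: only the first two visitors
# are within 1.5 metres of each other.
frame = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 5.0]])
print(count_contacts(frame))                # -> 1
```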


2020 ◽  
Vol 32 (1) ◽  
pp. 182-204 ◽  
Author(s):  
Xiping Ju ◽  
Biao Fang ◽  
Rui Yan ◽  
Xiaoliang Xu ◽  
Huajin Tang

A spiking neural network (SNN) is a biologically plausible model that performs information processing based on spikes. Training a deep SNN effectively is challenging due to the nondifferentiability of spike signals. Recent advances have shown that high-performance SNNs can be obtained by converting convolutional neural networks (CNNs). However, large-scale SNNs are poorly served by conventional architectures due to the dynamic nature of spiking neurons. In this letter, we propose a hardware architecture to enable efficient implementation of SNNs. All layers in the network are mapped onto one chip so that the computation of different time steps can be done in parallel to reduce latency. We propose a new spiking max-pooling method to reduce computation complexity. In addition, we apply approaches based on shift registers and coarse-grained parallelism to accelerate the convolution operation. We also investigate the effect of different encoding methods on SNN accuracy. Finally, we validate the hardware architecture on the Xilinx Zynq ZCU102. The experimental results on the MNIST data set show that it can achieve an accuracy of 98.94% with eight-bit quantized weights. Furthermore, it achieves 164 frames per second (FPS) at a 150 MHz clock frequency and obtains a 41× speed-up compared to a CPU implementation and 22 times lower power than a GPU implementation.
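To make the encoding step concrete, the following hedged NumPy sketch shows a Poisson-style rate encoding of pixel intensities into spike trains and a toy max pooling on the accumulated spike counts; it illustrates common CNN-to-SNN conversion ingredients, not the paper's hardware design.

```python
import numpy as np

def rate_encode(image, timesteps, rng):
    """Rate encoding: pixel intensity in [0, 1] is the per-step firing
    probability, giving a (timesteps, H, W) binary spike train."""
    return (rng.random((timesteps,) + image.shape) < image).astype(np.uint8)

def max_pool_counts(counts, k=2):
    """Toy max pooling over k x k windows of accumulated spike counts."""
    h, w = counts.shape
    blocks = counts[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((28, 28))                  # MNIST-sized toy input
spikes = rate_encode(image, timesteps=100, rng=rng)
pooled = max_pool_counts(spikes.sum(axis=0))  # pool the spike counts
```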

