GPU-accelerated periodic source identification in large-scale surveys: measuring P and $\dot{P}$

Monthly Notices of the Royal Astronomical Society ◽  
2021 ◽  
Vol 503 (2) ◽  
pp. 2665-2675
Author(s):  
Michael L Katz ◽  
Olivia R Cooper ◽  
Michael W Coughlin ◽  
Kevin B Burdge ◽  
Katelyn Breivik ◽  
...  

Many inspiraling and merging stellar remnants emit both gravitational and electromagnetic radiation as they orbit or collide. These gravitational wave events, together with their associated electromagnetic counterparts, provide insight into the nature of the merger and allow us to further constrain properties of the binary. With the future launch of the Laser Interferometer Space Antenna (LISA), follow-up observations and models of ultracompact binary (UCB) systems will be needed. Current and upcoming long-baseline time-domain surveys will observe many of these UCBs. We present a new fast periodic object search tool, capable of searching for generic periodic signals, based on the conditional entropy algorithm. This new implementation allows for a grid search over both the period (P) and the time derivative of the period ($\dot{P}$). To demonstrate the usage of this tool, we use a small, hand-picked subset of a UCB population generated from the population synthesis code cosmic, as well as a custom catalogue with varying periods at fixed intrinsic parameters. We simulate light curves as they are likely to be observed by future time-domain surveys, using an existing eclipsing-binary light-curve model that accounts for the change in orbital period due to gravitational radiation. We find that a search over $\dot{P}$ values is necessary for detecting binaries at orbital periods less than ∼10 min. We also show it is useful in finding and characterizing binaries with longer periods, but at a higher computational cost. Our code is called gce (GPU-accelerated Conditional Entropy) and is available on GitHub (https://github.com/mikekatz04/gce).
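To make the search concrete, here is a minimal CPU-only sketch of a conditional-entropy search over a (P, $\dot{P}$) grid in plain NumPy. This is not the gce CUDA implementation: the bin counts, the first-order phase-folding formula, and the grid-search helper are illustrative assumptions. Lower entropy indicates a more ordered phase-folded light curve, so the best candidate is the grid point minimizing H.

```python
import numpy as np

def conditional_entropy(t, mag, period, pdot=0.0, n_phase=20, n_mag=10):
    """Conditional entropy H(m | phi) of a light curve folded at a trial
    (period, pdot); a correct fold concentrates the histogram and lowers H."""
    # First-order phase model for a linearly drifting period (assumption).
    phase = (t / period - 0.5 * pdot * (t / period) ** 2) % 1.0
    m = (mag - mag.min()) / (np.ptp(mag) + 1e-12)   # normalize magnitudes to [0, 1]
    hist, _, _ = np.histogram2d(phase, m, bins=(n_phase, n_mag),
                                range=[[0, 1], [0, 1]])
    p = hist / hist.sum()                            # joint distribution p(phi, m)
    p_phi = p.sum(axis=1, keepdims=True)             # marginal over magnitude bins
    nz = p > 0
    return float(np.sum(p[nz] * np.log((p_phi * np.ones_like(p))[nz] / p[nz])))

def grid_search(t, mag, periods, pdots):
    """Return the (P, Pdot) grid point minimizing the conditional entropy."""
    H = np.array([[conditional_entropy(t, mag, P, Pd) for Pd in pdots]
                  for P in periods])
    i, j = np.unravel_index(np.argmin(H), H.shape)
    return periods[i], pdots[j]
```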

Geophysics ◽  
2013 ◽  
Vol 78 (4) ◽  
pp. E161-E171 ◽  
Author(s):  
M. Zaslavsky ◽  
V. Druskin ◽  
A. Abubakar ◽  
T. Habashy ◽  
V. Simoncini

Transient controlled-source electromagnetic measurements are usually interpreted by extracting a few frequencies and solving the corresponding inverse frequency-domain problem. Coarse frequency sampling may result in loss of information and affect the quality of the interpretation; refined sampling, however, increases the computational cost. Fitting data directly in the time domain has a similar drawback, namely its large computational cost, in particular when the Gauss-Newton (GN) algorithm is used for the misfit minimization. That cost consists mainly of the multiple solutions of the forward problem and of linear algebraic operations with the Jacobian matrix for calculating the GN step. For large-scale 2.5D and 3D problems with multiple sources and receivers, the corresponding cost grows enormously for inversion algorithms using conventional finite-difference time-domain (FDTD) algorithms. A fast 3D forward solver based on the rational Krylov subspace (RKS) reduction algorithm with an optimal subspace selection was proposed earlier to partially mitigate this problem. We applied the same approach to reduce the size of the time-domain Jacobian matrix. The reduced-order model (ROM) is obtained by projecting a discretized large-scale Maxwell system onto an RKS with optimized poles. The RKS expansion replaces the time discretization for the forward and inverse problems; however, for the same or better accuracy, its subspace dimension is much smaller than the number of time steps of conventional FDTD. The crucial new development of this work is the space-time data compression of the ROM forward operator and the decomposition of the ROM's time-domain Jacobian matrix, via the chain rule, into a product of time- and space-dependent terms, thus effectively decoupling the discretizations in the time and parameter spaces. The developed technique can equally be applied to finely sampled frequency-domain data. We tested our approach on synthetic 2.5D examples of hydrocarbon reservoirs in the marine environment.
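The core projection step can be illustrated in a few lines. The sketch below builds a rational Krylov basis for a toy dense system and uses it to approximate the transient response without time stepping; the dense solves stand in for the paper's sparse Maxwell solves, and the pole values are assumed rather than optimized.

```python
import numpy as np
from scipy.linalg import expm, qr

def rational_krylov_rom(A, b, poles):
    """Project the large system x'(t) = -A x, x(0) = b onto the rational
    Krylov subspace spanned by (A - s_k I)^{-1} b for the given poles."""
    n = A.shape[0]
    V = np.column_stack([np.linalg.solve(A - s * np.eye(n), b) for s in poles])
    V, _ = qr(V, mode='economic')    # orthonormal basis of the subspace
    return V, V.T @ A @ V, V.T @ b   # basis, reduced operator, reduced source

def rom_solution(V, A_rom, b_rom, t):
    """Approximate x(t) = exp(-t A) b from the small reduced model;
    the matrix exponential acts only on the reduced operator."""
    return V @ (expm(-t * A_rom) @ b_rom)
```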


Author(s):  
Chad H. Custer ◽  
Jonathan M. Weiss ◽  
Venkataramanan Subramanian ◽  
William S. Clark ◽  
Kenneth C. Hall

The harmonic balance method implemented within STAR-CCM+ is a mixed frequency/time-domain computational fluid dynamics technique that enables the efficient calculation of time-periodic flows. The unsteady solution is stored at a small number of fixed time levels over one temporal period of the unsteady flow in a single blade passage in each blade row; the solution is thus periodic by construction. The individual time levels are coupled to one another through a spectral operator representing the time-derivative term in the Navier-Stokes equations, and at the boundaries of the computational domain through the application of periodic and nonreflecting boundary conditions. The blade rows are connected to one another via a small number of fluid-dynamic spinning modes characterized by nodal diameter and frequency. This periodic solution is driven to convergence using conventional (steady) CFD acceleration techniques, and it is therefore computationally efficient. Upon convergence, the time-level solutions are Fourier transformed to obtain spatially varying Fourier coefficients of the flow variables. We find that a small number of time levels (or, equivalently, Fourier coefficients) is adequate to model even strongly nonlinear flows. Consequently, the method provides an unsteady solution at a computational cost significantly lower than that of traditional unsteady time-marching methods. The implementation of this nonlinear harmonic balance method within STAR-CCM+ allows for the simulation of multiple blade rows. This capability is demonstrated and validated using a 1.5-stage cold-flow axial turbine developed by the University of Aachen. Results produced using the harmonic balance method are compared with conventional time-domain simulations using STAR-CCM+ and with published experimental data. It is shown that the harmonic balance method accurately models the unsteady flow structures at a computational cost significantly lower than that of unsteady time-domain simulation.
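The spectral operator coupling the time levels is just Fourier differentiation. Below is a minimal NumPy sketch of that operator, unrelated to the STAR-CCM+ internals; the choice of seven time levels and a unit period is an assumption made for the check at the end.

```python
import numpy as np

def spectral_time_derivative(N, T):
    """Harmonic-balance time-derivative operator for N time levels spanning
    one period T: multiply each harmonic k by i*k*omega in Fourier space."""
    omega = 2.0 * np.pi / T
    k = np.fft.fftfreq(N, d=1.0 / N)       # integer harmonics ..., -1, 0, 1, ...
    F = np.fft.fft(np.eye(N), axis=0)      # DFT matrix (columns are FFTs of e_j)
    Finv = np.fft.ifft(np.eye(N), axis=0)  # inverse DFT matrix
    return (Finv @ np.diag(1j * k * omega) @ F).real  # real for odd N

# du/dt at the stored time levels is then just D @ u, coupling all levels.
N, T = 7, 1.0
D = spectral_time_derivative(N, T)
t = np.linspace(0, T, N, endpoint=False)
u = np.sin(2 * np.pi * t)
assert np.allclose(D @ u, 2 * np.pi * np.cos(2 * np.pi * t), atol=1e-8)
```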


2004 ◽  
Author(s):  
Eric Michielssen ◽  
Weng C. Chew ◽  
Jianming Jin ◽  
Balasubramaniam Shanker

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daiji Ichishima ◽  
Yuya Matsumura

Large-scale computation by the molecular dynamics (MD) method is often challenging or even impractical due to its computational cost, in spite of its wide application in a variety of fields. Although recent advances in parallel computing and the introduction of coarse-graining methods have enabled large-scale calculations, macroscopic analyses are still not realizable. Here, we present renormalized molecular dynamics (RMD), a renormalization group of MD in thermal equilibrium derived by using the Migdal-Kadanoff approximation. The RMD method improves the computational efficiency drastically while retaining the advantages of MD. The computational efficiency is improved by a factor of $2^{n(D+1)}$ over conventional MD, where D is the spatial dimension and n is the number of applied renormalization transforms. We verify RMD by conducting two simulations: the melting of an aluminum slab and the collision of aluminum spheres. Both problems show that the expectation values of physical quantities are in good agreement after the renormalization, while the computation time is reduced as expected. To observe the behavior of RMD near the critical point, the critical exponent of the Lennard-Jones potential is extracted by calculating the specific heat on the mesoscale. The critical exponent is obtained as $\nu = 0.63 \pm 0.01$. In addition, the renormalization group of dissipative particle dynamics (DPD) is derived. Renormalized DPD is equivalent to RMD in isothermal systems under the condition that the Deborah number $De \ll 1$.
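The quoted $2^{n(D+1)}$ scaling is easy to tabulate; the snippet below simply evaluates it for a 3D system over a few levels of renormalization.

```python
# Expected speedup of RMD over conventional MD after n renormalization
# transforms in D spatial dimensions, per the 2^{n(D+1)} factor above.
def rmd_speedup(n: int, D: int = 3) -> int:
    return 2 ** (n * (D + 1))

for n in range(1, 5):
    print(f"n = {n}: speedup = {rmd_speedup(n)}x")  # 16x, 256x, 4096x, 65536x
```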


2018 ◽  
Vol 140 (9) ◽  
Author(s):  
R. Maffulli ◽  
L. He ◽  
P. Stein ◽  
G. Marinescu

The emerging renewable energy market calls for more advanced prediction tools for turbine transient operation in fast startup/shutdown cycles. Reliable numerical analysis of such transient cycles is complicated by the disparity in the time scales of the thermal responses in the fluid and solid domains. Obtaining fully coupled, time-accurate unsteady conjugate heat transfer (CHT) results under these conditions would require marching in both domains with the time step dictated by the fluid domain, typically several orders of magnitude smaller than that required by the solid. This requirement has a strong impact on the computational cost of the simulation and is potentially detrimental to the accuracy of the solution due to the accumulation of round-off errors in the solid. A novel loosely coupled CHT methodology that removes this requirement through source-term-based modeling (STM) of the physical time-derivative terms in the relevant equations has recently been proposed and successfully applied to both natural and forced convection cases. The method has been shown to be numerically stable for very large time steps with adequate accuracy. The present effort aims to further exploit the potential of the methodology through a new adaptive time-stepping approach. The proposed method allows for automatic time-step adjustment based on an estimate of the magnitude of the truncation error of the time discretization. The developed automatic time-stepping strategy is applied to natural convection cases under long (2000 s) transients relevant to the prediction of turbine thermal loads during fast startups/shutdowns. The results of the method are compared with fully coupled unsteady simulations, showing comparable accuracy with a significant reduction in computational cost.
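The general idea of truncation-error-driven step control can be sketched independently of the CHT solver. The toy controller below uses step doubling (one step of dt against two of dt/2) to estimate the local error and resize dt; the tolerance, growth limits, and first-order model problem are all assumptions, not the authors' STM scheme.

```python
import numpy as np

def adaptive_march(step, u0, t_end, dt0, tol, order=1):
    """Generic adaptive time stepping by step doubling: compare one step of
    size dt with two steps of dt/2 to estimate the local truncation error,
    then grow or shrink dt to keep that estimate near `tol`."""
    u, t, dt = u0, 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = step(u, dt)
        fine = step(step(u, dt / 2), dt / 2)
        err = np.linalg.norm(fine - coarse) / (2**order - 1)  # Richardson estimate
        if err > tol:                 # reject the step and retry with smaller dt
            dt *= 0.5
            continue
        u, t = fine, t + dt           # accept the more accurate fine solution
        dt *= min(2.0, 0.9 * (tol / max(err, 1e-30)) ** (1.0 / (order + 1)))
    return u

# Example: du/dt = -u with a backward-Euler-like update (first order).
u_final = adaptive_march(lambda u, dt: u / (1 + dt), np.array([1.0]), 5.0, 0.1, 1e-4)
```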


Author(s):  
Mahdi Esmaily Moghadam ◽  
Yuri Bazilevs ◽  
Tain-Yen Hsia ◽  
Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool that can be used to model the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges for the linear algebraic equation solver (LS), producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper we exploit this strong coupling to obtain a novel and efficient LS algorithm. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.
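To illustrate the 0D side of such a coupling, here is a minimal open-loop sketch of a three-element Windkessel (RCR) outlet model, a common LPN building block: given the flow leaving a 3D outlet, it returns the pressure fed back to that outlet. The implicit-Euler update and all parameter values are illustrative assumptions; the paper's closed-loop network and modular coupling strategy are not reproduced here.

```python
import numpy as np

def rcr_outlet_pressure(Q, dt, Rp, C, Rd, Pd=0.0, pc0=0.0):
    """Three-element Windkessel (RCR) boundary: integrate the 0D ODE
    C * dpc/dt = Q - (pc - Pd) / Rd with implicit Euler and return the
    pressure P = Rp * Q + pc fed back to the 3D outlet at each step."""
    pc, P = pc0, []
    for q in Q:
        pc = (pc + dt * (q + Pd / Rd) / C) / (1.0 + dt / (Rd * C))  # implicit Euler
        P.append(Rp * q + pc)
    return np.array(P)

# A pulsatile flow waveform driving the 0D model (toy numbers, CGS-like units).
t = np.linspace(0, 1.0, 1000)
Q = 50.0 * np.maximum(np.sin(2 * np.pi * t), 0.0)
P = rcr_outlet_pressure(Q, dt=t[1] - t[0], Rp=100.0, C=1e-4, Rd=1000.0)
```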


2006 ◽  
Vol 18 (12) ◽  
pp. 2959-2993 ◽  
Author(s):  
Eduardo Ros ◽  
Richard Carrillo ◽  
Eva M. Ortigosa ◽  
Boris Barbour ◽  
Rodrigo Agís

Nearly all neuronal information processing and interneuronal communication in the brain involves action potentials, or spikes, which drive the short-term synaptic dynamics of neurons, as well as their long-term dynamics via synaptic plasticity. In many brain structures, action potential activity is considered to be sparse. This sparseness of activity has been exploited to reduce the computational cost of large-scale network simulations through the development of event-driven simulation schemes. However, existing event-driven simulation schemes use extremely simplified neuronal models. Here, we implement and critically evaluate an event-driven algorithm (ED-LUT) that uses precalculated look-up tables to characterize synaptic and neuronal dynamics. This approach enables the use of more complex (and realistic) neuronal models or data in representing the neurons, while retaining the advantage of high-speed simulation. We demonstrate the method's application for neurons containing exponential synaptic conductances, thereby implementing shunting inhibition, a phenomenon that is critical to cellular computation. We also introduce an improved two-stage event-queue algorithm, which allows the simulations to scale efficiently to highly connected networks with arbitrary propagation delays. Finally, the scheme readily accommodates the implementation of synaptic plasticity mechanisms that depend on spike timing, enabling future simulations to explore issues of long-term learning and adaptation in large-scale networks.
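A stripped-down sketch of the look-up-table idea follows: membrane decay between events is read from a precomputed table, and a neuron's state is touched only when a spike event reaches it. This toy uses a single heap rather than ED-LUT's two-stage event queue, and the leaky integrate-and-fire dynamics, uniform weight, and fixed delay are simplifying assumptions.

```python
import heapq
import numpy as np

# Precomputed look-up table: membrane decay exp(-dt/tau) sampled on a grid,
# so state updates between events are table reads, not numerical integration.
TAU = 20.0                                    # membrane time constant (ms)
DT_GRID = np.linspace(0.0, 200.0, 2001)
DECAY_LUT = np.exp(-DT_GRID / TAU)

def decay(dt):
    return np.interp(dt, DT_GRID, DECAY_LUT)

def run(initial_events, weight, n_neurons, t_end, threshold=1.0, delay=1.0):
    """Minimal event-driven loop over a priority queue of spike deliveries."""
    v = np.zeros(n_neurons)                   # membrane potentials
    last = np.zeros(n_neurons)                # time of last update per neuron
    queue = list(initial_events)              # entries: (time, target_neuron)
    heapq.heapify(queue)
    spikes = []
    while queue:
        t, j = heapq.heappop(queue)
        if t > t_end:
            break
        v[j] = v[j] * decay(t - last[j]) + weight  # decay from LUT, then deposit
        last[j] = t
        if v[j] >= threshold:                 # fire, reset, propagate with delay
            v[j] = 0.0
            spikes.append((t, j))
            for k in range(n_neurons):        # fully connected toy network
                if k != j:
                    heapq.heappush(queue, (t + delay, k))
    return spikes
```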


Author(s):  
David Forbes ◽  
Gary Page ◽  
Martin Passmore ◽  
Adrian Gaylard

This study evaluates computational methods for reproducing experimental data for a generic sports utility vehicle (SUV) geometry and assesses the influence of fixed versus rotating wheels for this geometry. Initially, comparisons are made of the wake structure and base pressures between several CFD codes and experimental data. It is shown that steady-state RANS methods are unsuitable for this geometry because of large-scale unsteadiness in the wake caused by separation at the sharp trailing edge and rear-wheel wake interactions. Unsteady RANS (URANS) offered no improvement in wake prediction despite a significant increase in computational cost. The detached-eddy simulation (DES) and Lattice-Boltzmann (LBM) methods showed the best agreement with the experimental results in both the wake structure and the base pressure, with LBM running in approximately a fifth of the time required by DES. The study then analyses the influence of rotating wheels and a moving ground plane relative to a fixed-wheel, fixed-ground arrangement. The introduction of wheel rotation and a moving ground was shown to increase the base pressure and reduce the drag acting on the vehicle compared with the fixed case. However, when compared with the experimental standoff case, variations in the drag and lift coefficients were minimal but misleading, as significant variations in the surface pressures were present.


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Lei Luo ◽  
Chao Zhang ◽  
Yongrui Qin ◽  
Chunyuan Zhang

With the explosive growth of data volumes in modern applications such as web search and multimedia retrieval, hashing is becoming increasingly important for efficient nearest neighbor (similar item) search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm, maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
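As a rough illustration of the variance-maximization idea (not the paper's column-generation learner, and omitting its local-structure term), the sketch below binarizes projections onto the data's top principal directions, i.e. the directions of maximum variance, and searches by Hamming distance.

```python
import numpy as np

def variance_max_hash(X, n_bits):
    """Simplified stand-in for variance-maximizing hashing: project onto the
    top principal directions (maximum-variance directions) and take signs."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_bits].T
    return (Xc @ W > 0).astype(np.uint8), W

def hamming_search(codes, query_code, k=5):
    """Nearest neighbours by Hamming distance on the binary codes."""
    d = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(d)[:k]

X = np.random.default_rng(0).normal(size=(1000, 64))
codes, W = variance_max_hash(X, n_bits=16)
neighbours = hamming_search(codes, codes[0])
```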

