Dynamic zoom simulations: A fast, adaptive algorithm for simulating light-cones

2020 ◽  
Vol 499 (2) ◽  
pp. 2685-2700
Author(s):  
Enrico Garaldi ◽  
Matteo Nori ◽  
Marco Baldi

ABSTRACT The advent of a new generation of large-scale galaxy surveys is pushing cosmological numerical simulations into uncharted territory. The simultaneous requirements of high resolution and very large volume pose serious technical challenges, due to their computational and data-storage demands. In this paper, we present a novel approach dubbed dynamic zoom simulations – or dzs – developed to tackle these issues. Our method is tailored to the production of light-cone outputs from N-body numerical simulations, which allow for more efficient storage and post-processing than standard comoving snapshots, and more directly mimic the format of survey data. In dzs, the resolution of the simulation is dynamically decreased outside the light-cone surface, reducing the computational workload, while simultaneously preserving the accuracy inside the light-cone and the large-scale gravitational field. We show that our approach can achieve virtually identical results to traditional simulations at half of the computational cost for our largest box. We also forecast this speedup to increase up to a factor of 5 for larger and/or higher-resolution simulations. We assess the accuracy of the numerical integration by comparing pairs of identical simulations run with and without dzs. Deviations in the light-cone halo mass function, in the sky-projected light-cone, and in the 3D matter light-cone always remain below 0.1 per cent. In summary, our results indicate that the dzs technique may provide a highly valuable tool to address the technical challenges that will characterize the next generation of large-scale cosmological simulations.
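The core idea of degrading resolution outside the light-cone surface can be sketched as follows. This is a minimal toy illustration, not the paper's actual scheme: the function name and the pairwise-merging rule are illustrative assumptions, but the invariants it shows (full resolution inside the light-cone, mass- and momentum-conserving coarsening outside) follow the description above.

```python
import numpy as np

def degrade_outside_lightcone(pos, mass, r_lc):
    """Toy sketch of the dzs idea (names and merging rule are illustrative):
    particles outside the light-cone radius r_lc are merged in pairs into
    coarser tracer particles (centre of mass, summed mass), while particles
    inside keep full resolution. Total mass is conserved."""
    r = np.linalg.norm(pos, axis=1)          # comoving distance from the observer
    inside = r <= r_lc
    out = np.where(~inside)[0]

    new_pos = [pos[inside]]                  # inside particles kept untouched
    new_mass = [mass[inside]]
    for i in range(0, len(out) - 1, 2):      # merge outside particles in pairs
        a, b = out[i], out[i + 1]
        m = mass[a] + mass[b]
        new_pos.append(((mass[a] * pos[a] + mass[b] * pos[b]) / m)[None, :])
        new_mass.append(np.array([m]))
    if len(out) % 2:                         # an odd leftover particle stays as-is
        new_pos.append(pos[out[-1]][None, :])
        new_mass.append(np.array([mass[out[-1]]]))

    return np.concatenate(new_pos), np.concatenate(new_mass)
```

In a real integrator this degradation would be applied progressively as the light-cone surface sweeps inward, so the coarse region grows over time while the already-output light-cone data are unaffected.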

2006 ◽  
Vol 04 (03) ◽  
pp. 639-647 ◽  
Author(s):  
ELEAZAR ESKIN ◽  
RODED SHARAN ◽  
ERAN HALPERIN

The common approaches for haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled in a heuristic manner, due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, which is based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion. Among other things, this criterion takes into account the physical length between neighboring single nucleotide polymorphisms. The approach is very efficient, has been applied to several large-scale datasets, and has been shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .
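The window-combination step can be illustrated with a much simpler greedy variant. This is a hypothetical sketch, not the authors' algorithm: each short window's phasing is reported up to a global flip, and consecutive windows are oriented to agree on their shared sites; the full method instead maximizes a likelihood that also weights the physical distance between neighboring SNPs.

```python
def stitch_windows(windows, overlap):
    """Toy illustration of combining local phasing calls on short,
    overlapping windows into one long haplotype. Each window is a list of
    alleles (0/1) at consecutive heterozygous sites; a window may be
    reported in either orientation, so each new window is flipped if the
    flipped form agrees better with the previous window on the
    `overlap` shared sites."""
    def flip(w):
        return [1 - a for a in w]

    hap = list(windows[0])
    for w in windows[1:]:
        tail = hap[-overlap:]
        agree = sum(a == b for a, b in zip(tail, w[:overlap]))
        if agree < overlap - agree:      # flipped orientation matches better
            w = flip(w)
        hap.extend(w[overlap:])          # append only the non-overlapping part
    return hap
```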


Author(s):  
Darryl D. Holm ◽  
Lennon Ó Náraigh ◽  
Cesare Tronci

This paper exploits the theory of geometric gradient flows to introduce an alternative regularization of the thin-film equation valid in the case of large-scale droplet spreading—the geometric diffuse-interface method. The method possesses some advantages when compared with the existing models of droplet spreading, namely the slip model, the precursor-film method and the diffuse-interface model. These advantages are discussed and a case is made for using the geometric diffuse-interface method for the purpose of numerical simulations. The mathematical solutions of the geometric diffuse-interface method are explored via such numerical simulations for the simple and well-studied case of large-scale droplet spreading for a perfectly wetting fluid—we demonstrate that the new method reproduces Tanner's law of droplet spreading via a simple and robust computational method, at a low computational cost. We discuss potential avenues for extending the method beyond the simple case of perfectly wetting fluids.
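For reference, the Tanner's law that the simulations are said to reproduce is the classical spreading law for a small, perfectly wetting droplet (standard form, not quoted from this abstract):

```latex
% Tanner's law: the contact-line radius of a spreading droplet
% grows as a slow power law of time,
R(t) \propto t^{1/10},
% and, since the droplet volume is conserved, the apparent
% contact angle correspondingly decays as
\theta(t) \propto t^{-3/10}.
```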


2020 ◽  
Vol 637 ◽  
pp. A18 ◽  
Author(s):  
Tony Bonnaire ◽  
Nabila Aghanim ◽  
Aurélien Decelle ◽  
Marian Douspis

Numerical simulations and observations show that galaxies are not uniformly distributed in the universe but, rather, they are spread across a filamentary structure. In this large-scale pattern, highly dense regions are linked together by bridges and walls, all of them surrounded by vast, nearly-empty areas. While nodes of the network are widely studied in the literature, simulations indicate that half of the mass budget comes from a more diffuse part of the network, which is made up of filaments. In the context of recent and upcoming large galaxy surveys, it becomes essential that we identify and classify features of the Cosmic Web in an automatic way in order to study their physical properties and the impact of the cosmic environment on galaxies and their evolution. In this work, we propose a new approach for the automatic retrieval of the underlying filamentary structure from a 2D or 3D galaxy distribution using graph theory and the assumption that paths that link galaxies together with the minimum total length highlight the underlying distribution. To obtain a smoothed version of this topological prior, we embedded it in a Gaussian mixture framework. In addition to a geometrical description of the pattern, a bootstrap-like estimate of these regularised minimum spanning trees allowed us to obtain a map characterising the frequency at which an area of the domain is crossed. Using the distribution of halos derived from numerical simulations, we show that the proposed method is able to recover the filamentary pattern in a 2D or 3D distribution of points, with robustness to noise and outliers, using a few comprehensible parameters.
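The topological prior above, before any smoothing, is just the minimum spanning tree of the point set: the set of links connecting all galaxies with the smallest total length. A minimal sketch of building it (the Gaussian-mixture regularisation and the bootstrap map are not reproduced here) could look like this:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def galaxy_mst(points):
    """Build the minimum spanning tree of a 2D/3D point cloud.
    `points` is an (N, d) array of galaxy positions; the tree has
    exactly N-1 edges and minimises the total edge length."""
    dist = squareform(pdist(points))       # dense pairwise Euclidean distances
    tree = minimum_spanning_tree(dist)     # sparse matrix holding the tree edges
    rows, cols = tree.nonzero()
    edges = list(zip(rows, cols))          # (i, j) index pairs of linked points
    return edges, tree.sum()               # edges and total tree length
```

For survey-sized catalogues one would build the graph from k-nearest neighbours rather than the full dense distance matrix, which is quadratic in N.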


2021 ◽  
Vol 850 (1) ◽  
pp. 012018
Author(s):  
T Renugadevi ◽  
D Hari Prasanth ◽  
Appili Yaswanth ◽  
K Muthukumar ◽  
M Venkatesan

Abstract Data centers are large-scale data storage and processing systems. A data center is made up of a number of servers that must be capable of handling large amounts of data. As a result, data centers generate a significant quantity of heat, which must be removed so that the equipment is kept at an optimal temperature to avoid overheating. To address this problem, thermal analysis of the data center is carried out using numerical methods. The CFD model consists of a micro data center in which conjugate heat transfer effects are studied. A micro data center consists of servers alternating with air gaps, and cooling air is passed through the air gaps to remove heat. In the present work, the data center rack is designed in such a way that the cold air is in close proximity to the servers. The temperature and airflow in the data center are estimated using the model. The air gap is optimally designed for the cooling unit. The temperature distribution of various load configurations is studied. The objective of the study is to find a favorable loading configuration of the micro data center for various loads and the effectiveness of the distribution of load among the servers.


2008 ◽  
Vol 86 (4) ◽  
pp. 523-527 ◽  
Author(s):  
M Berciu

We present a novel approach for obtaining simple yet highly accurate approximations for interacting problems. We use the Holstein polaron as an example and show that the predictions of the so-called Momentum Average approximation are in good agreement with results of numerical simulations over most of the parameter space, and become exact in various asymptotic limits. The resulting Green’s function satisfies exactly the first six spectral weight sum rules, and all higher order sum rules are satisfied with great accuracy. Furthermore, the accuracy can be improved systematically, at a slightly increased computational cost. PACS Nos.: 71.38.–k, 72.10.Di, 63.20.Kr
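For context, the spectral weight sum rules referred to are frequency moments of the spectral function. In standard notation (not quoted from this abstract), and restricted to the single-electron sector relevant for the Holstein polaron:

```latex
% The n-th spectral weight sum rule is the n-th frequency moment of the
% spectral function A(k,\omega) = -\tfrac{1}{\pi}\,\mathrm{Im}\,G(k,\omega):
M_n(k) = \int_{-\infty}^{\infty} \omega^{n}\, A(k,\omega)\, \mathrm{d}\omega
       = \langle k |\, H^{n} \,| k \rangle ,
% e.g. M_0 = 1 (normalization of the spectral weight) and, for the
% Holstein model, M_1(k) = \varepsilon_k, the free-electron dispersion.
```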


In the current scenario, Big Data processing, which includes data storage, aggregation, transmission and evaluation, has attracted growing attention from researchers, since an enormous amount of data is produced by the sensing nodes of large-scale Wireless Sensor Networks (WSNs). Concerning the energy-efficiency and privacy-conservation needs of WSNs in big data aggregation and processing, this paper develops a novel model called Multilevel Clustering based Energy Efficient Privacy-preserving Big Data Aggregation (MCEEP-BDA). Initially, based on the pre-defined structure of the gradient topology, the sensor nodes are framed into clusters. Further, the sensed information collected from each sensor node is altered with respect to the privacy-preserving model obtained from its corresponding sink. An energy model has been defined for determining the efficient energy consumption of the overall process of big data aggregation in the WSN. Moreover, a Cluster_head Rotation process has been incorporated to effectively reduce the communication overhead and computational cost. Additionally, a Least-BDA-Tree algorithm has been framed for aggregating the big sensor data through the selected cluster heads effectively. The simulation results show that the developed MCEEP-BDA model is more scalable and energy efficient, and that Big Data Aggregation (BDA) is performed with reduced resource utilization and in a secure manner through the privacy-preserving model, satisfying the security concerns of developing application-oriented needs.


Author(s):  
Junwei Han ◽  
Kai Xiong ◽  
Feiping Nie

Spectral clustering has been widely used in recent years due to its simplicity in solving the graph clustering problem. However, it suffers from high computational cost as data grow in scale, and is limited by the performance of post-processing. To address these two problems simultaneously, in this paper we propose a novel approach, denoted orthogonal and nonnegative graph reconstruction (ONGR), that scales linearly with the data size. For the relaxation of Normalized Cut, we add a nonnegativity constraint to the objective. Due to the nonnegativity, ONGR offers interpretability in that the final cluster labels can be obtained directly without post-processing. Extensive experiments on clustering tasks demonstrate the effectiveness of the proposed method.
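The "no post-processing" point can be made concrete with a small sketch (illustrative only; the function name and toy matrix are assumptions, not the paper's code). In classical spectral clustering the relaxed indicator matrix has mixed signs, so k-means must be run on its rows to recover clusters; with a nonnegative (and orthogonal) indicator matrix, each row is dominated by one component and the label is read off directly.

```python
import numpy as np

def labels_from_indicator(F):
    """Given a nonnegative relaxed cluster-indicator matrix F of shape
    (n_samples, n_clusters), assign each sample to the cluster of its
    largest entry. No k-means or other post-processing is required."""
    return np.argmax(F, axis=1)
```

Usage: for `F = [[0.9, 0.1], [0.05, 0.8], [0.7, 0.0]]` the labels are simply `[0, 1, 0]`.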


2019 ◽  
Author(s):  
Chem Int

This research work presents a facile and green route for the synthesis of silver sulfide nanoparticles (Ag2SNPs) from silver nitrate (AgNO3) and sodium sulfide nonahydrate (Na2S·9H2O) in the presence of rosemary leaf aqueous extract at ambient temperature (27 °C). Structural and morphological properties of the Ag2SNPs were analyzed by X-ray diffraction (XRD) and transmission electron microscopy (TEM). The surface plasmon resonance for the Ag2SNPs was observed around 355 nm. The Ag2SNPs were spherical in shape with an effective diameter of 14 nm. Our novel approach represents a promising and effective route for the large-scale synthesis of eco-friendly silver sulfide nanoparticles with antibacterial activity.


2021 ◽  
Vol 502 (3) ◽  
pp. 3942-3954
Author(s):  
D Hung ◽  
B C Lemaux ◽  
R R Gal ◽  
A R Tomczak ◽  
L M Lubin ◽  
...  

ABSTRACT We present a new mass function of galaxy clusters and groups using optical/near-infrared (NIR) wavelength spectroscopic and photometric data from the Observations of Redshift Evolution in Large-Scale Environments (ORELSE) survey. At z ∼ 1, cluster mass function studies are rare regardless of wavelength and have never been attempted from an optical/NIR perspective. This work serves as a proof of concept that z ∼ 1 cluster mass functions are achievable without supplemental X-ray or Sunyaev-Zel’dovich data. Measurements of the cluster mass function provide important constraints on cosmological parameters and are complementary to other probes. With ORELSE, a new cluster finding technique based on Voronoi tessellation Monte Carlo (VMC) mapping, and rigorous purity and completeness testing, we have obtained ∼240 galaxy overdensity candidates in the redshift range 0.55 < z < 1.37 at a mass range of 13.6 < log (M/M⊙) < 14.8. This mass range is comparable to existing optical cluster mass function studies for the local universe. Our candidate numbers vary based on the choice of multiple input parameters related to detection and characterization in our cluster finding algorithm, which we incorporated into the mass function analysis through a Monte Carlo scheme. We find cosmological constraints on the matter density, Ωm, and the amplitude of fluctuations, σ8, of $\Omega _{m} = 0.250^{+0.104}_{-0.099}$ and $\sigma _{8} = 1.150^{+0.260}_{-0.163}$. While our Ωm value is close to concordance, our σ8 value is ∼2σ higher because of the inflated observed number densities compared to theoretical mass function models owing to how our survey targeted overdense regions. With Euclid and several other large, unbiased optical surveys on the horizon, VMC mapping will enable optical/NIR cluster cosmology at redshifts much higher than what has been possible before.
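The Monte Carlo scheme described above, in which candidate numbers vary with the cluster-finder's input parameters, can be sketched in simplified form (function name and data layout are assumptions for illustration, not the survey's pipeline): each parameter draw yields a recovered catalogue, and the ensemble of binned counts gives the mass function and its parameter-induced scatter.

```python
import numpy as np

def mc_mass_function(catalogs, bins):
    """Simplified sketch of a Monte Carlo mass-function estimate.
    `catalogs` is a list of arrays, each holding the log-masses recovered
    under one random draw of the detection/characterization parameters;
    `bins` are the log-mass bin edges. Returns the per-bin mean count and
    the per-bin scatter across the parameter draws."""
    counts = np.array([np.histogram(logm, bins=bins)[0] for logm in catalogs])
    return counts.mean(axis=0), counts.std(axis=0)
```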

