Impact of the reduced speed of light approximation on ionization front velocities in cosmological simulations of the epoch of reionization

2019 ◽  
Vol 622 ◽  
pp. A142 ◽  
Author(s):  
Nicolas Deparis ◽  
Dominique Aubert ◽  
Pierre Ocvirk ◽  
Jonathan Chardin ◽  
Joseph Lewis

Context. Coupled radiative-hydrodynamics simulations of the epoch of reionization aim to reproduce the propagation of ionization fronts during the transition before the overlap of HII regions. Many of these simulations use moment-based methods to track radiative transfer processes with explicit solvers and are therefore subject to strict stability conditions involving the speed of light, which implies a great computational cost. The cost can be reduced by assuming a reduced speed of light, and this approximation is now widely used to produce large-scale simulations of reionization. Aims. We measure how ionization fronts propagate in simulations of the epoch of reionization. In particular, we want to distinguish between the different stages of the fronts' progression into the intergalactic medium. We also investigate how these stages and their properties are affected by the choice of a reduced speed of light. Methods. We introduce a new method for estimating and comparing ionization front speeds based on maps of the reionization redshifts. We applied it to a set of cosmological simulations of reionization run with a set of reduced speeds of light, and measured the evolution of the ionization front speeds during the reionization process. We only considered models where reionization is driven by the sources created within the simulations, without potential contributions from an external homogeneous ionizing background. Results. We find that ionization fronts progress via a two-stage process: a first, low-velocity stage as the fronts emerge from high-density regions, and a second, later stage just before overlap, during which front speeds increase to close to the speed of light. For example, using a set of small 8³ Mpc³ h⁻³ simulations, we find that a minimal velocity of 0.3c can model these two stages in this specific context without significant impact. Values as low as 0.05c can model the first low-velocity stage, but limit the acceleration at later times. Lower values modify the distribution of front speeds at all times. Using another set of simulations with larger 64³ Mpc³ h⁻³ volumes that better account for distant sources, we find that a reduced speed of light has a greater impact on reionization times and front speeds in underdense regions, which are reionized at late times and swept by radiation produced by distant sources. Conversely, the same quantities measured in dense regions with slow fronts are less sensitive to the c̃ value. While the discrepancies introduced by a reduced speed of light could be mitigated by the inclusion of an additional UV background, we expect these conclusions to be robust for simulations in which reionization is driven by internal sources.
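
One natural estimator consistent with the description above treats the reionization-time map t_reion(x) as a level-set function: the front crosses a cell at t = t_reion(x), so its local speed is 1/|∇t_reion|. A minimal sketch of such an estimator, assuming a uniform 3D grid (function and variable names are ours, not the paper's pipeline):

```python
import numpy as np

def front_speed_from_treion(t_reion, dx):
    """Estimate ionization-front speeds from a 3D map of reionization times.

    t_reion : 3D array of reionization times (e.g. in Myr) per cell
    dx      : cell size (e.g. in comoving kpc)

    The local front speed is the inverse norm of the spatial gradient of
    the reionization-time field: v = 1 / |grad t_reion|.
    """
    gx, gy, gz = np.gradient(t_reion, dx)
    grad_norm = np.sqrt(gx**2 + gy**2 + gz**2)
    # Guard against division by zero where the front is effectively instantaneous
    return 1.0 / np.maximum(grad_norm, 1e-12)
```

Comparing the resulting speed histograms across runs with different c̃ values then directly probes the two-stage behaviour described in the results.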


2020 ◽  
Author(s):  
Andrew Whalen ◽  
John M Hickey

In this paper we present a new imputation algorithm, AlphaImpute2, which performs fast and accurate pedigree- and population-based imputation for livestock populations of hundreds of thousands of individuals. Genetic imputation is a tool used to decrease the cost of genotyping a population, by genotyping a small number of individuals at high density and the remaining individuals at low density. Shared haplotype segments between the high-density and low-density individuals can then be used to fill in the missing genotypes of the low-density individuals. As the size of genetic datasets has grown, the computational cost of performing imputation has increased, particularly in agricultural breeding programs where there might be hundreds of thousands of genotyped individuals. To address this issue, AlphaImpute2 performs population imputation using a particle-based approximation to the Li and Stephens model that exploits the Positional Burrows-Wheeler Transform, and performs pedigree imputation using an approximate version of multi-locus iterative peeling. We tested AlphaImpute2 on four simulated datasets designed to mimic the pedigrees found in a real pig breeding program. We compared AlphaImpute2 to AlphaImpute, AlphaPeel, findhap version 4, and Beagle 5.1. We found that AlphaImpute2 had the highest accuracy, with an accuracy of 0.993 for low-density individuals on the pedigree with 107,000 individuals, compared to 0.942 for Beagle 5.1, 0.940 for AlphaImpute, and 0.801 for findhap. AlphaImpute2 was also the fastest software tested: its runtime on a pedigree of 107,000 individuals and 5,000 markers was 105 minutes, compared to 190 minutes for Beagle 5.1, 395 minutes for findhap, and 7,859 minutes for AlphaImpute. We believe that AlphaImpute2 will enable fast and accurate large-scale imputation for agricultural populations as they scale to hundreds of thousands or millions of genotyped individuals.
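
For readers unfamiliar with the Positional Burrows-Wheeler Transform that the population step relies on, the core construction is a single stable pass over the haplotype matrix. A minimal sketch following Durbin's construction (names and data layout are ours, not AlphaImpute2's API):

```python
def pbwt_sort_orders(haplotypes):
    """Positional Burrows-Wheeler Transform: per-site haplotype orderings.

    haplotypes : list of M equal-length 0/1 sequences (one per haplotype).
    Returns, for each site k, the haplotype indices sorted by their
    reversed prefixes up to site k.
    """
    m, n = len(haplotypes), len(haplotypes[0])
    order = list(range(m))
    orders = []
    for k in range(n):
        # Stable counting sort on the allele at site k
        zeros = [i for i in order if haplotypes[i][k] == 0]
        ones = [i for i in order if haplotypes[i][k] == 1]
        order = zeros + ones
        orders.append(order[:])
    return orders
```

Adjacent indices in each ordering share the longest reverse-prefix matches, which is what makes Li-and-Stephens-style haplotype matching fast on large panels.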



2020 ◽  
Vol 495 (4) ◽  
pp. 4227-4236 ◽  
Author(s):  
Doogesh Kodi Ramanah ◽  
Tom Charnock ◽  
Francisco Villaescusa-Navarro ◽  
Benjamin D Wandelt

ABSTRACT We present an extension of our recently developed Wasserstein-optimized model to emulate accurate high-resolution (HR) features from computationally cheaper low-resolution (LR) cosmological simulations. Our deep physical modelling technique relies on restricted neural networks to perform a mapping of the distribution of the LR cosmic density field to the space of the HR small-scale structures. We constrain our network using a single triplet of HR initial conditions and the corresponding LR and HR evolved dark matter simulations from the Quijote suite of simulations. We exploit the information content of the HR initial conditions as a well-constructed prior distribution from which the network emulates the small-scale structures. Once fitted, our physical model yields emulated HR simulations at low computational cost, while also providing some insight into how the large-scale modes affect the small-scale structure in real space.
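
As a purely illustrative sketch of the LR-to-HR mapping idea (not the paper's Wasserstein-optimized architecture, whose layers and loss differ), a 3D convolutional network that upsamples a density field could look like this:

```python
import torch
import torch.nn as nn

class LRtoHREmulator(nn.Module):
    """Schematic 3D convolutional emulator mapping a low-resolution density
    field to a higher-resolution one. Illustrative only."""
    def __init__(self, upscale=2, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="trilinear", align_corners=False),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, lr_density):  # (batch, 1, N, N, N)
        return self.net(lr_density)

# Example: emulate a 64^3 field from a 32^3 one
hr = LRtoHREmulator()(torch.randn(1, 1, 32, 32, 32))  # -> (1, 1, 64, 64, 64)
```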



2019 ◽  
Author(s):  
Mohsen Sadeghi ◽  
Frank Noé

Biomembranes are two-dimensional assemblies of phospholipids that are only a few nanometres thick, yet they form micrometre-sized structures vital to cellular function. Explicit modelling of biologically relevant membrane systems is computationally expensive, especially when the large number of solvent particles and the slow membrane kinetics are taken into account. While highly coarse-grained solvent-free models are available to study the equilibrium behaviour of membranes, their efficiency comes at the cost of sacrificing realistic kinetics, and thereby the ability to predict pathways and mechanisms of membrane processes. Here, we present a framework for integrating coarse-grained membrane models with anisotropic stochastic dynamics and continuum-based hydrodynamics, allowing us to simulate large biomembrane systems with realistic kinetics at low computational cost. This paves the way for whole-cell simulations that retain nanometre/nanosecond spatiotemporal resolution. As a demonstration, we obtain and verify the fluctuation spectrum of a full-sized human red blood cell in a single 150-millisecond trajectory. We show how the kinetic effects of different cytoplasmic viscosities can be studied with such a simulation, with predictions that agree with single-cell experimental observations.
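
The fluctuation spectrum mentioned in the demonstration is conventionally obtained by Fourier transforming the membrane height field and comparing with the Helfrich prediction ⟨|h_q|²⟩ = k_BT/(κq⁴ + σq²). A minimal sketch under assumed conventions (a square patch sampled on an N×N grid; names and normalization are ours):

```python
import numpy as np

def height_fluctuation_spectrum(h, box_length, n_bins=30):
    """Radially averaged height-fluctuation spectrum <|h_q|^2> of a membrane
    patch, for comparison with the Helfrich form kT / (kappa q^4 + sigma q^2).
    Normalization conventions vary; this uses a plain discrete FFT."""
    n = h.shape[0]
    power = np.abs(np.fft.fft2(h - h.mean()))**2 / n**4   # per-mode power
    q = 2 * np.pi * np.fft.fftfreq(n, d=box_length / n)   # angular wavenumbers
    qx, qy = np.meshgrid(q, q, indexing="ij")
    q_norm = np.sqrt(qx**2 + qy**2).ravel()
    power = power.ravel()
    # Radial binning over q > 0
    bins = np.linspace(q_norm[q_norm > 0].min(), q_norm.max(), n_bins)
    idx = np.digitize(q_norm, bins)
    spec = [power[idx == i].mean() if np.any(idx == i) else np.nan
            for i in range(1, n_bins)]
    return bins[:-1], np.array(spec)
```

Fitting the low-q part of such a spectrum yields the bending rigidity κ and tension σ, which is one way the simulated cell can be verified against experiment.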



Author(s):  
Lin Lin ◽  
Xiaojie Wu

The Hartree-Fock-Bogoliubov (HFB) theory is the starting point for treating superconducting systems. However, the computational cost of solving large-scale HFB equations can be much larger than that of the Hartree-Fock equations, particularly when the Hamiltonian matrix is sparse and the number of electrons $N$ is relatively small compared to the matrix size $N_{b}$. We first provide a concise and relatively self-contained review of the HFB theory for general finite-sized quantum systems, with special focus on the treatment of spin symmetries from a linear algebra perspective. We then demonstrate that the pole expansion and selected inversion (PEXSI) method can be particularly well suited for solving large-scale HFB equations. For a Hubbard-type Hamiltonian, the cost of PEXSI is at most $\mathcal{O}(N_b^2)$ for both gapped and gapless systems, which can be significantly faster than standard cubic-scaling diagonalization methods. We show that PEXSI can solve a two-dimensional Hubbard-Hofstadter model with $N_b$ up to $2.88\times 10^6$, with a wall-clock time of less than $100$ s using $17280$ CPU cores. This enables the simulation of physical systems under experimentally realizable magnetic fields, which cannot otherwise be simulated with smaller systems.
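
For orientation, the cubic-scaling baseline that PEXSI avoids is dense diagonalization of the Bogoliubov-de Gennes matrix built from the single-particle Hamiltonian and the pairing field. A minimal sketch for a 1D tight-binding chain with an assumed uniform s-wave pairing (illustrative only; not the paper's PEXSI solver):

```python
import numpy as np

def bdg_spectrum(n_sites=100, t=1.0, mu=0.5, delta=0.2):
    """Dense diagonalization of a 1D s-wave BdG/HFB Hamiltonian.
    The BdG matrix has size 2*n_sites, so this scales as O(N_b^3);
    PEXSI targets exactly this bottleneck for sparse Hamiltonians."""
    # Single-particle part: nearest-neighbour hopping minus chemical potential
    h = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1)) - mu * np.eye(n_sites)
    gap = delta * np.eye(n_sites)  # uniform on-site s-wave pairing (assumed)
    h_bdg = np.block([[h, gap],
                      [gap.conj().T, -h.conj()]])
    energies, vectors = np.linalg.eigh(h_bdg)
    return energies, vectors
```

The quasiparticle spectrum returned here is what a pole-expansion approach evaluates implicitly through selected elements of Green's functions, without ever forming the full eigendecomposition.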



2020 ◽  
Vol 39 (4) ◽  
pp. 5449-5458
Author(s):  
A. Arokiaraj Jovith ◽  
S.V. Kasmir Raja ◽  
A. Razia Sulthana

Interference in a Wireless Sensor Network (WSN) predominantly affects the performance of the network, and energy consumption in WSNs is one of the greatest concerns of the current generation. This work presents an approach for interference measurement and mitigation in point-to-point networks. The nodes are distributed in the network and interference is measured by grouping the nodes into regions of a specific diameter; the approach is therefore scalable and extends to large-scale WSNs. Interference is mitigated in two stages. In the first stage, interference is overcome by allocating time slots to the node stations in Time Division Multiple Access (TDMA) fashion: the node area is split into larger and smaller regions, and time slots are allocated to the smaller regions. A TDMA-based time-slot allocation algorithm is proposed in this paper to enable the reuse of time slots with minimal interference between smaller regions. In the second stage, network density and a control parameter are introduced to further reduce interference within the smaller node regions. The algorithm is simulated and the system is tested with varying values of the control parameter. The node-level interference and the energy dissipation at nodes are captured while varying the node density of the network. The results indicate that the proposed approach measures interference and mitigates it with minimal energy consumption at the nodes and with little transmission overhead.
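
The first-stage slot reuse can be pictured as greedy graph colouring: regions close enough to interfere must receive different TDMA slots, while distant regions may share one. A minimal sketch of this idea (a generic greedy assignment, not necessarily the paper's exact algorithm):

```python
def assign_tdma_slots(regions, interferes):
    """Greedy TDMA slot assignment with spatial reuse.

    regions    : iterable of region identifiers
    interferes : function (a, b) -> True if regions a and b are close
                 enough to interfere (e.g. within twice the radio range)
    Returns a dict mapping each region to the smallest slot index that
    no interfering neighbour already uses.
    """
    slots = {}
    for region in regions:
        taken = {slots[other] for other in slots if interferes(region, other)}
        slot = 0
        while slot in taken:
            slot += 1
        slots[region] = slot
    return slots

# Example: four regions on a line, interference within distance 1
positions = {"A": 0, "B": 1, "C": 2, "D": 3}
close = lambda a, b: abs(positions[a] - positions[b]) <= 1
print(assign_tdma_slots(positions, close))  # {'A': 0, 'B': 1, 'C': 0, 'D': 1}
```

Regions A and C reuse slot 0 because they are out of interference range of each other, which is what keeps the slot count, and hence latency, low as the network scales.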



2014 ◽  
Vol 3 (3) ◽  
pp. 257-266 ◽  
Author(s):  
Piero Chiarelli

This work shows that, in the frame of the stochastic generalization of the quantum hydrodynamic analogy (QHA), the uncertainty principle is fully compatible with the postulate of a finite transmission speed of light and information. The theory shows that a measurement performed in the large-scale classical limit, in the presence of background noise, cannot have a duration smaller than the time needed for light to travel the distance over which the quantum non-local interaction extends. The product of this minimum measuring time and the variance of the energy fluctuations due to the stochastic noise is shown to lead to the minimum uncertainty principle. The paper also shows that the uncertainty relations can be derived for the indetermination of the position and momentum of a particle of mass m in a quantum fluctuating environment.
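
Schematically, writing $L_{nl}$ for the range of the quantum non-local interaction (symbol ours, chosen for this restatement), the argument reads:

```latex
\tau_{\min} \simeq \frac{L_{nl}}{c},
\qquad
\tau_{\min}\,\Delta E \gtrsim \frac{\hbar}{2},
```

where $\Delta E$ is the magnitude of the energy fluctuations induced by the background noise, so the finite speed of light sets the floor on measurement duration from which the minimum uncertainty product follows.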



2020 ◽  
Vol 26 (7) ◽  
pp. 1469-1495
Author(s):  
A.L. Sabinina ◽  
V.V. Sokolovskii ◽  
N.A. Shul'zhenko ◽  
N.A. Sychova

Subject. The article describes the authors' findings on fundamental strategic decisions concerning the formation of multifunctional urban complexes, using the housing demand and supply criterion. Objectives. We undertake a comprehensive study aimed at improving the methodology for evaluating options for city infrastructure development at two stages: the strategic stage, when general targets for feasible commissioning are determined, and the current stage, when the parameters of demand for facilities are taken into account. Methods. The study employs methods of expert survey, statistical data processing, and predictive and investigative analysis. Results. We explored the factors that create amenities and comfort in residential construction areas, and developed an algorithm to calculate the volume of new living space to be commissioned on the basis of evaluating demand in the Smart City paradigm. Conclusions. The study shows how cost increases with the built-up area, the number of floors, and the balance between the type of capacity and the number of residents in the quarter (a linear relationship).



2000 ◽  
Vol 151 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Stephan Wild-Eck ◽  
Willi Zimmermann

Two large-scale surveys looking at attitudes towards forests, forestry and forest policy in the second half of the nineties have been carried out on behalf of the Swiss Confederation by the Chair of Forest Policy and Forest Economics of the Federal Institute of Technology (ETH) in Zurich. Not only did the two studies use very different methods, but the results also varied greatly as far as infrastructure and basic conditions were concerned. One of the main differences between the two studies was that the first dealt only with mountainous areas, whereas the second covered the whole Swiss population. The results of the studies reflect these differences: each produced its own specific findings. Where the same (or similar) questions were asked, the answers highlight not only how the attitudes of those questioned differ, but also the views that they hold in common. Both surveys showed positive attitudes towards forests in general, a deep-seated appreciation of the forest as a recreational area, and a positive approach to tending. Detailed results of the two surveys will be available in the near future.



1999 ◽  
Vol 39 (10-11) ◽  
pp. 289-295
Author(s):  
Saleh Al-Muzaini

The Shuaiba Industrial Area (SIA) is located about 50 km south of Kuwait City and accommodates most of the large-scale industries in Kuwait. The total area of the SIA (eastern and western sectors combined) is about 22.98 million m². Fifteen plants are located in the eastern sector and 23 in the western sector, including two petrochemical companies, three refineries, two power plants, a melamine company, an industrial gas corporation, a paper products company, and two steam electricity-generating stations, in addition to several other industries. As a result, only 30 percent of the land in the SIA's eastern sector and 70 percent of the land in its western sector is available for future expansion. Presently, industries in the SIA generate approximately 204,000 t of solid waste; with future industrial development, this quantity is estimated to reach 240,000 t. The Shuaiba Area Authority (SAA), the governmental regulatory body responsible for planning and development in the SIA, has recognized the solid waste problem and has developed an industrial waste minimization program. This program will help reduce the quantity of waste generated within the SIA and thereby reduce the cost of waste management. This paper presents a description of the waste minimization program and how it is to be implemented by major petroleum companies. The protocols employed in the waste minimization program are detailed.


