parameter ranges
Recently Published Documents

Total documents: 380 (five years: 117)
H-index: 32 (five years: 5)

2022
Author(s): Christopher B Boyer, Eva Rumpler, Stephen M Kissler, Marc Lipsitch

Social gatherings can be an important locus of transmission for many pathogens, including SARS-CoV-2. During an outbreak, restricting the size of these gatherings is one of several non-pharmaceutical interventions available to policy-makers to reduce transmission. Often these restrictions take the form of prohibitions on gatherings above a certain size. While it is generally agreed that such restrictions reduce contacts, the specific size threshold separating "allowed" from "prohibited" gatherings often does not have a clear scientific basis, which leads to dramatic differences in guidance across locations and over time. Building on the observation that gathering size distributions are often heavy-tailed, we develop a theoretical model of transmission during gatherings and their contribution to general disease dynamics. We find that a key, but often overlooked, determinant of the optimal threshold is the distribution of gathering sizes. Using data on pre-pandemic contact patterns from several sources as well as empirical estimates of transmission parameters for SARS-CoV-2, we apply our model to better understand the relationship between the restriction threshold and the reduction in cases. We find that, under reasonable transmission parameter ranges, restrictions may have to be set quite low to have any demonstrable effect on cases, owing to the relative frequency of smaller gatherings. We compare our conceptual model with observed changes in reported contacts during the lockdown of March 2020.
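To make the threshold calculation concrete, here is a minimal sketch under assumed values (my illustration, not the authors' fitted model): gathering sizes follow a power-law distribution, a gathering of size n contributes on the order of n(n-1) contact pairs scaled by a per-pair transmission risk tau, and a size cap simply removes the larger gatherings. The exponent alpha and risk tau are placeholders.

```python
import numpy as np

def gathering_pmf(n_max=10_000, alpha=3.5):
    """Heavy-tailed (power-law) distribution over gathering sizes 2..n_max."""
    sizes = np.arange(2, n_max + 1)
    pmf = sizes.astype(float) ** -alpha
    return sizes, pmf / pmf.sum()

def relative_transmission(cap, sizes, pmf, tau=0.01):
    """Fraction of baseline gathering transmission retained under a size cap.
    Per-gathering hazard ~ tau * n * (n - 1); tau cancels in the ratio."""
    hazard = tau * sizes * (sizes - 1)
    allowed = sizes <= cap
    return (pmf[allowed] * hazard[allowed]).sum() / (pmf * hazard).sum()

sizes, pmf = gathering_pmf()
for cap in (1000, 100, 50, 10):
    frac = relative_transmission(cap, sizes, pmf)
    print(f"cap {cap:>5}: {frac:.1%} of baseline transmission remains")
```

With the assumed exponent, small gatherings dominate the size distribution, so even fairly aggressive caps leave most of the total transmission intact, which is the qualitative effect described above.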


Author(s): Alexander Lopez, Solmar Varela, Ernesto Medina

Abstract: The spin activity in macromolecules such as DNA and oligopeptides, in the context of Chiral-Induced Spin Selectivity (CISS), has been proposed to arise from the atomic Spin-Orbit Coupling (SOC) and the associated chiral symmetry of the structures. This coupling, associated with carbon, nitrogen, and oxygen atoms in biological molecules, albeit small (meV), can be enhanced by the geometry and by strong local polarization effects such as hydrogen bonding (HB). A novel way to manipulate the spin degree of freedom is to modify the spectrum by coupling to an appropriate electromagnetic radiation field. Here we use the Floquet formalism to show how the half-filled band Hamiltonian for DNA can be modulated by the radiation to produce up to a tenfold increase of the effective SOC, once the intrinsic coupling is present. On the other hand, the chiral model, once it incorporates the orbital angular momentum of electron motion on the helix, opens a gap for different helicity states (helicity splitting) that selects the spin polarization according to transport direction and chirality, without breaking time-reversal symmetry. These effects are feasible within physically reasonable parameter ranges for the radiation field amplitude and frequency.
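As a hedged illustration of the driving mechanism (a standard high-frequency Floquet result for driven tight-binding models, not the authors' specific DNA Hamiltonian), a periodic drive dresses each hopping amplitude by a Bessel factor J0 of the projected drive amplitude. Because kinetic and SOC pathways generally see different geometric projections, their ratio can be strongly enhanced as the drive approaches a zero of J0:

```python
import numpy as np
from scipy.special import j0

# High-frequency Floquet dressing: each hopping t_ij -> t_ij * J0(A_ij),
# where A_ij is the dimensionless drive amplitude projected on the bond.
t_kin = 1.0          # bare kinetic hopping (eV, assumed)
lam_soc = 0.005      # bare SOC scale (a few meV, assumed)
A = np.linspace(0.0, 2.40, 100)  # sweep toward the first zero of J0 (~2.4048)

# 0.3 is an assumed geometric projection factor for the SOC pathway
ratio = (lam_soc * j0(0.3 * A)) / (t_kin * j0(A))
print(f"bare SOC/hopping ratio:     {lam_soc / t_kin:.4f}")
print(f"max dressed ratio in sweep: {ratio.max():.4f}")
```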


Author(s): Gonzalo Marcelo Ramírez-Ávila, Stéphanie Depickère, Imre M. Jánosi, Jason A. C. Gallas

Abstract: Large-scale brain simulations require the investigation of large networks of realistic neuron models, usually represented by sets of differential equations. Here we report a detailed fine-scale study of the dynamical response over extended parameter ranges of a computationally inexpensive model, the two-dimensional Rulkov map, which reproduces well the spiking and spiking-bursting activity of real biological neurons. In addition, we provide evidence of the existence of nested arithmetic progressions among periodic pulsing and bursting phases of Rulkov's neuron. We find that specific, remarkably complex nested sequences of periodic neural oscillations can be expressed as simple linear combinations of pairs of certain basal periodicities. Moreover, such nested progressions are robust and can be observed abundantly in diverse control parameter planes, which are described in detail. We believe such findings add significantly to the knowledge of Rulkov neuron dynamics and are potentially helpful in large-scale simulations of the brain and other complex neuron networks.
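For reference, the two-dimensional Rulkov map is cheap to iterate directly. A minimal sketch (the standard chaotic Rulkov form; the control values alpha, sigma, eta below are illustrative points, not taken from the paper's parameter scans):

```python
import numpy as np

def rulkov(alpha, sigma, eta=0.001, n_steps=5000, x0=-1.0, y0=-3.5):
    """Iterate the 2D Rulkov map:
       x_{n+1} = alpha / (1 + x_n**2) + y_n   (fast, spiking variable)
       y_{n+1} = y_n - eta * (x_n - sigma)    (slow, modulating variable)"""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - eta * (x[n] - sigma)
    return x, y

# One illustrative point in the (alpha, sigma) control plane
x, y = rulkov(alpha=4.5, sigma=-1.2)
print("x range:", x.min(), x.max())  # bursts of spikes separated by quiescence
```

Sweeping alpha and sigma over a grid and recording the period of the resulting pulsing or bursting attractor is how control parameter planes like those described above are mapped out.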


2021
Author(s): Adam Lampert, Raanan Sulitzeanu-Kenan, Pieter Vanhuysse, Markus Tepe

When will self-interested vaccine-rich countries voluntarily donate their surplus vaccines to vaccine-poor countries during a pandemic? We develop a game-theoretic approach to address this question. We identify vaccine-rich countries' optimal surplus donation strategies, and then examine whether these strategies are stable (Nash equilibrium or self-enforcing international agreement). We identify parameter ranges in which full or partial surplus stock donations are optimal for the donor countries. Within a more restrictive parameter region, these optimal strategies are also stable. This implies that, under certain conditions (notably a total amount of surplus vaccines that is sufficiently large), simple coordination can lead to significant donations by strictly self-interested vaccine-rich countries. On the other hand, if the total amount that the countries can donate is small, we expect no contribution from self-interested countries. The results of this analysis provide guidance to policy makers in identifying the circumstances in which coordination efforts are likely to be effective.
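For intuition, here is a hedged toy version of the equilibrium check (the payoff forms, numbers, and discrete strategy grid are my assumptions, not the paper's model): each country picks a donation share, the shared epidemiological benefit is convex in total donated doses, and a profile is a Nash equilibrium if no country gains by a unilateral deviation:

```python
import itertools
import numpy as np

SURPLUS = np.array([10.0, 8.0, 6.0])   # surplus doses per country (assumed)
LEVELS = (0.0, 0.5, 1.0)               # donate none, half, or all of surplus

def payoff(i, shares):
    """Illustrative payoff: a shared benefit convex in total donated doses
    (large totals suppress spread disproportionately) minus the private
    value of doses given away. Both terms are assumptions."""
    donated = np.array(shares) * SURPLUS
    shared = 0.012 * donated.sum() ** 2   # convex shared benefit (assumed)
    private = 0.25 * donated[i]           # forgone stockpile value (assumed)
    return shared - private

def is_nash(shares):
    """No country can gain by unilaterally switching its donation level."""
    for i in range(len(SURPLUS)):
        for alt in LEVELS:
            trial = list(shares)
            trial[i] = alt
            if payoff(i, trial) > payoff(i, shares) + 1e-12:
                return False
    return True

equilibria = [s for s in itertools.product(LEVELS, repeat=3) if is_nash(s)]
print("Nash equilibria (donation shares):", equilibria)
```

With these assumed numbers, both the all-zero and the all-full donation profiles are self-enforcing, illustrating the coordination logic above: when total surplus is large enough, coordination alone can select the high-donation equilibrium among self-interested countries.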


Author(s): Robert Noble, Dominik Burri, Cécile Le Sueur, Jeanne Lemant, Yannick Viossat, ...

Abstract: Characterizing the mode—the way, manner or pattern—of evolution in tumours is important for clinical forecasting and optimizing cancer treatment. Sequencing studies have inferred various modes, including branching, punctuated and neutral evolution, but it is unclear why a particular pattern predominates in any given tumour. Here we propose that tumour architecture is key to explaining the variety of observed genetic patterns. We examine this hypothesis using spatially explicit population genetics models and demonstrate that, within biologically relevant parameter ranges, different spatial structures can generate four tumour evolutionary modes: rapid clonal expansion, progressive diversification, branching evolution and effectively almost neutral evolution. Quantitative indices for describing and classifying these evolutionary modes are presented. Using these indices, we show that our model predictions are consistent with empirical observations for cancer types with corresponding spatial structures. The manner of cell dispersal and the range of cell–cell interactions are found to be essential factors in accurately characterizing, forecasting and controlling tumour evolution.
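As a hedged, heavily simplified sketch of the kind of spatially explicit model involved (an Eden-like lattice growth with neutral mutations; the update rule, rates, and lattice size are my assumptions, not the paper's models): cells divide into empty sites within a dispersal radius and accumulate mutations at division, so short-range dispersal produces spatially segregated clones while long-range dispersal mixes them:

```python
import random
from collections import Counter

random.seed(1)
L, MU = 60, 0.3          # lattice size and mutation parameter (assumed)

def grow(dispersal):
    """Grow a tumour on an LxL lattice; each cell stores its mutation set.
    `dispersal` is the Chebyshev radius a daughter cell may land within."""
    grid = {(L // 2, L // 2): frozenset()}
    next_id = 0
    while len(grid) < L * L // 3:
        (x, y), muts = random.choice(list(grid.items()))
        sites = [(x + dx, y + dy)
                 for dx in range(-dispersal, dispersal + 1)
                 for dy in range(-dispersal, dispersal + 1)
                 if (x + dx, y + dy) not in grid
                 and 0 <= x + dx < L and 0 <= y + dy < L]
        if not sites:
            continue                      # cell has no room to divide
        new_muts = set(muts)
        while random.random() < MU:       # geometric number of new mutations
            new_muts.add(next_id)
            next_id += 1
        grid[random.choice(sites)] = frozenset(new_muts)
    return grid

for r in (1, 5):   # boundary growth vs. longer-range dispersal
    grid = grow(r)
    freqs = Counter(m for muts in grid.values() for m in muts)
    clonal = sum(1 for c in freqs.values() if c > 0.9 * len(grid))
    print(f"dispersal {r}: {clonal} near-clonal mutations of {len(freqs)}")
```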


2021
Author(s): Yana Hrytsenko, Noah M. Daniels, Rachel S. Schwartz

Abstract

Background: Phylogenies enrich our understanding of how genes, genomes, and species evolve. Traditionally, alignment-based methods are used to construct phylogenies from genetic sequence data; however, this process can be time-consuming when analyzing the large amounts of genomic data available today. Additionally, these analyses face challenges due to differences in genome structure and synteny, and due to the need to identify similarities despite repeated substitutions that erode the phylogenetic information contained in the sequence. Alignment-free (AF) approaches using k-mers (short subsequences) can be an efficient alternative because they are indifferent to positional rearrangements within a sequence. However, these approaches may be sensitive to k-mer length and to the distance between samples.

Results: In this paper, we analyzed the sensitivity of an AF approach based on k-mer frequencies to these challenges, using cosine and Euclidean distance metrics for both assembled genomes and unassembled sequencing reads. Quantifying the sensitivity of this AF approach to branch length and k-mer length provides a better understanding of the parameter ranges necessary for accurate phylogeny reconstruction. Our results show that a frequency-based AF approach can reconstruct phylogenies accurately from whole genomes, but not from stochastically sequenced reads, so long as longer k-mers are used.

Conclusions: In this study, we have shown that an AF approach for phylogeny reconstruction is robust when analyzing assembled genome data across a range of numbers of substitutions, provided longer k-mers are used. Using simulated Illumina reads randomly selected from the genome had a detrimental effect on phylogeny estimation. Additionally, filtering out infrequent k-mers improved the computational efficiency of the method while preserving the accuracy of the results, suggesting the feasibility of using only a subset of the data to improve computational efficiency when large sets of genome-scale data are analyzed.
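A minimal sketch of the core of such a frequency-based AF comparison (my own illustration, not the authors' implementation; toy random sequences stand in for genomes): count k-mer frequencies, then compare the frequency vectors under cosine and Euclidean distance:

```python
import math
import random
from collections import Counter

def kmer_freqs(seq, k):
    """Relative k-mer frequencies of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def af_distances(seq_a, seq_b, k):
    """Cosine and Euclidean distances between k-mer frequency vectors."""
    fa, fb = kmer_freqs(seq_a, k), kmer_freqs(seq_b, k)
    keys = sorted(set(fa) | set(fb))
    va = [fa.get(x, 0.0) for x in keys]
    vb = [fb.get(x, 0.0) for x in keys]
    dot = sum(a * b for a, b in zip(va, vb))
    norm = math.sqrt(sum(a * a for a in va)) * math.sqrt(sum(b * b for b in vb))
    cosine = 1.0 - dot / norm
    euclidean = math.sqrt(sum((a - b) ** 2 for a, b in zip(va, vb)))
    return cosine, euclidean

# Toy genomes: a random sequence and a copy with ~1% substitutions
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(20_000))
mutated = "".join(c if random.random() > 0.01 else random.choice("ACGT")
                  for c in genome)
for k in (4, 8, 12):  # per the paper, longer k-mers sharpen the signal
    cos, euc = af_distances(genome, mutated, k)
    print(f"k={k:>2}: cosine={cos:.4f}, euclidean={euc:.4f}")
```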


Energies, 2021, Vol. 14 (24), pp. 8527
Author(s): Marica Eboli, Francesco Galleni, Nicola Forgione, Nicolò Badodi, Antonio Cammi, ...

The in-box LOCA (Loss of Coolant Accident) represents a major safety concern to be addressed in the design of the WCLL-BB (water-cooled lead-lithium breeding blanket). Research activities are ongoing to master the phenomena and processes that occur during the postulated accident, to enhance the predictive capability and reliability of numerical tools, and to validate computer models, codes, and procedures for their applications. Following these objectives, ENEA designed and built the new separate-effects test facility LIFUS5/Mod3. Two experimental campaigns (Series D and Series E) were executed by injecting water at high pressure into a pool of PbLi in WCLL-BB-relevant parameter ranges. The obtained experimental data were used to check the capabilities of the RELAP5 system code to reproduce the pressure transient of a water system, to validate the chemical model of PbLi/water reactions implemented in the modified version of the SIMMER codes for fusion applications, to investigate the dynamic effects of energy release on the structures, and to provide relevant feedback for the follow-up experimental campaigns. This work presents the experimental data and the numerical simulations of Test E4.1. The results of the test are presented and critically discussed. The code simulations highlight that the SIMMER code is able to reproduce the phenomena connected to PbLi/water interaction, and that the relevant test parameters are in agreement with the acquired experimental signals. Moreover, the results obtained from the first approach to SIMMER-RELAP5 code coupling demonstrate its capability and strength in predicting the transient scenario in complex geometries, considering multiple physical phenomena while minimizing the computational cost.


2021, pp. 108128652110576
Author(s): Julian Karl Bauer, Thomas Böhlke

Fiber orientation tensors are established descriptors of fiber orientation states in (thermo-)mechanical material models for fiber-reinforced composites. In this paper, the variety of fourth-order orientation tensors is analyzed and specified by parameterizations and admissible parameter ranges. The combination of parameterizations and admissible parameter ranges allows for studies on the mechanical response of different fiber architectures. A linear invariant decomposition with focus on index symmetry leads to a novel compact hierarchical parameterization, which highlights the central role of the isotropic state. Deviation from the isotropic state is given by a triclinic harmonic tensor with simplified structure in the orientation coordinate system, which is spanned by the eigenvectors of the second-order orientation tensor. Material symmetries reduce the number of independent parameters. The requirement of positive semi-definiteness defines admissible ranges of the independent parameters. Admissible parameter ranges for the transversely isotropic and planar cases are given in compact closed form, and the orthotropic variety is visualized and discussed in detail. Sets of discrete unit vectors leading to selected orientation states are given.
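As a hedged numerical companion (my construction from the standard definition of orientation tensors, not the paper's parameterization): the fourth-order orientation tensor of a set of unit vectors is the average fourth dyadic power, and positive semi-definiteness can be checked on the eigenvalues of its symmetric 6x6 Mandel matrix representation:

```python
import numpy as np

PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
W = np.array([1.0, 1.0, 1.0, np.sqrt(2), np.sqrt(2), np.sqrt(2)])

def orientation_tensor4(vectors):
    """Fourth-order orientation tensor <n x n x n x n> of unit vectors."""
    n = np.asarray(vectors, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    return np.einsum("ai,aj,ak,al->ijkl", n, n, n, n) / len(n)

def mandel6(A4):
    """Symmetric 6x6 Mandel representation of a fourth-order tensor."""
    M = np.empty((6, 6))
    for I, (i, j) in enumerate(PAIRS):
        for J, (k, l) in enumerate(PAIRS):
            M[I, J] = W[I] * W[J] * A4[i, j, k, l]
    return M

# Isotropic-ish cloud of directions vs. a single fiber direction
rng = np.random.default_rng(0)
iso = rng.normal(size=(5000, 3))
uni = np.tile([1.0, 0.0, 0.0], (10, 1))
for name, vecs in (("isotropic", iso), ("unidirectional", uni)):
    eig = np.linalg.eigvalsh(mandel6(orientation_tensor4(vecs)))
    # PSD iff min eigenvalue >= 0 (up to floating-point round-off)
    print(f"{name}: min eigenvalue {eig.min():+.4f}")
```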


2021
Author(s): Ryan Santoso, Xupeng He, Marwa Alsinan, Ruben Figueroa Hernandez, Hyung Kwak, ...

Abstract: History matching is a critical step within the reservoir management process to synchronize the simulation model with the production data. The history-matched model can be used for planning optimum field development and for performing optimization and uncertainty quantification. We present a novel history matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three different model resolutions within the Bayesian framework: 1) a coarse low-fidelity model to update the prior range, 2) a fine low-fidelity model to represent the high-fidelity model, and 3) a high-fidelity model to reconstruct the real response. The low-fidelity model is constructed from a multivariate polynomial function, while the high-fidelity model is based on the reservoir simulation model. First, we develop a coarse low-fidelity model using a two-level Design of Experiment (DoE), which aims to provide a better prior. Second, we use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model to be deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality.

This work demonstrates the importance of including uncertainties in history matching. The Bayesian framework provides a robust way to quantify uncertainty within reservoir history matching. Under a uniform prior, the convergence of the Bayesian inference is very sensitive to the parameter ranges: when the solution is far from the mean of the parameter ranges, the inference becomes biased and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates the Bayesian convergence and improves the matching quality. Bayesian inference requires a huge number of runs to produce an accurate posterior, and running the high-fidelity model multiple times is expensive. Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs. This fine low-fidelity model is fast to run, while it honors the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the resulting ranking highlights the significant parameters that contribute most to the matching accuracy.

We demonstrate our workflow on a geothermal reservoir with static and operational uncertainties. The workflow produces an accurate match of the thermal recovery factor and produced-enthalpy rate with physically consistent posteriors. In summary, we present a novel workflow that accounts for uncertainty in reservoir history matching through multi-resolution interaction. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
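A hedged skeleton of the central sampling loop (the surrogate form, prior box, and data values are placeholders, not the paper's geothermal model): Metropolis-Hastings runs against a cheap polynomial surrogate, with the uniform prior box assumed to have been narrowed beforehand by the coarse DoE stage:

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate(theta):
    """Placeholder multivariate polynomial response (assumed form),
    standing in for the fine low-fidelity model."""
    a, b = theta
    return 2.0 * a + 0.5 * b ** 2 - 0.3 * a * b

OBS, NOISE = 1.7, 0.05             # observed response and its std (assumed)
LO = np.array([0.0, -1.0])         # prior box, e.g. already narrowed
HI = np.array([2.0, 2.0])          # by the coarse-DoE stage

def log_post(theta):
    """Gaussian likelihood under a uniform prior on the updated box."""
    if np.any(theta < LO) or np.any(theta > HI):
        return -np.inf
    return -0.5 * ((surrogate(theta) - OBS) / NOISE) ** 2

# Metropolis-Hastings over the surrogate
theta = (LO + HI) / 2
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.1, size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
post = np.array(samples[5000:])    # drop burn-in
print("posterior mean:", post.mean(axis=0))
# Per the workflow above, the posterior would then be fed through the
# high-fidelity simulator to confirm matching quality.
```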


Author(s): Chao Sun, Thomas Espitau, Mehdi Tibouchi, Masayuki Abe

The lattice reduction attack on (EC)DSA (and other Schnorr-like signature schemes) with partially known nonces, originally due to Howgrave-Graham and Smart, has been at the core of many concrete cryptanalytic works, side-channel based or otherwise, over the past 20 years. The attack itself has seen limited development, however: improved analyses have been carried out, and the use of stronger lattice reduction algorithms has pushed the range of practically vulnerable parameters further, but the lattice construction based on the signatures and known nonce bits remains the same.

In this paper, we propose a new idea to improve the attack based on the same data in exchange for additional computation: carry out an exhaustive search on some bits of the secret key. This turns the problem from a single bounded distance decoding (BDD) instance in a certain lattice into multiple BDD instances in a fixed lattice of larger volume but with the same bound, making the BDD problem substantially easier. Furthermore, the fact that the lattice is fixed lets us use batch/preprocessing variants of BDD solvers that are far more efficient than repeated lattice reductions on non-preprocessed lattices of the same size. As a result, our analysis suggests that our technique is competitive with or outperforms the state of the art for parameter ranges at the limit of what has been achievable using lattice attacks so far (around 2-bit leakage on 160-bit groups, or 3-bit leakage on 256-bit groups).

We also show that variants of this idea can be applied to bits of the nonces (leading to a similar improvement) or to filtering the signature data (leading to a data-time trade-off for the lattice attack). Finally, we use our technique to obtain an improved exploitation of the TPM-FAIL dataset, similar to what was achieved in the Minerva attack.
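A hedged structural sketch of the key-bit guessing idea in a toy hidden number problem (HNP) framing: `bdd_solve` is a hypothetical placeholder (a stand-in for a batch/preprocessed BDD solver, not a real library call), and the basis shown is schematic, omitting the scaling and embedding details of real constructions. The point is only that each guess of the key's top bits shifts the BDD targets while the lattice stays fixed:

```python
# Toy HNP: k_i = t_i * x + u_i (mod q), with each nonce k_i known to be
# small. Writing x = g * 2**s + x0 for a guess g of the top m key bits
# shifts every BDD target but leaves the lattice basis unchanged.
q = 2**31 - 1                          # toy prime modulus (assumption)
key_bits, m = 20, 4                    # toy key size and guessed top bits
s = key_bits - m

t = [123456789, 987654321, 192837465]  # toy HNP coefficients (assumption)
u = [111111111, 222222222, 333333333]  # toy HNP offsets (assumption)

def targets_for_guess(g):
    """BDD targets after folding the guess g of the key's top bits in:
    k_i - (u_i + t_i * g * 2**s) = t_i * x0 (mod q)."""
    return [(u_i + t_i * g * 2**s) % q for t_i, u_i in zip(t, u)]

def bdd_solve(basis, target):          # hypothetical placeholder solver
    raise NotImplementedError("stand-in for a batch/preprocessed BDD solver")

# One fixed lattice basis (q-relation rows plus the coefficient row), so
# reduction/preprocessing would happen once, outside the loop over guesses.
basis = [[q if i == j else 0 for j in range(len(t))] for i in range(len(t))]
basis.append(list(t))                  # schematic; real constructions also
                                       # carry scaling/embedding columns

for g in range(2**m):                  # exhaustive search over top key bits
    shifted = targets_for_guess(g)
    # candidate = bdd_solve(basis, shifted)   # same lattice, new target
    # ...verify the candidate key against a known signature, stop on success
```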

