true solution
Recently Published Documents

TOTAL DOCUMENTS: 169 (FIVE YEARS: 48)
H-INDEX: 21 (FIVE YEARS: 2)

2022 ◽  
pp. 86-95

A system for ensuring the convertibility of a currency into specified commodities is also, ipso facto, a system for stabilizing the prices of those commodities in terms of the currency in question. This connection is widely ignored in discussions of these two subjects, but it ties the two specialised fields of monetary economics and commodity price stabilization tightly together. Unfortunately, despite much work on the topic spanning many decades, almost all of it has been conducted within a single paradigm: that of establishing an international institution to stabilize commodity prices. However, for a number of reasons, no international agreement can achieve more than a very partial solution to this problem; most importantly, it cannot directly stabilize more than a single currency, thereby losing the most fundamental benefit of a true solution for all but one of the participating countries. A different approach is therefore needed.


2021 ◽  
Vol 8 (4) ◽  
pp. 1-19
Author(s):  
Xuejiao Kang ◽  
David F. Gleich ◽  
Ahmed Sameh ◽  
Ananth Grama

As parallel and distributed systems scale, fault tolerance is an increasingly important problem—particularly on systems with limited I/O capacity and bandwidth. Erasure coded computations address this problem by augmenting a given problem instance with redundant data and then solving the augmented problem in a fault oblivious manner in a faulty parallel environment. In the event of faults, a computationally inexpensive procedure is used to compute the true solution from a potentially fault-prone solution. These techniques are significantly more efficient than conventional solutions to the fault tolerance problem. In this article, we show how we can minimize, to optimality, the overhead associated with our problem augmentation techniques for linear system solvers. Specifically, we present a technique that adaptively augments the problem only when faults are detected. At any point in execution, we only solve a system whose size is identical to the original input system. This has several advantages in terms of maintaining the size and conditioning of the system, as well as in only adding the minimal amount of computation needed to tolerate observed faults. We present, in detail, the augmentation process, the parallel formulation, and evaluation of performance of our technique. Specifically, we show that the proposed adaptive fault tolerance mechanism has minimal overhead in terms of FLOP counts with respect to the original solver executing in a non-faulty environment, has good convergence properties, and yields excellent parallel performance. We also demonstrate that our approach significantly outperforms an optimized application-level checkpointing scheme that only checkpoints needed data structures.
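The core idea can be illustrated with a minimal toy sketch (not the authors' algorithm): redundant equations are appended as random combinations of the originals, so that after a fault erases some rows, the surviving rows still determine the true solution. The system size, the encoding matrix C, and the least-squares recovery are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 10                                    # system size, redundant rows (assumed)
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

# Encode: append r redundant equations, each a random combination of the originals.
C = rng.standard_normal((r, n))
A_aug = np.vstack([A, C @ A])
b_aug = np.concatenate([b, C @ b])

# Simulate faults: a few rows of the augmented system are lost.
lost = rng.choice(n + r, size=8, replace=False)
keep = np.setdiff1d(np.arange(n + r), lost)

# Recover: the surviving rows still (generically) determine x exactly.
x_rec, *_ = np.linalg.lstsq(A_aug[keep], b_aug[keep], rcond=None)
print(np.linalg.norm(x_rec - x_true))            # ~ machine precision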


Author(s):  
Jordan E. DeVylder ◽  
Deidre M. Anglin ◽  
Lisa Bowleg ◽  
Lisa Fedina ◽  
Bruce G. Link

Despite their enormous potential impact on population health and health inequities, police violence and the use of excessive force have only recently been addressed from a public health perspective. Moving to change this state of affairs, this article considers police violence in the USA within a social determinants and health disparities framework, highlighting recent literature linking this exposure to mental health symptoms, physical health conditions, and premature mortality. The review demonstrates that police violence is common in the USA; is disproportionately directed toward Black, Latinx, and other marginalized communities; and exerts a significant and adverse effect on a broad range of health outcomes. The state-sponsored nature of police violence, its embedding within a historical and contemporary context of structural racism, and the unique circumstances of the exposure itself make it an especially salient and impactful form of violence exposure, both overlapping with and distinct from other forms of violence. We conclude by noting potential solutions that clinical psychology and allied fields may offer to alleviate the impact of police violence, while simultaneously recognizing that a true solution to this issue requires a drastic reformation or replacement of the criminal justice system, as well as addressing the broader context of structural and systemic racism in the USA. Expected final online publication date for the Annual Review of Clinical Psychology, Volume 18 is May 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


2021 ◽  
Author(s):  
Vincent Wagner ◽  
Benjamin Castellaz ◽  
Marco Oesting ◽  
Nicole Radde

Motivation: The Chemical Master Equation is the most comprehensive stochastic approach to describing the evolution of a (bio-)chemical reaction system. Its solution is a time-dependent probability distribution over all possible configurations of the system. As the number of possible configurations is typically very large, the Master Equation is often practically unsolvable. The Method of Moments reduces the system to the evolution of a few moments of this distribution, which are described by a system of ordinary differential equations. These equations are not closed, since the equations for lower-order moments generally depend on higher-order moments. Various closure schemes have been suggested to solve this problem, with different advantages and limitations. Two major problems with these approaches are, first, that they are open-loop systems which can diverge from the true solution, and second, that some of them are computationally expensive.

Results: Here we introduce Quasi-Entropy Closure, a moment closure scheme for the Method of Moments which estimates higher-order moments by reconstructing the distribution that minimizes the distance to a uniform distribution subject to lower-order moment constraints. Quasi-Entropy Closure is similar to Zero-Information Closure, which maximizes the information entropy. Results show that both approaches outperform truncation schemes. Moreover, Quasi-Entropy Closure is computationally much faster than Zero-Information Closure. Finally, our scheme includes a plausibility check for the existence of a distribution satisfying a given set of moments on the feasible set of configurations. Results are evaluated on different benchmark problems.
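A minimal sketch of the moment-closure idea (using the standard normal closure rather than the authors' Quasi-Entropy Closure) for the dimerization reaction 2X → ∅ with rate constant c; the rate value and initial condition are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

c = 0.01  # assumed rate constant

def moment_odes(t, m):
    m1, m2 = m                               # first two raw moments E[X], E[X^2]
    # Normal (Gaussian) closure: set the third central moment to zero,
    # i.e. express the third raw moment through m1 and m2.
    m3 = 3.0 * m1 * m2 - 2.0 * m1**3
    dm1 = -c * (m2 - m1)                     # dE[X]/dt for 2X -> 0
    dm2 = -2.0 * c * (m3 - 2.0 * m2 + m1)    # dE[X^2]/dt involves m3: not closed
    return [dm1, dm2]

x0 = 100.0                                   # assumed initial copy number
sol = solve_ivp(moment_odes, (0.0, 10.0), [x0, x0**2])
mean = sol.y[0, -1]
var = sol.y[1, -1] - mean**2
print(mean, var)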


2021 ◽  
Vol 2086 (1) ◽  
pp. 012194
Author(s):  
V N Mironyuk ◽  
A J R Al-Hassani ◽  
A J K Al-Alwani ◽  
N N Begletsova ◽  
M V Gavrikov ◽  
...  

Abstract The article presents a theoretical molecular dynamics study of the interaction between a pair of porphyrin molecules: symmetrically substituted 5,10,15,20-tetra(4-n-methyloxyphenyl)porphyrin (P) and asymmetrically substituted 5-(4-hydroxyphenyl)-10,15,20-tris(4-n-methyloxyphenyl)porphyrin (P-OH). We studied three systems, each consisting of a pair of porphyrin molecules (P || P, P-OH ↑↑ P-OH and P-OH ↑↓ P-OH) and chloroform molecules as a non-polar solvent. The effects of substitution, of the different orientations of the asymmetrically substituted molecules, and of temperature on the geometry and energy of the system were investigated. All three systems showed signs of a true solution with chloroform as a solvent, and the distance between asymmetrically substituted P-OH molecules was smaller than in the case of two P molecules. This may serve as indirect evidence that the molecules are not prone to aggregation in the presence of chloroform.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Dilek Erkmen ◽  
Alexander E. Labovsky

Abstract We propose and investigate two regularization models for fluid flows at higher Reynolds numbers. Both models are based on the reduced ADM regularization (RADM). One model, which we call DC-RADM (deferred correction for reduced approximate deconvolution model), aims to improve the temporal accuracy of the RADM. The second model, denoted by RADC (reduced approximate deconvolution with correction), is created with a more systematic approach: we treat the RADM regularization as a defect in approximating the true solution of the Navier–Stokes equations (NSE) and then correct for this defect using the defect correction algorithm. Thus, the resulting RADC model can be viewed as the first member of the class that we call “LESC-reduced”, where one starts with a regularization that resembles a Large Eddy Simulation turbulence model and then improves it with a defect correction technique. Both models are investigated theoretically and numerically, and the RADC is shown to outperform the DC-RADM model both in terms of convergence rates and in terms of the quality of the produced solution.
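The defect-correction idea behind the RADC model can be illustrated generically (a sketch with assumed operators, not the RADC itself): solve cheaply with an approximate regularized operator, evaluate the defect against the true equations, and correct.

import numpy as np

rng = np.random.default_rng(1)
n = 40
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # "true" operator (assumed test case)
A_reg = np.eye(n)                                  # cheap regularized approximation
b = rng.standard_normal(n)

u = np.zeros(n)
for _ in range(20):
    defect = b - A @ u                     # residual of the true equations
    u += np.linalg.solve(A_reg, defect)    # correct using the cheap operator
print(np.linalg.norm(b - A @ u))           # defect shrinks geometrically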


2021 ◽  
Vol 11 (11) ◽  
pp. 1450
Author(s):  
Till A. Dembek ◽  
Alexandra Hellerbach ◽  
Hannah Jergas ◽  
Markus Eichner ◽  
Jochen Wirths ◽  
...  

Directional deep brain stimulation (DBS) leads are now widely used, but the orientation of directional leads needs to be taken into account when relating DBS to neuroanatomy. Methods that can reliably and unambiguously determine the orientation of directional DBS leads are therefore needed. In this study, we provide an enhanced algorithm that determines the orientation of directional DBS leads from postoperative CT scans. To resolve the ambiguity of symmetric CT artifacts, which in the past limited orientation detection to two possible solutions, we retrospectively evaluated four different methods in 150 Cartesia™ directional leads, for which the true solution was known from additional X-ray images. The method based on shifts of the center of mass (COM) of the directional marker relative to its expected geometric center correctly resolved the ambiguity in 100% of cases. In conclusion, the DiODe v2 algorithm provides an open-source, fully automated solution for determining the orientation of directional DBS leads.
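A toy sketch of the centre-of-mass disambiguation idea (not the DiODe v2 implementation): given two candidate orientations 180° apart, choose the one whose direction best aligns with the shift of the artifact's COM from the geometric centre. The thresholding rule and 2-D slice representation are illustrative assumptions.

import numpy as np

def resolve_orientation(slice_img, center_xy, cand_angles_deg):
    # Weighted centre of mass of the bright marker artifact (assumed threshold).
    ys, xs = np.nonzero(slice_img > slice_img.mean())
    w = slice_img[ys, xs].astype(float)
    com = np.array([(xs * w).sum(), (ys * w).sum()]) / w.sum()
    shift = com - np.asarray(center_xy, dtype=float)    # COM offset vector

    # Pick the candidate whose direction has the largest projection onto the shift.
    def proj(angle_deg):
        rad = np.radians(angle_deg)
        return shift @ np.array([np.cos(rad), np.sin(rad)])

    return max(cand_angles_deg, key=proj)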


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2006
Author(s):  
Qi Luo ◽  
Shijian Lin ◽  
Hongxia Wang

Phase retrieval is a classical inverse problem: recovering a signal from a system of phaseless constraints. Many recently proposed methods for phase retrieval, such as PhaseMax and gradient-descent algorithms, enjoy benign theoretical guarantees on the condition that an elaborate estimate of the true solution is provided. Current initialization methods do not perform well when the number of measurements is low, which deteriorates the success rate of current phase retrieval methods. We propose a new initialization method, combining the advantages of the null vector method and the maximal correlation method, that obtains an estimate of the original signal with uniformly higher accuracy. The constructed spectral matrix for the proposed initialization method has a simple and symmetrical form. A lower error bound is proved theoretically and verified numerically.
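For context, a minimal sketch of a classic spectral initializer (not the authors' combined null-vector/maximal-correlation method): the leading eigenvector of a measurement-weighted covariance matrix serves as the initial estimate. The sizes and the Gaussian measurement model are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 512                          # signal length, number of measurements
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))         # Gaussian sensing vectors (rows)
y = (A @ x) ** 2                        # phaseless (intensity) measurements

Y = (A.T * y) @ A / m                   # (1/m) * sum_i y_i a_i a_i^T
w, V = np.linalg.eigh(Y)
x0 = V[:, -1]                           # leading eigenvector = initial estimate
print(abs(x0 @ x))                      # correlation with truth (sign-ambiguous)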


2021 ◽  
Author(s):  
Georgy I. Shapiro ◽  
Jose M. Gonzalez-Ondina

Abstract. An effective and computationally efficient method is presented for data assimilation in a high-resolution (child) ocean model that is nested into a good-quality, coarse-resolution, data-assimilating (parent) model. The method, named Data Assimilation with Stochastic-Deterministic Downscaling (SDDA), reduces the bias and root mean square errors (RMSE) of the child model and does not allow the child model to drift away from reality. The basic idea is to assimilate data from the parent model instead of actual observations; in this way, the child model is physically aware of the observations via the parent model. The method avoids the complex process of re-assimilating the same observations that were already assimilated into the parent model. It consists of two stages: (1) downscaling the parent model output onto the child model grid using stochastic-deterministic downscaling, and (2) applying a simplified Kalman gain formula at each fine-grid node. The method is illustrated in a synthetic case where the true solution is known and the child model forecast (before data assimilation) is simulated by adding various types of errors. The SDDA method reduces the child model bias to the same level as in the parent model and reduces the RMSE typically by a factor of 2 to 5.
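The per-node update in stage (2) can be sketched as a scalar Kalman filter applied independently at each fine-grid node (a simplified illustration; the error variances are assumed inputs, not the paper's exact formula).

import numpy as np

def sdda_update(child, parent_on_child_grid, var_child, var_parent):
    # Analysis = forecast + K * (pseudo-observation - forecast),
    # with a scalar Kalman gain K at every fine-grid node.
    K = var_child / (var_child + var_parent)
    return child + K * (parent_on_child_grid - child)

# Example: the gain pulls the child field toward the regridded parent field.
child = np.array([15.0, 15.5, 16.0])      # child-model forecast (assumed values)
parent = np.array([14.2, 14.8, 15.1])     # parent values on the child grid
print(sdda_update(child, parent, var_child=1.0, var_parent=0.25))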


Ocean Science ◽  
2021 ◽  
Vol 17 (4) ◽  
pp. 891-907
Author(s):  
Georgy I. Shapiro ◽  
Jose M. Gonzalez-Ondina ◽  
Vladimir N. Belokopytov

Abstract. High-resolution modelling of a large ocean domain requires significant computational resources. The main purpose of this study is to develop an efficient tool for downscaling lower-resolution data such as those available from the Copernicus Marine Environment Monitoring Service (CMEMS). Common methods of downscaling CMEMS ocean models use their lower-resolution output as boundary conditions for local, higher-resolution hydrodynamic ocean models. Such methods reveal greater detail in the spatial distribution of ocean variables; however, they increase the cost of computation and often reduce model skill due to the so-called “double penalty” effect, a common problem for high-resolution models in which predicted features are displaced in space or time. This paper presents a stochastic–deterministic downscaling (SDD) method, an efficient tool for downscaling ocean models based on a combination of deterministic and stochastic approaches. The ability of the SDD method is first demonstrated in an idealised case where the true solution is known a priori. The method is then applied to create an operational Stochastic Model of the Red Sea (SMORS), with the parent model being the Mercator Global Ocean Analysis and Forecast System at 1/12° resolution. The stochastic component of the model is data-driven rather than equation-driven, and it is applied over areas smaller than the Rossby radius, within which distributions of ocean variables are more coherent than over larger distances. The method, based on objective analysis, is similar to that used for data assimilation in ocean models and stems from the philosophy of 2-D turbulence. SMORS produces finer-resolution (1/24° latitude mesh) oceanographic data using the output from a coarser-resolution (1/12° mesh) parent model available from CMEMS. The values on the fine-resolution mesh are computed by minimising a cost function that represents the error between the model and the true solution. SMORS has been validated against sea surface temperature and Argo float observations. Comparisons show that the model and observations are in good agreement and that SMORS is not subject to the “double penalty” effect. SMORS is very fast to run on a typical desktop PC and can be relocated to another area of the ocean.
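The objective-analysis step can be illustrated with a toy optimal-interpolation downscaler (a sketch under assumed Gaussian covariances; the length scale L and noise level are illustrative, not the paper's settings).

import numpy as np

def oi_downscale(coarse_xy, coarse_vals, fine_xy, L=25.0, noise=0.1):
    # Gaussian spatial covariance between two sets of points (distances in km).
    def cov(p, q):
        d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * L**2))
    B = cov(coarse_xy, coarse_xy) + noise * np.eye(len(coarse_xy))
    W = np.linalg.solve(B, cov(coarse_xy, fine_xy))   # OI weight matrix
    return W.T @ coarse_vals                          # fine-grid estimates

# Example: four coarse nodes, one fine node between them.
coarse_xy = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
fine_xy = np.array([[25.0, 25.0]])
print(oi_downscale(coarse_xy, np.array([20.0, 21.0, 19.5, 20.5]), fine_xy))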

