sufficient precision
Recently Published Documents


TOTAL DOCUMENTS: 126 (FIVE YEARS 29)

H-INDEX: 13 (FIVE YEARS 1)

Author(s):  
Jamie Levine Daniel ◽  
Fredrik Andersson

The question of when a new nonprofit is founded has not been pursued with sufficient precision. Specifically, a fundamental challenge facing any nonprofit researcher planning to detect, isolate, and analyze new nonprofits is that nonprofit founding is a process, not a discrete event. This study uses administrative data that include three different founding indicators from more than 4,000 arts organizations, supplemented with survey data from 242 organizations, to illustrate some of the problems inherent in treating the founding process as a single discrete event. It also elevates the voices of founders to show how they conceptualize founding and to offer insights into its multidimensionality.


Author(s):  
Roman Eremciuc ◽  
Octavian Bivol ◽  
...  

In some cases, the entry into force of legislative additions and amendments to the Criminal Code may raise uncertainties regarding the interpretation and application of the criminal law. In this regard, the will of the legislator must be expressed in a manner that meets the recognized criteria of quality of the law, namely accessibility, predictability and clarity. In particular, a rule of criminal law must be worded with sufficient precision to enable a person to decide on his or her conduct and to reasonably foresee, in the light of the circumstances of the case, the consequences of such conduct. In this article, the authors set out to interpret the last sentence of art. 123 para. (2) of the Criminal Code, that is, to determine whether the members of the governing body and the persons holding key positions in a bank can be considered persons authorized or invested by the state to provide public services on its behalf or to perform activities of public interest, and thus whether they can be considered public persons within the meaning of art. 123 para. (2) of the Criminal Code, given that their candidatures must be approved by the National Bank of Moldova before they take up their responsibilities, or whether they are instead to be considered persons managing a commercial organization within the meaning of art. 124 of the Criminal Code.


Author(s):  
Joseph Cabeza Lainez

Unlike the volume, the expression for the lateral area of a regular conoid has not yet been obtained by means of direct integration or a differential geometry procedure. As this form is relatively common in engineering, the inability to determine its surface area represents a serious hindrance for several problems that arise in radiative transfer, lighting and construction, to cite just a few. Since this particular shape can be conceived as a set of linearly dwindling ellipses that remain parallel to a circular directrix, a typical problem appears when looking for the length of such ellipses. We conceived a new procedure that, in principle, consists of dividing the surface into infinitesimal elliptic strips, to which we subsequently apply Ramanujan’s second approximation for the perimeter of an ellipse. In this fashion, we can obtain the perimeter of any ellipse pertaining to the said form as a function of the radius of the directrix and the position of the ellipse’s center on the X-axis. Integrating the so-found perimeters of the differential strips over the whole span of the conoid, an unexpected solution emerges through the newly found number psi (ψ). As the strips are slanted with respect to the symmetry axis, their width is not uniform, and some adjustments are needed to complete the problem with sufficient precision. Relevant implications for technology, building science, radiation and structure are derived in the ensuing discussion.
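As a rough illustration of the strip approach described above, the following Python sketch approximates the lateral area of a conoid by summing the perimeters of thin elliptic strips, each evaluated with Ramanujan's second approximation. The assumed geometry (circular directrix of radius R at x = 0, minor semi-axis shrinking linearly to zero at x = L) and the uniform strip width are simplifying assumptions, not the authors' exact formulation, which additionally corrects for the non-uniform width of the slanted strips.

```python
import math

def ellipse_perimeter_ramanujan2(a: float, b: float) -> float:
    """Ramanujan's second approximation to the perimeter of an ellipse
    with semi-axes a and b."""
    if a == 0.0 and b == 0.0:
        return 0.0
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

def conoid_lateral_area(R: float, L: float, n_strips: int = 100_000) -> float:
    """Rough lateral-area estimate: sum of elliptic strip perimeters times a
    uniform strip width dx (assumed geometry: a(x) = R, b(x) = R * (1 - x/L))."""
    dx = L / n_strips
    area = 0.0
    for i in range(n_strips):
        x = (i + 0.5) * dx            # midpoint of the strip
        b = R * (1.0 - x / L)         # linearly dwindling semi-axis
        area += ellipse_perimeter_ramanujan2(R, b) * dx
    return area

if __name__ == "__main__":
    print(conoid_lateral_area(R=1.0, L=1.0))   # illustrative run
```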


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Josu Etxezarreta Martinez ◽  
Patricio Fuentes ◽  
Pedro Crespo ◽  
Javier Garcia-Frias

Abstract The decoherence effects experienced by the qubits of a quantum processor are generally characterized using the amplitude damping time (T1) and the dephasing time (T2). Quantum channel models that exist at the time of writing assume that these parameters are fixed and invariant. However, recent experimental studies have shown that they exhibit a time-varying (TV) behaviour. These time-dependent fluctuations of T1 and T2, which become even more pronounced in the case of superconducting qubits, imply that conventional static quantum channel models do not capture the noise dynamics experienced by realistic qubits with sufficient precision. In this article, we study how the fluctuations of T1 and T2 can be included in quantum channel models. We propose the idea of time-varying quantum channel (TVQC) models, and we show how they provide a more realistic portrayal of decoherence effects than static models in some instances. We also discuss the divergence that exists between TVQCs and their static counterparts by means of a metric known as the diamond norm. In many circumstances this divergence can be significant, which indicates that the time-dependent nature of decoherence must be considered in order to construct models that capture the real nature of quantum devices.
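As a minimal sketch of how fluctuating T1 and T2 can enter a channel model, the code below builds the Kraus operators of a combined amplitude-and-phase-damping channel from T1, T2 and a duration t, then resamples T1 and T2 per realization. The Gaussian fluctuation model and all numerical values are assumptions for illustration; the article's TVQC construction models the underlying stochastic processes in more detail.

```python
import numpy as np

def apd_kraus(t: float, T1: float, T2: float):
    """Kraus operators of a combined amplitude-and-phase-damping channel for a
    duration t, given relaxation time T1 and dephasing time T2 (T2 <= 2*T1)."""
    gamma = 1.0 - np.exp(-t / T1)                   # population relaxation
    lam = np.exp(-t / T1) - np.exp(-2.0 * t / T2)   # extra coherence loss
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma - lam)]])
    K1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    K2 = np.array([[0.0, 0.0], [0.0, np.sqrt(lam)]])
    return [K0, K1, K2]

def apply_channel(rho: np.ndarray, kraus) -> np.ndarray:
    """Apply a channel given by a list of Kraus operators to a density matrix."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Illustrative time-varying behaviour: resample T1 and T2 for each realization.
rng = np.random.default_rng(0)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
for _ in range(5):
    T1 = rng.normal(100e-6, 10e-6)                 # seconds, hypothetical values
    T2 = min(rng.normal(80e-6, 10e-6), 2 * T1)     # keep T2 <= 2*T1
    out = apply_channel(rho, apd_kraus(t=1e-6, T1=T1, T2=T2))
    print(np.round(out, 4))
```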


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2917
Author(s):  
Mateusz Turkowski ◽  
Artur Szczecki ◽  
Maciej Szudarek ◽  
Krzysztof Janiszowski

In previous works, a non-linear equation describing the transient behaviour of variable area (VA) flowmeters was presented. Although the full nonlinear equation gives accurate results, its use can be difficult and time-consuming, and it requires specific software and expertise. The goal of this paper was to simplify the existing model so that it could be used in applications where ease of use and ease of implementation are more important than accuracy. The existing model was linearized, and simple formulae describing the natural frequency and damping coefficient were derived. With these parameters, it is possible to assess the dynamic properties of a variable area flowmeter: the form of the step response can be identified, and the natural frequency and settling time can be estimated. The linearized model and the experiment were in reasonable agreement. The step response type was captured correctly for each of the six VA meter types. The error in the undamped natural frequency was not larger than 15%, which means that the VA meter sensor’s dynamic properties can be predicted at the design stage with sufficient precision.
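The abstract does not reproduce the derived formulae, but the role of the natural frequency and damping coefficient can be illustrated with the standard second-order model they parameterize. The sketch below computes the unit step response for a given undamped natural frequency and damping ratio and estimates the 2% settling time; the numerical values are hypothetical, not taken from the paper.

```python
import numpy as np

def second_order_step_response(omega_n: float, zeta: float,
                               t_end: float = 5.0, n: int = 5000):
    """Unit step response y(t) of a standard second-order system
    G(s) = omega_n^2 / (s^2 + 2*zeta*omega_n*s + omega_n^2)."""
    t = np.linspace(0.0, t_end, n)
    if zeta < 1.0:                                  # underdamped (oscillatory)
        wd = omega_n * np.sqrt(1.0 - zeta ** 2)     # damped natural frequency
        phi = np.arccos(zeta)
        y = 1.0 - np.exp(-zeta * omega_n * t) * np.sin(wd * t + phi) / np.sqrt(1.0 - zeta ** 2)
    elif zeta == 1.0:                               # critically damped
        y = 1.0 - (1.0 + omega_n * t) * np.exp(-omega_n * t)
    else:                                           # overdamped
        s1 = -omega_n * (zeta - np.sqrt(zeta ** 2 - 1.0))
        s2 = -omega_n * (zeta + np.sqrt(zeta ** 2 - 1.0))
        y = 1.0 + (s2 * np.exp(s1 * t) - s1 * np.exp(s2 * t)) / (s1 - s2)
    return t, y

def settling_time(t: np.ndarray, y: np.ndarray, tol: float = 0.02) -> float:
    """Time after which the response stays within +/- tol of the final value 1."""
    outside = np.nonzero(np.abs(y - 1.0) > tol)[0]
    return float(t[outside[-1]]) if outside.size else 0.0

t, y = second_order_step_response(omega_n=12.0, zeta=0.3)   # hypothetical values
print(f"2% settling time: {settling_time(t, y):.3f} s")
```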


Author(s):  
Peter Zweifel

Abstract This contribution evokes Orio Giarini’s courage to think ‘outside the box’. It proposes a practical way to bridge the gap between risk (where probabilities of occurrence are fully known) and uncertainty (where these probabilities are unknown). However, in the context of insurance, neither extreme applies: the risk type of a newly enrolled customer is not fully known, loss distributions (especially their tails) are difficult to estimate with sufficient precision, the diversification properties of a block of policies acquired from another company can be assessed only to an approximation, and rates of return on investment depend on decisions of central banks that cannot be predicted very well. This contribution revolves around the launch of an innovative insurance product, where the company has a notion of whether a favourable market reception is more likely than an unfavourable one, of the chance of obtaining approval from the regulatory authority, and of the risk of a competitor launching a similar innovation. Linear partial information theory is proposed and applied as a particularly practical way to systematically exploit the imprecise information that may exist for all of these aspects. The decision-making criterion is maxEmin, an intuitive modification of the maximin rule known from games against nature.
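A minimal sketch of the maxEmin criterion under linear partial information: the unknown state probabilities are only constrained to a polytope, the expected payoff of each action is minimized over that polytope's extreme points, and the action with the best such worst case is chosen. The payoffs, actions and probability constraint below are hypothetical placeholders, not figures from the paper.

```python
import numpy as np

# Hypothetical payoffs (rows: actions, columns: states of the market).
# States: favourable reception, unfavourable reception.
payoff = np.array([
    [8.0, -5.0],   # launch the innovation now
    [3.0,  0.0],   # delay and gather more information
    [1.0,  1.0],   # shelve the project
])
actions = ["launch", "delay", "shelve"]

# Linear partial information: the firm only knows that a favourable reception
# is at least as likely as an unfavourable one.  The admissible probability
# vectors form a polytope whose extreme points are:
extreme_points = np.array([
    [0.5, 0.5],    # boundary case p(fav) = p(unfav)
    [1.0, 0.0],    # favourable reception certain
])

# maxEmin: worst expected payoff over the extreme points for each action,
# then the action with the best such worst case.
expected = payoff @ extreme_points.T     # shape (actions, extreme points)
worst = expected.min(axis=1)
best = int(np.argmax(worst))
for a, w in zip(actions, worst):
    print(f"{a:7s}  min expected payoff = {w:5.2f}")
print("maxEmin choice:", actions[best])
```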


2021 ◽  
Author(s):  
František Hrouda ◽  
Jan Franěk ◽  
Martin Chadima ◽  
Josef Ježek ◽  
Štěpánka Mrázová ◽  
...  

<p>Magnetostatic susceptibility of single crystals of graphite is negative (the mineral is diamagnetic) and strongly anisotropic. The in-phase component of dynamic susceptibility (measured in alternating magnetic field) is also negative, but an order-of-magnitude stronger than the magnetostatic susceptibility. The out-of-phase component, which is no doubt due to electrical eddy currents, is positive and strong. Consequently, if the graphite crystals in graphite ore are oriented preferentially by crystal lattice (LPO), one would expect strong anisotropy of magnetic susceptibility (AMS) of graphite ore in both in-phase (ipAMS) and out-of-phase (opAMS) components. The ipAMS is controlled not only by the LPO of graphite, but also by the preferred orientation of paramagnetic and ferromagnetic minerals of the barren rock, while the opAMS indicates only the LPO of graphite. In graphite ores occurring in the Moldanubian Unit of Southern Bohemia, the in-phase susceptibility ranges from negative values in the order of 10<sup>-5</sup> [SI units] to positive values in the order of 10<sup>-4</sup>. This probably indicates simultaneous control by graphite and paramagnetic and/or ferromagnetic minerals. On the other hand, the out-of-phase susceptibility is much higher, in the order of 10<sup>-4</sup>, and no doubt indicates its graphite control. The degree of ipAMS is moderate, that of opAMS is truly high. The ipAMS foliation is roughly parallel to the metamorphic foliation in ores and wall rocks and the ipAMS lineation is parallel to the mesoscopic lineation. The opAMS is inverse to the ipAMS with the opAMS lineation being perpendicular to the metamorphic foliation. All this indicates a conspicuous LPO of graphite in the ore that was probably created during Variscan regional metamorphism and associated ductile deformation. The opAMS has therefore shown an effective tool for the investigation of the LPO of graphite in graphite ore or graphite-bearing rocks provided that the opAMS is strong enough to be determined with sufficient precision and graphite is the only conductive mineral in the samples investigated.</p>


2021 ◽  
Vol 3 (1) ◽  
pp. 1-46
Author(s):  
Alexander Krüger ◽  
Jan Tünnermann ◽  
Lukas Stratmann ◽  
Lucas Briese ◽  
Falko Dressler ◽  
...  

Abstract As a formal theory, Bundesen’s theory of visual attention (TVA) enables the estimation of several theoretically meaningful parameters involved in attentional selection and visual encoding. To date, TVA has almost exclusively been used in restricted empirical scenarios such as whole and partial report, and with strictly controlled stimulus material. We present a series of experiments in which we test whether the advantages of TVA can be exploited in more realistic scenarios with varying degrees of stimulus control. These include brief experimental sessions conducted on different mobile devices, computer games, and a driving simulator. Overall, six experiments demonstrate that the TVA parameters for processing capacity and attentional weight can be measured with sufficient precision in less controlled scenarios, and that the results do not deviate strongly from typical laboratory results, although some systematic differences were found.
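As a reminder of the quantities being estimated, the sketch below computes the probability that each display item is encoded before the exposure ends, using the standard TVA rate equation v_i = C * w_i / sum_j w_j. It deliberately ignores the VSTM capacity limit, and the parameter values are merely plausible placeholders rather than fits from these experiments.

```python
import numpy as np

def encoding_probability(C: float, weights: np.ndarray,
                         t: float, t0: float) -> np.ndarray:
    """Simplified TVA prediction: probability that each display item finishes
    encoding before exposure ends (ignores the VSTM capacity limit K)."""
    v = C * weights / weights.sum()      # processing rates (items per second)
    eff = max(t - t0, 0.0)               # effective exposure duration
    return 1.0 - np.exp(-v * eff)

# Hypothetical parameter values in the range of typical TVA fits.
C = 50.0                                 # processing capacity (items/s)
t0 = 0.015                               # perceptual threshold (s)
weights = np.array([1.0, 1.0, 0.3, 0.3]) # e.g. two targets, two distractors

for t in (0.03, 0.08, 0.15):
    p = encoding_probability(C, weights, t, t0)
    print(f"t = {t * 1000:3.0f} ms ->", np.round(p, 2))
```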


Author(s):  
Stepan Nebaba

The paper considers the previously developed automated algorithm for determining the interplanar distances of the crystalline structure of a substance from transmission electron microscopy (TEM) images, and proposes a modification of the algorithm that increases the level of automation in obtaining the result. The automation of normalizing crystal-structure images with respect to rotation angle is considered. In addition, at the binarization step, adaptive binarization with an automatically selected window size is proposed as an alternative to binarization with a fixed threshold. The improved algorithm was tested on a number of publicly available TEM images, and the interplanar distance measurements were compared with measurements made in the specialized software package Digital Micrograph GMS 1.8. Comparison of the results showed that the proposed improved algorithm determines the distances with sufficient precision and fits within the range of measurement error for the considered images.
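A minimal sketch of the proposed alternative binarization step, using OpenCV's adaptive thresholding. The rule for deriving the window size from the image dimensions and the file name are assumptions for illustration and need not match the paper's selection criterion.

```python
import cv2
import numpy as np

def binarize_tem(image: np.ndarray) -> np.ndarray:
    """Adaptive binarization of a grayscale TEM image.  The window (block)
    size is guessed from the image dimensions for illustration only."""
    block = max(3, (min(image.shape) // 20) | 1)   # odd window, ~1/20 of image
    return cv2.adaptiveThreshold(
        image, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,            # local Gaussian-weighted mean
        cv2.THRESH_BINARY,
        block, 2,                                  # window size and offset C
    )

img = cv2.imread("tem_sample.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
if img is None:
    raise FileNotFoundError("tem_sample.png not found")
cv2.imwrite("tem_sample_binary.png", binarize_tem(img))
```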


2021 ◽  
Vol 247 ◽  
pp. 12004
Author(s):  
J Malec ◽  
K Ambrožič ◽  
M Kromar

The feasibility of using the Origen+ARP code for depletion and decay calculations for the Krško NPP was tested by performing depletion and decay calculations using interpolated libraries and comparing the results with those calculated from non-interpolated libraries, in order to evaluate the number of libraries needed to interpolate fuel properties with sufficient precision for any realistic burnup scenario. For Krško NPP fuel, using three interpolation libraries with different decay heat parameters was enough to bring the approximation error below 0.5% when comparing fuel decay heat over the decay interval.
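The interpolation-and-comparison procedure can be illustrated with a toy example: decay-heat curves from a few pre-computed libraries are interpolated to an intermediate point and checked against a direct reference calculation. All numbers below are invented placeholders, not Krško NPP data or ORIGEN output.

```python
import numpy as np

# Hypothetical pre-computed decay-heat curves (W/kgU) at three enrichments,
# standing in for the pre-generated libraries; the numbers are illustrative.
enrichments = np.array([3.0, 4.0, 5.0])               # wt% U-235
decay_times = np.array([1.0, 10.0, 100.0, 1000.0])    # days after shutdown
decay_heat = np.array([
    [12.0, 3.1, 0.80, 0.21],   # 3.0 % library
    [12.6, 3.3, 0.86, 0.23],   # 4.0 % library
    [13.1, 3.5, 0.91, 0.25],   # 5.0 % library
])

def interpolated_heat(enrichment: float) -> np.ndarray:
    """Linear interpolation of the decay-heat curve between library points."""
    return np.array([
        np.interp(enrichment, enrichments, decay_heat[:, j])
        for j in range(decay_times.size)
    ])

# Relative error of the interpolated curve against a hypothetical reference
# calculation performed directly at 4.4 % enrichment.
reference = np.array([12.75, 3.40, 0.875, 0.236])
approx = interpolated_heat(4.4)
print("relative error [%]:", np.round(100 * np.abs(approx - reference) / reference, 2))
```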

