Risk Model of German Corona Warning App – Reloaded

Dependability ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 47-53
Author(s):  
J. Braband ◽  
H. Schäbe

Aim. In this paper, we discuss two versions of the risk model of the German Corona Warning App. Both are based on a general semi-quantitative risk approach that is no longer state of the art and is even deprecated in some application domains. The main problem is that the parameter estimates are often only ordinal-scale values or rank numbers, for which operations such as multiplication or division are not clearly defined. This may result in underestimation or overestimation of the associated risk. Methods. The risk models used in the app are analyzed. The nomenclature of the model parameters, their influence on the result, and the approaches to generating a combined risk assessment are compared, and the effectiveness of the models is analyzed. Results. It is shown that most of the parameters in the model are used only as binary indicator variables. The Corona Warning App in fact uses a much more limited model that does not assess risk at all, but relies on a single parameter: the weighted exposure time. It is shown that the application underestimates this parameter and may therefore erroneously reassure users. Thus, the basic risk model implemented before version 1.7.1 is rather a dosimetric model that depends on the calculated virus concentration and does not depend on exposure and other parameters (apart from some threshold values); it is not even a risk model as defined by many standards. The changes to the risk model in the later version are not fundamental. In particular, the later model also assesses not individual risk but individual exposure, and it greatly underestimates the duration of exposure. Although reportedly about 60% of the app's users have shared positive test results, the absolute number of published results is less than 10% of all positive test results. Therefore, from an individual point of view, the application is effective in at most 10% of cases. Conclusions. As the Corona Warning App also has other systematic limitations and shortcomings, it is advised not to rely on its results but rather on Covid testing or vaccination. Moreover, if enough virus tests become available in the near future, the application will become obsolete. It would be better to develop an application that can assess risk a priori, as a kind of decision support for its users based on their individual risk profiles.
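The single-parameter scoring criticised in the abstract can be sketched as follows in Python. The attenuation buckets, weights and threshold below are illustrative assumptions, not the app's actual configuration; the point is that everything other than the weighted exposure time acts only as a gate.

```python
# Sketch of a weighted-exposure-time score of the kind the abstract
# describes: attenuation (signal-strength) buckets weight contact minutes.
# Bucket boundaries, weights and the threshold are illustrative
# assumptions, not the app's real parameters.

ATTENUATION_BUCKETS = [
    (0, 55, 1.0),    # close contact: full weight
    (55, 63, 0.5),   # medium distance: half weight
    (63, 200, 0.0),  # far contact: ignored
]

def weighted_exposure_minutes(contacts):
    """contacts: list of (attenuation_db, duration_min) tuples."""
    total = 0.0
    for attenuation, minutes in contacts:
        for lo, hi, weight in ATTENUATION_BUCKETS:
            if lo <= attenuation < hi:
                total += weight * minutes
                break
    return total

# A user is "at risk" only if the weighted time crosses a threshold;
# all other parameters act as binary gates, as the abstract notes.
THRESHOLD_MIN = 15.0

contacts = [(50, 10), (60, 12), (70, 30)]
score = weighted_exposure_minutes(contacts)  # 10*1.0 + 12*0.5 = 16.0
print(score, score >= THRESHOLD_MIN)
```

Because distant contacts are weighted to zero, long low-signal exposures contribute nothing, which illustrates how such a model can underestimate the exposure duration.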

2018 ◽  
Vol 46 (3) ◽  
pp. 174-219 ◽  
Author(s):  
Bin Li ◽  
Xiaobo Yang ◽  
James Yang ◽  
Yunqing Zhang ◽  
Zeyu Ma

ABSTRACT The tire model is essential for accurate and efficient vehicle dynamic simulation. In this article, an in-plane flexible ring tire model is proposed, in which the tire is composed of a rigid rim, a number of discretized lumped-mass belt points, and numerous massless tread blocks attached to the belt. One set of tire model parameters is identified by fitting the predicted results to ADAMS® FTire virtual test results for one particular cleat test, using the particle swarm method in MATLAB®. Based on the identified parameters, the tire model is further validated by comparing its predictions with FTire for static load-deflection tests and other cleat tests. Finally, several important aspects of the proposed model are discussed.
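The identification step described above can be sketched with a generic particle swarm optimiser in Python. The quadratic "tire response" model, the reference parameters standing in for the FTire results, and all PSO constants are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Particle swarm optimisation (PSO) sketch: tune model parameters so that
# model predictions match reference data, as in the identification step
# described above. Model and constants are illustrative assumptions.

def model(params, x):
    k, c = params                 # e.g. stiffness- and damping-like terms
    return k * x + c * x * x

REF_PARAMS = (120.0, 4.0)         # stands in for the FTire reference results
XS = [i / 10 for i in range(1, 21)]
REF = [model(REF_PARAMS, x) for x in XS]

def cost(params):
    # sum of squared deviations between prediction and reference
    return sum((model(params, x) - r) ** 2 for x, r in zip(XS, REF))

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(0, 200), random.uniform(0, 10)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # per-particle best positions
    gbest = min(pbest, key=cost)[:]           # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(best)   # should approach the reference parameters (120.0, 4.0)
```

The same loop applies unchanged when `model` is replaced by a full simulation run, which is why PSO is popular for parameter identification: it needs only cost evaluations, no gradients.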


2020 ◽  
Vol 41 (S1) ◽  
pp. s33-s33
Author(s):  
Michihiko Goto ◽  
Erin Balkenende ◽  
Gosia Clore ◽  
Rajeshwari Nair ◽  
Loretta Simbartl ◽  
...  

Background: Enhanced terminal room cleaning with ultraviolet-C (UVC) disinfection has become a more commonly used strategy to reduce the transmission of important nosocomial pathogens, including Clostridioides difficile, but its real-world effectiveness remains unclear. Objectives: We aimed to assess the association of UVC disinfection during terminal cleaning with the incidence of healthcare-associated C. difficile infection and positive C. difficile test results within the nationwide Veterans Health Administration (VHA) system. Methods: Using a nationwide survey of VHA acute-care hospitals, information on UVC system utilization and implementation dates was obtained. Hospital-level incidence rates of clinically confirmed hospital-onset C. difficile infection (HO-CDI) and of positive test results with recent healthcare exposure (both hospital-onset [HO-LabID] and community-onset healthcare-associated [CO-HA-LabID]) at acute-care units between January 2010 and December 2018 were obtained through routine surveillance, with bed days of care (BDOC) as the denominator. We analyzed the association of UVC disinfection with the incidence rates of HO-CDI, HO-LabID, and CO-HA-LabID in a nonrandomized stepped-wedge design, using a negative binomial regression model with a hospital-specific random intercept, the presence or absence of UVC disinfection for each month as the exposure, and baseline trend and seasonality as explanatory variables. Results: Among 143 VHA acute-care hospitals, 129 (90.2%) responded to the survey and were included in the analysis. UVC use was reported by 42 hospitals, with implementation start dates ranging from June 2010 through June 2017. We identified 23,021 positive C. difficile test results (HO-LabID: 5,014), with 16,213 HO-CDI and 24,083,252 BDOC from the 129 hospitals during the study period. There was a declining nationwide baseline trend (mean, −0.6% per month) for HO-CDI. UVC use had no statistically significant association with the incidence rates of HO-CDI (incidence rate ratio [IRR], 1.032; 95% CI, 0.963–1.106; P = .65) or of healthcare-associated positive C. difficile test results (HO-LabID). Conclusions: In this large quasi-experimental analysis within the VHA system, enhanced terminal room cleaning with UVC disinfection was not associated with a change in the incidence rates of clinically confirmed hospital-onset CDI or of positive test results with recent healthcare exposure. Further research is needed to understand the reasons for this lack of effectiveness, including barriers to utilization.
Funding: None
Disclosures: None
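The rate comparison underlying the analysis above can be illustrated with a worked example in Python: an incidence rate ratio (IRR) with bed days of care as the denominator, plus a Wald 95% CI on the log scale. The counts below are made up; the paper's actual model additionally includes hospital random intercepts, baseline trend and seasonality, which this sketch omits.

```python
import math

# Worked example: incidence rate ratio (IRR) for an exposed vs. control
# group, with person-time (here, bed days of care) as the denominator,
# and a Wald confidence interval on the log scale. Counts are invented.

def irr_with_ci(cases_exposed, bdoc_exposed, cases_control, bdoc_control,
                z=1.96):
    rate_e = cases_exposed / bdoc_exposed
    rate_c = cases_control / bdoc_control
    irr = rate_e / rate_c
    # Wald standard error of log(IRR) for Poisson counts
    se = math.sqrt(1 / cases_exposed + 1 / cases_control)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical: 310 HO-CDI over 500,000 BDOC with UVC,
# versus 600 over 1,000,000 BDOC without UVC.
irr, lo, hi = irr_with_ci(310, 500_000, 600, 1_000_000)
print(f"IRR = {irr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

An IRR whose confidence interval straddles 1.0, as here, corresponds to the paper's null finding: the exposed rate is not distinguishable from the control rate.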


2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensational behavior for temporal violations in specific model assumptions.
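The calibration idea above can be sketched with a minimal Metropolis sampler in Python: estimate the posterior of a single model parameter from noisy "current" observations, with the posterior mode as the best-fit value. The one-parameter linear model and noise level are illustrative assumptions, not the paper's nearshore model.

```python
import math
import random

# Minimal Markov chain Monte Carlo (Metropolis) sketch: sample the
# posterior of one parameter theta from noisy data, then summarise the
# chain. The linear model and noise level are illustrative assumptions.

random.seed(1)
TRUE_THETA, SIGMA = 0.8, 0.05
FORCING = [f / 10 for f in range(1, 31)]                 # driving term
DATA = [TRUE_THETA * f + random.gauss(0, SIGMA) for f in FORCING]

def log_post(theta):
    # flat prior, Gaussian likelihood -> log-posterior up to a constant
    return -sum((d - theta * f) ** 2
                for f, d in zip(FORCING, DATA)) / (2 * SIGMA ** 2)

def metropolis(n=20000, step=0.01, theta=0.5):
    samples = []
    lp = log_post(theta)
    for _ in range(n):
        prop = theta + random.gauss(0, step)             # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:     # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples[n // 2:]                              # discard burn-in

chain = metropolis()
mean = sum(chain) / len(chain)
sd = (sum((s - mean) ** 2 for s in chain) / len(chain)) ** 0.5
print(f"posterior mean {mean:.3f} +/- {sd:.3f}")
```

Parameter stability, in the paper's sense, would correspond to the posterior spread `sd` shrinking and stabilising as new data are appended to `FORCING`/`DATA`; consistency, to `mean` staying the same across different data windows.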


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to develop good 1-day and 2-day ahead forecasts, but the linear prediction model is found inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
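The filtering idea above can be reduced to a scalar sketch in Python: a linear state (streamflow) propagated forward, then corrected by a noisy observation, with the Kalman gain balancing the two uncertainties. The numbers are illustrative; the paper's multi-day MISP formulation with combined snowmelt and precipitation inputs is considerably richer.

```python
# One-dimensional Kalman-filter sketch: predict a linear state, then
# blend the prediction with a noisy observation. All numbers are
# illustrative, not the Sturgeon River model.

def kalman_step(x, p, z, q=4.0, r=9.0, a=1.0):
    # predict: x' = a*x, variance grows by process noise q
    x_pred, p_pred = a * x, a * a * p + q
    # update: weight observation z (variance r) by the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

flows = [100.0, 104.0, 110.0, 118.0]   # observed daily flows (m^3/s)
x, p = flows[0], 25.0                  # initial estimate and variance
for z in flows[1:]:
    x, p = kalman_step(x, p, z)
print(round(x, 1), round(p, 2))
```

The abstract's finding that good initial parameter estimates are essential shows up here too: a poor initial `x` or an overconfident initial `p` slows the filter's convergence to the observed flows.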


Author(s):  
Mohammad-Reza Ashory ◽  
Farhad Talebi ◽  
Heydar R Ghadikolaei ◽  
Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds using self-tracking laser Doppler vibrometry. Given that the self-tracking method requires accurate adjustment of the test setup to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of the normalized mode shapes as the rotational speed increased. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned such that the natural frequencies and associated mode shapes were in good agreement with those derived from a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and from the numerical (ABAQUS) model were compared using the modal assurance criterion, which indicated a strong resemblance between the hammer-test results and the numerical findings. Hence, the model can be employed to determine other mechanical properties of two-blade propellers in test scenarios.
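The modal assurance criterion (MAC) used above for the mode-shape comparison is a standard normalized correlation, MAC(φ₁, φ₂) = |φ₁ᵀφ₂|² / ((φ₁ᵀφ₁)(φ₂ᵀφ₂)); values near 1 indicate strong resemblance. A Python sketch, with made-up mode-shape vectors:

```python
# Modal assurance criterion (MAC) between two mode-shape vectors:
# MAC = |phi1 . phi2|^2 / ((phi1 . phi1) * (phi2 . phi2)).
# The example vectors are invented, not the paper's measured shapes.

def mac(phi1, phi2):
    dot = sum(a * b for a, b in zip(phi1, phi2))
    return dot * dot / (sum(a * a for a in phi1) * sum(b * b for b in phi2))

measured = [0.0, 0.31, 0.59, 0.81, 1.00]   # hammer-test mode shape
numerical = [0.0, 0.30, 0.60, 0.80, 1.00]  # FE-model mode shape
print(round(mac(measured, numerical), 4))  # close to 1.0 for matching modes
```

Because the MAC is scale-invariant, it compares mode *shapes* regardless of how each vector is normalized, which is why it suits comparing test and finite-element results.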


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial for increasing the number of observable satellites, alleviating the effects of systematic errors and improving the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ. Therefore, the commonly used definition of "dilution of precision" (DOP), based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of the pseudo-ranges, which reflects the geometry strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the commonly used one. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined on the basis of the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined on the basis of Bayesian estimation, in which the a priori information on the model parameters is taken into account; this is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations for Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. It is shown that if the interoperability of multi-GNSS is not fulfilled, the increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements will not change the original DOP, but will change the first type of generalised DOP, which includes a robust error IF. A priori information on the model parameters will also reduce the DOP.
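The "commonly used" least-squares DOP that the paper generalises can be sketched in Python: form the geometry matrix H from unit line-of-sight vectors (with a receiver-clock column), invert HᵀH, and read the DOPs off its diagonal. The satellite directions below are made up; a real computation derives them from broadcast ephemerides.

```python
import math

# Classical least-squares DOP: GDOP/PDOP from the diagonal of (H^T H)^-1,
# where H has one row [ex, ey, ez, 1] per satellite line-of-sight.
# Satellite directions are illustrative, not real geometry.

def mat_inv(m):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(m)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        pv = aug[col][col]
        aug[col] = [v / pv for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def dops(unit_vectors):
    H = [[ex, ey, ez, 1.0] for ex, ey, ez in unit_vectors]
    HtH = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
            for j in range(4)] for i in range(4)]
    Q = mat_inv(HtH)                      # cofactor matrix of the LS solution
    gdop = math.sqrt(Q[0][0] + Q[1][1] + Q[2][2] + Q[3][3])
    pdop = math.sqrt(Q[0][0] + Q[1][1] + Q[2][2])
    return gdop, pdop

# Four illustrative satellites: three at 45 deg elevation, one at zenith
sats = [(0.707, 0.0, 0.707), (-0.354, 0.612, 0.707),
        (-0.354, -0.612, 0.707), (0.0, 0.0, 1.0)]
gdop, pdop = dops(sats)
print(round(gdop, 2), round(pdop, 2))
```

The paper's generalised DOPs replace this LS cofactor matrix with quantities derived from error influence functions, inter-system parameters, or Bayesian priors, but reduce to the computation above in the plain LS case.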

