interval width
Recently Published Documents

TOTAL DOCUMENTS: 63 (FIVE YEARS: 18)
H-INDEX: 8 (FIVE YEARS: 1)

2021 ◽  
Vol 26 (45) ◽  
Author(s):  
Ivo Van Walle ◽  
Katrin Leitmeyer ◽  
Eeva K Broberg

Background Reliable testing for SARS-CoV-2 is key for the management of the COVID-19 pandemic. Aim We estimate diagnostic accuracy for nucleic acid and antibody tests 5 months into the COVID-19 pandemic, and compare with manufacturer-reported accuracy. Methods We reviewed the clinical performance of SARS-CoV-2 nucleic acid and antibody tests based on 93,757 test results from 151 published studies and 20,205 new test results from 12 countries in the European Union and European Economic Area (EU/EEA). Results Pooling the results and considering only results with 95% confidence interval width ≤ 5%, we found four nucleic acid tests, among them one point-of-care test, and three antibody tests with a clinical sensitivity ≥ 95% for at least one target population (hospitalised, mild or asymptomatic, or unknown). Nine nucleic acid tests and 25 antibody tests, 12 of them point-of-care tests, had a clinical specificity of ≥ 98%. Three antibody tests achieved both thresholds. Evidence for nucleic acid point-of-care tests remains scarce at present, and sensitivity varied substantially. Study heterogeneity was low for eight of 14 sensitivity and 68 of 84 specificity results with confidence interval width ≤ 5%, and lower for nucleic acid tests than antibody tests. Manufacturer-reported clinical performance was significantly higher than independently assessed in 11 of 32 cases for sensitivity and four of 34 cases for specificity, indicating a need for improvement in this area. Conclusion Continuous monitoring of clinical performance within more clearly defined target populations is needed.
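As a concrete illustration of the screening criterion used above, here is a minimal sketch (with hypothetical counts, not the study's data) of computing a 95% Wilson score confidence interval for a test's sensitivity and checking that its width is at most 5 percentage points:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

true_pos, diseased = 480, 500              # hypothetical pooled counts
lo, hi = wilson_ci(true_pos, diseased)
print(f"sensitivity = {true_pos / diseased:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
print(f"CI width = {hi - lo:.3f}; meets the width <= 5% criterion: {hi - lo <= 0.05}")
```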


Author(s):  
Zack Ellerby ◽  
Christian Wagner ◽  
Stephen B. Broomell

Abstract Obtaining quantitative survey responses that are both accurate and informative is crucial to a wide range of fields. Traditional and ubiquitous response formats such as Likert and visual analogue scales require condensation of responses into discrete or point values—but sometimes a range of options may better represent the correct answer. In this paper, we propose an efficient interval-valued response mode, whereby responses are made by marking an ellipse along a continuous scale. We discuss its potential to capture and quantify valuable information that would be lost using conventional approaches, while preserving a high degree of response efficiency. The information captured by the response interval may represent a possible response range—i.e., a conjunctive set, such as the real numbers between 3 and 6. Alternatively, it may reflect uncertainty with respect to a distinct response—i.e., a disjunctive set, such as a confidence interval. We then report a validation study, utilizing our recently introduced open-source software (DECSYS), to explore how interval-valued survey responses reflect experimental manipulations of several factors hypothesised to influence interval width, across multiple contexts. Results consistently indicate that respondents used interval widths effectively, and subjective participant feedback was also positive. We present this as initial empirical evidence for the efficacy and value of interval-valued response capture. Interestingly, our results also provide insight into respondents' reasoning about the different aforementioned types of intervals—we replicate a tendency towards overconfidence for those representing epistemic uncertainty (i.e., disjunctive sets), but find intervals representing inherent range (i.e., conjunctive sets) to be well-calibrated.
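To make the calibration claim concrete, here is a small hedged sketch (invented numbers, not DECSYS output) of one way disjunctive-interval calibration could be checked: the hit rate, i.e. how often the true value falls inside a respondent's interval, is compared with the confidence the respondent intended to express.

```python
# Each interval is a respondent's (lower, upper) answer; truths are the facts.
truths = [12.0, 7.5, 30.0, 4.0, 18.0]
intervals = [(10, 14), (8, 9), (25, 33), (3, 6), (20, 24)]

hits = sum(lo <= t <= hi for t, (lo, hi) in zip(truths, intervals))
hit_rate = hits / len(truths)
mean_width = sum(hi - lo for lo, hi in intervals) / len(intervals)

print(f"hit rate = {hit_rate:.2f}, mean interval width = {mean_width:.1f}")
# A hit rate well below the confidence respondents intended (e.g. 0.9)
# would indicate overconfidence, as reported for disjunctive sets above.
```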


Author(s):  
Jeremy Rohmer

Abstract The treatment of uncertainty using extra-probabilistic approaches, like intervals or p-boxes, allows for a clear separation between epistemic uncertainty and randomness in the results of risk assessments. This can take the form of an interval of failure probabilities, the interval width W being an indicator of "what is unknown." In some situations, W is too large to be informative. To overcome this problem, we propose to reverse the usual chain of treatment: starting from the target value of W that is acceptable for decision-making, we quantify the reduction in the input p-boxes necessary to achieve it. We assess the feasibility of this procedure using two case studies (risk of dike failure, and risk of rupture of a frame structure subjected to lateral loads). By making the link with the estimation of excursion sets (i.e., the set of points where a function takes values below some prescribed threshold), we alleviate the computational burden of the procedure by combining Gaussian process (GP) metamodels with sequential design of computer experiments. The considered test cases show that the estimates can be achieved with only a few tens of calls to the computationally intensive algorithm for mixed aleatory/epistemic uncertainty propagation.
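A hedged, toy-scale sketch of the excursion set idea invoked above (not the author's implementation): a scikit-learn GP surrogate is fitted to a cheap stand-in function, and design points are added sequentially where the surrogate is least certain about the threshold crossing, mimicking how the expensive propagation step would be queried sparingly.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x        # cheap stand-in for the expensive model
T = 0.5                                      # prescribed threshold
grid = np.linspace(0.0, 3.0, 200).reshape(-1, 1)

X = np.linspace(0.0, 3.0, 4).reshape(-1, 1)  # small initial design
y = f(X).ravel()
for _ in range(6):                           # sequential enrichment
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-8).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # ambiguity about membership in the excursion set {x : f(x) <= T}
    p_below = norm.cdf((T - mu) / np.maximum(sd, 1e-12))
    ambiguity = np.minimum(p_below, 1.0 - p_below)
    x_new = grid[np.argmax(ambiguity)]
    X, y = np.vstack([X, x_new]), np.append(y, f(x_new))

print("estimated fraction of domain in excursion set:", float(np.mean(mu <= T)))
```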


Symmetry ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 737
Author(s):  
Jelena D. Velimirovic ◽  
Aleksandar Janjic

This paper deals with uncertainty, asymmetric information, and risk modelling in a complex power system. The uncertainty is managed using methods from probability and decision theory. More specifically, influence diagrams—as extended Bayesian network functions with interval probabilities represented through credal sets—were chosen for the predictive modelling scenario of replacing the most critical circuit breakers at the optimal time. Based on the available data on circuit breakers and the other variables that affect the considered model of a complex power system, a group of experts assessed the situation using interval probabilities instead of crisp probabilities. Furthermore, the paper examines how the confidence interval width affects decision-making in this context and eliminates the information asymmetry between experts. The results, obtained separately for each considered interval width, indicate the action to be taken on the model to minimize the risk of power system failure, and they clearly demonstrate the advantages of using interval probabilities when making decisions in systems such as the one considered here.
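A hedged sketch of the decision logic that interval probabilities enable (costs and intervals are invented, not values from the paper): with an interval of failure probabilities rather than a crisp value, the expected cost of each action is itself an interval, and a wide enough interval can leave the choice indeterminate.

```python
C_FAIL, C_REPLACE = 100_000.0, 8_000.0   # hypothetical failure and replacement costs

def expected_cost_bounds(p_lo: float, p_hi: float, action: str) -> tuple[float, float]:
    if action == "replace":
        return (C_REPLACE, C_REPLACE)        # cost independent of failure probability
    return (p_lo * C_FAIL, p_hi * C_FAIL)    # "keep": failure risk carries the cost

for p_lo, p_hi in [(0.02, 0.05), (0.05, 0.20), (0.001, 0.30)]:
    keep = expected_cost_bounds(p_lo, p_hi, "keep")
    rep = expected_cost_bounds(p_lo, p_hi, "replace")
    if keep[1] < rep[0]:
        verdict = "keep"
    elif rep[1] < keep[0]:
        verdict = "replace"
    else:
        verdict = "indeterminate (probability interval too wide)"
    print(f"p in [{p_lo}, {p_hi}]: {verdict}")
```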


Author(s):  
WS Untari ◽  
S Sari

This study aims to determine the partnership pattern, the effectiveness of the partnership program, the welfare level of partner sugarcane farmers, and the relationship between the effectiveness of the PG Wringin Anom partnership program and the welfare of sugarcane farmers in Situbondo Regency. The basic research method used is descriptive, with PG Wringin Anom as the research location. The sample of 30 respondents was determined following normal distribution rules. The effectiveness of the partnership program between PG Wringin Anom and sugarcane farmers was analysed using the interval width formula, and the relationship between partnership effectiveness and welfare was analysed using the Spearman rank correlation test in SPSS 16.0 for Windows. The results showed that the partnership pattern between PG Wringin Anom and sugarcane farmers was TRKSU B, which falls under the sub-contract partnership pattern. The KKPE program, the profit-sharing system, and the sugarcane cultivation assistance program implemented so far have all been quite effective. There is a significant relationship between the effectiveness of the KKPE program and the household welfare of sugarcane farmers, and a significant relationship between the effectiveness of the profit-sharing system and the household welfare of sugarcane farmers.
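A sketch of the two analysis steps described above, under assumed scoring (all scores are fabricated examples, and scipy stands in for SPSS): the class-interval width formula, width = (max score - min score) / number of classes, is used to label effectiveness, and a Spearman rank correlation tests the association with welfare.

```python
from scipy.stats import spearmanr

def effectiveness_class(score: float, lo: float, hi: float, labels: list) -> str:
    width = (hi - lo) / len(labels)            # the interval width formula
    idx = min(int((score - lo) / width), len(labels) - 1)
    return labels[idx]

labels = ["not effective", "less effective", "quite effective", "effective"]
print(effectiveness_class(31, 12, 48, labels))  # -> "quite effective"

effectiveness = [31, 28, 40, 22, 35, 30]        # hypothetical program scores
welfare = [55, 50, 70, 41, 62, 57]              # hypothetical welfare scores
rho, p = spearmanr(effectiveness, welfare)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```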


2021 ◽  
Author(s):  
William I. Atlas ◽  
Carrie A. Holt ◽  
Daniel T. Selbie ◽  
Brendan M. Connors ◽  
Steve Cox-Rogers ◽  
...  

Abstract Management of data-limited populations is a key challenge to the sustainability of fisheries around the world. For example, sockeye salmon (Oncorhynchus nerka) spawn and rear in many remote coastal watersheds of British Columbia (BC), Canada, making population assessment a challenge. Estimating conservation and management targets for these populations is particularly relevant given their importance to First Nations and commercial fisheries. Most sockeye salmon have obligate lake-rearing as juveniles, and total abundance is typically limited by production in rearing lakes. Although methods have been developed to estimate population capacity based on nursery lake photosynthetic rate (PR) and lake area or volume, they have not yet been widely incorporated into stock-recruit analyses. We tested the value of combining lake-based capacity estimates with traditional stock-recruit approaches to assess population status, using a hierarchical Bayesian stock-recruit model for 70 populations across coastal BC. This analysis revealed regional variation in sockeye population productivity (Ricker α), with coastal stocks exhibiting lower mean productivity than those in interior watersheds. Using moderately informative PR estimates of capacity as priors reduced model uncertainty, with a more than five-fold reduction in credible interval width for estimates of conservation benchmarks (e.g. SMAX, the spawner abundance at carrying capacity). We estimated that almost half of these remote sockeye stocks are below one commonly applied conservation benchmark (SMSY), despite substantial reductions in fishing pressure in recent decades. Thus, habitat-based capacity estimates can dramatically reduce scientific uncertainty in model estimates of the management targets that underpin sustainable sockeye fisheries. More generally, our analysis reveals opportunities to integrate spatial analyses of habitat characteristics with population models to inform conservation and management of exploited species where population data are limited.
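A minimal sketch of the Ricker stock-recruit relationship underlying the analysis, R = S · exp(α − βS), and the benchmarks mentioned above: SMAX = 1/β (the spawner abundance at carrying capacity) and a common linear approximation for SMSY. Parameter values are illustrative only; the study estimates them hierarchically with PR-based priors on capacity.

```python
import numpy as np

a, b = 1.6, 1e-5            # hypothetical Ricker productivity and density dependence

def ricker_recruits(S: np.ndarray) -> np.ndarray:
    return S * np.exp(a - b * S)

S_MAX = 1.0 / b                          # spawner abundance at carrying capacity
S_MSY = (a / b) * (0.5 - 0.07 * a)       # common linear approximation for S_MSY

print(f"S_MAX = {S_MAX:,.0f} spawners, S_MSY ~= {S_MSY:,.0f} spawners")
print(f"recruitment at S_MAX: {ricker_recruits(np.array([S_MAX]))[0]:,.0f}")
```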


Author(s):  
Wen Wang ◽  
Jingshu Wang ◽  
Renata Romanowicz

Abstract Uncertainty in the calculation of the Standardized Precipitation Index (SPI) has attracted growing concern in the hydrometeorology research community over the last decade. This issue is addressed in the present study from the perspective of candidate probability distributions, data record length, cumulative timescale, and the selection of a reference period, using bootstrap and Monte Carlo methods applied to daily precipitation data observed in four climate regions across China. The impacts of uncertainty in the SPI calculation on drought assessment are also investigated. Results show that the Gamma distribution is optimal for describing cumulative precipitation in China. Among the four timescales investigated, the minimal timescale appropriate for SPI calculation is 20 days for the humid region, 30 days for the semi-humid/semi-arid region and the Tibetan Plateau (mostly its eastern part), and 90 days for the arid region. The uncertainty in the SPI calculation decreases as timescale and record length increase, essentially because the confidence interval width of the Gamma distribution parameters decreases with increasing timescale and record length; however, parameter estimation improves little for record lengths beyond 70 years. Uncertainty is greater for high absolute SPI values than for small ones, so there is greater uncertainty in assessing extreme droughts than moderate droughts. Reference period selection has a large impact on drought assessment, especially in the context of climate change. The uncertainty of the SPI calculation has a large impact on categorizing droughts, but no impact on assessing the temporal features of drought variation.
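A sketch of the standard SPI computation referred to above: cumulative precipitation at a given timescale is fitted with a Gamma distribution, and the fitted CDF is mapped to standard normal quantiles. Synthetic data replace the Chinese station records used in the study, and the zero-precipitation mass (treated with a mixed distribution in practice) is ignored here.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(42)
precip_30d = rng.gamma(shape=2.0, scale=40.0, size=50 * 12)  # fake 30-day sums (mm)

# Fit the Gamma distribution by maximum likelihood, location fixed at zero
shape, _, scale = gamma.fit(precip_30d, floc=0)

def spi(x: np.ndarray) -> np.ndarray:
    """Map cumulative precipitation through Gamma CDF to standard normal quantiles."""
    return norm.ppf(gamma.cdf(x, shape, scale=scale))

print("SPI of 20 mm and 200 mm 30-day totals:", spi(np.array([20.0, 200.0])))
```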


2021 ◽  
Vol 20 ◽  
pp. 45-52
Author(s):  
Lapasrada Singhasomboon ◽  
Wararit Panichkitkosolkul ◽  
Andrei Volodin

In this paper, we investigate confidence intervals for the ratio of means of two independent lognormal distributions and propose a normal approximation (NA) approach. We compared the proposed approach with the maximum likelihood (ML), generalized confidence interval (GCI), and method of variance estimates recovery (MOVER) approaches, evaluating their performance in terms of coverage probabilities and interval widths. Simulation studies showed that the GCI and MOVER approaches performed similarly in terms of coverage probability and interval width for all sample sizes. The ML and NA approaches provided coverage probability close to the nominal level for large sample sizes; moreover, the proposed method provided shorter interval widths than the other methods. Overall, the proposed approach is conceptually simple, and we recommend it for large sample sizes because it consistently performs well in terms of coverage probability while its interval width is typically shorter than those of the other approaches. Finally, the proposed approach is illustrated using a real-life example.
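A hedged sketch of one standard normal-approximation (delta method) construction for this interval; the paper's exact NA formula may differ in detail. Since the lognormal mean is exp(μ + σ²/2), the log of the ratio of means is η = (μ₁ − μ₂) + (σ₁² − σ₂²)/2, estimated and exponentiated below with simulated data.

```python
import numpy as np
from scipy.stats import norm

def ratio_ci(x: np.ndarray, y: np.ndarray, level: float = 0.95):
    """Delta-method CI for the ratio of two independent lognormal means."""
    lx, ly = np.log(x), np.log(y)
    n1, n2 = len(lx), len(ly)
    eta = (lx.mean() - ly.mean()) + (lx.var(ddof=1) - ly.var(ddof=1)) / 2
    v1, v2 = lx.var(ddof=1), ly.var(ddof=1)
    # Var(sample mean) = sigma^2/n ; Var(s^2/2) ~= sigma^4 / (2(n-1))
    se = np.sqrt(v1 / n1 + v2 / n2 + v1**2 / (2 * (n1 - 1)) + v2**2 / (2 * (n2 - 1)))
    z = norm.ppf(0.5 + level / 2)
    lo, hi = np.exp(eta - z * se), np.exp(eta + z * se)
    return lo, hi, hi - lo

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.5, size=100)
y = rng.lognormal(mean=0.8, sigma=0.6, size=100)
lo, hi, width = ratio_ci(x, y)
print(f"95% CI for the ratio of means: ({lo:.3f}, {hi:.3f}), width = {width:.3f}")
```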


2021 ◽  
Author(s):  
Andrzej Kotarba ◽  
Mateusz Solecki

Vertically-resolved cloud amount is essential for understanding the Earth's radiation budget. The joint CloudSat-CALIPSO lidar-radar cloud climatology remains the only dataset providing such information globally. However, its specific sampling scheme (pencil-like swath, 16-day revisit) introduces an uncertainty into CloudSat-CALIPSO cloud amounts. In this research we assess those uncertainties in terms of bootstrap confidence intervals. Five years (2006-2011) of the 2B-GEOPROF-LIDAR (version P2_R05) cloud product were examined, accounting for typical spatial resolutions of global grids (1.0°, 2.5°, 5.0°, 10.0°), four confidence levels (0.85, 0.90, 0.95, 0.99), and three time scales of mean cloud amount (annual, seasonal, monthly). Results proved that a cloud amount accuracy of 1%, or even 5%, is not achievable with the dataset, assuming a 5-year mean cloud amount, a high (>0.95) confidence level, and fine spatial resolution (1°–2.5°). The 1% requirement was met by only ~6.5% of atmospheric volumes at 1° and 2.5°, while the more tolerant criterion (5%) was met by 22.5% of volumes at 1°, or 48.9% at 2.5° resolution. In order to have at least 99% of volumes meeting an accuracy criterion, the criterion itself would have to be relaxed to ~20% for 1° data, or to ~8% for 2.5° data. The study also quantified the relation between confidence interval width and spatial resolution, confidence level, and number of observations. Cloud regime (mean cloud amount and its standard deviation) was found to be the most important factor impacting the width of the confidence interval. The research has been funded by the National Science Institute of Poland, grant no. UMO-2017/25/B/ST10/01787, and supported in part by PL-Grid Infrastructure (computing resources).
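A minimal sketch of the bootstrap confidence-interval idea applied here (synthetic stand-ins for 2B-GEOPROF-LIDAR volumes, not the study's data): resample the per-overpass cloud detections for one grid cell and take percentile bounds on the mean cloud amount; with only ~100 overpasses the width easily exceeds the 5% criterion discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.binomial(1, 0.62, size=115)      # fake cloud/no-cloud flags for one cell

level = 0.95
boot_means = [rng.choice(obs, size=obs.size, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.quantile(boot_means, [(1 - level) / 2, (1 + level) / 2])
print(f"mean cloud amount = {obs.mean():.3f}, "
      f"{level:.0%} CI = ({lo:.3f}, {hi:.3f}), width = {hi - lo:.3f}")
```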


2020 ◽  
Author(s):  
Ivo Van Walle ◽  
Katrin Leitmeyer ◽  
Eeva K Broberg

We reviewed the clinical performance of SARS-CoV-2 nucleic acid, viral antigen and antibody tests based on 94,739 test results from 157 published studies and 20,205 new test results from 12 EU/EEA Member States. Pooling the results and considering only results with 95% confidence interval width ≤ 5%, we found 4 nucleic acid tests, among them 1 point-of-care test, and 3 antibody tests with a clinical sensitivity ≥ 95% for at least one target population (hospitalised, mild or asymptomatic, or unknown). Analogously, 9 nucleic acid tests and 25 antibody tests, among them 12 point-of-care tests, had a clinical specificity of ≥ 98%. Three antibody tests achieved both thresholds. Evidence for nucleic acid and antigen point-of-care tests remains scarce at present, and sensitivity varied substantially. Study heterogeneity was low for 8/14 (57.1%) sensitivity and 68/84 (81.0%) specificity results with confidence interval width ≤ 5%, and lower for nucleic acid tests than antibody tests. Manufacturer-reported clinical performance was significantly higher than independently assessed in 11/32 (34.4%) and 4/34 (11.8%) cases for sensitivity and specificity respectively, indicating a need for improvement in this area. Continuous monitoring of clinical performance within more clearly defined target populations is needed.

