The effect of correlation and false negatives in pool testing strategies for COVID-19

2020 ◽  
Author(s):  
Leonardo J. Basso ◽  
Vicente Salinas ◽  
Denis Sauré ◽  
Charles Thraves ◽  
Natalia Yankovic

Author(s):  
Nicholas Gray ◽  
Dominic Calleja ◽  
Alexander Wimbush ◽  
Enrique Miralles-Dolz ◽  
Ander Gray ◽  
...  

Abstract. Testing is viewed as a critical aspect of any strategy to tackle the COVID-19 pandemic. Much of the dialogue around testing has concentrated on how countries can scale up capacity, but the uncertainty in testing has received far less attention beyond asking whether a test is accurate enough to be used. Even for highly accurate tests, false positives and false negatives will accumulate as mass testing strategies are employed under pressure, and these misdiagnoses could have major implications for the ability of governments to suppress the virus. The present analysis uses a modified SIR model to understand the implications and magnitude of misdiagnosis in the context of ending lockdown measures. The results indicate that increased testing capacity alone will not provide a route out of lockdown measures in the UK. The progression of the epidemic and peak infections are shown to depend heavily on test characteristics, test targeting, and prevalence of the infection. Antibody-based immunity passports are rejected as a solution to ending lockdown, as they can put the population at risk if poorly targeted. Similarly, mass screening for active viral infection may only be beneficial if it can be sufficiently well targeted; otherwise, reliance on this approach for protection of the population can again put it at risk. A well-targeted active viral test combined with a slow release rate is a viable strategy for continuous suppression of the virus.
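The paper's modified SIR model is not reproduced in this abstract. As a rough illustration of how test error rates enter such a model, below is a minimal Python sketch of an SIR-style simulation in which a fraction of the population is tested daily and isolated on a positive result; the parameter values, the isolation rule, and the compartment structure are assumptions for illustration, not the authors' model.

    import numpy as np

    def sir_with_testing(beta=0.3, gamma=0.1, sensitivity=0.9, specificity=0.95,
                         test_rate=0.01, days=300, N=1.0, I0=1e-4):
        """Toy SIR model with daily testing of a fraction of the population.

        False negatives (1 - sensitivity) leave infectious people circulating;
        false positives (1 - specificity) needlessly isolate susceptibles.
        Isolation is absorbing here for simplicity.
        """
        S, I, R, Q = N - I0, I0, 0.0, 0.0
        history = []
        for _ in range(days):
            isolated_I = sensitivity * test_rate * I        # true positives
            isolated_S = (1 - specificity) * test_rate * S  # false positives
            dS = -beta * S * I / N - isolated_S
            dI = beta * S * I / N - gamma * I - isolated_I
            dR = gamma * I
            S, I, R, Q = S + dS, I + dI, R + dR, Q + isolated_I + isolated_S
            history.append((S, I, R, Q))
        return np.array(history)

    trajectory = sir_with_testing()
    print("Peak infected fraction: %.4f" % trajectory[:, 1].max())

Lowering sensitivity in this sketch leaves more false negatives circulating and raises the infection peak, which is the qualitative effect the abstract describes.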


2020 ◽  
Vol 29 (4) ◽  
pp. 1944-1955 ◽  
Author(s):  
Maria Schwarz ◽  
Elizabeth C. Ward ◽  
Petrea Cornwell ◽  
Anne Coccetti ◽  
Pamela D'Netto ◽  
...  

Purpose The purpose of this study was to examine (a) the agreement between allied health assistants (AHAs) and speech-language pathologists (SLPs) when completing dysphagia screening for low-risk referrals and at-risk patients under a delegation model and (b) the operational impact of this delegation model. Method All AHAs worked in the adult acute inpatient settings across three hospitals and completed training and competency evaluation prior to conducting independent screening. Screening (pass/fail) was based on results from pre-screening exclusionary questions in combination with a water swallow test and the Eating Assessment Tool. To examine the agreement of AHAs' decision making with SLPs, AHAs (n = 7) and SLPs (n = 8) conducted independent, simultaneous dysphagia screenings on 51 adult inpatients classified as low-risk/at-risk referrals. To examine operational impact, AHAs independently completed screening on 48 low-risk/at-risk patients, with subsequent clinical swallow evaluation conducted by an SLP for patients who failed screening. Results Exact agreement between AHAs and SLPs on the overall pass/fail screening criteria for the first 51 patients was 100%. Exact agreement for the two tools was 100% for the Eating Assessment Tool and 96% for the water swallow test. In the operational impact phase (n = 48), 58% of patients failed AHA screening, with only 10% false positives on subjective SLP assessment and no false negatives identified. Conclusion AHAs demonstrated the ability to reliably conduct dysphagia screening on a cohort of low-risk patients, with a low rate of false negatives. The data support a high level of agreement and a positive operational impact of using trained AHAs to perform dysphagia screening in low-risk patients.
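The screening decision described above is a conjunction of three checks, and the headline results are exact-agreement proportions. A hypothetical Python sketch of both follows; the EAT-10 cutoff of 3 and the fail-on-any-check rule are assumptions, since the study's exact criteria are not given in this abstract.

    def screening_result(exclusionary_flags, water_swallow_pass, eat10_score,
                         eat10_cutoff=3):
        """Pass/fail dysphagia screen: fail on any exclusionary flag, a failed
        water swallow test, or an EAT-10 score at or above the cutoff.
        Cutoff and combination rule are illustrative assumptions."""
        if any(exclusionary_flags) or not water_swallow_pass:
            return "fail"
        return "fail" if eat10_score >= eat10_cutoff else "pass"

    def exact_agreement(rater_a, rater_b):
        """Proportion of patients on whom two raters reach the same result."""
        return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

    aha = ["pass", "fail", "pass", "fail"]
    slp = ["pass", "fail", "pass", "fail"]
    print(exact_agreement(aha, slp))  # 1.0, i.e., 100% exact agreement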


Methodology ◽  
2019 ◽  
Vol 15 (3) ◽  
pp. 97-105
Author(s):  
Rodrigo Ferrer ◽  
Antonio Pardo

Abstract. In a recent paper, Ferrer and Pardo (2014) tested several distribution-based methods designed to assess when test scores obtained before and after an intervention reflect a statistically reliable change. However, we still do not know how these methods perform from the point of view of false negatives. For this purpose, we simulated change scenarios (different effect sizes in a pre-post-test design) with distributions of different shapes and with different sample sizes. For each simulated scenario, we generated 1,000 samples. In each sample, we recorded the false-negative rate of the five distribution-based methods with the best performance from the point of view of false positives. Our results reveal unacceptable rates of false negatives even for very large effects, ranging from 31.8% in an optimistic scenario (effect size of 2.0 and a normal distribution) to 99.9% in the worst scenario (effect size of 0.2 and a highly skewed distribution). Our results therefore suggest that the widely used distribution-based methods must be applied with caution in a clinical context, because they need huge effect sizes to detect a true change. However, we offer some considerations regarding effect size and the commonly used cut-off points that allow our estimates to be more precise.
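The five methods evaluated are not named in this abstract. As one representative distribution-based method, the following Python sketch estimates the false-negative rate of the Jacobson-Truax reliable change index (RCI) under a simulated true pre-post change; the reliability of .8, the 1.96 criterion, and the sample sizes are illustrative assumptions, not the study's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    def rci_false_negative_rate(effect_size=2.0, reliability=0.8, n=50,
                                n_samples=1000):
        """Simulate pre-post scores with a true change of `effect_size`
        (in observed-score SD units) and count how often the Jacobson-Truax
        RCI fails to flag the change at the conventional 1.96 criterion."""
        sem = np.sqrt(1 - reliability)   # standard error of measurement
        s_diff = np.sqrt(2) * sem        # standard error of the difference
        miss_rate = 0.0
        for _ in range(n_samples):
            true_pre = rng.normal(0.0, np.sqrt(reliability), n)
            pre = true_pre + rng.normal(0.0, sem, n)
            post = true_pre + effect_size + rng.normal(0.0, sem, n)
            rci = (post - pre) / s_diff
            miss_rate += np.mean(np.abs(rci) < 1.96)  # true changes missed
        return miss_rate / n_samples

    print(rci_false_negative_rate())  # miss rate for a 2.0 SD true change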


1999 ◽  
Author(s):  
Adam Galinsky ◽  
Gordon Moskowitz

2020 ◽  
Vol 2020 (14) ◽  
pp. 378-1-378-7
Author(s):  
Tyler Nuanes ◽  
Matt Elsey ◽  
Radek Grzeszczuk ◽  
John Paul Shen

We present a high-quality sky segmentation model for depth refinement and investigate the performance of residual architectures to inform how best to shrink the network. We describe a model that runs in near real-time on a mobile device, present a new, high-quality dataset, and detail a unique weighting to trade off false positives and false negatives in binary classifiers. We show how the optimizations improve bokeh rendering by correcting stereo depth mispredictions in sky regions. We detail techniques used to preserve edges, reject false positives, and ensure generalization to the diversity of sky scenes. Finally, we present a compact model and compare the performance of four popular residual architectures (ShuffleNet, MobileNetV2, ResNet-101, and ResNet-34-like) at constant computational cost.
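The paper's weighting scheme is not specified in this abstract. The standard way to trade off the two error types in a binary classifier is a class-weighted cross-entropy, sketched below in Python with placeholder weights; the specific values and the NumPy formulation are assumptions for illustration.

    import numpy as np

    def weighted_bce(y_true, y_pred, fn_weight=2.0, fp_weight=1.0, eps=1e-7):
        """Binary cross-entropy with separate penalties for the two error types.
        fn_weight scales the loss on positive (sky) pixels, so raising it
        reduces false negatives; fp_weight does the same for false positives.
        The weights here are placeholders, not the paper's values."""
        y_pred = np.clip(y_pred, eps, 1 - eps)
        loss = -(fn_weight * y_true * np.log(y_pred)
                 + fp_weight * (1 - y_true) * np.log(1 - y_pred))
        return loss.mean()

    # Per-pixel labels (1 = sky) and predicted probabilities for a tiny example
    y_true = np.array([1.0, 1.0, 0.0, 0.0])
    y_pred = np.array([0.9, 0.4, 0.2, 0.1])
    print(weighted_bce(y_true, y_pred))

In a sketch like this, raising fp_weight relative to fn_weight penalizes labeling non-sky pixels as sky more heavily, shifting the classifier's operating point along the trade-off the abstract describes.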


2020 ◽  
Author(s):  
Nathaniel Park ◽  
Dmitry Yu. Zubarev ◽  
James L. Hedrick ◽  
Vivien Kiyek ◽  
Christiaan Corbet ◽  
...  

The convergence of artificial intelligence and machine learning with materials science holds significant promise to rapidly accelerate the development timelines of new high-performance polymeric materials. Within this context, we report an inverse design strategy for polycarbonate and polyester discovery based on a recommendation system that proposes polymerization experiments likely to produce materials with targeted properties. Following the system's recommendations, driven by historical ring-opening polymerization results, we carried out experiments targeting specific ranges of monomer conversion and dispersity for the polymers obtained from cyclic lactones and carbonates. The results of the experiments were in close agreement with the recommendation targets, with few false negatives or false positives obtained for each class.
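The recommender itself is not described in this abstract. One simple way such a system could work is to fit a surrogate model on historical experiments and rank candidate conditions by whether their predicted outcomes fall inside the target ranges; the Python sketch below, including the feature set, the random-forest surrogate, and all numbers, is a hypothetical stand-in rather than the authors' method.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical historical results: features = [monomer index, catalyst
    # loading, time (h)]; targets = [monomer conversion, dispersity].
    X_hist = np.array([[0, 0.5, 2.0], [0, 1.0, 4.0],
                       [1, 0.5, 2.0], [1, 1.0, 6.0]])
    y_hist = np.array([[0.45, 1.20], [0.80, 1.35],
                       [0.55, 1.15], [0.90, 1.50]])

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_hist, y_hist)

    def recommend(candidates, conv_range=(0.7, 0.9), disp_range=(1.1, 1.3)):
        """Keep candidate experiments whose predicted conversion and
        dispersity both land in the target ranges (a toy criterion)."""
        preds = model.predict(candidates)
        ok = ((preds[:, 0] >= conv_range[0]) & (preds[:, 0] <= conv_range[1]) &
              (preds[:, 1] >= disp_range[0]) & (preds[:, 1] <= disp_range[1]))
        return [(tuple(c), tuple(p)) for c, p, keep
                in zip(candidates, preds, ok) if keep]

    candidates = np.array([[0, 0.8, 3.0], [1, 0.9, 5.0]])
    print(recommend(candidates))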


2020 ◽  
Author(s):  
Stuart Yeates

A brief introduction to acronyms is given, and the motivation for extracting them in a digital library environment is discussed. A technique for extracting acronyms is presented together with an analysis of the results. The technique is found to have a low number of false negatives and a high number of false positives.

Introduction. Digital library research seeks to build tools that enable access to content while making as few assumptions about the content as possible, since assumptions limit the range of applicability of the tools. Generally, the broader the assumptions, the more widely applicable the tools. For example, keyword-based indexing [5] is based on communications theory and applies to all natural human textual languages (allowances for differences in character sets and similar localisation issues notwithstanding). The algorithm described in this paper makes much stronger assumptions about the content. It assumes textual content that contains acronyms, an assumption which is known to hold for...
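Yeates's algorithm is only summarized here. A minimal Python sketch in the same spirit, matching an all-caps candidate against the initials of nearby preceding words, is shown below; the window size and matching rule are assumptions, and like the technique described, the sketch is deliberately permissive, trading false positives for fewer false negatives.

    import re

    def find_acronyms(text, window=8):
        """Find ALL-CAPS tokens whose letters appear, in order, among the
        initials of the preceding words. Deliberately permissive, so it
        produces false positives rather than false negatives."""
        tokens = text.split()
        results = []
        for i, tok in enumerate(tokens):
            cand = tok.strip("().,;")
            if not re.fullmatch(r"[A-Z]{2,}", cand):
                continue
            preceding = tokens[max(0, i - window):i]
            initials = "".join(w[0].upper() for w in preceding
                               if w[:1].isalpha())
            if cand in initials:
                results.append((cand, " ".join(preceding)))
        return results

    print(find_acronyms("the digital library (DL) community"))
    # [('DL', 'the digital library')]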

