Bayesian Estimation of the Seroprevalence of Antibodies to SARS-CoV-2

Author(s):  
Qunfeng Dong ◽  
Xiang Gao

Accurately estimating the seroprevalence of antibodies to SARS-CoV-2 requires appropriate statistical methods. Bayesian statistics provides a natural framework for accounting for the variability of the specificity and sensitivity of antibody tests, as well as for incorporating prior knowledge of viral infection prevalence. We present a full Bayesian approach for this purpose and demonstrate its utility using a recently published large-scale dataset from the U.S. CDC.


Abstract: Accurate estimation of the seroprevalence of antibodies to severe acute respiratory syndrome coronavirus 2 needs to properly account for the specificity and sensitivity of the antibody tests. In addition, prior knowledge of the extent of viral infection in a population may also be important for adjusting the estimate of seroprevalence. For this purpose, we have developed a Bayesian approach that can incorporate the variability of the specificity and sensitivity of the antibody tests, as well as the prior probability distribution of seroprevalence. We demonstrate the utility of our approach by applying it to a recently published large-scale dataset from the U.S. CDC, with our results providing entire probability distributions of seroprevalence instead of single-point estimates. Our Bayesian code is freely available at https://github.com/qunfengdong/AntibodyTest.
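As a concrete illustration of this kind of model (a minimal sketch with made-up counts, not the authors' released code at the repository above), the prevalence can be sampled jointly with test sensitivity and specificity, since the observed test-positive rate mixes true and false positives:

```python
# Minimal sketch (illustrative placeholder counts, NOT CDC data or the authors'
# code): Bayesian seroprevalence estimation with an imperfect antibody test,
# using a simple random-walk Metropolis sampler.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative survey data
y, n = 50, 3000            # positive tests, total tested
# Illustrative assay validation data
tp, n_sens = 78, 85        # true positives among known-positive samples
tn, n_spec = 368, 371      # true negatives among known-negative samples

def log_post(theta):
    """Log posterior for (prevalence, sensitivity, specificity)."""
    prev, sens, spec = theta
    if not all(0 < v < 1 for v in theta):
        return -np.inf
    # Probability that a sampled individual tests positive
    p_pos = prev * sens + (1 - prev) * (1 - spec)
    lp = stats.binom.logpmf(y, n, p_pos)          # survey likelihood
    lp += stats.binom.logpmf(tp, n_sens, sens)    # sensitivity validation
    lp += stats.binom.logpmf(tn, n_spec, spec)    # specificity validation
    lp += stats.beta.logpdf(prev, 1, 1)           # flat prior on prevalence
    return lp

theta = np.array([0.02, 0.9, 0.99])
samples = []
for i in range(20000):
    prop = theta + rng.normal(scale=[0.005, 0.01, 0.003])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 5000:                                 # discard burn-in
        samples.append(theta[0])

print("posterior mean prevalence:", np.mean(samples))
print("95% credible interval:", np.percentile(samples, [2.5, 97.5]))
```

The posterior is an entire distribution over prevalence, so credible intervals fall out directly rather than being attached to a single-point estimate.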


2020 ◽  
Author(s):  
Laetitia Zmuda ◽  
Charlotte Baey ◽  
Paolo Mairano ◽  
Anahita Basirat

It is well known that individuals can identify novel words in a stream of an artificial language using statistical dependencies. While the underlying computations are thought to be similar from one stream to another (e.g., transitional probabilities between syllables), performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge about co-occurrences of elements in speech, and this knowledge intervenes during verbal statistical learning. The focus of previous studies was on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e., the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model in which correlations were introduced. We discuss the implications of our findings for the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.
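For readers unfamiliar with the underlying computation, the sketch below (using hypothetical trisyllabic "words", not the study's materials) shows how forward transitional probabilities between adjacent syllables separate within-word from across-word transitions:

```python
# Minimal sketch (assumed materials, not the study's languages): forward
# transitional probabilities (TPs) between adjacent syllables in an artificial
# speech stream. Word boundaries appear where the TP dips.
import random
from collections import Counter

def syllabify(word):
    """Split a word into two-letter syllables."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

# Hypothetical lexicon of three trisyllabic words
words = ["tupiro", "golabu", "bidaku"]
random.seed(1)
stream = [s for _ in range(100) for s in syllabify(random.choice(words))]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def tp(a, b):
    """P(b | a): forward transitional probability from syllable a to b."""
    return pair_counts[(a, b)] / first_counts[a]

# Within-word transitions have high TP; across-word transitions have low TP.
print("within-word  tu->pi:", round(tp("tu", "pi"), 2))
print("across-word  ro->go:", round(tp("ro", "go"), 2))
```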


2018 ◽  
Vol 68 (12) ◽  
pp. 2857-2859
Author(s):  
Cristina Mihaela Ghiciuc ◽  
Andreea Silvana Szalontay ◽  
Luminita Radulescu ◽  
Sebastian Cozma ◽  
Catalina Elena Lupusoru ◽  
...  

There is increasing interest in the analysis of salivary biomarkers for medical practice. The objective of this article was to identify the specificity and sensitivity of the quantification methods used in biosensors or portable devices for the determination of salivary cortisol and salivary α-amylase. No biosensors or portable devices for salivary amylase and cortisol are currently used on a large scale in clinical studies. Such devices would be useful for more real-time assessment in future psychological research.


Author(s):  
Kahler W. Stone ◽  
Kristina W. Kintziger ◽  
Meredith A. Jagger ◽  
Jennifer A. Horney

While the health impacts of the COVID-19 pandemic on frontline health care workers have been well described, the effects of the prolonged response on the U.S. public health workforce have not been adequately characterized. A cross-sectional survey of public health professionals was conducted to assess mental and physical health, risk and protective factors for burnout, and short- and long-term career decisions during the pandemic response. The survey was completed online using the Qualtrics survey platform. Descriptive statistics and prevalence ratios (with 95% confidence intervals) were calculated. Among responses received between 23 August and 11 September 2020, 66.2% of public health workers reported burnout. Those with more work experience (1–4 vs. <1 years: prevalence ratio (PR) = 1.90, 95% confidence interval (CI) = 1.08–3.36; 5–9 vs. <1 years: PR = 1.89, CI = 1.07–3.34) or working in academic settings (vs. practice: PR = 1.31, CI = 1.08–1.58) were the most likely to report burnout. As of September 2020, 23.6% fewer respondents planned to remain in the U.S. public health workforce for three or more years compared with their retrospectively reported January 2020 plans. A large-scale public health emergency response places unsustainable burdens on an already underfunded and understaffed public health workforce. Pandemic-related burnout threatens the future of the U.S. public health workforce at a time when many challenges related to the ongoing COVID-19 response remain unaddressed.
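As a reference for the reported effect sizes, the following sketch (illustrative counts, not the survey data) computes a prevalence ratio and a Wald-type 95% confidence interval on the log scale:

```python
# Minimal sketch (hypothetical 2x2 counts, NOT the study's data): prevalence
# ratio of burnout between an exposed group and a reference group, with a
# Wald-type 95% confidence interval computed on the log scale.
import math

cases_exposed, n_exposed = 60, 80      # e.g., 1-4 years of experience
cases_ref, n_ref = 15, 38              # e.g., <1 year of experience

p1, p0 = cases_exposed / n_exposed, cases_ref / n_ref
pr = p1 / p0
# Standard error of log(PR) for two independent proportions
se_log_pr = math.sqrt((1 - p1) / cases_exposed + (1 - p0) / cases_ref)
lo = math.exp(math.log(pr) - 1.96 * se_log_pr)
hi = math.exp(math.log(pr) + 1.96 * se_log_pr)
print(f"PR = {pr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```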


Author(s):  
Jin Zhou ◽  
Qing Zhang ◽  
Jian-Hao Fan ◽  
Wei Sun ◽  
Wei-Shi Zheng

Abstract: Recent image aesthetic assessment methods have achieved remarkable progress due to the emergence of deep convolutional neural networks (CNNs). However, these methods focus primarily on predicting the generally perceived preference for an image, which limits their practical use, since each user may have a completely different preference for the same image. To address this problem, this paper presents a novel approach for predicting personalized image aesthetics that fit an individual user's personal taste. We achieve this in a coarse-to-fine manner, by joint regression and learning from pairwise rankings. Specifically, we first collect a small set of personal images from a user and invite him/her to rank the preference of some randomly sampled image pairs. We then search for the K-nearest neighbors of the personal images within a large-scale dataset labeled with average human aesthetic scores, and use these images and their associated scores to train a generic aesthetic assessment model by CNN-based regression. Next, we fine-tune the generic model to accommodate the personal preference by training over the rankings with a pairwise hinge loss. Experiments demonstrate that our method can effectively learn personalized image aesthetic preferences, clearly outperforming state-of-the-art methods. Moreover, we show that the learned personalized image aesthetics benefit a wide variety of applications.
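The fine-tuning step relies on a standard pairwise hinge ranking loss; a minimal sketch (not the authors' code, with an assumed margin value and a hypothetical `model`) looks like this:

```python
# Minimal sketch (assumptions, not the paper's implementation): a pairwise
# hinge ranking loss for fine-tuning a generic aesthetic score regressor on a
# user's pairwise preferences.
import torch

def pairwise_hinge_loss(score_preferred, score_other, margin=0.5):
    """Penalize pairs where the preferred image is not scored at least
    `margin` higher than the other image in the pair."""
    return torch.clamp(margin - (score_preferred - score_other), min=0).mean()

# Hypothetical usage with a CNN regressor mapping an image batch to scores:
# s_a = model(images_preferred)   # shape (batch,)
# s_b = model(images_other)
# loss = pairwise_hinge_loss(s_a, s_b)
# loss.backward()
```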


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis of printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are lost after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with analyzing the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural from synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract: In general, intrinsic image decomposition algorithms interpret shading as one unified component that includes all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect (ambient light and shadows) subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects so that the disentanglement capability can be analyzed. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and SRD datasets.
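To make the fine-grained shading model concrete, the sketch below assumes a Lambertian image formation model in which the image is the product of albedo and a shading term split into direct and indirect parts; this is an illustration of the decomposition target, not the ShadingNet code:

```python
# Minimal sketch (an assumed Lambertian formation model, not the paper's
# network): composing an image from reflectance (albedo) and a shading term
# split into direct illumination and indirect effects (ambient light, shadows).
import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 4
albedo = rng.random((h, w, 3))                  # per-pixel RGB reflectance
shading_direct = rng.random((h, w, 1))          # direct illumination
shading_indirect = 0.2 * rng.random((h, w, 1))  # ambient light and shadows

shading = shading_direct + shading_indirect      # unified shading (prior work)
image = albedo * shading                         # I = A * S

# A decomposition network is trained to invert this: predict albedo,
# shading_direct, and shading_indirect from `image` alone.
print(image.shape)
```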


2021 ◽  
Vol 13 (5) ◽  
pp. 905
Author(s):  
Chuyi Wu ◽  
Feng Zhang ◽  
Junshi Xia ◽  
Yichen Xu ◽  
Guoqing Li ◽  
...  

Building damage status is vital for planning rescue and reconstruction after a disaster, yet it is hard to detect and to judge the damage level. Most existing studies focus on binary classification, which divides the attention of the model. In this study, we propose a Siamese neural network that can localize and classify damaged buildings in one pass. The main parts of this network are a variety of attention U-Nets using different backbones. The attention mechanism enables the network to focus on the effective features and channels, reducing the impact of useless features. We train the networks on the xBD dataset, a large-scale dataset for the advancement of building damage assessment, and compare their balanced F (F1) scores. The scores show that SEresNeXt with an attention mechanism performs best, with an F1 score of 0.787. To improve accuracy further, we fused the results and obtained a best overall F1 score of 0.792. To verify the transferability and robustness of the model, we selected data from the Maxar Open Data Program for two recent disasters and investigated performance on them. Visual comparison shows that our model is robust and transferable.
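A simple way to fuse several models' outputs, as described above, is to average their per-pixel class probabilities and score the fused prediction with a balanced (macro) F1; the sketch below uses random illustrative arrays rather than xBD data or the paper's fusion scheme:

```python
# Minimal sketch (illustrative arrays, NOT the xBD pipeline): fuse per-pixel
# class probabilities from several damage-classification models by averaging,
# then score the fused prediction with a balanced (macro) F1.
import numpy as np
from sklearn.metrics import f1_score

n_pixels, n_classes = 1000, 4               # e.g., no/minor/major/destroyed
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, n_classes, n_pixels)

# Hypothetical softmax outputs from three backbones (e.g., attention U-Nets)
model_probs = [rng.dirichlet(np.ones(n_classes), n_pixels) for _ in range(3)]

fused = np.mean(model_probs, axis=0)         # simple probability averaging
prediction = fused.argmax(axis=1)

print("balanced F1:", f1_score(ground_truth, prediction, average="macro"))
```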


Author(s):  
Meysam Goodarzi ◽  
Darko Cvetkovski ◽  
Nebojsa Maletic ◽  
Jesús Gutiérrez ◽  
Eckhard Grass

Abstract: Clock synchronization has always been a major challenge when designing wireless networks. This work focuses on tackling the time synchronization problem in 5G networks by adopting a hybrid Bayesian approach for clock offset and skew estimation. Furthermore, we provide an in-depth analysis of the impact of the proposed approach on a synchronization-sensitive service, i.e., localization. Specifically, we expose the substantial benefit of belief propagation (BP) running on factor graphs (FGs) in achieving precise network-wide synchronization. Moreover, we take advantage of Bayesian recursive filtering (BRF) to mitigate the time-stamping error in pairwise synchronization. Finally, we reveal the merit of hybrid synchronization by dividing a large-scale network into local synchronization domains and applying the most suitable synchronization algorithm (BP- or BRF-based) to each domain. The performance of the hybrid approach is then evaluated in terms of the root mean square errors (RMSEs) of the clock offset, clock skew, and the position estimation. According to the simulations, in spite of the simplifications in the hybrid approach, RMSEs of clock offset, clock skew, and position estimation remain below 10 ns, 1 ppm, and 1.5 m, respectively.
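Bayesian recursive filtering of clock offset and skew can be illustrated with a linear Kalman filter over the state [offset, skew]; the following is a minimal sketch under assumed noise levels and a simplified measurement model, not the paper's formulation:

```python
# Minimal sketch (assumed model and noise values, not the paper's setup):
# Kalman filtering of a clock's offset and skew from noisy offset measurements,
# e.g., as obtained from two-way timestamp exchanges.
import numpy as np

dt = 0.1                       # seconds between synchronization rounds
F = np.array([[1.0, dt],       # offset grows by skew * dt between rounds
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])     # only the offset is measured directly
Q = np.diag([1e-18, 1e-16])    # process noise (illustrative values)
R = np.array([[(5e-9) ** 2]])  # time-stamping noise, ~5 ns std (illustrative)

x = np.array([[0.0], [0.0]])   # state estimate: [offset (s), skew (s/s)]
P = np.diag([1e-12, 1e-10])    # initial uncertainty

def kalman_step(x, P, z):
    """One predict/update cycle given a scalar offset measurement z."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a clock with 20 ns offset and 0.5 ppm skew, observed with noise
rng = np.random.default_rng(0)
true_offset, true_skew = 20e-9, 0.5e-6
for k in range(200):
    z = true_offset + true_skew * k * dt + rng.normal(0, 5e-9)
    x, P = kalman_step(x, P, z)

print(f"estimated offset: {x[0, 0] * 1e9:.1f} ns, skew: {x[1, 0] * 1e6:.2f} ppm")
```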

