Poorly known aspects of flattening the curve of COVID-19

Author(s):  
Alain Debecker ◽  
Theodore Modis

Abstract
This work concerns the much-discussed flattening of the curve of COVID-19. The diffusion of the virus is analyzed with logistic-curve fits on the 25 countries most affected at the time of writing and in which the diffusion curve was more than 95% complete. A negative correlation observed between the final number of infections and the slope of the logistic curve corroborates a result obtained long ago via an extensive simulation study. There are both theoretical arguments and experimental evidence for the existence of such a correlation. Flattening the curve delays the curve's midpoint, which entails an increase in the final number of infections; it is possible that more lives are lost in the end by this process. Our analysis also permits evaluation of the various governments' interventions in terms of rapidity of response, efficiency of the actions taken (the amount of flattening achieved), and the number of days by which the curve was delayed. Not surprisingly, an early, decisive response proves to be the optimal strategy among the countries studied.
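A logistic fit of the kind described above can be sketched as follows. The three-parameter form, the synthetic case counts, and the starting values are illustrative assumptions, not the authors' actual data or fitting pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Three-parameter logistic curve: K is the final number of
    infections (the ceiling), r the slope, t0 the midpoint in days."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic cumulative-case data (illustrative, not real counts).
t = np.arange(0, 120, dtype=float)
true_K, true_r, true_t0 = 50_000.0, 0.15, 40.0
cases = logistic(t, true_K, true_r, true_t0)

# Fit the curve; "flattening" corresponds to a smaller fitted r,
# which pushes the midpoint t0 later.
(K_hat, r_hat, t0_hat), _ = curve_fit(
    logistic, t, cases, p0=[cases.max(), 0.1, 60.0]
)
```

The negative correlation reported in the abstract is between K (final infections) and r (slope) across countries: flatter curves (smaller r) tend to come with larger K.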

2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Abstract
Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian); instead, their distribution is often skewed. To handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations, as well as an estimator of the transformation parameter that is robust to outliers, so that the transformed data can be approximately normal in the center while a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
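The classical Yeo–Johnson transformation mentioned above can be written down directly; this sketch shows only the standard transformation for a given parameter λ, not the paper's modified version or its robust estimator of λ:

```python
import math

def yeo_johnson(y: float, lam: float) -> float:
    """Classical Yeo-Johnson transformation of a single value y
    with transformation parameter lam (lambda).  lam = 1 leaves
    the data unchanged; lam < 1 shrinks the right tail."""
    if y >= 0:
        if abs(lam) > 1e-12:
            return ((y + 1.0) ** lam - 1.0) / lam
        return math.log1p(y)          # limit as lam -> 0
    if abs(lam - 2.0) > 1e-12:
        return -(((-y + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    return -math.log1p(-y)            # limit as lam -> 2
```

The robustness issue the abstract describes arises when λ is chosen by maximum likelihood over all observations, so that a few extreme values dominate the choice of λ.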


Author(s):  
Ashutosh Singh ◽  
Ankur Kumar ◽  
Prateek Kumar ◽  
Taniya Bhardwaj ◽  
Rajanish Giri ◽  
...  

Aims: c-Myc, along with its partner MAX, regulates the expression of several genes, leading to an oncogenic phenotype. The MAX-interacting interface of c-Myc is disordered and uncharacterized for small-molecule binding. Salvianolic acid B possesses numerous therapeutic properties, including anticancer activity. The current study was designed to elucidate the interaction of Sal_Ac_B with the disordered bHLH domain of c-Myc using computational and biophysical techniques. Materials & methods: The binding of Sal_Ac_B with Myc was studied using computational and biophysical techniques, including molecular docking and simulation, fluorescence lifetime, circular dichroism and anisotropy. Results & conclusions: The study demonstrated a high binding potential of Sal_Ac_B against the disordered Myc peptide. Binding of the compound leads to an overall conformational change in Myc. Moreover, an extensive simulation study showed stable Sal_Ac_B/Myc binding.


2011 ◽  
Vol 2011 ◽  
pp. 1-21 ◽  
Author(s):  
Malte Brinkmeyer ◽  
Thasso Griebel ◽  
Sebastian Böcker

Supertree methods allow the reconstruction of large phylogenetic trees by combining smaller trees with overlapping leaf sets into a single, more comprehensive supertree. The most commonly used supertree method, matrix representation with parsimony (MRP), produces accurate supertrees but is rather slow due to the underlying hard optimization problem. In this paper, we present an extensive simulation study comparing the performance of MRP and the polynomial supertree methods MinCut Supertree, Modified MinCut Supertree, Build-with-distances, PhySIC, PhySIC_IST, and super distance matrix. We consider both quality and resolution of the reconstructed supertrees. Our findings illustrate the trade-off between accuracy and running time in supertree construction, as well as the pros and cons of voting- and veto-based supertree approaches. Based on our results, we make some general suggestions for supertree methods yet to come.
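MRP's first step is to encode every non-trivial clade of every input tree as one binary character, with taxa absent from a tree scored as missing. A minimal sketch of that encoding step (the tree representation used here, a pair of taxon set and clade list, is a simplifying assumption, and the subsequent parsimony search is omitted):

```python
def mrp_matrix(trees, all_taxa):
    """Matrix Representation with Parsimony encoding: one binary
    character per non-trivial clade of each input tree.  A taxon
    gets '1' if it is inside the clade, '0' if it is in the tree
    but outside the clade, and '?' if it is absent from the tree."""
    columns = []
    for taxa, clades in trees:          # tree = (its taxon set, its clades)
        for clade in clades:
            col = {}
            for taxon in all_taxa:
                if taxon not in taxa:
                    col[taxon] = "?"
                elif taxon in clade:
                    col[taxon] = "1"
                else:
                    col[taxon] = "0"
            columns.append(col)
    # One character string (matrix row) per taxon.
    return {t: "".join(col[t] for col in columns) for t in all_taxa}

# Two small input trees with overlapping leaf sets:
trees = [({"A", "B", "C"}, [{"A", "B"}]),
         ({"B", "C", "D"}, [{"C", "D"}])]
rows = mrp_matrix(trees, ["A", "B", "C", "D"])
```

A parsimony search over this matrix then yields the supertree, which is where the hard optimization problem mentioned in the abstract comes in.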


1997 ◽  
Vol 56 (2) ◽  
pp. R485-R488 ◽  
Author(s):  
A. V. Kolobov ◽  
M. Kondo ◽  
H. Oyanagi ◽  
R. Durny ◽  
A. Matsuda ◽  
...  

2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Adnan Agbaria ◽  
Muhamad Hugerat ◽  
Roy Friedman

Data dissemination is an important service in mobile ad hoc networks (MANETs). The main objective of this paper is to present a dissemination protocol, called locBcast, which utilizes positioning information to obtain efficient dissemination trees with low control overhead. This paper includes an extensive simulation study that compares locBcast with selfP, dominantP, flooding, and a couple of probabilistic-/counter-based protocols. It is shown that locBcast performs similarly to or better than those protocols, and is especially useful in the following challenging environments: the message sizes are large, the network is dense, and nodes are highly mobile.


2010 ◽  
Vol 138 (11) ◽  
pp. 1674-1678 ◽  
Author(s):  
J. REICZIGEL ◽  
J. FÖLDI ◽  
L. ÓZSVÁRI

Summary
Estimation of the prevalence of disease, including construction of confidence intervals, is essential in surveys for screening as well as in monitoring disease status. In most analyses of survey data it is implicitly assumed that the diagnostic test has a sensitivity and specificity of 100%. However, this assumption is invalid in most cases. Furthermore, asymptotic methods using the normal distribution as an approximation of the true sampling distribution may not preserve the desired nominal confidence level. Here we propose exact two-sided confidence intervals for the prevalence of disease, taking into account the sensitivity and specificity of the diagnostic test. We illustrate the advantage of the methods with results of an extensive simulation study and real-life examples.
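The basic idea of adjusting a prevalence interval for imperfect test accuracy can be sketched as follows. This is not the exact method of the paper; it is a simpler illustration combining a Clopper–Pearson interval for the apparent prevalence with the Rogan–Gladen correction for sensitivity and specificity:

```python
from scipy.stats import beta

def adjusted_prevalence_ci(x, n, se, sp, conf=0.95):
    """Exact (Clopper-Pearson) interval for the apparent prevalence
    of x positives out of n tested, then the Rogan-Gladen correction
    p_true = (p_apparent + sp - 1) / (se + sp - 1) for a test with
    sensitivity se and specificity sp; endpoints clipped to [0, 1]."""
    a = 1.0 - conf
    lo = beta.ppf(a / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - a / 2, x + 1, n - x) if x < n else 1.0

    def correct(p):
        return min(1.0, max(0.0, (p + sp - 1.0) / (se + sp - 1.0)))

    return correct(lo), correct(hi)

# 30 positives among 100 tested, with a 90%-sensitive,
# 95%-specific test (illustrative numbers):
lo, hi = adjusted_prevalence_ci(30, 100, se=0.90, sp=0.95)
```

Note that without the correction (se = sp = 1) this reduces to the ordinary exact binomial interval, which illustrates why assuming a perfect test biases prevalence estimates.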


2021 ◽  
Vol 263 ◽  
pp. 112565
Author(s):  
Peter Somkuti ◽  
Christopher W. O'Dell ◽  
Sean Crowell ◽  
Philipp Köhler ◽  
Gregory R. McGarragh ◽  
...  

2012 ◽  
Vol 60 (1) ◽  
pp. 109-113 ◽  
Author(s):  
M Ershadul Haque ◽  
Jafar A Khan

Classical inference considers sampling variability to be the only source of uncertainty and does not address the issue of bias caused by contamination. Naive robust intervals replace the classical estimates with their robust counterparts without considering the possible bias of the robust point estimates. Consequently, the asymptotic coverage proportion of these intervals, at any nominal level, will invariably tend to zero for any proportion of contamination.

In this study, we attempt to achieve reasonable coverage percentages by constructing globally robust confidence intervals that adjust for the bias of the robust point estimates. We improve these globally robust intervals by considering the direction of the bias of the robust estimates used. We compare the proposed intervals with the existing ones through an extensive simulation study. The proposed methods maintain reasonable coverage percentages, while the existing methods show very poor coverage as the sample size increases.

DOI: http://dx.doi.org/10.3329/dujs.v60i1.10347
Dhaka Univ. J. Sci. 60(1): 109-113, 2012 (January)
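The core idea of a globally robust interval, widening a robust interval by an allowance for the worst-case bias of the estimator under ε-contamination, can be sketched with standard-library tools. The choices below (median as location estimate, MAD as scale, and the maximum asymptotic bias of the median at the normal model) are illustrative assumptions, not the intervals actually constructed in the paper:

```python
import math
import statistics

def globally_robust_interval(data, eps, conf=0.95):
    """Median-based confidence interval widened by the maximum
    asymptotic bias of the median under eps-contamination at the
    normal model (illustrative sketch)."""
    n = len(data)
    med = statistics.median(data)
    # MAD rescaled to be consistent for the normal standard deviation.
    sigma = 1.4826 * statistics.median(abs(x - med) for x in data)
    nd = statistics.NormalDist()
    z = nd.inv_cdf(0.5 + conf / 2.0)
    # Asymptotic SE of the median at the normal: sqrt(pi/2) * sigma / sqrt(n).
    half = z * math.sqrt(math.pi / 2.0) * sigma / math.sqrt(n)
    # Maximum asymptotic bias of the median under eps-contamination.
    bias = sigma * nd.inv_cdf(1.0 / (2.0 * (1.0 - eps)))
    return med - half - bias, med + half + bias
```

Unlike a naive robust interval, the half-width here does not shrink to zero as n grows: the bias term remains, which is what keeps the asymptotic coverage from collapsing under contamination.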

