Optimal design for phase 2 studies of SARS-CoV-2 antiviral drugs

Author(s):  
James A Watson ◽  
Stephen Kissler ◽  
Nicholas PJ Day ◽  
Yonatan H. Grad ◽  
Nicholas J White

There is no agreed methodology for pharmacometric assessment of candidate antiviral drugs in COVID-19. The most widely used measure of virological response in clinical trials so far is the time to viral clearance, assessed by qPCR of viral nucleic acid in eluates from serial nasopharyngeal swabs. We posited that the rate of viral clearance would have better discriminatory value. Using a pharmacodynamic model fit to individual SARS-CoV-2 virus clearance data from 46 uncomplicated COVID-19 infections in a cohort of prospectively followed adults, we simulated qPCR viral load data to compare type 2 errors when using time to clearance and rate of clearance under varying antiviral effects, sample sizes, sampling frequencies and durations of follow-up. The rate of viral clearance is a uniformly superior endpoint compared to time to clearance with respect to type 2 error, and it is not dependent on initial viral load or assay sensitivity. For greatest efficiency, pharmacometric assessments should be conducted in early illness, and daily qPCR samples should be taken over 7 to 10 days in each patient studied. Adaptive randomisation and early stopping for success permit more rapid identification of active interventions.
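As an illustration of the comparison described above, the following is a minimal Python sketch, not the authors' fitted pharmacodynamic model: it assumes a simple log-linear viral decline with Gaussian qPCR noise, an antiviral effect that multiplies the clearance slope, and illustrative parameter values, then estimates by simulation the power (one minus the type 2 error) of a rate-of-clearance endpoint versus a time-to-clearance endpoint.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_trial(n_per_arm=30, days=10, slope_ctrl=-0.5, effect=1.3,
                   v0_mean=7.0, v0_sd=1.5, noise_sd=1.0, lod=2.0):
    """Simulate daily log10 viral loads for a control arm and a treated arm.

    Assumes log10(VL) = baseline + slope * day, with between-patient variation
    in the baseline and Gaussian assay noise; the antiviral 'effect' multiplies
    the clearance slope in the treated arm. All parameter values are illustrative.
    """
    t = np.arange(days + 1)
    arms = {}
    for arm, mult in (("control", 1.0), ("drug", effect)):
        v0 = rng.normal(v0_mean, v0_sd, n_per_arm)            # baseline log10 VL
        slope = slope_ctrl * mult
        vl = v0[:, None] + slope * t + rng.normal(0.0, noise_sd, (n_per_arm, days + 1))
        arms[arm] = vl
    return t, arms, lod

def rate_endpoint(t, vl):
    """Per-patient clearance rate: least-squares slope of log10 VL vs time."""
    return np.array([np.polyfit(t, y, 1)[0] for y in vl])

def time_endpoint(t, vl, lod):
    """Per-patient time to clearance: first day below the limit of detection,
    censored at the end of follow-up if never reached."""
    times = []
    for y in vl:
        below = np.where(y < lod)[0]
        times.append(t[below[0]] if below.size else t[-1])
    return np.array(times)

def power(n_sim=500, alpha=0.05, **kw):
    """Empirical power (1 - type 2 error) of the two candidate endpoints."""
    hits_rate = hits_time = 0
    for _ in range(n_sim):
        t, arms, lod = simulate_trial(**kw)
        r_c, r_d = rate_endpoint(t, arms["control"]), rate_endpoint(t, arms["drug"])
        t_c, t_d = time_endpoint(t, arms["control"], lod), time_endpoint(t, arms["drug"], lod)
        hits_rate += stats.ttest_ind(r_c, r_d).pvalue < alpha
        hits_time += stats.mannwhitneyu(t_c, t_d, alternative="two-sided").pvalue < alpha
    return hits_rate / n_sim, hits_time / n_sim

p_rate, p_time = power(n_per_arm=30, effect=1.3)
print(f"power, rate-of-clearance endpoint: {p_rate:.2f}")
print(f"power, time-to-clearance endpoint: {p_time:.2f}")
```

With more frequent sampling or longer follow-up, the gap in power between the two endpoints can be explored by changing `days` and `n_per_arm` in this sketch.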

Pathogens ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 752
Author(s):  
Chung-Guei Huang ◽  
Avijit Dutta ◽  
Ching-Tai Huang ◽  
Pi-Yueh Chang ◽  
Mei-Jen Hsiao ◽  
...  

A total of 15 RT-PCR confirmed COVID-19 patients were admitted to our hospital during the initial outbreak in Taiwan. The average time to virus clearance was delayed in seven patients, 24.14 ± 4.33 days compared to 10.25 ± 0.56 days post-symptom onset (PSO) in the other eight patients. There was a strong antibody response in patients with viral persistence at the pharynx, with peak serum antibody values of 677.2 ± 217.8 vs. 76.70 ± 32.11 in patients with delayed versus rapid virus clearance. The patients with delayed viral clearance had excessive antibodies of compromised quality at an early stage, with a delay in peak virus neutralization efficacy, 34.14 ± 7.15 versus 12.50 ± 2.35 days PSO in patients with rapid virus clearance. The weak antibody response of patients with rapid viral clearance was nevertheless effective, with substantial neutralization efficacy comparable to that of patients with delayed virus clearance, 35.70 ± 8.78 versus 41.37 ± 11.49. Human Cytokine 48-Plex Screening of the serial serum samples revealed elevated concentrations of proinflammatory cytokines and chemokines in a deceased patient with delayed virus clearance and severe disease. The levels were comparatively lower in the other two patients who suffered from severe disease but eventually survived.


Epidemics ◽  
2021 ◽  
pp. 100454
Author(s):  
Keisuke Ejima ◽  
Kwang Su Kim ◽  
Christina Ludema ◽  
Ana I. Bento ◽  
Shoya Iwanami ◽  
...  

2013 ◽  
Vol 144 (5) ◽  
pp. S-977 ◽  
Author(s):  
Mary Jane Burton ◽  
Alan D. Penman ◽  
Imran Sunesara ◽  
Casey A. Young ◽  
Brendan M. McGuire ◽  
...  

2020 ◽  
Author(s):  
Hao Cheng

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT REQUEST OF AUTHOR.] Universities commercialize their discoveries at an increasing pace in order to maximize their economic impact and generate additional funding for research. They form technology transfer offices (TTOs) to evaluate the commercial value of university inventions and choose the most promising ones to patent and commercialize. Uncertainties and asymmetric information in project selection make the TTO's choices difficult and can cause both type 1 error (forgoing valuable discoveries) and type 2 error (selecting low-value discoveries). In this dissertation, I examine the TTO's project selection process and the factors that influence the choice of academic inventions for patenting and commercialization, the type 1 error committed, and the final licensing outcome. The dissertation contains three essays. In the first essay, I analyze project selection under uncertainty when both the quality of the proposed project and the motives of the applicant are uncertain. Some inventors may have an incentive to disguise the true quality and commercial value of their discoveries in order to conform to organizational expectations of disclosure while retaining rights to potentially pursue commercialization of their discoveries outside the organization's boundaries for their own benefit. Inventors may equally, ex post, lose interest in the commercialization of their invention due to competing job demands. I develop a model to examine the decision process of a university TTO responsible for the commercialization of academic inventions under such circumstances. The model describes the conditions that prompt type 1 and type 2 errors and allows inferences for minimizing each. Little is known about the factors that make project selection effective or ineffective, and there has been limited empirical analysis in this area. The few empirical studies that are available examine the sources of type 2 error, but there is no empirical work that analyzes type 1 error and its contributing factors. Research on type 1 error encounters two main difficulties: first, it is difficult to ascertain the decision process, and second, it is challenging to approximate the counterfactual. Using data from the TTO of the University of Missouri, in the second essay I study the factors that influence the project selection process of the TTO and the ex post type 1 error realized. In most cases, universities pursue commercialization of their inventions through licensing. A few empirical studies have researched the factors that affect licensing and their relative importance. In the third essay, I examine the characteristics of university inventions that are licensed, using almost 10 years of data on several hundred inventions, their characteristics, and their licensing status.


2014 ◽  
Vol 38 (4) ◽  
pp. 196-197 ◽  
Author(s):  
Andy J. Owen ◽  
Deepak Mirok ◽  
Loopinder Sood

2021 ◽  
Author(s):  
Xintong Li ◽  
Lana YH Lai ◽  
Anna Ostropolets ◽  
Faaizah Arshad ◽  
Eng Hooi Tan ◽  
...  

Using real-world data and past vaccination data, we conducted a large-scale experiment to quantify the bias, precision and timeliness of different study designs for estimating historical background (expected) rates compared to post-vaccination (observed) rates of safety events for several vaccines. We used negative (not causally related) and positive control outcomes; the latter were synthetically generated true safety signals with incidence rate ratios ranging from 1.5 to 4. Observed vs. expected analysis using within-database historical background rates is a sensitive but unspecific method for the identification of potential vaccine safety signals. Despite good discrimination, most analyses showed a tendency to overestimate risks, with 20%-100% type 1 error but low (0% to 20%) type 2 error in the large databases included in our study. Efforts to improve the comparability of background and post-vaccine rates, including age-sex adjustment and anchoring background rates around a visit, reduced type 1 error and improved precision, but residual systematic error persisted. Additionally, empirical calibration dramatically reduced type 1 error to nominal, but at the cost of increasing type 2 error. Our study found that within-database background rate comparison is a sensitive but unspecific method to identify vaccine safety signals. The method is positively biased, with low (≤20%) type 2 error, and 20% to 100% of negative control outcomes were incorrectly identified as safety signals due to type 1 error. Age-sex adjustment and anchoring background rate estimates around a healthcare visit are useful strategies to reduce false positives, with little impact on type 2 error. Sufficient sensitivity was reached for the identification of safety signals by month 1-2 for vaccines with quick uptake (e.g., seasonal influenza), but much later (up to month 9) for vaccines with slower uptake (e.g., varicella-zoster or papillomavirus). Finally, empirical calibration using negative control outcomes reduced type 1 error to nominal at the cost of increasing type 2 error.
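The observed-versus-expected comparison at the core of this design can be sketched as follows. This is a schematic Poisson screen with illustrative numbers, not the study's analysis pipeline, and it omits the age-sex adjustment, visit anchoring and empirical calibration evaluated in the paper.

```python
from scipy.stats import poisson

def observed_vs_expected(observed_events, person_years, background_rate):
    """Compare an observed post-vaccination event count against the count
    expected from a historical background rate (events per person-year).

    Returns the rate ratio and a one-sided Poisson p-value for the null that
    the post-vaccination rate does not exceed the background rate.
    """
    expected = background_rate * person_years
    rate_ratio = observed_events / expected
    # P(X >= observed | X ~ Poisson(expected)), i.e. survival function at obs - 1
    p_value = poisson.sf(observed_events - 1, expected)
    return rate_ratio, p_value

# Illustrative numbers only (not from the study): 18 observed events against
# 10 expected from the background rate.
rr, p = observed_vs_expected(observed_events=18, person_years=50_000,
                             background_rate=2.0e-4)
print(f"rate ratio = {rr:.2f}, one-sided p = {p:.4f}")
# A 'signal' would be flagged when p falls below a chosen threshold; the
# abstract shows that, without calibration, such a screen has a high type 1 error.
```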


Author(s):  
Giuseppe Lippi ◽  
Fabian Sanchis-Gomar ◽  
Gianfranco Cervellin

Background: The pathogenesis of different types of myocardial infarction (MI) differs widely, so that accurate and timely differential diagnosis is essential for tailoring treatments according to the underlying causal mechanisms. As the measurement of cardiac troponins is a mainstay for diagnosis and management of MI, we performed a systematic literature analysis of published works which concomitantly measured cardiac troponins in type 1 and 2 MI. Methods: The electronic search was conducted in Medline, Scopus and Web of Science using the keywords “myocardial infarction” AND “type(-)2” OR “type II” AND “troponin” in “Title/Abstract/Keywords”, with no language restriction and date limited from 2007 to the present. Results: Overall, 103 documents were identified, but 95 were excluded because a precise comparison of troponin values in patients with type 1 and 2 MI was unavailable. Therefore, eight studies were finally selected for our analysis. Two studies used high-sensitivity (HS) immunoassays for measuring cardiac troponin T (HS-TnT), one used a HS immunoassay for measuring cardiac troponin I (HS-TnI), whereas the remaining used conventional methods for measuring TnI. In all studies, regardless of type and assay sensitivity, troponin values were higher in type 1 than in type 2 MI. The weighted percentage difference between type 1 and 2 MI was 32% for TnT and 91% for TnI, respectively. Post-discharge mortality obtained from pooling individual data was instead three times higher in type 2 than in type 1 MI. Conclusions: The results of our analysis suggest that the value of cardiac troponins is consistently higher in type 1 than in type 2 MI.
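The abstract does not state the exact pooling method behind the weighted percentage difference, so the following sketch simply assumes sample-size weighting of per-study percentage differences; the study values used here are hypothetical.

```python
def weighted_percentage_difference(studies):
    """Sample-size-weighted percentage difference in troponin between
    type 1 and type 2 MI, pooled across studies.

    Each study is (n, troponin_type1, troponin_type2). The per-study
    percentage difference is (type1 - type2) / type2 * 100, and weighting
    by n is an assumption, not the review's stated method.
    """
    total_n = sum(n for n, _, _ in studies)
    return sum(n * (t1 - t2) / t2 * 100 for n, t1, t2 in studies) / total_n

# Hypothetical per-study values (ng/L), for illustration only:
tnt_studies = [(250, 310.0, 240.0), (120, 95.0, 70.0)]
print(f"pooled weighted difference: {weighted_percentage_difference(tnt_studies):.0f}%")
```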


2021 ◽  
Vol 30 (9) ◽  
pp. 11-17
Author(s):  
Hoang Vu Mai Phuong ◽  
Ung Thi Hong Trang ◽  
Nguyen Vu Son ◽  
Le Thi Thanh ◽  
Nguyen Le Khanh Hang ◽  
...  

From January to August 2020, Northern Viet Nam faced a COVID-19 outbreak; up to September 2020 there were 1122 confirmed cases of SARS-CoV-2, of which 465 were imported from Europe, America and Asia and 657 were identified domestically. A total of 30,686 samples were collected during the SARS-CoV-2 outbreak in Northern Viet Nam and examined by real-time RT-PCR using primers and a probe from the Charité (Berlin) protocol. This study presents the initial results of SARS-CoV-2 detection and RNA quantification in positive samples. The positive rate was 0.8%, ranging from 0.4% to 3.5% according to collection site. Among the 251 positive samples, the mean Ct value was 28 (IQR: 22.3-32; range 14-38). 68.5% of the positive samples had a Ct value below 30; there was no significant difference between the ≤ 30 and > 30 groups. The mean RNA concentration was 8.4 × 10^7 copies/µl (IQR: 2.29 × 10^6 - 1.83 × 10^9 copies/µl; range: 1.95 × 10^3 - 4.95 × 10^11). In the group of imported COVID-19 cases, the proportion of samples with a low viral load was 29%, intermediate 56% and high 15%. In the community group, the proportions at low, intermediate and high levels were 20%, 63% and 17%, respectively. The proportion of samples with high-level viral loads may raise an alert to start the quarantine process to reduce the transmission of SARS-CoV-2.
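Quantification from Ct values relies on a qPCR standard curve; the study's curve and its low/intermediate/high viral load cut-offs are not given in the abstract, so the sketch below uses placeholder slope, intercept and thresholds purely for illustration (it will not reproduce the reported copy numbers).

```python
# Placeholder standard curve: Ct = INTERCEPT + SLOPE * log10(copies/µl).
# These values are illustrative assumptions, not the study's calibration.
SLOPE = -3.3        # roughly 100% PCR efficiency
INTERCEPT = 40.0

def ct_to_copies(ct):
    """Convert a Ct value to RNA copies/µl with the assumed standard curve."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def viral_load_level(copies, low=1e4, high=1e8):
    """Categorise a viral load; the low/high cut-offs are assumptions,
    not the thresholds used in the study."""
    if copies < low:
        return "low"
    if copies < high:
        return "intermediate"
    return "high"

for ct in (22.3, 28.0, 32.0):   # Ct values quoted in the abstract
    copies = ct_to_copies(ct)
    print(f"Ct {ct}: {copies:.2e} copies/µl -> {viral_load_level(copies)}")
```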


Author(s):  
Jiao Chen ◽  
Yuan Li ◽  
Jianfeng Yu ◽  
Wenbin Tang

Tolerance modeling is the most basic issue in Computer Aided Tolerancing (CAT). It will negatively influence the performance of subsequent activities such as tolerance analysis to a great extent if the resulting model cannot accurately represent variations in the tolerance zone. According to the ASME Y14.5M Standard [1], there is a class of profile tolerances for lines and surfaces which should also be interpreted correctly. Aiming at this class of tolerances, this paper proposes a unified framework called DOFAS for representing them, composed of three parts: a basic DOF (Degrees of Freedom) model for interpreting geometric variations of profiles, an assessment method for filtering out and rejecting those profiles that cannot be accurately represented, and a split algorithm for splitting rejected profiles into sub-profiles so that their variations become interpretable. The scope of discussion in this paper is restricted to line profiles; surface profiles will be addressed in forthcoming papers. From the DOF model, two types of errors resulting from rotations of the features are identified and formulated. One type of error results from the misalignment between the profile boundary and the tolerance zone boundary (noted as type 1); the other forms when the feature itself exceeds the range of the tolerance zone (noted as type 2). Specifically, it is required that the boundary points of the line profile align with the corresponding boundary lines of the tolerance zone and that an arbitrary point of the line profile lie within the tolerance zone when the line profile rotates in the tolerance zone. To make the DOF model as accurate as possible, an assessment method and a split algorithm are developed to evaluate and eliminate these two types of errors. Clearly, not all line features carry the two types of errors; the assessment method is therefore used as a filter for checking and retaining those features that are consistent with the error conditions. In general, a feature with simple geometry is error-free and passes the filter, whereas a feature with complex geometry is rejected. According to the two error types, two sub-procedures of the assessment process are introduced. The first is, mathematically, a scheme for solving the maximum deviation of the rotation trajectories of the profile boundary, so that the type 1 error can be neglected if it approaches zero. The second solves the maximum deviation of the trajectories of all points of the feature: the type 2 error can be ignored when the retrieved maximum deviation is not greater than a prescribed threshold, so that the feature always stays within the tolerance zone. For features rejected by the filter, i.e. those inconsistent with the error conditions, the split algorithm, which covers the three cases of occurrence of type 1 error, occurrence of type 2 error and concurrence of both, is developed to reduce these errors. By utilizing and analyzing the geometric and kinematic properties of the feature, the split point is recognized and obtained accordingly. Two sub-features are derived from the split point and then substituted into the DOFAS framework recursively until all split features can be represented at the desired resolution. The split algorithm is efficient and self-adapting in that the applied rules ensure a high convergence rate and the expected results.
Finally, the implementation with two examples indicates that the DOFAS framework is capable of representing profile tolerances with enhanced accuracy, thus supporting the feasibility of the proposed approach.
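A schematic sketch of the two error checks and the recursive split described above is given below. It stands in for, and greatly simplifies, the DOFAS assessment and split algorithms: the profile is a sampled polyline, and the rotation range, tolerance half-width and midpoint split rule are placeholder assumptions rather than the paper's actual formulation.

```python
import numpy as np

def rotate(points, theta, center):
    """Rotate 2-D points about a center by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return (points - center) @ R.T + center

def assess_profile(points, tol, theta_max):
    """Assess a sampled line profile rotating within a symmetric tolerance
    zone of half-width `tol` about its nominal position.

    Returns (type1, type2): a boundary-misalignment error (deviation of the
    profile's end points) and an out-of-zone error (deviation of any point
    beyond the zone half-width), both maximised over the rotation range.
    """
    center = points.mean(axis=0)
    nominal = points.copy()
    type1 = type2 = 0.0
    for theta in np.linspace(-theta_max, theta_max, 21):
        moved = rotate(points, theta, center)
        dev = np.linalg.norm(moved - nominal, axis=1)      # per-point deviation
        type1 = max(type1, dev[0], dev[-1])                # type 1: end points
        type2 = max(type2, float(np.max(np.clip(dev - tol, 0.0, None))))  # type 2
    return type1, type2

def split_and_assess(points, tol, theta_max, depth=0, max_depth=6):
    """Recursively split a profile at its midpoint until both error types are
    acceptable, mimicking the recursion of the split algorithm."""
    t1, t2 = assess_profile(points, tol, theta_max)
    if (t1 <= tol and t2 == 0.0) or depth >= max_depth or len(points) < 4:
        return [points]
    mid = len(points) // 2
    return (split_and_assess(points[:mid + 1], tol, theta_max, depth + 1) +
            split_and_assess(points[mid:], tol, theta_max, depth + 1))

# Example: a shallow arc sampled as a polyline, tolerance half-width 0.05,
# rotation range of 1 degree (all values illustrative).
x = np.linspace(0.0, 10.0, 50)
profile = np.column_stack([x, 0.2 * np.sin(x / 3.0)])
pieces = split_and_assess(profile, tol=0.05, theta_max=np.radians(1.0))
print(f"profile split into {len(pieces)} sub-profile(s)")
```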

