Defining the Optimal Serial Testing Interval and Features for Identifying Patients with Early SARS-CoV-2 Infection

2021
Vol 8 (Supplement_1)
pp. S88-S89
Author(s):  
Sanjat Kanjilal

Abstract Background Serial testing for SARS-CoV-2 is necessary to prevent spread from patients early in infection. Testing intervals are largely derived from viral kinetic studies performed early in the COVID-19 pandemic. Laboratory and epidemiologic data accrued over the past year present an opportunity to use empiric models to define optimal serial testing intervals and features predictive of early infection. Methods Retrospective analysis of 15,314 inpatients within the Mass General Brigham healthcare system who had two tests within a 36-hour period between May 1, 2020 and May 29, 2021. Early infection was defined as a negative test followed by a positive test. Patients with prior positive tests were excluded. The primary outcome was the proportion of patients in early infection over the total number tested serially, stratified by 4-hour testing intervals from the timestamp of the first test. Multivariate modeling was used to identify features predictive of early infection. Covariates included demographics, body site, PCR assay, location, community incidence, percent positivity, and the median and skew of Ct value distributions. Results Of 19,971 test pairs, 193 (0.97%) were a negative followed by a positive within 36 hours. Bivariate analysis showed a close association between negative-to-positive test pairs and the first surge in spring 2020 that was not present during the winter surge. Negative-to-positive test pairs were most common in the 12- to 16-hour interval (51/193, 26%; Figure 1). After controlling for covariates, the Roche cobas assay was more likely to identify patients with a negative-to-positive test pair than the Cepheid Xpert, Hologic Panther Fusion, and Roche Liat assays. A second specimen from the lower respiratory tract was more likely to yield a positive result than specimens from other body sites. Community incidence and Ct value distributions were not predictive, and there were no differences between nasal and nasopharyngeal swabs. All 4-hour intervals from 16 to 36 hours were significant predictors of a negative-to-positive test pair (Table 1). Figure 1. Distribution of negative-to-positive test pairs by 4-hour time intervals. Table 1. Multivariate regression predicting a negative-to-positive test pair. Conclusion The likelihood of detecting early infection depends on the PCR platform and the body site sampled. Time intervals between 16 and 36 hours after the initial test were likely to identify positive cases. Disclosures Sanjat Kanjilal, MD, MPH, GlaxoSmithKline (Advisor or Review Panel member)
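The modeling step described above lends itself to a compact illustration. Below is a minimal sketch, assuming a hypothetical flat file of serial test pairs with made-up column names (hours_between_tests, neg_to_pos, assay, body_site, community_incidence); it is not the study's actual pipeline, only the general shape of a multivariate logistic model with 4-hour interval bins as categorical predictors.

```python
# Hedged sketch: logistic regression of negative-to-positive pairs on the
# 4-hour retest interval bin plus assay/body-site covariates.
# File name and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

pairs = pd.read_csv("serial_test_pairs.csv")  # one row per serial test pair

# Bin the time between the two tests into 4-hour windows spanning 0-36 hours.
pairs["interval_bin"] = pd.cut(
    pairs["hours_between_tests"], bins=range(0, 40, 4), right=False
).astype(str)

# neg_to_pos = 1 when a negative test is followed by a positive test.
model = smf.logit(
    "neg_to_pos ~ C(interval_bin) + C(assay) + C(body_site) + community_incidence",
    data=pairs,
).fit()

print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios
```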

1963
Vol 44 (3)
pp. 475-480
Author(s):  
R. Grinberg

ABSTRACT Radiologically thyroidectomized female Swiss mice were injected intraperitoneally with 131I-labeled thyroxine (T4*) and were studied at time intervals of 30 minutes and 4, 28, 48 and 72 hours after injection, with 10 mice for each time interval. The organs of the central nervous system and the pituitary glands were chromatographed, as was serum from the same animals. The chromatographic studies revealed a compound with the same mobility as 131I-labeled triiodothyronine (T3*) in the organs of the CNS and in the pituitary gland, but this compound was not present in the serum. In most of the chromatographic studies, the peaks for I, T4 and T3 coincided with those for the standards; in several instances, however, such an exact coincidence was lacking. A tentative explanation for the presence of T3* in the pituitary gland following the injection of T4* is a deiodinating system in the pituitary gland, or else the capacity of the pituitary gland to concentrate T3* formed in other organs. The presence of T3* appears to be characteristic of most of the CNS (brain, midbrain, medulla and spinal cord); in the optic nerve, however, the compound was not present under the conditions of this study.


Energies
2021
Vol 14 (4)
pp. 1213
Author(s):
Ahmed Aljanad
Nadia M. L. Tan
Vassilios G. Agelidis
Hussain Shareef

Hourly global solar irradiance (GSR) data are required for the sizing, planning, and modeling of solar photovoltaic farms. However, operating and controlling such farms under varying environmental conditions, such as fast-passing clouds, requires GSR data at very short time intervals. Classical backpropagation neural networks do not perform satisfactorily when predicting parameters over short intervals. This paper proposes a hybrid backpropagation neural network based on particle swarm optimization (BPNN-PSO). The particle swarm algorithm is used within the backpropagation neural network to optimize the number of hidden layers and neurons and the learning rate. The proposed model can be used as a reliable model for predicting changes in solar irradiance over short time intervals in tropical regions such as Malaysia. Actual global solar irradiance data at 5-s and 1-min intervals, recorded by weather stations, are used to train and test the proposed algorithm. Moreover, to ensure the adaptability and robustness of the proposed technique, two cases are evaluated, using 1-day and 3-day profiles, for time intervals of 1 min and 5 s. A set of statistical error indices is used to evaluate the performance of the proposed algorithm. For the 3-day profile, the BPNN-PSO achieves an RMSE of 1.7078, MAE of 0.7537, MSE of 0.0292, and MAPE of 31.4348% at the 5-s interval, and an RMSE of 0.6566, MAE of 0.2754, MSE of 0.0043, and MAPE of 1.4732% at the 1-min interval. The results reveal that the proposed model outperforms the standalone backpropagation neural network in predicting global solar irradiance at extremely short time intervals. In addition, the proposed model exhibits a higher level of predictability than other existing models.
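As a rough illustration of the hybrid approach described above, the sketch below runs a plain global-best particle swarm over two hyperparameters of a backpropagation network (hidden-layer size and learning rate) and scores each particle by validation RMSE. The data are synthetic stand-ins for lagged short-interval irradiance samples, and the swarm settings are arbitrary; this is not the authors' implementation.

```python
# Hedged sketch: PSO tuning a backpropagation network's hidden-layer size and
# learning rate, scored by validation RMSE. Data and settings are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for lagged short-interval irradiance samples.
X = rng.random((600, 6))
y = X @ rng.random(6) + 0.1 * rng.standard_normal(600)
X_tr, X_val, y_tr, y_val = X[:400], X[400:], y[:400], y[400:]

def fitness(pos):
    """Validation RMSE for a particle position = (hidden units, learning rate)."""
    n_hidden, lr = int(round(pos[0])), float(pos[1])
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), learning_rate_init=lr,
                       max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return mean_squared_error(y_val, net.predict(X_val)) ** 0.5

# Plain global-best PSO over bounds: 2-30 hidden units, 1e-4 to 1e-1 learning rate.
lo, hi = np.array([2, 1e-4]), np.array([30, 1e-1])
n_particles, n_iter = 8, 10
pos = lo + rng.random((n_particles, 2)) * (hi - lo)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best (hidden units, learning rate):", gbest, "validation RMSE:", pbest_f.min())
```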


2021
pp. 1-6
Author(s):
Jacob R. Morey
Xiangnan Zhang
Kurt A. Yaeger
Emily Fiano
Naoum Fares Marayati
...  

Background and Purpose: Randomized controlled trials have demonstrated the importance of time to endovascular therapy (EVT) for clinical outcomes in large vessel occlusion (LVO) acute ischemic stroke. Delays to treatment are particularly prevalent when patients require transfer from hospitals without onsite EVT capability. A computer-aided triage system, Viz LVO, has the potential to streamline workflows. This platform includes an image viewer, a communication system, and an artificial intelligence (AI) algorithm that automatically identifies suspected LVO strokes on CTA imaging and rapidly triggers alerts. We hypothesize that the Viz application will decrease time to treatment, leading to improved clinical outcomes. Methods: A prospectively maintained database was retrospectively analyzed for patients who presented to a stroke center currently utilizing Viz LVO and underwent EVT following transfer for LVO stroke between July 2018 and March 2020. Time intervals and clinical outcomes were compared for 55 patients divided into pre- and post-Viz cohorts. Results: The median initial door-to-neuroendovascular team (NT) notification time interval was significantly shorter (25.0 min [IQR = 12.0] vs. 40.0 min [IQR = 61.0]; p = 0.01), with less variation (p < 0.05), following Viz LVO implementation. The median initial door-to-skin puncture time interval was 25 min shorter in the post-Viz cohort, although this difference was not statistically significant (p = 0.15). Conclusions: Preliminary results show that Viz LVO implementation is associated with earlier, more consistent NT notification times. This application can serve as an early warning system and a failsafe to ensure that no LVO is left behind.
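For readers unfamiliar with this kind of pre/post comparison, the sketch below shows one conventional way to compare door-to-notification intervals between cohorts: a Mann-Whitney U test for the difference in medians and Levene's test for the difference in variation. The numbers are synthetic and the test choice is an assumption; the abstract does not state which tests were used.

```python
# Hedged sketch with synthetic interval data (minutes), not the study cohorts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_viz = rng.gamma(shape=2.0, scale=20.0, size=27)   # hypothetical pre-Viz intervals
post_viz = rng.gamma(shape=4.0, scale=6.0, size=28)   # hypothetical post-Viz intervals

u_stat, p_median = stats.mannwhitneyu(pre_viz, post_viz, alternative="two-sided")
w_stat, p_var = stats.levene(pre_viz, post_viz)       # compares spread between cohorts

print(f"pre median {np.median(pre_viz):.1f} min, post median {np.median(post_viz):.1f} min")
print(f"Mann-Whitney p = {p_median:.3f}, Levene p = {p_var:.3f}")
```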


Fluids
2018
Vol 3 (3)
pp. 63
Author(s):
Thomas Meunier
Claire Ménesguen
Xavier Carton
Sylvie Le Gentil
Richard Schopp

The stability properties of a vortex lens are studied in the quasi-geostrophic (QG) framework using generalized stability theory. Optimal perturbations are obtained using a tangent linear QG model and its adjoint. Their fine-scale spatial structures are studied in detail. Growth rates of optimal perturbations are shown to be extremely sensitive to the time interval of optimization: the most unstable perturbations are found for time intervals of about 3 days, while the growth rates continuously decrease towards that of the most unstable normal mode, which is reached after about 170 days. The horizontal structure of the optimal perturbations consists of an intense counter-shear spiralling. It is also extremely sensitive to the time interval: for short time intervals, the optimal perturbations are made up of a broad spectrum of high azimuthal wave numbers, whereas as the time interval increases, only low azimuthal wave numbers are found. The vertical structures of the optimal perturbations exhibit strong layering associated with high vertical wave numbers, whatever the time interval. However, the time interval plays an important role in the width of the vertical spectrum of the perturbation: perturbations optimized over short time intervals have a narrow vertical spectrum, while those optimized over long time intervals show a broad range of vertical scales. Optimal perturbations were set as initial perturbations of the vortex lens in a fully nonlinear QG model. It appears that for short time intervals the perturbations decay after an initial transient growth, while for longer time intervals the optimal perturbation keeps growing, quickly leading to a nonlinear regime or exciting lower azimuthal modes, consistent with normal-mode instability. Very long time intervals simply behave like the most unstable normal mode. The possible impact of optimal perturbations on layering is also discussed.
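The dependence of optimal perturbations on the optimization interval can be illustrated with a toy linear system: for dx/dt = Ax the propagator over an interval t is M(t) = exp(At), the optimal initial perturbation is the leading right singular vector of M(t), and its growth is the leading singular value. The sketch below uses an arbitrary non-normal 2x2 matrix, not the vortex-lens operator, and is only meant to show how the optimal structure and growth change with the interval and converge toward normal-mode behaviour at long times.

```python
# Toy illustration (not the QG vortex-lens model) of optimal perturbations.
import numpy as np
from scipy.linalg import expm, svd

A = np.array([[-0.1, 2.0],
              [ 0.0, -0.2]])   # non-normal: transient growth exceeds modal decay

for t in (1.0, 3.0, 30.0, 170.0):
    M = expm(A * t)             # propagator over the optimization interval t
    U, s, Vt = svd(M)
    print(f"interval t = {t:6.1f}   optimal growth = {s[0]:10.3e}   "
          f"optimal initial perturbation = {Vt[0]}")

# As the interval grows, the growth tends to that of the least-damped normal
# mode and the evolved perturbation (first column of U) aligns with that mode,
# mirroring the convergence toward normal-mode behaviour described above.
```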


2013
Vol 70 (1)
pp. 9-15
Author(s):
Maja Surbatovic
Zoran Vesic
Dragan Djordjevic
Sonja Radakovic
Snjezana Zeba
...  

Background/Aim: Laparoscopic cholecystectomy is considered to be the gold standard for laparoscopic surgical procedures. In ASA III patients with concomitant respiratory diseases, however, the creation of pneumoperitoneum and the position of the patient during surgery exert an additional negative effect on intraoperative respiratory function, thus posing a greater challenge for the anesthesiologist than for the surgeon. The aim of this study was to compare the effects of intermittent positive pressure ventilation (IPPV) and pressure controlled ventilation (PCV) during general anesthesia on respiratory function in ASA III patients undergoing laparoscopic cholecystectomy. Methods: The study included 60 patients randomized into two groups depending on the mode of ventilation: IPPV or PCV. Respiratory volume (VT), peak inspiratory pressure (PIP), compliance (C), end-tidal CO2 pressure (PETCO2), oxygen saturation (SpO2), partial pressures of O2 and CO2 (PaO2 and PaCO2), and pH of arterial blood were recorded at four time intervals. Results: There were no statistically significant differences in VT, SpO2, PaO2, PaCO2, or pH values either within or between the two groups. At time interval t1 there were no statistically significant differences in PIP, C, or PETCO2 values between the IPPV and PCV groups. In the next three time intervals, however, the differences in PIP, C, and PETCO2 values between the two groups ranged from statistically significant to highly significant: PIP was lower, while C and PETCO2 were higher, in the PCV group. Conclusion: Pressure controlled ventilation better maintains the stability of intraoperative ventilatory parameters in ASA III patients with concomitant respiratory diseases during laparoscopic cholecystectomy.


Circulation
2021
Vol 144 (Suppl_2)
Author(s):
Hidetada Fukushima
Hideki Asai
Koji Yamamoto
Yasuyuki Kawai

Introduction: Under the SARS-CoV-2 pandemic, rescuers are recommended to cover their own mouth and nose, as well as the victim's, with a facemask or a cloth when performing cardiopulmonary resuscitation (CPR). However, its impact on dispatch-assisted CPR (DACPR) has not been well investigated. Hypothesis: DACPR that includes the instruction to cover the rescuer's and the victim's mouth and nose can significantly delay the start of the first chest compression. Methods: We retrospectively analyzed DACPR records of the Nara Wide Area Fire Department in Japan, which covers a population of 853,000 over 3,361 km². We investigated the key time intervals of 505 DACPR records between May 2020 and March 2021 and compared the results to those of the same period in 2019 (535 records). Results: Dispatchers failed to provide mask instruction in 322 cases (63.8%). The median time interval from the emergency call to the start of CPR instruction was longer in 2020 (197 seconds vs 190 seconds, p = 0.641). The time to the first chest compression was also delayed in 2020 (264 seconds vs 246 seconds, p = 0.015). Among the cases in which dispatchers successfully provided mask instruction (183 cases, 36.2%), the median time intervals to the start of instruction and to the first chest compression were shorter than in cases without mask instruction (177 seconds vs 211 seconds and 254 seconds vs 269.5 seconds, respectively). Conclusions: Dispatchers failed to provide mask instruction in the majority of cardiac arrest cases. However, our results indicate that the impact of mask instruction on DACPR may be minor in terms of immediate CPR provision.


2021
Author(s):  
Cristian Suteanu

Characterizing the properties of wind speed variability and their dependence on temporal scale is important, from sub-second intervals (for the design and monitoring of wind turbines) to longer time scales of months and years (for the evaluation of wind power potential). Wind speed data are usually reported as averages over time intervals of various lengths (minutes, days, months, etc.). The research project presented in this paper addressed the following questions: What aspects of the wind pattern are changed, in what ways and to what extent, in the process of producing time-averaged values? What precautions should be considered when time-averaged values are used in the assessment of wind variability? What conditions must be fulfilled for a meaningful comparison of wind pattern characteristics obtained in distinct studies? Our research started from wind speed records sampled at 0.14-second intervals, which were averaged over increasingly longer time intervals. Variability evaluation was based on statistical moments, L-moments, and detrended fluctuation analysis. We present the changes in the characteristics of temporal variability as a function of sampling rate and averaging time interval. In particular, the height dependence of wind speed variability, which is of theoretical and practical importance, is shown to be progressively erased as averaging intervals are increased. The paper makes recommendations regarding the interpretation of wind pattern characteristics obtained at different sites as a function of sampling rate and time-averaging interval.
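The averaging experiment described above can be sketched in a few lines: block-average a high-rate series over progressively longer windows and recompute a scaling estimate (here a plain DFA-1 exponent) on each averaged record. The series below is a synthetic random-walk stand-in, not anemometer data, and the scales and window lengths are arbitrary choices.

```python
# Hedged sketch: time-averaging a high-rate series and recomputing DFA-1.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.14                                                     # seconds per raw sample
raw = np.cumsum(rng.standard_normal(200_000)) * 0.01 + 8.0    # synthetic "wind speed"

def block_average(x, n):
    """Average consecutive blocks of n samples (time-averaging the record)."""
    m = len(x) // n
    return x[: m * n].reshape(m, n).mean(axis=1)

def dfa_exponent(x, scales):
    """Plain DFA-1: RMS fluctuation of the detrended profile vs. window size."""
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        segs = profile[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                       # detrend each window with a linear fit
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

scales = np.array([16, 32, 64, 128, 256])
for n_avg in (1, 8, 64, 256):                  # 0.14 s, ~1 s, ~9 s, ~36 s averages
    series = block_average(raw, n_avg)
    print(f"averaging window {n_avg * dt:7.2f} s -> DFA exponent "
          f"{dfa_exponent(series, scales):.2f}")
```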


Author(s):  
R.J. Milner
F. Reyers
J.H. Taylor
J.S. Van den Berg

A clinical trial was designed to evaluate the effects of diminazene aceturate and its stabiliser antipyrine on serum pseudocholinesterase (PChE) and red blood cell acetylcholinesterase (RBC AChE) in dogs with babesiosis. The trial was conducted on naturally occurring, uncomplicated cases of babesiosis (n = 20) that were randomly allocated to groups receiving a standard therapeutic dose of diminazene aceturate with antipyrine stabiliser (n = 10) or antipyrine alone (n = 10). Blood was drawn immediately before and every 15 minutes for 1 hour after treatment. Plasma PChE showed a 4% decrease between 0 and 60 min within the treatment group (p < 0.05). No statistically significant differences were found between the treatment and control groups at any of the time intervals for PChE. There was an increase in RBC AChE activity at 15 min in the treatment group (p < 0.05). No significant differences were found between the treatment and control groups at any time interval for RBC AChE. In view of the difference in PChE, samples from additional, new cases (n = 10) of canine babesiosis were collected to identify the effect of the drug over 12 hours. No significant depression was identified over this time interval. The results suggest that the mechanism underlying side-effects, when they do occur, is unlikely to involve cholinesterase depression.


2013
Vol 31 (1)
pp. 165-174
Author(s):
N.M. Correia
E.H. Camilo
E.A. Santos

The aim of this study was to assess the capacity of sulfentrazone applied pre-emergence to control Ipomoea hederifolia and Ipomoea quamoclit as a function of the time interval between herbicide application and the occurrence of rain, and of the presence of sugarcane straw on the soil surface. Two greenhouse experiments and one field experiment were conducted. The greenhouse experiments included three doses of sulfentrazone applied by spraying (0, 0.6, and 0.9 kg ha-1), two amounts of straw on the soil (0 and 10 t ha-1), and five time intervals between herbicide application and rain simulation (0, 20, 40, 60, and 90 days). In the field experiment, five herbicide treatments (sulfentrazone at 0.6 and 0.9 kg ha-1, sulfentrazone + hexazinone at 0.6 + 0.25 kg ha-1, amicarbazone at 1.4 kg ha-1, and imazapic at 0.147 kg ha-1) and two controls with no herbicide were studied, under management conditions with or without sugarcane straw on the soil. In the greenhouse experiments, sulfentrazone applied at 0.6 kg ha-1 provided efficient control of I. hederifolia and I. quamoclit in a dry environment, with up to 90 days between herbicide application and rain simulation. After herbicide application, 20 mm of simulated rain was enough to leach sulfentrazone from the straw to the soil, as the biological effects observed in I. hederifolia and I. quamoclit remained unaffected. Under field conditions, either with or without sugarcane straw left on the soil, sulfentrazone alone (0.6 or 0.9 kg ha-1) or combined with hexazinone (0.6 + 0.25 kg ha-1) was effective in the control of I. hederifolia and I. quamoclit, exhibiting control similar to or better than amicarbazone (1.4 kg ha-1) and imazapic (0.147 kg ha-1).


2016
Vol 63 (3)
pp. 131-138
Author(s):
Kenji Yoshida
Eri Tanaka
Hiroyoshi Kawaai
Shinya Yamazaki

To obtain effective infiltration anesthesia in the jawbone, high concentrations of local anesthetic are needed. However, to reduce the pain experienced by patients during local anesthetic administration, low-pressure injection is recommended for subperiosteal infiltration anesthesia. Currently, there are no studies on the effect of injection pressure on infiltration anesthesia, and a standard injection pressure has not been clearly determined. Hence, the effect of the injection pressure of subperiosteal infiltration anesthesia on local anesthetic infiltration into the jawbone was examined by directly measuring the lidocaine concentration in the jawbone. Japanese white male rabbits were used as test animals. After inducing general anesthesia with oxygen and sevoflurane, the femoral artery was cannulated and arterial pressure was continuously recorded. Subperiosteal infiltration anesthesia was performed by injecting 0.5 mL of 2% lidocaine containing 1/80,000 adrenaline, and the injection pressure was monitored with a pressure transducer for 40 seconds. After specified time intervals (10, 20, 30, 40, 50, and 60 minutes), jawbone and blood samples were collected, and the lidocaine concentration at each time interval was measured. The mean injection pressures were divided into 4 groups (100 ± 50 mm Hg, 200 ± 50 mm Hg, 300 ± 50 mm Hg, and 400 ± 50 mm Hg), and statistical comparisons between these 4 groups were performed. No significant change in blood pressure during infiltration anesthesia was observed in any of the 4 groups. Lidocaine concentrations in the blood and jawbone were highest 10 minutes after the infiltration anesthesia in all 4 groups and decreased thereafter. The lidocaine concentration in the jawbone increased as the injection pressure increased, while the serum lidocaine concentration was significantly lower. This suggests that when the injection pressure of subperiosteal infiltration anesthesia is low, infiltration of local anesthetic into the jawbone may be reduced, while transfer to the oral mucosa and blood may be increased.
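As a generic illustration of the between-group comparison described above, the sketch below applies a one-way ANOVA to hypothetical jawbone lidocaine concentrations in the four injection-pressure groups. The values, units, and group sizes are invented, and the abstract does not state that ANOVA was the test actually used.

```python
# Hedged sketch: one-way ANOVA across four injection-pressure groups (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical jawbone concentrations, one array per pressure group
# (100, 200, 300, 400 ± 50 mm Hg).
groups = [rng.normal(loc=mu, scale=2.0, size=6)
          for mu in (10.0, 13.0, 16.0, 19.0)]

f_stat, p_value = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```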

