Tailoring the sampling time of single-sample GFR measurement according to expected renal function: a multisite audit

Author(s):  
Helena McMeekin ◽  
Sam Townrow ◽  
Mark Barnfield ◽  
Andy Bradley ◽  
Ben Fongenie ◽  
...  

Abstract Background: The 2018 BNMS Glomerular Filtration Rate (GFR) guidelines recommend a single-sample technique with the sampling time dictated by the expected renal function, but this is not known with any accuracy before the test. We aimed to assess whether the sampling regime suggested in the guidelines is optimal and to determine the expected error in the GFR result if the sample time is chosen incorrectly. We can then infer the degree of flexibility in the sampling regime. Methods: Data from 8946 patients referred for GFR assessment at 6 different hospitals for a variety of indications were reviewed. The difference between the single-sample (Fleming) GFR result at each sample time and the slope-intercept GFR result at each hospital was calculated. A second dataset of 775 studies from one hospital, with nine samples collected from 5 minutes to 8 hours post injection, was analysed to provide a reference GFR against which the single-sample results were compared. Results: Recommended single-sample times have been revised: for estimated GFR above 80 ml/min/1.73m2 a 2-hour sample is recommended, giving a mean difference from slope-intercept GFR of -2.08 ml/min/1.73m2 (1333 GFR tests included). Between 30 and 80 ml/min/1.73m2 a 4-hour sample is recommended, giving a 1.95 ml/min/1.73m2 mean difference (2057 GFR tests included). The standard deviation of the differences is 3.50 ml/min/1.73m2 at 2 hours and 2.56 ml/min/1.73m2 at 4 hours for GFR results in the recommended range, and 5.81 ml/min/1.73m2 at 2 hours and 5.70 ml/min/1.73m2 at 4 hours for GFR results outside the recommended range. Conclusion: The results of this multisite study demonstrate a reassuringly wide range of sample times over which a single-sample GFR result remains acceptably accurate. Modified recommended single-sample times have been proposed in line with the results, and the reported errors for both sample times can be used for error analysis of a mistimed sample.
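The slope-intercept comparator referred to above can be sketched as follows. This is a minimal illustration of the standard slope-intercept clearance calculation (a monoexponential fit to late plasma samples), not the sites' exact implementation: the function name and argument units are mine, and the Brøchner-Mortensen correction and normalisation to 1.73 m2 body surface area that the guidelines require are omitted.

```python
import math

def slope_intercept_gfr(injected_dose, sample_times_min, plasma_conc):
    """Fit a single exponential C(t) = C0 * exp(-k t) to late plasma
    samples by linear least squares on ln C, then return the
    uncorrected clearance dose * k / C0 (ml/min)."""
    n = len(sample_times_min)
    ys = [math.log(c) for c in plasma_conc]
    xbar = sum(sample_times_min) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(sample_times_min, ys)) \
        / sum((x - xbar) ** 2 for x in sample_times_min)
    k = -slope                           # elimination rate constant (1/min)
    c0 = math.exp(ybar - slope * xbar)   # back-extrapolated concentration
    return injected_dose * k / c0
```

With only two samples the fit passes exactly through both points, which is the classic two-sample slope-intercept case.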

2019 ◽  
Vol 50 (4) ◽  
pp. 693-702 ◽  
Author(s):  
Christine Holyfield ◽  
Sydney Brooks ◽  
Allison Schluterman

Purpose Augmentative and alternative communication (AAC) is an intervention approach that can promote communication and language in children with multiple disabilities who are beginning communicators. While a wide range of AAC technologies are available, little is known about the comparative effects of specific technology options. Given that engagement can be low for beginning communicators with multiple disabilities, the current study provides initial information about the comparative effects of 2 AAC technology options—high-tech visual scene displays (VSDs) and low-tech isolated picture symbols—on engagement. Method Three elementary-age beginning communicators with multiple disabilities participated. The study used a single-subject, alternating treatment design with each technology serving as a condition. Participants interacted with their school speech-language pathologists using each of the 2 technologies across 5 sessions in a block randomized order. Results According to visual analysis and nonoverlap of all pairs calculations, all 3 participants demonstrated more engagement with the high-tech VSDs than the low-tech isolated picture symbols as measured by their seconds of gaze toward each technology option. Despite the difference in engagement observed, there was no clear difference across the 2 conditions in engagement toward the communication partner or use of the AAC. Conclusions Clinicians can consider measuring engagement when evaluating AAC technology options for children with multiple disabilities and should consider evaluating high-tech VSDs as 1 technology option for them. Future research must explore the extent to which differences in engagement to particular AAC technologies result in differences in communication and language learning over time as might be expected.
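The nonoverlap of all pairs (NAP) effect size cited in the Results has a simple definition: the proportion of all cross-condition data-point pairs in which the second condition's value exceeds the first's, with ties counted as half. The sketch below shows the standard calculation; in an alternating-treatments design like this one it is applied to the two conditions rather than to baseline and treatment phases.

```python
def nap(phase_a, phase_b):
    """Nonoverlap of All Pairs: share of (A, B) comparisons in which
    the phase-B datum exceeds the phase-A datum; ties count 0.5."""
    pairs = [(a, b) for a in phase_a for b in phase_b]
    score = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return score / len(pairs)
```

A NAP of 1.0 means complete nonoverlap (every B datum above every A datum); 0.5 indicates chance-level overlap.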


2020 ◽  
Vol 7 (2) ◽  
pp. 34-41
Author(s):  
VLADIMIR NIKONOV ◽  
ANTON ZOBOV ◽  

The construction and selection of a suitable bijective function, that is, a substitution, is becoming an important applied task, particularly for building block encryption systems. Many articles have suggested different approaches to determining the quality of a substitution, but most of them are highly computationally complex. Solving this problem would significantly expand the range of methods for constructing and analysing schemes in information protection systems. The purpose of this research is to find easily measurable characteristics of substitutions that allow their quality to be evaluated, as well as measures of the proximity of a particular substitution to a random one, or of its distance from it. For this purpose, two characteristics were proposed in this work: a difference characteristic and a polynomial characteristic; their mathematical expectations were found, as was the variance of the difference characteristic. This allows a conclusion about the quality of a particular substitution to be drawn by comparing the computed value of the characteristic with its calculated mathematical expectation. From a computational point of view, the results of the article are of particular interest owing to the simplicity of the algorithm for quantifying the quality of bijective substitutions. By its nature, computing the difference characteristic amounts to a simple summation of integer terms in a fixed and small range. Such an operation, on both current and prospective element bases, is embedded in the logic of a wide range of functional elements, especially when implementing computations in the optical range or on other carriers related to the field of nanotechnology.
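The abstract does not give the formula for the difference characteristic, so the sketch below is only one plausible reading, labelled hypothetical: a sum of integer displacements |s(i) - i| of the substitution, which does match the described "simple summation of integer terms in a fixed and small range". The quality test then compares the computed value for a particular substitution with the expectation over all substitutions, as the article suggests.

```python
def difference_characteristic(sub):
    """Hypothetical reading of the difference characteristic: the sum
    of displacements |s(i) - i| of a substitution s on {0, ..., n-1}.
    Each term is an integer in the fixed, small range 0..n-1, so the
    whole computation is plain integer addition."""
    return sum(abs(v - i) for i, v in enumerate(sub))
```

For small n the reference expectation can be computed exhaustively over all permutations; for realistic sizes it would be taken from the closed-form expression derived in the article.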


2019 ◽  
Author(s):  
Le Wang ◽  
Devon Jakob ◽  
Haomin Wang ◽  
Alexis Apostolos ◽  
Marcos M. Pires ◽  
...  

<div>Infrared chemical microscopy through mechanical probing of light-matter interactions by atomic force microscopy (AFM) bypasses the diffraction limit. One increasingly popular technique is photo-induced force microscopy (PiFM), which utilizes heterodyne mixing between the cantilever's mechanical resonant oscillations and the photo-induced force from the light-matter interaction. So far, photo-induced force microscopy has been operated in only one heterodyne configuration. In this article, we generalize the heterodyne configurations of photo-induced force microscopy by introducing two new schemes: harmonic heterodyne detection and sequential heterodyne detection. In harmonic heterodyne detection, the laser repetition rate matches an integer fraction of the difference between two mechanical resonant modes of the AFM cantilever; a high harmonic of the beating from the photothermal expansion mixes with the AFM cantilever oscillation to provide the PiFM signal. In sequential heterodyne detection, the combination of the repetition rate of the laser pulses and the polarization modulation frequency matches the difference between two AFM mechanical modes, leading to detectable PiFM signals. These two generalized heterodyne configurations for photo-induced force microscopy open new avenues for chemical imaging and broadband spectroscopy at ~10 nm spatial resolution. They are suitable for a wide range of heterogeneous materials across various disciplines: from structured polymer films and polaritonic boron nitride materials to isolated bacterial peptidoglycan cell walls. The generalized heterodyne configurations introduce flexibility in the implementation of PiFM and related tapping-mode AFM-IR, and provide the possibility of an additional modulation channel in PiFM for targeted signal extraction with nanoscale spatial resolution.</div>
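The two frequency-matching conditions described above reduce to simple arithmetic on the cantilever mode frequencies. The sketch below states them as code; the mode and modulation frequencies used in the test are illustrative of typical cantilevers, not values from the article, and reading the sequential scheme's "combination" as a sum is my assumption.

```python
def harmonic_rep_rate(f_mode1_hz, f_mode2_hz, harmonic):
    """Harmonic heterodyne: the laser repetition rate equals an integer
    fraction of the difference between two cantilever resonant modes,
    so the chosen harmonic of the photothermal beating lands exactly
    on that mode difference."""
    return (f_mode2_hz - f_mode1_hz) / harmonic

def sequential_rep_rate(f_mode1_hz, f_mode2_hz, f_polarization_hz):
    """Sequential heterodyne: the repetition rate plus the polarization
    modulation frequency together make up the mode difference
    (assuming 'combination' means a sum)."""
    return (f_mode2_hz - f_mode1_hz) - f_polarization_hz
```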


2021 ◽  
Vol 9 (1) ◽  
pp. 232596712097366
Author(s):  
Zhen-Zhen Dai ◽  
Lin Sha ◽  
Zi-Ming Zhang ◽  
Zhen-Peng Liang ◽  
Hao Li ◽  
...  

Background: The tibial tubercle–trochlear groove (TT-TG) distance was originally described for computed tomography (CT), but it has also been measured on magnetic resonance imaging (MRI) in patients with patellar instability (PI). Whether the TT-TG measured on CT versus MRI can be considered equivalent in skeletally immature children remains unclear. Purpose: To investigate in skeletally immature patients (1) the effects of imaging modality (CT vs MRI) and of cartilaginous versus bony landmarks on the consistency of TT-TG measurement, (2) the difference between CT and MRI measurements of the TT-TG, and (3) the difference in TT-TG between patients with and without PI. Study Design: Cross-sectional study; Level of evidence, 3. Methods: We retrospectively identified 24 skeletally immature patients with PI and 24 patients with other knee disorders or injuries but without PI. The bony and cartilaginous TT-TG distances on CT and MRI were measured by 2 researchers, and related clinical data were collected. The interrater, interperiod (bony vs cartilaginous), and intermethod (CT vs MRI) reliabilities of TT-TG measurement were assessed with intraclass correlation coefficients. Results: The 48 study patients (19 boys, 29 girls) had a mean age of 11.3 years (range, 7-14 years). TT-TG measurements had excellent interrater reliability and good or excellent interperiod reliability but fair or poor intermethod reliability. TT-TG distance was greater on CT than on MRI (mean difference, 4.07 mm; 95% CI, 2.6-5.5 mm), and cartilaginous distance was greater than bony distance (mean difference, 2.3 mm; 95% CI, 0.79-3.8 mm). The TT-TG measured on CT was found to increase with femoral width. Patients in the PI group had increased TT-TG distance compared with those in the control group, regardless of landmarks or modality used (P < .05 for all). Conclusion: For skeletally immature patients, the TT-TG distance could be evaluated on MRI, regardless of whether cartilaginous or bony landmarks were used. Its value could not be interchanged with CT according to our results; however, further research on this topic is needed.
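The interrater reliability above rests on intraclass correlation coefficients. The abstract does not say which ICC form was used; the sketch below implements the common two-way random-effects, absolute-agreement, single-measure form, ICC(2,1), from a subjects x raters table, as one plausible choice for continuous measurements such as the TT-TG distance.

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure, computed from ANOVA mean squares on an n-subjects by
    k-raters table of scores."""
    n = len(ratings)          # subjects
    k = len(ratings[0])       # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((ratings[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Values near 1 indicate excellent agreement; conventional cutoffs label roughly &lt;0.4 poor, 0.4-0.75 fair to good, and &gt;0.75 excellent.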


2014 ◽  
Vol 39 (2) ◽  
pp. 74-79
Author(s):  
F Jahan ◽  
MNU Chowdhury ◽  
T Mahbub ◽  
SM Arafat ◽  
S Jahan ◽  
...  

To ensure that potential kidney donors in Bangladesh have no renal impairment, it is extremely important to have accurate methods for evaluating the glomerular filtration rate (GFR). We evaluated the performance of serum creatinine-based GFR estimation in healthy adult potential kidney donors in Bangladesh by comparing GFR determined by DTPA with that determined by various prediction equations. In this study, GFR in 61 healthy adult potential kidney donors was measured with a 99mTc-diethylenetriamine penta-acetic acid (DTPA) renogram. We also estimated GFR using the four-variable Modification of Diet in Renal Disease (MDRD) equation, Cockcroft-Gault creatinine clearance (CG-CrCl), and Cockcroft-Gault glomerular filtration rate (CG-GFR). The mean age of the study population was 34.31±9.46 years, and 65.6% were male. The mean measured GFR was 85.4±14.8 ml/min/1.73m2. Agreement of the estimated GFR calculated by CG-CrCl, CG-GFR, and MDRD with the GFR measured by DTPA was assessed by quartile. The corresponding kappa values were 0.104 (p=0.151), 0.336 (p=0.001), and 0.125 (p=0.091), respectively, indicating no association between the estimated GFR calculated by CG-CrCl, CG-GFR, or MDRD and the measured DTPA GFR. These results show the poor performance of these equations in evaluating renal function in a healthy population, and also raise questions regarding the validity of these equations for assessing renal function in chronic kidney disease in our population. DOI: http://dx.doi.org/10.3329/bmrcb.v39i2.19646 Bangladesh Med Res Counc Bull 2013; 39: 74-79
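The two prediction equations compared above are standard and can be sketched directly. Note one assumption: the four-variable MDRD leading constant is 186 for conventionally calibrated creatinine and 175 for IDMS-traceable assays, and the abstract does not state which was used, so the constant is a parameter here.

```python
def cockcroft_gault_crcl(age_yr, weight_kg, scr_mg_dl, female):
    """Cockcroft-Gault creatinine clearance (ml/min):
    (140 - age) * weight / (72 * serum creatinine), x0.85 if female."""
    crcl = (140 - age_yr) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def mdrd_4v(age_yr, scr_mg_dl, female, black, constant=186.0):
    """Four-variable MDRD eGFR (ml/min/1.73 m2). Use constant=175
    for IDMS-traceable creatinine; the study's assay is not stated."""
    gfr = constant * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr
```

For example, a 40-year-old, 72 kg man with serum creatinine 1.0 mg/dl has a Cockcroft-Gault clearance of exactly 100 ml/min.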


Religions ◽  
2019 ◽  
Vol 10 (6) ◽  
pp. 389
Author(s):  
James Robert Brown

Religious notions have long played a role in epistemology. Theological thought experiments, in particular, have been effective in a wide range of situations in the sciences. Some of these are merely picturesque, others have been heuristically important, and still others, as I will argue, have played a role that could be called essential. I will illustrate the difference between heuristic and essential with two examples. One of these stems from the Newton–Leibniz debate over the nature of space and time; the other is a thought experiment of my own constructed with the aim of making a case for a more liberal view of evidence in mathematics.


Author(s):  
A Jodat ◽  
M Moghiman

In the present study, the applicability of widely used evaporation models (Dalton-approach-based correlations) is experimentally investigated for natural, forced, and combined convection regimes. A series of experimental measurements is carried out over a wide range of water temperatures and air velocities for 0.01 ≤ Gr/Re2 ≤ 100 in a heated rectangular pool. The investigations show that the evaporation rate strongly depends on the Gr/Re2 value of the convection regime. The results show that the evaporation rate increases with the difference in vapour pressures over both the forced convection (0.01 ≤ Gr/Re2 ≤ 0.1) and turbulent mixed convection (0.15 ≤ Gr/Re2 ≤ 25) regimes. However, the rate of increase of evaporation decreases with Gr/Re2 in the forced convection regime, whereas in turbulent mixed convection it increases. In addition, over the range of the free convection regime (Gr/Re2 ≥ 25), the evaporation rate is affected not only by the vapour pressure difference but also by the density variation. A dimensionless correlation using the experimental data of all convection regimes (0.01 ≤ Gr/Re2 ≤ 100) is proposed to cover different water surface geometries and airflow conditions.
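Dalton-approach correlations of the kind being tested share one shape: evaporation rate proportional to the surface-to-air vapour-pressure difference, multiplied by a wind function that is linear in air velocity. The sketch below shows that generic form; the default coefficients are illustrative (of the order of the classic Carrier-type values), not the ones fitted in this paper, and the units depend on the coefficients chosen.

```python
def dalton_evaporation_rate(p_surface_kpa, p_air_kpa, wind_m_s,
                            a=0.089, b=0.0782):
    """Generic Dalton-type correlation: evaporation rate equals a
    linear wind function (a + b*U) times the vapour-pressure
    difference between the water surface and the ambient air."""
    return (a + b * wind_m_s) * (p_surface_kpa - p_air_kpa)
```

The study's finding is precisely that a single (a, b) pair of this form does not hold across convection regimes, which motivates their Gr/Re2-based dimensionless correlation.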


2011 ◽  
Vol 133 (4) ◽  
Author(s):  
Raed I. Bourisli ◽  
Adnan A. AlAnzi

This work aims at developing a closed-form correlation between key building design variables and a building's energy use. The results can be utilized during the initial design stages to assess different building shapes and designs according to their expected energy use. Prototypical, 20-floor office buildings were used. The relative compactness, footprint area, projection factor, and window-to-wall ratio were varied and the resulting buildings' performances were simulated. In total, 729 different office buildings were developed and simulated in order to provide the training cases for optimizing the correlation's coefficients. Simulations were done using the VisualDOE software with a Typical Meteorological Year data file for Kuwait City, Kuwait. A real-coded genetic algorithm (GA) was used to optimize the coefficients of a proposed function that relates the energy use of a building to its four key parameters. The figure of merit was the difference in the ratio of the annual energy use of a building normalized by that of a reference building. The objective was to minimize the difference between the simulated results and the predictions of the four-variable function. Results show that the real-coded GA was able to produce a function that estimates the thermal performance of a proposed design with an accuracy of around 96%, based on the buildings tested. The goodness of fit, roughly represented by R2, ranged from 0.950 to 0.994. In terms of the effects of the various parameters, the footprint area was found to play the smallest role among the design parameters. It was also found that the accuracy of the function suffers most when high window-to-wall ratios are combined with low projection factors. In such cases, the energy use exhibits a potential optimum with respect to compactness. The proposed function (and methodology) will be a great tool for designers to inexpensively explore a wide range of alternatives and assess them in terms of their energy use efficiency.
It will also be of great use to municipality officials and building codes authors.
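A real-coded GA of the kind used above operates directly on vectors of real coefficients rather than bit strings. The sketch below is a minimal generic version, not the authors' exact operators: truncation selection with elitism, blend (BLX-alpha-style) crossover, and occasional Gaussian mutation, minimising a user-supplied fitness (here, the error between a candidate function's predictions and training data).

```python
import random

def real_coded_ga(fitness, bounds, pop_size=40, generations=200,
                  blend_alpha=0.5, mutation_rate=0.3, mutation_scale=0.1,
                  seed=0):
    """Minimise `fitness` over real coefficient vectors constrained to
    `bounds` (list of (lo, hi) per coefficient)."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, lo, hi):
        return max(lo, min(hi, x))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        # keep the better half as elites, breed the rest from them
        elites = sorted(pop, key=fitness)[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elites):
            p1, p2 = rng.sample(elites, 2)
            w = rng.uniform(-blend_alpha, 1.0 + blend_alpha)
            child = [clip(a + w * (b - a), lo, hi)
                     for a, b, (lo, hi) in zip(p1, p2, bounds)]
            if rng.random() < mutation_rate:
                g = rng.randrange(dim)
                lo, hi = bounds[g]
                child[g] = clip(child[g]
                                + rng.gauss(0.0, mutation_scale * (hi - lo)),
                                lo, hi)
            children.append(child)
        pop = elites + children
    return min(pop, key=fitness)
```

As a toy usage, fitting the two coefficients of a line y = a*x + b to data generated from a = 2, b = 1 by minimising the sum of squared errors recovers coefficients close to the targets.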


2006 ◽  
Vol 104 (4) ◽  
pp. 696-700 ◽  
Author(s):  
Yongquan Tang ◽  
Martin J. Turner ◽  
A Barry Baker

Background Physiologic dead space is usually estimated by the Bohr-Enghoff equation or the Fletcher method. Alveolar dead space is calculated as the difference between physiologic dead space and the anatomical dead space estimated by the Fowler equal-area method. This study introduces a graphical method that uses similar principles for measuring and displaying anatomical, physiologic, and alveolar dead spaces. Methods A new graphical equal-area method for estimating physiologic dead space is derived. Physiologic dead spaces of 1,200 carbon dioxide expirograms obtained from 10 ventilated patients were calculated by the Bohr-Enghoff equation, the Fletcher area method, and the new graphical equal-area method and were compared by Bland-Altman analysis. Dead space was varied by varying tidal volume, end-expiratory pressure, inspiratory-to-expiratory ratio, and inspiratory hold in each patient. Results The new graphical equal-area method for calculating physiologic dead space is shown analytically to be identical to the Bohr-Enghoff calculation. The mean difference (limits of agreement) between the physiologic dead spaces calculated by the new equal-area method and the Bohr-Enghoff equation was -0.07 ml (-1.27 to 1.13 ml). The mean difference between the new equal-area method and the Fletcher area method was -0.09 ml (-1.52 to 1.34 ml). Conclusions The authors' equal-area method for calculating, displaying, and visualizing physiologic dead space is easy to understand and yields the same results as the classic Bohr-Enghoff equation and the Fletcher area method. All three dead spaces (physiologic, anatomical, and alveolar), together with their relations to expired volume, can be displayed conveniently on the x-axis of a carbon dioxide expirogram.
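The Bohr-Enghoff reference calculation against which the graphical method is validated is a one-line formula. The sketch below states it; variable names and units are illustrative.

```python
def bohr_enghoff_dead_space(paco2_mmhg, peco2_mmhg, tidal_volume_ml):
    """Bohr-Enghoff physiologic dead space:
    VD = VT * (PaCO2 - PECO2) / PaCO2,
    where PECO2 is the mixed-expired CO2 partial pressure and the
    arterial PaCO2 is substituted for alveolar PCO2 (the Enghoff
    modification of the Bohr equation)."""
    return tidal_volume_ml * (paco2_mmhg - peco2_mmhg) / paco2_mmhg
```

For example, with PaCO2 = 40 mmHg, mixed-expired PCO2 = 28 mmHg, and a 500 ml tidal volume, the physiologic dead space is 150 ml, i.e. a dead-space fraction of 0.3.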

