Clinical Factors Associated with Atrial Fibrillation Detection on Single-Time Point Screening Using a Hand-Held Single-Lead ECG Device

2021, Vol 10 (4), pp. 729
Author(s): Giuseppe Boriani, Pietro Palmisano, Vincenzo Livio Malavasi, Elisa Fantecchi, Marco Vitolo, ...

Our aim was to assess the prevalence of unknown atrial fibrillation (AF) among adults during single-time point rhythm screening performed at meetings or social recreational activities organized by patient groups or volunteers. A total of 2814 subjects (median age 68 years) underwent AF screening with a handheld single-lead ECG device (MyDiagnostick). Overall, 56 subjects (2.0%) were diagnosed with AF, confirmed by 12-lead ECG following a positive or suspected recording. Screening identified AF in 2.9% of subjects ≥ 65 years; none of the 265 subjects aged below 50 years screened positive. Risk stratification for unknown AF based on a CHA2DS2-VASc score > 0 in males and > 1 in females (or CHA2DS2-VA > 0) had a high sensitivity (98.2%) and a high negative predictive value (99.8%) for AF detection. A slightly lower sensitivity (96.4%) was achieved by using age ≥ 65 years as a risk stratifier; conversely, raising the threshold to ≥ 75 years resulted in low sensitivity. Within the subset of subjects aged ≥ 65 years, a CHA2DS2-VASc > 1 in males and > 2 in females, or a CHA2DS2-VA > 1, had a high sensitivity (94.4%) and negative predictive value (99.3%), while age ≥ 75 years was associated with a marked drop in sensitivity for AF detection.
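The stratification thresholds above rest on the standard CHA2DS2-VASc components. A minimal sketch of the scoring and of the screening cut-offs used in the study (an illustrative reimplementation, not the authors' code):

```python
def cha2ds2_vasc(age, female, chf=False, hypertension=False,
                 diabetes=False, stroke_tia=False, vascular=False):
    """Standard CHA2DS2-VASc score; CHA2DS2-VA is the same minus the sex point."""
    score = 0
    score += 1 if chf else 0            # C: congestive heart failure
    score += 1 if hypertension else 0   # H: hypertension
    score += 2 if age >= 75 else (1 if 65 <= age < 75 else 0)  # A2 / A
    score += 1 if diabetes else 0       # D: diabetes mellitus
    score += 2 if stroke_tia else 0     # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular else 0       # V: vascular disease
    score += 1 if female else 0         # Sc: sex category (female)
    return score

def flagged_for_screening(age, female, **risk_factors):
    """Threshold from the study: score > 0 in males, > 1 in females."""
    return cha2ds2_vasc(age, female, **risk_factors) > (1 if female else 0)
```

For example, a 70-year-old male with no comorbidities scores 1 (age 65-74) and is flagged, while a 60-year-old female with no comorbidities scores 1 (sex only) and is not.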

2019, Vol 40 (Supplement_1)
Author(s): F Ghazal, F Al-Khalili, M Rosenqvist

Abstract Background Pulse palpation is recommended (ESC class IA) for single time-point screening for atrial fibrillation (AF). AF may, however, be paroxysmal, which makes it difficult to detect with a single time-point measurement. Intermittent ECG recording is a sensitive method for detecting AF, but the role of pulse palpation for AF detection has not been validated against simultaneous ECG recordings. Purpose To study the validity of AF detection using self pulse-palpation performed simultaneously with hand-held ECG recording three times daily for two weeks. Method Patients 65 years and older visiting four primary health care centres, for any reason, were invited to AF screening from July 2017 to December 2018. Hand-held intermittent ECG recordings, 30 seconds three times a day for a period of two weeks, were offered to participants without AF. Patients were instructed to take their own pulse simultaneously with each intermittent ECG measurement and to note in writing whether it was irregular or not. Results A total of 1010 patients (mean age 73 years, 61% women) participated in the study, and 27 new cases of AF (mostly paroxysmal) were detected. In total, 53,782 simultaneous ECG recordings and pulse measurements were registered. AF was verified in 311 ECG recordings, but the pulse was palpated as irregular in only 77 of these recordings (25% sensitivity per measurement occasion). Of the 27 detected AF cases, 15 felt their pulse as irregular on at least one occasion (56% sensitivity per individual). 187 individuals without AF felt their pulse as irregular on at least one occasion. The specificity per measurement occasion and per individual was 98% and 81%, respectively. The diagnostic odds ratio was 5.3.
Per individual:
                          AF (27 patients)   No AF (983 patients)
Irregular pulse (202)            15                  187
Regular pulse (808)              12                  796
Sensitivity 56%, Specificity 81%, Positive Predictive Value 7%, Negative Predictive Value 99%

Per measurement occasion:
                          AF (311 measurements)   No AF (53,471 measurements)
Irregular pulse (1,046)           77                       969
Regular pulse (52,736)           234                    52,502
Sensitivity 25%, Specificity 98%, Positive Predictive Value 7%, Negative Predictive Value 99%

Conclusion AF screening using one's own pulse palpation three times daily for two weeks is feasible but has a low sensitivity for AF detection. Acknowledgement/Funding This study was supported by the Swedish Heart and Lung Foundation, Pfizer, Boehringer-Ingelheim and Bayer.
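The reported metrics follow directly from the per-individual 2x2 counts (15 true positives, 12 false negatives, 187 false positives, 796 true negatives). A short sketch of the arithmetic, for checking the figures:

```python
def two_by_two_metrics(tp, fn, fp, tn):
    """Diagnostic accuracy metrics from a 2x2 contingency table."""
    sens = tp / (tp + fn)           # sensitivity
    spec = tn / (tn + fp)           # specificity
    ppv = tp / (tp + fp)            # positive predictive value
    npv = tn / (tn + fn)            # negative predictive value
    dor = (tp * tn) / (fn * fp)     # diagnostic odds ratio
    return sens, spec, ppv, npv, dor

# Per-individual counts from the study
sens, spec, ppv, npv, dor = two_by_two_metrics(tp=15, fn=12, fp=187, tn=796)
# sens ≈ 0.56, spec ≈ 0.81, ppv ≈ 0.07, npv ≈ 0.99, dor ≈ 5.3
```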


2018, Vol 27 (6), pp. 633-644
Author(s): Marco Proietti, Alessio Farcomeni, Giulio Francesco Romiti, Arianna Di Rocco, Filippo Placentino, ...

Aims Many clinical scores for risk stratification in patients with atrial fibrillation have been proposed, and some have been useful in predicting all-cause mortality. We aimed to analyse the relationship between clinical risk scores and the occurrence of all-cause death in atrial fibrillation patients. Methods We performed a systematic search in PubMed and Scopus from inception to 22 July 2017. We considered the following scores: ATRIA-Stroke, ATRIA-Bleeding, CHADS2, CHA2DS2-VASc, HAS-BLED, HATCH and ORBIT. Papers reporting data on these scores and all-cause death rates were considered. Results Fifty studies and 71 score groups were included in the analysis, comprising 669,217 patients. Data on ATRIA-Bleeding, CHADS2, CHA2DS2-VASc and HAS-BLED were available. All the scores were significantly associated with an increased risk of all-cause death. All the scores showed modest predictive ability at five years (c-indexes (95% confidence interval): CHADS2 0.64 (0.63–0.65), CHA2DS2-VASc 0.62 (0.61–0.64), HAS-BLED 0.62 (0.58–0.66)). Network meta-regression found no significant differences in predictive ability. The CHA2DS2-VASc score had a consistently high negative predictive value (≥94%) at one, three and five years of follow-up; moreover, it showed the highest probability of being the best performing score (63% at one year, 60% at three years, 68% at five years). Conclusion In atrial fibrillation patients, contemporary clinical risk scores are associated with an increased risk of all-cause death. Use of these scores for death prediction in atrial fibrillation patients could be considered as part of a holistic clinical assessment. The CHA2DS2-VASc score had a consistently high negative predictive value during follow-up and the highest probability of being the best performing clinical score.
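For a fixed follow-up horizon with complete follow-up, the c-index reduces to the probability that a patient who died carries a higher score than one who did not (ties counted as 0.5). An illustrative sketch of that pairwise definition (not the meta-analysis code, which must also handle censoring):

```python
from itertools import product

def c_index(scores, died):
    """Concordance index for a binary outcome at a fixed horizon.

    scores: risk score per patient; died: True if the event occurred.
    Counts concordant (event scored higher) pairs among all
    event/non-event pairs, with ties contributing 0.5.
    """
    events = [s for s, d in zip(scores, died) if d]
    nonevents = [s for s, d in zip(scores, died) if not d]
    concordant = 0.0
    for e, n in product(events, nonevents):
        concordant += 1.0 if e > n else (0.5 if e == n else 0.0)
    return concordant / (len(events) * len(nonevents))

# e.g. integer CHADS2-like scores with observed outcomes
print(c_index([2, 2, 1, 0], [True, False, True, False]))  # 0.625
```

A value of 0.5 means the score is no better than chance; the 0.62 to 0.64 values reported above correspond to the modest discrimination the authors describe.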


2014, Vol 8 (10), pp. 1252-1258
Author(s): Reem Mostafa Hassan, Mervat G El Enany, Hussien H Rizk

Introduction: Diagnosis of bloodstream infections using bacteriological cultures suffers from low sensitivity and reporting delays. Advanced molecular techniques introduced in many laboratories provide rapid results and may improve patient outcomes. This study aimed to evaluate the usefulness of a molecular technique, broad-range 16S rRNA PCR followed by sequencing, for the diagnosis of bloodstream infections, compared to blood culture in different patient groups. Methodology: Conventional PCR, using broad-range 16S rRNA primers, was performed on blood cultures collected from patients with suspected bloodstream infections; results were compared with those of blood culture. Results: With blood culture regarded as the gold standard, PCR showed a sensitivity of 86.25%, specificity of 91.25%, positive predictive value of 76.67%, negative predictive value of 95.22%, and accuracy of 88.8%. Conclusions: Molecular assays do not appear sufficient to replace microbial cultures in the diagnosis of bloodstream infections, but, given their high negative predictive value, they can offer a rapid test to rule out infection.
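Predictive values depend on sensitivity, specificity, and the prevalence of culture-positive infection via Bayes' rule. A sketch of that calculation; note the 25% prevalence is an assumption chosen for illustration (it is not stated in the abstract, though it reproduces the reported PPV and NPV):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity, and disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Reported sensitivity/specificity with an assumed 25% prevalence
ppv, npv = predictive_values(sens=0.8625, spec=0.9125, prev=0.25)
# ppv ≈ 0.7667, npv ≈ 0.9522
```

The formula also makes the conclusion concrete: at lower prevalence the NPV rises further, which is why a high-NPV assay works well as a rule-out test.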


2001, Vol 7 (6), pp. 359-363
Author(s): M Tintoré, A Rovira, L Brieva, E Grivé, R Jardí, ...

Aim of the study: To evaluate and compare the capacity of oligoclonal bands (OB) and three sets of MR imaging criteria to predict the conversion of clinically isolated syndromes (CIS) to clinically definite multiple sclerosis (CDMS). Patients and methods: One hundred and twelve patients with CIS were prospectively studied with MR imaging and determination of OB. Based on the clinical follow-up (conversion or non-conversion to CDMS), we calculated the sensitivity, specificity, accuracy, and positive and negative predictive values of OB and of the MR imaging criteria proposed by Paty et al, Fazekas et al and Barkhof et al. Results: CDMS developed in 26 (23.2%) patients after a mean follow-up of 31 months (range 12-62). OB were positive in 70 (62.5%) patients and were associated with a higher risk of developing CDMS. OB showed a sensitivity of 81%, specificity of 43%, accuracy of 52%, positive predictive value (PPV) of 30% and negative predictive value (NPV) of 88%. The Paty and Fazekas criteria showed identical results, with a sensitivity of 77%, specificity of 51%, accuracy of 57%, PPV of 32% and NPV of 88%. The Barkhof criteria showed a sensitivity of 65%, specificity of 70%, accuracy of 69%, PPV of 40% and NPV of 87%. The greatest accuracy was achieved when patients with positive OB and three or four Barkhof criteria were selected. Conclusions: We observed a high prevalence of OB in CIS. OB and MR imaging (Paty's and Fazekas' criteria) have high sensitivity, while Barkhof's criteria have higher specificity. Both OB and the MR imaging criteria have a high negative predictive value.


2021, Vol 42 (Supplement_1)
Author(s): D M Kimenai, A Anand, M De Bakker, M Shipley, T Fujisawa, ...

Abstract Background High-sensitivity cardiac troponin may be a promising biomarker for personalised cardiovascular risk prediction and monitoring in the general population. Temporal changes in high-sensitivity cardiac troponin before cardiovascular death are largely unexplored. Purpose Using the longitudinal Whitehall II cohort, we evaluated whether three serial high-sensitivity cardiac troponin I measurements over 15 years improved prediction of cardiovascular death compared with a single time point at baseline. Methods Whitehall II is an ongoing longitudinal observational cohort study of 10,308 civil servants; we included participants who had at least one cardiac troponin measurement and outcome data available. We constructed time trajectories to evaluate the temporal pattern of cardiac troponin I in those who died from cardiovascular disease compared with those who did not. Cox regression and joint models were used to investigate the association of cardiac troponin I with cardiovascular death using single time point (at baseline) and repeated measurements (at baseline, 10 and 15 years), respectively. Discriminative ability was assessed with the concordance index. Results In total, we included 7,293 individuals (mean age 58 years [SD 7] at baseline, 29.4% women). Of these, 5,818 (79.8%) and 4,045 (55.5%) individuals had a second and third cardiac troponin I concentration measured, respectively. Cardiovascular death occurred in 281 (3.9%) individuals during a median follow-up of 21.4 [IQR, 15.8 to 21.8] years. In the 21-year trajectories of cardiac troponin I, we observed higher baseline concentrations in those who died of cardiovascular disease than in those who did not (median 5 [IQR, 2 to 9] ng/L versus 3 [IQR, 2 to 5] ng/L, respectively; Figure).
Cardiac troponin I was an independent predictor of cardiovascular death, and the hazard ratio (HR) derived from the joint model that included serial cardiac troponin measurements was higher than the HR derived from the single time point model (single time point model: adjusted HR 1.53, 95% confidence interval [CI] 1.37 to 1.70 per naturally log-transformed unit of cardiac troponin I, versus repeated measurements model: adjusted HR 1.79, 95% CI 1.58 to 2.02). The discriminative ability of the cardiac troponin model improved when using repeated measurements (concordance index of unadjusted cardiac troponin models: single time point 0.668 versus repeated measurements 0.724). Conclusions Our study shows that cardiac troponin I trajectories were persistently higher among individuals who died from cardiovascular disease. Cardiac troponin I is a strong independent predictor of cardiovascular death, and incorporating repeated measurements of cardiac troponin improves cardiovascular risk prediction in the general population. Funding Acknowledgement Type of funding sources: Foundation. Main funding source(s): Cardiac troponin I measurements and analysis were supported by Siemens Healthineers. The study was supported by Health Data Research UK, which receives its funding from HDR UK Ltd (HDR-5012), funded by the UK Medical Research Council, Engineering and Physical Sciences Research Council, Economic and Social Research Council, Department of Health and Social Care (England), Chief Scientist Office of the Scottish Government Health and Social Care Directorates, Health and Social Care Research and Development Division (Welsh Government), Public Health Agency (Northern Ireland), British Heart Foundation and the Wellcome Trust. NLM is supported by the British Heart Foundation through a Senior Clinical Research Fellowship (FS/16/14/32023), Programme Grant (RG/20/10/34966) and a Research Excellence Award (RE/18/5/34216).
The funders had no role in the study or in the decision to submit this work for publication.


2021, Vol 9 (B), pp. 1128-1134
Author(s): Saif Hassan Alrasheed, Amel Mohamed Yousif, Majid A. Moafa, Abd Elaziz Mohamed Elmadina, Mohammad Alobaid

BACKGROUND: Sheard and Percival proposed that symptoms from latent strabismus can be avoided if the relevant fusional vergence is adequate to support the heterophoria. AIM: The aim of the study was to determine the sensitivity and specificity of Sheard's and Percival's criteria for the diagnosis of heterophoria. METHODS: A cross-sectional hospital-based study was performed at Al-Neelain Eye Hospital, Khartoum, Sudan, from February to October 2019. Heterophoria was measured using a Maddox Wing and fusional vergence using a prism bar. Thereafter, Sheard's and Percival's criteria were applied for the diagnosis of heterophoria. RESULTS: A total of 230 participants (age = 15–30 years; mean age = 19.34 ± 3.325 years) were recruited for this study. Sheard's criterion showed a high sensitivity of 87.2% and a low specificity of 8.0% for diagnosing exophoria, with positive and negative predictive values of 65.5% and 26%, respectively. The criterion showed a relatively lower sensitivity of 77.8% and a specificity of 9.0% in the diagnosis of esophoria, with positive and negative predictive values of 56% and 20%, respectively. Percival's criterion showed a high sensitivity of 84.2% and a low specificity of 9.1% in diagnosing esophoria, with positive and negative predictive values of 61.5% and 25%, respectively. On the other hand, it showed a low sensitivity of 67.4% and a specificity of 13.8% in diagnosing exophoria, with positive and negative predictive values of 61.9% and 17%, respectively. CONCLUSION: Sheard's and Percival's criteria are useful in diagnosing binocular vision problems. Sheard's criterion is more accurate in diagnosing near exophoria and Percival's criterion in diagnosing near esophoria. These criteria therefore provide good clues and predictions for the diagnosis of binocular vision problems.
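Both criteria reduce to simple inequalities on clinical measurements in prism dioptres. A minimal sketch, assuming the standard textbook formulations (not the authors' implementation):

```python
def sheard_met(phoria, compensating_vergence):
    """Sheard's criterion: the fusional vergence reserve that opposes the
    phoria should be at least twice the magnitude of the phoria."""
    return compensating_vergence >= 2 * phoria

def percival_met(lesser_limit, greater_limit):
    """Percival's criterion: the lesser of the two fusional vergence limits
    should be at least half the greater, so the demand point stays within
    the middle third of the total vergence range."""
    return lesser_limit >= greater_limit / 2

# e.g. 6 PD exophore with 14 PD of positive fusional vergence
print(sheard_met(6, 14))    # True: 14 >= 12
# e.g. vergence limits of 8 PD and 20 PD
print(percival_met(8, 20))  # False: 8 < 10
```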


Author(s): Baharan Kamousi, Suganya Karunakaran, Kapil Gururangan, Matthew Markert, Barbara Decker, ...

Abstract Introduction Current electroencephalography (EEG) practice relies on interpretation by expert neurologists, which introduces diagnostic and therapeutic delays that can impact patients' clinical outcomes. As EEG practice expands, these experts are becoming an increasingly limited resource. A highly sensitive and specific automated seizure detection system would streamline practice and expedite appropriate management for patients with possible nonconvulsive seizures. We aimed to test the performance of a recently FDA-cleared machine learning method (Clarity, Ceribell Inc.) that measures the burden of seizure activity in real time and generates bedside alerts for possible status epilepticus (SE). Methods We retrospectively identified adult patients (n = 353) who underwent evaluation of possible seizures with the Rapid Response EEG system (Rapid-EEG, Ceribell Inc.). Automated detection of seizure activity and seizure burden throughout a recording (calculated as the percentage of ten-second epochs with seizure activity in any 5-min EEG segment) was performed with Clarity, and various thresholds of seizure burden were tested (≥ 10% indicating ≥ 30 s of seizure activity in the last 5 min, ≥ 50% indicating ≥ 2.5 min of seizure activity, and ≥ 90% indicating ≥ 4.5 min of seizure activity and triggering an SE alert). The sensitivity and specificity of Clarity's real-time seizure burden measurements and SE alerts were compared to the majority consensus of at least two expert neurologists. Results The majority consensus of neurologists labeled the 353 EEGs as normal or slow activity (n = 249), highly epileptiform patterns (HEP, n = 87), or seizures (n = 17; nine longer than 5 min, i.e. SE, and eight shorter than 5 min). The algorithm generated an SE alert (≥ 90% seizure burden) with 100% sensitivity and 93% specificity.
The sensitivity and specificity of the other seizure-burden thresholds for detecting patients with seizures were 100% and 82% for the ≥ 50% threshold, and 88% and 60% for the ≥ 10% threshold. Of the 179 EEG recordings in which the algorithm detected no seizures, seizures were identified by the expert reviewers in only two cases, indicating a negative predictive value of 99%. Discussion Clarity detected SE events with high sensitivity and specificity, and it demonstrated a high negative predictive value for distinguishing nonepileptiform activity from seizures and highly epileptiform activity. Conclusions Ruling out seizures accurately in a large proportion of cases can help prevent unnecessary or aggressive over-treatment in critical care settings, where empiric treatment with antiseizure medications is currently prevalent. Clarity's high sensitivity for SE and high negative predictive value for cases without epileptiform activity make it a useful tool for triaging treatment and the need for urgent neurological consultation.
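The seizure-burden definition described above, the fraction of ten-second epochs containing seizure activity within any 5-minute segment, can be sketched as a sliding-window maximum over per-epoch labels (an illustrative reimplementation under that stated definition, not Ceribell's code):

```python
def max_seizure_burden(epochs, window=30):
    """epochs: booleans, one per 10-second epoch (True = seizure activity).
    window=30 epochs corresponds to 5 minutes. Returns the highest
    fraction of seizure epochs found in any window of the recording."""
    if len(epochs) < window:
        return sum(epochs) / len(epochs)
    return max(sum(epochs[i:i + window]) / window
               for i in range(len(epochs) - window + 1))

def status_epilepticus_alert(epochs, threshold=0.9):
    """SE alert fires when any 5-minute window reaches >= 90% burden,
    i.e. >= 4.5 minutes of seizure activity."""
    return max_seizure_burden(epochs) >= threshold

# 27 seizure epochs (4.5 min) inside a 10-minute recording
burden = max_seizure_burden([False] * 30 + [True] * 27 + [False] * 3)
# burden == 0.9, which reaches the SE alert threshold
```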


2020, Vol 24 (3), pp. 1-164
Author(s): Rui Duarte, Angela Stainthorpe, Janette Greenhalgh, Marty Richardson, Sarah Nevitt, ...

Background Atrial fibrillation (AF) is the most common type of cardiac arrhythmia and is associated with an increased risk of stroke and congestive heart failure. Lead-I electrocardiogram (ECG) devices are handheld instruments that can be used to detect AF at a single time point in people who present with relevant signs or symptoms. Objective To assess the diagnostic test accuracy, clinical impact and cost-effectiveness of using single time point lead-I ECG devices for the detection of AF in people presenting to primary care with relevant signs or symptoms, and who have an irregular pulse compared with using manual pulse palpation (MPP) followed by a 12-lead ECG in primary or secondary care. Data sources MEDLINE, MEDLINE Epub Ahead of Print and MEDLINE In-Process & Other Non-Indexed Citations, EMBASE, PubMed, Cochrane Databases of Systematic Reviews, Cochrane Central Database of Controlled Trials, Database of Abstracts of Reviews of Effects and the Health Technology Assessment Database. Methods The systematic review methods followed published guidance. Two reviewers screened the search results (database inception to April 2018), extracted data and assessed the quality of the included studies. Summary estimates of diagnostic accuracy were calculated using bivariate models. An economic model consisting of a decision tree and two cohort Markov models was developed to evaluate the cost-effectiveness of lead-I ECG devices. Results No studies were identified that evaluated the use of lead-I ECG devices for patients with signs or symptoms of AF. Therefore, the diagnostic accuracy and clinical impact results presented are derived from an asymptomatic population (used as a proxy for people with signs or symptoms of AF). The summary sensitivity of lead-I ECG devices was 93.9% [95% confidence interval (CI) 86.2% to 97.4%] and summary specificity was 96.5% (95% CI 90.4% to 98.8%). One study reported limited clinical outcome data. 
Acceptability of lead-I ECG devices was reported in four studies, with generally positive views. The de novo economic model yielded incremental cost-effectiveness ratios (ICERs) per quality-adjusted life-year (QALY) gained. The results of the pairwise analysis show that all lead-I ECG devices generated ICERs per QALY gained below the £20,000–30,000 threshold. Kardia Mobile (AliveCor Ltd, Mountain View, CA, USA) is the most cost-effective option in a full incremental analysis. Limitations No published data evaluating the diagnostic accuracy, clinical impact or cost-effectiveness of lead-I ECG devices for the population of interest are available. Conclusions Single time point lead-I ECG devices for the detection of AF in people with signs or symptoms of AF and an irregular pulse appear to be a cost-effective use of NHS resources compared with MPP followed by a 12-lead ECG in primary or secondary care, given the assumptions used in the base-case model. Future work Studies assessing how the use of lead-I ECG devices in this population affects the number of people diagnosed with AF when compared with current practice would be useful. Study registration This study is registered as PROSPERO CRD42018090375. Funding The National Institute for Health Research Health Technology Assessment programme.
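The pairwise cost-effectiveness comparison above comes down to a single ratio: the incremental cost-effectiveness ratio (ICER), the extra cost per extra QALY gained, judged against the £20,000 to £30,000 per QALY threshold. A minimal sketch with hypothetical figures (the costs and QALY gains below are made up for illustration, not taken from the HTA model):

```python
WTP_THRESHOLD = 20_000  # £/QALY, lower bound of the threshold range cited

def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_qaly

def cost_effective(delta_cost, delta_qaly, threshold=WTP_THRESHOLD):
    """A strategy is cost-effective if its ICER falls below the
    willingness-to-pay threshold."""
    return icer(delta_cost, delta_qaly) <= threshold

# Hypothetical: device pathway costs £300 more and gains 0.03 QALYs
print(cost_effective(300, 0.03))  # True: ICER = £10,000/QALY
```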

