Detección automática de la Fibrilación Auricular a través de un Smartwatch [Automatic Detection of Atrial Fibrillation Using a Smartwatch]

2019 · Vol 11 (4) · pp. 3
Author(s): Anna Abad Torrent, Helena Benito Naverac

Atrial fibrillation is the most common cardiac arrhythmia in clinical practice. Its prevalence is around 0.4–1% of the general population and increases with age, reaching up to 8% in people over 80. This arrhythmia is a leading cause worldwide of stroke (20–30% of cases are attributable to atrial fibrillation), heart failure, and sudden death. It is often clinically silent, or presents with vague symptoms such as palpitations that can be mistakenly attributed to anxiety, delaying diagnosis. Early initiation of anticoagulation (in selected cases) significantly reduces the incidence of thromboembolic events. In cardiology, the diagnostic standard for a cardiac arrhythmia is the electrocardiogram (ECG). Building on the monitoring experience of KardiaBand™ and SmartRhythm™, AliveCor launched the first such platform for the Apple Watch Series 4, combining an FDA-approved electrocardiography device with artificial-intelligence analysis algorithms that help detect atrial fibrillation.
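The actual AliveCor and Apple Watch detection models are proprietary, but consumer AF screens of this kind typically look for irregular beat-to-beat (RR) intervals. The following is only an illustrative sketch: the coefficient-of-variation rule and the 0.10 threshold are assumptions for demonstration, not clinical values or the products' algorithms.

```python
# Illustrative sketch only: real smartwatch AF detectors use proprietary
# models. This toy screen flags rhythm irregularity from RR intervals
# (the time in seconds between successive heartbeats).

def rr_irregularity(rr_intervals):
    """Coefficient of variation of RR intervals (std / mean)."""
    n = len(rr_intervals)
    mean = sum(rr_intervals) / n
    var = sum((x - mean) ** 2 for x in rr_intervals) / n
    return (var ** 0.5) / mean

def possible_af(rr_intervals, threshold=0.10):
    """Hypothetical rule: flag possible AF when beat-to-beat variability is
    high. The 0.10 threshold is an assumption for illustration only."""
    return rr_irregularity(rr_intervals) > threshold

regular = [0.80, 0.82, 0.79, 0.81, 0.80, 0.80]    # steady sinus rhythm
irregular = [0.62, 1.05, 0.48, 0.93, 0.71, 1.20]  # "irregularly irregular"

print(possible_af(regular))    # False
print(possible_af(irregular))  # True
```

In practice a flag like this would only prompt a confirmatory single-lead ECG recording, since RR irregularity alone cannot distinguish AF from other arrhythmias.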

2021 · Vol 7 (1)
Author(s): Xinran Wang, Liang Wang, Hong Bu, Ningning Zhang, Meng Yue, ...

Abstract: Programmed death ligand-1 (PD-L1) expression is a key biomarker for selecting patients for PD-1/PD-L1-targeted immunotherapy. However, PD-L1 scoring of tumor-infiltrating immune cells (IC) currently relies on a subjective assessment guideline in clinical practice, with low inter-observer concordance. A repeatable and quantifiable PD-L1 IC scoring method for breast cancer is therefore desirable. In this study, we propose a deep learning-based, artificial intelligence-assisted (AI-assisted) model for PD-L1 IC scoring. Three rounds of ring studies (RSs) involving 31 pathologists from 10 hospitals were carried out, using the current guideline in the first two rounds (RS1, RS2) and our AI scoring model in the last round (RS3). A total of 109 PD-L1 (Ventana SP142) immunohistochemistry (IHC)-stained images were assessed, and the role of the AI-assisted model was evaluated. With AI assistance, scoring concordance across pathologists improved from moderate in RS1 (0.674, 95% confidence interval (CI): 0.614–0.735) and RS2 (0.736, 95% CI: 0.683–0.789) to excellent in RS3 (0.950, 95% CI: 0.936–0.962). The 2- and 4-category scoring accuracies improved by 4.2% (to 0.959, 95% CI: 0.953–0.964) and 13% (to 0.815, 95% CI: 0.803–0.827), respectively (p < 0.001). The AI results were generally accepted by pathologists, with 61% "fully accepted" and 91% "almost accepted". The proposed AI-assisted method can help pathologists at all levels improve both the accuracy and the concordance of PD-L1 (SP142) IC assessment in breast cancer, and it provides a scheme for standardizing PD-L1 IC scoring in clinical practice.
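The 2- and 4-category scores compared in the study are derived from the percentage of tumor area occupied by PD-L1-positive immune cells. As a sketch of that mapping, the code below uses cut-offs commonly cited for the Ventana SP142 assay (IC0 < 1%, IC1 1–<5%, IC2 5–<10%, IC3 ≥ 10%; binary positivity at IC ≥ 1%); these thresholds are an assumption here, not taken from the paper itself.

```python
# Sketch: map a quantitative PD-L1 immune-cell (IC) percentage to the
# 2- and 4-category scores. Cut-offs are commonly cited SP142 values,
# assumed for illustration rather than quoted from the study.

def ic_4_category(ic_percent):
    """Return IC0-IC3 from the PD-L1-positive immune-cell area percentage."""
    if ic_percent < 1:
        return "IC0"
    elif ic_percent < 5:
        return "IC1"
    elif ic_percent < 10:
        return "IC2"
    return "IC3"

def ic_2_category(ic_percent):
    """Binary call: positive when IC >= 1% (the usual SP142 breast cut-off)."""
    return "positive" if ic_percent >= 1 else "negative"

for pct in (0.5, 3, 7, 25):
    print(pct, ic_4_category(pct), ic_2_category(pct))
```

A quantitative AI model outputs the percentage directly, which is why binning it this way removes much of the subjective boundary judgment that drives inter-observer disagreement.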


2021 · Vol 4 (1)
Author(s): Albert T. Young, Kristen Fernandez, Jacob Pfau, Rasika Reddy, Nhat Anh Cao, ...

Abstract: Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve (AUROC) or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world, non-curated images by applying computational "stress tests". Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false-positive or false-negative predictions for 6.5–22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinical readiness.
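A stress test of this kind can be sketched as: transform each image in simple, label-preserving ways and check whether the classifier's prediction survives. The sketch below uses 90-degree rotations and a deliberately orientation-sensitive toy classifier standing in for a CNN; `toy_model` and its threshold are illustrative assumptions, not the paper's models or datasets.

```python
# Sketch of a computational "stress test": apply simple transformations
# (here, 90-degree rotations) and check whether predictions stay
# consistent. `toy_model` is a hypothetical stand-in for a CNN.

def rotate90(img):
    """Rotate a 2D grayscale image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def toy_model(img, threshold=0.5):
    """Hypothetical classifier: 'malignant' (True) if the top-left quadrant
    is dark. Deliberately orientation-sensitive, so rotations can flip it."""
    h, w = len(img) // 2, len(img[0]) // 2
    quadrant = [img[r][c] for r in range(h) for c in range(w)]
    return sum(quadrant) / len(quadrant) < threshold

def stress_test(model, img, n_rotations=4):
    """Collect the predictions across all rotations; a robust model
    yields a single consistent answer."""
    preds = set()
    for _ in range(n_rotations):
        preds.add(model(img))
        img = rotate90(img)
    return preds

# A lesion image whose dark pixels sit in one corner only.
img = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.9],
]
print(stress_test(toy_model, img))  # contains both True and False: not robust
```

Because rotation does not change the diagnosis, any prediction flip it causes is a robustness failure of exactly the kind the study quantifies (6.5–22% of lesions affected).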

