TISA: Topic Independence Scoring Algorithm

Author(s):  
Justin Christopher Martineau ◽  
Doreen Cheng ◽  
Tim Finin
2021 ◽  
Vol 8 (1) ◽  
pp. e000648
Author(s):  
Gilles Jadd Hoilat ◽  
Mohamad Fekredeen Ayas ◽  
Judie Noemie Hoilat ◽  
Ahmed Abu-Zaid ◽  
Ceren Durer ◽  
...  

Background Hepatic encephalopathy (HE) is defined as brain dysfunction that occurs because of acute liver failure or liver cirrhosis and is associated with significant morbidity and mortality. Lactulose remains the standard of care to date; however, polyethylene glycol (PEG) has gained the attention of multiple investigators. Methods We screened five databases, namely PubMed, Scopus, Web of Science, Cochrane Library and Embase, from inception to 10 February 2021. Dichotomous and continuous data were analysed using the Mantel-Haenszel and inverse variance methods, respectively, yielding a meta-analysis comparing PEG versus lactulose in the treatment of HE. Results Four trials with 229 patients were included. Compared with lactulose, the pooled effect size demonstrated a significantly lower average HE Scoring Algorithm (HESA) score at 24 hours (mean difference (MD)=−0.68, 95% CI (−1.05 to −0.31), p<0.001), a higher proportion of patients with a reduction in HESA score of ≥1 grade at 24 hours (risk ratio (RR)=1.40, 95% CI (1.17 to 1.67), p<0.001), a higher proportion of patients with a HESA score of grade 0 at 24 hours (RR=4.33, 95% CI (2.27 to 8.28), p<0.001) and a shorter time to resolution of HE (MD=−1.45, 95% CI (−1.72 to −1.18), p<0.001), all in favour of patients treated with PEG. Conclusion PEG produces a larger reduction in the HESA score and thus a faster resolution of HE compared with lactulose.
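The pooled estimates above come from standard meta-analytic weighting: the Mantel-Haenszel method for dichotomous outcomes (risk ratios) and inverse variance for continuous outcomes (mean differences). As a minimal sketch of the inverse-variance step, the Python below pools per-trial mean differences into a fixed-effect estimate with a 95% CI; the trial summaries and variable names are illustrative placeholders, not data from the four included trials.

import math

# Illustrative per-trial summaries (NOT the trials in the review):
# (mean_PEG, sd_PEG, n_PEG, mean_lactulose, sd_lactulose, n_lactulose)
trials = [
    (1.2, 0.9, 25, 1.9, 1.0, 25),
    (0.8, 0.7, 30, 1.5, 0.8, 32),
]

def inverse_variance_pool(trials):
    """Fixed-effect pooled mean difference with a 95% CI (inverse-variance weights)."""
    weights, mds = [], []
    for m1, sd1, n1, m2, sd2, n2 in trials:
        md = m1 - m2                      # per-trial mean difference
        var = sd1**2 / n1 + sd2**2 / n2   # variance of that mean difference
        weights.append(1.0 / var)
        mds.append(md)
    pooled = sum(w * d for w, d in zip(weights, mds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

md, ci = inverse_variance_pool(trials)
print(f"Pooled MD = {md:.2f}, 95% CI ({ci[0]:.2f} to {ci[1]:.2f})")

A random-effects variant would additionally estimate between-trial heterogeneity (for example, a DerSimonian-Laird tau-squared) before weighting.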


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Boju Pan ◽  
Yuxin Kang ◽  
Yan Jin ◽  
Lin Yang ◽  
Yushuang Zheng ◽  
...  

Abstract Introduction Programmed cell death ligand-1 (PD-L1) expression is a promising biomarker for identifying treatments related to non-small cell lung cancer (NSCLC). Automated image analysis can serve as an aid to PD-L1 scoring for pathologists, reducing inter- and intra-reader variability. We developed a novel automated tumor proportion scoring (TPS) algorithm and evaluated the concordance of this image analysis algorithm with pathologist scores. Methods We included 230 NSCLC samples prepared and stained using the PD-L1 (SP263) and PD-L1 (22C3) antibodies separately. The scoring algorithm was based on regional segmentation and cellular detection. We used 30 PD-L1 (SP263) slides for algorithm training and validation. Results Overall, 192 SP263 samples and 117 22C3 samples were amenable to image analysis scoring. Automated image analysis and pathologist scores were highly concordant (intraclass correlation coefficient (ICC)=0.873 and 0.737). Concordance at moderate and high cutoff values was significantly better than at low cutoff values. For both SP263 and 22C3, concordance was better in squamous cell carcinomas than in adenocarcinomas (SP263 ICC=0.884 vs 0.783; 22C3 ICC=0.782 vs 0.500). In addition, our automated immune cell proportion scoring (IPS) scores were highly positively correlated with the pathologists' TPS scores. Conclusions The novel automated image analysis scoring algorithm permitted quantitative comparison with existing PD-L1 diagnostic assays and demonstrated the effectiveness of combining cellular and regional information for algorithm training. Meanwhile, concordance varied across NSCLC subtypes, which should be considered in algorithm development.
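Once regional segmentation and cellular detection have labelled individual cells, the tumor proportion score itself reduces to a ratio: PD-L1-positive viable tumor cells over all viable tumor cells. The sketch below shows that final step for assumed per-cell records, binned at the commonly used 1% and 50% cutoffs; the Cell fields and helper names are illustrative assumptions, not the authors' pipeline.

from dataclasses import dataclass

@dataclass
class Cell:
    is_tumor: bool        # assigned by cellular detection within segmented tumor regions
    pdl1_positive: bool   # membranous PD-L1 staining call for this cell

def tumor_proportion_score(cells):
    """TPS (%) = PD-L1-positive tumor cells / all viable tumor cells * 100."""
    tumor = [c for c in cells if c.is_tumor]
    if not tumor:
        return None  # no viable tumor cells detected; slide not evaluable
    positive = sum(c.pdl1_positive for c in tumor)
    return 100.0 * positive / len(tumor)

def tps_category(tps):
    """Bin a TPS value at the commonly used <1%, 1-49%, >=50% cutoffs."""
    if tps is None:
        return "not evaluable"
    if tps < 1.0:
        return "<1%"
    return "1-49%" if tps < 50.0 else ">=50%"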


2016 ◽  
Vol 3 (1) ◽  
pp. 1159847
Author(s):  
John Kwagyan ◽  
Victor Apprey ◽  
George E. Bonney ◽  
Zudi Lu

2002 ◽  
Vol 8 (5) ◽  
pp. 379-386 ◽  
Author(s):  
N. Kehtarnavaz ◽  
H.-J. Oh ◽  
Y. Yoo

SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A101-A101
Author(s):  
Ulysses Magalang ◽  
Brendan Keenan ◽  
Bethany Staley ◽  
Peter Anderer ◽  
Marco Ross ◽  
...  

Abstract Introduction Scoring algorithms have the potential to increase polysomnography (PSG) scoring efficiency while also ensuring consistency and reproducibility. We sought to validate an updated sleep staging algorithm (Somnolyzer; Philips, Monroeville PA USA) against manual sleep staging by analyzing a dataset we have previously used to report sleep staging variability across nine member centers of the Sleep Apnea Global Interdisciplinary Consortium (SAGIC). Methods Fifteen PSGs collected at a single sleep clinic were scored independently by technologists at nine SAGIC centers located in six countries, and auto-scored with the algorithm. Each 30-second epoch was staged manually according to American Academy of Sleep Medicine criteria. We calculated the intraclass correlation coefficient (ICC) and performed a Bland-Altman analysis comparing the average manual- and auto-scored total sleep time (TST) and time in each sleep stage (N1, N2, N3, rapid eye movement [REM]). We hypothesized that the values from auto-scoring would show good agreement and reliability when compared to the average across manual scorers. Results The participants contributing to the original dataset had a mean (SD) age of 47 (12) years, and 80% were male. Auto-scoring showed substantial (ICC=0.60-0.80) or almost perfect (ICC=0.80-1.00) reliability compared to the manual-scoring average, with ICCs (95% confidence intervals) of 0.976 (0.931, 0.992) for TST, 0.681 (0.291, 0.879) for time in N1, 0.685 (0.299, 0.881) for time in N2, 0.922 (0.791, 0.973) for time in N3, and 0.930 (0.811, 0.976) for time in REM. Similarly, Bland-Altman analyses showed good agreement between methods, with a mean difference (limits of agreement) of only 1.2 (-19.7, 22.0) minutes for TST, 13.0 (-18.2, 44.1) minutes for N1, -13.8 (-65.7, 38.1) minutes for N2, -0.33 (-26.1, 25.5) minutes for N3, and -1.2 (-25.9, 23.5) minutes for REM. Conclusion Results support high reliability and good agreement between the auto-scoring algorithm and average human scoring for measurements of sleep duration. Auto-scoring slightly overestimated N1 and underestimated N2, but results for TST, N3 and REM were nearly identical on average. Thus, the auto-scoring algorithm is acceptable for sleep staging when compared against human scorers. Support (if any) Philips.
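The Bland-Altman agreement reported above is the mean of the auto-minus-manual differences together with limits of agreement at mean ± 1.96 SD of those differences. Below is a minimal sketch of that calculation, assuming paired per-record durations in minutes; the example TST values are illustrative placeholders, not the SAGIC data.

import statistics

def bland_altman(auto_minutes, manual_minutes):
    """Return the mean difference (bias) and 95% limits of agreement."""
    diffs = [a - m for a, m in zip(auto_minutes, manual_minutes)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative total sleep time values (minutes) for a handful of records:
auto_tst   = [412, 388, 455, 402, 430]
manual_tst = [405, 395, 450, 399, 436]
bias, loa = bland_altman(auto_tst, manual_tst)
print(f"Bias = {bias:.1f} min, limits of agreement ({loa[0]:.1f}, {loa[1]:.1f})")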


2018 ◽  
Vol 151 (3) ◽  
pp. 428-432 ◽  
Author(s):  
Jean M. Hansen ◽  
Anil K. Sood ◽  
Robert L. Coleman ◽  
Shannon N. Westin ◽  
Pamela T. Soliman ◽  
...  

2009 ◽  
Vol 7 (4) ◽  
pp. 24
Author(s):  
A. McElhinny ◽  
J. Ranger-Moore ◽  
I. Loftin ◽  
L. Wang ◽  
M. Loftus ◽  
...  
