ERCP and video assessment: Can video judge the endoscopy star?

2021 ◽  
Vol 93 (4) ◽  
pp. 924-926
Author(s):  
Andrew Johannes ◽  
Patrick Pfau
Author(s):  
Aida Carballo-Fazanes ◽  
Ezequiel Rey ◽  
Nadia C. Valentini ◽  
José E. Rodríguez-Fernández ◽  
Cristina Varela-Casal ◽  
...  

The Test of Gross Motor Development (TGMD) is one of the most common tools for assessing fundamental movement skills (FMS) in children aged 3 to 10 years. This study aimed to examine the intra-rater and inter-rater reliability of the TGMD—3rd Edition (TGMD-3) between expert and novice raters using live and video assessment. Five raters [2 experts and 3 novices (one of them holding a BSc in Physical Education and Sport Science)] assessed and scored the TGMD-3 performance of 25 healthy children [60% female; mean (standard deviation) age 9.16 (1.31) years] attending a public elementary school in Santiago de Compostela (Spain) during the 2019–2020 academic year. Raters scored each child's performance in two viewing modes (live and slow-motion video). The intraclass correlation coefficient (ICC) was used to determine the agreement between raters. Our results showed moderate-to-excellent intra-rater reliability for the overall score and the locomotor and ball skills subscales; moderate-to-good inter-rater reliability for the overall score and ball skills; and poor-to-good inter-rater reliability for the locomotor subscale. Higher intra-rater reliability was achieved by the expert raters and by the novice rater with a physical education background than by the other novice raters. Inter-rater reliability, however, was more variable across all raters regardless of experience or background. No significant differences in reliability were found between live and video assessments. For clinical practice, it is recommended that raters agree on scoring criteria before assessment to avoid subjective interpretations that might distort the results.
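The ICC agreement statistic used in the study can be illustrated with a short sketch. Below is a minimal pure-Python computation of ICC(2,1) (two-way random effects, absolute agreement, single measure), one common variant; the abstract does not state which ICC form the authors used, so this choice is an assumption for illustration only.

```python
# Minimal ICC(2,1) (two-way random effects, absolute agreement,
# single measure) from an n-subjects x k-raters score matrix.
def icc_2_1(scores):
    n = len(scores)          # number of subjects (rows)
    k = len(scores[0])       # number of raters (columns)
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # between-subjects mean square
    msc = ss_cols / (k - 1)               # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For perfectly agreeing raters the function returns 1.0; disagreement between raters pulls the value toward (and below) zero.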


2018 ◽  
Vol 35 (8) ◽  
pp. 1508-1518
Author(s):  
Rosembergue Pereira Souza ◽  
Luiz Fernando Rust da Costa Carmo ◽  
Luci Pirmez

Purpose The purpose of this paper is to present a procedure for finding unusual patterns in accredited tests using a rapid processing method for analyzing video records. The procedure uses the temporal differencing technique for object tracking and considers only frames not identified as statistically redundant. Design/methodology/approach An accreditation organization is responsible for accrediting facilities to undertake testing and calibration activities. Periodically, such organizations evaluate accredited testing facilities. These evaluations could use video records and photographs of the tests performed by the facility to judge their conformity to technical requirements. To validate the proposed procedure, a real-world data set with video records from accredited testing facilities in the field of vehicle safety in Brazil was used. The processing time of this proposed procedure was compared with the time needed to process the video records in a traditional fashion. Findings With an appropriate threshold value, the proposed procedure could successfully identify video records of fraudulent services. Processing time was faster than when a traditional method was employed. Originality/value Manually evaluating video records is time consuming and tedious. This paper proposes a procedure to rapidly find unusual patterns in videos of accredited tests with a minimum of manual effort.
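The frame-skipping idea behind the procedure can be sketched as follows. This is a simplified illustration, not the paper's actual algorithm: frames are represented as flat lists of grayscale pixel values, redundancy is judged by mean absolute difference from the last kept frame, and the threshold value is hypothetical.

```python
# Simplified temporal differencing: keep only frames whose mean
# absolute pixel difference from the last kept frame exceeds a
# threshold; frames below it are treated as statistically redundant.
def select_frames(frames, threshold):
    if not frames:
        return []
    kept = [0]                      # always keep the first frame
    reference = frames[0]
    for idx in range(1, len(frames)):
        frame = frames[idx]
        diff = sum(abs(a - b) for a, b in zip(reference, frame)) / len(frame)
        if diff > threshold:        # significant change: keep and re-anchor
            kept.append(idx)
            reference = frame
    return kept
```

Only the kept frame indices then need full analysis, which is where the processing-time saving over a traditional frame-by-frame review comes from.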


2015 ◽  
Vol 50 (2) ◽  
pp. 413-432 ◽  
Author(s):  
Chu Van Cuong ◽  
Michael Russell ◽  
Sharon Brown ◽  
Peter Dart

2020 ◽  
Vol 21 (3) ◽  
pp. 181-190
Author(s):  
Jaroslav Frnda ◽  
Marek Durica ◽  
Mihail Savrasovs ◽  
Philippe Fournier-Viger ◽  
Jerry Chun-Wei Lin

This paper analyses the possibility of using a Kohonen map for real-time evaluation of end-user video quality perception. The Quality of Service (QoS) framework describes how network impairments (network utilization or packet loss) influence picture quality, but it does not precisely reflect the customer's subjectively perceived quality of the received video stream. Several objective video assessment metrics based on mathematical models attempt to simulate the human visual system, but each has its own evaluation scale. This makes it difficult for service providers to identify the critical point at which intervention in network behaviour is needed. Subjective tests (the Quality of Experience concept), on the other hand, are time-consuming and costly and, of course, cannot be performed in real time. We therefore propose a mapping function able to predict subjective end-user quality perception from the situation in the network, video stream features, and the results obtained from an objective video assessment method.
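A Kohonen map of the kind the paper builds on can be sketched in a few lines. The toy 1-D self-organizing map below clusters feature vectors (e.g. packet loss, utilization, objective-metric score) so that each input maps to a best-matching unit, which could then be labeled with a subjective quality class; the unit count, decay schedule, and feature choice here are all illustrative assumptions, not the paper's actual model.

```python
import math
import random

# Toy 1-D Kohonen self-organizing map: unsupervised clustering of
# feature vectors onto a line of units via competitive learning.
class Kohonen1D:
    def __init__(self, n_units, dim, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]

    def bmu(self, x):
        # index of the unit whose weight vector is nearest to x
        return min(range(len(self.w)),
                   key=lambda i: sum((wi - xi) ** 2
                                     for wi, xi in zip(self.w[i], x)))

    def train(self, data, epochs=50, lr0=0.5, radius0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                  # decaying rate
            radius = max(radius0 * (1 - t / epochs), 0.5)
            for x in data:
                b = self.bmu(x)
                for i, w in enumerate(self.w):
                    # Gaussian neighborhood: nearby units move too
                    h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
                    for j in range(len(w)):
                        w[j] += lr * h * (x[j] - w[j])
```

After training on two separable network-condition clusters, samples from each cluster activate different units, which is what makes a unit-to-quality-label mapping possible.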


2018 ◽  
Author(s):  
Jian Rong Tan ◽  
Susan Coulson ◽  
Melanie Keep

BACKGROUND Patients with facial nerve paralysis (FNP) experience challenges in accessing health care that could potentially be overcome by telemedicine. However, the reliability of telemedicine has yet to be established in this field. OBJECTIVE This study aimed to investigate the consistency between face-to-face and video assessments of patients with FNP by experienced clinicians. METHODS A repeated-measures design was used. A total of 7 clinicians assessed the FNP of 28 patients in a face-to-face clinic using standardized grading systems (the House-Brackmann, Sydney, and Sunnybrook facial grading systems). After 3 months, the same grading systems were used to assess facial palsy in video recordings of the same patients. RESULTS The House-Brackmann system in video assessment had excellent reliability and agreement (intraclass correlation coefficient [ICC]=0.780; principal component analysis [PCA]=87.5%), similar to face-to-face assessment (ICC=0.686; PCA=79.2%). Reliability of the Sydney system was good to excellent, with excellent agreement face-to-face (ICC=0.633 to 0.834; PCA=81.0%-95.2%). However, video assessment of the cervical branch and synkinesis had fair reliability and good agreement (ICC=0.437 to 0.597; PCA=71.4%), whereas that of other branches had good to excellent reliability and excellent agreement (ICC=0.625 to 0.862; PCA=85.7%-100.0%). Reliability of the Sunnybrook system was poor to fair for resting symmetry (ICC=0.195 to 0.498; PCA=91.3%-100.0%) and synkinesis (ICC=−0.037 to 0.637; PCA=69.6%-87.0%) but was good to excellent for voluntary movement (ICC=0.601 to 0.906; PCA=56.5%-91.3%) in face-to-face and video assessments. Bland-Altman plots indicated normal limits of agreement within ±1 between face-to-face and video-assessed scores only for the temporal and buccal branches of the Sydney system and for resting symmetry in the Sunnybrook system. 
CONCLUSIONS Video assessment of FNP with the House-Brackmann and Sunnybrook systems was as reliable as face-to-face assessment but showed insufficient agreement, especially in the assessment of synkinesis. Video assessment also does not account for the impact of the real-time interactions that occur during tele-assessment sessions.
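The Bland-Altman limits of agreement reported above are derived from paired face-to-face and video scores on the same patients. A minimal sketch, using the conventional bias ± 1.96 × SD formulation (an assumption; the abstract only reports whether the resulting limits fell within ±1 grading point):

```python
import math

# Bland-Altman agreement between two paired measurement methods:
# bias = mean difference, limits = bias +/- 1.96 * SD of differences.
def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If both limits fall within ±1 on the grading scale, the two methods rarely disagree by more than one grade, which is the criterion applied above.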


Circulation ◽  
2018 ◽  
Vol 138 (Suppl_2) ◽  
Author(s):  
Koichiro Shinozaki ◽  
Kota Saeki ◽  
Lee Jacobson ◽  
Julianne Falotico ◽  
Timmy Li ◽  
...  

Objective: Capillary refill time (CRT) measured at the bedside is widely promulgated as an acceptable method to identify patients in shock. However, the inter-observer reliability of visual CRT assessment is questionable. To investigate the variability that occurs when healthcare providers (HCPs) visually assess CRT, we conducted a study in the emergency department (ED) that measured CRT while simultaneously recording the change in the patients' fingertip color on video. Methods: Three HCPs performed manual CRT assessments at the bedside to classify patients as having either normal (≤2 seconds) or abnormal (>2 seconds) CRT. An attending ED physician, blinded to the HCP classification, quantitatively measured CRT using a chronograph (visual CRT). A video camera was mounted on top of the hand tool to obtain a digital recording of the change in color as the fingertip was compressed. The videos were then used to calculate CRT via image software analysis (image CRT). Additionally, nine HCPs, including the ED physician, reviewed the videos in a separate setting to visually assess CRT while blinded to any patient information (video assessment CRT). Results: Thirty patients were enrolled in the ED. The ED physician identified 10 patients with abnormal CRT (>2.0 seconds), whereas the HCPs identified only two. Mean visual CRT was 2.0±0.9 (range: 1.2-4.4) seconds; mean image CRT, 2.4±2.1 (range: 0.5-8.0) seconds; and mean video assessment CRT by the ED physician, 1.8±0.6 (range: 1.0-2.7) seconds. The correlation between visual CRT and image CRT was strong (r=0.648, p=0.002), whereas that between visual CRT and video assessment CRT was weak (r=0.312, p=0.18). Inter-observer reliability of video assessment CRT among HCPs was low (intraclass correlation coefficient: 0.15, 95% CI 0.05-0.33). Conclusions: Inter-observer reliability of visual CRT assessment is low. The reliability of a skilled physician increases in a real clinical situation, suggesting that a reliable CRT test may be possible only when other patient information is available at the bedside, such as the patient's background, distressed appearance, or cold peripheral temperature.


2019 ◽  
Vol 2 (3) ◽  
pp. 154-165
Author(s):  
Nick Draper ◽  
Rizanudin Jusary ◽  
Helen Marshall

2020 ◽  
Vol 5 (10) ◽  
pp. 1194
Author(s):  
Samuel A. Kelly ◽  
Kevin B. Schesing ◽  
Jennifer T. Thibodeau ◽  
Colby R. Ayers ◽  
Mark H. Drazner

2001 ◽  
Vol 10 (1-2) ◽  
pp. 39-45 ◽  
Author(s):  
Julie Thorburn ◽  
Maureen Dean ◽  
Therese Finn ◽  
Jennifer King ◽  
May Wilkinson

2010 ◽  
Vol 43 (7) ◽  
pp. 1380-1385 ◽  
Author(s):  
Deydre S. Teyhen ◽  
Tansy R. Christ ◽  
Elissa R. Ballas ◽  
Carrie W. Hoppes ◽  
Joshua D. Walters ◽  
...  
