Evaluating Classification Consistency of Oral Lesion Images for Use in an Image Classification Teaching Tool

2021 ◽  
Vol 9 (8) ◽  
pp. 94
Author(s):  
Yuxin Shen ◽  
Minn N. Yoon ◽  
Silvia Ortiz ◽  
Reid Friesen ◽  
Hollis Lai

A web-based image classification tool (DiLearn) was developed to facilitate active learning in the oral health profession. Students engage with oral lesion images using swipe gestures to classify each image into pre-determined categories (e.g., left for refer and right for no intervention). To assemble the training modules and to provide feedback to students, DiLearn requires each oral lesion image to be classified according to the various features displayed in the image. The collection of accurate meta-information is a crucial step for enabling the self-directed active learning approach taken in DiLearn. The purpose of this study is to evaluate the classification consistency of features in oral lesion images by experts and students for use in the learning tool. Twenty oral lesion images from DiLearn’s image bank were classified by three oral lesion experts and two senior dental hygiene students using the same rubric containing eight features. Classification agreement among and between raters was evaluated using Fleiss’ and Cohen’s Kappa. Classification agreement among the three experts ranged from identical (Fleiss’ Kappa = 1) for “clinical action” to slight agreement for “border regularity” (Fleiss’ Kappa = 0.136), with the majority of categories having fair to moderate agreement (Fleiss’ Kappa = 0.332–0.545). Inclusion of the two student raters with the experts yielded fair to moderate overall classification agreement (Fleiss’ Kappa = 0.224–0.554), with the exception of “morphology”. The feature of clinical action could be accurately classified, while other anatomical features indirectly related to diagnosis had lower classification consistency. The findings suggest that one oral lesion expert or two student raters can provide fairly consistent meta-information for selected categories of features implicated in the creation of image classification tasks in DiLearn.
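As an illustration of the agreement statistics reported above, the short Python sketch below computes Fleiss’ Kappa for three raters and Cohen’s Kappa for a pair of raters. The rating matrix and the binary category coding are invented placeholders for illustration, not data from the study.

```python
# Minimal sketch of the agreement statistics used in the study: Fleiss' kappa
# for more than two raters and Cohen's kappa for a pair of raters.
# The ratings below are made-up placeholder data, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = 20 images, columns = 3 raters; values = category codes for one
# feature (e.g., 0 = "no intervention", 1 = "refer").
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(20, 3))          # placeholder ratings

# Fleiss' kappa: convert the subject-by-rater codes into a subject-by-category
# count table, then compute chance-corrected agreement among all raters.
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))

# Cohen's kappa: pairwise agreement between two raters (e.g., two students).
print("Cohen's kappa:", cohen_kappa_score(ratings[:, 0], ratings[:, 1]))
```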

2020 ◽  
Author(s):  
Siddhesh Bhojane ◽  
Krishna Shrestha ◽  
Sanghmitra Bharadwaj ◽  
Ritul Yadav ◽  
Fenil Ribinwala ◽  
...  

PLoS ONE ◽  
2018 ◽  
Vol 13 (1) ◽  
pp. e0188996 ◽  
Author(s):  
Muhammad Ahmad ◽  
Stanislav Protasov ◽  
Adil Mehmood Khan ◽  
Rasheed Hussain ◽  
Asad Masood Khattak ◽  
...  

2021 ◽  
Author(s):  
Benjamin Kellenberger ◽  
Devis Tuia ◽  
Dan Morris

Ecological research such as wildlife censuses increasingly relies on data at the terabyte scale. For example, modern camera trap datasets contain millions of images that would require prohibitive amounts of manual labour to annotate with species, bounding boxes, and the like. Machine learning, especially deep learning [3], could greatly accelerate this task through automated predictions, but typically requires extensive coding and expert knowledge.

In this abstract we present AIDE, the Annotation Interface for Data-driven Ecology [2]. First, AIDE is a web-based annotation suite for image labelling with support for concurrent access and scalability, up to the cloud. Second, it tightly integrates deep learning models into the annotation process through active learning [7], where models learn from user-provided labels and in turn select the most relevant images for review from the large pool of unlabelled ones (Fig. 1). The result is a system in which users only need to label what is required, which saves time and reduces errors due to fatigue.

Fig. 1: AIDE offers concurrent web-based image labelling and uses annotations and deep learning models in an active learning loop.

AIDE includes a comprehensive set of built-in models, such as ResNet [1] for image classification, Faster R-CNN [5] and RetinaNet [4] for object detection, and U-Net [6] for semantic segmentation. All models can be customised and used without writing a single line of code. Furthermore, AIDE accepts any third-party model with minimal implementation requirements. To complete the package, AIDE offers user annotation and model prediction evaluation, access control, customisable model training, and more, all through the web browser.

AIDE is fully open source and available at https://github.com/microsoft/aerial_wildlife_detection.
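The active learning loop described above can be illustrated with a minimal uncertainty-sampling sketch. The example below is not code from the AIDE repository; it uses synthetic feature vectors and a scikit-learn classifier to show how a model retrained on the labels collected so far can propose the least confident unlabelled items for the next annotation round.

```python
# Generic uncertainty-sampling loop, in the spirit of the workflow described
# above: retrain on labelled items, query the most uncertain unlabelled ones.
# Synthetic data and a linear model stand in for images and a deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 64))            # stand-in for image features
y_true = (X[:, 0] > 0).astype(int)         # hidden "ground truth" labels

labelled = list(rng.choice(len(X), size=20, replace=False))   # seed labels
unlabelled = [i for i in range(len(X)) if i not in labelled]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labelled], y_true[labelled])
    # Least-confidence sampling: ask the annotator about the items whose
    # top-class probability is lowest.
    top_proba = model.predict_proba(X[unlabelled]).max(axis=1)
    query = [unlabelled[i] for i in np.argsort(top_proba)[:10]]
    # In a real system this step is the annotation interface; here we simply
    # reveal the hidden labels for the queried items.
    labelled.extend(query)
    unlabelled = [i for i in unlabelled if i not in query]
    print(f"round {round_}: {len(labelled)} labels, "
          f"acc={model.score(X[unlabelled], y_true[unlabelled]):.3f}")
```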


2009 ◽  
pp. 1334-1349
Author(s):  
Elizabeth Avery Gomez ◽  
Dezhi Wu ◽  
Katia Passerini ◽  
Michael Bieber

Team-based learning is an active learning instructional strategy used in the traditional face-to-face classroom. Web-based computer-mediated communication (CMC) tools complement the face-to-face classroom and enable active learning between face-to-face class times. This article presents the results from pilot assessments of computer-supported team-based learning. The authors utilized pedagogical approaches grounded in collaborative learning techniques, such as team-based learning, and extended these techniques to a Web-based environment through the use of computer-mediated communication tools (discussion Web-boards). This approach was examined through field studies over the course of two semesters at a US public technological university. The findings indicate that perceptions of the team learning experience, such as perceived motivation, enjoyment, and learning, are higher in such a Web-based CMC environment than in traditional face-to-face courses. In addition, our results show that perceived team members’ contributions impact individual learning experiences. Overall, Web-based CMC tools are found to effectively facilitate team interactions and support higher-level learning.


Author(s):  
Qiusha Zhu ◽  
Lin Lin ◽  
Mei-Ling Shyu ◽  
Dianting Liu

Traditional image classification relies on text information such as tags, which require substantial human effort to annotate. Therefore, recent work focuses more on training classifiers directly on visual features extracted from image content. The performance of content-based classification is improving steadily, but it is still far below users’ expectations. Moreover, in a web environment, the HTML text surrounding an image naturally serves as context information and is complementary to content information. This paper proposes a novel two-stage image classification framework that aims to improve the performance of content-based image classification by utilizing the context information of web-based images. A new TF*IDF weighting scheme is proposed to extract discriminant textual features from the surrounding HTML text. Both content-based and context-based classifiers are built by applying multiple correspondence analysis (MCA). Experiments on web-based images from the Microsoft Research Asia Multimedia (MSRA-MM) dataset show that the proposed framework achieves promising results.
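The context branch of such a framework can be sketched with standard components. The example below extracts TF-IDF features from placeholder surrounding-text snippets and trains a simple text classifier on them; it does not reproduce the paper’s discriminant TF*IDF weighting or the MCA-based classifiers, for which ordinary scikit-learn building blocks stand in here.

```python
# Sketch of a context-based classifier over the text surrounding web images.
# Standard TF-IDF and Naive Bayes are used as stand-ins for the paper's
# TF*IDF variant and MCA classifiers; the snippets and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Placeholder surrounding-text snippets and image labels.
surrounding_texts = [
    "red sports car parked on the street",
    "close-up of a tabby cat sleeping on a sofa",
    "vintage car show with classic automobiles",
    "kitten playing with a ball of yarn",
]
labels = ["car", "cat", "car", "cat"]

context_clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                            MultinomialNB())
context_clf.fit(surrounding_texts, labels)

# In a two-stage framework, this context score would be combined with the
# score of a content-based (visual-feature) classifier at prediction time.
print(context_clf.predict(["old car in a garage"]))
```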

