A deep learning approach to automatic gingivitis screening based on classification and localization in RGB photos

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wen Li ◽  
Yuan Liang ◽  
Xuan Zhang ◽  
Chao Liu ◽  
Lei He ◽  
...  

Abstract Routine dental visits are the most common way to detect gingivitis. However, such a diagnosis can be unavailable in areas with limited medical resources and unaffordable for low-income populations. This study proposes to screen for gingivitis and its irritants, i.e., dental calculus and soft deposits, in oral photos with a novel Multi-Task Learning convolutional neural network (CNN) model. The study is meaningful for promoting public dental health, since it sheds light on a cost-effective and ubiquitous solution for the early detection of dental issues. With 625 patients included in this study, the classification Area Under the Curve (AUC) for detecting gingivitis, dental calculus, and soft deposits was 87.11%, 80.11%, and 78.57%, respectively. Meanwhile, according to our experiments, the model can also localize the three types of findings on oral photos with moderate accuracy, which enables it to explain the screening results. Compared with general-purpose CNNs, our model significantly outperformed them on both classification and localization tasks, which indicates the effectiveness of Multi-Task Learning for dental disease detection. In all, the study shows the potential of deep learning for enabling the screening of dental diseases among large populations.
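
The abstract above does not disclose the exact architecture, so the following is only a minimal sketch of the general idea: a shared CNN backbone with one binary classification head per finding (gingivitis, dental calculus, soft deposits). The backbone (MobileNetV2), input size, and losses are assumptions for illustration, not details from the paper.

```python
# Hypothetical multi-task classifier: shared CNN backbone, one binary head per finding.
# Backbone, input size, and loss setup are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multitask_model(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs)
    x = layers.GlobalAveragePooling2D()(x)
    # One sigmoid output per finding: the tasks share features but are scored independently.
    heads = {name: layers.Dense(1, activation="sigmoid", name=name)(x)
             for name in ("gingivitis", "calculus", "soft_deposit")}
    model = Model(inputs=inputs, outputs=heads)
    model.compile(optimizer="adam",
                  loss={name: "binary_crossentropy" for name in heads},
                  metrics={name: tf.keras.metrics.AUC(name="auc") for name in heads})
    return model

model = build_multitask_model()
model.summary()
```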

2020 ◽  
Author(s):  
Wen Li ◽  
Yuan Liang ◽  
Xuan Zhang ◽  
Chao Liu ◽  
Lei He ◽  
...  

Abstract Routine dental visits are the most common way to detect gingivitis. However, such a diagnosis can be unavailable in areas with limited medical resources and unaffordable for low-income populations. To increase availability, this study proposes to screen for gingivitis and its irritants, i.e., dental calculus and soft deposits, in oral photos with a novel Multi-Task Learning convolutional neural network (CNN) model. With 625 patients included in this study, the classification Area Under the Curve (AUC) for detecting gingivitis, dental calculus, and soft deposits was 87.11%, 80.11%, and 78.57%, respectively, while the box-wise localization sensitivity for gingivitis and dental calculus was 66.57% and 45.61%. Moreover, according to a consistency evaluation with three board-certified dentists, the model achieved a median score of 3.0/5.0 for reasoning about the locations of soft deposits without any spatial supervision. Compared with general-purpose CNNs, our model significantly outperformed them on both classification and localization tasks, which indicates the effectiveness of Multi-Task Learning for dental disease detection. The results show the potential of deep learning for enabling cost-effective screening of dental diseases among large populations.
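
The box-wise localization sensitivity quoted above is essentially a detection recall: the fraction of annotated boxes recovered by the model. A generic sketch of how such a figure can be computed follows; the IoU threshold and the boxes are placeholders, not values from the study.

```python
# Illustrative box-wise sensitivity (recall): the fraction of ground-truth boxes matched by
# at least one predicted box with IoU above a threshold. Threshold and boxes are placeholders.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def box_sensitivity(gt_boxes, pred_boxes, thr=0.5):
    hits = sum(any(iou(g, p) >= thr for p in pred_boxes) for g in gt_boxes)
    return hits / len(gt_boxes) if gt_boxes else 0.0

gt = [(10, 10, 50, 50), (60, 60, 100, 100)]
pred = [(12, 8, 48, 52)]
print(box_sensitivity(gt, pred))   # 0.5: one of two ground-truth boxes is recovered
```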


Author(s):  
K.M. Ibrahim Khalilullah ◽  
Shunsuke Ota ◽  
Toshiyuki Yasuda ◽  
Mitsuru Jindai

Purpose The purpose of this study is to develop a cost-effective autonomous wheelchair robot navigation method that assists the aging population. Design/methodology/approach Navigation in outdoor environments is still a challenging task for an autonomous mobile robot because of the highly unstructured and varied characteristics of outdoor environments. This study examines a complete vision-guided, real-time approach for robot navigation on urban roads based on drivable road area detection using deep learning. During navigation, the camera takes a snapshot of the road, and the captured image is converted into an illuminant-invariant image. Subsequently, a deep belief neural network takes this image as input and extracts additional discriminative abstract features using a general-purpose learning procedure for detection. During obstacle avoidance, the robot measures the distance to the obstacle using the estimated parameters of the calibrated camera and navigates while avoiding obstacles. Findings The developed method is implemented on a wheelchair robot and verified by navigating the robot on different types of urban curved roads. Navigation in real environments indicates that the wheelchair robot can move safely from one place to another. The navigation performance of the developed method and a comparison with laser range finder (LRF)-based methods were demonstrated through experiments. Originality/value This study develops a cost-effective navigation method using a single camera. Additionally, it exploits the advantages of deep learning techniques for robust classification of the drivable road area, and it performs better in terms of navigation than LRF-based methods in LRF-denied environments.
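
The abstract does not specify how the distance to obstacles is derived from the calibrated camera; one common approach, sketched below under a flat-ground assumption, recovers the distance from the image row where the obstacle meets the road. All parameter values are illustrative.

```python
# A common flat-ground distance estimate with a calibrated pinhole camera (an illustrative
# sketch, not necessarily the method used in the paper): the image row where an obstacle
# meets the ground determines its distance along the ground plane.
import math

def ground_distance(v_base, f_y, c_y, camera_height_m, pitch_rad=0.0):
    """Distance (m) to the point where an obstacle touches the ground.

    v_base          -- image row (pixels) of the obstacle's base
    f_y, c_y        -- vertical focal length and principal point (pixels) from calibration
    camera_height_m -- camera height above the ground plane
    pitch_rad       -- downward camera pitch; 0 means the optical axis is level
    """
    # Angle of the viewing ray below the horizontal.
    angle = math.atan2(v_base - c_y, f_y) + pitch_rad
    if angle <= 0:
        raise ValueError("Row is at or above the horizon; no ground intersection.")
    return camera_height_m / math.tan(angle)

# Example: obstacle base at row 420 of a 480-row image, camera mounted 0.9 m above ground.
print(round(ground_distance(v_base=420, f_y=700.0, c_y=240.0, camera_height_m=0.9), 2))
```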


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1352
Author(s):  
Darius Riziki Martin ◽  
Nicole Remaliah Sibuyi ◽  
Phumuzile Dube ◽  
Adewale Oluwaseun Fadaka ◽  
Ruben Cloete ◽  
...  

The transmission of Tuberculosis (TB) is very rapid, and the burden it places on health care systems is felt globally. The effective management and prevention of this disease require that it is detected early. Current TB diagnostic approaches, such as culture, sputum smear, tuberculin skin, and molecular tests, are time-consuming, and some are unaffordable for low-income countries. Rapid tests for disease biomarker detection are mostly based on immunological assays that use antibodies, which are costly to produce and have low sensitivity and stability. Aptamers can replace antibodies in these diagnostic tests, enabling new rapid tests that are more cost-effective; more stable at high temperatures, and therefore with a better shelf life; and free of batch-to-batch variation, so they bind a specific target more consistently with similar or higher specificity and selectivity and are therefore more reliable. Advancements in TB research, in particular the application of proteomics to identify TB-specific biomarkers, have led to the identification of a number of biomarker proteins that can be used to develop aptamer-based diagnostic assays able to screen individuals at the point of care (POC) more efficiently in resource-limited settings.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being used increasingly in the scientific community as a consequence of the high computational capacity of current systems and the growing amount of data available as a result of the digitalisation of society in general and the industrial world in particular. In addition, the emergence of the field of edge computing, which focuses on placing artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to deploy such models on a wide range of microcontrollers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive when compared to other commercial systems.
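
As background, a model is typically converted to a compact TensorFlow Lite flatbuffer (often int8-quantized) before being embedded on a microcontroller; the sketch below shows that standard conversion step with a placeholder Keras model, and is not taken from the paper's toolchain.

```python
# Illustrative conversion of a small Keras model to an int8 TensorFlow Lite flatbuffer,
# the usual artifact embedded (e.g., as a C array) in microcontroller firmware.
# The model and representative dataset here are placeholders, not the paper's.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_dataset():
    # Calibration samples used to pick the int8 quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)  # this flatbuffer is what gets embedded on the device
```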


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Abstract Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images with feature extraction for detection and diagnosis. The CT image viewed by radiologists is reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, different from conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of the convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared to 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image could further improve the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
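
The abstract does not state how the sinogram and CT-image streams were combined; a simple and common option is late fusion, i.e., averaging the two models' predicted probabilities and rescoring, as sketched below with synthetic placeholder scores.

```python
# Illustrative late fusion (not necessarily the paper's method): average the probabilities
# from a sinogram-domain model and an image-domain model, then score the combination.
# All arrays below are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=208)                       # one label per case
p_sinogram = np.clip(labels * 0.6 + rng.normal(0.2, 0.25, 208), 0, 1)
p_ct_image = np.clip(labels * 0.55 + rng.normal(0.2, 0.25, 208), 0, 1)

p_fused = 0.5 * (p_sinogram + p_ct_image)                   # simple score averaging
for name, p in [("sinogram", p_sinogram), ("CT image", p_ct_image), ("fused", p_fused)]:
    print(name, round(roc_auc_score(labels, p), 3))
```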


2021 ◽  
pp. 089719002110272
Author(s):  
Joanne Huang ◽  
Jeannie D. Chan ◽  
Thu Nguyen ◽  
Rupali Jain ◽  
Zahra Kassamali Escobar

Universal area-under-the-curve (AUC)-guided vancomycin therapeutic drug monitoring (TDM) is resource-intensive, cost-prohibitive, and presents a paradigm shift that leaves institutions with the quandary of defining the preferred and most practical method for TDM. We report a step-by-step quality improvement process using 4 plan-do-study-act (PDSA) cycles to provide a framework for the development of a hybrid model of trough- and AUC-based vancomycin monitoring. We found trough-based monitoring to be a pragmatic first-tier strategy when the anticipated duration of use is short. AUC-guided monitoring was most impactful and cost-effective when reserved for patients at high risk for nephrotoxicity. We encourage others to consider quality improvement tools to locally adopt AUC-based monitoring.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract Background We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) in a cohort of subjects from two hospitals in Lombardy, Italy. Methods For training and validation we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested the system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. Results At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72–0.86) (59/74), 0.81 specificity (29/36) (95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2. Conclusions This preliminary experience based on ten CNNs trained on a limited dataset shows the potential of deep learning for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
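
For orientation, the independent-test sensitivity and specificity above follow directly from the quoted counts (59/74 and 29/36); the sketch below recomputes the point estimates with an exact Clopper-Pearson interval, which may differ slightly from the paper's unreported CI method.

```python
# Sketch: recomputing the point estimates and an exact (Clopper-Pearson) 95% CI from the
# reported counts (59/74 sensitivity, 29/36 specificity). The abstract does not state which
# CI method the authors used, so the bounds here may not match the published ones exactly.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return k / n, lo, hi

for name, k, n in [("sensitivity", 59, 74), ("specificity", 29, 36)]:
    est, lo, hi = clopper_pearson(k, n)
    print(f"{name}: {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```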


2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve the survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model can predict tumor response to neoadjuvant therapy. Methods Patients with pancreatic cancer receiving neoadjuvant therapy prior to pancreatoduodenectomy were identified between November 2009 and January 2018. The College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to create a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a decrease in carbohydrate antigen 19-9 (CA19-9) of 10%. Accuracy was determined by the area under the curve. Results A total of 81 patients were included in the study. Patients were divided between PR (333 images) and NR (443 images). The pure model had an area under the curve (AUC) of .738 (P < .001), whereas the hybrid model had an AUC of .785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of .564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy for patients with pancreatic adenocarcinoma, and the model is improved by incorporating decreases in serum CA19-9. Further model development is needed before clinical application.
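
The abstract gives only a high-level description of the hybrid model; the sketch below shows one generic way to combine an imaging branch with a scalar CA19-9 feature in a single network. Layer sizes and the input resolution are assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' exact architecture) of a hybrid model: a small CNN
# branch for the axial CT slice plus a scalar input flagging a 10% CA19-9 decrease,
# concatenated before the final prediction. All layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(128, 128, 1), name="ct_slice")
x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

ca199_in = layers.Input(shape=(1,), name="ca19_9_decrease_flag")  # 1 if CA19-9 fell by 10%
merged = layers.Concatenate()([x, ca199_in])
merged = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="pathologic_response")(merged)

hybrid = Model(inputs=[image_in, ca199_in], outputs=output)
hybrid.compile(optimizer="adam", loss="binary_crossentropy",
               metrics=[tf.keras.metrics.AUC(name="auc")])
hybrid.summary()
```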


2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach to perform segmentation in a 2D, slice-by-slice manner, based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance within regions of interest. We start from two popular segmentation networks: U-Net, the preferred model for medical segmentation, and DeepLabV3+, a general-purpose model. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits by testing our approach on two different datasets, the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach provides an opportunity to serve as a building block of a computer-aided diagnostic system in a clinical setting.
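
For reference, the Dice Similarity Coefficient reported above and a probability-averaging ensemble of two segmentation models can be computed as follows; this is a generic sketch with synthetic masks, not the authors' code.

```python
# Illustrative helpers (not the authors' code): the Dice Similarity Coefficient used to
# score segmentations, and a simple ensemble that averages the per-pixel probabilities of
# two models (e.g., a U-Net and a DeepLabV3+) before thresholding.
import numpy as np

def dice(pred_mask, true_mask, eps=1e-7):
    pred_mask, true_mask = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def ensemble(prob_a, prob_b, threshold=0.5):
    return (0.5 * (prob_a + prob_b)) >= threshold

# Placeholder probability maps standing in for the two networks' outputs.
rng = np.random.default_rng(2)
truth = rng.random((64, 64)) > 0.5
prob_unet = np.clip(truth + rng.normal(0, 0.4, truth.shape), 0, 1)
prob_deeplab = np.clip(truth + rng.normal(0, 0.4, truth.shape), 0, 1)

print("ensemble Dice:", round(dice(ensemble(prob_unet, prob_deeplab), truth), 3))
```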


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yuanyuan Xu ◽  
Genke Yang ◽  
Jiliang Luo ◽  
Jianan He

Electronic component recognition plays an important role in industrial production, electronic manufacturing, and testing. In order to address the low recognition recall and accuracy of traditional image recognition technologies (such as principal component analysis (PCA) and support vector machines (SVM)), this paper evaluates multiple deep learning networks, optimizes the SqueezeNet network, and presents an electronic component recognition algorithm based on the resulting Faster SqueezeNet network. This structure can reduce the size of network parameters and computational complexity without deteriorating the performance of the network. The results show that the proposed algorithm performs well: the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) for capacitors and inductors reaches 1.0, and when the false positive rate (FPR) is at or below the 10^-6 level, the true positive rate (TPR) is greater than or equal to 0.99. Its inference time is about 2.67 ms, reaching the industrial application level in terms of time consumption and performance.
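
The operating point quoted above (TPR at a bounded FPR) can be read directly off the ROC curve; the sketch below shows the generic computation with synthetic scores and a looser FPR bound, since resolving the 10^-6 level from the abstract would require far more samples.

```python
# Illustrative check of an operating point on the ROC curve: the largest TPR whose FPR
# does not exceed a target bound. Scores below are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, max_fpr):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    ok = fpr <= max_fpr
    return tpr[ok].max() if ok.any() else 0.0

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=5000)
y_score = np.clip(y_true + rng.normal(0, 0.3, size=5000), 0, 1)   # placeholder classifier scores

print("TPR at FPR <= 1e-3:", round(tpr_at_fpr(y_true, y_score, 1e-3), 3))
```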

