Deep Learning for Automatic Image Segmentation in Stomatology and Its Clinical Application

2021 ◽  
Vol 3 ◽  
Author(s):  
Dan Luo ◽  
Wei Zeng ◽  
Jinlong Chen ◽  
Wei Tang

Deep learning has become an active research topic in the field of medical image analysis. In particular, for the automatic segmentation of stomatological images, great advances have been made in segmentation performance. In this paper, we systematically reviewed the recent literature on deep-learning-based segmentation methods for stomatological images and their clinical applications. We categorized them into different tasks and analyzed their advantages and disadvantages. The main categories we explored were the data source, backbone network, and task formulation. We categorized data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguished methods based on convolutional neural networks from those based on transformers. We divided task formulations into semantic segmentation and instance segmentation. Toward the end of the paper, we discuss the remaining challenges and provide several directions for further research on the automatic segmentation of stomatological images.
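
To make the review's task-formulation distinction concrete, the following minimal sketch (ours, not from the reviewed work) contrasts the outputs of semantic and instance segmentation; the array shapes and the "tooth" class label are illustrative assumptions.

```python
import numpy as np

# Semantic segmentation: one class label per pixel (e.g., 0 = background, 1 = tooth).
semantic_mask = np.zeros((256, 256), dtype=np.int64)
semantic_mask[50:100, 60:90] = 1    # all teeth share the same class id
semantic_mask[50:100, 120:150] = 1

# Instance segmentation: a separate binary mask (and class) per object,
# so individual teeth can be told apart.
instance_masks = np.zeros((2, 256, 256), dtype=bool)
instance_masks[0, 50:100, 60:90] = True     # tooth instance #1
instance_masks[1, 50:100, 120:150] = True   # tooth instance #2
instance_classes = np.array([1, 1])         # both instances belong to class "tooth"

print(semantic_mask.sum(), instance_masks.sum())
```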

2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract: Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed convolutional-neural-network-based segmentation models for the pancreas. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore performs deep-learning-based semantic segmentation on 1,006 participants and evaluates the automatic segmentation performance for the pancreas with four individual three-dimensional segmentation networks. We performed internal validation with the 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. For internal validation, the best-performing of the four deep learning networks obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively. On the external dataset, the network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas on abdominal computed tomography.
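
As a point of reference for the reported figures, here is a minimal sketch of how the three metrics (precision, recall, and the Dice similarity coefficient) can be computed on binary 3D masks; the synthetic volumes and their shapes are our own assumptions, not details from the study.

```python
import numpy as np

def precision_recall_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Compute precision, recall, and Dice similarity coefficient for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return precision, recall, dice

# Toy example on a synthetic 3D volume standing in for a CT label map.
gt = np.zeros((64, 128, 128), dtype=bool)
gt[20:40, 40:80, 40:80] = True
pred = np.zeros_like(gt)
pred[22:40, 42:80, 40:78] = True
print(precision_recall_dice(pred, gt))
```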


2020 ◽  
Vol 4 (3) ◽  
pp. 568-575
Author(s):  
Yamina Azzi ◽  
Abdelouahab Moussaoui ◽  
Mohand-Tahar Kechadi

Semantic segmentation is one of the most challenging tasks in computer vision, especially in medical image analysis, where it helps to locate and identify pathological structures automatically. It is an active research area in which new techniques are continuously being proposed; recently, deep learning has been used intensively to improve performance in medical image segmentation. For this reason, we present in this non-systematic review a preliminary description of semantic segmentation with deep learning and the most important steps in building a model that deals with this problem.
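
As an illustration of the kind of model such a review covers, here is a minimal encoder-decoder sketch for semantic segmentation; the layer sizes, two-class output, and training loss are our own assumptions rather than anything prescribed by the review.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        # Encoder: downsample while increasing feature channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # Decoder: upsample back to input resolution and predict per-pixel classes.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(1, 1, 128, 128)            # one grayscale medical image
logits = TinySegNet()(x)                   # (1, num_classes, 128, 128)
loss = nn.CrossEntropyLoss()(logits, torch.zeros(1, 128, 128, dtype=torch.long))
print(logits.shape, loss.item())
```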


2021 ◽  
Vol 26 (1) ◽  
pp. 200-215
Author(s):  
Muhammad Alam ◽  
Jian-Feng Wang ◽  
Cong Guangpei ◽  
LV Yunrong ◽  
Yuanfang Chen

Abstract: In recent years, the success of deep learning in natural scene image processing has boosted its application to the analysis of remote sensing images. In this paper, we apply convolutional neural networks (CNNs) to the semantic segmentation of remote sensing images. We adapt two encoder-decoder CNN structures, SegNet with index pooling and U-Net, to make them suitable for multi-target semantic segmentation of remote sensing images. The results show that the two models have their own advantages and disadvantages in the segmentation of different objects. In addition, we propose an integrated algorithm that combines the two models. Experimental results show that the presented integrated algorithm can exploit the advantages of both models for multi-target segmentation and achieves better segmentation than either model alone.
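
The sketch below illustrates the two ingredients named above, under our own assumptions about layer sizes: SegNet-style index pooling (max-pooling indices recorded in the encoder and reused for unpooling in the decoder), and one simple way two segmentation models can be integrated (per-pixel probability averaging, which is a stand-in and not necessarily the authors' integration rule).

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 64, 64)                       # an encoder feature map
pooled, indices = F.max_pool2d(x, kernel_size=2, stride=2, return_indices=True)
# ... further encoder/decoder layers would go here ...
unpooled = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)
print(pooled.shape, unpooled.shape)                 # (1, 8, 32, 32) (1, 8, 64, 64)

def integrate(logits_segnet: torch.Tensor, logits_unet: torch.Tensor) -> torch.Tensor:
    """Fuse two models' per-pixel class probabilities by simple averaging."""
    probs = (logits_segnet.softmax(dim=1) + logits_unet.softmax(dim=1)) / 2
    return probs.argmax(dim=1)                      # fused per-pixel class map
```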


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Hyunkwang Lee ◽  
Chao Huang ◽  
Sehyo Yune ◽  
Shahein H. Tajmir ◽  
Myeongchan Kim ◽  
...  

Abstract: Recent advancements in deep learning for automated image processing and classification have accelerated many new applications for medical image analysis. However, most deep learning algorithms have been developed using reconstructed, human-interpretable medical images. While image reconstruction from raw sensor data is required for the creation of medical images, the reconstruction process uses only a partial representation of all the data acquired. Here, we report the development of a system to directly process raw computed tomography (CT) data in sinogram space, bypassing the intermediary step of image reconstruction. Two classification tasks were evaluated for the feasibility of sinogram-space machine learning: body region identification and intracranial hemorrhage (ICH) detection. Our proposed SinoNet, a convolutional neural network optimized for interpreting sinograms, performed favorably compared with conventional reconstructed-image-space systems for both tasks, regardless of the scanning geometry in terms of projections or detectors. Further, SinoNet performed significantly better than conventional networks operating in image space when using sparsely sampled sinograms. As a result, sinogram-space algorithms could be used in field settings for triage (presence of ICH), especially where a low radiation dose is desired. These findings also demonstrate another strength of deep learning: it can analyze and interpret sinograms that are virtually impossible for human experts to read.
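
As a hedged illustration of what "working in sinogram space" means (this is not SinoNet itself), the sketch below generates a sinogram with a Radon transform and feeds it to a small CNN classifier; the 60-view sparse sampling, phantom input, and network layout are our own assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon
from skimage.data import shepp_logan_phantom

image = shepp_logan_phantom()                        # stand-in for a CT slice
theta = np.linspace(0.0, 180.0, 60, endpoint=False)  # sparsely sampled view angles
sinogram = radon(image, theta=theta)                 # (detector positions, projections)

class SinogramClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):        # e.g., ICH vs. no ICH
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

x = torch.from_numpy(sinogram).float()[None, None]   # (1, 1, detectors, views)
print(SinogramClassifier()(x).shape)                  # (1, 2)
```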


Author(s):  
Vitoantonio Bevilacqua ◽  
Antonio Brunetti ◽  
Giacomo Donato Cascarano ◽  
Andrea Guerriero ◽  
Francesco Pesce ◽  
...  

Abstract: Background: The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the examination are affected by autosomal dominant polycystic kidney disease (ADPKD). Several works dealing with the segmentation of computed tomography images from pathological subjects have been proposed, but they involve an invasive examination or require user interaction to perform the segmentation. In this work, we propose a fully automated approach for the segmentation of magnetic resonance images, both reducing the invasiveness of the acquisition and removing the need for user interaction. Methods: Two approaches based on deep learning architectures using convolutional neural networks (CNNs) are proposed for the semantic segmentation of images, without the need to extract any hand-crafted features. In detail, the first approach performs automatic segmentation of the images without any pre-processing of the input. The second approach instead follows a two-step classification strategy: a first CNN automatically detects regions of interest (ROIs), and a subsequent classifier performs semantic segmentation on the extracted ROIs. Results: Even though ROI detection produces a high overall number of false positives, the subsequent semantic segmentation on the extracted ROIs achieves high mean accuracy. However, segmenting the entire images input to the network remains the most accurate and reliable approach, outperforming the two-step strategy. Conclusion: The results show that both investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an accuracy higher than 85%. Both methodologies also show performance comparable and consistent with other approaches in the literature working on images from different sources, while reducing both the invasiveness of the analysis and the user interaction needed to perform the segmentation task.
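
The two-step strategy can be sketched as follows; the tiny placeholder networks, the single-box ROI format, and the crop handling are hypothetical illustrations, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RoiDetector(nn.Module):
    """Predicts one bounding box (x1, y1, x2, y2), normalised to [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class RoiSegmenter(nn.Module):
    """Per-pixel kidney/background prediction on the cropped ROI."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 2, 3, padding=1)
    def forward(self, crop):
        return self.net(crop)

image = torch.randn(1, 1, 256, 256)                   # one MR slice
box = RoiDetector()(image)[0] * 256                   # scale to pixel coordinates
x1, y1, x2, y2 = [int(v) for v in box.tolist()]
x1, x2 = sorted((x1, x2))
y1, y2 = sorted((y1, y2))
crop = image[:, :, y1:max(y2, y1 + 1), x1:max(x2, x1 + 1)]   # guard against empty crops
mask_logits = RoiSegmenter()(crop)                    # (1, 2, crop_h, crop_w)
print(crop.shape, mask_logits.shape)
```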


2019 ◽  
Vol 8 (9) ◽  
pp. 1446 ◽  
Author(s):  
Arsalan ◽  
Owais ◽  
Mahmood ◽  
Cho ◽  
Park

Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, which is the leading cause of retinal detachment, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence and deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increases network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart and Health Study in England (CHASE-DB1), and structured analysis of the retina (STARE). Experimental results show that Vess-Net achieved superior performance on all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for STARE.
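
Purely as an illustration of the dual-residual-stream idea (this is not the authors' Vess-Net architecture), the block below runs two parallel convolutional streams, each with a residual skip connection, and merges them; all channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class DualResidualStreamBlock(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.stream_a = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )
        self.stream_b = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a = x + self.stream_a(x)      # residual stream with 3x3 spatial context
        b = x + self.stream_b(x)      # residual stream with 1x1 channel refinement
        return a + b                  # merge the two streams

x = torch.randn(1, 16, 64, 64)        # feature map from a fundus image
print(DualResidualStreamBlock()(x).shape)
```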


2020 ◽  
Vol 20 (21) ◽  
pp. 1858-1867
Author(s):  
Xian Tan ◽  
Yang Yu ◽  
Kaiwen Duan ◽  
Jingbo Zhang ◽  
Pingping Sun ◽  
...  

Anticancer drug screening can accelerate drug discovery to save the lives of cancer patients, but cancer heterogeneity makes this screening challenging. The prediction of anticancer drug sensitivity is useful for anticancer drug development and the identification of biomarkers of drug sensitivity. Deep learning, as a branch of machine learning, is an important aspect of in silico research, and its outstanding computational performance means that it has been used for many biomedical purposes, such as medical image interpretation, biological sequence analysis, and drug discovery. Several studies have predicted anticancer drug sensitivity with deep learning algorithms, and the field has made progress in model performance and multi-omics data integration. However, deep learning is limited by the number of studies performed and the data sources available, so it is not yet a mature pre-clinical approach for the anticancer drug screening process. Improving the performance of deep learning models is a pressing issue for researchers. In this review, we introduce research on anticancer drug sensitivity prediction and the use of deep learning in this area. To provide a reference for future research, we also review common data sources and machine learning methods. Lastly, we discuss the advantages and disadvantages of deep learning, as well as its limitations and future perspectives.
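
For orientation only, here is a generic sketch (our own, not from any study cited in this review) of the kind of model used for drug sensitivity prediction: omics features and a drug descriptor are encoded separately, concatenated, and regressed to a sensitivity value such as ln(IC50). All dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class DrugSensitivityRegressor(nn.Module):
    def __init__(self, omics_dim: int = 1000, drug_dim: int = 256):
        super().__init__()
        self.omics_encoder = nn.Sequential(nn.Linear(omics_dim, 128), nn.ReLU())
        self.drug_encoder = nn.Sequential(nn.Linear(drug_dim, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, omics, drug):
        # Encode each modality, fuse by concatenation, then regress sensitivity.
        z = torch.cat([self.omics_encoder(omics), self.drug_encoder(drug)], dim=1)
        return self.head(z).squeeze(1)    # predicted sensitivity, e.g., ln(IC50)

omics = torch.randn(8, 1000)   # e.g., gene-expression profile per cell line
drug = torch.randn(8, 256)     # e.g., molecular fingerprint of the compound
print(DrugSensitivityRegressor()(omics, drug).shape)   # torch.Size([8])
```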

