Generation of Augmented Capillary Network Optical Coherence Tomography Image Data of Human Skin for Deep Learning and Capillary Segmentation

Diagnostics, 2021, Vol. 11(4), pp. 685
Author(s): Bitewulign Kassa Mekonnen, Tung-Han Hsieh, Dian-Fu Tsai, Shien-Kuei Liaw, Fu-Liang Yang, ...

The segmentation of capillaries in human skin in full-field optical coherence tomography (FF-OCT) images plays a vital role in clinical applications. Recent advances in deep learning have demonstrated state-of-the-art accuracy for automatic medical image segmentation. However, a very large amount of annotated data is required to train deep learning models successfully, which demands a great deal of effort and is costly. To overcome this fundamental problem, an automatic simulation algorithm is presented that generates OCT-like skin image data with augmented capillary networks (ACNs) in a three-dimensional volume (referred to as ACN data). The algorithm simultaneously produces augmented FF-OCT images and the corresponding ground truth images of the capillary structures: potential functions are introduced to guide the capillary pathways, and a two-dimensional Gaussian function is used to mimic the brightness reflected by capillary blood flow in real OCT data. To assess the quality of the ACN data, a U-Net deep learning model was trained on the ACN data and then tested on real in vivo FF-OCT human skin images for capillary segmentation. With properly designed binarization of the predicted image frames, testing on real FF-OCT data achieved high scores on performance metrics with respect to the ground truth. This demonstrates that the proposed algorithm can generate ACN data that imitates real FF-OCT skin images of capillary networks for use in research and deep learning, and that the resulting capillary segmentation model could be of wide benefit in clinical and biomedical applications.
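The brightness model described above can be illustrated with a minimal sketch: a two-dimensional Gaussian spot is accumulated along a toy capillary path, yielding a simulated frame and its ground-truth mask. The function names, frame size, and the straight-line path below are illustrative assumptions and do not reproduce the authors' potential-function pathway generation.

```python
import numpy as np

def gaussian_spot(shape, center, sigma, amplitude=1.0):
    """Render a 2D Gaussian intensity spot, a rough stand-in for the bright
    speckle that capillary blood flow produces in FF-OCT frames."""
    y, x = np.indices(shape)
    cy, cx = center
    return amplitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def render_capillary_frame(shape, path_points, sigma=2.0):
    """Accumulate Gaussian spots along a (hypothetical) capillary path and
    return both the simulated frame and its binary ground-truth mask."""
    frame = np.zeros(shape, dtype=np.float32)
    mask = np.zeros(shape, dtype=np.uint8)
    for cy, cx in path_points:
        frame += gaussian_spot(shape, (cy, cx), sigma)
        mask[int(round(cy)), int(round(cx))] = 1
    return np.clip(frame, 0.0, 1.0), mask  # keep intensities in [0, 1]

# Toy usage: a short, roughly diagonal capillary segment in a 64x64 frame.
pts = [(20 + t, 15 + 0.8 * t) for t in range(25)]
frame, mask = render_capillary_frame((64, 64), pts)
```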

2021, Vol. 4(1)
Author(s): Peter M. Maloca, Philipp L. Müller, Aaron Y. Lee, Adnan Tufail, Konstantinos Balaskas, ...

Machine learning has greatly facilitated the analysis of medical data, yet the internal operations of the models usually remain opaque. To better understand these hidden processes, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among the graders and the machine learning algorithm, and a smart data visualization ('neural recording'). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% variability among the human graders themselves. The ambiguity in the ground truth had a noteworthy impact on the machine learning results, which could be visualized. The convolutional neural network balanced between graders, and its predictions could be adjusted depending on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, potentially leading to optimized applications.
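As a rough illustration of the Hamming-distance component described above, the sketch below computes pairwise pixel-disagreement fractions between binary segmentation masks from several graders and a model; the mask sizes, names, and percentage convention are assumptions, not the published T-REX implementation.

```python
import numpy as np
from itertools import combinations

def hamming_distance(mask_a, mask_b):
    """Fraction of pixels on which two binary segmentation masks disagree."""
    assert mask_a.shape == mask_b.shape
    return float(np.mean(mask_a != mask_b))

def pairwise_variability(segmentations):
    """Average pairwise Hamming distance (in %) over a dict of named masks,
    e.g. several human graders plus the network's prediction."""
    pairs = combinations(segmentations.items(), 2)
    dists = {(na, nb): 100.0 * hamming_distance(a, b) for (na, a), (nb, b) in pairs}
    return dists, float(np.mean(list(dists.values())))

# Toy usage with random 128x128 masks standing in for grader annotations.
rng = np.random.default_rng(0)
masks = {name: rng.integers(0, 2, (128, 128)) for name in ("grader1", "grader2", "model")}
per_pair, mean_percent = pairwise_variability(masks)
```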


2021
Author(s): Adrit Rao, Harvey A. Fishman

Identifying diseases in optical coherence tomography (OCT) images with deep learning models is emerging as a powerful technique to enhance clinical diagnosis. Identifying macular diseases at an early stage and preventing misdiagnosis are crucial. Current methods for OCT image analysis have not yet been integrated into an accessible form factor that ophthalmologists can use in real-world settings, nor do they employ robust multiple-metric feedback. This paper proposes a highly accurate smartphone-based deep learning system, OCTAI, that allows a user to take an OCT picture and receive real-time feedback through on-device inference. OCTAI analyzes the input OCT image in three ways: (1) full-image analysis, (2) quadrant-based analysis, and (3) disease-detection-based analysis. With these three analysis methods, along with an ophthalmologist's interpretation, a robust diagnosis can potentially be made. The ultimate goal of OCTAI is to assist ophthalmologists by providing a digital second opinion, enabling them to cross-check their diagnosis before making a decision based purely on manual analysis of OCT images. OCTAI has the potential to improve ophthalmologists' diagnoses and may reduce misdiagnosis rates, leading to faster treatment of diseases.
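The quadrant-based analysis described above can be sketched as follows; the image size, the dummy classifier, and the function names are assumptions, since the abstract does not detail OCTAI's on-device pipeline.

```python
import numpy as np

def split_quadrants(image):
    """Split an OCT image (H x W [x C]) into its four quadrants."""
    h, w = image.shape[:2]
    return {
        "top_left": image[: h // 2, : w // 2],
        "top_right": image[: h // 2, w // 2 :],
        "bottom_left": image[h // 2 :, : w // 2],
        "bottom_right": image[h // 2 :, w // 2 :],
    }

def quadrant_analysis(image, classify):
    """Run a classifier on the full image and on each quadrant, mirroring the
    'full image' plus 'quadrant based' analyses described for OCTAI."""
    results = {"full_image": classify(image)}
    results.update({name: classify(q) for name, q in split_quadrants(image).items()})
    return results

# Toy usage with a dummy classifier that returns a mean-intensity "score".
def dummy_classify(img):
    return {"score": float(np.mean(img))}

report = quadrant_analysis(np.random.rand(496, 512), dummy_classify)
```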


EP Europace, 2020, Vol. 22 (Supplement_1)
Author(s): D Liang, A Haeberlin

Background: The immediate effect of radiofrequency catheter ablation (RFA) on the tissue is not directly visualized. Optical coherence tomography (OCT) is an imaging technique that uses light to capture histology-like images with a penetration depth of 1-3 mm in cardiac tissue. Ablation lesions show two specific features in OCT images: the lateral disappearance of birefringence artifacts and a sudden decrease of signal at the bottom (Figure, panels A and D). These features can not only be used to recognize ablation lesions in OCT images by eye, but can also be used to train a machine learning model for automatic lesion segmentation. In recent years, deep learning methods such as convolutional neural networks have been used in medical image analysis and have greatly increased the accuracy of image segmentation. We hypothesized that a convolutional neural network such as U-Net can locate and segment ablation lesions in OCT images.

Purpose: To investigate whether a deep learning method, i.e. a convolutional neural network optimized for biomedical image processing, could be used to segment ablation lesions in OCT images automatically.

Methods: Eight OCT datasets with ablation lesions were used to train the convolutional neural network (U-Net model). After training, the model was validated on two new OCT datasets. Dice coefficients were calculated to evaluate the spatial overlap between the predictions and the ground-truth segmentations, which were manually segmented by the researchers (the coefficient ranges from 0 to 1, where 1 means perfect segmentation; a minimal sketch of this metric follows the abstract).

Results: The U-Net model predicted the central parts of the lesions automatically and accurately (Dice coefficients of 0.933 and 0.934) compared with the ground-truth segmentations (Figure, panels B and E). These predictions correctly revealed the depths and diameters of the ablation lesions (Figure, panels C and F).

Conclusions: Our results show that deep learning can facilitate ablation lesion identification and segmentation in OCT images. Deep learning methods integrated into an OCT system might enable automatic and precise ablation lesion visualization, which may help to assess lesions during radiofrequency ablation procedures.

Figure legend: Panels A and D: central OCT images of the ablation lesions; the blue arrows indicate the lesion bottom, where the image intensity suddenly decreases, and the white arrows indicate the birefringence artifacts (the black bands in the grey regions). Panels B and E: the ground-truth segmentations of the lesions in panels A and D. Panels C and F: the U-Net model's predictions for the lesions in panels A and D. A scale bar representing 500 μm is shown in each panel.
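The Dice coefficient used in the evaluation above is a standard overlap metric; a minimal sketch follows, in which the smoothing term and the toy masks are illustrative assumptions rather than the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient between predicted and ground-truth binary masks:
    2 * |P & T| / (|P| + |T|); a value of 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Toy usage: a slightly perturbed copy of a random mask stands in for a
# U-Net prediction compared against a manually segmented lesion.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, (256, 256))
pred = truth.copy()
pred[:10] = 0  # perturb the first 10 rows of the prediction
print(round(dice_coefficient(pred, truth), 3))
```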


2018, Vol. 9(12), pp. 6205
Author(s): Kerry J. Halupka, Bhavna J. Antony, Matthew H. Lee, Katie A. Lucy, Ravneet S. Rai, ...
