Effects of Enhancement on Deep Learning Based Hepatic Vessel Segmentation

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1165
Author(s):  
Shanmugapriya Survarachakan ◽  
Egidijius Pelanis ◽  
Zohaib Amjad Khan ◽  
Rahul Prasanna Kumar ◽  
Bjørn Edwin ◽  
...  

Colorectal cancer (CRC) is the third most common type of cancer, with the liver being the most common site of cancer spread. A precise understanding of patient liver anatomy and pathology, and surgical planning based on it, plays a critical role in the treatment process. In some cases, surgeons request a 3D reconstruction, which requires a thorough analysis of the available images to convert them into 3D models of the relevant structures through a segmentation process. Liver vessel segmentation is challenging due to large variations in the size and orientation of the vessel structures as well as difficult contrast conditions. In recent years, deep learning-based methods have outperformed conventional image analysis methods in the field of medical imaging. Though Convolutional Neural Networks (CNNs) have proven efficient for medical image segmentation, the way the image data are handled and the preprocessing techniques applied play an important role in segmentation quality. Our work focuses on combining different vesselness enhancement filters and preprocessing methods to enhance the hepatic vessels prior to segmentation. In the first experiment, the effect of enhancement using individual vesselness filters was studied. In the second, the effect of gamma correction on vesselness filters was studied. Lastly, the effect of fused vesselness filters over individual filters was studied. The methods were evaluated on clinical CT data. 
The quantitative analysis of the results in terms of different evaluation metrics can be summed up as follows: (i) each of the filtered methods shows an improvement over the unenhanced images, with a best mean DICE score of 0.800 compared to 0.740 for the unenhanced data; (ii) applying gamma correction provides a statistically significant improvement in the performance of each filter, with an improvement in mean DICE of around 2%; (iii) both the fused filtered images and the fused segmentations give the best results (mean DICE scores of 0.818 and 0.830, respectively), with statistically significant improvements compared to the individual filters with and without gamma correction. The results have further been verified by qualitative analysis, which underlines the importance of our proposed fused filter and segmentation approaches.
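The gamma-correction and fusion steps described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the abstract does not specify the exact fusion operator, so a voxel-wise maximum is assumed here, and `gamma_correct` and `fuse_responses` are hypothetical names.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Min-max normalise an intensity image, then apply gamma correction."""
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return img ** gamma

def fuse_responses(*responses):
    """Voxel-wise maximum fusion of several vesselness filter responses
    (one plausible fusion rule; the paper's exact operator is not stated)."""
    return np.maximum.reduce([np.asarray(r, dtype=float) for r in responses])

# toy example: fusing two hypothetical 1D filter responses
a = np.array([0.1, 0.8, 0.2])
b = np.array([0.3, 0.5, 0.6])
fused = fuse_responses(a, b)  # → [0.3, 0.8, 0.6]
```

A gamma below 1 brightens darker intensities, which is consistent with making faint vessels more visible before filtering.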

2021 ◽  
Author(s):  
Daniella M. Patton ◽  
Emilie N. Henning ◽  
Rob W. Goulet ◽  
Sean K. Carroll ◽  
Erin M.R. Bigelow ◽  
...  

Segmenting bone from background is required to quantify bone architecture in computed tomography (CT) image data. A deep learning approach using convolutional neural networks (CNN) is a promising alternative method for automatic segmentation. The study objectives were to evaluate the performance of CNNs in automatic segmentation of human vertebral body (micro-CT) and femoral neck (nano-CT) data and to investigate the performance of CNNs in segmenting data across scanners. Scans of human L1 vertebral bodies (micro-CT [North Star Imaging], n=28, 53 μm³ voxels) and femoral necks (nano-CT [GE], n=28, 27 μm³ voxels) were used for evaluation. Six slices were selected for each scan and then manually segmented to create ground-truth masks (Dragonfly 4.0, ORS). Two-dimensional U-Net CNNs were trained in Dragonfly 4.0 with images of the femoral necks only [FN], the vertebral bodies only [VB], and the combined CT data [FN+VB]. Global (i.e., Otsu and Yen) and local (i.e., Otsu r=100) thresholding methods were applied to each dataset. Segmentation performance was evaluated using the Dice index, a similarity metric of overlap. Kruskal-Wallis and Tukey-Kramer post-hoc tests were used to test for significant differences in the accuracy of the segmentation methods. The FN U-Net had significantly higher Dice indices (i.e., better performance) than the global (Otsu: p=0.001; Yen: p=0.001) and local (Otsu [r=100]: p=0.001) thresholding methods and the VB U-Net (p=0.001), but there was no significant difference in performance compared to the FN+VB U-Net (p=0.783) on femoral neck image data. The VB U-Net had significantly higher Dice coefficients than the global and local Otsu (p=0.001 for both) and the FN U-Net (p=0.001), but not compared to the Yen threshold (p=0.462) or the FN+VB U-Net (p=0.783) on vertebral body image data. The results demonstrate that the U-Net architecture outperforms common thresholding methods. 
Further, a network trained with bone data from a different system (i.e., different image acquisition parameters and voxel size) and a different anatomical site can perform well on unseen data. Finally, a network trained with combined datasets performed well on both datasets, indicating that a network can feasibly be trained with multiple datasets and perform well on varied image data.
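The two building blocks of the comparison above, a global Otsu threshold and the Dice index, can be sketched in plain numpy. This is an illustrative re-implementation of the textbook definitions, not the Dragonfly code; `otsu_threshold` and `dice` are hypothetical names.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global Otsu threshold: pick the cut that maximises the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0   # mean below the cut
        mu1 = (w[i:] * centers[i:]).sum() / w1   # mean above the cut
        var = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i - 1]
    return best_t

def dice(mask_a, mask_b):
    """Dice similarity index: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

On a cleanly bimodal image the returned threshold falls between the two intensity modes, and a perfect segmentation yields a Dice index of 1.0.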


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shanshan Wang ◽  
Cheng Li ◽  
Rongpin Wang ◽  
Zaiyi Liu ◽  
Meiyun Wang ◽  
...  

Abstract Automatic medical image segmentation plays a critical role in scientific research and medical care. Existing high-performance deep learning methods typically rely on large training datasets with high-quality manual annotations, which are difficult to obtain in many clinical applications. Here, we introduce Annotation-effIcient Deep lEarning (AIDE), an open-source framework for handling imperfect training datasets. Methodological analyses and empirical evaluations are conducted, and we demonstrate that AIDE surpasses conventional fully-supervised models by delivering better performance on open datasets with scarce or noisy annotations. We further test AIDE in a real-life case study of breast tumor segmentation. Three datasets containing 11,852 breast images from three medical centers are employed, and AIDE, utilizing 10% of the training annotations, consistently produces segmentation maps comparable to those generated by fully-supervised counterparts or provided by independent radiologists. The 10-fold improvement in the efficiency of utilizing expert labels has the potential to promote a wide range of biomedical applications.


2011 ◽  
Vol 189-193 ◽  
pp. 3659-3663 ◽  
Author(s):  
Peng Cheng Wang ◽  
De Qun Li ◽  
Jin Long Zhao ◽  
Zhi Yan Zhen ◽  
Liang Ming Yan

Based on CT data of the knee lesion of an orthopedic patient, Mimics 13.1 and Magics 9.5 were used for the medical image processing. The CT image data were filtered, interpolated, and sharpened; the region of interest was extracted by threshold segmentation and region growing. The three-dimensional model of the patient's skeleton was then reconstructed. Using selective laser sintering, the reconstructed digital model of the patient's skeleton can be accurately transformed into a physical model of the individually matched skeleton. The results indicated that the contour of the reconstructed knee-joint bone model coincided closely with, and was symmetric to, the knee-joint bone defect.
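The threshold-plus-region-growing extraction described above can be illustrated with a minimal 2D sketch, assuming a seed point inside the bone and a fixed Hounsfield-style intensity window; `region_grow` is a hypothetical name, not the Mimics implementation.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, lo, hi):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity lies within the threshold window [lo, hi]."""
    img = np.asarray(img)
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
            continue  # off the image
        if mask[y, x] or not (lo <= img[y, x] <= hi):
            continue  # already visited, or outside the window
        mask[y, x] = True
        q.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask
```

Only pixels connected to the seed are kept, so a second bright structure elsewhere in the slice is excluded even if it passes the threshold.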


2019 ◽  
Vol 5 ◽  
pp. e222
Author(s):  
Matthew Z. Wong ◽  
Kiyohito Kunii ◽  
Max Baylis ◽  
Wai Hong Ong ◽  
Pavel Kroupa ◽  
...  

The availability of large image datasets has been a crucial factor in the success of deep learning-based classification and detection methods. Yet, while datasets for everyday objects are widely available, data for specific industrial use-cases (e.g., identifying packaged products in a warehouse) remain scarce. In such cases, datasets have to be created from scratch, placing a crucial bottleneck on the deployment of deep learning techniques in industrial applications. We present work carried out in collaboration with a leading UK online supermarket, with the aim of creating a computer vision system capable of detecting and identifying unique supermarket products in a warehouse setting. To this end, we demonstrate a framework for using data synthesis to create an end-to-end deep learning pipeline, beginning with real-world objects and culminating in a trained model. Our method is based on the generation of a synthetic dataset from 3D models obtained by applying photogrammetry techniques to real-world objects. Using 100K synthetic images for 10 classes, an InceptionV3 convolutional neural network was trained, which achieved an accuracy of 96% on a separately acquired test set of real supermarket product images. The image generation process supports automatic pixel annotation, which eliminates the prohibitively expensive manual annotation typically required for detection tasks. Based on this readily available data, a one-stage RetinaNet detector was trained on the synthetic, annotated images to produce a detector that can accurately localize and classify the specimen products in real time.
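The automatic pixel annotation that synthetic rendering enables can be illustrated with a short sketch. Assuming the renderer emits a per-pixel instance-ID map alongside each synthetic frame (an assumption; the paper's pipeline details are not given here), detector bounding boxes follow mechanically, with no manual labelling:

```python
import numpy as np

def boxes_from_id_map(id_map, background=0):
    """Derive per-object bounding boxes (label, y0, x0, y1, x1) from a
    rendered instance-ID map, turning free per-pixel annotations into
    detector-style box annotations."""
    id_map = np.asarray(id_map)
    boxes = []
    for label in np.unique(id_map):
        if label == background:
            continue
        ys, xs = np.nonzero(id_map == label)
        boxes.append((int(label), int(ys.min()), int(xs.min()),
                      int(ys.max()), int(xs.max())))
    return boxes
```

Because the ID map is exact by construction, the derived boxes are pixel-perfect, which is one reason synthetic data sidesteps the annotation bottleneck described above.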


Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 685
Author(s):  
Bitewulign Kassa Mekonnen ◽  
Tung-Han Hsieh ◽  
Dian-Fu Tsai ◽  
Shien-Kuei Liaw ◽  
Fu-Liang Yang ◽  
...  

The segmentation of capillaries in human skin in full-field optical coherence tomography (FF-OCT) images plays a vital role in clinical applications. Recent advances in deep learning techniques have demonstrated state-of-the-art accuracy for the task of automatic medical image segmentation. However, a vast amount of annotated data is required for the successful training of deep learning models, which demands a great deal of effort and is costly. To overcome this fundamental problem, an automatic simulation algorithm is presented that generates OCT-like skin image data with augmented capillary networks (ACNs) in a three-dimensional volume (which we call the ACN data). The algorithm simultaneously produces augmented FF-OCT images and the corresponding ground-truth images of the capillary structures: potential functions are introduced to guide the capillary pathways, and a two-dimensional Gaussian function is utilized to mimic the brightness reflected by capillary blood flow seen in real OCT data. To assess the quality of the ACN data, a U-Net deep learning model was trained on the ACN data and then tested on real in vivo FF-OCT human skin images for capillary segmentation. With properly designed binarization of the predicted image frames, the testing result on real FF-OCT data achieved high scores in the performance metrics with respect to the ground truth. This demonstrates that the proposed algorithm is capable of generating ACN data that imitate real FF-OCT skin images of capillary networks for use in research and deep learning, and that the model for capillary segmentation could be of wide benefit in clinical and biomedical applications.
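The 2D-Gaussian brightness idea can be sketched as follows. This is a toy reconstruction of the principle only: the paper's potential-function path generation is replaced here by an explicitly given path, and `gaussian_spot` and `stamp_capillary` are hypothetical names.

```python
import numpy as np

def gaussian_spot(shape, center, sigma, peak=1.0):
    """Render a 2D Gaussian brightness profile, mimicking the signal
    reflected by capillary blood flow in a simulated OCT frame."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return peak * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))

def stamp_capillary(frame, path, sigma=1.5):
    """Stamp Gaussian spots along a capillary path. The path pixels form
    the ground-truth mask, so image and label are generated together."""
    mask = np.zeros(frame.shape, dtype=bool)
    for cy, cx in path:
        frame = np.maximum(frame, gaussian_spot(frame.shape, (cy, cx), sigma))
        mask[cy, cx] = True
    return frame, mask
```

The key property illustrated is the one the abstract relies on: every simulated image comes with an exact ground-truth mask for free, so no manual annotation is needed.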


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations substantially reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, different researchers have addressed the problem through different image enhancement techniques. Most state-of-the-art approaches apply common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning-based solution. They have contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
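One of the conventional techniques mentioned, background estimation with white balancing, can be sketched in a few lines; this is a generic baseline of the kind the deep models are compared against, not the authors' code, and `normalise_whiteboard` is a hypothetical name.

```python
import numpy as np

def normalise_whiteboard(img, block=16):
    """Classic (non-deep) enhancement: estimate the bright whiteboard
    background block-wise via local maxima, then divide it out so dark
    pen strokes stand out against a uniform white background."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    bg = np.ones_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block]
            bg[y:y + block, x:x + block] = max(patch.max(), 1e-6)
    return np.clip(img / bg, 0.0, 1.0)
```

This per-block division flattens uneven lighting, but, as the abstract notes, such methods fail once strokes are severely degraded, which motivates the learned approach.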


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting is of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to differences in cell size and shape, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large amount of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which reduces the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis by estimating image similarity metrics and processing times, compared to the results achieved by two well-known techniques, the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, whilst ensuring the high quality of the fused image.
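For readers unfamiliar with multi-focus fusion, the classical idea can be sketched as per-pixel selection of the sharpest frame in the focus stack. Note this illustrates a conventional sharpness-based baseline, not the U-Net method of the paper; `laplacian_energy` and `fuse_focus_stack` are hypothetical names.

```python
import numpy as np

def laplacian_energy(img):
    """Local sharpness proxy: absolute response of a 4-neighbour Laplacian."""
    p = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse_focus_stack(stack):
    """Collapse a multi-focus stack (N, H, W) into one all-in-focus image
    by taking, at each pixel, the frame with the highest local sharpness."""
    stack = np.asarray(stack, dtype=float)
    energy = np.stack([laplacian_energy(f) for f in stack])
    best = np.argmax(energy, axis=0)               # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Collapsing an N-frame stack to one fused image is the same data-reduction effect the abstract reports (up to 7-fold), here achieved with a hand-crafted sharpness criterion instead of a learned one.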


Author(s):  
Darius M. Thiesen ◽  
Dimitris Ntalos ◽  
Alexander Korthaus ◽  
Andreas Petersik ◽  
Karl-Heinz Frosch ◽  
...  

Abstract Introduction For successful intramedullary implant placement at the femur, such as nailing in unstable proximal femur fractures, the use of an implant that at least reaches or exceeds the femoral isthmus and yields sufficient thickness is recommended. A number of complications after intramedullary femoral nailing have been reported, particularly in Asians. To understand the anatomical features of the proximal femur and their ethnic differences, we aimed to accurately calculate the femoral isthmus dimensions and proximal distances of Asians and Caucasians. Methods In total, 1189 Asian and Caucasian segmented 3D CT datasets of femurs were analyzed. The individual femoral isthmus diameter was precisely computed to investigate whether gender, femur length, age, ethnicity or body mass index have an influence on isthmus diameter. Results The mean isthmus diameter of all femurs was 10.71 ± 2.2 mm. A significantly larger diameter was found in Asians compared to Caucasians (p < 0.001). Age was a strong predictor of isthmus diameter variability in females (p < 0.001, adjusted r² = 0.299). With every year of life, the isthmus showed a widening of 0.08 mm in women. A matched-pair analysis of 150 female femurs showed a significant difference between the isthmus diameters of Asian and Caucasian femurs (p = 0.05). In 50% of the cases, the isthmus was found within a 2.4 cm range, between 16.9 and 19.3 cm distal to the tip of the greater trochanter. The female Asian femur differs from the Caucasian in that it is wider at the isthmus. Conclusions In absolute values, the proximal isthmus distance did not show much variation but is more proximal in Asians. The detailed data presented may be helpful in the development of future implant designs. The length and thickness of future standard implants may be chosen based on these findings.


Author(s):  
Annika Niemann ◽  
Samuel Voß ◽  
Riikka Tulamo ◽  
Simon Weigand ◽  
Bernhard Preim ◽  
...  

Abstract Purpose For the evaluation and rupture risk assessment of intracranial aneurysms, clinical, morphological and hemodynamic parameters are analyzed. The reliability of intracranial hemodynamic simulations strongly depends on the underlying models. Due to missing information about the intracranial vessel wall, the patient-specific wall thickness is often neglected, as are the specific physiological and pathological properties of the vessel wall. Methods In this work, we present a model for structural simulations with patient-specific wall thickness, including different tissue types, based on postmortem histologic image data. Images of histologic 2D slices from intracranial aneurysms were manually segmented into nine tissue classes. After virtual inflation, they were combined into 3D models. This approach yields multiple 3D models of the inner and outer wall and of the different tissue parts as a prerequisite for subsequent simulations. Results We present a pipeline to generate 3D models of aneurysms that respect the different tissue textures occurring in the wall. First experiments show that including the variance of the tissue in the structural simulation affects the simulation results. Especially at the interfaces between neighboring tissue classes, the larger influence of stiffer components on the stability equilibrium became obvious. Conclusion The presented approach enables the creation of a geometric model with differentiated wall tissue. This information can be used in different applications, such as hemodynamic simulations, to increase modeling accuracy.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology for ecologists. Thus, there is a strong need to democratise access to deep learning technologies by providing an easy-to-use software application that allows non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation-editing functionalities minimise the constraints of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets. 
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and the use of transfer learning mean that domain-specific models can be trained rapidly and frequently updated without the need for computer science expertise or data sharing, protecting intellectual property and privacy.

