An interactive system for muscle and fat tissue identification of the lumbar spine using semantic segmentation

2021 ◽  
Vol 7 (2) ◽  
pp. 391-394
Author(s):  
Richard Bieck ◽  
David Baur ◽  
Johann Berger ◽  
Tim Stelzner ◽  
Anna Völker ◽  
...  

Abstract We introduce a system that allows the immediate identification and inspection of fat and muscle structures around the lumbar spine as a means of orthopaedic diagnostics before surgical treatment. The system comprises a backend component that accepts MRI data from a web-based interactive frontend via REST requests. The MRI data is passed through a U-Net model, fine-tuned on lumbar MRI images, to generate segmentation masks of fat and muscle areas. The result is sent back to the frontend, which functions as an inspection tool. For model training, 4000 MRI images from 108 patients were used in a k-fold cross-validation study with k = 10. Model training was performed over 25-30 epochs. We applied shift, scale, and rotation operations as well as elastic deformation and distortion functions for image augmentation, and a combined objective function using Dice and Focal loss. The trained models reached mean Dice scores of 0.83 and 0.52 and mean tissue area errors of 0.1 and 0.3 for muscle and fat tissue, respectively. The interactive web-based frontend was evaluated by clinicians to be suitable for the exploration of patient data as well as the assessment of segmentation results. We developed a system that uses semantic segmentation to identify fat and muscle tissue areas in MRI images of the lumbar spine. Further improvements should focus on the segmentation accuracy of fat tissue, as it is a determining factor in surgical decision-making. To our knowledge, this is the first system that automatically provides semantic information on the respective lumbar tissues.
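The combined Dice-plus-Focal objective mentioned in the abstract can be sketched in a few lines. This is a minimal pure-Python illustration for the binary case, not the authors' implementation; the loss weights, smoothing term and focusing parameter are assumptions:

```python
import math

def dice_score(pred, target, eps=1e-6):
    """Soft Dice score between predicted and reference masks (flattened)."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def focal_loss(probs, target, gamma=2.0):
    """Mean binary focal loss; down-weights easy, well-classified pixels."""
    losses = []
    for p, t in zip(probs, target):
        pt = p if t == 1 else 1 - p          # probability of the true class
        pt = min(max(pt, 1e-6), 1 - 1e-6)    # clamp for numerical safety
        losses.append(-((1 - pt) ** gamma) * math.log(pt))
    return sum(losses) / len(losses)

def combined_loss(probs, target, w_dice=0.5, w_focal=0.5):
    """Weighted sum of Dice loss (1 - Dice score) and focal loss."""
    return w_dice * (1 - dice_score(probs, target)) + w_focal * focal_loss(probs, target)
```

A confident, correct prediction drives both terms towards zero, while uncertain predictions are penalised mainly by the focal term.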

2021 ◽  
Author(s):  
Benjamin Kellenberger ◽  
Devis Tuia ◽  
Dan Morris

<p>Ecological research like wildlife censuses increasingly relies on data at the Terabyte scale. For example, modern camera-trap datasets contain millions of images that would require prohibitive amounts of manual labour to annotate with species, bounding boxes, and the like. Machine learning, especially deep learning [3], could greatly accelerate this task through automated predictions, but typically requires extensive coding and expert knowledge.</p><p>In this abstract we present AIDE, the Annotation Interface for Data-driven Ecology [2]. First, AIDE is a web-based annotation suite for image labelling with support for concurrent access and scalability, up to the cloud. Second, it tightly integrates deep learning models into the annotation process through active learning [7], where models learn from user-provided labels and in turn select the most relevant images for review from the large pool of unlabelled ones (Fig. 1). The result is a system where users only need to label what is required, which saves time and decreases errors due to fatigue.</p><p><em>Fig. 1: AIDE offers concurrent web image labelling support and uses annotations and deep learning models in an active learning loop.</em></p><p>AIDE includes a comprehensive set of built-in models, such as ResNet [1] for image classification, Faster R-CNN [5] and RetinaNet [4] for object detection, and U-Net [6] for semantic segmentation. All models can be customised and used without having to write a single line of code. Furthermore, AIDE accepts any third-party model with minimal implementation requirements. 
To complete the package, AIDE offers user annotation and model prediction evaluation, access control, customisable model training, and more, all through the web browser.</p><p>AIDE is fully open source and available at https://github.com/microsoft/aerial_wildlife_detection.</p>
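The selection step of the active learning loop described above can be sketched as an entropy-based ranking. This is a hypothetical minimal sketch, not AIDE's actual code; the function and variable names are assumptions:

```python
import math

def entropy(probs):
    """Shannon entropy of a per-image class distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_review(predictions, budget):
    """Rank unlabelled images by prediction entropy and return the `budget`
    most uncertain ones for manual annotation."""
    ranked = sorted(predictions, key=lambda item: entropy(item[1]), reverse=True)
    return [image_id for image_id, _ in ranked[:budget]]

# One iteration of the loop: predict on the unlabelled pool, pick the most
# uncertain images, have users annotate them, then retrain the model.
preds = {
    "img_001": [0.98, 0.01, 0.01],   # confident prediction, skipped
    "img_002": [0.34, 0.33, 0.33],   # near-uniform, reviewed first
    "img_003": [0.60, 0.30, 0.10],
}
queue = select_for_review(list(preds.items()), budget=2)
```

Only the images the model is least sure about reach the annotator, which is what makes the labelling effort shrink as the model improves.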


Author(s):  
Tiezhu Sun ◽  
Wei Zhang ◽  
Zhijie Wang ◽  
Lin Ma ◽  
Zequn Jie

Conventional convolutional neural networks (CNNs) have achieved great success in image semantic segmentation. Existing methods mainly focus on learning pixel-wise labels from an image directly. In this paper, we advocate tackling the pixel-wise segmentation problem by also considering the image-level classification labels. Theoretically, we analyze and discuss the effects of image-level labels on pixel-wise segmentation from the perspective of information theory. In practice, an end-to-end segmentation model is built by fusing the image-level and pixel-wise labeling networks. A generative network is included to reconstruct the input image and further boost the segmentation model training with an auxiliary loss. Extensive experimental results on a benchmark dataset demonstrate the effectiveness of the proposed method, where good image-level labels can significantly improve the pixel-wise segmentation accuracy.
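One intuition behind fusing the two label levels is that an image-level label prunes pixel classes that cannot occur in the image. The following toy sketch (an assumption for illustration, not the paper's architecture) masks per-pixel class scores by the set of classes present at image level and renormalises:

```python
def constrain_by_image_labels(pixel_scores, present_classes):
    """Zero out class scores for classes absent at image level, then
    renormalise each pixel's distribution over the remaining classes."""
    out = []
    for scores in pixel_scores:
        masked = [s if c in present_classes else 0.0 for c, s in enumerate(scores)]
        z = sum(masked) or 1.0          # avoid division by zero
        out.append([s / z for s in masked])
    return out
```

Ruling out absent classes reduces the entropy of every pixel's distribution, which is the information-theoretic sense in which good image-level labels help pixel-wise segmentation.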


2021 ◽  
Vol 32 (3) ◽  
Author(s):  
Dimitrios Bellos ◽  
Mark Basham ◽  
Tony Pridmore ◽  
Andrew P. French

Abstract Over recent years, many approaches have been proposed for the denoising or semantic segmentation of X-ray computed tomography (CT) scans. In most cases, high-quality CT reconstructions are used; however, such reconstructions are not always available. When the X-ray exposure time has to be limited, undersampled tomograms (in terms of their component projections) are obtained. This low number of projections yields low-quality reconstructions that are difficult to segment. Here, we consider CT time-series (i.e. 4D data), where the limited time for capturing fast-occurring temporal events means the time-series tomograms are necessarily undersampled. Fortunately, in these collections it is common practice to obtain representative, highly sampled tomograms before or after the time-critical portion of the experiment. In this paper, we propose an end-to-end network that can learn to denoise and segment the time-series' undersampled CTs by training on the earlier highly sampled representative CTs. Our single network offers both desired outputs while being trained only once, with the denoised output improving the accuracy of the final segmentation. Our method outperforms state-of-the-art methods in the task of semantic segmentation and offers comparable results for denoising. Additionally, we propose a knowledge-transfer scheme using synthetic tomograms. This not only allows accurate segmentation and denoising using less real-world data, but also increases segmentation accuracy. Finally, we make our datasets, as well as the code, publicly available.


2021 ◽  
Vol 6 (1) ◽  
pp. e000898
Author(s):  
Andrea Peroni ◽  
Anna Paviotti ◽  
Mauro Campigotto ◽  
Luis Abegão Pinto ◽  
Carlo Alberto Cutolo ◽  
...  

Objective To develop and test a deep learning (DL) model for semantic segmentation of anatomical layers of the anterior chamber angle (ACA) in digital gonio-photographs. Methods and analysis We used a pilot dataset of 274 ACA sector images, annotated by expert ophthalmologists to delineate five anatomical layers: iris root, ciliary body band, scleral spur, trabecular meshwork and cornea. Narrow depth of field and peripheral vignetting prevented clinicians from annotating part of each image with sufficient confidence, introducing a degree of subjectivity and feature correlation in the ground truth. To overcome these limitations, we present a DL model designed and trained to perform two tasks simultaneously: (1) maximise the segmentation accuracy within the annotated region of each frame and (2) identify a region of interest (ROI) based on local image informativeness. Moreover, our calibrated model provides interpretable results, returning pixel-wise classification uncertainty through Monte Carlo dropout. Results The model was trained and validated in a 5-fold cross-validation experiment on ~90% of the available data, achieving ~91% average segmentation accuracy within the annotated part of each ground-truth image of the hold-out test set. An appropriate ROI was successfully identified in all test frames. The uncertainty estimation module correctly located inaccuracies and errors in the segmentation outputs. Conclusion The proposed model improves on the only previously published work on gonio-photograph segmentation and may be a valid support for the automatic processing of these images to evaluate local tissue morphology. Uncertainty estimation is expected to facilitate acceptance of this system in clinical settings.
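Monte Carlo dropout, used above for pixel-wise uncertainty, amounts to keeping dropout active at inference and averaging several stochastic forward passes. A minimal sketch with a toy stochastic "network" (the network and all parameters are assumptions for illustration):

```python
import random
import statistics

def mc_dropout_predict(forward, x, n_samples=30, seed=0):
    """Predictive mean and spread from repeated stochastic forward passes;
    the spread (std) serves as the uncertainty estimate."""
    rng = random.Random(seed)
    samples = [forward(x, rng) for _ in range(n_samples)]
    return statistics.fmean(samples), statistics.pstdev(samples)

def noisy_forward(x, rng, drop_p=0.5):
    """Toy two-feature 'network' where each feature is randomly dropped."""
    f1 = 0.6 * x if rng.random() > drop_p else 0.0
    f2 = 0.4 * x if rng.random() > drop_p else 0.0
    # Inverted-dropout rescaling keeps the expected activation unchanged.
    return (f1 + f2) / (1 - drop_p)
```

In a segmentation model the same averaging is done per pixel, so regions where the passes disagree are flagged as uncertain.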


2021 ◽  
Vol 2099 (1) ◽  
pp. 012021
Author(s):  
A V Dobshik ◽  
A A Tulupov ◽  
V B Berikov

Abstract This paper presents an automatic algorithm for the segmentation of areas affected by acute stroke in non-contrast computed tomography brain images. The proposed algorithm is designed for learning in a weakly supervised scenario in which some images are labeled accurately and others inaccurately. Wrong labels arise from inaccuracies made by a radiologist during the manual annotation of computed tomography images. We propose methods for solving the segmentation problem in the case of inaccurately labeled training data. We use the U-Net neural network architecture with several modifications. Experiments on real computed tomography scans show that the proposed methods increase the segmentation accuracy.
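One common way to handle inaccurately labeled training data of this kind is to down-weight the loss contribution of pixels whose labels are suspected to be noisy. The sketch below is a generic confidence-weighted cross-entropy, offered as an illustration of the idea rather than the authors' specific modifications:

```python
import math

def weighted_bce(probs, labels, confidences, eps=1e-6):
    """Binary cross-entropy where each pixel's term is scaled by an
    estimated label confidence, so dubious labels contribute less."""
    total, weight_sum = 0.0, 0.0
    for p, y, w in zip(probs, labels, confidences):
        p = min(max(p, eps), 1 - eps)                       # numerical safety
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * ce
        weight_sum += w
    return total / weight_sum
```

With full confidence everywhere this reduces to ordinary cross-entropy; lowering the confidence of a suspect pixel shrinks its influence on the gradient.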


2013 ◽  
Vol 16 (3) ◽  
pp. E295-E300
Author(s):  
Thomas T. Simopoulos

Background: The use of magnetic resonance imaging (MRI) is continuously escalating for the evaluation of patients with persistent pain following lumbar spine surgery (LSS). Spinal cord stimulation (SCS) therapy is being clinically applied much more commonly for the management of chronic pain following LSS. There is an increased probability that these 2 incompatible modalities may be accidentally used in the same patient. Objectives: The purpose of this case report is to: (1) summarize a case in which a patient with a thoracic spinal cord stimulator underwent a diagnostic lumbar MRI, (2) describe the 3 magnetic fields used to generate images and their interactions with SCS devices, and (3) summarize the present literature. Study design: Case report. Setting: University hospital. Results: Aside from mild heat sensations at the generator/pocket site and very low intensity shocking sensations in the back while in the MRI scanner, the patient emerged from the study with no clinically detected adverse events. Subsequent activation of the SCS device, however, resulted in a brief intense shocking sensation. This persisted whenever the device was activated and required Implantable Pulse Generator (IPG) replacement. Electrical analysis revealed that some of the output circuitry switches, which regulate IPG stimulation and capacitor charge balancing, were damaged, most likely by MRI radiofrequency injected current. Limitations: Single case of a patient with a thoracic SCS having a lumbar MRI study. Conclusion: This case demonstrates the lack of compatibility of lumbar MRI and the Precision SCS system, as well as one of the possible patient adverse events that can occur when patients are exposed to MRI outside of the approved device labeling. Key words: Spinal cord stimulation devices, magnetic resonance imaging


2020 ◽  
Vol 8 (3) ◽  
pp. 188
Author(s):  
Fangfang Liu ◽  
Ming Fang

Image semantic segmentation technology has been increasingly applied in many fields, for example, autonomous driving, indoor navigation, virtual reality and augmented reality. However, its application to underwater scenes, which contain vast marine biological resources and irreplaceable biological gene banks that remain to be researched and exploited, is still limited. In this paper, image semantic segmentation technology is applied to the study of underwater scenes. We extend the current state-of-the-art semantic segmentation network DeepLabv3+ and employ it as the basic framework. First, the unsupervised color correction method (UCM) module is introduced into the encoder structure of the framework to improve image quality. Moreover, two up-sampling layers are added to the decoder structure to retain more target features and object boundary information. The model is trained by fine-tuning and optimizing the relevant parameters. Experimental results indicate that our method demonstrates better performance in improving the appearance of the segmented target object, preventing its pixels from mingling with those of other classes, enhancing the segmentation accuracy of the target boundaries and retaining more feature information. Compared with the original method, our method improves the segmentation accuracy by 3%.
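Underwater color correction of the kind the UCM module performs rests on rebalancing channels distorted by the water column. The sketch below is a much simpler gray-world rebalance, offered only to illustrate the principle; it is not the UCM algorithm itself:

```python
def gray_world_correct(pixels):
    """Rescale each RGB channel so its mean matches the overall gray mean,
    removing the blue-green cast typical of underwater images.
    `pixels` is a list of (R, G, B) tuples with values in [0, 255]."""
    n = len(pixels)
    means = [sum(px[c] for px in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(255.0, px[c] * gains[c]) for c in range(3)) for px in pixels]
```

After correction the three channel means coincide, so objects regain colors closer to their appearance above water, which in turn gives the segmentation network cleaner boundaries to work with.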

