tissue classification
Recently Published Documents

TOTAL DOCUMENTS: 451 (FIVE YEARS 102)
H-INDEX: 36 (FIVE YEARS 3)

2022 ◽  
Vol 15 ◽  
Author(s):  
Meera Srikrishna ◽  
Rolf A. Heckemann ◽  
Joana B. Pereira ◽  
Giovanni Volpe ◽  
Anna Zettergren ◽  
...  

Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For the assessment of brain structure and integrity, CT is a non-invasive, cheaper, faster, and more widely available modality than MRI. However, the clinical application of CT is mostly limited to the visual assessment of brain integrity and the exclusion of co-pathologies. We have previously developed two-dimensional (2D) deep learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification and to compare the performance of 2D- and 3D-based segmentation networks for brain tissue classification in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time. Segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task, slice-wise 2D U-Nets performed better than multitask patch-based 3D U-Nets in CT brain tissue classification. These findings support the use of 2D U-Nets to segment brain tissue in anisotropic CT, which could broaden the application of CT to detect brain abnormalities in clinical settings.
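The abstract above evaluates segmentations using measures of spatial similarity; the Dice coefficient is the standard such measure for comparing a predicted tissue mask against a reference. A minimal NumPy sketch (the function name and interface are illustrative, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary segmentation masks (1.0 = identical)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total
```

In a multi-class setting such as CT brain tissue classification, the score is typically computed per tissue class (one binary mask per class) and then averaged.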


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
Pei Wang ◽  
Shuwei Wang ◽  
Yuan Zhang ◽  
Xiaoyan Duan

The objectives of this study were to improve the efficiency and accuracy of early clinical diagnosis of cervical cancer and to explore the application of a tissue classification algorithm combined with multispectral imaging in cervical cancer screening. Fifty patients with suspected cervical cancer were selected. First, multispectral imaging was used to collect images of the cervical tissues of the 50 patients under the conventional white-light band, the narrowband green-light band, and the narrowband blue-light band. Second, the collected multispectral images were fused, and the tissue classification algorithm was then used to segment the diseased area according to differences in contrast and other characteristics between lesioned and non-lesioned cervical tissue in the fused multiband image; the segmentation was compared with the results of the disease examination. The average gradient, standard deviation (SD), and image entropy were adopted to evaluate image quality, and sensitivity and specificity were selected to evaluate the clinical application value of the discussed method. Compared with the image without lesions, the fused spectral image showed a clear difference: its contrast was 0.7549, higher than before fusion (0.4716), a statistically significant difference (P < 0.05). The average gradient, SD, and image entropy of the multispectral image assisted by the tissue classification algorithm were 2.0765, 65.2579, and 4.974, respectively (P < 0.05), all higher than the three reported reference indicators.
The sensitivity and specificity of the multispectral image with the tissue classification algorithm were 85.3% and 70.8%, respectively, both greater than those of the image without the algorithm. These results show that the multispectral image assisted by the tissue classification algorithm can effectively screen for cervical cancer and can quickly, efficiently, and safely separate the lesioned from the non-lesioned cervical tissue. The segmentation result agreed with the doctor's disease examination, indicating high clinical application value. This provides an effective reference for the clinical application of multispectral imaging assisted by a tissue classification algorithm in the early screening and diagnosis of cervical cancer.
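The image-quality metrics named in this abstract (average gradient, SD, and image entropy) have standard definitions that can be sketched in NumPy as follows; the exact formulas used in the paper may differ in detail (e.g. gradient stencil or histogram binning), so treat this as an illustrative assumption:

```python
import numpy as np

def image_quality_metrics(img):
    """Average gradient, standard deviation, and Shannon entropy of a grayscale image."""
    img = np.asarray(img, dtype=float)
    # Average gradient: mean magnitude of horizontal/vertical finite differences
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    avg_grad = np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0).mean()
    # Standard deviation of pixel intensities
    sd = img.std()
    # Shannon entropy over an 8-bit intensity histogram
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return avg_grad, sd, entropy
```

Higher values of all three metrics indicate a sharper, higher-contrast, more information-rich image, which is why the paper uses them to argue that the fused image is better than its inputs.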


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Man Wu ◽  
Shuwen Wang ◽  
Shirui Pan ◽  
Andrew C. Terentis ◽  
John Strasswimmer ◽  
...  

Abstract Recently, Raman spectroscopy (RS) was demonstrated to be a non-destructive way of diagnosing cancer, owing to the unique ability of RS measurements to reveal molecular biochemical differences between cancerous and normal tissues and cells. When designing computational approaches for cancer detection, the quality and quantity of RS tissue samples are important for accurate prediction. In reality, however, obtaining skin cancer samples is difficult and expensive due to privacy and other constraints. With a small number of samples, training a classifier is difficult and often results in overfitting. It is therefore important to have more samples to better train classifiers for accurate cancer tissue classification. To overcome these limitations, this paper presents a novel generative adversarial network-based skin cancer tissue classification framework. Specifically, we design a data augmentation module that employs a generative adversarial network (GAN) to generate synthetic RS data resembling the training data classes. The original tissue samples and the generated data are concatenated to train the classification modules. Experiments on real-world RS data demonstrate that (1) data augmentation can help improve skin cancer tissue classification accuracy, and (2) a generative adversarial network can be used to generate reliable synthetic Raman spectroscopic data.
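The augmentation step described here, concatenating real spectra with synthetic ones before training, can be sketched as follows. The `gan_generate` stand-in below simply perturbs resampled real spectra with Gaussian noise; it is a hypothetical placeholder for the paper's trained GAN generator, and the function names are illustrative:

```python
import numpy as np

def gan_generate(X_real, idx, rng):
    """Stand-in for a trained GAN generator: noisy copies of resampled real spectra."""
    noise = rng.normal(0.0, 0.01, size=(len(idx), X_real.shape[1]))
    return X_real[idx] + noise

def augment_training_set(X_real, y_real, n_synthetic, rng):
    """Concatenate real RS samples with synthetic samples of the same classes."""
    idx = rng.integers(0, len(X_real), size=n_synthetic)
    X_syn = gan_generate(X_real, idx, rng)
    y_syn = y_real[idx]  # each synthetic spectrum keeps the class it was drawn from
    return np.concatenate([X_real, X_syn]), np.concatenate([y_real, y_syn])
```

The enlarged `(X, y)` set would then be fed to the classification module in place of the raw training data; in the paper, the synthetic samples come from a GAN trained per class rather than from noise injection.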


2021 ◽  
Author(s):  
S Pérez ◽  
N Van de Berg ◽  
F Manni ◽  
M Lai ◽  
L Rijstenberg ◽  
...  

Author(s):  
Mariana Mulinari Pinheiro Machado ◽  
Alina Voda ◽  
Gildas Besançon ◽  
Guillaume Becq ◽  
Philippe Kahane ◽  
...  

2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Daniel G. E. Thiem ◽  
Paul Römer ◽  
Matthias Gielisch ◽  
Bilal Al-Nawas ◽  
Martin Schlüter ◽  
...  

Abstract Background Hyperspectral imaging (HSI) is a promising non-contact approach to tissue diagnostics, generating large amounts of raw data for whose processing computer vision (i.e. deep learning) is particularly suitable. The aim of this proof-of-principle study was the classification of hyperspectral (HS) reflectance values into the human oral tissue types fat, muscle, and mucosa using deep learning methods. Furthermore, the tissue-specific hyperspectral signatures collected will serve as a representative reference for the future assessment of oral pathological changes, in the sense of an HS library. Methods A total of 316 samples of healthy human oral fat, muscle, and oral mucosa was collected from 174 different patients and imaged using an HS camera covering the wavelength range from 500 nm to 1000 nm. HS raw data were labelled and processed for tissue classification using a light-weight 6-layer deep neural network (DNN). Results The reflectance values differed significantly (p < .001) for fat, muscle, and oral mucosa at almost all wavelengths, with the signature of muscle differing the most. The deep neural network distinguished the tissue types with an accuracy of > 80% each. Conclusion Oral fat, muscle, and mucosa can be classified sufficiently well and automatically by their specific HS signatures using a deep learning approach. Early detection of premalignant mucosal lesions using hyperspectral imaging and deep learning is so far rarely represented in the medical and computer vision research domains, but it has high potential and is the subject of subsequent studies.
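A light-weight fully connected network of the kind described, six layers mapping a per-pixel reflectance spectrum to the three tissue classes, might look like the following NumPy forward-pass sketch. The layer widths are illustrative assumptions, not the authors' architecture, and the weights here are random placeholders (training is not shown):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classify_spectra(X, rng):
    """Forward pass of a 6-layer MLP: reflectance spectrum -> P(fat, muscle, mucosa)."""
    widths = [X.shape[1], 128, 64, 32, 16, 8, 3]  # six weight layers, assumed sizes
    h = X
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))  # untrained
        b = np.zeros(n_out)
        h = h @ W + b
        if n_out != 3:          # ReLU on hidden layers only
            h = relu(h)
    return softmax(h)           # class probabilities per spectrum
```

With trained weights, the arg-max over the three output probabilities gives the predicted tissue type; the paper reports that such a classifier separates fat, muscle, and mucosa with over 80% accuracy each.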


NeuroImage ◽  
2021 ◽  
pp. 118606
Author(s):  
Meera Srikrishna ◽  
Joana B. Pereira ◽  
Rolf A. Heckemann ◽  
Giovanni Volpe ◽  
Danielle van Westen ◽  
...  
