MMTDNN: Multi-view Massive Training Deep Neural Network for Segmentation and Detection of Abnormal Tissues in Medical Images

Author(s):  
Hassan Khastavaneh ◽  
Hossein Ebrahimpour Komleh

Purpose: Automated segmentation of abnormal tissues in medical images is considered an essential part of computer-aided detection and diagnosis systems that analyze medical images. However, automated segmentation of abnormalities is a challenging task due to the limitations of imaging technologies and the complex structure of abnormalities, including low contrast between normal and abnormal tissues, shape diversity, appearance inhomogeneity, and vague abnormality boundaries. Therefore, more intelligent segmentation techniques are required to tackle these challenges. Materials and Methods: In this study, a method called MMTDNN is proposed to segment and detect medical image abnormalities. MMTDNN, as a multi-view learning machine, utilizes convolutional neural networks in a massive training strategy. The proposed method has four phases: preprocessing, view generation, pixel-level segmentation, and post-processing. The International Symposium on Biomedical Imaging (ISBI) 2016 dataset is used for the evaluation of the proposed method. Results: The performance of the proposed method has been evaluated on the task of skin lesion segmentation, one of the challenging applications of abnormal tissue segmentation. Both qualitative and quantitative results demonstrate outstanding performance: an accuracy of 0.973, a Jaccard index of 0.876, and a Dice similarity coefficient of 0.931 have been achieved. Conclusion: The experimental results demonstrate that the proposed method outperforms state-of-the-art methods of skin lesion segmentation.
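The overlap scores reported here (Jaccard index and Dice similarity coefficient) are standard pixel-level metrics computed from a predicted binary mask and a ground-truth mask. As a minimal sketch (toy flattened masks, not the paper's data):

```python
def overlap_metrics(pred, truth):
    """Pixel-level Jaccard index and Dice similarity coefficient
    for two flattened binary masks of equal length."""
    tp = sum(p and t for p, t in zip(pred, truth))          # predicted lesion, is lesion
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted lesion, is background
    fn = sum(not p and t for p, t in zip(pred, truth))      # missed lesion pixels
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return jaccard, dice

# illustrative 6-pixel masks
pred = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
ja, dc = overlap_metrics(pred, truth)
```

Dice weights the intersection twice, so it is always at least as large as Jaccard on the same masks, which is consistent with the 0.931 versus 0.876 figures above.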

Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1224
Author(s):  
Omran Salih ◽  
Serestina Viriri

Markov random field (MRF) theory has achieved great success in image segmentation. Researchers have developed various methods based on MRF theory to solve skin lesion segmentation problems, such as the pixel-based MRF model, the stochastic region-merging approach, and the symmetric MRF model. In this paper, the proposed method seeks to combine the advantages of the pixel-based MRF model and the stochastic region-merging approach, in order to overcome the shortcomings of the pixel-based MRF model under the various challenges that affect skin lesion segmentation results, such as irregular and fuzzy borders, the presence of noise and artifacts, and low contrast between lesions and surrounding skin. The strength of the proposed method lies in combining the benefits of the pixel-based MRF model and stochastic region merging by decomposing the likelihood function into the product of the stochastic region-merging likelihood function and the pixel likelihood function. The proposed method was evaluated on the benchmark datasets PH2 and ISIC, achieving Dice coefficients of 89.65% on PH2 and 88.34% on ISIC, respectively.
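The decomposition described above multiplies a region-merging likelihood by a pixel likelihood; in log space the two terms simply add to the MRF prior. A heavily simplified one-pixel sketch (the label names, probabilities, and uniform prior are illustrative only; the full model couples neighboring labels through the MRF prior):

```python
import math

def map_label(pixel_lik, region_lik, log_prior):
    """Pick the label maximizing log prior + log pixel likelihood
    + log region-merging likelihood, i.e. the factorized likelihood
    described in the abstract, evaluated for a single pixel."""
    best, best_score = None, -math.inf
    for label in pixel_lik:
        score = (log_prior[label]
                 + math.log(pixel_lik[label])
                 + math.log(region_lik[label]))
        if score > best_score:
            best, best_score = label, score
    return best

# hypothetical per-label terms for one pixel
pixel_lik = {"lesion": 0.7, "skin": 0.3}
region_lik = {"lesion": 0.6, "skin": 0.4}
log_prior = {"lesion": math.log(0.5), "skin": math.log(0.5)}
```

Because the likelihood factorizes, each term can veto a label the other favors, which is how the region-merging evidence complements the purely pixel-based model.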


2021 ◽  
Vol 11 (10) ◽  
pp. 4528
Author(s):  
Tran-Dac-Thinh Phan ◽  
Soo-Hyung Kim ◽  
Hyung-Jeong Yang ◽  
Guee-Sang Lee

Skin lesion segmentation is one of the pivotal stages in the diagnosis of melanoma. Many methods have been proposed, but to date this is still a challenging task. Variations in size and color, the fuzzy boundary, and the low contrast between lesion and normal skin are the adverse factors behind deficient or excessive delineation of lesions, or even inaccurate lesion location detection. In this paper, to counter these problems, we introduce a deep learning method based on the U-Net architecture that performs three tasks: lesion segmentation, boundary distance map regression, and contour detection. The two auxiliary tasks provide boundary and shape awareness to the main encoder, which improves object localization and pixel-wise classification in the transition region from lesion tissues to healthy tissues. Moreover, to address the large variation in size, Selective Kernel modules placed in the skip connections transfer multi-receptive-field features from the encoder to the decoder. Our methods are evaluated on three publicly available datasets: ISBI 2016, ISBI 2017, and PH2. The extensive experimental results show the effectiveness of the proposed method in the task of skin lesion segmentation.
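Both auxiliary targets described above can be derived from the ground-truth mask itself: the contour is the set of lesion pixels touching background, and the distance map gives each pixel its distance to that contour. A minimal sketch under that assumption (the paper's exact target definitions may differ, e.g. signed or normalized distances):

```python
from collections import deque

def boundary_targets(mask):
    """From a binary mask, derive a contour map (lesion pixels with a
    background 4-neighbor) and a distance-to-boundary map computed by
    breadth-first search outward from the contour pixels."""
    h, w = len(mask), len(mask[0])
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    contour = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and any(
                    not (0 <= y + dy < h and 0 <= x + dx < w
                         and mask[y + dy][x + dx])
                    for dy, dx in nbrs):
                contour[y][x] = 1
    # BFS from all contour pixels gives every pixel its boundary distance
    dist = [[None] * w for _ in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if contour[y][x])
    for y, x in q:
        dist[y][x] = 0
    while q:
        y, x = q.popleft()
        for dy, dx in nbrs:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return contour, dist
```

Regressing the distance map forces the encoder to represent how far each pixel sits from the lesion border, which is exactly the transition region where pixel-wise classification is hardest.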


2020 ◽  
Vol 39 (3) ◽  
pp. 169-185
Author(s):  
Omran Salih ◽  
Serestina Viriri

Deep learning techniques such as deep convolutional networks have achieved great success in skin lesion segmentation towards melanoma detection. Performance is, however, restrained by distinctive and challenging features of skin lesions, such as irregular and fuzzy borders, the presence of noise and artefacts, and low contrast between lesions. The methods are also restricted by the scarcity of annotated lesion images for training and by limited computing resources. Recent research in convolutional neural networks (CNNs) has provided a variety of new architectures for deep learning. One interesting new architecture is the local binary convolutional neural network (LBCNN), which can reduce the workload of CNNs and improve classification accuracy. The proposed framework employs local binary convolution in place of standard convolution in the U-Net architecture, yielding a reduced-size deep convolutional encoder-decoder network that adopts a loss function for robust segmentation; the encoder part of U-Net is replaced by LBCNN layers. The approach automatically learns and segments complex features of skin lesion images. The encoder stage learns contextual information by extracting discriminative features, while the decoder stage captures the lesion boundaries of the skin images. This addresses the issue of encoder-decoder networks producing coarse segmentation output for challenging skin lesion appearances, such as low contrast between healthy and unhealthy tissues and fine-grained variability, as well as issues with multi-size, multi-scale, and multi-resolution skin lesion images. The network also adopts a reduced-size design with just five levels of encoding-decoding, which greatly reduces the consumption of computational processing resources. The system was evaluated on the publicly available ISIC and PH2 datasets and outperforms most existing state-of-the-art methods.
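The parameter saving in local binary convolution comes from fixing a bank of sparse binary (+1/-1/0) kernels and learning only the small 1x1 weights that mix their responses. A rough illustrative sketch of that idea (kernel count, size, and random initialization are assumptions, not the paper's configuration):

```python
import random

def lbconv(image, num_anchors=4, seed=0):
    """Local binary convolution sketch: convolve with fixed random
    binary 3x3 kernels (values in {-1, 0, +1}, never trained), then
    combine the responses with a few learnable 1x1 weights -- here
    just randomly initialized for illustration."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    kernels = [[[rng.choice((-1, 0, 1)) for _ in range(3)] for _ in range(3)]
               for _ in range(num_anchors)]
    weights = [rng.uniform(-1.0, 1.0) for _ in range(num_anchors)]  # only learnable part
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            for k, wk in zip(kernels, weights):
                resp = sum(k[i][j] * image[y + i - 1][x + j - 1]
                           for i in range(3) for j in range(3))
                out[y][x] += wk * resp  # 1x1 linear combination of anchor responses
    return out
```

Since only the mixing weights are trained, the trainable-parameter count per layer drops from kernel_size² x channels to roughly the number of anchor kernels, which is the workload reduction the abstract refers to.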


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3462
Author(s):  
Shengxin Tao ◽  
Yun Jiang ◽  
Simin Cao ◽  
Chao Wu ◽  
Zeqi Ma

The automatic segmentation of skin lesions is considered a key step in the diagnosis and treatment of skin lesions, which is essential to improve the survival rate of patients. However, due to low contrast, the texture and boundary are difficult to distinguish, which makes accurate segmentation of skin lesions challenging. To cope with these challenges, this paper proposes an attention-guided network with densely connected convolution for skin lesion segmentation, called CSAG and DCCNet. In the last step of the encoding path, the model uses densely connected convolution to replace the ordinary convolutional layer. A novel attention-oriented filter module, called the Channel Spatial Fast Attention-guided Filter (CSFAG for short), was designed and embedded in the skip connections of CSAG and DCCNet. On the ISIC-2017 dataset, a large number of ablation experiments verified the superiority and robustness of the CSFAG module and the densely connected convolution. The segmentation performance of CSAG and DCCNet was compared with other recent algorithms, achieving highly competitive results on all metrics. The robustness and cross-dataset performance of the method were tested on another publicly available dataset, PH2, further verifying the effectiveness of the model.
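The channel half of a channel-spatial attention module typically squeezes each feature map to a scalar by global average pooling, gates it through a sigmoid, and rescales the channel. A bare-bones sketch in that spirit (this is a generic channel-attention pattern, not the actual CSFAG module, which additionally uses spatial attention and a fast guided filter):

```python
import math

def channel_attention(feature_maps):
    """Gate each 2-D feature map by a sigmoid of its global average:
    channels with strong average activation are kept, weak ones are
    suppressed. A simplified stand-in for the channel branch of a
    channel-spatial attention block."""
    gated = []
    for ch in feature_maps:
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # squeeze
        gate = 1.0 / (1.0 + math.exp(-pooled))                         # sigmoid excite
        gated.append([[v * gate for v in row] for row in ch])          # rescale
    return gated
```

Placed in a skip connection, such a gate lets the decoder receive encoder features reweighted by their relevance rather than copied verbatim.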


Diagnostics ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 72
Author(s):  
Halil Murat Ünver ◽  
Enes Ayan

Skin lesion segmentation has a critical role in the early and accurate diagnosis of skin cancer by computerized systems. However, automatic segmentation of skin lesions in dermoscopic images is a challenging task owing to difficulties including artifacts (hairs, gel bubbles, ruler markers), indistinct boundaries, low contrast, and varying sizes and shapes of the lesion images. This paper proposes a novel and effective pipeline for skin lesion segmentation in dermoscopic images, combining a deep convolutional neural network named You Only Look Once (YOLO) and the GrabCut algorithm. This method performs lesion segmentation on a dermoscopic image in four steps: (1) removal of hairs on the lesion, (2) detection of the lesion location, (3) segmentation of the lesion area from the background, and (4) post-processing with morphological operators. The method was evaluated on two well-known public datasets, PH2 and ISBI 2017 (the Skin Lesion Analysis Towards Melanoma Detection Challenge Dataset). The proposed pipeline achieved a 90% sensitivity rate on the ISBI 2017 dataset, outperforming other deep learning-based methods, and obtained results close to those of other methods in the literature in terms of accuracy, specificity, Dice coefficient, and Jaccard index.
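Step (4) of the pipeline applies morphological operators to clean the GrabCut output. One common such operator is closing (dilation followed by erosion), which fills small holes inside the segmented lesion. A pure-Python sketch with a 3x3 structuring element (the paper does not specify which operators or element sizes it uses, so treat this as a generic example):

```python
def closing(mask):
    """Morphological closing of a binary mask: dilate with a 3x3
    structuring element, then erode with the same element. Fills
    single-pixel holes and smooths jagged boundaries."""
    def sweep(m, dilating):
        h, w = len(m), len(m[0])
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = [m[ny][nx]
                        for ny in range(max(0, y - 1), min(h, y + 2))
                        for nx in range(max(0, x - 1), min(w, x + 2))]
                # dilation keeps a pixel if ANY neighbor is set,
                # erosion only if ALL (in-bounds) neighbors are set
                out[y][x] = 1 if (any(vals) if dilating else all(vals)) else 0
        return out
    return sweep(sweep(mask, True), False)
```

Opening (erosion then dilation) would instead remove spurious isolated foreground pixels; pipelines like this one often apply both.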


2020 ◽  
Vol 10 (9) ◽  
pp. 3045
Author(s):  
Maria Rizzi ◽  
Cataldo Guaragnella

The establishment of automatic diagnostic systems able to detect and classify skin lesions at the initial stage is becoming highly relevant and effective in providing support for medical personnel during clinical assessment. Image segmentation plays a determinant part in the computer-aided skin lesion diagnosis pipeline because it makes it possible to extract and highlight information on lesion contour texture, for example skewness and area unevenness. However, artifacts, low contrast, indistinct boundaries, and differing shapes and areas make skin lesion segmentation a challenging task. In this paper, a fully automatic computer-aided system for skin lesion segmentation in dermoscopic images is presented. In this method, noise and artifacts are initially reduced by singular value decomposition; afterward, the lesion is decomposed into a frame of bit-plane layers. A specific procedure is implemented for redundant data reduction using simple Boolean operators. Since lesion and background are rarely homogeneous regions, the obtained segmentation region could contain some disjoint areas classified as lesion. To obtain a single zone classified as lesion, avoiding spurious pixels or holes inside the image under test, mathematical morphological techniques are implemented. The performance obtained highlights the validity of the method.
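Bit-plane decomposition, mentioned above, splits each 8-bit intensity into eight binary layers, one per bit position, so Boolean operators can then combine or discard whole layers. A minimal sketch of the per-pixel operation:

```python
def bit_planes(pixel, depth=8):
    """Decompose an 8-bit intensity into its bit-plane values,
    most significant bit first. Applying this per pixel turns one
    grayscale image into `depth` binary layers."""
    return [(pixel >> b) & 1 for b in range(depth - 1, -1, -1)]

def from_planes(bits):
    """Reassemble the intensity from its bit planes (MSB first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value
```

The high-order planes carry most of the lesion-versus-background structure, which is why layer-wise Boolean reduction can drop redundant low-order planes with little loss.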


2021 ◽  
Vol 26 (1) ◽  
pp. 93-102
Author(s):  
Yue Zhang ◽  
Shijie Liu ◽  
Chunlai Li ◽  
Jianyu Wang

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1952
Author(s):  
May Phu Paing ◽  
Supan Tungjitkusolmun ◽  
Toan Huy Bui ◽  
Sarinporn Visitsattapongse ◽  
Chuchart Pintavirooj

Automated segmentation methods are critical for early detection, prompt action, and immediate treatment in reducing the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapped-patch strategy is proposed to reduce the workload of the deep learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation can provide promising performance, with an average Dice similarity coefficient (DSC) of 0.6684, intersection over union (IoU) of 0.5022, and average symmetric surface distance (ASSD) of 0.3932.
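The overlapped-patch strategy tiles the volume with patches whose stride is smaller than the patch size, with a final patch snapped back so it still ends inside the volume. A sketch of the coordinate grid (the 64³ patch and 32³ stride below are illustrative defaults, not the paper's settings):

```python
from itertools import product

def patch_starts(length, patch, stride):
    """Start indices of overlapping patches along one axis; the last
    patch is shifted back so it ends exactly at the volume edge."""
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] + patch < length:
        starts.append(length - patch)
    return starts

def overlapped_patches(shape, patch=(64, 64, 64), stride=(32, 32, 32)):
    """3-D grid of patch origin coordinates for patch-wise 3-D U-Net
    segmentation of a volume of the given shape."""
    axes = [patch_starts(s, p, st) for s, p, st in zip(shape, patch, stride)]
    return list(product(*axes))
```

At inference time, predictions from overlapping patches are typically averaged in the overlap regions, which smooths seams between adjacent patches.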


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5172
Author(s):  
Yuying Dong ◽  
Liejun Wang ◽  
Shuli Cheng ◽  
Yongming Li

Considerable research and surveys indicate that skin lesions are an early symptom of skin cancer, and the segmentation of skin lesions is still a hot research topic. Data augmentation of dermatological datasets in skin lesion segmentation tasks generates a large number of parameters, limiting the application of smart assisted medicine in real life. Hence, this paper proposes an effective feedback attention network (FAC-Net). The network is equipped with a feedback fusion block (FFB) and an attention mechanism block (AMB); through the combination of these two modules, we can obtain richer and more specific feature maps without data augmentation. We conducted numerous experiments on public datasets (ISIC 2018, ISBI 2017, ISBI 2016) and used metrics such as the Jaccard index (JA) and Dice coefficient (DC) to evaluate the segmentation results. On the ISIC 2018 dataset, we obtained a DC of 91.19% and a JA of 83.99%; compared with the baseline network, both main metrics improved by more than 1%. The metrics also improved on the other two datasets. The experiments demonstrate that, without any augmentation of the datasets, our lightweight model can achieve better segmentation performance than most deep learning architectures.
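The two metrics quoted above are not independent: on a single prediction/ground-truth pair they are related by JA = DC / (2 - DC) and DC = 2·JA / (1 + JA). The reported 91.19% DC maps to roughly 83.8% JA under this identity, close to the reported 83.99% (exact equality is only guaranteed per image, not for dataset averages). A small sketch of the identity:

```python
def jaccard_from_dice(dc):
    """Per-image identity: JA = DC / (2 - DC). Both metrics measure
    the same overlap, so on one mask pair each determines the other."""
    return dc / (2.0 - dc)

def dice_from_jaccard(ja):
    """Inverse direction: DC = 2 * JA / (1 + JA)."""
    return 2.0 * ja / (1.0 + ja)
```

A useful sanity check when reading papers: per-image DC/JA pairs far from this curve usually indicate averaged metrics or a reporting inconsistency.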

