Random Fourier Features-Based Deep Learning Improvement with Class Activation Interpretability for Nerve Structure Segmentation

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7741
Author(s):  
Cristian Alfonso Jimenez-Castaño ◽  
Andrés Marino Álvarez-Meza ◽  
Oscar David Aguirre-Ospina ◽  
David Augusto Cárdenas-Peña ◽  
Álvaro Angel Orozco-Gutiérrez

Peripheral nerve blocking (PNB) is a standard procedure to support regional anesthesia. Still, correct localization of the nerve structure is needed to avoid adverse effects, so ultrasound images are used as a guidance aid. In addition, image-based automatic nerve segmentation with deep learning methods has been proposed to mitigate ultrasonography issues such as attenuation and speckle noise. Notwithstanding, complex architectures highlight the region of interest while lacking suitable data interpretability concerning the features learned from raw instances. Here, a kernel-based deep learning enhancement is introduced for nerve structure segmentation. In a nutshell, a random Fourier features-based approach was utilized to complement three well-known semantic segmentation architectures: the fully convolutional network, U-Net, and ResUNet. Moreover, two ultrasound image datasets for PNB were tested. The obtained results show that our kernel-based approach provides better generalization capability in image segmentation-based assessments on different nerve structures. Further, for data interpretability, a semantic segmentation extension of Grad-CAM++ for class-activation mapping was used to reveal the relevant learned features separating nerve from background. Thus, our proposal benefits both straightforward (shallow) and complex (deeper) neural network architectures.
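The random Fourier features idea the enhancement builds on can be sketched generically: draw random frequencies so that inner products of the mapped features approximate an RBF kernel. This is a minimal stand-alone illustration of the RFF approximation, not the paper's segmentation layer:

```python
import numpy as np

def rff_features(X, n_features=256, gamma=1.0, seed=0):
    """Random Fourier feature map z(x) = sqrt(2/D) * cos(Wx + b),
    whose inner products approximate the RBF kernel exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check: Z @ Z.T should approximate the exact RBF Gram matrix.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, n_features=4096)
approx = Z @ Z.T
exact = np.exp(-1.0 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

With enough random features the approximation error shrinks as 1/sqrt(D), which is why such a layer can stand in for an exact kernel machine inside a network.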

Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 66
Author(s):  
Yung-Hsien Hsieh ◽  
Fang-Rong Hsu ◽  
Seng-Tong Dai ◽  
Hsin-Ya Huang ◽  
Dar-Ren Chen ◽  
...  

In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon from breast ultrasound images to facilitate clinical classification of malignant tumors. Among 378 images (204 benign and 174 malignant images) from 189 patients (102 benign breast tumor patients and 87 malignant patients), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88%, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, showing better performance than similar semantic segmentation networks, SegNet and U-Net, on the same dataset. Our results suggest that the utilization of a deep learning network in combination with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
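The reported mean IU is the average of per-class intersection-over-union scores. A minimal sketch of how such scores are computed from flattened label maps (toy labels, not the study's data):

```python
import numpy as np

def iou_scores(y_true, y_pred, n_classes):
    """Per-class intersection-over-union for flattened segmentation labels."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        ious.append(inter / union if union else float("nan"))
    return np.array(ious)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
ious = iou_scores(y_true, y_pred, 3)
mean_iou = np.nanmean(ious)   # classes absent from both maps are excluded
```

A weighted IoU (as also reported above) would additionally weight each class's score by its pixel frequency instead of averaging uniformly.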


Author(s):  
R. Murugan

Segmentation of retinal parts has been recognized as a key component in both ophthalmological and cardiovascular disease analysis. Segmenting the parts of retinal images (vessels, optic disc, and macula) contributes to the diagnostic outcome. However, manual segmentation of retinal parts is time-consuming and tedious work, and it also requires professional skills. This chapter proposes a supervised method to segment blood vessels using deep learning. More specifically, the proposed method applies a fully convolutional network, which is commonly used to perform semantic segmentation tasks, with transfer learning. The convolutional neural network has become a powerful tool for several computer vision tasks. Recently, medical image analysis groups across the world have been rapidly entering this field, applying convolutional neural networks and other deep learning methodologies to a wide variety of applications, with remarkable results emerging constantly.


2021 ◽  
pp. 29-42
Author(s):  
Adnan Mohsin Abdulazeez

With the development of technology and smart devices in the medical field, computer systems have become an essential part of this development. One of the learning methods is deep learning (DL), a branch of machine learning (ML). The deep learning approach has been used in this field because it is one of the modern methods for obtaining accurate results, and among its algorithms used in this field are convolutional neural networks (CNN) and recurrent neural networks (RNN). In this paper, we review what researchers have done to solve fetal-imaging problems, then summarize and carefully discuss the applications across the tasks identified for segmentation and classification of ultrasound images. Finally, this study discusses the potential challenges and directions for applying deep learning in ultrasound image analysis.


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0251899
Author(s):  
Samir M. Badawy ◽  
Abd El-Naser A. Mohamed ◽  
Alaa A. Hefnawy ◽  
Hassan E. Zidan ◽  
Mohammed T. GadAllah ◽  
...  

Computer aided diagnosis (CAD) of biomedical images assists physicians with fast, facilitated tissue characterization. A scheme combining fuzzy logic (FL) and deep learning (DL) for automatic semantic segmentation (SS) of tumors in breast ultrasound (BUS) images is proposed. The proposed scheme consists of two steps: the first is FL-based preprocessing, and the second is convolutional neural network (CNN) based SS. Eight well-known CNN-based SS models were utilized in the study. The scheme was studied on a dataset of 400 cancerous BUS images and their corresponding 400 ground truth images. The SS process was applied in two modes: batch and one-by-one image processing. Three quantitative performance evaluation metrics were utilized: global accuracy (GA), mean Jaccard index (mean intersection over union (IoU)), and mean BF (Boundary F1) score. In the batch processing mode, the quantitative metrics' average results over the eight CNN-based SS models on the 400 cancerous BUS images were: 95.45% GA instead of 86.08% without the fuzzy preprocessing step, 78.70% mean IoU instead of 49.61%, and 68.08% mean BF score instead of 42.63%. Moreover, the resulting segmented images showed tumor regions more accurately than with CNN-based SS alone. In the one-by-one image processing mode, however, there was no enhancement, either qualitatively or quantitatively. So the proposed scheme may be helpful for enhancing automatic SS of tumors in BUS images only when batch processing is needed; applying it in one-by-one image mode will disrupt segmentation efficiency. The proposed batch processing scheme may be generalized for enhanced CNN-based SS of a targeted region of interest (ROI) in any batch of digital images. A modified small dataset is available: https://www.kaggle.com/mohammedtgadallah/mt-small-dataset (S1 Data).
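The abstract does not specify the FL-based preprocessing beyond "fuzzy logic". One classical option in this family is the fuzzy intensification (INT) operator, which maps intensities to memberships and stretches contrast there; the sketch below shows that generic operator, not necessarily the authors' exact preprocessing:

```python
import numpy as np

def fuzzy_enhance(img):
    """Fuzzy intensification (INT operator): fuzzify intensities to [0, 1],
    push memberships below 0.5 darker and above 0.5 brighter, return result."""
    mu = (img - img.min()) / (np.ptp(img) + 1e-12)  # fuzzification
    hi = mu > 0.5
    mu[~hi] = 2 * mu[~hi] ** 2                      # darken low memberships
    mu[hi] = 1 - 2 * (1 - mu[hi]) ** 2              # brighten high memberships
    return mu                                       # enhanced image in [0, 1]

img = np.array([[0.0, 64, 128, 192, 255]])
out = fuzzy_enhance(img)
```

The net effect is a smooth S-curve contrast stretch, which can sharpen the tumor/background boundary before the CNN sees the image.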


2021 ◽  
Vol 13 (24) ◽  
pp. 5100
Author(s):  
Teerapong Panboonyuen ◽  
Kulsawasd Jitkajornwanich ◽  
Siam Lawawirojwong ◽  
Panu Srestasathiern ◽  
Peerapon Vateekul

Transformers have demonstrated remarkable accomplishments in several natural language processing (NLP) tasks as well as image processing tasks. Herein, we present a deep-learning (DL) model that improves the semantic segmentation network in two ways. First, utilizing the pretrained Swin Transformer (SwinTF) under Vision Transformer (ViT) as a backbone, the model is adapted to downstream tasks by joining task layers upon the pretrained encoder. Second, three decoder designs, U-Net, the pyramid scene parsing (PSP) network, and the feature pyramid network (FPN), are applied to our DL network to perform pixel-level segmentation. The results are compared with other state-of-the-art (SOTA) image labeling methods, such as the global convolutional network (GCN) and ViT. Extensive experiments show that our Swin Transformer (SwinTF) with decoder designs reached a new state of the art on the Thailand Isan Landsat-8 corpus (89.8% F1 score) and the Thailand North Landsat-8 corpus (63.12% F1 score), and competitive results on ISPRS Vaihingen. Moreover, both of our best-proposed methods (SwinTF-PSP and SwinTF-FPN) even outperformed SwinTF with supervised pre-training ViT on ImageNet-1K in the Thailand Landsat-8 and ISPRS Vaihingen corpora.
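Among the decoder designs, the PSP module aggregates context by average-pooling the encoder's feature map at several grid sizes and upsampling each level back to full resolution. A single-channel numpy sketch of that pooling idea (assuming H and W divisible by each bin size; this is not the actual network code):

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """PSP-style context module: average-pool an (H, W) feature map over
    b x b grids, upsample each grid back to (H, W), stack with the input."""
    H, W = feat.shape
    levels = [feat]
    for b in bins:
        # block-average: rows/cols grouped into b x b cells
        pooled = feat.reshape(b, H // b, b, W // b).mean(axis=(1, 3))
        levels.append(np.repeat(np.repeat(pooled, H // b, 0), W // b, 1))
    return np.stack(levels)  # shape (1 + len(bins), H, W)

feat = np.arange(16.0).reshape(4, 4)
ctx = pyramid_pool(feat)
```

In the real network each level would also pass through a 1x1 convolution before concatenation; here only the multi-scale pooling structure is shown.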


2018 ◽  
Vol 4 (1) ◽  
pp. 71-74 ◽  
Author(s):  
Jannis Hagenah ◽  
Mattias Heinrich ◽  
Floris Ernst

Abstract Pre-operative planning of valve-sparing aortic root reconstruction relies on the automatic discrimination of healthy and pathologically dilated aortic roots. The basis of this classification is features extracted from 3D ultrasound images. In previously published approaches, handcrafted features showed limited classification accuracy. However, feature learning is insufficient due to the small data sets available for this specific problem. In this work, we propose transfer learning to use deep learning on these small data sets. For this purpose, we used the convolutional layers of the pretrained deep neural network VGG16 as a feature extractor. To simplify the problem, we only took two prominent horizontal slices through the aortic root, the coaptation plane and the commissure plane, into account, stitching the features of both images together and training a Random Forest classifier on the resulting feature vectors. We evaluated this method on a data set of 48 images (24 healthy, 24 dilated) using 10-fold cross validation. Using the deep learned features, we reached a classification accuracy of 84%, which clearly outperformed the handcrafted features (71% accuracy). Even though the VGG16 network was trained on RGB photos and for different classification tasks, the learned features are still relevant for ultrasound image analysis of aortic root pathology identification. Hence, transfer learning makes deep learning possible even on very small ultrasound data sets.
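The classification stage, stitching the feature vectors of the two planes and training a Random Forest, can be sketched as follows. The feature vectors here are random stand-ins for VGG16 activations, and all shapes are illustrative, not the study's actual features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 24  # 24 healthy + 24 dilated, as in the abstract's data set size
# Hypothetical per-plane features (512-dim stand-ins for VGG16 conv features);
# a mean shift separates the toy "dilated" class from "healthy".
f_coaptation = np.vstack([rng.normal(0, 1, (n, 512)), rng.normal(1, 1, (n, 512))])
f_commissure = np.vstack([rng.normal(0, 1, (n, 512)), rng.normal(1, 1, (n, 512))])

X = np.hstack([f_coaptation, f_commissure])  # stitch both planes into one vector
y = np.array([0] * n + [1] * n)              # 0 = healthy, 1 = dilated

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

In practice the evaluation would use 10-fold cross validation as described, not training accuracy; this sketch only shows the stitch-then-classify wiring.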


2017 ◽  
Vol 37 (6) ◽  
pp. 944-952 ◽  
Author(s):  
Po-Heng Chen ◽  
Kai-Sheng Hsieh ◽  
Chih-Chung Huang

Abstract Ultrasound examinations are a standard procedure in the clinical diagnosis of many diseases. However, the efficacy of an ultrasound examination is highly dependent on the skill and experience of the operator, which has prompted proposals for ultrasound simulation systems to facilitate training and education in hospitals and medical schools. The key technology of the medical ultrasound simulation system is the probe tracking method that is used to determine the position and inclination angle of the sham probe, since this information is used to display the ultrasound images in real time. This study investigated a novel acoustic tracking approach for an ultrasound simulation system that exhibits high sensitivity and is cost-effective. Five air-coupled ultrasound elements are arranged as a 1D array in front of a sham probe for transmitting the acoustic signals, and a 5 × 5 2D array of receiving elements is used to receive the acoustic signals from the moving transmitting elements. Since the patterns of the received signals can differ for different positions and angles of the moving probe, the probe can be tracked precisely by the acoustic tracking approach. After the probe position has been determined by the system, the corresponding ultrasound image is immediately displayed on the screen. The system performance was verified by scanning three different subjects as image databases: a simple commercial phantom, a complicated self-made phantom, and a porcine heart. The experimental results indicated that the tracking and angle accuracies of the presented acoustic tracking approach were 0.7 mm and 0.5°, respectively. The performance of the acoustic tracking approach is compared with those of other tracking technologies.
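The tracking principle, matching the received 5 x 5 signal pattern against reference patterns recorded at known probe poses, can be illustrated with a nearest-pattern lookup. Everything below (patterns, positions) is synthetic, since the abstract does not detail the actual matching method:

```python
import numpy as np

def track_probe(received, database, positions):
    """Return the calibrated position whose stored 5x5 signal pattern is
    closest (Frobenius norm) to the currently received pattern."""
    dists = np.linalg.norm(database - received, axis=(1, 2))
    return positions[np.argmin(dists)]

rng = np.random.default_rng(3)
positions = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # mm, hypothetical
database = rng.normal(size=(3, 5, 5))     # one reference pattern per pose
noisy = database[1] + rng.normal(0, 0.05, (5, 5))  # probe near pose 1
est = track_probe(noisy, database, positions)
```

A real system would interpolate between calibrated poses to reach the sub-millimeter accuracy reported; the lookup shows only the pattern-matching core.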


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Zhemin Zhuang ◽  
Zengbiao Yang ◽  
Shuxin Zhuang ◽  
Alex Noel Joseph Raj ◽  
Ye Yuan ◽  
...  

Breast ultrasound examination is a routine, fast, and safe method for clinical diagnosis of breast tumors. In this paper, a classification method based on multi-features and support vector machines is proposed for breast tumor diagnosis. The multi-features are composed of characteristic features and deep learning features of breast tumor images. Initially, an improved level set algorithm was used to segment the lesion in breast ultrasound images, which provided an accurate calculation of characteristic features such as orientation, edge indistinctness, characteristics of the posterior shadowing region, and shape complexity. Simultaneously, we used transfer learning to construct a pretrained model as a feature extractor to extract the deep learning features of breast ultrasound images. Finally, the multi-features were fused and fed to a support vector machine for the further classification of breast ultrasound images. The proposed model, when tested on unknown samples, provided a classification accuracy of 92.5% for cancerous and noncancerous tumors.
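The fusion-and-classify step can be sketched generically: concatenate the handcrafted and deep feature vectors and train an SVM on the result. The features below are random stand-ins with an artificial class separation, not features from breast ultrasound images:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 40
# Hypothetical stand-ins: 4 handcrafted lesion features, 64 deep features.
hand = np.vstack([rng.normal(0, 1, (n, 4)), rng.normal(2.0, 1, (n, 4))])
deep = np.vstack([rng.normal(0, 1, (n, 64)), rng.normal(1.0, 1, (n, 64))])

X = np.hstack([hand, deep])        # fuse multi-features into one vector
y = np.array([0] * n + [1] * n)    # 0 = benign, 1 = malignant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
acc = clf.score(X, y)
```

Scaling before the RBF SVM matters here because handcrafted and deep features generally live on very different numeric ranges.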


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Ji Li ◽  
Dan Liu ◽  
Xiaofeng Qing ◽  
Lanlan Yu ◽  
Huizhen Xiang

This study aimed to enhance and detect the characteristics of three-dimensional transvaginal ultrasound images based on a partial differential algorithm and the HSegNet deep learning algorithm. Thereby, the effect of quantitative parameter values of the optimized three-dimensional ultrasound images on the diagnosis and evaluation of intrauterine adhesions was analyzed. Specifically, 75 hospital patients with suspected intrauterine adhesions who underwent hysteroscopic diagnosis were selected as the research subjects. The three-dimensional transvaginal ultrasound images were enhanced and optimized by the partial differential equation algorithm and processed by the deep learning algorithm. Subsequently, three-dimensional transvaginal ultrasound examinations were performed on the study subjects that met the standards. The March classification method was used to classify the patients with intrauterine adhesions. Finally, the three-dimensional transvaginal ultrasound results were compared with the diagnosis results from hysteroscopic surgery. The results showed that the HSegNet algorithm model realized automatic labeling of intrauterine adhesions in the transvaginal ultrasound images with a final accuracy coefficient of 97.3%, suggesting that three-dimensional transvaginal ultrasound diagnosis based on deep learning is efficient and accurate. The accuracy of the three-dimensional transvaginal ultrasound was 97.14%, the sensitivity was 96.6%, and the specificity was 72%. In conclusion, the three-dimensional transvaginal examination can effectively improve the diagnostic efficiency for intrauterine adhesions, providing theoretical support for the subsequent diagnosis and grading of intrauterine adhesions.
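The reported accuracy, sensitivity, and specificity follow the standard confusion-matrix definitions; a small sketch with hypothetical counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true positive rate), and specificity
    (true negative rate) from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return acc, sens, spec

# Illustrative counts only, not taken from the study.
acc, sens, spec = diagnostic_metrics(tp=45, fp=5, tn=40, fn=10)
```

A specificity well below the sensitivity, as reported above, indicates the method misses few adhesions but produces more false positives among healthy subjects.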


Author(s):  
T. Peters ◽  
C. Brenner ◽  
M. Song

Abstract. The goal of this paper is to use transfer learning for semi-supervised semantic segmentation in 2D images: given a pretrained deep convolutional network (DCNN), our aim is to adapt it to a new camera-sensor system by enforcing predictions to be consistent for the same object in space. This is enabled by projecting 3D object points into multi-view 2D images. Since every 3D object point is usually mapped to a number of 2D images, each of which undergoes a pixelwise classification using the pretrained DCNN, we obtain a number of predictions (labels) for the same object point. This makes it possible to detect and correct outlier predictions. Ultimately, we retrain the DCNN on the corrected dataset in order to adapt the network to the new input data. We demonstrate the effectiveness of our approach on a mobile mapping dataset containing over 10’000 images and more than 1 billion 3D points. Moreover, we manually annotated a subset of the mobile mapping images and show that we were able to raise the mean intersection over union (mIoU) by approximately 10% with Deeplabv3+ using our approach.
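The outlier-correction idea, where the several 2D views of a 3D point each contribute a label, reduces at its simplest to a majority vote per point. A minimal sketch with made-up point IDs and labels (the paper's actual correction scheme may weight or filter views differently):

```python
from collections import Counter

def correct_labels(point_predictions):
    """For each 3D point, take the majority label over all 2D-view
    predictions; disagreeing views are the outliers to relabel."""
    corrected = {}
    for point_id, labels in point_predictions.items():
        corrected[point_id] = Counter(labels).most_common(1)[0][0]
    return corrected

# Three views predicted labels for each of two 3D points.
preds = {"p1": ["road", "road", "car"], "p2": ["tree", "tree", "tree"]}
fixed = correct_labels(preds)
```

Retraining the DCNN on labels projected back from `fixed` into each image is what adapts the network to the new sensor, as described above.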

