Region-to-boundary deep learning model with multi-scale feature fusion for medical image segmentation

2022 ◽  
Vol 71 ◽  
pp. 103165
Author(s):  
Xiaowei Liu ◽  
Lei Yang ◽  
Jianguo Chen ◽  
Siyang Yu ◽  
Keqin Li

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Feng-Ping An ◽  
Jun-e Liu

Medical image segmentation is a key technology for image guidance, so the quality of segmentation directly affects image-guided surgery. Traditional machine learning methods have achieved certain benefits in medical image segmentation, but they suffer from low classification accuracy and poor robustness. Deep learning offers strong generalizability and feature extraction ability, which provides a new way to approach medical image segmentation. However, applying deep learning to medical image segmentation raises two problems: the network structure is not constructed according to the characteristics of medical images, and the generalizability of the resulting model is weak. To address these issues, this paper first adapts a neural network to medical image features by adding cross-layer connections to a traditional convolutional neural network, yielding an optimized convolutional neural network model that can segment medical images using features at two scales simultaneously. To address the generalizability problem, an adaptive distribution function is designed according to the position of each hidden layer, and the activation probability of the neurons in each layer is set accordingly; this yields an adaptive dropout model with improved generalizability. Based on these ideas, this paper proposes a medical image segmentation algorithm built on an optimized convolutional neural network with adaptive dropout. An ultrasonic tomographic image and a lumbar CT image were each segmented with the proposed method. The experimental results show not only that the segmentation quality improves over traditional machine learning and other deep learning methods, but also that the method adapts well to a variety of medical images. This work provides a new perspective for research on medical image segmentation.
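
The abstract names two mechanisms without detailing them: cross-layer connections that let the network use features at two scales simultaneously, and a dropout rate set per hidden layer by an adaptive distribution function. The PyTorch sketch below illustrates both ideas under stated assumptions; the linear dropout schedule, channel widths, and layer counts are illustrative choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

def adaptive_drop_rate(layer_idx, num_layers, p_min=0.1, p_max=0.5):
    # Illustrative "adaptive distribution function": a linear ramp over
    # hidden-layer position. The paper defines its own function; this
    # particular schedule is an assumption.
    return p_min + (p_max - p_min) * layer_idx / max(num_layers - 1, 1)

class TwoScaleSegNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        n_hidden = 2
        self.block1 = nn.Sequential(  # fine-scale features
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(adaptive_drop_rate(0, n_hidden)))
        self.block2 = nn.Sequential(  # coarse-scale features
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(adaptive_drop_rate(1, n_hidden)),
            nn.Upsample(scale_factor=2, mode="bilinear",
                        align_corners=False))
        # Cross-layer connection: concatenate both scales per pixel.
        self.head = nn.Conv2d(32 + 64, num_classes, 1)

    def forward(self, x):
        fine = self.block1(x)
        coarse = self.block2(fine)
        return self.head(torch.cat([fine, coarse], dim=1))
```

Here the coarse branch halves the spatial resolution and is upsampled back so both scales can be concatenated pixel-wise, one common way to realize a cross-layer, two-scale fusion.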


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Guangyi Yang ◽  
Xingyu Ding ◽  
Tian Huang ◽  
Kun Cheng ◽  
Weizheng Jin

Abstract The communications industry has changed remarkably with the development of fifth-generation cellular networks. Images, as an indispensable component of communication, have attracted wide attention, so finding a suitable approach to assess image quality is important. We therefore propose a deep learning model for image quality assessment (IQA) based on an explicit-implicit dual-stream network. Frequency-domain kurtosis features based on the wavelet transform represent the explicit stream, and spatial features extracted by a convolutional neural network (CNN) represent the implicit stream. Together they form an explicit-implicit (EI) parallel deep learning model, the EI-IQA model. EI-IQA builds on VGGNet, which extracts the spatial-domain features; adding the parallel wavelet-kurtosis frequency-domain features allows the number of VGGNet layers to be reduced, lowering both the number of training parameters and the sample requirements. Cross-validation across different databases verifies that this wavelet-kurtosis feature-fusion approach extracts more complete features and generalises better, so the model simulates the human visual perception system more faithfully and its predictions align more closely with human subjective judgements. The source code for the EI-IQA model is available on GitHub at https://github.com/jacob6/EI-IQA.
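
As a concrete illustration of the explicit stream, the sketch below computes kurtosis over wavelet subbands with PyWavelets and SciPy. The choice of db2 wavelet, three decomposition levels, and Fisher kurtosis are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_features(image, wavelet="db2", levels=3):
    """Explicit frequency-domain features for IQA: kurtosis of the
    wavelet detail subbands at each decomposition level.

    A minimal sketch of the idea described in the abstract; the exact
    wavelet, level count, and normalisation in EI-IQA may differ.
    """
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    feats = []
    # coeffs[0] is the approximation; the rest are (cH, cV, cD) tuples.
    for detail in coeffs[1:]:
        for subband in detail:
            feats.append(kurtosis(subband.ravel()))
    return np.asarray(feats, dtype=np.float32)

# In a dual-stream model, this explicit feature vector would be
# concatenated with the implicit CNN (VGG-style) features before the
# final quality-regression head.
```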


Author(s):  
Yujie Chen ◽  
Tengfei Ma ◽  
Xixi Yang ◽  
Jianmin Wang ◽  
Bosheng Song ◽  
...  

Abstract Motivation: Adverse drug–drug interactions (DDIs) are a major cause of morbidity and mortality and a central concern in drug research, so identifying potential DDIs is essential for doctors, patients and society. Existing traditional machine learning models rely heavily on handcrafted features and generalise poorly. Recently, deep learning approaches that automatically learn drug features from the molecular graph or a drug-related network have improved the ability of computational models to predict unknown DDIs. However, previous works either required large amounts of labeled data and considered only the structure or sequence information of drugs, ignoring the relations and topological information between drugs and other biomedical objects (e.g. genes, diseases and pathways), or used a knowledge graph (KG) while ignoring the drug molecular structure. Results: To jointly exploit drug molecular structure and the semantic information about drugs in a knowledge graph for DDI prediction, we propose a multi-scale feature fusion deep learning model named MUFFIN. MUFFIN jointly learns drug representations from both the drug's own structure and a KG rich in biomedical information. We designed a bi-level cross strategy with cross-level and scalar-level components to fuse the multi-modal features. By crossing the features learned from the large-scale KG and the drug molecular graph, MUFFIN alleviates the restriction that limited labeled data places on deep learning models. We evaluated our approach on three datasets and three tasks: binary-class, multi-class and multi-label DDI prediction. MUFFIN outperformed state-of-the-art baselines on all of them. Availability and implementation: The source code and data are available at https://github.com/xzenglab/MUFFIN.
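
The abstract names a bi-level cross strategy with cross-level and scalar-level components but does not define it. The sketch below is one plausible reading: it assumes the cross-level branch takes the outer product of the molecular-graph and KG embeddings and the scalar-level branch their element-wise product. Both choices are assumptions for illustration; MUFFIN's actual fusion may differ.

```python
import torch
import torch.nn as nn

class BiLevelCrossFusion(nn.Module):
    """Sketch of a bi-level cross strategy fusing a molecular-graph
    embedding with a knowledge-graph embedding for one drug."""

    def __init__(self, dim=64, out_dim=128):
        super().__init__()
        self.cross_proj = nn.Linear(dim * dim, out_dim)   # cross-level
        self.scalar_proj = nn.Linear(dim, out_dim)        # scalar-level

    def forward(self, g_emb, kg_emb):
        # Cross-level: pairwise feature interactions via outer product
        # of the two views, shape (batch, dim, dim).
        cross = torch.bmm(g_emb.unsqueeze(2), kg_emb.unsqueeze(1))
        cross = self.cross_proj(cross.flatten(1))
        # Scalar-level: element-wise interaction of the two views.
        scalar = self.scalar_proj(g_emb * kg_emb)
        return torch.relu(cross + scalar)  # fused drug representation

# Usage: fuse = BiLevelCrossFusion()
#        drug_repr = fuse(torch.randn(8, 64), torch.randn(8, 64))
```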


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2107
Author(s):  
Xin Wei ◽  
Huan Wan ◽  
Fanghua Ye ◽  
Weidong Min

In recent years, medical image segmentation (MIS) has made huge breakthroughs thanks to deep learning. However, existing MIS algorithms still suffer from two kinds of uncertainty: (1) uncertainty over which of several plausible segmentation hypotheses is correct and (2) uncertainty about segmentation performance. Both affect the effectiveness of an MIS algorithm and, in turn, the reliability of medical diagnosis. Many studies have addressed the former but ignored the latter. We therefore propose the hierarchical predictable segmentation network (HPS-Net), which consists of a new network structure, a new loss function, and a cooperative training mode. To the best of our knowledge, HPS-Net is the first MIS network that both generates diverse segmentation hypotheses, addressing the first kind of uncertainty, and predicts performance measures for those hypotheses, addressing the second. Extensive experiments were conducted on the LIDC-IDRI and ISIC2018 datasets. HPS-Net achieved the highest Dice score among the benchmark methods, i.e. the best segmentation performance, and the results confirmed that it can effectively predict the true negative rate (TNR) and true positive rate (TPR).
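
For reference, TNR and TPR, the performance measures HPS-Net is said to predict, and the Dice score used for evaluation can be computed from binary masks as follows. This is a plain metric implementation, not HPS-Net's prediction head, which estimates these quantities without ground truth at inference time.

```python
import torch

def tpr_tnr(pred, target, eps=1e-7):
    # True positive rate (sensitivity) and true negative rate
    # (specificity) for binary segmentation masks.
    pred, target = pred.bool(), target.bool()
    tp = (pred & target).sum().float()
    tn = (~pred & ~target).sum().float()
    fp = (pred & ~target).sum().float()
    fn = (~pred & target).sum().float()
    return tp / (tp + fn + eps), tn / (tn + fp + eps)

def dice(pred, target, eps=1e-7):
    # Dice score: the overlap measure used to rank segmentation quality.
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).sum().float()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```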

