segmentation boundary
Recently Published Documents

TOTAL DOCUMENTS: 18 (FIVE YEARS: 10)
H-INDEX: 4 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Jian Yin ◽  
Zhibo Zhou ◽  
Shaohua Xu ◽  
Ruiping Yang ◽  
Kun Liu

To address the problems of weak target morphological features, inaccurate detection and unclear boundaries of small-target regions, and overlapping boundaries between multiple targets in complex multitarget image segmentation, a generative adversarial network fused with an attention mechanism (AM-GAN) is proposed, combining the image segmentation mechanism of generative adversarial networks with the feature enhancement of nonlocal attention. The generative network is composed of a residual network and a nonlocal attention module; it uses the feature extraction and multiscale fusion mechanism of the residual network, together with the feature enhancement and global information fusion ability of nonlocal spatial-channel dual attention, to enhance target features in the detection area and improve the continuity and clarity of the segmentation boundary. The adversarial network is a fully convolutional network that penalizes the loss of information in small-target regions by judging the authenticity of predicted versus label segmentations, improving the model's ability to detect small targets and the accuracy of multitarget segmentation. AM-GAN exploits the GAN's inherent ability to reconstruct and repair high-resolution images, together with the global receptive field of nonlocal attention for strengthening detail features, to automatically learn to focus on target structures of different shapes and sizes, highlight salient features useful for specific tasks, reduce the loss of image detail, improve the accuracy of small-target detection, and refine the segmentation boundaries of multiple targets. Medical MRI abdominal image segmentation is used as a verification experiment, in which multiple targets such as the liver, left/right kidneys, and spleen are segmented and abnormal tissue is detected. On a small and unbalanced dataset, class pixel accuracy reaches 87.37%, intersection over union 92.42%, and the average Dice coefficient 93%. Compared with the other methods in the experiment, segmentation precision and accuracy are greatly improved, which shows that the proposed method is well suited to typical multitarget image segmentation problems such as small-target feature detection, boundary overlap, and offset deformation.
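
As an illustration of the nonlocal spatial attention described above, the sketch below implements a standard non-local (self-attention) block in PyTorch. The channel sizes, the residual connection placement, and the `NonLocalBlock` name are illustrative assumptions, not the authors' AM-GAN implementation, which additionally uses a channel-attention branch.

```python
# Minimal sketch of a non-local (spatial self-attention) block of the kind the
# AM-GAN generator is described as using; channel sizes are illustrative.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value
        self.out = nn.Conv2d(inner, channels, kernel_size=1)    # restore channels

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                      # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) global affinity
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection

# Usage: enhance a feature map produced by a residual backbone.
feats = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(feats).shape)   # torch.Size([1, 64, 32, 32])
```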


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qianqian Chen ◽  
Xiaohong Wang ◽  
Rong Ding ◽  
Ziyao Wang

This study applied the decision tree algorithm to segment dual-source computed tomography (CT) images in order to explore the efficacy of docetaxel combined with fluorouracil in gastric cancer patients undergoing chemotherapy. Ninety-eight patients with gastric cancer treated in the hospital were selected as research subjects. The decision tree algorithm was used to segment their dual-source CT images: the tree is built from the feature ring and the segmentation position, and the machine inductively learns from it to extract CT image features and obtain the optimal segmentation boundary. The observation group was treated with docetaxel combined with fluorouracil, and the control group with docetaxel combined with tegafur-gimeracil-oteracil potassium capsules. The general data of the two groups were comparable, with no statistically significant difference (P > 0.05). The two groups were compared for clinical efficacy, physical status, KPS score, improvement rate, and adverse drug reactions after treatment. The improvement rate of physical fitness was 38.78% in the observation group versus 18.37% in the control group, and the total effective rate was 42.85% versus 36.73%; the curative effect and improvement rate of physical fitness in the observation group were thus significantly better than in the control group (P < 0.05). In conclusion, the decision tree algorithm demonstrates strong capability in CT image feature extraction and yields an accurate segmentation boundary, and docetaxel combined with fluorouracil is more effective than docetaxel combined with tegafur-gimeracil-oteracil potassium capsules.
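
A minimal sketch of decision-tree pixel classification for CT segmentation is given below. The synthetic image and the simple intensity/local-mean features are assumptions made for illustration; they do not reproduce the paper's feature-ring construction.

```python
# Hedged sketch: a decision tree classifying CT pixels into lesion/background
# from simple per-pixel features, then reading off the segmentation boundary.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
image = rng.normal(size=(128, 128))
image[40:80, 40:80] += 2.0                      # synthetic "lesion" region
labels = np.zeros_like(image, dtype=int)
labels[40:80, 40:80] = 1

# Per-pixel features: raw intensity and a local mean for context.
features = np.stack([image.ravel(),
                     uniform_filter(image, size=5).ravel()], axis=1)

tree = DecisionTreeClassifier(max_depth=5, random_state=0)
tree.fit(features, labels.ravel())
segmentation = tree.predict(features).reshape(image.shape)

# The segmentation boundary lies where the predicted label changes.
transitions = np.abs(np.diff(segmentation, axis=0)).sum()
print("boundary pixel transitions (row direction):", transitions)
```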


2021 ◽  
Author(s):  
Richard Rzeszutek

This thesis proposes an extension to the Random Walks assisted segmentation algorithm that allows it to operate on a scale-space. Scale-space is a multi-resolution signal analysis method that retains all of the structures in an image through progressive blurring with a Gaussian kernel. The input of the algorithm is set up so that Random Walks operates on the scale-space rather than the image itself; the finer scales retain the detail in the image while the coarser scales filter out the noise. This augmented algorithm, referred to as "Scale-Space Random Walks" (SSRW), is shown on both artificial and natural images to be superior to Random Walks when an image has been corrupted by noise. It is also shown that SSRW can improve the segmentation when texture, such as the artificial edges created by JPEG compression, has made the segmentation boundary less accurate. This thesis also presents a practical application of SSRW in an assisted rotoscoping tool. The tool is implemented as a plugin for a popular commercial compositing application and leverages the power of a Graphics Processing Unit (GPU) to bring the algorithm's performance close to real time. Issues such as memory handling, user input, and performing vector-matrix algebra are addressed.
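
The Gaussian scale-space that SSRW operates on can be sketched as progressively blurred copies of the input image; the number of levels and the doubling sigma schedule below are illustrative assumptions.

```python
# Sketch of a Gaussian scale-space: the original image plus progressively
# blurred copies, finest scale first. Levels and sigmas are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, n_scales=4, base_sigma=1.0):
    """Return a list of images, from the original to the coarsest blur."""
    scales = [image.astype(float)]
    for level in range(1, n_scales):
        sigma = base_sigma * (2 ** (level - 1))   # blur doubles per level
        scales.append(gaussian_filter(image.astype(float), sigma=sigma))
    return scales

rng = np.random.default_rng(1)
noisy = rng.normal(size=(64, 64))
stack = gaussian_scale_space(noisy)
print([round(s.std(), 3) for s in stack])   # variation shrinks at coarser scales
```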


2021 ◽  
Vol 11 (10) ◽  
pp. 4528
Author(s):  
Tran-Dac-Thinh Phan ◽  
Soo-Hyung Kim ◽  
Hyung-Jeong Yang ◽  
Guee-Sang Lee

Skin lesion segmentation is one of the pivotal stages in the diagnosis of melanoma. Many methods have been proposed but, to date, this is still a challenging task. Variations in size and color, fuzzy boundaries, and the low contrast between lesion and normal skin lead to deficient or excessive delineation of lesions, or even inaccurate lesion localization. In this paper, to counter these problems, we introduce a deep learning method based on the U-Net architecture that performs three tasks: lesion segmentation, boundary distance map regression, and contour detection. The two auxiliary tasks provide boundary and shape awareness to the main encoder, which improves object localization and pixel-wise classification in the transition region from lesion tissue to healthy tissue. Moreover, to handle the large variation in size, Selective Kernel modules placed in the skip connections transfer multi-receptive-field features from the encoder to the decoder. Our method is evaluated on three publicly available datasets: ISBI 2016, ISBI 2017, and PH2. Extensive experimental results show the effectiveness of the proposed method for skin lesion segmentation.
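
The boundary distance map used as the auxiliary regression target can be sketched with a Euclidean distance transform, as below; the normalization to [0, 1] is an illustrative assumption rather than the authors' exact target definition.

```python
# Sketch of a boundary distance-map regression target: the distance of every
# pixel to the lesion boundary, computed with a Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_distance_map(mask):
    """Distance of each pixel to the lesion boundary for a binary mask."""
    inside = distance_transform_edt(mask)        # distance within the lesion
    outside = distance_transform_edt(1 - mask)   # distance within background
    dist = np.where(mask > 0, inside, outside)
    return dist / max(dist.max(), 1e-8)          # scale to [0, 1]

mask = np.zeros((96, 96), dtype=np.uint8)
mask[30:70, 25:60] = 1                           # toy lesion
dmap = boundary_distance_map(mask)
print(dmap.min(), dmap.max())   # small near the boundary, 1.0 at the farthest pixel
```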


2021 ◽  
Vol 11 (2) ◽  
pp. 337-344
Author(s):  
Yao Zeng ◽  
Huanhuan Dai

The liver is the largest solid organ in the human abdominal cavity. Its structure is complex, it is rich in blood vessels, and the high incidence of liver disease seriously threatens human health and life. In this study, an automatic segmentation method based on a deep convolutional neural network is proposed. Image data blocks of different sizes are extracted as training data, different network structures are designed, and features are learned automatically to obtain a segmentation of the tumor. Secondly, in order to further refine the segmentation boundary, we establish a multi-region segmentation model with region mutual-exclusion constraints. The model combines image grayscale, gradient, and prior probability information, and overcomes the difficulty of assigning boundary points to regions when boundaries are blurred and regions adhere to each other. Finally, the model is solved quickly using a time-implicit multiphase level set. Compared with traditional multi-organ segmentation methods, this method requires neither registration nor model initialization. Experimental results show that the model can segment the liver, kidneys, and spleen quickly and effectively, with segmentation accuracy reaching the level of current state-of-the-art methods.
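
A minimal sketch of extracting image data blocks of different sizes as training data is shown below; the patch sizes and sampled centres are illustrative assumptions, not the authors' sampling protocol.

```python
# Hedged sketch of multi-size training-patch extraction: square blocks of
# several sizes centred on sampled pixels of a CT slice.
import numpy as np

def extract_patches(image, centers, sizes=(17, 33)):
    """Return, for each center, one patch per requested (odd) size."""
    patches = []
    for (r, c) in centers:
        per_center = []
        for s in sizes:
            half = s // 2
            per_center.append(image[r - half:r + half + 1, c - half:c + half + 1])
        patches.append(per_center)
    return patches

rng = np.random.default_rng(2)
ct_slice = rng.normal(size=(256, 256))
centers = [(64, 64), (128, 200)]              # illustrative sample locations
multi_scale = extract_patches(ct_slice, centers)
print([p.shape for p in multi_scale[0]])      # [(17, 17), (33, 33)]
```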


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. U109-U119
Author(s):  
Pengyu Yuan ◽  
Shirui Wang ◽  
Wenyi Hu ◽  
Xuqing Wu ◽  
Jiefu Chen ◽  
...  

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term average/long-term average method, perform poorly when the signal-to-noise ratio is low or near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem accompanied by a novel postprocessing approach that identifies picks along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight adaptation approach for generalizing to new data sets. In particular, we evaluate the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picking. Finally, we adopt a simple transfer learning scheme and test its robustness via a weight adaptation approach that maintains the picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow compared with existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting segmentation errors and that the overall workflow is efficient in minimizing human intervention for the first-arrival picking task.
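
A simple way to read picks off a segmentation boundary is sketched below: for each trace, take the first time sample labelled as signal. This argmax rule is only a stand-in for, and does not reproduce, the paper's RNN-based picking step.

```python
# Hedged sketch: turning a binary segmentation of a shot gather (0 above the
# first break, 1 below) into per-trace picks along the segmentation boundary.
import numpy as np

def picks_from_mask(mask):
    """For each trace (column), return the first time sample labelled 1."""
    has_signal = mask.any(axis=0)
    first_one = mask.argmax(axis=0)              # index of first 1 per column
    return np.where(has_signal, first_one, -1)   # -1 marks traces with no pick

# Toy gather: 200 time samples x 5 traces, first break deepening with offset.
mask = np.zeros((200, 5), dtype=np.uint8)
for trace, onset in enumerate([50, 60, 72, 85, 99]):
    mask[onset:, trace] = 1

print(picks_from_mask(mask))   # [50 60 72 85 99]
```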


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Shuai Yang ◽  
Ruikun Wang ◽  
Wenjie Zhao ◽  
Yongzhen Ke

Teeth segmentation is a crucial technological component of the digital dentistry system. Live-wire segmentation has two limitations: (1) computing the wire that serves as the segmentation boundary is time-consuming, and (2) a great deal of user interaction is inevitable for a dental mesh. To overcome these disadvantages, a 3D intelligent-scissors tool for dental mesh segmentation based on live-wire is presented. Two tensor-based anisotropic metrics that make the wire lie along valleys and ridges are defined, and a time-saving anisotropic Dijkstra algorithm is adopted. In addition, to improve the smoothness of the path traced back by traditional Dijkstra, a 3D midpoint smoothing algorithm is proposed. Experiments show that the method is effective for dental mesh segmentation and that the proposed tool performs well in terms of time complexity and interactivity.
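
The Dijkstra step underlying live-wire scissors can be sketched as below: the shortest "wire" between two anchor vertices of a mesh graph. The scalar edge weights stand in for the paper's tensor-based anisotropic metric, which is not reproduced here.

```python
# Hedged sketch of live-wire path finding: Dijkstra shortest path between two
# anchor vertices on a graph; weights are plain scalars for illustration.
import heapq

def dijkstra_path(adjacency, start, goal):
    """adjacency: {vertex: [(neighbour, weight), ...]} -> list of vertices."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == goal:
            break
        if d > dist.get(v, float("inf")):
            continue                      # stale heap entry
        for u, w in adjacency.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u], prev[u] = nd, v
                heapq.heappush(heap, (nd, u))
    path, v = [goal], goal
    while v != start:                     # backtrack from goal to start
        v = prev[v]
        path.append(v)
    return path[::-1]

# Tiny mesh-like graph: the wire should follow the low-cost "valley" edges.
graph = {0: [(1, 0.2), (2, 1.5)], 1: [(3, 0.2)], 2: [(3, 0.1)], 3: []}
print(dijkstra_path(graph, 0, 3))   # [0, 1, 3]
```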


2019 ◽  
Vol 11 (21) ◽  
pp. 2505 ◽  
Author(s):  
Crommelinck ◽  
Koeva ◽  
Yang ◽  
Vosselman

Cadastral boundaries are often demarcated by objects that are visible in remote sensing imagery. Indirect surveying relies on the delineation of visible parcel boundaries from such images. Despite advances in automated detection and localization of objects from images, indirect surveying is rarely automated and relies on manual on-screen delineation. We have previously introduced a boundary delineation workflow, comprising image segmentation, boundary classification, and interactive delineation, that we applied to Unmanned Aerial Vehicle (UAV) data to delineate roads. In this study, we improve each of these steps. For image segmentation, we remove the need to reduce the image resolution and we limit over-segmentation by reducing the number of segment lines by 80% through filtering. For boundary classification, we show how a Convolutional Neural Network (CNN) can be used for boundary line classification, eliminating the previous need for Random Forest (RF) feature generation and achieving 71% accuracy. For interactive delineation, we develop additional and more intuitive delineation functionalities that cover more application cases. We test our approach on more varied and larger data sets by applying it to UAV and aerial imagery of 0.02–0.25 m resolution from Kenya, Rwanda and Ethiopia. We show that it is more effective in terms of clicks and time than manual delineation for parcels surrounded by visible boundaries. The strongest advantages are obtained for rural scenes delineated from aerial imagery, where delineation requires 38% less time and 80% fewer clicks per parcel than manual delineation.
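
A minimal sketch of CNN-based boundary line classification on image patches is given below. The architecture, patch size, and the hypothetical `BoundaryLineCNN` class are illustrative assumptions, not the published workflow's network.

```python
# Hedged sketch: a small CNN classifying image patches sampled along candidate
# segment lines as "boundary" vs "not boundary".
import torch
import torch.nn as nn

class BoundaryLineCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)   # boundary / not boundary

    def forward(self, patch):                        # patch: (B, 3, 32, 32)
        x = self.features(patch)
        return self.classifier(x.flatten(1))

patches = torch.randn(4, 3, 32, 32)                  # RGB patches along segment lines
logits = BoundaryLineCNN()(patches)
print(logits.shape)                                   # torch.Size([4, 2])
```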


2019 ◽  
Vol 36 (6) ◽  
pp. 1913-1933
Author(s):  
Amitava Choudhury ◽  
Snehanshu Pal ◽  
Ruchira Naskar ◽  
Amitava Basumallick

Purpose – The purpose of this paper is to develop an automated phase segmentation model for complex microstructures. The mechanical and physical properties of metals and alloys are influenced by their microstructure, and therefore the investigation of microstructure is essential. The coexistence of random or sometimes patterned distributions of different microstructural features such as phases, grains and defects makes a microstructure highly complex, and accordingly identification or recognition of individual phases, grains and defects within a microstructure is difficult.
Design/methodology/approach – In this perspective, computer vision and image processing techniques are effective in helping to understand and properly interpret microscopic images. Microstructure-based image processing mainly focuses on image segmentation, boundary detection and grain size approximation. In this paper, a new approach is presented for automated phase segmentation from 2D microstructure images. The benefit of the proposed work is to identify the dominant phase in complex microstructure images. The proposed model is trained and tested with 373 different ultra-high carbon steel (UHCS) microscopic images.
Findings – Sobel and watershed transformation algorithms are used for identification of dominating phases, and a deep learning model is used for identification of the phase class from microstructural images.
Originality/value – For the first time, the authors have implemented edge detection followed by watershed segmentation and deep learning (convolutional neural network) to identify phases of the UHCS microstructure.
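
A minimal sketch of the Sobel-plus-watershed step is given below, using scikit-image on a synthetic two-phase image; the marker thresholds are illustrative assumptions rather than the paper's marker-generation procedure.

```python
# Hedged sketch: Sobel edge detection followed by marker-based watershed
# segmentation of a synthetic two-phase micrograph.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(3)
micrograph = np.zeros((128, 128))
micrograph[:, 64:] = 1.0                              # two synthetic "phases"
micrograph += rng.normal(scale=0.1, size=micrograph.shape)

edges = sobel(micrograph)                             # Sobel gradient magnitude
markers = np.zeros(micrograph.shape, dtype=int)
markers[micrograph < 0.2] = 1                         # seeds for phase 1
markers[micrograph > 0.8] = 2                         # seeds for phase 2
phases = watershed(edges, markers)                    # flood from seeds along edges

print(np.unique(phases))                              # [1 2]
```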

