interactive segmentation
Recently Published Documents


TOTAL DOCUMENTS

275
(FIVE YEARS 40)

H-INDEX

18
(FIVE YEARS 2)

2022 ◽  
Vol 71 ◽  
pp. 103113
Author(s):  
Carmelo Militello ◽  
Leonardo Rundo ◽  
Mariangela Dimarco ◽  
Alessia Orlando ◽  
Vincenzo Conti ◽  
...  

2021 ◽  
Vol 13 (24) ◽  
pp. 5111
Author(s):  
Zhen Shu ◽  
Xiangyun Hu ◽  
Hengming Dai

Accurate building extraction from remotely sensed images is essential for topographic mapping, cadastral surveying and many other applications. Fully automatic segmentation methods remain a great challenge due to poor generalization ability and inaccurate segmentation results. In this work, we are committed to robust click-based interactive building extraction in remote sensing imagery. We argue that stability is vital to an interactive segmentation system, and we observe that the distance of a newly added click to the boundary of the previous segmentation mask carries progress-guidance information about the interactive segmentation process. To promote the robustness of interactive segmentation, we combine this information with the previous segmentation mask and the positive and negative clicks to form a progress guidance map, and feed it, together with the original RGB image, to a convolutional neural network (CNN) that we name PGR-Net. In addition, an adaptive zoom-in strategy and an iterative training scheme are proposed to further promote the stability of PGR-Net. Compared with the latest methods FCA and f-BRS, the proposed PGR-Net requires 1–2 fewer clicks to achieve the same segmentation results. Comprehensive experiments demonstrate that PGR-Net outperforms related state-of-the-art methods on five natural image datasets and three building datasets of remote sensing images.
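The progress-guidance signal described in the abstract — how far a newly added click lies from the boundary of the previous segmentation mask — can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' PGR-Net code; the helper names are hypothetical:

```python
import numpy as np

def mask_boundary(mask):
    """Boundary pixels of a binary mask: pixels set in the mask whose
    4-neighbourhood contains at least one background pixel."""
    padded = np.pad(mask, 1, constant_values=0)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

def click_to_boundary_distance(mask, click):
    """Euclidean distance from a click (row, col) to the nearest
    boundary pixel of the previous segmentation mask."""
    ys, xs = np.nonzero(mask_boundary(mask))
    if len(ys) == 0:
        return np.inf
    return np.sqrt((ys - click[0]) ** 2 + (xs - click[1]) ** 2).min()

# Toy example: a 5x5 square mask inside a 9x9 image, click at its centre.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
d = click_to_boundary_distance(mask, (4, 4))  # nearest boundary pixel is 2 px away
```

In a full guidance map this scalar would be rasterized per pixel and stacked with the previous mask and the positive/negative click maps before being concatenated with the RGB image.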


Author(s):  
Tri Arief Sardjono ◽  
Ahmad Fauzi Habiba Chozin ◽  
Muhammad Nuh

Currently, many image analysis methods have been developed for X-ray images of scoliotic patients. However, segmentation of the spinal curvature is still a challenge and needs to be improved. In this research, we propose a semi-automatic spinal image segmentation method for scoliotic patients from X-ray images. The method is divided into two steps: preprocessing and segmentation. In preprocessing, the image is converted from RGB to grayscale and enhanced with Contrast Limited Adaptive Histogram Equalization (CLAHE). The active contour method is used for the segmentation step. The results show that interactive segmentation of spinal X-ray images of scoliotic patients using the active contour method gives better results: the average ME and RAE values are 12.98% and 26.75%, compared with 21.17% and 89.27% for the interactive region-splitting method. Keywords: active contour, interactive segmentation, pre-processing, scoliosis.
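The preprocessing step (RGB-to-grayscale conversion followed by contrast enhancement) can be sketched as below. This is a simplified illustration: it implements plain global histogram equalization with an optional histogram clip, whereas real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) additionally operates on local tiles and bilinearly interpolates between them.

```python
import numpy as np

def rgb_to_gray(img):
    """Luma conversion with ITU-R BT.601 weights."""
    return (img @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def equalize(gray, clip_limit=None):
    """Global histogram equalization. CLAHE additionally tiles the image
    and clips each tile's histogram at `clip_limit` before equalizing,
    which limits noise amplification in near-uniform regions."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    if clip_limit is not None:
        # Redistribute the clipped excess uniformly over all bins.
        excess = np.maximum(hist - clip_limit, 0).sum()
        hist = np.minimum(hist, clip_limit) + excess / 256
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalise to [0, 1]
    return np.rint(cdf[gray] * 255).astype(np.uint8)
```

The equalized image is then handed to the active contour stage, where higher local contrast helps the contour lock onto vertebral edges.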


2021 ◽  
Author(s):  
Yuying Hao ◽  
Yi Liu ◽  
Zewu Wu ◽  
Lin Han ◽  
Yizhou Chen ◽  
...  

Author(s):  
Kok Luong Goh ◽  
Giap Weng Ng ◽  
Muzaffar Hamzah ◽  
Soo See Chai

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6100
Author(s):  
Guoqing Li ◽  
Guoping Zhang ◽  
Chanchan Qin

In the task of interactive image segmentation, the Inside-Outside Guidance (IOG) algorithm has demonstrated superior segmentation performance. Nevertheless, we observe that inconsistent input between training and testing when selecting the inside point results in significant performance degradation. In this paper, a deep reinforcement learning framework, named the Inside Point Localization Network (IPL-Net), is proposed to infer a suitable position for the inside point to help the IOG algorithm. Concretely, when a user first clicks two outside points at symmetrical corner locations of the target object, our proposed system automatically generates a sequence of movements to localize the inside point. We then perform the IOG interactive segmentation method to precisely segment the target object of interest. The inside point localization problem is difficult to cast as a supervised learning problem because it is expensive to collect images and their corresponding inside points. Therefore, we formulate it as a Markov Decision Process (MDP) and optimize it with a Dueling Double Deep Q-Network (D3QN). We train our network on the PASCAL dataset and demonstrate that it achieves excellent performance.
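The D3QN mentioned above combines two standard ideas: the dueling value/advantage decomposition and the double-DQN target. A minimal NumPy sketch of both (the four movement actions for the inside point are our own hypothetical example, not taken from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the V/A split identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    a = int(np.argmax(q_online_next))
    return reward + gamma * q_target_next[a]

# Hypothetical Q-values for four inside-point moves: up, down, left, right.
q = dueling_q(value=1.0, advantages=[0.5, -0.5, 0.0, 0.0])
best_action = int(np.argmax(q))  # -> 0, i.e. move the inside point up
```

In the full system these heads sit on top of a CNN that observes the image and the current inside-point position; the sketch shows only the Q-value arithmetic.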


2021 ◽  
Vol 11 (14) ◽  
pp. 6279
Author(s):  
Xiaokang Li ◽  
Mengyun Qiao ◽  
Yi Guo ◽  
Jin Zhou ◽  
Shichong Zhou ◽  
...  

Accurate tumor segmentation is important for aided diagnosis using breast ultrasound. Interactive segmentation methods can obtain highly accurate results by continuously optimizing the segmentation result via user interactions. However, traditional interactive segmentation methods usually require a large number of interactions to make the result meet the requirements, due to the performance limitations of the underlying model. With their greater ability to extract image information, convolutional neural network (CNN)-based interactive segmentation methods have been shown to effectively reduce the number of user interactions. In this paper, we propose a one-stage interactive segmentation framework (interactive segmentation using weighted distance transform, WDTISeg) for breast ultrasound images using a weighted distance transform and a shape-aware compound loss. First, we use a pre-trained CNN to obtain an initial automatic segmentation, based on which the user provides interaction points in mis-segmented areas. Then, we combine the Euclidean distance transform and the geodesic distance transform to convert interaction points into weighted distance maps that transfer segmentation guidance information to the model. The same CNN accepts the input image, the initial segmentation, and the weighted distance maps as a concatenated input and produces a refined result, without an additional segmentation network. In addition, a shape-aware compound loss function using prior knowledge is designed to reduce the number of user interactions. In the testing phase on 200 cases, our method achieved a Dice score of 82.86 ± 16.22% for the automatic segmentation task and 94.45 ± 3.26% for the interactive segmentation task after 8 interactions. Comparative experiments proved that our method can obtain higher accuracy with fewer simple interactions than other interactive segmentation methods.
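The conversion of interaction points into distance maps can be sketched as follows. This minimal NumPy version computes only the Euclidean term by brute force; the geodesic term needs the image intensities (a raster-scan geodesic transform, e.g. from the GeodisTK library, is a common choice) and is left as an optional input. The function names and the blending weight `alpha` are our own assumptions, not the paper's notation:

```python
import numpy as np

def euclidean_distance_map(shape, points):
    """Per-pixel Euclidean distance to the nearest interaction point."""
    rows, cols = np.indices(shape)
    dists = [np.sqrt((rows - r) ** 2 + (cols - c) ** 2) for r, c in points]
    return np.min(dists, axis=0)

def weighted_distance_map(shape, points, alpha=0.5, geodesic=None):
    """Blend Euclidean and geodesic distances into one guidance map.
    Without a geodesic map, fall back to the Euclidean term alone."""
    eu = euclidean_distance_map(shape, points)
    if geodesic is None:
        return eu
    return alpha * eu + (1 - alpha) * geodesic

dm = euclidean_distance_map((3, 3), [(0, 0)])  # single click at the corner
```

One such map per click set (e.g. foreground vs. background corrections) is concatenated with the image and the initial segmentation before re-running the same CNN.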


2021 ◽  
pp. 102102
Author(s):  
Xiangde Luo ◽  
Guotai Wang ◽  
Tao Song ◽  
Jingyang Zhang ◽  
Michael Aertsen ◽  
...  

2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Zaid Abbas Al-Sabbag ◽  
Jason Paul Connelly ◽  
Chul Min Yeum ◽  
Sriram Narasimhan

In this study, we propose a technique for quantitative visual inspection that can quantify structural damage using extended reality (XR). The XR headset can display and overlay graphical information on the physical space and process data from its built-in camera and depth sensor. The device also permits accessing and analyzing image and video streams in real time and utilizing 3D meshes of the environment together with camera pose information. Leveraging these features of the XR headset, we build a workflow and graphical interface to capture images, segment damage regions, and evaluate the physical size of the damage. A deep learning-based interactive segmentation algorithm called f-BRS is deployed to precisely segment damage regions through the XR headset. A ray-casting algorithm is implemented to obtain the 3D locations corresponding to the pixel locations of the damage region in the image. The size of the damage region is computed from the 3D locations of its boundary. The performance of the proposed method is demonstrated through a field experiment at an in-service bridge where spalling damage is present at its abutment. The experiment shows that the proposed method provides sub-centimeter accuracy for size estimation.
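Once ray-casting has mapped the damage region's boundary pixels to 3D points, a physical size can be estimated from those points. A minimal NumPy sketch (our own illustration, not the authors' implementation): project the ordered boundary onto its best-fit plane via SVD and apply the shoelace formula there.

```python
import numpy as np

def region_area_3d(boundary_pts):
    """Area enclosed by an ordered 3D boundary polygon: project the
    points onto their best-fit plane (principal axes via SVD) and
    apply the shoelace formula in that plane."""
    pts = np.asarray(boundary_pts, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The first two right singular vectors span the best-fit plane.
    _, _, vt = np.linalg.svd(centered)
    xy = centered @ vt[:2].T
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Hypothetical boundary of a 0.1 m x 0.2 m spall lying in the z = 0 plane.
square = [(0, 0, 0), (0.1, 0, 0), (0.1, 0.2, 0), (0, 0.2, 0)]
area = region_area_3d(square)  # -> 0.02 m^2
```

Because the projection uses orthonormal axes, in-plane distances (and hence the area) are preserved regardless of how the plane is oriented in space.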

