Integrated region-based segmentation using color components and texture features with prior shape knowledge

Author(s):  
Mehryar Emambakhsh ◽  
Hossein Ebrahimnezhad ◽  
Mohammad Sedaaghi

Segmentation is the art of partitioning an image into different regions, where each one has some degree of uniformity in its feature space. A number of methods have been proposed, and blind segmentation is one of them. It uses intrinsic image features such as pixel intensity, color components and texture. However, factors such as poor contrast, noise and occlusion can weaken the procedure. To overcome them, prior knowledge of the object of interest has to be incorporated in a top-down segmentation procedure. Consequently, in this work, a novel integrated algorithm is proposed that combines bottom-up (blind) and top-down (shape prior) techniques. First, a color space transformation is performed. Then, an energy function (based on nonlinear diffusion of color components and directional derivatives) is defined. Next, signed distance functions are generated from different shapes of the object of interest. Finally, a variational framework (based on the level set) is employed to minimize the energy function. The experimental results demonstrate good performance of the proposed method compared with other methods and show its robustness in the presence of noise and occlusion. The proposed algorithm is applicable to outdoor and medical image segmentation and also to optical character recognition (OCR).
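The abstract does not give implementation details; a minimal sketch of the signed distance step, assuming binary masks of the prior shapes (the mask names and the use of SciPy are assumptions), could look like the following:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(shape_mask: np.ndarray) -> np.ndarray:
    """Signed distance function of a binary shape mask.

    Positive outside the shape, negative inside, approximately zero on the
    boundary, following the usual level-set convention.
    """
    mask = shape_mask.astype(bool)
    dist_outside = distance_transform_edt(~mask)  # distance to the shape, measured outside
    dist_inside = distance_transform_edt(mask)    # distance to the background, measured inside
    return dist_outside - dist_inside

# Example: embed a set of prior shapes as signed distance functions
# prior_sdfs = [signed_distance(m) for m in prior_shape_masks]
```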

2021 ◽  
pp. 1-19
Author(s):  
Mingzhou Liu ◽  
Xin Xu ◽  
Jing Hu ◽  
Qiannan Jiang

Road detection algorithms with high robustness and timeliness are the basis for developing intelligent assisted driving systems. To improve both the robustness and the timeliness of unstructured road detection, a new algorithm is proposed in this paper. First, for the first frame in the video, the homography matrix H is estimated for different regions of the image using an improved random sample consensus (RANSAC) algorithm, and features of H are automatically extracted using a convolutional neural network (CNN), which in turn enables road detection. Secondly, to speed up the detection of subsequent similar frames, the color and texture features of the road are extracted from the detection result of the first frame, the corresponding Gaussian mixture models (GMMs) are constructed using the Orchard-Bouman method, and the Gibbs energy function is then used to achieve road detection in subsequent frames. Finally, the algorithm is verified on a real unstructured road scene, and the experimental results show that it is 98.4% accurate and can process 58 frames per second at a resolution of 1024×960 pixels.
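The paper's improved RANSAC and the CNN stage are not described in the abstract; as a rough illustration of the standard homography step it builds on, a plain OpenCV estimate between two frame regions (function name and ORB/threshold choices are assumptions) might look like this:

```python
import cv2
import numpy as np

def estimate_region_homography(region_a, region_b, reproj_thresh=3.0):
    """Estimate a homography between two image regions with ORB + RANSAC.

    This is the plain OpenCV pipeline, not the paper's improved RANSAC.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(region_a, None)
    kp_b, des_b = orb.detectAndCompute(region_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences while fitting H
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    return H, inlier_mask
```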


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Yang Zhang ◽  
Chaoyue Chen ◽  
Zerong Tian ◽  
Yangfan Cheng ◽  
Jianguo Xu

Objectives. To differentiate pituitary adenoma from Rathke cleft cyst in magnetic resonance (MR) scans by combining MR image features with texture features. Methods. A total of 133 patients were included in this study, 83 with pituitary adenoma and 50 with Rathke cleft cyst. Qualitative MR image features and quantitative texture features were evaluated using chi-square tests or the Mann–Whitney U test. Binary logistic regression analysis was conducted to investigate their ability as independent predictors. Receiver operating characteristic (ROC) analysis was subsequently conducted on the independent predictors to assess their practical value in discrimination, and the association between the two types of features was investigated. Results. Signal intensity on the contrast-enhanced image was found to be the only significantly different MR image feature between the two lesions. Two texture features from the contrast-enhanced images (Histo-Skewness and GLCM-Correlation) were found to be independent predictors in discrimination, with AUC values of 0.80 and 0.75, respectively. Besides, the above two texture features (Histo-Skewness and GLCM-Contrast) were suggested to be associated with signal intensity on the contrast-enhanced image. Conclusion. Signal intensity on the contrast-enhanced image was the most significant MR image feature in differentiating between pituitary adenoma and Rathke cleft cyst, and texture features also showed promising and practical ability in discrimination. Moreover, the two types of features could be used in coordination with each other.
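The radiomics software used is not stated in the abstract; a rough sketch of how the two named texture features (histogram skewness and GLCM correlation) could be computed from a contrast-enhanced region of interest with SciPy and scikit-image follows (array names and the quantization level are assumptions):

```python
import numpy as np
from scipy.stats import skew
from skimage.feature import graycomatrix, graycoprops

def histo_skewness(roi: np.ndarray) -> float:
    """Skewness of the intensity histogram inside the region of interest."""
    return float(skew(roi.ravel()))

def glcm_correlation(roi: np.ndarray, levels: int = 64) -> float:
    """GLCM correlation of the ROI, averaged over four directions."""
    # Quantize intensities to a fixed number of gray levels.
    quantized = np.digitize(roi, np.linspace(roi.min(), roi.max(), levels)) - 1
    glcm = graycomatrix(quantized.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return float(graycoprops(glcm, "correlation").mean())
```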


2021 ◽  
Vol 8 (7) ◽  
pp. 97-105
Author(s):  
Ali Ahmed ◽  
Sara Mohamed

Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to a query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: feature extraction and similarity matching. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature suggests several methods for combining and fusing various image descriptors. In general, fusion strategies are divided into two groups, namely early and late fusion. Early fusion combines image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to combining the outputs produced by various retrieval systems or to combining different similarity rankings. In this study, a group of color and texture features is proposed for use with both fusion strategies. First, eighteen color features and twelve texture features are combined into a single vector representation (early fusion); second, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method achieves good retrieval performance compared with the traditional use of a single feature descriptor, and acceptable retrieval performance compared with some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% for the Corel-1K and GHIM-10K datasets, respectively.
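The specific descriptors and distance measures are not listed in the abstract; a minimal sketch of the two fusion strategies it describes, using three common distances and a rank-sum combination chosen here for illustration (not necessarily the authors' scheme), might look like this:

```python
import numpy as np
from scipy.spatial.distance import cityblock, euclidean, cosine

def early_fusion(color_features: np.ndarray, texture_features: np.ndarray) -> np.ndarray:
    """Concatenate color and texture descriptors into a single vector."""
    return np.concatenate([color_features, texture_features])

def late_fusion_ranking(query_vec, db_vecs):
    """Fuse the rankings produced by three distance measures (rank-sum style)."""
    distances = {
        "manhattan": [cityblock(query_vec, v) for v in db_vecs],
        "euclidean": [euclidean(query_vec, v) for v in db_vecs],
        "cosine": [cosine(query_vec, v) for v in db_vecs],
    }
    rank_sum = np.zeros(len(db_vecs))
    for d in distances.values():
        rank_sum += np.argsort(np.argsort(d))  # rank position under this measure
    return np.argsort(rank_sum)  # fused ranking, best match first
```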


Author(s):  
Juan Zhu ◽  
Jipeng Huang ◽  
Lianming Wang

A novel method for detecting laser-printed files is proposed in this paper to address the low efficiency and difficulty of traditional detection. The method is based on an improved scale-invariant feature transform (SIFT) feature and a histogram feature. First, the graphical features of different laser-printed files are analyzed: different files have different printing texture features in the valid data area, so the valid data area is segmented to remove background interference. Second, the histogram feature of the same character is extracted from the printed file; the histogram is normalized, and the Bhattacharyya coefficient between the detected file and the original file is calculated to determine whether the detected file is genuine or fake. At the same time, SIFT features are calculated and matched between the detected file and the original file; to focus on the letter or character region, SIFT features that fall outside the contour are discarded. Finally, the results of the two methods are combined for identification: if either result is fake, the end result is fake. In experiments on a self-built database of files from different printers, the inked areas possess different image features. When scanning files at 600 dpi, the detection accuracy is higher than 97%. The method is able to meet the reliability requirements of legal applications.
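The improved SIFT and the contour filtering are not specified in detail; a bare-bones sketch of the histogram comparison and a standard SIFT matching step with OpenCV (bin count and ratio threshold are assumptions) might look like this:

```python
import cv2

def bhattacharyya_score(char_img_a, char_img_b, bins=64):
    """Bhattacharyya distance between normalized gray-level histograms of a character."""
    h_a = cv2.calcHist([char_img_a], [0], None, [bins], [0, 256])
    h_b = cv2.calcHist([char_img_b], [0], None, [bins], [0, 256])
    cv2.normalize(h_a, h_a, 1.0, 0.0, cv2.NORM_L1)
    cv2.normalize(h_b, h_b, 1.0, 0.0, cv2.NORM_L1)
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_BHATTACHARYYA)

def sift_match_count(img_a, img_b, ratio=0.75):
    """Number of SIFT matches passing Lowe's ratio test between two character images."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    return sum(1 for m, n in matches if m.distance < ratio * n.distance)
```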


2021 ◽  
pp. 20200384
Author(s):  
Zhe-Yi Jiang ◽  
Tian-Jun Lan ◽  
Wei-Xin Cai ◽  
Qian Tao

Objective: To screen the radiomic features of simple bone cysts of the jaws and explore the potential application of radiomics in the pre-operative diagnosis of jaw simple bone cysts. Methods: The investigators designed and implemented a case–control study. Nineteen patients with simple bone cysts who were admitted to the Department of Maxillofacial Surgery, Sun Yat-sen University Affiliated Stomatology Hospital from 2013 to 2019 were included in this study. Their clinical data and cone-beam computed tomography (CBCT) images were examined. The control group consisted of patients with odontogenic keratocyst. CBCT imaging features were analyzed and compared between the patient and control groups. Results: Overall, 10,323 image features were extracted through feature analysis. A subset of 25 radiomic features obtained after feature selection was analyzed further. These 25 features were significantly different between the two groups (p < 0.05), with absolute correlation coefficients of 0.487–0.775. Gray-level co-occurrence matrix (GLCM) contrast, neighborhood gray tone difference matrix (NGTDM) contrast, and GLCM variance were the features with the highest correlation coefficients. Conclusions: Pre-operative radiomics analysis revealed differences between simple bone cysts and odontogenic keratocysts and can help to diagnose simple bone cysts. Three specific texture features (GLCM contrast, NGTDM contrast, and GLCM variance) may be the characteristic imaging features of simple bone cysts of the jaw.
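The abstract does not state how the 25 features were selected; one plausible filter-style approach consistent with the reported significance tests and correlation coefficients is sketched below with SciPy (function names, the point-biserial correlation, and the significance threshold are assumptions):

```python
import numpy as np
from scipy.stats import mannwhitneyu, pointbiserialr

def select_discriminative_features(features: np.ndarray, labels: np.ndarray,
                                   feature_names, alpha=0.05):
    """Keep features that differ significantly between the two groups and
    rank them by absolute correlation with the group label.

    features: (n_patients, n_features) matrix; labels: 0/1 group membership.
    """
    selected = []
    for j, name in enumerate(feature_names):
        x = features[:, j]
        _, p = mannwhitneyu(x[labels == 0], x[labels == 1], alternative="two-sided")
        if p < alpha:
            r, _ = pointbiserialr(labels, x)
            selected.append((name, abs(r), p))
    # Highest absolute correlation first
    return sorted(selected, key=lambda t: t[1], reverse=True)
```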


2016 ◽  
Vol 61 (4) ◽  
pp. 401-412 ◽  
Author(s):  
Saif Dawood Salman Al-Shaikhli ◽  
Michael Ying Yang ◽  
Bodo Rosenhahn

Abstract Automatic 3D liver segmentation is a fundamental step in liver disease diagnosis and surgery planning. This paper presents a novel fully automatic algorithm for 3D liver segmentation in clinical 3D computed tomography (CT) images. Based on image features, we propose a new Mahalanobis distance cost function using an active shape model (ASM). We call our method MD-ASM. Unlike the standard active shape model (ST-ASM), the proposed method introduces a new feature-constrained Mahalanobis distance cost function to measure the distance between the shape generated during the iterative step and the mean shape model. The proposed Mahalanobis distance function is learned from a public liver segmentation challenge database (MICCAI-SLiver07). As a refinement step, we propose the use of a 3D graph-cut segmentation. Foreground and background labels are automatically selected using texture features of the learned Mahalanobis distance. Quantitatively, the proposed method is evaluated using two clinical 3D CT scan databases (MICCAI-SLiver07 and MIDAS). The evaluation on the MICCAI-SLiver07 database is performed by the challenge organizers using five different metric scores. The experimental results demonstrate the effectiveness of the proposed method, which achieves accurate liver segmentation compared with state-of-the-art methods.
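The paper's cost function is feature-constrained and learned from training data; that learning is not reproduced here, but the generic Mahalanobis distance between a generated shape vector and the mean shape model, which the cost builds on, can be sketched as follows (names and the ridge term are assumptions):

```python
import numpy as np

def mahalanobis_cost(shape_vec: np.ndarray, mean_shape: np.ndarray,
                     cov: np.ndarray, eps: float = 1e-6) -> float:
    """Mahalanobis distance between a candidate shape vector and the mean shape.

    cov is the covariance of the training shape vectors; a small ridge is
    added for numerical stability before inversion.
    """
    diff = shape_vec - mean_shape
    cov_inv = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))
    return float(np.sqrt(diff @ cov_inv @ diff))
```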


2016 ◽  
Vol 6 (1) ◽  
pp. 1-22 ◽  
Author(s):  
Meng Li ◽  
Yi Zhan

A feature-dependent variational level set formulation is proposed for image segmentation. Two second-order directional derivatives act as the external constraint in the level set evolution: the directional derivative across the image feature direction plays a key role in contour extraction, while the other contributes only slightly. To overcome the local gradient limit, we integrate the information from the maximal (in magnitude) second-order directional derivative into a common variational framework. It naturally encourages the level set function to deform (up or down) in opposite directions on either side of image edges, and thus automatically generates object contours. An additional benefit of the proposed model is that it does not require manual initial contours, and it can capture weak objects in noisy or intensity-inhomogeneous images. Experiments on infrared and medical images demonstrate its advantages.
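The paper's energy functional is not reproduced in the abstract; as a sketch of the quantity it relies on, the maximal (in magnitude) second-order directional derivative v^T H v can be obtained from the Hessian eigenvalues, for example with Gaussian derivative filters (the smoothing scale is an assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def max_second_directional_derivative(image: np.ndarray, sigma: float = 1.5) -> np.ndarray:
    """Per-pixel maximal (in magnitude) second-order directional derivative.

    The second directional derivative along a unit vector v is v^T H v, where H
    is the Hessian; its extreme values over all directions are the Hessian
    eigenvalues, so we return the eigenvalue of largest magnitude.
    """
    # Smoothed second derivatives (Hessian entries) via Gaussian derivative filters.
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))

    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    trace_half = 0.5 * (Ixx + Iyy)
    delta = np.sqrt(0.25 * (Ixx - Iyy) ** 2 + Ixy ** 2)
    lam1, lam2 = trace_half + delta, trace_half - delta
    return np.where(np.abs(lam1) >= np.abs(lam2), lam1, lam2)
```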


Author(s):  
Shaohua Li ◽  
Xiuchao Sui ◽  
Xiangde Luo ◽  
Xinxing Xu ◽  
Yong Liu ◽  
...  

Medical image segmentation is important for computer-aided diagnosis. Good segmentation demands that the model see the big picture and fine details simultaneously, i.e., learn image features that incorporate large context while keeping high spatial resolution. To approach this goal, the most widely used methods, U-Net and its variants, extract and fuse multi-scale features. However, the fused features still have small "effective receptive fields" with a focus on local image cues, limiting their performance. In this work, we propose Segtran, an alternative segmentation framework based on transformers, which have unlimited "effective receptive fields" even at high feature resolutions. The core of Segtran is a novel Squeeze-and-Expansion transformer: a squeezed attention block regularizes the self-attention of transformers, and an expansion block learns diversified representations. Additionally, we propose a new positional encoding scheme for transformers, imposing a continuity inductive bias for images. Experiments were performed on 2D and 3D medical image segmentation tasks: optic disc/cup segmentation in fundus images (REFUGE'20 challenge), polyp segmentation in colonoscopy images, and brain tumor segmentation in MRI scans (BraTS'19 challenge). Compared with representative existing methods, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities.
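The abstract does not spell out the block's internals; as a loose, hypothetical illustration of the general "squeeze" idea (routing self-attention through a small set of learned tokens to regularize it), a PyTorch-style sketch could look like the following. It is not the authors' Segtran implementation, and every name and dimension here is assumed:

```python
import torch
import torch.nn as nn

class SqueezedAttentionSketch(nn.Module):
    """Hypothetical sketch: self-attention routed through a few learned
    "inducing" tokens, a generic way to squeeze/regularize attention.
    This is NOT the Segtran Squeeze-and-Expansion block from the paper."""

    def __init__(self, dim: int, num_heads: int = 8, num_inducing: int = 64):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim) * 0.02)
        self.attn_in = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_out = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_pixels, dim) flattened feature-map tokens.
        inducing = self.inducing.expand(x.size(0), -1, -1)
        squeezed, _ = self.attn_in(inducing, x, x)          # compress tokens into a few slots
        expanded, _ = self.attn_out(x, squeezed, squeezed)  # redistribute to all tokens
        return x + expanded                                 # residual connection
```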

