Performance comparison of feature vector extraction techniques in RGB color space using block truncation coding for content based image classification with discrete classifiers

Author(s):
Sudeep Thepade
Rik Das
Saurav Ghosh

2016, Vol 26 (2), pp. 451-465

Author(s):
Francisco A. Pujol
Higinio Mora
José A. Girona-Selva

Abstract: In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the positions of the nodes, their relations with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented, in which the calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
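A similarity function of the kind described, combining geometric and texture terms, might be sketched as below. This is a hypothetical illustration: the weight `alpha`, the normalized-dot-product jet comparison, and the node-displacement term are assumptions, not the authors' exact formulation.

```python
import math

def jet_similarity(jet_a, jet_b):
    """Normalized dot product of two Gabor jet magnitude vectors (texture term)."""
    dot = sum(a * b for a, b in zip(jet_a, jet_b))
    norm = math.sqrt(sum(a * a for a in jet_a)) * math.sqrt(sum(b * b for b in jet_b))
    return dot / norm if norm else 0.0

def graph_similarity(nodes_a, jets_a, nodes_b, jets_b, alpha=0.5):
    """Combine geometric (node position) and texture (Gabor jet) similarity
    of two facial graphs with corresponding nodes."""
    # Geometric term: average node displacement mapped into (0, 1].
    geo = sum(math.dist(p, q) for p, q in zip(nodes_a, nodes_b)) / len(nodes_a)
    geo_sim = 1.0 / (1.0 + geo)
    # Texture term: average jet similarity over corresponding nodes.
    tex_sim = sum(jet_similarity(ja, jb) for ja, jb in zip(jets_a, jets_b)) / len(jets_a)
    return alpha * geo_sim + (1.0 - alpha) * tex_sim
```

In a SOM framework, the winning neuron would be the map unit whose stored graph maximizes this similarity against the input facial graph.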


2013, Vol 662, pp. 926-930

Author(s):
Xuri Tang
Mai Jiang
Yu Ping Wang
Zhi Gang Pi

To address the problem of color-difference classification for ceramic tiles, this paper presents a method based on histogram statistics. First, the color image is converted from the RGB color space to the HSI color space, and a median filter is applied for image preprocessing. Then, the histogram statistics of each HSI channel of the ceramic samples are calculated and taken as the color-difference classification feature values. To meet the real-time requirement, a minimum-distance classifier is used for classification. Compared with the S and I channels, the results show that using the H-channel histogram statistics as the feature vector yields higher accuracy for ceramic tile color-difference classification.
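A minimal sketch of this processing chain (RGB-to-HSI conversion, per-channel statistics, minimum-distance classification) is shown below. The chosen statistics (mean and standard deviation) are stand-ins; the paper's exact histogram statistics are not given here.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to HSI."""
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b) if (r + g + b) else 0.0
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.acos(max(-1.0, min(1.0, num / den))) if den else 0.0
    if b > g:                      # hue lies in the lower half of the circle
        h = 2 * math.pi - h
    return h, s, i

def channel_stats(values):
    """Histogram-style statistics (mean, standard deviation) of one channel."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return m, sd

def classify_min_distance(feature, class_means):
    """Minimum-distance classifier: the nearest class mean wins."""
    return min(class_means, key=lambda c: math.dist(feature, class_means[c]))
```

A tile sample would be classified by computing `channel_stats` of its H channel and passing the result to `classify_min_distance` against reference means learned from labeled samples.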


2013, Vol 393, pp. 550-555

Author(s):
Nursabillilah Mohd Ali
Nahrul Khair Alang Md Rashid
Yasir Mohd Mustafah

This paper compares the performance of RGB and HSV color segmentation methods for road sign detection. The road sign images are taken under various illumination changes, partial occlusion and rotational changes. The proposed algorithms, using both the RGB and HSV color spaces, are able to detect the three standard sign colors, namely red, yellow and blue. The experiments show that the HSV-based algorithm achieved better detection accuracy than the RGB color space.
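An HSV-based color test of the kind compared here can be sketched with the standard library; the hue bands and saturation/value cut-offs below are illustrative thresholds, not the paper's calibrated values.

```python
import colorsys

def sign_color(r, g, b):
    """Classify an RGB pixel (components in [0, 1]) as a red, yellow or blue
    road-sign color using its HSV hue; returns None when undecided."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)   # h in [0, 1)
    if s < 0.3 or v < 0.2:
        return None                          # too gray or too dark to decide
    deg = h * 360.0
    if deg < 30 or deg >= 330:
        return "red"
    if 40 <= deg <= 75:
        return "yellow"
    if 190 <= deg <= 260:
        return "blue"
    return None
```

Separating hue from saturation and value is what gives HSV its robustness to illumination changes relative to raw RGB thresholds.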


In recent years, there has been gigantic growth in the generation of data. Innovations such as the Internet, social media and smartphones have facilitated this information boom. Since ancient times, images have been treated as an effective mode of communication, and even today most of the data generated is image data. The technology for capturing, storing and transferring images is well developed, but efficient image retrieval is still a nascent area of research. Content-Based Image Retrieval (CBIR) is one such area where much research is still ongoing. CBIR systems rely on three aspects of image content, namely texture, shape and color. Application-specific CBIR systems are effective, whereas generic CBIR systems are still being explored. Previously, descriptors were used to extract shape, color or texture features individually, but the effect of using more than one descriptor is under research and may yield better results. This paper presents the fusion of TSBTC n-ary (Thepade's Sorted n-ary Block Truncation Coding) global color features and Local Binary Pattern (LBP) local texture features for Content-Based Image Retrieval with different color spaces. TSBTC n-ary derives global color features from an image; it is a faster and better technique than Block Truncation Coding, and is also rotation and scale invariant. When applied to an image, TSBTC n-ary gives a feature vector based on the color space; if TSBTC n-ary is applied to the LBP (Local Binary Patterns) of the image color planes, the obtained feature vector is based on local texture content. Along with RGB, luminance-chromaticity color spaces such as YCbCr and Kekre's LUV are also used in the experimentation of the proposed CBIR techniques. The Wang dataset, consisting of 1000 images (10 categories of 100 images each), has been used for the exploration of the proposed method.
The obtained results show a performance improvement from fusing the TSBTC n-ary extracted global color features with the local texture features extracted by applying TSBTC n-ary on the Local Binary Patterns (LBP).
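As a rough illustration of such a fused feature vector, the sketch below uses classic two-level Block Truncation Coding (not the sorted n-ary TSBTC variant) on a single plane, together with a basic 3x3 LBP operator; the function names and the fusion-by-concatenation layout are illustrative assumptions.

```python
def btc_features(channel):
    """Two-level BTC features of one channel (flat list of pixel values):
    threshold at the mean, keep the upper and lower means."""
    mean = sum(channel) / len(channel)
    upper = [v for v in channel if v >= mean]
    lower = [v for v in channel if v < mean]
    up = sum(upper) / len(upper) if upper else mean
    lo = sum(lower) / len(lower) if lower else mean
    return [up, lo]

def lbp_code(window):
    """LBP code of the centre pixel of a 3x3 window (list of 9 values)."""
    c = window[4]
    neighbours = [window[i] for i in (0, 1, 2, 5, 8, 7, 6, 3)]  # clockwise ring
    return sum((1 << k) for k, v in enumerate(neighbours) if v >= c)

def lbp_plane(img):
    """LBP codes for all interior pixels of a 2-D grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    return [lbp_code([img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
            for y in range(1, h - 1) for x in range(1, w - 1)]

def fused_vector(plane):
    """Fuse global color features (BTC of the plane) with local texture
    features (BTC of the plane's LBP codes) by concatenation."""
    return btc_features([v for row in plane for v in row]) + btc_features(lbp_plane(plane))
```

In the paper's setting this would be repeated per color plane (RGB, YCbCr, or Kekre's LUV) and the per-plane vectors concatenated.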


Agriculture, 2020, Vol 11 (1), pp. 6

Author(s):
Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. The textures from images converted to individual color channels and the geometric parameters of the endocarps (pits) of the sweet cherry cultivars ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’ were calculated. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab and XYZ color spaces, the G, L and Y color channels, and models combining selected textural and geometric features. For discriminating ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were determined: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
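Channel-wise texture parameters of the general kind used in such studies can be sketched as below; the three statistics chosen (mean, variance, and a contrast-like neighbour difference) are illustrative stand-ins, not the study's actual texture set.

```python
def texture_params(channel):
    """Simple intensity-based texture parameters of one color channel,
    given as a list of rows: mean, variance, and mean absolute difference
    of horizontal neighbours (a crude contrast measure)."""
    flat = [v for row in channel for v in row]
    m = sum(flat) / len(flat)
    var = sum((v - m) ** 2 for v in flat) / len(flat)
    contrast = sum(abs(row[x + 1] - row[x]) for row in channel for x in range(len(row) - 1))
    contrast /= sum(len(row) - 1 for row in channel)
    return [m, var, contrast]
```

Vectors like this, computed per color channel and combined with geometric pit measurements, would then feed the discriminative classifiers.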


2021, Vol 13 (6), pp. 1211

Author(s):
Pan Fan
Guodong Lang
Bin Yan
Xiaoyan Lei
Pengju Guo
...

In recent years, many agriculture-related problems have been evaluated with the integration of artificial intelligence techniques and remote sensing systems. The rapid and accurate identification of apple targets in an illuminated and unstructured natural orchard is still a key challenge for the picking robot’s vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on gray-centered red–green–blue (RGB) color space to address this issue. Different from the existing methods, this method presents a novel color feature selection method that accounts for the influence of illumination and shadow in apple images. By exploring both color features and local variation in apple images, the proposed method could effectively distinguish the apple fruit pixels from other pixels. Compared with the classical segmentation methods and conventional clustering algorithms as well as the popular deep-learning segmentation algorithms, the proposed method can segment apple images more accurately and effectively. The proposed method was tested on 180 apple images. It offered an average accuracy rate of 99.26%, recall rate of 98.69%, false positive rate of 0.06%, and false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.
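The core idea of a gray-centered RGB space can be sketched as below: subtracting each pixel's own gray level largely cancels illumination, leaving the chromatic deviation that separates fruit from background. This is a sketch of the idea only; the paper's exact transform may differ.

```python
def gray_centered(r, g, b):
    """Map an RGB pixel into a gray-centered RGB space by subtracting the
    pixel's gray level, so gray (shadowed or bright) pixels land at the
    origin while chromatic pixels keep a direction encoding their hue."""
    gray = (r + g + b) / 3.0
    return (r - gray, g - gray, b - gray)
```

Note that a bright apple pixel and the same pixel in shadow map to vectors with the same direction, which is what makes clustering in this space robust to illumination.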


2021, Vol 13 (5), pp. 939

Author(s):
Yongan Xue
Jinling Zhao
Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein, combining pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance of the Commission Internationale de l'Éclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used for evaluating the image segmentation accuracy. Region merging in the Red-Green-Blue (RGB) color space was selected as the baseline against which the proposed algorithm was compared for extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation effect and time efficiency using the improved algorithm; time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000, and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Relative to the RGB color space, the extraction accuracy in terms of δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space.
(4) In terms of visual comparison, time efficiency, and segmentation accuracy, the comprehensive extraction effect of the proposed algorithm was obviously better than that of the RGB color-space-based algorithm. The established accuracy evaluation indicators were also shown to be consistent with the visual evaluation. (5) The proposed method showed satisfying transferability on a wider test area with a coverage of 1 km2. In summary, the proposed method applies image contrast enhancement and then performs region merging in the CIE color space on the simulated immersion watershed segmentation results. It is a useful refinement of the watershed segmentation algorithm for extracting cultivated land boundaries and provides a reference for enhancing the watershed algorithm.
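The post-improvement region-merging step can be sketched as follows. The greedy all-pairs merging, the `(pixel_count, mean_color)` region representation, and the `d_max` threshold (standing in for the color difference scale parameter D) are simplifying assumptions; a real implementation would only merge spatially adjacent watershed regions.

```python
import math

def merge_regions(regions, d_max):
    """Greedy region merging: repeatedly merge the two regions whose mean
    colors (e.g. CIE Lab triples) are closest, while that color distance
    stays below d_max. Each region is a (pixel_count, mean_color) tuple."""
    regions = list(regions)
    while len(regions) > 1:
        i, j = min(((a, b) for a in range(len(regions)) for b in range(a + 1, len(regions))),
                   key=lambda p: math.dist(regions[p[0]][1], regions[p[1]][1]))
        if math.dist(regions[i][1], regions[j][1]) >= d_max:
            break                      # nearest pair already too dissimilar
        n1, c1 = regions[i]
        n2, c2 = regions[j]
        # Area-weighted mean color of the merged region.
        merged = (n1 + n2, tuple((n1 * a + n2 * b) / (n1 + n2) for a, b in zip(c1, c2)))
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
    return regions
```

Using Euclidean distance in Lab or Luv rather than RGB is what makes the similarity measure approximately perceptually uniform.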


Author(s):
HUA YANG
MASAAKI KASHIMURA
NORIKADU ONDA
SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book and geometrically corrects and segments the input image into the front cover, spine and back cover. As in all color image processing research, the segmentation of the color space is an essential and important step. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions; both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories (author, title, publisher and other information) without applying OCR.
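The first-stage achromatic/chromatic split in HSI space can be sketched as below; the saturation and intensity thresholds are illustrative assumptions, not the values used by the described system.

```python
def is_achromatic(r, g, b, s_thresh=0.1, i_low=0.1, i_high=0.9):
    """Classify an RGB pixel (components in [0, 1]) as achromatic when its
    HSI saturation is low, or its intensity is extreme enough that hue is
    unreliable; otherwise the pixel is chromatic."""
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b) if (r + g + b) else 0.0
    return s < s_thresh or i < i_low or i > i_high
```

Chromatic pixels would then be segmented further by hue, and achromatic ones by intensity, completing the color space segmentation.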


2017, Vol 10 (3), pp. 310-331

Author(s):
Sudeep Thepade
Rik Das
Saurav Ghosh

Purpose: Current practices in data classification and retrieval have experienced a surge in the use of multimedia content. Identification of desired information from huge image databases faces increased complexity in designing an efficient feature extraction process. Conventional approaches to image classification with text-based image annotation have faced assorted limitations due to erroneous interpretation of vocabulary and the huge time consumption involved in manual annotation. Content-based image recognition has emerged as an alternative to combat the aforesaid limitations. However, exploring rich feature content in an image with a single technique has a lower probability of extracting meaningful signatures than multi-technique feature extraction. Therefore, the purpose of this paper is to explore the possibilities of enhanced content-based image recognition by fusing the classification decisions obtained using diverse feature extraction techniques. Design/methodology/approach: Three novel techniques of feature extraction are introduced in this paper and tested with four different classifiers individually. The four classifiers used for performance testing were the K-nearest neighbor (KNN) classifier, the RIDOR classifier, an artificial neural network classifier and a support vector machine classifier. Thereafter, the classification decisions obtained using the KNN classifier for the different feature extraction techniques were integrated by Z-score normalization and feature scaling to create a fusion-based framework of image recognition, followed by the introduction of a fusion-based retrieval model to validate the retrieval performance with a classified query. Earlier works on content-based image identification have adopted a fusion-based approach; however, to the best of the authors' knowledge, fusion-based query classification is addressed for the first time as a precursor of retrieval in this work.
Findings: The proposed fusion techniques have successfully outclassed the state-of-the-art techniques in classification and retrieval performance. Four public data sets, namely the Wang data set, the Oliva and Torralba (OT-scene) data set, the Corel data set and the Caltech data set, comprising 22,615 images in all, were used for evaluation. Originality/value: To the best of the authors' knowledge, fusion-based query classification is addressed for the first time as a precursor of retrieval in this work. The novel idea of exploring rich image features by fusing multiple feature extraction techniques has also encouraged further research on dimensionality reduction of feature vectors for enhanced classification results.
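The Z-score normalization fusion described under Design/methodology/approach might look roughly like this; treating each technique's output as a per-class distance vector and fusing by summation is an assumption, not necessarily the paper's exact scheme.

```python
import math

def zscore(xs):
    """Z-score normalize a list of scores (zero mean, unit variance)."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd if sd else 0.0 for x in xs]

def fuse_decisions(per_technique_scores):
    """Fuse per-class distance scores from several feature extraction
    techniques: z-score normalize each technique's scores so different
    feature scales become comparable, sum them per class, and return the
    index of the class with the smallest fused distance."""
    fused = [sum(col) for col in zip(*(zscore(s) for s in per_technique_scores))]
    return fused.index(min(fused))
```

Normalizing before summation matters because raw distances from different feature extractors live on incompatible scales, so the larger-valued technique would otherwise dominate the decision.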

