An Effective Contour Detection-Based Image Retrieval Using Multi-Fusion Method and Neural Network

Author(s):  
Rohit Raja ◽  
Sandeep Kumar ◽  
Shilpa Choudhary ◽  
Hemlata Dalmia

Abstract: The number of images on digital platforms and in digital image databases is increasing rapidly day by day, and retrieving the images a user requires from such an enormous database is a challenging task. Content-based image retrieval (CBIR) algorithms mainly consider visual image features such as color, texture, and shape. Non-visual features also play a significant role in image retrieval, mainly where security is a concern, and the selection of image features is an essential issue in CBIR. According to current CBIR studies, performance remains one of the main challenges in image retrieval. To close this gap, a new CBIR method is proposed using histogram of oriented gradients (HOG), dominant color descriptor (DCD), and hue moment (HM) features. This work uses color, shape, and texture features in depth for CBIR: HOG is used to extract texture features, while DCD on the RGB and HSV color spaces is used to improve efficiency and computation. A neural network (NN) is used to extract the image features, which improves computation on the Corel dataset. The experimental results were evaluated on the standard Corel-1K and Corel-5K benchmark datasets, and the outcomes illustrate that the proposed CBIR is more efficient than other state-of-the-art image retrieval methods. Intensive analysis shows that the proposed work achieves better precision, recall, and accuracy.
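The HOG texture feature named above can be illustrated with a minimal sketch: compute gradient orientations, build magnitude-weighted orientation histograms per cell, and concatenate them. This is a simplified reading of the standard HOG idea (no block normalization), not the authors' exact pipeline; the function names are hypothetical.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    # Orientation histogram of one cell, weighted by gradient magnitude
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist

def hog_descriptor(image, cell_size=8, n_bins=9):
    # Tile the image into cells, concatenate per-cell histograms, L2-normalise
    h, w = image.shape
    feats = [hog_cell_histogram(image[i:i + cell_size, j:j + cell_size], n_bins)
             for i in range(0, h - cell_size + 1, cell_size)
             for j in range(0, w - cell_size + 1, cell_size)]
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)
```

A 16x16 image with 8x8 cells yields a 4-cell, 36-dimensional descriptor; production systems would add block-level contrast normalization on top of this.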

Author(s):  
Shikha Bhardwaj ◽  
Gitanjali Pandove ◽  
Pawan Kumar Dahiya

Background: In order to retrieve a particular image from a vast repository of images, an efficient system is required, and such a system is well known as a content-based image retrieval (CBIR) system. Color is an important attribute of an image, and the proposed system consists of a hybrid color descriptor used for color feature extraction. Deep learning has gained prominent importance in the current era, so the performance of this fusion-based color descriptor is also analyzed in the presence of deep learning classifiers. Method: This paper describes a comparative experimental analysis of various color descriptors; the best two are chosen to form an efficient color-based hybrid system denoted as combined color moment-color autocorrelogram (Co-CMCAC). Then, to increase the retrieval accuracy of the hybrid system, a cascade forward back-propagation neural network (CFBPNN) is used. The classification accuracy obtained using CFBPNN is also compared to that of the Patternnet neural network. Results: The results of the hybrid color descriptor show that the proposed system achieves superior accuracies of 95.4%, 88.2%, 84.4%, and 96.05% on the Corel-1K, Corel-5K, Corel-10K, and Oxford Flower benchmark datasets, respectively, compared to many state-of-the-art related techniques. Conclusion: This paper presents an experimental and analytical comparison of different color feature descriptors, namely color moment (CM), color auto-correlogram (CAC), color histogram (CH), color coherence vector (CCV), and dominant color descriptor (DCD). The proposed hybrid color descriptor (Co-CMCAC) is utilized for the extraction of color features, with CFBPNN used as a classifier, on four benchmark datasets: Corel-1K, Corel-5K, Corel-10K, and Oxford Flower.
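The color moment descriptor that forms one half of Co-CMCAC is commonly computed as the first three statistical moments of each channel. A minimal sketch of that convention follows; the function name is hypothetical, and the exact moments used in the paper may differ.

```python
import numpy as np

def color_moments(image):
    # First three moments (mean, standard deviation, skewness) per channel
    feats = []
    for c in range(image.shape[-1]):
        ch = image[..., c].astype(float).ravel()
        mu = ch.mean()
        sigma = ch.std()
        skew = np.cbrt(((ch - mu) ** 3).mean())   # cube root keeps the sign
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```

For a 3-channel image this yields a compact 9-dimensional color vector, which is why color moments pair well with a larger descriptor such as the autocorrelogram.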


Author(s):  
Hong Shao ◽  
Yueshu Wu ◽  
Wencheng Cui ◽  
Jinxia Zhang

2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Yang Zhang ◽  
Chaoyue Chen ◽  
Zerong Tian ◽  
Yangfan Cheng ◽  
Jianguo Xu

Objectives. To differentiate pituitary adenoma from Rathke cleft cyst in magnetic resonance (MR) scans by combining MR image features with texture features. Methods. A total of 133 patients were included in this study, 83 with pituitary adenoma and 50 with Rathke cleft cyst. Qualitative MR image features and quantitative texture features were evaluated using the chi-square test or Mann–Whitney U test. Binary logistic regression analysis was conducted to investigate their ability as independent predictors. ROC analysis was conducted subsequently on the independent predictors to assess their practical value in discrimination and was used to investigate the association between the two types of features. Results. Signal intensity on the contrast-enhanced image was found to be the only significantly different MR image feature between the two lesions. Two texture features from the contrast-enhanced images (Histo-Skewness and GLCM-Correlation) were found to be independent predictors in discrimination, with AUC values of 0.80 and 0.75, respectively. Besides, the above two texture features (Histo-Skewness and GLCM-Contrast) were suggested to be associated with signal intensity on the contrast-enhanced image. Conclusion. Signal intensity on the contrast-enhanced image was the most significant MR image feature in differentiating between pituitary adenoma and Rathke cleft cyst, and texture features also showed promising and practical ability in discrimination. Moreover, the two types of features could be coordinated with each other.
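The GLCM-Correlation predictor named above comes from a grey-level co-occurrence matrix. A minimal sketch for a single horizontal pixel offset is shown below, assuming the image has already been quantised to a small number of grey levels; the function name and offset choice are illustrative, not the study's exact radiomics configuration.

```python
import numpy as np

def glcm_correlation(img, levels=8):
    # img: 2-D integer array already quantised to values in [0, levels)
    P = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[a, b] += 1.0                 # co-occurrence count for the (0, 1) offset
    P /= P.sum()                       # normalise to a joint probability
    i = np.arange(levels)
    mu_i = (P.sum(axis=1) * i).sum()
    mu_j = (P.sum(axis=0) * i).sum()
    sd_i = np.sqrt((P.sum(axis=1) * (i - mu_i) ** 2).sum())
    sd_j = np.sqrt((P.sum(axis=0) * (i - mu_j) ** 2).sum())
    cov = (P * np.outer(i - mu_i, i - mu_j)).sum()
    return cov / (sd_i * sd_j + 1e-12)
```

A perfectly linear intensity ramp gives a correlation of 1, while a random texture gives a value near 0, which is what makes the measure useful for separating homogeneous from heterogeneous lesions.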


2018 ◽  
Vol 7 (2) ◽  
pp. 62-65
Author(s):  
Shivani ◽ 
Sharanjit Singh

Fruit disease detection is critical at an early stage since disease affects the farming industry, which is in turn critical to the growth of India's economy. To this end, the proposed system uses a universal filter to enhance the image captured from the source. This filter eliminates any noise from the image, tackling not only salt-and-pepper noise but also Gaussian noise. A feature extraction operation is applied to extract color and texture features. The segmented image so obtained is processed with a convolutional neural network (CNN) and k-means clustering for classification. CNN layers are applied to obtain an optimized result in terms of classification accuracy, while the clustering operation increases the speed at which classification is performed. The clusters contain the disease information, so the entire feature set does not need to be searched: labelling information is compared against the appropriate clusters only. Results are improved by a significant margin, proving the worth of the study.
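The speed-up described above, searching only the matching cluster instead of the whole feature set, can be sketched with a plain k-means pass plus a nearest-centroid lookup. This is a generic illustration of the idea, assuming deterministic seeding from the first k points; it is not the paper's exact clustering configuration.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Plain Lloyd iterations; the first k points seed the centroids
    C = X[:k].astype(float).copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

def nearest_cluster(feat, C):
    # At query time, only this cluster's members need label comparison
    return int(np.argmin(((C - feat) ** 2).sum(-1)))
```

With k clusters of roughly equal size, label comparison touches about 1/k of the stored features per query, which is where the claimed classification speed-up comes from.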


2021 ◽  
Vol 8 (7) ◽  
pp. 97-105
Author(s):  
Ali Ahmed ◽ 
Sara Mohamed
Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to the query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: the first is the extraction of features and the second is the matching of similarities. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggested several methods for combining and fusing various image descriptors. In general, fusion strategies are typically divided into two groups, namely early and late fusion strategies. Early fusion is the combination of image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to the combination of outputs produced by various retrieval systems or to the combination of different similarity rankings. In this study, a group of color and texture features is proposed for both fusion strategies: firstly, eighteen color features and twelve texture features are combined into a single vector representation for early fusion, and secondly, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method achieves good retrieval results compared to the traditional way of using a single feature descriptor, and acceptable retrieval performance compared to some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% on the Corel-1K and GHIM-10K datasets, respectively.
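The early/late distinction drawn above can be sketched in a few lines: early fusion concatenates descriptor vectors before any distance is computed, while late fusion merges the rankings produced by several distance measures. The rank-sum (Borda-style) rule used below is one common late-fusion choice, not necessarily the authors' exact combination scheme.

```python
import numpy as np

def early_fusion(color_vec, texture_vec):
    # Join per-descriptor vectors into one before any similarity computation
    return np.concatenate([color_vec, texture_vec])

def late_fusion(distance_lists):
    # Rank-sum combination of several distance measures over the same gallery
    n = len(distance_lists[0])
    scores = np.zeros(n)
    for d in distance_lists:
        order = np.argsort(d)           # smallest distance = best match
        ranks = np.empty(n)
        ranks[order] = np.arange(n)
        scores += ranks
    return np.argsort(scores)           # fused ranking, best first
```

Early fusion of an 18-dimensional color vector and a 12-dimensional texture vector, as in the study, yields a single 30-dimensional representation per image.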


Content-Based Image Retrieval (CBIR) is an extensively used technique for retrieving images from large image databases. However, users are not satisfied with conventional image retrieval techniques. In addition, with the advent of web development and transmission networks, the number of images available to users continues to increase, and considerable digital image production takes place constantly in many areas. Quick access to images similar to a given query image in such an extensive collection poses great challenges and requires proficient techniques. From query by image to retrieval of relevant images, CBIR has key phases such as feature extraction, similarity measurement, and retrieval of relevant images, and extracting the features of the images is one of the most important steps. Recently, convolutional neural networks (CNNs) have shown good results in the field of computer vision due to their ability to extract features from images. AlexNet is a classical deep CNN for image feature extraction. We have modified the AlexNet architecture with a few changes and propose a novel framework to improve its ability for feature extraction and similarity measurement. The proposed approach optimizes AlexNet in the pooling layer: in particular, average pooling is replaced by max-avg pooling, and the non-linear activation function Maxout is used after every convolution layer for better feature extraction. This paper introduces CNN-based feature extraction into the CBIR system and also presents Euclidean distance along with comprehensive values for better results. The proposed framework goes beyond small-scale image retrieval to large-scale databases. The performance of the proposed work is evaluated using precision and shows better results than existing works.
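The abstract describes max-avg pooling only at a high level; one plausible reading, averaging the max-pooled and average-pooled maps over each window, is sketched below on a single 2-D feature map. The function name and the 0.5/0.5 weighting are assumptions, not the authors' confirmed formulation.

```python
import numpy as np

def max_avg_pool(x, k=2):
    # Average of max-pooling and average-pooling over k x k windows
    h, w = (x.shape[0] // k) * k, (x.shape[1] // k) * k
    blocks = x[:h, :w].reshape(h // k, k, w // k, k)
    return 0.5 * (blocks.max(axis=(1, 3)) + blocks.mean(axis=(1, 3)))
```

The intuition is that max pooling keeps the strongest activation while average pooling keeps context; blending the two retains both kinds of evidence in the pooled feature map.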


2021 ◽  
Vol 32 (4) ◽  
pp. 1-13
Author(s):  
Xia Feng ◽  
Zhiyi Hu ◽  
Caihua Liu ◽  
W. H. Ip ◽  
Huiying Chen

In recent years, deep learning has achieved remarkable results in the text-image retrieval task. However, when only global image features are considered, vital local information is ignored, resulting in a failure to match the text well. Considering that object-level image features can help the matching between text and image, this article proposes a text-image retrieval method that fuses salient image feature representations. Fusing salient features at the object level can improve the understanding of image semantics and thus improve the performance of text-image retrieval. The experimental results show that the proposed method is comparable to the latest methods, and the recall rate of some retrieval results is better than that of current work.


Author(s):  
Priyesh Tiwari ◽  
Shivendra Nath Sharan ◽  
Kulwant Singh ◽  
Suraj Kamya

Content-based image retrieval (CBIR) is an application of the real-world computer vision domain in which, given a query image, similar images are searched for in a database. The research presented in this paper aims to find the best features and classification model for optimum CBIR results. Five different sets of feature combinations in two different color domains (i.e., RGB and HSV) are compared and evaluated using a neural network classifier, where the best result obtained is 88.2% in terms of classifier accuracy. The color moments feature comprises mean, standard deviation, kurtosis, and skewness, and the histogram feature is calculated via 10 probability bins. The Wang-1k dataset is used to evaluate the CBIR system's image retrieval performance. The research concludes that the integrated multi-level 3D color-texture feature yields the most accurate results and also performs better than individually computed color and texture features.
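The per-channel statistics and 10-bin histogram listed above can be sketched in one helper; the function name is hypothetical and assumes 8-bit channel values, while the paper's exact normalization may differ.

```python
import numpy as np

def channel_features(ch):
    # Mean, std, skewness, kurtosis plus a 10-bin probability histogram
    ch = ch.astype(float).ravel()
    mu, sd = ch.mean(), ch.std()
    z = (ch - mu) / (sd + 1e-12)
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean() - 3.0        # excess kurtosis
    hist, _ = np.histogram(ch, bins=10, range=(0, 256))
    return np.concatenate([[mu, sd, skew, kurt], hist / hist.sum()])
```

Applied to each of the three channels in both RGB and HSV, this gives a small fixed-length vector per color domain that is straightforward to feed to a neural network classifier.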


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Tianming Song ◽  
Xiaoyang Yu ◽  
Shuang Yu ◽  
Zhe Ren ◽  
Yawei Qu

Medical image technology is becoming more and more important in the medical field. It not only provides important information about the internal organs of the body for clinical analysis and medical treatment but also assists doctors in diagnosing and treating various diseases. However, medical image feature extraction suffers from problems such as inconspicuous features and a low feature preparation rate. Combined with the learning approach of convolutional neural networks, the image's multifeature vectors are quantized at a deeper level, which makes the image features more abstract and not only makes up for the one-sidedness of a single feature description but also improves the robustness of the feature descriptors. This paper presents a medical image processing method based on multifeature fusion, which achieves a high feature extraction effect on medical images of the chest, lung, brain, and liver and can better express the feature relationships of medical images. Experimental results show that the accuracy of the proposed method is more than 5% higher than that of other methods, demonstrating its better performance.

