A Comparative Study for Feature Extraction and Classification of Images

Author(s):  
Samar M. Abdelmoneim ◽  
Mohammed Kayed ◽  
Shereen A. Taie
2013 ◽  
Vol 475-476 ◽  
pp. 374-378
Author(s):  
Xue Ming Zhai ◽  
Dong Ya Zhang ◽  
Yu Jia Zhai ◽  
Ruo Chen Li ◽  
De Wen Wang

Image feature extraction and classification are increasingly important in all sectors of image system management. To address the problems that applying Hu invariant moments to extract image features is computationally expensive and produces too many dimensions, this paper presents a Harris corner invariant moments algorithm. Because the algorithm calculates only corner coordinates, it reduces the dimensionality of corner matching. Combined with an SVM (Support Vector Machine) classifier, we classified a large number of images, and the results show that extracting invariant moments with this algorithm achieves better classification accuracy.
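The abstract does not include code, but the pipeline it describes can be sketched roughly as follows (an assumed reading, not the authors' implementation): Harris corners are detected, Hu invariant moments are computed from the corner map only rather than from the full image, and an SVM classifies the resulting low-dimensional descriptors. All function and parameter choices below are illustrative.

```python
# Sketch only: Harris-corner-based invariant moments fed to an SVM classifier.
import cv2
import numpy as np
from sklearn.svm import SVC

def corner_moment_descriptor(gray_image, corner_quantile=0.99):
    # Harris corner response; keep only the strongest responses as corner coordinates.
    response = cv2.cornerHarris(np.float32(gray_image), blockSize=2, ksize=3, k=0.04)
    corner_mask = (response > np.quantile(response, corner_quantile)).astype(np.uint8)
    # Hu moments of the sparse corner mask instead of the whole image -> fewer dimensions.
    hu = cv2.HuMoments(cv2.moments(corner_mask)).flatten()
    # Log-scale the moments for numerical stability, a common convention.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# descriptors: (n_images, 7) array of corner-based Hu moments; labels: class ids.
# clf = SVC(kernel="rbf", C=1.0).fit(descriptors, labels)
# predictions = clf.predict(test_descriptors)
```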


2020 ◽  
Vol 1 (2) ◽  
pp. 69
Author(s):  
Yoga Widi Pamungkas ◽  
Adiwijaya Adiwijaya ◽  
Dody Qori Utama

Indonesia has a high biodiversity of snakes; species found throughout the country include both venomous and non-venomous snakes. One of the dangers posed by snakes is the bite of several deadly species. The number of snake bite cases recorded in Indonesia is quite high, and fatalities are not rare. Most deaths caused by snakebite occur because of errors in the procedure for handling the bite wound. One way to address this problem is to classify snake bite wounds as venomous or non-venomous. In this study, a classification system for snake bite wound images was built using Regionprops feature extraction and a Decision Tree algorithm. Bite images are classified as either venomous or non-venomous without knowing the species of snake. Regionprops supplies several features for the extraction process, including the number of centroids, area, distance, and eccentricity. Evaluation of the resulting model found that the number of centroids and the distance between centroids had the most significant influence on classifying snake bite wound images, yielding an accuracy of 97.14%, precision of 92.85%, recall of 91.42%, and an F1 score of 92.06%.
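A hedged sketch of the described pipeline is given below (not the authors' code): a binarized bite-wound image is measured with skimage's regionprops, and simple features (number of centroids, total area, mean inter-centroid distance, mean eccentricity) feed a Decision Tree. The exact feature set and parameters are assumptions based on the abstract.

```python
import numpy as np
from skimage.measure import label, regionprops
from scipy.spatial.distance import pdist
from sklearn.tree import DecisionTreeClassifier

def wound_features(binary_mask):
    # Label connected wound marks and measure each region.
    regions = regionprops(label(binary_mask))
    centroids = np.array([r.centroid for r in regions])
    n_centroids = len(regions)
    total_area = sum(r.area for r in regions)
    mean_ecc = np.mean([r.eccentricity for r in regions]) if regions else 0.0
    # Mean distance between puncture centroids (0 if fewer than two marks).
    mean_dist = pdist(centroids).mean() if n_centroids > 1 else 0.0
    return [n_centroids, total_area, mean_dist, mean_ecc]

# X = [wound_features(m) for m in training_masks]   # masks from prior segmentation
# y = [...]                                          # 1 = venomous, 0 = non-venomous
# clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
```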


Glaucoma is an eye disease recognized as the second most common cause of blindness. Glaucoma is irreversible, so it must be detected before sight is completely lost. Manual screening of glaucoma in large populations is difficult because experienced ophthalmology personnel are scarce. This research analyses the features of retinal images in glaucoma and builds an automatic glaucoma screening system with reduced complexity. Many treatments are now available to prevent vision loss due to glaucoma, but the disease must be detected at an early stage. Thus, the objective is to develop an automated method for identifying glaucoma from retinal images. The steps involved in this work are disc segmentation, texture feature extraction in different colour models, and classification of images as glaucomatous or not. The obtained results show 94% accuracy.
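The abstract names the steps but not the exact texture descriptors, so the sketch below is only one plausible instantiation: GLCM statistics (an assumed choice) computed per channel of an already-segmented optic disc in two colour models, giving a feature vector for any downstream classifier.

```python
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops

def disc_texture_features(disc_bgr):
    feats = []
    # Texture in two colour models: plain grayscale and the L channel of CIELAB.
    channels = [cv2.cvtColor(disc_bgr, cv2.COLOR_BGR2GRAY),
                cv2.cvtColor(disc_bgr, cv2.COLOR_BGR2LAB)[:, :, 0]]
    for ch in channels:
        glcm = graycomatrix(ch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.extend(graycoprops(glcm, prop).ravel())
    return np.array(feats)  # fed to a classifier for the glaucomatous / normal decision
```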


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, in which case the image must be rotated before the text can be read. This type of text is found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the direction of the text before recognizing it. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
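The article's exact architecture is not reproduced here; the following is a minimal sketch of a convolutional network for the described binary decision (upright vs. rotated 180 degrees), with layer sizes chosen purely for illustration.

```python
import torch
import torch.nn as nn

class TextOrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # 0 = readable left to right, 1 = rotated 180 degrees

    def forward(self, x):                   # x: (batch, 1, H, W) grayscale cover crops
        return self.classifier(self.features(x).flatten(1))

# logits = TextOrientationNet()(torch.randn(8, 1, 64, 256))  # sanity check -> shape (8, 2)
```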


2011 ◽  
Vol 8 (1) ◽  
pp. 201-210
Author(s):  
R.M. Bogdanov

The problem of determining the repair sections of a main oil pipeline is solved based on the classification of images using distance functions and the clustering principle. The criteria characterizing a cluster are determined by given reference values; a defect is assigned to a cluster by comparison with these values, procedures for redistributing defects between cluster zones are provided, and the parameters of the cluster zones are updated. Calculations demonstrate the range of defect density variation across pipeline sections and the universal capability of cluster analysis to configure linear objects of arbitrary density.
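The clustering principle described above can be illustrated with a small sketch (names and values are hypothetical, not taken from the paper): each defect, given its coordinate along the pipeline, is assigned to the nearest cluster zone whose distance criterion it satisfies, after which the zone parameters are recomputed.

```python
import numpy as np

def assign_defects(defect_positions, zone_centers, max_distance):
    zones = {i: [] for i in range(len(zone_centers))}
    for x in defect_positions:
        d = np.abs(np.asarray(zone_centers) - x)   # distance function to each zone
        i = int(np.argmin(d))
        if d[i] <= max_distance:                   # criterion for belonging to the zone
            zones[i].append(x)
    # Redistribution step: recompute zone centres from the defects they now contain.
    new_centers = [np.mean(zones[i]) if zones[i] else c
                   for i, c in enumerate(zone_centers)]
    return zones, new_centers
```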


Author(s):  
Chaoqing Wang ◽  
Junlong Cheng ◽  
Yuefei Wang ◽  
Yurong Qian

A vehicle make and model recognition (VMMR) system is a common requirement in the field of intelligent transportation systems (ITS). However, it is a challenging task because of the subtle differences between vehicle categories. In this paper, we propose a hierarchical scheme for VMMR. Specifically, the scheme consists of (1) a feature extraction framework called weighted mask hierarchical bilinear pooling (WMHBP), based on hierarchical bilinear pooling (HBP), which weakens the influence of invalid background regions by generating a weighted mask while extracting features from discriminative regions to form a more robust feature descriptor; (2) a hierarchical loss function that learns the appearance differences between vehicle brands and enhances vehicle recognition accuracy; (3) collection of vehicle images from the Internet and classification of these images with hierarchical labels to augment the data, addressing the problems of insufficient data and low picture resolution and improving the model's generalization ability and robustness. We evaluate the proposed framework for accuracy and real-time performance; the experimental results show a recognition accuracy of 95.1% and a speed of 107 FPS (frames per second) on the Stanford Cars public dataset, which demonstrates the superiority of the method and its suitability for ITS.
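The WMHBP module itself is not given in the abstract, so the snippet below only sketches the general idea of weighted-mask bilinear pooling as we read it: two convolutional feature maps are projected to a common dimension, a learned spatial mask suppresses background locations, and the masked element-wise products are sum-pooled into a normalised descriptor. Layer names and sizes are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedMaskBilinearPooling(nn.Module):
    def __init__(self, c1, c2, proj_dim=512):
        super().__init__()
        self.p1 = nn.Conv2d(c1, proj_dim, 1)   # project the two feature maps
        self.p2 = nn.Conv2d(c2, proj_dim, 1)
        self.mask = nn.Conv2d(proj_dim, 1, 1)  # 1-channel spatial weight mask

    def forward(self, f1, f2):                 # f1: (B, c1, H, W), f2: (B, c2, H, W)
        x1, x2 = self.p1(f1), self.p2(f2)
        w = torch.sigmoid(self.mask(x1 * x2))  # per-location weight in [0, 1]
        pooled = (w * x1 * x2).flatten(2).sum(-1)   # sum-pool over H*W -> (B, proj_dim)
        return F.normalize(pooled, dim=1)      # L2-normalised bilinear descriptor
```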

