orientation histogram
Recently Published Documents

TOTAL DOCUMENTS: 65 (last five years: 7)
H-INDEX: 8 (last five years: 1)

Author(s): Sharad Pratap Singh, Shahanaz Ayub, J.P. Saini

Fingerprint matching is based on the number of minutiae that correspond between two fingerprints. The implementation mainly comprises image enhancement, segmentation, orientation-histogram computation, minutiae extraction and minutiae correspondence. Finally, a matching score is generated that indicates whether two fingerprints coincide; the score is computed in MATLAB, and an artificial neural network is simulated to refine the decision. A key advantage of the artificial neural network tool is that it yields a similarity index between the sample data and unknown data. A neural network is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. A fingerprint comparison therefore takes two fingerprints and produces a match score, which is used to decide whether the two impressions come from the same finger. This study compares normal and altered fingerprints using MATLAB code; the data comprise self-generated samples captured with a biometric scanner as well as open-source data available on the web, and the matching score (similarity index) is computed for each pair. The study shows that there is hardly any match between the normal and altered fingerprints of the same person.
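As a rough illustration of the orientation-histogram step mentioned in this abstract, the sketch below computes a gradient-orientation histogram for a fingerprint image and a cosine-similarity score between two such histograms; it is not the authors' MATLAB/minutiae implementation, and the bin count and stand-in images are assumptions.

```python
# Minimal sketch (not the authors' MATLAB/minutiae code): a global gradient-
# orientation histogram per fingerprint image and a cosine-similarity score
# between two such histograms. Bin count and stand-in images are assumptions.
import cv2
import numpy as np

def orientation_histogram(gray, bins=18):
    """Histogram of local ridge orientations, weighted by gradient magnitude."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # ridge orientation is modulo 180
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)

def matching_score(h1, h2):
    """Cosine similarity in [0, 1]; higher means more similar orientation content."""
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-9))

# Stand-in images (real use would load normal and altered fingerprint scans).
a = (np.random.rand(256, 256) * 255).astype(np.uint8)
b = (np.random.rand(256, 256) * 255).astype(np.uint8)
print(matching_score(orientation_histogram(a), orientation_histogram(b)))
```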


2020, Vol 12 (21), pp. 3612
Author(s): Jiguang Dai, Chengcheng Li, Yuqiang Zuo, Haibin Ai

Determining samples is a precondition for deep network training and learning, but at present, samples are usually created manually, which limits the application of deep networks. Therefore, this article proposes an OpenStreetMap (OSM) data-driven method for creating road-positive samples. First, based on the OSM data, a line segment orientation histogram (LSOH) model is constructed to determine the local road direction. Second, a road homogeneity constraint rule and a road texture feature statistical model are constructed to extract the local road lines, and on the basis of the local road lines with the same direction, a polar constraint rule is proposed to determine the local road line set. Then, an iterative interpolation algorithm is used to connect the local road lines on both sides of the gaps between them. Finally, a local texture self-similarity (LTSS) model is used to determine the road width, and a center-point auto-correction model and the random sample consensus (RANSAC) algorithm are used to extract the road centerline; the road width and road centerline together complete the creation of the road-positive samples. Experiments are conducted on different scenes and different types of images to evaluate the proposed method and compare it with other approaches. The results demonstrate that the proposed method for creating road-positive samples has clear advantages in terms of accuracy and integrity.
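The sketch below illustrates one plausible reading of the LSOH step: a length-weighted histogram of line-segment orientations whose dominant bin gives the local road direction. The bin width and the toy segments are assumptions, not the paper's settings.

```python
# Simplified sketch of the line segment orientation histogram (LSOH) idea:
# given line segments (e.g. from OSM road vectors) as endpoint pairs,
# accumulate a length-weighted histogram of segment orientations and take
# the dominant bin as the local road direction.
import numpy as np

def lsoh_dominant_direction(segments, bin_width_deg=10.0):
    """segments: iterable of ((x1, y1), (x2, y2)) tuples in image coordinates."""
    n_bins = int(180 / bin_width_deg)
    hist = np.zeros(n_bins)
    for (x1, y1), (x2, y2) in segments:
        dx, dy = x2 - x1, y2 - y1
        length = np.hypot(dx, dy)
        angle = np.degrees(np.arctan2(dy, dx)) % 180.0   # undirected lines
        hist[int(angle // bin_width_deg) % n_bins] += length
    dominant_bin = int(np.argmax(hist))
    return (dominant_bin + 0.5) * bin_width_deg, hist

# Example: two roughly horizontal segments and one shorter vertical one.
direction, hist = lsoh_dominant_direction(
    [((0, 0), (100, 5)), ((10, 2), (90, 0)), ((50, 0), (52, 60))])
print(direction)  # falls in the 0-10 degree bin: the dominant (horizontal) road direction
```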


Within computer vision, there is a strong need for techniques that can retrieve an image from an available dataset, a task known as content-based image retrieval (CBIR). This paper discusses one application of CBIR using an efficient combination of two techniques: the retrieval, from a database, of images of people belonging to a minority group. We combine a color image histogram technique and an edge orientation histogram technique by dividing the original image into small sub-blocks. The feature vector is formed by concatenating the features obtained from these two methods. The features obtained from the query image are compared with the feature vectors of the database images using a Canberra distance classifier. The proposed method is evaluated on several self-prepared databases and on databases collected from the internet. Our method efficiently integrates color, texture, shape, and orientation features, and it is compared with state-of-the-art techniques to demonstrate its stability and superior accuracy.
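A minimal sketch of the retrieval pipeline described above, under assumed details: per-block color histograms concatenated with per-block edge-orientation histograms, compared with the Canberra distance. The block grid and bin counts are illustrative, not the paper's exact configuration.

```python
# Minimal sketch: per-block color histograms concatenated with per-block
# edge-orientation histograms, compared with the Canberra distance.
import cv2
import numpy as np

def block_features(bgr, grid=4, color_bins=8, edge_bins=9):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            for c in range(3):  # per-channel color histogram of the sub-block
                ch, _ = np.histogram(bgr[ys, xs, c], bins=color_bins, range=(0, 256))
                feats.append(ch / (ch.sum() + 1e-9))
            eh, _ = np.histogram(ang[ys, xs], bins=edge_bins, range=(0, 180),
                                 weights=mag[ys, xs])  # edge orientation histogram
            feats.append(eh / (eh.sum() + 1e-9))
    return np.concatenate(feats)

def canberra(a, b):
    """Canberra distance: sum of |a - b| / (|a| + |b|), skipping all-zero bins."""
    denom = np.abs(a) + np.abs(b)
    mask = denom > 0
    return float(np.sum(np.abs(a - b)[mask] / denom[mask]))

# Usage with stand-in images (a real query would be compared against a database).
q = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
d = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(canberra(block_features(q), block_features(d)))
```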


Author(s): Tamarafinide Victory Dittimi, Ching Yee Suen

This research presents a client/server-based mobile application for the recognition and authentication of banknotes. The system extracts shape context (SC), Scale Invariant Feature Transform (SIFT), Gradient Location and Orientation Histogram (GLOH), and Histogram of Oriented Gradients (HOG) features. It then reduces the feature set using Principal Component Analysis (PCA), Bag of Words, and a proposed dimension-reduction approach based on low-variance and high-correlation filters. Classification is performed with a 2-fold Weighted Majority Average (WMA) ensemble using MPLNN and MCSVM as base classifiers. The application was built with Unity 3D and tested on Naira, USD, CAD, and Euro banknotes; the experimental results show that the implemented feature vector and the proposed feature reduction and classification technique gave the best results, with promising recognition accuracy, detection rate, and processing time.
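The sketch below shows the general kind of dimension reduction the abstract describes: a low-variance filter, a high-correlation filter, then PCA. The thresholds, component count, and stand-in feature matrix are assumptions, not the paper's values.

```python
# Hedged sketch: drop near-constant (low-variance) features, drop one of each
# highly correlated pair, then apply PCA. Thresholds are assumptions.
import numpy as np
from sklearn.decomposition import PCA

def reduce_features(X, var_thresh=1e-4, corr_thresh=0.95, n_components=64):
    # 1) Low-variance filter.
    keep = np.var(X, axis=0) > var_thresh
    X = X[:, keep]
    # 2) High-correlation filter: drop the second feature of each correlated pair.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    upper = np.triu(corr, k=1)
    drop = np.any(upper > corr_thresh, axis=0)
    X = X[:, ~drop]
    # 3) PCA to a fixed number of components.
    pca = PCA(n_components=min(n_components, X.shape[1]))
    return pca.fit_transform(X)

# Example with random stand-in features (real input would be SC/SIFT/GLOH/HOG vectors).
X = np.random.rand(200, 500)
print(reduce_features(X).shape)
```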


2019, Vol 15 (1)
Author(s): Archana Harsing Sable, Sanjay N. Talbar

Abstract: Numerous algorithms struggle to recognize faces invariantly to plastic surgery, owing to the texture variations it introduces in the skin. Since plastic surgery remains a challenging issue in the domain of face recognition, the topic needs to be revisited from both theoretical and experimental perspectives. In this paper, Adaptive Gradient Location and Orientation Histogram (AGLOH)-based feature extraction is proposed to accomplish effective plastic-surgery face recognition. The proposed features are extracted from the granular space of the faces. Additionally, variants of the local binary pattern are extracted to accompany the AGLOH features. Subsequently, the feature dimensionality is reduced using principal component analysis (PCA) to train an artificial neural network. The network is trained with particle swarm optimization instead of the traditional learning algorithms. The experiments involved 452 plastic-surgery faces covering blepharoplasty, brow lift, liposhaving, malar augmentation, mentoplasty, otoplasty, rhinoplasty, rhytidectomy and skin peeling. The results demonstrate the performance dominance of the proposed AGLOH.
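A hedged sketch of the training idea: PCA-reduced features feed a single-hidden-layer network whose weights are found by particle swarm optimization rather than backpropagation. The network size, swarm size, and PSO coefficients below are assumptions, not the paper's settings.

```python
# Sketch: PSO searches the flattened weight vector of a one-hidden-layer
# network, using mean squared error against one-hot labels as the fitness.
import numpy as np

def forward(w, X, n_in, n_hid, n_out):
    """Unpack a flat weight vector and run a one-hidden-layer network."""
    w1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:].reshape(n_hid, n_out)
    return np.tanh(X @ w1) @ w2

def pso_train(X, y_onehot, n_hid=16, n_particles=30, iters=200,
              w_inertia=0.7, c1=1.5, c2=1.5):
    n_in, n_out = X.shape[1], y_onehot.shape[1]
    dim = n_in * n_hid + n_hid * n_out
    pos = np.random.randn(n_particles, dim) * 0.1
    vel = np.zeros_like(pos)
    def loss(w):  # mean squared error as a simple fitness function
        return np.mean((forward(w, X, n_in, n_hid, n_out) - y_onehot) ** 2)
    pbest = pos.copy()
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(n_particles, 1), np.random.rand(n_particles, 1)
        vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest  # best flat weight vector found; pass to forward() for predictions
```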


Information, 2018, Vol 9 (12), pp. 299
Author(s): Ende Wang, Jinlei Jiao, Jingchao Yang, Dongyi Liang, Jiandong Tian

Keypoint matching is of fundamental importance in computer vision applications. Fish-eye lenses are convenient in applications that require a very wide angle of view, but their use has been limited by the lack of an effective matching algorithm. The Scale Invariant Feature Transform (SIFT) algorithm is an important technique in computer vision for detecting and describing local features in images. We therefore present Tri-SIFT, a set of modifications to the SIFT algorithm that improve descriptor accuracy and matching performance for fish-eye images while preserving the original robustness to scale and rotation. After SIFT keypoint detection is completed, the points in and around each keypoint are back-projected to a unit sphere following a fish-eye camera model. To simplify computation on the sphere, the descriptor is based on a modification of the Gradient Location and Orientation Histogram (GLOH). In addition, to improve invariance to scale and rotation in fish-eye images, the gradient magnitudes are replaced by the area of the surface, and the orientation is calculated on the sphere. Extensive experiments demonstrate that our modified algorithm outperforms SIFT and other related algorithms on fish-eye images.
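For illustration, the sketch below back-projects image points to a unit sphere under an equidistant fish-eye model (r = f·θ); the actual camera model used by Tri-SIFT may differ, and the focal length and principal point here are assumptions.

```python
# Illustrative sketch: back-project pixels to a unit sphere under an
# equidistant fish-eye model, where the radial image distance is
# proportional to the incidence angle theta.
import numpy as np

def backproject_equidistant(points, cx, cy, f):
    """points: (N, 2) pixel coordinates; returns (N, 3) unit vectors on the sphere."""
    pts = np.asarray(points, dtype=float)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    r = np.hypot(dx, dy)                     # radial distance from the principal point
    theta = r / f                            # equidistant model: r = f * theta
    phi = np.arctan2(dy, dx)                 # azimuth around the optical axis
    sin_t = np.sin(theta)
    return np.stack([sin_t * np.cos(phi),    # x
                     sin_t * np.sin(phi),    # y
                     np.cos(theta)], axis=1) # z (optical axis)

# Example: a keypoint at the image center maps to the optical axis (0, 0, 1).
print(backproject_equidistant([[640, 480], [1000, 700]], cx=640, cy=480, f=300))
```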


Sensors, 2018, Vol 18 (11), pp. 3780
Author(s): Xuehui Wu, Xiaobo Lu, Henry Leung

This work considers using camera sensors to detect fire smoke. Static features, including texture, wavelet, color, edge orientation histogram and irregularity, and dynamic features, including motion direction, change of motion direction and motion speed, are extracted from fire smoke and used for training and testing in different combinations. A robust AdaBoost (RAB) classifier is proposed to improve training and classification accuracy. Extensive experiments on well-known challenging datasets, together with an application to fire smoke detection, demonstrate that the proposed fire smoke detector achieves satisfactory performance.
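As a stand-in for the classification stage, the sketch below trains scikit-learn's standard AdaBoost on a toy feature matrix; the paper's robust AdaBoost (RAB) variant is not reproduced here, and the data are random placeholders for the listed static/dynamic smoke features.

```python
# Sketch of the classification stage with scikit-learn's standard AdaBoost
# (not the paper's robust variant). Columns stand in for the extracted
# static and dynamic smoke features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: rows are frames/regions, columns are extracted smoke features.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)   # 1 = smoke, 0 = non-smoke (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```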


2018, Vol 7 (4.6), pp. 299
Author(s): G. N. Balaji, S. V. Suryanarayana, C. Veeramani

Hand gesture recognition plays a vital role in numerous applications, ranging from mobile phones to 3D analysis of anatomy and from gaming to medical science. In most research and current commercial systems, hand gesture recognition has been implemented using either vision-based strategies or sensor-based gloves, where colored markers or attachments are placed on the hand to capture the gestures. Another essential issue with vision-based approaches is the illumination conditions: the threshold used for segmentation has to change with light variations. This paper proposes a system that extracts the gesture region from the hand image by preprocessing and then extracts an orientation-histogram-based feature. To recognize the gestures, the extracted HOG feature vectors are provided to a support vector machine (SVM). The proposed system is tested on 84 images and achieves an accuracy of 94.04%.
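A minimal sketch of the HOG + SVM pipeline described above, using scikit-image and scikit-learn; the image size, HOG parameters, and SVM settings are assumptions rather than the paper's configuration.

```python
# Minimal HOG + SVM sketch: resize each grayscale hand image, compute a HOG
# descriptor, and train a support vector machine on the descriptors.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

def hog_feature(gray_image):
    """Resize to a fixed size and compute a HOG descriptor."""
    img = resize(gray_image, (128, 128), anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_gesture_svm(gray_images, labels):
    X = np.array([hog_feature(im) for im in gray_images])
    clf = SVC(kernel="rbf", C=10.0)   # kernel and C are illustrative choices
    clf.fit(X, labels)
    return clf

# Usage (hypothetical data): clf = train_gesture_svm(train_images, train_labels)
# pred = clf.predict([hog_feature(test_image)])
```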

