Dog Identification Method Based on Muzzle Pattern Image

2020, Vol 10 (24), pp. 8994
Author(s):  
Dong-Hwa Jang ◽  
Kyeong-Seok Kwon ◽  
Jung-Kon Kim ◽  
Ka-Young Yang ◽  
Jong-Bok Kim

Currently, invasive and external radio frequency identification (RFID) devices and pet tags are widely used for dog identification. However, social problems such as abandoned and lost dogs continue to increase, so a more effective alternative to the existing identification methods is required, and biometrics can be that alternative. This paper proposes an effective dog muzzle recognition method to identify individual dogs. The proposed method consists of preprocessing, feature extraction, matching, and postprocessing. For preprocessing, image resizing and histogram equalization are used. For feature extraction, the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB) are applied and compared. For matching, the Fast Library for Approximate Nearest Neighbors (FLANN) is used for SIFT and SURF, and Hamming distance is used for BRISK and ORB. For postprocessing, two techniques to reduce incorrect matches are proposed. The proposed method was evaluated on 55 dog muzzle pattern images acquired from 11 dogs and 990 images augmented by image deformation (i.e., angle, illumination, noise, and affine transforms). The best Equal Error Rate (EER) of the proposed method was 0.35%, and ORB was the most appropriate for dog muzzle pattern recognition.
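The pipeline described above maps naturally onto OpenCV. The following is a minimal sketch, not the authors' code: ORB extraction with Hamming-distance matching between two muzzle images, with Lowe's ratio test standing in for one of the match-filtering postprocessing steps. The file names, image size, and ratio threshold are illustrative assumptions.

```python
import cv2

def match_muzzles(path_a, path_b, ratio=0.75):
    # Preprocessing: grayscale load, resize, histogram equalization
    img_a = cv2.equalizeHist(cv2.resize(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), (400, 400)))
    img_b = cv2.equalizeHist(cv2.resize(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), (400, 400)))

    # Feature extraction: ORB produces binary descriptors
    orb = cv2.ORB_create(nfeatures=1000)
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)

    # Matching: Hamming distance is the appropriate metric for binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_a, des_b, k=2)

    # Postprocessing (one common way to reduce incorrect matches): ratio test
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

if __name__ == "__main__":
    # A higher count of surviving matches suggests the two muzzles belong to the same dog
    print(match_muzzles("dog1_muzzle.png", "dog2_muzzle.png"))
```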

2020, Vol 33 (1), pp. 133-153
Author(s):  
Fereshteh Abedini ◽  
Mahdi Bahaghighat ◽  
Misak S’hoyan

Wind turbine towers (WTTs) are the main structures of wind farms. They are costly devices that must be thoroughly inspected according to maintenance plans. Today, machine vision techniques combined with unmanned aerial vehicles (UAVs) enable fast, easy, and intelligent visual inspection of these structures. Our work is aimed at developing a vision-based system to perform nondestructive tests (NDTs) on wind turbines using UAVs. In order to navigate the flying machine toward the wind turbine tower and reliably land on it, the exact position of the wind turbine and its tower must be detected. We employ several strong computer vision approaches, such as the Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Features from Accelerated Segment Test (FAST), Brute-Force matching, and the Fast Library for Approximate Nearest Neighbors (FLANN), to detect the WTT. Then, to increase the reliability of the system, we apply the ResNet, MobileNet, ShuffleNet, EffNet, and SqueezeNet pre-trained classifiers to verify whether a detected object is indeed a turbine tower. This intelligent monitoring system has auto-navigation ability and can be used for future goals, including intelligent fault diagnosis and maintenance. The simulation results show that the accuracy of the proposed model is 89.4% in WTT detection and 97.74% in verification (classification).
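The verification stage can be prototyped with any of the pre-trained backbones named above. Below is a minimal sketch, not the paper's implementation, using a frozen ImageNet MobileNetV2 with a small binary "tower / not tower" head; the input size, head layers, and training data are assumptions for illustration.

```python
import tensorflow as tf

def build_tower_verifier(input_shape=(224, 224, 3)):
    # Pre-trained ImageNet backbone, frozen; only the small head is trained
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet", pooling="avg")
    backbone.trainable = False

    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(detected object is a WTT)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Usage sketch (hypothetical datasets):
# model = build_tower_verifier()
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```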


Data, 2018, Vol 3 (4), pp. 52
Author(s):  
Oleksii Gorokhovatskyi ◽  
Volodymyr Gorokhovatskyi ◽  
Olena Peredrii

In this paper, we propose an investigation of the properties of structural image recognition methods in the cluster space of characteristic features. Recognition based on key point descriptors such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), etc. often relies on searching for corresponding descriptor values between an input image and all etalon images, which requires many operations and much time. We describe recognition over previously quantized (clustered) sets of descriptor features. Clustering is performed across the complete set of etalon image descriptors and is followed by screening, which allows each etalon image to be represented in vector form as a distribution over clusters. Due to such representations, the number of computation and comparison procedures, which are the core of the recognition process, may be reduced by tens of times; in exchange, the preprocessing stage takes additional time for clustering. The implementation of the proposed approach was tested on the Leeds Butterfly dataset, and the dependence of recognition performance and processing time on the number of clusters was investigated. It was shown that recognition may be performed up to nine times faster, with only a moderate decrease in recognition quality, compared to searching for correspondences between all existing descriptors of the etalon images and the input one without quantization.
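A minimal sketch of the cluster-space idea follows; it is not the authors' implementation. ORB descriptors stand in for whichever key point descriptors are used, scikit-learn's KMeans performs the quantization, and the cluster count is an illustrative assumption.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def orb_descriptors(image_path, nfeatures=500):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, des = cv2.ORB_create(nfeatures=nfeatures).detectAndCompute(img, None)
    return des

def build_cluster_space(etalon_paths, n_clusters=64):
    # Preprocessing stage: cluster the complete set of etalon descriptors once
    all_des = np.vstack([orb_descriptors(p) for p in etalon_paths]).astype(np.float32)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(all_des)

def cluster_histogram(image_path, kmeans):
    # Represent an image as a normalized distribution over clusters
    labels = kmeans.predict(orb_descriptors(image_path).astype(np.float32))
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Recognition then reduces to comparing short histograms (e.g. by Euclidean or
# chi-square distance) instead of matching every descriptor pair.
```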


2019, pp. 1-3
Author(s):  
Anita Kaklotar

Breast cancer is the primary and most common disease found among women. Today, mammography is the most powerful screening technique used for early detection of cancer, which increases the chance of successful treatment. In order to correctly label mammogram images as benign or malignant, a classifier is needed. With this objective, an attempt is made to analyze different feature extraction techniques and classifiers. In the proposed system, we first preprocess the mammogram images, removing unwanted noise and disturbances. Features are then extracted from the mammogram images using the Gray Level Co-Occurrence Matrix (GLCM) and the Scale-Invariant Feature Transform (SIFT). Finally, the features are classified using classifiers such as HiCARe (Classifier based on High Confidence Association Rule Agreements), the Support Vector Machine (SVM), the Naïve Bayes classifier, and the K-NN classifier. The test images are then assigned to the benign or malignant class.
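As a rough illustration of the GLCM-plus-SVM branch of this pipeline (not the paper's code), the sketch below uses scikit-image and scikit-learn as stand-ins for whatever tools were actually used; the GLCM distances, angles, and property set are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_image):
    # gray_image: 2-D uint8 array (a preprocessed mammogram region)
    glcm = graycomatrix(gray_image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # One value per (distance, angle) pair for each texture property
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Usage sketch (hypothetical arrays):
# X = np.array([glcm_features(img) for img in mammogram_rois])
# clf = SVC(kernel="rbf").fit(X, labels)  # labels: 0 = benign, 1 = malignant
```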


Author(s):  
Fan Zhang

With the development of computer technology, the simulation fidelity of virtual reality keeps improving, and accurate recognition of human–computer interaction gestures is a key technology for enhancing the realism of virtual reality. This article briefly introduces three gesture feature extraction methods, the scale-invariant feature transform (SIFT), the local binary pattern (LBP), and the histogram of oriented gradients (HOG), together with a back-propagation (BP) neural network for classifying and recognizing different gestures. The gesture feature vectors obtained by the three feature extraction methods were used in turn as input data to the BP neural network and simulated in MATLAB. The results showed that the gesture feature map extracted by HOG was the closest to the original image; the BP neural network fed with HOG feature vectors converged to stability fastest and had the smallest error once stable; and, for gesture recognition, the BP neural network fed with HOG feature vectors had higher accuracy and precision and a lower false alarm rate.
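A minimal sketch of the HOG-plus-BP pipeline described above, using scikit-image's HOG and scikit-learn's MLPClassifier as a Python stand-in for the MATLAB BP network; the image size, HOG parameters, and network shape are assumptions.

```python
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier

def hog_features(gray_image):
    # gray_image: 2-D array, e.g. a 64x64 grayscale gesture image
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Usage sketch (hypothetical arrays):
# X = np.array([hog_features(img) for img in gesture_images])
# bp_net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)  # one hidden layer, trained by back-propagation
# bp_net.fit(X, gesture_labels)
```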

