Fast Pedestrian Recognition Based on Multisensor Fusion

2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
Hongyu Hu ◽  
Zhaowei Qu ◽  
Zhihui Li ◽  
Jinhui Hu ◽  
Fulu Wei

A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. First, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and their corresponding candidate regions in the image are then located through camera calibration and a perspective mapping model. To avoid the high training and recognition cost caused by high-dimensional feature vectors, a region of interest-based integral histogram of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach on several video sequences from realistic urban road scenarios. The multisensor fusion method achieves reliable and real-time performance.
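A minimal sketch of the recognition stage under common assumptions: HOG descriptors are computed only inside the laser-proposed ROIs and classified with a linear SVM (scikit-image and scikit-learn). The window size, HOG parameters, and placeholder training data are illustrative and do not reproduce the paper's ROI-IHOG implementation.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def roi_hog_features(image, rois, size=(128, 64)):
    """HOG descriptor for each candidate region (x, y, w, h) projected from the laser scan."""
    feats = []
    for x, y, w, h in rois:
        patch = resize(image[y:y + h, x:x + w], size, anti_aliasing=True)
        feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2), block_norm='L2-Hys'))
    return np.array(feats)

# Offline training on labeled pedestrian / background windows (placeholder data).
rng = np.random.default_rng(0)
X = rng.random((200, 3780))          # 3780 = HOG length for a 128x64 window
y = rng.integers(0, 2, 200)
clf = LinearSVC(C=0.01).fit(X, y)

# Online recognition: classify only the laser-proposed ROIs in a frame.
frame = rng.random((480, 640))
rois = [(100, 50, 64, 128), (300, 60, 64, 128)]   # hypothetical candidates
print(clf.predict(roi_hog_features(frame, rois)))
```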

Author(s):  
Htwe Pa Pa Win ◽  
Phyo Thu Thu Khine ◽  
Khin Nwe Ni Tun

This paper proposes a new feature extraction method for off-line recognition of Myanmar printed documents. One of the most important factors in achieving high recognition performance in an Optical Character Recognition (OCR) system is the selection of the feature extraction method. Existing OCR systems use various feature extraction methods because of the diversity of the scripts they target. One major contribution of this work is the design of logically rigorous coding-based features. To show the effectiveness of the proposed method, the paper assumes that the documents have been successfully segmented into characters and extracts features from these isolated Myanmar characters. The features are extracted using structural analysis of Myanmar script. The experiments were carried out using a Support Vector Machine (SVM) classifier, and the results are compared with the previously proposed feature extraction method.
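For illustration only, a hedged sketch of the classification stage: the paper's coding-based structural features of Myanmar script are not reproduced here, so hypothetical zone-density features of segmented characters stand in, fed to an SVM as described.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def zone_density_features(char_img, grid=(4, 4)):
    """Split a binarized character image into zones and use the ink density per zone."""
    h, w = char_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = char_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean())
    return np.array(feats)

# Placeholder dataset of segmented characters and class labels.
rng = np.random.default_rng(1)
X = np.stack([zone_density_features(rng.integers(0, 2, (32, 32))) for _ in range(300)])
y = rng.integers(0, 10, 300)
print(cross_val_score(SVC(kernel='rbf', C=10, gamma='scale'), X, y, cv=5).mean())
```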


2018 ◽  
Vol 10 (7) ◽  
pp. 1123 ◽  
Author(s):  
Yuhang Zhang ◽  
Hao Sun ◽  
Jiawei Zuo ◽  
Hongqi Wang ◽  
Guangluan Xu ◽  
...  

Aircraft type recognition plays an important role in remote sensing image interpretation. Traditional methods suffer from poor generalization performance, while deep learning methods require large amounts of type-labeled data, which are expensive and time-consuming to obtain. To overcome these problems, we propose in this paper an aircraft type recognition framework based on conditional generative adversarial networks (GANs). First, we design a new method to precisely detect aircraft keypoints, which are used to generate aircraft masks and locate the positions of the aircraft. Second, a conditional GAN with a region of interest (ROI)-weighted loss function is trained on unlabeled aircraft images and their corresponding masks. Third, an ROI feature extraction method is carefully designed to extract multi-scale features from the GAN in the aircraft regions. After that, a linear support vector machine (SVM) classifier is adopted to classify each sample using its features. Benefiting from the GAN, we can learn features strong enough to represent aircraft from a large unlabeled dataset. Additionally, the ROI-weighted loss function and the ROI feature extraction method make the features relate more to the aircraft than to the background, which improves feature quality and increases recognition accuracy significantly. Thorough experiments were conducted on a challenging dataset, and the results prove the effectiveness of the proposed aircraft type recognition framework.
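A rough sketch of the final classification step only, assuming multi-scale features have already been pooled from the GAN over each aircraft ROI; the feature dimension, type labels, and data split below are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
gan_roi_features = rng.random((500, 2048))   # hypothetical pooled GAN features per ROI
type_labels = rng.integers(0, 6, 500)        # hypothetical aircraft-type labels

# Standardize the features, then train a linear SVM on the labeled subset.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
clf.fit(gan_roi_features[:400], type_labels[:400])

# Report plain accuracy on the held-out samples.
print((clf.predict(gan_roi_features[400:]) == type_labels[400:]).mean())
```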


Author(s):  
Aswathy K Cherian ◽  
Poovammal E ◽  
Malathy C

Objective: Cancer is the uncontrolled multiplication of cells in the human body. The growth of cancerous cells in the breast region of women is identified as breast cancer, which is mostly diagnosed in women aged above 40. With current advances in the medical field, various automatic tests are available for the identification of cancerous tissue. Cancerous cells are detected from X-ray images of the breast region, called mammograms. Segmentation of mammograms is the primary step toward diagnosis; it involves pre-processing of the image to identify the region of interest (ROI). Later, features are extracted from the image, including learned statistical and textural features [7]. When these features are used as input to a simple classifier, they help predict the risk of cancer. The support vector machine (SVM) classifier was shown to produce a better accuracy percentage with the extracted features. Methods: The mammograms are subjected to a pre-processing stage, where the images are processed to identify the ROI. Next, statistical [9] and textural features are extracted from these images. Finally, these features are fed to a simple classifier to predict the risk of cancer. Results: The SVM classifier produced a maximum accuracy of about 88.67% using 13 features, including both statistical and textural features. The features taken for the study are mean, inverse difference moment, energy, entropy, root mean square, correlation, homogeneity, variance, skewness, range, contrast, kurtosis, and smoothness. Conclusion: Computer-aided diagnosis is one of the most common methods of detecting cancer in mammograms and involves only minor human intervention. The mammogram dataset was analyzed, and SVM provided the highest accuracy of 88.67%. A wide range of studies is in progress in the field of cancer research, as this disease poses a serious threat to human life.
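A minimal sketch of the feature-and-classify stage, assuming the ROI has already been segmented from the mammogram; a few statistical and GLCM texture descriptors (via scikit-image) stand in for the paper's 13 features, and the ROIs and labels are placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(roi):
    """A few statistical and GLCM texture descriptors of an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([
        roi.mean(), roi.std(), roi.var(),              # statistical descriptors
        graycoprops(glcm, 'contrast')[0, 0],           # textural descriptors
        graycoprops(glcm, 'homogeneity')[0, 0],
        graycoprops(glcm, 'correlation')[0, 0],
        graycoprops(glcm, 'energy')[0, 0],
    ])

rng = np.random.default_rng(0)
rois = rng.integers(0, 256, (120, 64, 64), dtype=np.uint8)   # placeholder ROI patches
labels = rng.integers(0, 2, 120)                             # benign / malignant labels
X = np.stack([texture_features(r) for r in rois])
clf = SVC(kernel='rbf', gamma='scale').fit(X, labels)
```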


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 2023 ◽  
Author(s):  
Guoxu Liu ◽  
Shuyi Mao ◽  
Jae Ho Kim

An algorithm was proposed for automatic tomato detection in regular color images to reduce the influence of illumination and occlusion. In this method, the Histograms of Oriented Gradients (HOG) descriptor was used to train a Support Vector Machine (SVM) classifier. A coarse-to-fine scanning method was developed to detect tomatoes, followed by a proposed False Color Removal (FCR) method to remove false-positive detections. Non-Maximum Suppression (NMS) was used to merge overlapping results. Compared with other methods, the proposed algorithm showed substantial improvement in tomato detection. The results on the test images showed that the recall, precision, and F1 score of the proposed method were 90.00%, 94.41%, and 92.15%, respectively.
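The non-maximum suppression step can be illustrated with a short, self-contained sketch; the box format and IoU threshold below are common choices rather than values taken from the paper.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Keep the highest-scoring boxes and drop ones that overlap them heavily.
    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) detection confidences."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        # Intersection of the current best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou < iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]: the two overlapping detections are merged
```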


2014 ◽  
Vol 2014 ◽  
pp. 1-17 ◽  
Author(s):  
Chao Mi ◽  
Xin He ◽  
Haiwei Liu ◽  
Youfang Huang ◽  
Weijian Mi

With the development of port automation, most operational fields utilizing heavy equipment have gradually become unmanned. It is therefore imperative to monitor these fields in an effective and real-time manner. In this paper, a fast human-detection algorithm based on image processing is proposed. To speed up the detection process, an optimized histograms of oriented gradients (HOG) algorithm, which avoids the large number of repeated calculations of the original HOG and ignores insignificant features, is used to describe the contour of the human body in real time. Based on the HOG features, and using a training sample set consisting of scene images of a bulk port, a support vector machine (SVM) classifier combined with an AdaBoost classifier is trained to detect humans. Finally, human detection experiments at Tianjin Port show that the proposed optimized algorithm achieves roughly the same accuracy as the traditional algorithm while taking only 1/7 of the computation time. The accuracy and computing time of the proposed fast human-detection algorithm were verified to meet the security requirements of unmanned port areas.
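A hedged sketch of boosting an SVM on HOG feature vectors, as a stand-in for the paper's SVM + AdaBoost combination; the data are placeholders, and the `estimator` keyword assumes scikit-learn 1.2 or later.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((400, 3780))          # placeholder HOG vectors (128x64 windows)
y = rng.integers(0, 2, 400)          # human / background labels

# SAMME boosting only needs class predictions, so the SVC base learner
# does not have to produce probability estimates.
ada_svm = AdaBoostClassifier(estimator=SVC(kernel='linear', C=0.1),
                             algorithm='SAMME', n_estimators=10)
ada_svm.fit(X[:300], y[:300])
print(ada_svm.score(X[300:], y[300:]))
```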


This paper presents a camera-based assistive text reading framework for product labels on objects, intended to help visually impaired people. The camera serves as the primary source of input. To recognize an object, the user moves it in front of the camera, and the moving object is detected by a background subtraction (BGS) method. The text region is then automatically localized as the region of interest (ROI). Text is extracted from the ROI by combining rule-based and learning-based techniques: a novel rule-based text localization algorithm detects geometric features such as pixel value, color intensity, and character size, while features such as gradient magnitude, gradient width, and stroke width are learned using an SVM classifier, and a model is built to separate text from non-text regions. The framework is integrated with optical character recognition (OCR) to extract the text, and the extracted text is given as voice output to the user. The system is evaluated on the ICDAR-2011 dataset, which consists of 509 natural scene images with ground truth.
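A hypothetical sketch of the object-detection step using OpenCV's MOG2 background subtractor; the text localization, SVM filtering, and OCR stages are not shown, and the area threshold is an arbitrary illustration.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # camera as the primary input source
bgs = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)                    # moving object = candidate region
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 5000:                       # keep large moving regions as text ROIs
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('roi', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```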


Agriculture is an important sector in economic and social life. Crop disease detection is an emerging field in India. Disease infection of sugarcane leaves can be minimized by detecting and grading leaf diseases at an early stage. In this paper, we detect and recognize sugarcane leaf diseases using both greyscale and color image processing and compare the efficacy of the two. In greyscale processing, gradient magnitude, the Otsu method, morphological operations, and normalization are applied to extract the region of interest (ROI), i.e., the diseased part. In color processing, the RGB image is first converted to L*a*b* format, and K-means clustering and edge detection operations are then applied to the L*a*b* image. The features of the greyscale- and color-processed images are extracted and fed to a support vector machine (SVM) classifier, which classifies ring, rust, and yellow spot sugarcane leaf diseases. The sugarcane leaf diseases are classified successfully with average accuracies of 84% and 92% for greyscale and color features, respectively.
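A sketch of the color pipeline under stated assumptions: RGB is converted to L*a*b* and K-means is run on the a*/b* channels to isolate candidate lesion regions. The file name, cluster count, and lesion-selection heuristic are illustrative, not taken from the paper.

```python
import cv2
import numpy as np

img = cv2.imread('sugarcane_leaf.jpg')               # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
ab = lab[:, :, 1:].reshape(-1, 2).astype(np.float32)  # cluster on a*/b* only

k = 3
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(ab, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
segmented = labels.reshape(img.shape[:2])

# Heuristic: the cluster whose a* centre deviates most from green is treated
# as the lesion ROI (an assumption for illustration).
lesion_cluster = np.argmax(centers[:, 0])
roi_mask = (segmented == lesion_cluster).astype(np.uint8) * 255
edges = cv2.Canny(roi_mask, 100, 200)                # edge detection on the ROI mask
```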


Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 693 ◽  
Author(s):  
Zhaoxi Li ◽  
Yaan Li ◽  
Kai Zhang

To improve the feature extraction of ship-radiated noise in a complex ocean environment, fluctuation-based dispersion entropy is used to extract the features of ten types of ship-radiated noise. Since fluctuation-based dispersion entropy analyzes the ship-radiated noise signal at only a single scale and cannot effectively distinguish different types of ship-radiated noise, a new method of ship-radiated noise feature extraction is proposed based on fluctuation-based dispersion entropy (FDispEn) and intrinsic time-scale decomposition (ITD). Firstly, ten types of ship-radiated noise signals are decomposed into a series of proper rotation components (PRCs) by ITD, and the FDispEn of each PRC is calculated. Then, the correlation between each PRC and the original signal is calculated, and the FDispEn of each PRC is analyzed to select the max-relative PRC fluctuation-based dispersion entropy as the feature parameter. Finally, by comparing the max-relative PRC fluctuation-based dispersion entropy of a number of the above ten types of ship-radiated noise signals with FDispEn, it is found that the max-relative PRC fluctuation-based dispersion entropy is at the same level for similar ship-radiated noise but is distinct for different types of ship-radiated noise. The max-relative PRC fluctuation-based dispersion entropy is fed as the feature vector into a support vector machine (SVM) classifier to classify and recognize the ten types of ship-radiated noise. The experimental results demonstrate that the recognition rate of the proposed method reaches 95.8763%. Consequently, the proposed method can effectively achieve the classification of ship-radiated noise.
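A compact, illustrative implementation of fluctuation-based dispersion entropy; the class count and embedding dimension are common defaults rather than the paper's settings, and the ITD decomposition into PRCs is not shown.

```python
import numpy as np
from scipy.stats import norm

def fdispen(x, m=3, c=6, d=1):
    """Fluctuation-based dispersion entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    # Map the signal to c classes through the normal CDF.
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)
    # Build embedding vectors and take first-order differences (the "fluctuation").
    n = len(z) - (m - 1) * d
    patterns = np.stack([z[i * d:i * d + n] for i in range(m)], axis=1)
    fluct = np.diff(patterns, axis=1)              # values in [-(c-1), c-1]
    # Relative frequency of each fluctuation pattern.
    _, counts = np.unique(fluct, axis=0, return_counts=True)
    p = counts / counts.sum()
    # Shannon entropy normalised by the number of possible patterns.
    return -np.sum(p * np.log(p)) / np.log((2 * c - 1) ** (m - 1))

rng = np.random.default_rng(0)
print(fdispen(rng.standard_normal(2048)))          # white noise yields a high value
```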


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Xiaoyun Liu ◽  
Xugang Xi ◽  
Xian Hua ◽  
Hujiao Wang ◽  
Wei Zhang

The feature extraction of surface electromyography (sEMG) signals has been an important aspect of myoelectric prosthesis control. To improve the practicability of myoelectric prosthetic hands, we propose a feature extraction method for sEMG signals based on wavelet weighted permutation entropy (WWPE). First, wavelet transform is used to decompose and preprocess sEMG signals collected from the relevant muscles of the upper limbs to obtain the wavelet sub-bands in each frequency segment. Then, the weighted permutation entropies (WPEs) of the wavelet sub-bands are extracted to construct the WWPE feature set. Lastly, the WWPE feature set is used as input to a support vector machine (SVM) classifier and a backpropagation neural network (BPNN) classifier to recognize seven hand movements. Experimental results show that the proposed method achieves remarkable recognition accuracy, superior to that of a single sub-band feature set and the commonly used time-domain feature set. The maximum recognition accuracy is 100% for hand movements, and the average recognition accuracies of SVM and BPNN are 100% and 98%, respectively.
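An exploratory sketch of the WWPE feature: wavelet sub-bands via PyWavelets, then a weighted permutation entropy per sub-band. The wavelet family, decomposition level, and embedding parameters are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt
from math import factorial

def weighted_permutation_entropy(x, m=4, d=1):
    """Permutation entropy with each ordinal pattern weighted by its local variance."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * d
    emb = np.stack([x[i * d:i * d + n] for i in range(m)], axis=1)
    patterns = np.argsort(emb, axis=1)            # ordinal pattern of each window
    weights = emb.var(axis=1)                     # weight = variance of the window
    _, inverse = np.unique(patterns, axis=0, return_inverse=True)
    p = np.bincount(inverse.ravel(), weights=weights)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(factorial(m))

def wwpe_features(emg, wavelet='db4', level=4):
    """WPE of each wavelet sub-band, concatenated into one feature vector."""
    coeffs = pywt.wavedec(emg, wavelet, level=level)
    return np.array([weighted_permutation_entropy(c) for c in coeffs])

rng = np.random.default_rng(0)
print(wwpe_features(rng.standard_normal(4000)))   # one feature vector per sEMG channel
```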


Author(s):  
Nibras Ar Rakib ◽  
SM Zamshed Farhan ◽  
Md Mashrur Bari Sobhan ◽  
Jia Uddin ◽  
Arafat Habib

The field of biometrics has evolved tremendously over the last century, yet researchers continue to develop more precise and efficient algorithms for automatic fingerprint recognition systems. As in other applications, an efficient feature extraction method plays an important role in fingerprint-based recognition systems. This paper proposes a novel feature extraction method using the minutiae points of a fingerprint image and their intersections. The method first locates the ridge endings and ridge bifurcations of each fingerprint image, and then estimates the minutiae points at the intersections of ridge endings and ridge bifurcations. In the experimental evaluation, we tested the extracted features of the proposed model using a support vector machine (SVM) classifier, and the results show that the proposed method can accurately classify different fingerprint images.
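For illustration, the classic crossing-number test on a skeletonized fingerprint image recovers ridge endings and bifurcations; the paper's intersection-based features build on such points, but their exact construction is not reproduced here, and the toy input is a placeholder.

```python
import numpy as np
from skimage.morphology import skeletonize

def minutiae_points(binary_img):
    """Return (row, col, type) with type 'ending' (CN == 1) or 'bifurcation' (CN == 3)."""
    skel = skeletonize(binary_img > 0).astype(np.uint8)
    points = []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            # 8-neighbourhood in clockwise order; the crossing number counts 0->1 transitions.
            nb = [skel[r - 1, c], skel[r - 1, c + 1], skel[r, c + 1], skel[r + 1, c + 1],
                  skel[r + 1, c], skel[r + 1, c - 1], skel[r, c - 1], skel[r - 1, c - 1]]
            cn = sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                points.append((r, c, 'ending'))
            elif cn == 3:
                points.append((r, c, 'bifurcation'))
    return points

rng = np.random.default_rng(0)
toy = (rng.random((64, 64)) > 0.5).astype(np.uint8)    # stand-in for a binarised print
print(len(minutiae_points(toy)))
```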

