Human motion analysis from UAV video

2018 ◽  
Vol 6 (2) ◽  
pp. 69-92 ◽  
Author(s):  
Asanka G. Perera ◽  
Yee Wei Law ◽  
Ali Al-Naji ◽  
Javaan Chahl

Purpose The purpose of this paper is to present a preliminary solution to the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time. Design/methodology/approach The distinguishing feature of the solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using a projective transformation. Then, a silhouette is extracted and encoded as a Histogram of Oriented Gradients (HOG) descriptor. The HOG descriptor is then classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. The dynamic classifier consists of a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and 64 SVM classifiers that recognize four classes each – these four classes are chosen based on the temporal relationship between them, dictated by the gait sequence. Findings The solution provides three main advantages. First, classification is efficient due to dynamic selection (4-class vs 64-class classification). Second, classification errors are confined to neighbors of the true viewpoint: a wrongly estimated viewpoint is at most an adjacent viewpoint of the true one, enabling fast recovery from incorrect estimations. Third, the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes. Originality/value Experiments conducted on both fronto-parallel videos and aerial videos confirm that the solution achieves accurate pose and trajectory estimation for both kinds of videos. For example, on the “walking on an 8-shaped path” data set (1,652 frames), the solution achieves estimation accuracies of 85 percent for viewpoints and 98.14 percent for poses.
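The dynamic selection loop described above can be sketched as follows. This is an illustrative skeleton, not the authors' code: the trained SVMs are stood in for by plain callables, where `clf64` maps a HOG vector to one of the 64 pose-viewpoint classes and `sub_clf_for[c]` is the 4-class classifier restricted to the temporal neighbours of class c in the gait sequence.

```python
def dynamic_classify(hog_frames, clf64, sub_clf_for):
    """Label a sequence of HOG descriptors, one per video frame.

    clf64:        fallback classifier over all 64 classes (first frame only).
    sub_clf_for:  maps a class to the cheap 4-class classifier built from
                  its temporal neighbours in the gait sequence.
    """
    labels = []
    prev = None
    for x in hog_frames:
        if prev is None:
            prev = clf64(x)              # bootstrap: full 64-class decision
        else:
            prev = sub_clf_for[prev](x)  # cheap 4-class decision thereafter
        labels.append(prev)
    return labels
```

Because every frame after the first is a 4-way rather than 64-way decision, the per-frame cost stays low, and a wrong label can only drift to an adjacent class before the next frame corrects it.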

2019 ◽  
Vol 12 (4) ◽  
pp. 466-480
Author(s):  
Li Na ◽  
Xiong Zhiyong ◽  
Deng Tianqi ◽  
Ren Kai

Purpose The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries and edema around the brain tumor region, the tumor region of a brain tumor image has indistinct features, which poses a problem for diagnostics. The paper aims to discuss these issues. Design/methodology/approach In this paper, the authors propose an original solution for segmentation using Tamura texture and an ensemble Support Vector Machine (SVM) structure. In the proposed technique, 124 features are extracted for each voxel, including Tamura texture features and grayscale features. These features are then ranked using the SVM-Recursive Feature Elimination method, which is also used to optimize the parameters of the Radial Basis Function kernel of the SVMs. Finally, the bagging random sampling method is used to construct the ensemble SVM classifier, which classifies voxel types based on a weighted voting mechanism. Findings The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods. Originality/value The authors propose an original solution for segmentation using Tamura texture and an ensemble SVM structure.
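The weighted voting step of an ensemble like this can be sketched as below. This is a minimal illustration, not the paper's implementation: each bagged SVM contributes its predicted label, weighted by a per-classifier weight (for instance its validation accuracy), and the class with the largest total weight wins for each voxel.

```python
import numpy as np

def weighted_vote(predictions, weights):
    """Fuse ensemble predictions by weighted majority voting.

    predictions: (n_classifiers, n_voxels) array of predicted labels.
    weights:     one weight per classifier (e.g. validation accuracy).
    Returns the fused label for each voxel.
    """
    preds = np.asarray(predictions)
    w = np.asarray(weights, dtype=float)[:, None]
    classes = np.unique(preds)
    # Per-class score: sum of the weights of classifiers voting for it.
    scores = np.stack([((preds == c) * w).sum(axis=0) for c in classes])
    return classes[np.argmax(scores, axis=0)]
```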


2021 ◽  
Vol 13 (19) ◽  
pp. 3956
Author(s):  
Shan He ◽  
Huaiyong Shao ◽  
Wei Xian ◽  
Shuhui Zhang ◽  
Jialong Zhong ◽  
...  

Hilly areas are important parts of the world’s landscape. A marginalization phenomenon can be observed in some hilly areas, leading to serious land abandonment. Extracting the spatio-temporal distribution of abandoned land in such hilly areas can protect food security, improve people’s livelihoods, and support rational land planning. However, mapping the distribution of abandoned land using a single type of remote sensing image is still challenging due to the fragmentation of such hilly areas and severe cloud contamination. In this study, a new approach integrating Linear stretch (Ls), Maximum Value Composite (MVC), and Flexible Spatiotemporal DAta Fusion (FSDAF) was proposed to analyze the time-series changes and extract the spatial distribution of abandoned land. MOD09GA, MOD13Q1, and Sentinel-2 were selected as the base remote sensing images to fuse a monthly 10 m spatio-temporal data set. Three vegetation indices (VIs: NDVI, SAVI, NDWI) were used as the measures to identify abandoned land. A multiple spatio-temporal scale sample database was established, and a Support Vector Machine (SVM) was used to separate abandoned land from cultivated land and woodland. The best extraction result, with an overall accuracy of 88.1%, was achieved by integrating Ls, MVC, and FSDAF with the assistance of an SVM classifier. The fused VI image set surpassed the single-source method (Sentinel-2) by a margin of 10.8–23.6% in abandoned land extraction accuracy. The VIs also appeared to contribute positively to separating abandoned land from cultivated land and woodland. This study not only provides technical guidance for the rapid acquisition of abandoned land distribution in hilly areas, but also provides strong data support for connecting targeted poverty alleviation to rural revitalization.
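The three vegetation indices named above have standard band-ratio definitions, which can be computed from reflectance bands as in this sketch (band names and the SAVI soil-adjustment factor L = 0.5 are the usual conventions, not taken from the study):

```python
import numpy as np

def vegetation_indices(nir, red, green, L=0.5):
    """NDVI, SAVI and NDWI from surface-reflectance bands.

    NDVI = (NIR - Red) / (NIR + Red)
    SAVI = (1 + L) * (NIR - Red) / (NIR + Red + L)   # L: soil adjustment
    NDWI = (Green - NIR) / (Green + NIR)             # McFeeters' definition
    """
    nir, red, green = (np.asarray(b, dtype=float) for b in (nir, red, green))
    ndvi = (nir - red) / (nir + red)
    savi = (1.0 + L) * (nir - red) / (nir + red + L)
    ndwi = (green - nir) / (green + nir)
    return ndvi, savi, ndwi
```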


GEOMATICA ◽  
2021 ◽  
pp. 1-23
Author(s):  
Roholah Yazdan ◽  
Masood Varshosaz ◽  
Saied Pirasteh ◽  
Fabio Remondino

Automatic detection and recognition of traffic signs from images is an important topic in many applications. First, we segmented the images using a classification algorithm to delineate the areas where the signs are most likely to be found. At this stage, shadows, objects with similar colours, and extreme illumination changes can significantly affect the segmentation results. We propose a new shape-based algorithm to improve the accuracy of the segmentation. The algorithm incorporates the sign geometry to filter out wrongly classified pixels from the classification results. We performed several tests to compare the performance of our algorithm against popular techniques such as Support Vector Machine (SVM), K-Means, and K-Nearest Neighbours. In these tests, to overcome unwanted illumination effects, the images were transformed into the Hue-Saturation-Intensity (HSI), YUV, normalized RGB, and Gaussian colour spaces. Among the traditional techniques used in this study, the best results were obtained with SVM applied to the images transformed into the Gaussian colour space. The comparison results also suggest that adding the geometric constraints proposed in this study improves the quality of sign image segmentation by 10%–25%. We also compared the SVM classifier enhanced by incorporating the geometry of signs with a U-shaped deep learning algorithm. The results suggest that the performance of the two techniques is very close; the deep learning results could perhaps be improved if a more comprehensive data set were provided.
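One simple way to exploit sign geometry for filtering, in the spirit of the shape-based constraints described above (the exact constraints and thresholds here are illustrative assumptions, not the paper's algorithm), is to compare a candidate region's fill ratio (region area over bounding-box area) against the ratio expected for the sign's shape:

```python
import math

# Expected fill ratio for common sign geometries: a circle fills pi/4 of its
# bounding box, an upright triangle about half, a rectangle all of it.
SHAPE_FILL = {"circle": math.pi / 4, "triangle": 0.5, "rectangle": 1.0}

def keep_region(area, bbox_w, bbox_h, shape, tol=0.15):
    """Accept a candidate region only if its fill ratio is close to the
    expected ratio for the given sign shape (tol is an assumed tolerance)."""
    fill = area / float(bbox_w * bbox_h)
    return abs(fill - SHAPE_FILL[shape]) <= tol
```

A blob of misclassified pixels (a shadow, a similarly coloured object) rarely matches the expected geometry, so this kind of test discards it while keeping true sign candidates.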


2019 ◽  
Vol 47 (3) ◽  
pp. 154-170
Author(s):  
Janani Balakumar ◽  
S. Vijayarani Mohan

Purpose Owing to the huge volume of documents available on the internet, text classification has become a necessary task for handling them. To achieve optimal text classification results, feature selection, an important stage, is used to curtail the dimensionality of text documents by choosing suitable features. The main purpose of this research work is to classify personal computer documents based on their content. Design/methodology/approach This paper proposes a new algorithm for feature selection based on the artificial bee colony algorithm (ABCFS) to enhance text classification accuracy. The proposed algorithm (ABCFS) is evaluated on real and benchmark data sets and compared against existing feature selection approaches such as information gain and the χ2 statistic. To assess the efficiency of the proposed algorithm, the support vector machine (SVM) and an improved SVM classifier are used in this paper. Findings The experiments were conducted on real and benchmark data sets. The real data set was collected in the form of documents stored on a personal computer, and the benchmark data sets were collected from the Reuters and 20 Newsgroups corpora. The results demonstrate the performance of the proposed feature selection algorithm in enhancing text document classification accuracy. Originality/value This paper proposes a new ABCFS algorithm for feature selection, evaluates its efficiency and improves the support vector machine. Here, the ABCFS algorithm is used to select features from text (unstructured) documents, whereas in existing work ABC-based algorithms have been used only to select features from structured data. The proposed algorithm classifies documents automatically based on their content.
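The overall shape of ABC-style feature selection can be sketched as follows. This is a deliberately simplified, greedy skeleton (it is not the paper's ABCFS and omits the onlooker/scout phases): each "food source" is a feature subset, an employed bee proposes a neighbour by flipping one feature bit, and the move is kept only when the fitness (in practice, classifier accuracy on that subset) improves.

```python
import random

def neighbour(subset, n_features, rng):
    """Employed-bee step: flip one randomly chosen feature bit."""
    s = set(subset)
    s.symmetric_difference_update({rng.randrange(n_features)})
    return frozenset(s)

def abc_select(fitness, n_features, n_bees=10, iters=50, seed=0):
    """Greedy skeleton of ABC feature selection (illustrative only)."""
    rng = random.Random(seed)
    sources = [frozenset(rng.sample(range(n_features),
                                    rng.randrange(1, n_features + 1)))
               for _ in range(n_bees)]
    for _ in range(iters):
        for i, s in enumerate(sources):
            cand = neighbour(s, n_features, rng)
            if cand and fitness(cand) > fitness(s):  # greedy acceptance
                sources[i] = cand
    return max(sources, key=fitness)
```

In a real text pipeline, `fitness` would be something like cross-validated SVM accuracy over the documents restricted to the candidate feature subset.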


Sensor Review ◽  
2020 ◽  
Vol 40 (2) ◽  
pp. 203-216
Author(s):  
S. Veluchamy ◽  
L.R. Karlmarx

Purpose Biometric identification systems have become an emerging research field because of their wide applications in security. Multimodal systems, the focus of this study, find more applications than unimodal systems because of their higher user acceptance, better recognition accuracy and low-cost sensors. Biometric identification using the finger knuckle and the palmprint finds more applications than other modalities because of their unique features. Design/methodology/approach The proposed model performs user authentication using the features extracted from both palmprint and finger knuckle images. The two major processes in the proposed system are feature extraction and classification. After pre-processing, the proposed model extracts features from the palmprint and the finger knuckle with the proposed HE-Co-HOG model, producing a palmprint HE-Co-HOG vector and a finger knuckle HE-Co-HOG vector. The features from the two modalities are combined with the optimal weight score obtained from the fractional firefly (FFF) algorithm. A layered k-SVM classifier then identifies each person from the fused vector. Findings Two standard data sets with palmprint and finger knuckle images were used for the simulation. The simulation results were analyzed in two ways. In the first, the bin sizes of the HE-Co-HOG vector were varied across different training sets. In the second, the performance of the proposed model was compared with existing models for different training set sizes. From the simulation results, the proposed model achieved a maximum accuracy of 0.95 and the lowest false acceptance and false rejection rates, both with a value of 0.1. Originality/value In this paper, a multimodal biometric recognition system based on the proposed HE-Co-HOG with the k-SVM and the FFF is developed.
The proposed model uses palmprint and finger knuckle images as the biometrics. The proposed HE-Co-HOG vector is developed by modifying the Co-HOG with holoentropy weights.
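The fusion step described above can be sketched as a weighted concatenation of the two modality vectors. This is an assumption-laden illustration: the single weight `w` stands in for the optimal weight score found by the fractional firefly search, and the function name is hypothetical.

```python
import numpy as np

def fuse_features(palm_vec, knuckle_vec, w):
    """Combine palmprint and finger-knuckle feature vectors.

    w in [0, 1] weights the palmprint modality; 1 - w weights the knuckle.
    The fused vector would then be fed to the classifier.
    """
    palm = np.asarray(palm_vec, dtype=float)
    knuckle = np.asarray(knuckle_vec, dtype=float)
    return np.concatenate([w * palm, (1.0 - w) * knuckle])
```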


2016 ◽  
Vol 26 (3) ◽  
pp. 293-313 ◽  
Author(s):  
André Vellino ◽  
Inge Alberts

Purpose This paper aims to investigate how automatic classification can assist employees and records managers with the appraisal of e-mails as records of value to the organization. Design/methodology/approach The study performed a qualitative analysis of the appraisal behaviours of eight records management experts to train a series of support vector machine classifiers to replicate the decision process for identifying e-mails of business value. Automatic classification experiments were performed on a corpus of 846 e-mails from two of these experts’ mailboxes. Findings Despite the highly contextual nature of record value, these experiments show that the classifiers achieve a high degree of accuracy. Unlike existing manual practices in corporate e-mail archiving, the machine classification models do not depend heavily on features such as the identity of the sender and receiver or on threading, forwarding or importance flags. Rather, the dominant discriminating features are textual features from the e-mail body and subject field. Research limitations/implications The need to automatically classify corporate e-mails is growing in importance, as e-mail remains one of the most prevalent recordkeeping challenges. Practical implications Automated methods for identifying e-mail records promise to be of significant benefit to organizations that need to appraise e-mail for long-term preservation and access on demand. Social implications The research adopts an innovative approach to assist employees and records managers with the appraisal of digital records. By doing so, it fosters new insights into the adoption of technological strategies to automate recordkeeping tasks, an important research gap. Originality/value Our experiments show that an SVM classifier can be trained to replicate an expert's decision process for identifying e-mails of business value with a reasonably high degree of accuracy.
In principle, such a classifier could be integrated into a corporate Electronic Document and Records Management System (EDRMS) to improve the quality of e-mail records appraisal.
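Consistent with the finding that body and subject text dominate, a feature extractor for such a classifier might look like the sketch below. The bag-of-words representation and the subject up-weighting value are illustrative assumptions, not taken from the study.

```python
from collections import Counter

def email_features(subject, body, subject_boost=2.0):
    """Bag-of-words features over body + subject.

    Subject tokens are up-weighted, reflecting the finding that body and
    subject text are the dominant discriminating features. The boost value
    is an assumption for illustration.
    """
    feats = Counter(body.lower().split())
    for token in subject.lower().split():
        feats[token] += subject_boost
    return feats
```

In practice the counts would be converted to TF-IDF weights and passed to a linear SVM.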


Kybernetes ◽  
2014 ◽  
Vol 43 (8) ◽  
pp. 1150-1164 ◽  
Author(s):  
Bilal M’hamed Abidine ◽  
Belkacem Fergani ◽  
Mourad Oussalah ◽  
Lamya Fergani

Purpose – The task of identifying activity classes from sensor information in a smart home is very challenging because of the imbalanced nature of such data sets, where some activities occur more frequently than others. Probabilistic models such as the Hidden Markov Model (HMM) and Conditional Random Fields (CRF) are commonly employed for this purpose. The paper aims to discuss these issues. Design/methodology/approach – In this work, the authors propose a robust strategy combining the Synthetic Minority Over-sampling Technique (SMOTE) with Cost-Sensitive Support Vector Machines (CS-SVM), with adaptive tuning of the cost parameter, in order to handle the imbalanced data problem. Findings – The results demonstrate the usefulness of the approach through comparison with state-of-the-art approaches, including HMM, CRF, the traditional C-Support Vector Machine (C-SVM) and the Cost-Sensitive SVM (CS-SVM), for classifying activities using binary and ubiquitous sensors. Originality/value – Performance metrics in the experiments include Accuracy, Precision/Recall and F-measure.
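The core SMOTE idea used above is small enough to sketch directly: a new minority sample is synthesized by interpolating a randomly chosen minority point with one of its k nearest minority neighbours. This is a minimal illustration (the cost-sensitive SVM step and the adaptive cost tuning are omitted):

```python
import numpy as np

def smote(X_minority, n_new, k=3, seed=0):
    """Synthesize n_new minority-class samples by nearest-neighbour
    interpolation, as in the basic SMOTE algorithm."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)
```

The synthetic points lie on segments between real minority samples, so the oversampled class occupies the same region of feature space rather than duplicating points.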


2017 ◽  
Vol 10 (2) ◽  
pp. 111-129 ◽  
Author(s):  
Ali Hasan Alsaffar

Purpose The purpose of this paper is to present an empirical study on the effect of two synthetic attributes on popular classification algorithms applied to data originating from student transcripts. The attributes represent past performance achievements in a course and are defined as global performance (GP) and local performance (LP). The GP of a course is the aggregated performance achieved by all students who have taken the course, and the LP of a course is the aggregated performance achieved in the prerequisite courses by the student taking the course. Design/methodology/approach The paper uses Educational Data Mining techniques to predict student performance in courses, identifying the attributes that are the key influencers for predicting the final grade (performance) and reporting the effect of the two suggested attributes on the classification algorithms. As a research paradigm, the paper follows the Cross-Industry Standard Process for Data Mining, using the RapidMiner Studio software tool. Six classification algorithms are evaluated: C4.5 and CART decision trees, naive Bayes, k-nearest neighbors, rule-based induction and support vector machines. Findings The outcomes of the paper show that the synthetic attributes positively improve the performance of the classification algorithms and are highly ranked according to their influence on the target variable. Originality/value This paper proposes two synthetic attributes that are integrated into a real data set. The key motivation is to improve the quality of the data and make classification algorithms perform better. The paper also presents empirical results showing the effect of these attributes on the selected classification algorithms.
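Taking "aggregated" to mean the mean grade (an assumption; the paper may use a different aggregate), the two synthetic attributes can be computed as in this sketch, where `records` is a hypothetical mapping from (student, course) to grade:

```python
def global_performance(records, course):
    """GP: mean grade achieved by all students who have taken `course`."""
    grades = [g for (_, c), g in records.items() if c == course]
    return sum(grades) / len(grades)

def local_performance(records, student, prerequisites):
    """LP: the student's mean grade over the course's prerequisites."""
    grades = [records[(student, c)] for c in prerequisites
              if (student, c) in records]
    return sum(grades) / len(grades)
```

GP captures how hard the course is for the cohort; LP captures how well prepared this particular student is, which is why the two complement each other as predictors.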


2016 ◽  
Vol 27 (3) ◽  
pp. 299-312
Author(s):  
Nadia Ziani ◽  
Khadidja Amirat ◽  
Djelloul Messadi

Purpose – The purpose of this paper is to predict the aquatic toxicity (LC50) of 92 substituted benzene derivatives in Pimephales promelas. Design/methodology/approach – A quantitative structure-activity relationship analysis was performed on a series of 92 substituted benzene derivatives using multiple linear regression (MLR), artificial neural network (ANN) and support vector machine (SVM) methods, which correlate the aquatic toxicity (LC50) values of these chemicals with their structural descriptors. First, the entire data set was split according to the Kennard and Stone algorithm into a training set (74 chemicals) and a test set (18 chemicals) for statistical external validation. Findings – Models with six descriptors were developed, using as independent variables theoretical descriptors derived from the Dragon software, selected with a genetic algorithm-based variable subset selection procedure. Originality/value – The values of Q2 and RMSE in internal validation for the MLR, SVM and ANN models were (0.8829; 0.225), (0.8882; 0.222) and (0.8980; 0.214), respectively, and in external validation (0.9538; 0.141), (0.947; 0.146) and (0.9564; 0.146). The statistical parameters obtained for the three approaches are very similar, which confirms that our six-parameter model is stable, robust and significant.
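The two validation statistics quoted above have standard definitions, sketched here for reference (Q2 as one minus the predictive residual sum of squares over the total sum of squares about the mean of the observed values):

```python
import math

def rmse(y_obs, y_pred):
    """Root mean squared error between observed and predicted values."""
    n = len(y_obs)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(y_obs, y_pred)) / n)

def q2(y_obs, y_pred):
    """Predictive squared correlation coefficient: 1 - PRESS / SS_tot."""
    mean = sum(y_obs) / len(y_obs)
    press = sum((o - p) ** 2 for o, p in zip(y_obs, y_pred))
    ss_tot = sum((o - mean) ** 2 for o in y_obs)
    return 1.0 - press / ss_tot
```

A model that merely predicts the mean of the observations scores Q2 = 0, so values near 0.95, as reported above, indicate strong external predictivity.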


Kybernetes ◽  
2019 ◽  
Vol 49 (10) ◽  
pp. 2547-2567 ◽  
Author(s):  
Himanshu Sharma ◽  
Anu G. Aggarwal

Purpose The experiential nature of travel and tourism services has popularized the importance of electronic word-of-mouth (EWOM) among potential customers. EWOM has a significant influence on the hotel booking intentions of customers, as they tend to trust EWOM more than the messages spread by marketers. Amid the abundant reviews available online, it becomes difficult for travelers to identify the most significant ones. This calls the credibility of reviewers into question, as various online businesses allow reviewers to post their feedback using a nickname or email address rather than a real name, photo or other personal information. Therefore, this study aims to determine the factors leading to reviewer credibility. Design/methodology/approach The paper proposes an econometric model to determine the variables that affect a reviewer’s credibility in the hospitality and tourism sector. The proposed model uses quantifiable variables of reviewers and reviews to estimate reviewer credibility, defined as the ratio of the number of helpful votes received by a reviewer to the total number of reviews written by that reviewer. This covers both aspects of source credibility, i.e. trustworthiness and expertness. The authors used the TripAdvisor.com data set to validate the models. Findings Regression analysis significantly validated the econometric models proposed here. To check the predictive efficiency of the models, predictive modeling was performed using five commonly used classifiers: random forest (RF), linear discriminant analysis, k-nearest neighbor, decision tree and support vector machine. RF gave the best accuracy for the overall model. Practical implications The findings of this research suggest various implications for hoteliers and managers to help retain credible reviewers in the online travel community. This will help them achieve long-term relationships with clients and increase trust in the brand.
Originality/value To the best of the authors’ knowledge, this study is the first to take an econometric modeling approach to finding the determinants of reviewer credibility. Moreover, the study departs from earlier works by treating reviewer credibility as an endogenous variable rather than an exogenous one.
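The dependent variable defined above is a simple ratio, worth writing out because it is what the regression and the classifiers are fit against (the function name is hypothetical):

```python
def reviewer_credibility(helpful_votes, total_reviews):
    """Credibility proxy defined above: helpful votes received per review
    written. Combines trustworthiness and expertness into one score."""
    if total_reviews == 0:
        raise ValueError("reviewer has written no reviews")
    return helpful_votes / total_reviews
```

For example, a reviewer with 30 helpful votes across 40 reviews scores 0.75, whereas one with 30 votes across 300 reviews scores only 0.1 despite the same vote count.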

