HE-Co-HOG and k-SVM classifier for finger knuckle and palm print-based multimodal biometric recognition

Sensor Review ◽  
2020 ◽  
Vol 40 (2) ◽  
pp. 203-216
Author(s):  
S. Veluchamy ◽  
L.R. Karlmarx

Purpose
Biometric identification systems have become an emerging research field because of their wide applications in security. Multimodal systems find more applications than unimodal systems because of their high user acceptance, better recognition accuracy and low-cost sensors. Biometric identification using the finger knuckle and the palmprint finds more applications than other modalities because of their distinctive features.

Design/methodology/approach
The proposed model performs user authentication with features extracted from both the palmprint and the finger knuckle images. The two major processes in the proposed system are feature extraction and classification. After pre-processing, the model extracts features from the palmprint and the finger knuckle with the proposed HE-Co-HOG model, producing a palmprint HE-Co-HOG vector and a finger knuckle HE-Co-HOG vector. The features from both modalities are combined using optimal weight scores obtained from the fractional firefly (FFF) algorithm. The layered k-SVM classifier then identifies each person from the fused vector.

Findings
Two standard data sets with palmprint and finger knuckle images were used for the simulation. The simulation results were analyzed in two ways. In the first, the bin sizes of the HE-Co-HOG vector were varied for different training proportions of the data set. In the second, the performance of the proposed model was compared with existing models for different training sizes. In the simulations, the proposed model achieved a maximum accuracy of 0.95 and the lowest false acceptance rate and false rejection rate, with a value of 0.1.

Originality/value
In this paper, a multimodal biometric recognition system based on the proposed HE-Co-HOG with the k-SVM and the FFF is developed. The proposed model uses palmprint and finger knuckle images as the biometrics. The HE-Co-HOG vector is developed by modifying the Co-HOG with holoentropy weights.
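
A minimal sketch of the fusion-and-classify pipeline described above, under stated assumptions: the plain HOG descriptor from scikit-image stands in for the paper's HE-Co-HOG vector, the fusion weights (found by the fractional firefly algorithm in the paper) are treated as given constants, and a single RBF SVM replaces the layered k-SVM.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def descriptor(image, bins=9):
    # Plain HOG stand-in for one modality's HE-Co-HOG vector
    return hog(image, orientations=bins, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def fuse(palm_vec, knuckle_vec, w_palm=0.6, w_knuckle=0.4):
    # Weighted fusion of the two modality vectors; the weights are assumptions,
    # not values produced by the fractional firefly algorithm
    return np.concatenate([w_palm * palm_vec, w_knuckle * knuckle_vec])

def train_recognizer(palm_images, knuckle_images, labels):
    X = np.array([fuse(descriptor(p), descriptor(k))
                  for p, k in zip(palm_images, knuckle_images)])
    clf = SVC(kernel="rbf")  # single multi-class SVM in place of the layered k-SVM
    clf.fit(X, labels)
    return clf
```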

2018 ◽  
Vol 6 (2) ◽  
pp. 69-92 ◽  
Author(s):  
Asanka G. Perera ◽  
Yee Wei Law ◽  
Ali Al-Naji ◽  
Javaan Chahl

Purpose
The purpose of this paper is to present a preliminary solution to the problem of estimating human pose and trajectory from an aerial robot with a monocular camera in near real time.

Design/methodology/approach
The distinguishing feature of the solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using a projective transformation. A silhouette is then extracted and described by a Histogram of Oriented Gradients (HOG). The HOG is classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. The dynamic classifier consists of a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and 64 SVM classifiers that recognize four classes each; these four classes are chosen based on the temporal relationship between them, dictated by the gait sequence.

Findings
The solution provides three main advantages. First, classification is efficient due to dynamic selection (4-class vs 64-class classification). Second, classification errors are confined to neighbors of the true viewpoint: a wrongly estimated viewpoint is at most an adjacent viewpoint of the true viewpoint, enabling fast recovery from incorrect estimations. Third, the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes.

Originality/value
Experiments conducted on both fronto-parallel videos and aerial videos confirm that the solution achieves accurate pose and trajectory estimation for these different kinds of videos. For example, on the "walking on an 8-shaped path" data set (1,652 frames), estimation accuracies of 85 percent for viewpoints and 98.14 percent for poses were achieved.
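
A hedged sketch of the dynamic classifier selection idea described above: a 64-class SVM bootstraps the estimate on the first frame, after which the 4-class SVM associated with the last estimated pose-viewpoint class handles each new frame. The neighbour table and training interface are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

class DynamicPoseClassifier:
    def __init__(self, neighbours):
        # neighbours[c] = the four classes reachable from class c in the gait sequence
        self.neighbours = neighbours
        self.c64 = SVC()      # recognizes all 64 pose-viewpoint classes
        self.small = {}       # one 4-class SVM per class
        self.last = None

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.c64.fit(X, y)
        for c, allowed in self.neighbours.items():
            mask = np.isin(y, list(allowed))
            self.small[c] = SVC().fit(X[mask], y[mask])
        return self

    def predict_frame(self, hog_vec):
        if self.last is None:
            # First frame: fall back to the full 64-class classifier
            self.last = int(self.c64.predict([hog_vec])[0])
        else:
            # Later frames: only the four temporally plausible classes are considered
            self.last = int(self.small[self.last].predict([hog_vec])[0])
        return self.last
```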


2020 ◽  
Vol 26 (8) ◽  
pp. 1769-1786 ◽  
Author(s):  
Sascha Kraus ◽  
Hongbo Li ◽  
Qi Kang ◽  
Paul Westhead ◽  
Victor Tiberius

Purpose
Quantitative bibliometric approaches were used to statistically and objectively explore patterns in the sharing economy literature.

Design/methodology/approach
Journal (co-)citation analysis, author (co-)citation analysis, institution citation and co-operation analysis, keyword co-occurrence analysis, document (co-)citation analysis and burst detection analysis were conducted based on a bibliometric data set relating to sharing economy publications.

Findings
Sharing economy research is multi- and interdisciplinary. Journals focused upon products liability, organizing framework, profile characteristics, diverse economies, consumption system and everyday life themes. Authors focused upon profile characteristics, sharing economy organization, social connections, first principle and diverse economy themes. No institution dominated the research field. Keyword co-occurrence analysis identified organizing framework, tourism industry, consumer behavior, food waste, generous exchange and quality cue as research themes. Document co-citation analysis found research themes relating to the tourism industry, exploring public acceptability, agri-food system, commercial orientation, products liability and social connection. Most cited authors, institutions and documents are reported.

Research limitations/implications
The study did not exclusively focus on publications in top-tier journals. Future studies could run analyses relating to top-tier journals alone, and then run analyses relating to less renowned journals alone. To address the potential fuzzy-results concern, reviews could focus on business and/or management research alone. Longitudinal reviews conducted over several points in time are warranted. Future reviews could combine qualitative and quantitative approaches.

Originality/value
We contribute by analyzing information relating to the population of all sharing economy articles. In addition, we contribute by employing several quantitative bibliometric approaches that enable the identification of trends relating to the themes and patterns in the growing literature.
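
As one concrete illustration of the bibliometric steps named above, the following hedged sketch builds a keyword co-occurrence count from per-article author-keyword lists; the input format is an assumption, and dedicated tools (e.g. VOSviewer, CiteSpace) would be used on real Web of Science exports.

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(articles):
    """articles: one list of author keywords per publication."""
    pairs = Counter()
    for keywords in articles:
        normalized = sorted({k.strip().lower() for k in keywords})
        for a, b in combinations(normalized, 2):
            pairs[(a, b)] += 1  # the pair appears together in this article
    return pairs

# Toy corpus: the most frequent keyword pair would seed a co-occurrence map
corpus = [["sharing economy", "tourism industry", "consumer behavior"],
          ["sharing economy", "consumer behavior"],
          ["Sharing Economy", "food waste"]]
print(keyword_cooccurrence(corpus).most_common(1))
```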


2019 ◽  
Vol 12 (4) ◽  
pp. 466-480
Author(s):  
Li Na ◽  
Xiong Zhiyong ◽  
Deng Tianqi ◽  
Ren Kai

Purpose
The precise segmentation of brain tumors is the most important and crucial step in their diagnosis and treatment. Due to the presence of noise, uneven gray levels, blurred boundaries and edema around the brain tumor region, brain tumor images have indistinct features in the tumor region, which poses a problem for diagnostics. The paper aims to discuss these issues.

Design/methodology/approach
In this paper, the authors propose an original segmentation solution using Tamura texture and an ensemble Support Vector Machine (SVM) structure. In the proposed technique, 124 features are extracted for each voxel, including Tamura texture features and grayscale features. These features are then ranked using the SVM-Recursive Feature Elimination (SVM-RFE) method, which is also adopted to optimize the parameters of the Radial Basis Function kernel of the SVMs. Finally, bagging with random sampling is used to construct an ensemble SVM classifier based on a weighted voting mechanism to classify voxel types.

Findings
The experiments are conducted on the BraTS2015 data set. They demonstrate that Tamura texture is very useful in the segmentation of brain tumors, especially the line-likeness feature. The superior performance of the proposed ensemble SVM classifier is demonstrated by comparison with single SVM classifiers as well as other methods.

Originality/value
The authors propose an original segmentation solution using Tamura texture and an ensemble SVM structure.
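
A hedged sketch of the voxel-classification stage described above: SVM-RFE feature ranking followed by a bagged ensemble of RBF SVMs. Feature extraction (the 124 Tamura-texture and grayscale features per voxel) is assumed to have produced X already, and scikit-learn's bagging averages class probabilities rather than reproducing the paper's weighted voting.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier

def build_segmenter(X, y, n_features=40, n_svms=15):
    # SVM-RFE: a linear SVM exposes coefficients that RFE uses to rank features
    ranker = RFE(SVC(kernel="linear"), n_features_to_select=n_features)
    X_sel = ranker.fit_transform(X, y)

    # Bagging: each RBF SVM is trained on a random sample of the voxels
    ensemble = BaggingClassifier(SVC(kernel="rbf", probability=True),
                                 n_estimators=n_svms, max_samples=0.8,
                                 bootstrap=True)
    ensemble.fit(X_sel, y)
    return ranker, ensemble

def predict_voxels(ranker, ensemble, X_new):
    return ensemble.predict(ranker.transform(X_new))
```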


2020 ◽  
Vol 3 (2) ◽  
pp. 67-78
Author(s):  
Qing Xu ◽  
Jiangfeng Wang ◽  
Botong Wang ◽  
Xuedong Yan

Purpose
This study aims to propose a speed guidance model for the connected vehicle (CV) environment to alleviate traffic congestion at intersections and improve traffic efficiency. By introducing the moving block section theory used in high-speed train control, a quasi-moving block speed guidance (QMBSG) model is proposed to direct platoons comprising human-driven vehicles (HVs) and CVs through the intersection in a coordinated manner.

Design/methodology/approach
In this model, the green time of the intersection is divided into multiple block intervals according to the minimal safety headway. CVs pass through the intersection by following the block intervals assigned by the QMBSG model. A block interval is assigned dynamically, according to the traveling relation between HVs and CVs, when a vehicle enters the communication range of the intersection. To validate the comprehensive guidance effect of the proposed model, a general evaluation function (GEF) is established.

Findings
Compared to CVs without speed guidance, the simulation results show that the GEF of the QMBSG model improves noticeably. Also, compared to the single-intersection speed guidance model, the GEF value of the QMBSG model improves by over 17.1%. To further explore the guidance effect, the impact of sensitivity factors of the CV environment, such as the intersection environment, communication range and penetration rate (PR), is analyzed. When the PR reaches 75.0%, the GEF value changes sharply and the guidance effect of the model is significantly improved. The paper also analyzes the impact of the block interval length under different PRs and traffic demands. The proposed model has a better guidance effect when the block interval length is 2 s, which helps alleviate traffic congestion at the intersection in practice.

Originality/value
The contributions of this paper are three-fold. First, based on the traveling information of HVs and CVs and the signal phase and timing plans, the QMBSG model is proposed to direct platoons of HVs and CVs through the intersection coordinately, by following dynamically assigned block intervals. Second, considering the mobility, safety and environmental indexes comprehensively, a GEF is provided to evaluate the guidance effect of vehicles passing through the intersection. Third, a sensitivity analysis of the QMBSG model is carried out: the key communication and traffic parameters of the CV environment, such as path attenuation and PR, are analyzed, and the effect of the block interval length is explored.
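
A hedged toy sketch of the quasi-moving block idea described above: the green window is cut into block intervals one minimal safety headway long, an arriving CV is assigned the earliest free block it can reach without exceeding the speed limit, and its advisory speed is the speed that brings it to the stop line at that block's start time. All parameter values and the assignment rule are illustrative assumptions.

```python
def advise(dist_to_stopline_m, t_now_s, green_start_s, green_end_s,
           occupied_blocks, headway_s=2.0, v_max_mps=16.7):
    """Return (block_start_time_s, advised_speed_mps), or None if no block fits."""
    t = green_start_s
    while t + headway_s <= green_end_s:
        if t not in occupied_blocks and t > t_now_s:
            v = dist_to_stopline_m / (t - t_now_s)
            if v <= v_max_mps:              # reachable without exceeding the limit
                occupied_blocks.add(t)      # block interval is now reserved
                return t, v
        t += headway_s
    return None                             # no block fits in this green window

occupied = set()
print(advise(200.0, t_now_s=0.0, green_start_s=20.0, green_end_s=50.0,
             occupied_blocks=occupied))    # -> (20.0, 10.0)
```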


Kybernetes ◽  
2015 ◽  
Vol 44 (6/7) ◽  
pp. 1067-1081 ◽  
Author(s):  
Oswaldo Terán ◽  
Christophe Sibertin-Blanc ◽  
Benoit Gaudou

Purpose
The purpose of this paper is to present how moral sensitivity and emotions in organizational settings can be modelled using the SocLab formal framework. SocLab is a platform for the modelling, simulation and analysis of cooperation relationships within social organizations, and more generally Systems of Organized Action.

Design/methodology/approach
Considering that actors' decision-making processes are not driven by instrumental interest alone, the SocLab learning simulation algorithm has been extended to represent moral sensitivity, making actors try to avoid bad emotions and seek good ones. Simulation results, including an interesting tendency for a Free Rider model, are given.

Findings
Simulation results about actors' collaboration and emotions in a Free Rider model are presented. A noteworthy tendency is that unconditional collaboration, which occurs when an actor's moral sensitivity reaches its highest value, is not beneficial, since it exempts other actors from collaborating (they take advantage of the unconditional collaboration), whereas moral sensitivity values somewhat below the highest value (between 0.7 and 0.9) still induce collaboration from others.

Originality/value
The research and results presented in this paper have not been presented in other papers or workshops. The quantitative definition of emotions (determining indexes of emotions) differs from previous approaches, for instance, from the qualitative descriptions of Ortony, Clore and Collins (OCC) and from logical descriptions. Similarly, the simulation of morality in organizations is a new research field that has received scarce attention up to now.
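
A hedged toy illustration (not the SocLab implementation) of the mechanism described above: an actor's evaluation of an action mixes instrumental payoff with an emotional term, weighted by a moral-sensitivity parameter in [0, 1]. The payoff and emotion values are invented for illustration.

```python
def evaluate(action_payoff, emotion_index, moral_sensitivity):
    """emotion_index > 0 for good emotions, < 0 for bad ones; sensitivity in [0, 1]."""
    return ((1 - moral_sensitivity) * action_payoff
            + moral_sensitivity * emotion_index)

# With high moral sensitivity, a cooperative action with a lower personal payoff
# but a positive emotion outranks a selfish action that triggers a bad emotion.
cooperate = evaluate(action_payoff=0.4, emotion_index=0.9, moral_sensitivity=0.8)
defect = evaluate(action_payoff=1.0, emotion_index=-0.6, moral_sensitivity=0.8)
print(cooperate > defect)  # True
```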


2020 ◽  
Vol 39 (4) ◽  
pp. 97-103
Author(s):  
Basharat Ahmad Malik ◽  
Ashiya Ahmadi

Purpose
The purpose of this study is to apply a recently developed quantitative method, Referenced Publication Year Spectroscopy (RPYS), to the field of collection development. RPYS reveals peak years in the cited references of a research field, which helps identify significant contributions and groundbreaking works in that field.

Design/methodology/approach
Preliminary data for the study were extracted from Web of Science (WoS) using the two phrases "collection development" and "collection building" as topic searches (comprising four parts: title, abstract, author keywords and KeyWords Plus). The search was restricted to the period 1974-2017, which yielded a data set of 1,682 documents covering 29,017 cited references. The program CRExplorer (www.crexplorer.net) was used to extract the cited references from the data sets downloaded from WoS. Further analysis was performed manually using MS Excel 2016.

Findings
The present study identified seminal works that contributed substantially to the evolution and development of collection development. The analysis of all cited references using the RPYS method showed nine peaks, which represent the historical roots of collection development and reveal that the basic idea of this subfield of library science dates back centuries. Moreover, the investigation of the most influential documents (in the form of peaks) revealed that the field of collection development was significantly influenced by the works of authors such as Gabriel Naudé, Gabriel Peignot, Giulio Petzholdt, P L Gross, E M Gross, Richard Trueswell, Allen Kent and Ross Atkinson, among others.

Practical implications
The analysis of works cited in publications helps to ascertain important intellectual contributions related to a particular domain of knowledge. It not only helps in extracting the most important works but also helps to reconstruct the history of a specific research field by examining the specific role of the cited references. Therefore, the results of the study could be useful for researchers, practitioners, scholars and, more specifically, bibliophiles, bibliographers and librarians seeking a better understanding of seminal works in collection development.

Originality/value
To the best of the authors' knowledge, the present research work is unique and novel in the spectrum of collection development, exploring and examining the pivotal works in the field using the RPYS method.
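
A hedged sketch of the RPYS computation described above: count cited references by their publication year, then flag peak years whose counts deviate strongly from the median of a surrounding five-year window (the deviation rule used by tools such as CRExplorer).

```python
from collections import Counter
from statistics import median

def rpys_deviation(cited_ref_years, window=5):
    counts = Counter(cited_ref_years)
    half = window // 2
    deviation = {}
    for year in range(min(counts), max(counts) + 1):
        neighbourhood = [counts.get(year + d, 0) for d in range(-half, half + 1)]
        deviation[year] = counts.get(year, 0) - median(neighbourhood)
    return deviation  # large positive values mark candidate peak years

# Toy example: a burst of references to works published in 1963
sample = [1960, 1961, 1963, 1963, 1963, 1963, 1964, 1965]
dev = rpys_deviation(sample)
print(max(dev, key=dev.get))  # -> 1963
```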


2017 ◽  
Vol 73 (3) ◽  
pp. 481-499 ◽  
Author(s):  
Amed Leiva-Mederos ◽  
Jose A. Senso ◽  
Yusniel Hidalgo-Delgado ◽  
Pedro Hipola

Purpose
Information from Current Research Information Systems (CRIS) is stored in different formats, on platforms that are not compatible, or even in independent networks. It would be helpful to have a well-defined methodology that allows management data to be processed from a single site, so as to take advantage of the capacity to link disperse data found in different systems, platforms, sources and/or formats. Based on the functionalities and materials of the VLIR project, the purpose of this paper is to present a model that provides interoperability by means of semantic alignment techniques and metadata crosswalks, and facilitates the fusion of information stored in diverse sources.

Design/methodology/approach
After reviewing the state of the art regarding the diverse mechanisms for achieving semantic interoperability, the paper analyzes the following: the specific coverage of the data sets (type of data, thematic coverage and geographic coverage); the technical specifications needed to retrieve and analyze a distribution of the data set (format, protocol, etc.); the conditions of re-utilization (copyright and licenses); and the "dimensions" included in the data set as well as the semantics of these dimensions (the syntax and the taxonomies of reference). The semantic interoperability framework presented here implements semantic alignment and metadata crosswalks to convert information from three different systems (ABCD, Moodle and DSpace) and integrate all the databases in a single RDF file.

Findings
The paper also includes an evaluation that compares, by means of recall and precision calculations, the proposed model against identical queries made over Open Archives Initiative and SQL, in order to estimate its efficiency. The results have been satisfactory, because the semantic interoperability facilitates exact retrieval of information.

Originality/value
The proposed model enhances the management of the syntactic and semantic interoperability of the CRIS system designed. In a real usage setting it achieves very positive results.
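
A hedged sketch of the metadata-crosswalk idea described above: records from heterogeneous systems are mapped field by field onto a shared vocabulary (Dublin Core terms, as an assumption) and merged into one RDF graph with rdflib. The source field names below are illustrative, not the actual ABCD, Moodle or DSpace schemas.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

CROSSWALK = {  # source system field -> Dublin Core property (illustrative mapping)
    "abcd": {"titulo": DCTERMS.title, "autor": DCTERMS.creator},
    "dspace": {"dc.title": DCTERMS.title, "dc.contributor.author": DCTERMS.creator},
    "moodle": {"fullname": DCTERMS.title, "teacher": DCTERMS.creator},
}

def merge_records(records, base_uri="http://example.org/cris/"):
    """records: iterable of (system, record_id, field_dict) tuples."""
    graph = Graph()
    base = Namespace(base_uri)
    for system, rec_id, fields in records:
        subject = URIRef(base[f"{system}/{rec_id}"])
        graph.add((subject, RDF.type, DCTERMS.BibliographicResource))
        for field, value in fields.items():
            prop = CROSSWALK.get(system, {}).get(field)
            if prop is not None:              # unmapped fields are skipped
                graph.add((subject, prop, Literal(value)))
    return graph

g = merge_records([("dspace", "42", {"dc.title": "CRIS interoperability thesis"})])
print(g.serialize(format="turtle"))
```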


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mohammad Rishehchi Fayyaz ◽  
Mohammad R. Rasouli ◽  
Babak Amiri

Purpose
The purpose of this paper is to propose a data-driven model to predict the credit risks of actors collaborating within a supply chain finance (SCF) network, based on the analysis of their network attributes. This can support the application of reverse factoring mechanisms in SCF.

Design/methodology/approach
Drawing on network science, the network measures of the actors collaborating in the investigated SCF network are derived through a social network analysis. Several supervised machine learning algorithms are then applied to predict the credit risks of the actors on the basis of their network-level and organizational-level characteristics. For this purpose, a data set from an SCF network within the automotive industry in Iran is used.

Findings
The findings of the research clearly demonstrate that considering the network attributes of the actors within the prediction models can significantly enhance the accuracy and precision of the models.

Research limitations/implications
The main limitation of this research is that the applicability and effectiveness of the proposed model are investigated within a single case.

Practical implications
The proposed model can provide a well-established basis for financial intermediaries in SCF to make more sophisticated decisions within financial facilitation mechanisms.

Originality/value
This study contributes to the existing literature on credit risk evaluation by considering credit risk as a systematic risk that can be influenced by the network measures of collaborating actors. To do so, the paper proposes an approach that considers the network characteristics of SCF as critical attributes to predict credit risk.
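
A hedged sketch of the feature-construction idea described above: derive network measures for each actor in the SCF network with networkx, append them to organization-level attributes, and train one of several candidate supervised models to predict credit risk. The column names and the choice of a random forest are assumptions.

```python
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def add_network_features(actors_df, collaboration_edges):
    """actors_df: one row per actor with organizational features, an 'actor' id
    column and a 'risk' label; collaboration_edges: list of (actor_a, actor_b)."""
    g = nx.Graph(collaboration_edges)
    df = actors_df.copy()
    df["degree"] = df["actor"].map(dict(g.degree()))
    df["betweenness"] = df["actor"].map(nx.betweenness_centrality(g))
    df["closeness"] = df["actor"].map(nx.closeness_centrality(g))
    return df.fillna(0)  # actors without links get zero-valued network features

def train_credit_risk_model(actors_df, collaboration_edges):
    df = add_network_features(actors_df, collaboration_edges)
    X = df.drop(columns=["actor", "risk"])
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, df["risk"])
    return model
```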


2020 ◽  
Vol 54 (3) ◽  
pp. 383-405
Author(s):  
Balachandra Kumaraswamy ◽  
Poonacha P G

Purpose
In general, Indian Classical Music (ICM) is classified into two genres: Carnatic and Hindustani. Even though both formats have a similar foundation, their presentation differs in many ways. The fundamental components of ICM are raga and taala. Taala represents the rhythmic patterns or beats (Dandawate et al., 2015; Kirthika and Chattamvelli, 2012). Raga, the broader term, is determined from the flow of swaras (notes). A raga is defined by vital factors such as its swaras, aarohana-avarohana and typical phrases. Technically, a swara is a fundamental frequency sustained for a definite duration. Automatic raga recognition also faces many further problems. Thus, in this work, ragas are recognized without using explicit note-sequence information, which requires an efficient classification model.

Design/methodology/approach
This paper proposes an efficient raga identification system through which music of the Carnatic genre can be effectively recognized. It also proposes an adaptive classifier based on a neural network (NN) in which the extracted feature set is used for learning. The adaptive classifier exploits an advanced metaheuristic learning algorithm to learn from the extracted feature set. Since the learning algorithm plays a crucial role in determining the precision of raga recognition, this model uses the GWO.

Findings
The performance analysis shows that the accuracy of the proposed model is 16.6% better than NN with LM, NN with GD and NN with FF, and 14.7% better than NN with PSO. The specificity of the proposed model is 19.6%, 24.0%, 13.5% and 17.5% superior to NN with LM, NN with GD, NN with FF and NN with PSO, respectively. The NPV of the proposed model is 19.6%, 24%, 13.5% and 17.5% better than NN with LM, NN with GD, NN with FF and NN with PSO, respectively. Thus, the proposed model provides better results than the other conventional classification methods.

Originality/value
This paper proposes an efficient raga identification system through which music of the Carnatic genre can be effectively recognized, together with an adaptive NN-based classifier.
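
A hedged sketch of the metaheuristic training step described above, assuming GWO denotes the grey wolf optimizer: a generic GWO minimizing an arbitrary objective over a box-bounded search space. In the paper the objective would be the NN's classification error as a function of its weights; here a toy objective is used instead.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.apply_along_axis(objective, 1, wolves)

    for t in range(n_iter):
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three best wolves
        a = 2 - 2 * t / n_iter                                # decreases from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3, lo, hi)          # average of the three pulls
            fitness[i] = objective(wolves[i])
    best = int(np.argmin(fitness))
    return wolves[best], float(fitness[best])

# Toy usage: minimize the sphere function in 5 dimensions
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=5, bounds=(-5.0, 5.0))
print(best_f)
```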

