Top-k Matching Queries for Filter-Based Profile Matching in Knowledge Bases

2017
Author(s):  
Alejandra Lorena Paoletti ◽  
Jorge Martinez-Gil ◽  
Klaus-Dieter Schewe

Finding the best matching job offers for a candidate profile, or the best candidate profiles for a particular job offer, constitutes the most common and most relevant type of query in the Human Resources (HR) sector. Technically, this requires investigating top-k queries on top of knowledge bases and relational databases. In this paper we propose a top-k query algorithm on relational databases that produces effective and efficient results. The approach considers the partial order of matching relations between job and candidate profiles together with an efficient design of the data involved. In particular, the focus on a single relation, the matching relation, is crucial to meeting these expectations.
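The abstract describes top-k queries over a single materialized matching relation. As an illustrative sketch only (the table layout, identifiers, and scores below are hypothetical, not taken from the paper), retrieving the k best-matching candidates for a job from such a relation might look like:

```python
import heapq

# Hypothetical matching relation: (candidate_id, job_id, score) rows,
# as might be materialized in a single relational table.
matching = [
    ("c1", "j1", 0.92),
    ("c2", "j1", 0.75),
    ("c3", "j1", 0.88),
    ("c1", "j2", 0.40),
    ("c2", "j2", 0.81),
]

def top_k_candidates(matching, job_id, k):
    """Return the k best-matching candidates for a job, highest score first."""
    rows = [(cand, score) for cand, job, score in matching if job == job_id]
    return heapq.nlargest(k, rows, key=lambda r: r[1])

print(top_k_candidates(matching, "j1", 2))
# [('c1', 0.92), ('c3', 0.88)]
```

In a real system the selection and ranking would be pushed into the database as a query over the matching relation rather than done in application code.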

2017
Author(s):  
Alejandra Lorena Paoletti ◽  
Jorge Martinez-Gil ◽  
Klaus-Dieter Schewe

In the Human Resources domain, accurate matching between job positions and applicant profiles is crucial for job seekers and recruiters alike. The use of recruitment taxonomies has proven to be of significant advantage in this area by enabling semantic matching and reasoning. Hence, the development of Knowledge Bases (KB) where curricula vitae and job offers can be uploaded and queried in order to obtain the best matches, by both applicants and recruiters, is highly important. We introduce an approach to improve the matching of profiles, starting by expressing job and applicant profiles as filters representing skills and competencies. Filters are used to calculate the similarity between concepts in the subsumption hierarchy of a KB. This is enhanced by adding weights and aggregates on filters. Moreover, we present an approach to evaluate over-qualification and introduce blow-up operators that transform certain role relations such that the matching of filters can be applied.
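The idea of comparing weighted skill filters can be sketched as a weighted set-overlap measure. This is a simplified illustration only (the similarity function, skill names, and weights are invented; the paper's actual filter operators work over a KB subsumption hierarchy):

```python
def weighted_filter_similarity(profile_a, profile_b, weights):
    """Weighted-Jaccard similarity between two skill filters (sets of
    concept names); weights reflect the importance of each skill."""
    common = profile_a & profile_b
    union = profile_a | profile_b
    num = sum(weights.get(s, 1.0) for s in common)
    den = sum(weights.get(s, 1.0) for s in union)
    return num / den if den else 1.0

# Hypothetical job filter, CV filter, and skill weights.
job = {"sql", "java", "uml"}
cv = {"sql", "java", "python"}
w = {"sql": 2.0, "java": 1.5, "uml": 1.0, "python": 1.0}
print(round(weighted_filter_similarity(job, cv, w), 3))
```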


Author(s):  
D. J. RANDALL ◽  
H. J. HAMILTON ◽  
R. J. HILDERMAN

This paper addresses the problem of using domain generalization graphs to generalize temporal data extracted from relational databases. A domain generalization graph associated with an attribute defines a partial order which represents a set of generalization relations for the attribute. We propose formal specifications for domain generalization graphs associated with calendar (date and time) attributes. These graphs are reusable (i.e. can be used to generalize any calendar attributes), adaptable (i.e. can be extended or restricted as appropriate for particular applications), and transportable (i.e. can be used with any database containing a calendar attribute).
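A minimal sketch of generalizing calendar values along such a partial order (the levels and sample data are invented for illustration; the paper's domain generalization graphs are richer formal structures):

```python
from datetime import date

# Illustrative generalization functions along a calendar hierarchy:
# each maps a date to a coarser domain (day -> month -> quarter -> year).
GENERALIZATIONS = {
    "month": lambda d: (d.year, d.month),
    "quarter": lambda d: (d.year, (d.month - 1) // 3 + 1),
    "year": lambda d: d.year,
}

def generalize(dates, level):
    """Group raw calendar values under one node of the generalization graph."""
    f = GENERALIZATIONS[level]
    out = {}
    for d in dates:
        out.setdefault(f(d), []).append(d)
    return out

sales_dates = [date(2024, 1, 15), date(2024, 2, 3), date(2024, 7, 9)]
print(generalize(sales_dates, "quarter"))
```

The reusability claimed in the abstract corresponds to the fact that these generalization functions apply to any attribute of calendar type, independent of the table it appears in.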


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate as well as scalable. Both algorithms have been used for building the DBpedia 3.9 release: With SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
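The core of SDType is a weighted vote over the type distributions of the properties a resource uses. The following is a simplified sketch of that idea only; the distributions, property weights, and threshold below are assumed values, not figures from the paper:

```python
from collections import defaultdict

# Illustrative per-property type distributions, i.e. P(type | subject uses
# property), as SDType would estimate them from the data set itself.
type_dist = {
    "birthPlace": {"Person": 0.90, "Place": 0.05},
    "author":     {"Book": 0.60, "Person": 0.30},
}
# Property weights reflecting how discriminative each property is (assumed).
weight = {"birthPlace": 0.8, "author": 0.5}

def sdtype_vote(properties, threshold=0.4):
    """Weighted vote over the properties a resource uses; returns the
    types whose averaged confidence reaches the threshold."""
    score = defaultdict(float)
    for p in properties:
        for t, prob in type_dist.get(p, {}).items():
            score[t] += weight.get(p, 0.0) * prob
    total = sum(weight.get(p, 0.0) for p in properties) or 1.0
    return {t: s / total for t, s in score.items() if s / total >= threshold}

print(sdtype_vote(["birthPlace", "author"]))
```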


2017
Vol 48 (1)
pp. 220-242
Author(s):  
Fu Zhang ◽  
Z. M. Ma ◽  
Qiang Tong ◽  
Jingwei Cheng

bit-Tech
2019
Vol 2 (1)
pp. 28-42
Author(s):  
Muhammad Subhana ◽  
Yakub Yakub

An employee performance evaluation at Buddhist Dharma University is needed to assess the potential of its human resources. Obtaining an employee performance appraisal for one year requires a decision support system that is fast and measurable, so that the information obtained is accurate. Employee performance is assessed using the profile matching method and compared against the SAW (Simple Additive Weighting) method so that the results of the two can be properly compared. The purpose of the appraisal is to allow leaders to easily obtain information about employee performance ratings at Buddhist Dharma University. Using the profile matching method, 4 employees can be recommended for both salary increases and promotions, 17 employees can be recommended for salary increases, and 12 employees are not eligible for salary increases or promotions. In comparison, with the Simple Additive Weighting (SAW) method, 19 employees are eligible for salary increases and 14 employees are not eligible for salary increases or promotions.
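The SAW method itself is a standard multi-criteria ranking: normalize each criterion, then take a weighted sum per alternative. A minimal sketch (the criteria, weights, and employee scores below are hypothetical, not the study's data):

```python
def saw_rank(scores, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column, then
    take the weighted sum per alternative. benefit[j] is True for benefit
    criteria (higher is better), False for cost criteria (lower is better)."""
    cols = list(zip(*scores.values()))
    result = {}
    for emp, row in scores.items():
        total = 0.0
        for j, w in enumerate(weights):
            if benefit[j]:
                norm = row[j] / max(cols[j])
            else:
                norm = min(cols[j]) / row[j]
            total += w * norm
        result[emp] = round(total, 4)
    return sorted(result.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical criteria: competence, discipline, absence days (a cost criterion).
scores = {"A": (80, 90, 2), "B": (70, 95, 1), "C": (90, 70, 4)}
print(saw_rank(scores, weights=[0.5, 0.3, 0.2], benefit=[True, True, False]))
# employee B ranks first on these assumed figures
```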


2012
Vol 5 (2)
pp. 225-232
Author(s):  
Harald Wahl ◽  
Christian Kaufmann ◽  
Florian Eckkrammer ◽  
Alexander Mense ◽  
Helmut Gollner ◽  
...  

The paper measures the soft-skill requirements that companies and industry place on technically oriented academic graduates, especially those from IT degree programs such as business informatics, computer science, or information management. To this end, between March and September 2010, two groups of researchers at the University of Applied Sciences (UAS) Technikum Vienna analyzed job profiles and the intended denotation of certain keywords. One group carried out a statistical content analysis of job offers found in Austrian newspapers or provided by online job platforms. The other group developed a survey that was sent to several companies in Austria and addressed to their human resources departments. The paper explains the evaluation results in detail and discusses their implications for academic curriculum design.


Author(s):  
Z.M. Ma ◽  
Yanhui Lv ◽  
Li Yan

Ontologies are an important part of the W3C standards for the Semantic Web, used to specify standard conceptual vocabularies for exchanging data among systems, provide reusable knowledge bases, and facilitate interoperability across multiple heterogeneous systems and databases. However, current ontologies are not sufficient for handling the vague information that is commonly found in many application domains. A feasible solution is to extend the classical ontology with fuzzy capabilities. In this article, we propose a framework for generating fuzzy ontologies from fuzzy relational databases, in which the fuzzy ontology consists of a fuzzy ontology structure and instances. We consider both the schema and the instances of the fuzzy relational databases, and transform them into a fuzzy ontology structure and a fuzzy RDF data model, respectively. This ensures the integrity of the original structure as well as the completeness and consistency of the original instances in the fuzzy relational databases.


Author(s):  
Trương Thị Thu Hà ◽  
Nguyễn Thị Vân ◽  
Nguyễn Xuân Huy

The algorithms for closures and keys in relation schemas with functional dependencies are well known in the theory of relational databases. However, the problems of closures and keys in relation schemas with positive Boolean dependencies remain open. This paper proposes a solution to these problems. The results are presented using a unification method, a new technique for constructing the basic algorithms for logic dependencies in data and knowledge bases.
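For reference, the well-known baseline the abstract starts from is the attribute-closure algorithm for functional dependencies; the paper's contribution extends such algorithms to positive Boolean dependencies, which this sketch does not cover:

```python
def closure(attrs, fds):
    """Closure of an attribute set under functional dependencies.
    fds is a list of (lhs, rhs) pairs of attribute sets: lhs -> rhs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # Fire the dependency if its left side is covered and it adds something.
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Example schema R(A, B, C, D) with A -> B and B,C -> D.
fds = [({"A"}, {"B"}), ({"B", "C"}, {"D"})]
print(sorted(closure({"A", "C"}, fds)))
# ['A', 'B', 'C', 'D']
```

An attribute set is a key exactly when its closure is the full schema, so key finding reduces to repeated closure computations.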

