description file
Recently Published Documents

TOTAL DOCUMENTS: 17 (FIVE YEARS: 5)
H-INDEX: 2 (FIVE YEARS: 0)

2021 ◽  
Author(s):  
Mohammed Alghazal

Abstract Employers commonly use time-consuming screening tools or online matching engines, driven by manual rules and predefined keywords, to search for potential job applicants. Such traditional techniques have not kept pace with the new digital revolution in machine learning and big data analytics. This paper presents artificial intelligence solutions for ranking resumes and for CV-to-job-description matching. Open-source resume and job description documents were used to construct and validate the machine learning models in this paper. Documents were converted to images and processed via Google Cloud using an Optical Character Recognition (OCR) algorithm to extract text from all resume and job description documents with more than 97% accuracy. Prior to modeling, the extracted text was processed via a series of Natural Language Processing (NLP) techniques: splitting text into tokens, grouping together inflected forms of words (i.e., lemmatization), and removing stop words and punctuation marks. After text processing, resumes were trained using the unsupervised machine learning algorithm Latent Dirichlet Allocation (LDA) for topic modeling and categorization. Given the type of resumes used, the algorithm was able to categorize them into four main job sectors: marketing and business, engineering, computer science/IT, and health. A score was assigned to each resume, representing its maximum LDA probability, for ranking. A second, deep-learning-based algorithm, Doc2Vec, was also used to train on resumes and match them to relevant job descriptions. In this model, resumes are represented by unique vectors that can be used to group similar documents and to match and retrieve resumes related to a job description document provided by HR. Similarity is measured between each resume and the given job description file to query the top job candidates. The model was tested against several job description files related to engineering, IT and human resources, and was able to identify the top-ranking resumes from hundreds of trained resumes. This paper presents an innovative method for processing, categorizing and ranking resumes using advanced computational models empowered by the latest fourth industrial revolution technologies. This solution benefits both job seekers and employers, providing an efficient and unbiased data-driven method for finding top applicants for a given job.
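As a rough illustration of the matching step, the sketch below embeds resumes and a job description with Doc2Vec and ranks resumes by cosine similarity. It assumes gensim 4.x; the resume texts, tags and job description are invented for illustration and are not the paper's data.

```python
# Minimal sketch: rank resumes against a job description with Doc2Vec.
# Assumes gensim 4.x; the documents below are invented placeholders.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

resumes = {
    "cv_001": "Petroleum engineer with five years of reservoir simulation experience",
    "cv_002": "Front-end developer skilled in TypeScript, React and REST web services",
}  # in practice: OCR-extracted, lemmatized resume text

corpus = [TaggedDocument(simple_preprocess(text), [tag])
          for tag, text in resumes.items()]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Embed the job description and query the most similar resume vectors.
jd_text = "Seeking a software engineer experienced in web development"
jd_vector = model.infer_vector(simple_preprocess(jd_text))
for tag, score in model.dv.most_similar([jd_vector], topn=2):
    print(f"{tag}: cosine similarity {score:.3f}")
```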


As a promising innovation poised to change how we design, deploy and manage networks, software-based Network Function Virtualization (NFV) enables hardware-independent, flexible, quick and efficient network service provisioning. With the growing prevalence of NFV, the Internet has also become a hybrid environment in which NFV-based networks coexist with traditional devices. To ease the understanding, planning, evaluation and management of such a novel network environment, there is a great need for a new NFV-compatible measurement framework, which has been missing until now. To fill this gap, a framework named Software Defined Network Measurement System (SDNMS) is introduced. First, the architecture of SDNMS is proposed. In this design, a general method for describing the working mode of network measurement is defined. This method can also be used to generate a network measurement middle box (NMMB) at a particular location in the NFV network according to a customized description file, and to flexibly deploy the network measurement function. Second, virtual network measurement functions (VNMF) combined with LXC containers are used to form the NMMB function. Third, a control strategy is introduced to start, stop and update NMMBs to produce a specific network measurement system. Finally, a prototype of SDNMS with a network monitoring function for detecting network performance anomalies and locating faults is presented. Experimental results show that the SDNMS architecture and related technologies are practical and effective for launching and controlling network measurement functions in NFV networks. We believe SDNMS could provide a new method for practitioners to perform network management and analysis in the age of NFV.
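For a concrete (if simplified) picture of how a description file might drive NMMB deployment, the sketch below parses an invented JSON description and starts an LXC container hosting one measurement command. It assumes the classic LXC command-line tools are installed; the schema and commands are illustrative, not SDNMS's actual interface.

```python
# Minimal sketch of a description-file-driven control loop, assuming the
# classic LXC CLI (lxc-create/lxc-start/lxc-attach). The JSON schema is
# invented for illustration and is not the SDNMS description format.
import json
import subprocess

DESCRIPTION = """
{
  "name": "nmmb-edge-1",
  "template": "download",
  "measurement": ["ping", "-c", "4", "10.0.0.1"]
}
"""

def launch_nmmb(desc: dict) -> None:
    """Create and start an LXC container hosting one measurement function."""
    name = desc["name"]
    subprocess.run(["lxc-create", "-n", name, "-t", desc["template"],
                    "--", "-d", "ubuntu", "-r", "jammy", "-a", "amd64"],
                   check=True)
    subprocess.run(["lxc-start", "-n", name], check=True)
    # Run the configured measurement command inside the container.
    subprocess.run(["lxc-attach", "-n", name, "--", *desc["measurement"]],
                   check=True)

def stop_nmmb(name: str) -> None:
    """Stop and destroy the middle box when the measurement task ends."""
    subprocess.run(["lxc-stop", "-n", name], check=True)
    subprocess.run(["lxc-destroy", "-n", name], check=True)

if __name__ == "__main__":
    launch_nmmb(json.loads(DESCRIPTION))
```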


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 495
Author(s):  
Chih-Yuan Huang ◽  
Hsin-Hsien Chen

The Sensor Web and the Internet of Things (IoT), together SW-IoT, have been attracting attention from various fields. Both deploy networks of embedded devices to monitor physical properties (i.e., sensing capability) or to be controlled (i.e., tasking capability). One of the most important tasks in realizing the SW-IoT vision is to establish an open and interoperable architecture across the device layer, gateway layer, service layer, and application layer. To achieve this objective, many organizations and alliances have proposed standards for different layers. Among these, the Open Geospatial Consortium (OGC) SensorThings API is arguably one of the most complete and flexible service standards. However, the SensorThings API only addresses heterogeneity issues in the service layer. Embedded devices following proprietary protocols need to join closed ecosystems and then link to the SensorThings API ecosystem via customized connectors. To address this issue, one could first adopt other device-layer and gateway-layer open standards and then perform data model mapping with the SensorThings API. However, the data model mapping is not always straightforward, as the standards were designed independently. Therefore, this research proposes a more direct solution to unify the entire SW-IoT architecture by extending the SensorThings API ecosystem to the gateway layer and the device layer. To be specific, this research proposes SW-IoT Plug and Play (IoT-PNP) to achieve an automatic registration procedure for embedded devices. IoT-PNP contains three main components: (1) a description file describing device metadata and capabilities, (2) a communication protocol between the gateway layer and the device layer for establishing connections, and (3) an automatic registration procedure for both sensing and tasking capabilities. Overall, we believe the proposed solution could help achieve an open and interoperable SW-IoT end-to-end architecture based on the OGC SensorThings API.
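As an illustration of the registration procedure, the sketch below shows how a gateway might read a device description file and deep-insert the device into a SensorThings API service. The service root and the description-file fields are hypothetical; the entity layout follows the OGC SensorThings API v1.0 data model.

```python
# Minimal sketch of IoT-PNP-style registration: read a device description
# file and deep-insert a Thing (with Datastreams) into a SensorThings API
# service. The endpoint and description-file schema are assumptions.
import json
import requests

STA_ENDPOINT = "http://example.org/v1.0"  # hypothetical service root

with open("device_description.json") as f:  # device metadata + capabilities
    desc = json.load(f)

thing = {
    "name": desc["name"],
    "description": desc["description"],
    "Datastreams": [{
        "name": cap["name"],
        "description": cap["description"],
        "unitOfMeasurement": cap["uom"],
        "observationType": ("http://www.opengis.net/def/observationType/"
                            "OGC-OM/2.0/OM_Measurement"),
        "Sensor": cap["sensor"],
        "ObservedProperty": cap["observedProperty"],
    } for cap in desc["sensingCapabilities"]],
}

# Deep insert: one POST creates the Thing and its nested entities.
resp = requests.post(f"{STA_ENDPOINT}/Things", json=thing)
resp.raise_for_status()
print("Registered Thing at:", resp.headers.get("Location"))
```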


2018 ◽  
Vol 14 (2) ◽  
pp. 233-258 ◽  
Author(s):  
Efthimia Mavridou ◽  
Konstantinos M. Giannoutakis ◽  
Dionysios Kehagias ◽  
Dimitrios Tzovaras ◽  
George Hassapis

Purpose
Semantic categorization of Web services is a fundamental requirement for enabling more efficient and accurate search and discovery of services in the semantic Web era. However, to efficiently deal with the growing presence of Web services, more automated mechanisms are required. This paper introduces an automatic Web service categorization mechanism that exploits various techniques aimed at increasing the overall prediction accuracy.

Design/methodology/approach
The paper proposes the use of Error Correcting Output Codes on top of a classifier based on Logistic Model Trees, in conjunction with a data pre-processing technique that reduces the dimension of the original feature space without affecting data integrity. The proposed technique is generalized so as to apply to all Web services that have a description file. A semantic matchmaking scheme is also proposed for enabling the semantic annotation of the input and output parameters of each operation.

Findings
The proposed Web service categorization framework was tested on OWLS-TC v4.0, as well as on a synthetic data set, with a systematic evaluation procedure that enables comparison with well-known approaches. After exhaustive evaluation experiments, categorization efficiency was measured in terms of accuracy, precision, recall and F-measure. The presented framework outperformed the benchmark techniques, which comprise different variations of it as well as third-party implementations.

Originality/value
The proposed three-level categorization approach is a significant contribution to the Web service community, as it allows the automatic semantic categorization of all functional elements of Web services that are equipped with a service description file.
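As a rough sketch of the core classification idea (assuming scikit-learn), the snippet below combines TF-IDF features, dimensionality reduction and Error Correcting Output Codes over a linear base learner. scikit-learn offers no Logistic Model Trees, so logistic regression stands in for the paper's LMT classifier, and the tiny data set is invented.

```python
# Minimal sketch: ECOC on top of a base classifier with feature-space
# reduction. LogisticRegression is a stand-in for Logistic Model Trees,
# which scikit-learn does not provide; the data set is illustrative.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

descriptions = [
    "returns current weather conditions for a city",
    "books a hotel room and confirms the reservation",
    "converts an amount between two currencies",
    "retrieves a seven day weather forecast",
]
categories = ["weather", "travel", "finance", "weather"]

pipeline = make_pipeline(
    TfidfVectorizer(),                 # service description -> sparse features
    TruncatedSVD(n_components=2),      # reduce feature-space dimension
    OutputCodeClassifier(LogisticRegression(), code_size=2, random_state=0),
)
pipeline.fit(descriptions, categories)
print(pipeline.predict(["shows tomorrow's temperature for a location"]))
```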


2017 ◽  
Author(s):  
Ernur Saka ◽  
Benjamin J. Harrison ◽  
Kirk West ◽  
Jeffrey C. Petruska ◽  
Eric C. Rouchka

Abstract
Background
Since the introduction of microarrays in 1995, researchers worldwide have used both commercial and custom-designed microarrays to understand the differential expression of transcribed genes. Public databases such as ArrayExpress and the Gene Expression Omnibus (GEO) have made millions of samples readily available. One main drawback of microarray data analysis involves the selection of probes to represent a specific transcript of interest, particularly in light of the fact that transcript-specific knowledge (notably alternative splicing) is dynamic in nature.

Results
We therefore developed a framework for reannotating and reassigning probe groups for Affymetrix® GeneChip® technology based on functional regions of interest. This framework addresses three issues in Affymetrix® GeneChip® data analysis: removing nonspecific probes, updating probe target mapping based on the latest genome knowledge, and grouping probes into gene-, transcript- and region-based (UTR, individual exon, CDS) probe sets. Updated gene and transcript probe sets provide more specific analysis results based on current genomic and transcriptomic knowledge. The framework selects unique probes, aligns them to gene annotations and generates a custom Chip Description File (CDF). The analysis reveals that only 87% of the Affymetrix® GeneChip® HG-U133 Plus 2 probes align uniquely to the current hg38 human assembly without mismatches. We also tested the new mappings on the publicly available rat and human data series GSE48611 and GSE72551 obtained from GEO, and illustrate that functional grouping allows for the subtle detection of regions of interest likely to have phenotypic consequences.

Conclusion
Through reanalysis of the publicly available data series GSE48611 and GSE72551, we profiled the contribution of UTR and CDS regions to gene expression levels globally. The comparison between region-based and gene-based results indicated that the genes detected as expressed by gene-based and region-based CDFs show high consistency, and region-based results allow us to detect changes in transcript formation.
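As a toy illustration of the probe filtering and regrouping steps (not the authors' actual pipeline), the sketch below keeps only uniquely and perfectly aligned probes and regroups them into gene-by-region probe sets, the unit a custom CDF would encode. The alignment table and column names are invented.

```python
# Minimal sketch of probe filtering and region-based regrouping with pandas.
# In the real framework, probes are aligned to hg38 and gene annotations,
# and the groups are written out as a custom Chip Description File (CDF).
import pandas as pd

# One row per probe alignment: probe id, annotated gene and region,
# number of genomic hits, and mismatches reported by the aligner.
alignments = pd.DataFrame({
    "probe_id":   ["p1", "p2", "p2", "p3", "p4"],
    "gene":       ["BDNF", "BDNF", "NTRK2", "BDNF", "BDNF"],
    "region":     ["CDS", "3UTR", "CDS", "3UTR", "CDS"],
    "hits":       [1, 2, 2, 1, 1],
    "mismatches": [0, 0, 0, 0, 1],
})

# Keep only probes that align uniquely and without mismatches.
unique_perfect = alignments[(alignments["hits"] == 1) &
                            (alignments["mismatches"] == 0)]

# Regroup surviving probes into region-based probe sets (gene x region).
probe_sets = (unique_perfect.groupby(["gene", "region"])["probe_id"]
              .apply(list))
print(probe_sets)
```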

