Better understanding of the labour market using Big Data

2021 ◽  
Vol 20 (3) ◽  
pp. 677-692
Author(s):  
Alena Vankevich ◽  
Iryna Kalinouskaya

Motivation: As a result of the digitalisation of the economy, the number of Internet users is increasing, which leads to an increase in the number of vacancies posted on online platforms and services. Vacancy descriptions include information about skills and competencies, which is a source of additional data for labour market analysis. This information cannot be obtained through the analysis of statistical and administrative data. Therefore, it is important to learn how to evaluate new information sources and use the data they generate, and to develop tools that people and organizations can use to find an employee or a vacant post. The study focuses on the analysis and forecasting of labour demand in the context of skills and competencies, which significantly enriches the information available about the labour market and facilitates effective decision-making. Aim: The main goals of this article are the following: (1) identification of methodological approaches to labour market analysis using Big Data; (2) assessment of labour demand and labour supply in the context of skills and competencies listed in vacancy descriptions posted on job portals; and (3) determination of matches (mismatches) between skills and competencies in order to help companies and individuals obtain better employment and education. The empirical data used in the research were collected from job vacancy descriptions (16 401 vacancies) and CVs (227 215 CVs) on the most popular open job portals in Belarus through a scraping approach and classified according to the ESCO and ISCO codes. Quantitative analysis by means of artificial intelligence was used in the research.
Results: The study results revealed that information about the volume and structure of skills and competencies, obtained by scraping data from vacancy descriptions and CVs posted on online portals, allows for more precise diagnostics of labour demand and supply and helps overcome bilateral information asymmetry in the labour market. Based on the analysis, the parameters of scarcity and excess of competencies for individual occupations in the labour market are determined: the correlation ratio between applicants' competencies and those requested by employers, in the context of occupations (four digits according to the ISCO classification), is less than 0.8; and the ranks of competencies listed in CVs and vacancy descriptions deviate across the ESCO groups of skills/competencies, with the sign of the deviations revealed. A methodology is developed to identify areas of necessary knowledge acquisition (by analysing competencies listed in CVs and vacancy descriptions at the 3rd and 4th digit level of the ISCED classification) and skills (by analysing competencies at the 2nd digit level of the ESCO groups). The paper illustrates the limitations of using Big Data as an empirical database and explains measures to overcome those limitations.
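The demand/supply matching described above can be illustrated as a similarity score over skill frequencies for one occupation. The sketch below is a minimal, hypothetical illustration: the skill names, toy data, and the cosine-similarity choice are assumptions, not the paper's exact correlation ratio.

```python
from collections import Counter

def skill_match_ratio(vacancy_skills, cv_skills):
    """Cosine similarity between the skill frequencies requested by
    employers and those offered by applicants for one occupation
    (illustrative stand-in for the paper's correlation ratio)."""
    vac = Counter(vacancy_skills)
    cv = Counter(cv_skills)
    skills = set(vac) | set(cv)
    dot = sum(vac[s] * cv[s] for s in skills)
    norm_v = sum(v * v for v in vac.values()) ** 0.5
    norm_c = sum(c * c for c in cv.values()) ** 0.5
    return dot / (norm_v * norm_c) if norm_v and norm_c else 0.0

# Toy example: an occupation where applicants under-report one skill
demand = ["python", "sql", "sql", "communication"]
supply = ["python", "communication", "communication", "excel"]
print(round(skill_match_ratio(demand, supply), 2))  # -> 0.5
```

A low score for an occupation would flag the kind of competency mismatch the paper diagnoses at the four-digit ISCO level.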

2021 ◽  
Vol 14 (6) ◽  
pp. 125
Author(s):  
Charles Éric Manyombé ◽  
Sébastien H. Azondékon

In a multi-project environment, organizational complexity refers to the difficulties that organizations often face in choosing projects to build their portfolios, since the candidate projects do not all serve the same strategic business objectives. For this reason, the project selection process requires an effective decision-making tool when composing a project portfolio. The objective of this paper is to propose an adapted framework for a better project selection procedure, drawing on strategic relevance, profitability criteria, uncertainty and risk analysis, the availability of scarce resources, and the determination of interdependencies between different projects.
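A selection procedure of this kind can be sketched as multi-criteria scoring followed by selection under a resource budget. The sketch below is a minimal illustration with invented projects, weights, and a greedy heuristic; the paper's actual framework may weigh and combine the criteria quite differently.

```python
# Hypothetical portfolio selection: score candidate projects on the
# criteria the framework draws on (strategic fit, profitability, risk),
# then pick the best-scoring set that fits the resource budget.

projects = [
    # name, strategic_fit (0-1), profitability (0-1), risk (0-1), resource_cost
    ("ERP upgrade",    0.9, 0.6, 0.3, 40),
    ("New plant line", 0.7, 0.9, 0.6, 70),
    ("R&D prototype",  0.8, 0.4, 0.8, 30),
    ("Process audit",  0.5, 0.5, 0.1, 10),
]

WEIGHTS = {"fit": 0.4, "profit": 0.4, "risk": 0.2}  # illustrative weights

def score(p):
    _, fit, profit, risk, _ = p
    return WEIGHTS["fit"] * fit + WEIGHTS["profit"] * profit - WEIGHTS["risk"] * risk

def select_portfolio(projects, budget):
    """Greedy heuristic: take projects by score per unit of scarce
    resource until the budget is exhausted."""
    chosen, used = [], 0
    for p in sorted(projects, key=lambda p: score(p) / p[4], reverse=True):
        if used + p[4] <= budget:
            chosen.append(p[0])
            used += p[4]
    return chosen

print(select_portfolio(projects, budget=80))
```

Interdependencies between projects, which the paper also considers, would require extending the score or the feasibility check; the greedy step above treats projects as independent.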


Web Services ◽  
2019 ◽  
pp. 953-978
Author(s):  
Krishnan Umachandran ◽  
Debra Sharon Ferdinand-James

Continued technological advancements of the 21st century afford massive data generation in sectors of our economy, including the domains of agriculture, manufacturing, and education. However, harnessing such large-scale data with modern technologies for effective decision-making appears to be an evolving science that requires knowledge of Big Data management and analytics. Big Data in agriculture, manufacturing, and education is varied in form, including voluminous text, images, and graphs. Applying Big Data science techniques (e.g., functional algorithms) to extract intelligence affords decision makers quick responses to productivity, market resilience, and student enrollment challenges in today's unpredictable markets. This chapter employs data science for potential solutions to Big Data applications in the sectors of agriculture, manufacturing and, to a lesser extent, education, using modern technological tools such as Hadoop, Hive, Sqoop, and MongoDB.


Author(s):  
Mona Bakri Hassan ◽  
Elmustafa Sayed Ali Ahmed ◽  
Rashid A. Saeed

The use of AI algorithms in the IoT enhances the ability to analyse big data across various platforms for a number of IoT applications, including industrial applications. AI provides unique solutions for managing each of the different types of IoT data in terms of identification, classification, and decision making. In the industrial IoT (IIoT), sensors and other intelligent devices can be added to new or existing plants to monitor external parameters such as energy consumption and other industrial parameters. In addition, smart devices designed as factory robots, specialized decision-making systems, and other online auxiliary systems are used in industrial IoT deployments. Industrial IoT systems need smart operations management methods. Machine learning provides methods for analysing the big data generated for decision-making purposes, and drives efficient and effective decision making, particularly in the field of data flow and real-time analytics associated with advanced industrial computing networks.
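One minimal example of the kind of real-time analytics described: flagging anomalous energy-consumption readings with a simple z-score rule. The data, threshold, and method below are illustrative assumptions, not a specific IIoT system or the chapter's own approach.

```python
import statistics

def flag_anomalies(readings, z_threshold=2.0):
    """Return indices of readings that deviate strongly from the mean,
    a minimal stand-in for the real-time monitoring of parameters such
    as energy consumption (threshold and data are illustrative)."""
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    return [i for i, x in enumerate(readings)
            if stdev and abs(x - mean) / stdev > z_threshold]

# Simulated hourly energy draw (kWh) with one spike at index 5
energy = [10.1, 9.8, 10.3, 10.0, 9.9, 25.0, 10.2, 10.1]
print(flag_anomalies(energy))  # -> [5]
```

A production IIoT pipeline would replace this batch rule with streaming models, but the decision step, turning raw sensor data into an actionable flag, is the same.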


2019 ◽  
Vol 9 (4) ◽  
pp. 293-302
Author(s):  
Oded Koren ◽  
Carina Antonia Hallin ◽  
Nir Perel ◽  
Dror Bendet

Abstract Big data research has become an important discipline in information systems research. However, the flood of data being generated on the Internet is increasingly unstructured and non-numeric, in the form of images and texts. Thus, research indicates an increasing need for more efficient algorithms for treating mixed data in big data for effective decision making. In this paper, we apply the classical K-means algorithm to both numeric and categorical attributes in big data platforms. We first present an algorithm that handles the problem of mixed data. We then use big data platforms to implement the algorithm, demonstrating its functionalities by applying it in a detailed case study. This provides a solid basis for performing more targeted profiling for decision making and research using big data. Consequently, decision makers will be able to treat mixed data, both numerical and categorical, to explain and predict phenomena in the big data ecosystem. Our research includes a detailed end-to-end case study presenting an implementation of the suggested procedure, demonstrating its capabilities and the advantages that allow it to improve the decision-making process by matching organizations' business requirements to specific clusters/profiles based on the clustering outcomes.
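One common way to let classical K-means handle mixed data is to one-hot encode the categorical attributes so that plain Euclidean distance applies. The sketch below illustrates that idea in plain Python with toy data; the paper's own algorithm and its big data platform implementation may differ.

```python
import random

def one_hot_mixed(rows, cat_idx):
    """Encode mixed rows: numeric columns are kept as floats,
    categorical columns are one-hot encoded."""
    cats = {j: sorted({r[j] for r in rows}) for j in cat_idx}
    out = []
    for r in rows:
        vec = []
        for j, v in enumerate(r):
            if j in cat_idx:
                vec.extend(1.0 if v == c else 0.0 for c in cats[j])
            else:
                vec.append(float(v))
        out.append(vec)
    return out

def kmeans(points, k, iters=20, seed=0):
    """Classical K-means (Lloyd's algorithm) on numeric vectors."""
    random.seed(seed)
    centers = random.sample(points, k)
    dist = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return [min(range(k), key=lambda i: dist(p, centers[i])) for p in points]

# Toy mixed data: (income, region) -- column 1 is categorical
rows = [(10, "north"), (11, "north"), (50, "south"), (52, "south")]
labels = kmeans(one_hot_mixed(rows, cat_idx={1}), k=2)
print(labels)
```

In practice the numeric columns should be scaled and the one-hot columns possibly weighted, since an unscaled numeric attribute (as here) can dominate the distance.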


Author(s):  
Joseph Cluever ◽  
Thomas Esselman ◽  
Sam Harvey

The Electric Power Research Institute (EPRI), with Électricité de France (EDF), developed the Integrated Life Cycle Management (ILCM) computer code to provide a standard methodology supporting effective decision making for the long-term management of selected nuclear station components. In 2016, a Likelihood of Replacement (LoR) expert elicitation was developed to provide reliability curves for determining replacement options for components not initially included in ILCM. The LoR methodology required experts to estimate future replacement probabilities, which were then combined with historical failures using Bayesian analysis. Although this methodology was effective, parts of the industry were accustomed to providing a High/Medium/Low (HML) probability categorization for selected periods of operation. This paper presents an approach for calculating Weibull replacement probability curves from HML categorical replacement probability estimates. Additional questions beyond the initial HML categorization were developed, focused on the timing of category transitions, to refine parameter likelihood functions, reduce parameter uncertainty, and offset the significant Weibull parameter uncertainty introduced by using categorical estimates.
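The idea of recovering a Weibull curve from categorical estimates can be sketched as follows: map each H/M/L category to a representative cumulative replacement probability and fit the two-parameter Weibull CDF to those anchor points. The category-to-probability mapping and the grid search below are assumptions for illustration; the actual EPRI/EDF likelihood functions and Bayesian updating are more involved.

```python
import math

# Hypothetical mapping from High/Medium/Low to representative cumulative
# replacement probabilities (illustrative, not the elicitation's values).
CATEGORY_PROB = {"L": 0.05, "M": 0.30, "H": 0.70}

def weibull_cdf(t, beta, eta):
    """Two-parameter Weibull CDF: F(t) = 1 - exp(-(t/eta)^beta)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def fit_weibull(estimates):
    """Grid-search (beta, eta) minimizing squared error against the
    category-implied probabilities at each period's end time."""
    best, best_err = None, float("inf")
    for beta in [b / 10 for b in range(5, 51)]:   # shape 0.5 .. 5.0
        for eta in range(5, 101):                 # scale 5 .. 100 years
            err = sum((weibull_cdf(t, beta, eta) - CATEGORY_PROB[c]) ** 2
                      for t, c in estimates)
            if err < best_err:
                best, best_err = (beta, eta), err
    return best

# Expert says: Low by year 10, Medium by year 20, High by year 40
beta, eta = fit_weibull([(10, "L"), (20, "M"), (40, "H")])
print(beta, eta)
```

The follow-up questions on the timing of category transitions described in the paper would add more anchor points, tightening the fit and reducing the parameter uncertainty the grid search leaves open.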


2009 ◽  
Vol 52 (1) ◽  
pp. 104-110
Author(s):  
Bernard Fortin ◽  
Jean-François Gautrin

Abstract The paper is a brief critical evaluation of the treatment of the "supply of labour" in the CANDIDE model. The authors note that the "supply of labour" block does not appear to have received much of the CANDIDE development team's effort. With regard to the specification of the labour supply equation, no attempt has been made to integrate new developments in labour economics (a better treatment of secondary workers' labour supply, a Fisherian utility-maximising approach). We note certain econometric difficulties which have not been overcome: the limited dependent variable problem, the lag structure problem… Finally, though it is encouraging to see attempts in macro-economic models to explicitly integrate a labour demand and supply block with a certain disaggregation, important economic problems in the labour market can be dealt with only by much more flexible and disaggregated models than CANDIDE.


2020 ◽  
Vol 8 (6) ◽  
pp. 2127-2131

To improve quality in healthcare, it is very important to store, manage, retrieve, and use data and information properly, as they have great potential to help leaders make effective decisions. Managing data in healthcare is not an easy task, as it carries many associated risks. Recent trends in the healthcare industry show that the volume of data generated in healthcare is growing rapidly. Big Data is substantial, less organized, and mixed in nature. In addition, Big Data is considered one of the best tools for reducing the associated and operational costs of healthcare providers worldwide. While income should not be the only or prime indicator, it is equally important for healthcare providers to adopt the most valuable current tools, techniques, and infrastructure to apply Big Data effectively; otherwise, an organization risks losing money and profit. Objective: This paper focuses on the special factors of Big Data in healthcare. The main aim of this study is to identify the roles of Big Data in healthcare and to understand how Big Data supports data transactions in the healthcare industry. Methods: More than 30 published papers (n = 30) were reviewed, and suitable papers (n = 18) were included to draw conclusions. Information was condensed using distinct statistical assessments and techniques. Findings: The analysis of the published articles established that the role of Big Data is unique and important: it helps healthcare providers improve patient safety and quality by enabling smooth health information storage and exchange with high privacy and security.
Conclusion: Big Data in healthcare is a new concept in healthcare data analytics and management, focused on improving drug and disease discovery, personal health records, electronic health records, and effective diagnostic and treatment decisions by healthcare practitioners, and it ultimately helps achieve desired, positive health outcomes. Data are one of the crucial factors in healthcare, and it is high time for healthcare providers to address these matters seriously.


2022 ◽  
Vol 7 (1) ◽  
Author(s):  
Kelly Gerakoudi-Ventouri

Abstract Decision-making is a prolific research area in the internet era, which has propelled globalization and the virtual elimination of many country border barriers. However, effective decision-making in the shipping industry is a time-consuming and often complicated process. Digital evolution has provided new, innovative methods of organizational operation. Blockchain technology, a basic component of the Fourth Industrial Revolution, is one such innovation that promises to alter the process of decision-making. However, only a few academic studies have explored the decision-making aspect of blockchain technology, and there is a dearth of comprehensive research on how blockchain affects decisions in the shipping industry. This study explored how this novel technology can address issues such as vast documentation and information asymmetry in the shipping industry. Specifically, grounded theory was used to qualitatively investigate extant practices and examine the potential impact of blockchain technology on decision-making in the shipping industry, and its potential to emancipate decision-making. The study results indicate that the instant and reliable data-sharing capability of blockchain can significantly impact the shipping industry, while transforming its decision-making processes.

