job applicant
Recently Published Documents

TOTAL DOCUMENTS: 137 (five years: 34)
H-INDEX: 20 (five years: 2)

2021
Author(s): Jeromy Anglim, Karlyn Molloy, Patrick Damien Dunlop, Simon Albrecht, Filip Lievens, ...

Some scholars suggest that organizations could improve their hiring decisions by measuring the personal values of job applicants, arguing that values provide insights into applicants’ cultural fit, retention prospects, and performance outcomes. However, others have expressed concerns about response distortion and faking. The current study provides the first large-scale investigation of the effect of the job applicant context on the psychometric structure and scale means of a self-reported values measure. Participants comprised 7,884 job applicants (41% male; age M = 43.32, SD = 10.76) and a country-, age-, and gender-matched comparison sample of 1,806 non-applicants (41% male; age M = 44.72, SD = 10.97), along with a small repeated-measures, cross-context sample. Respondents completed the 57-item Portrait Values Questionnaire (PVQ) measuring Schwartz’s universal personal values. Compared to matched non-applicants, applicants reported valuing power and self-direction considerably less, and conformity and universalism considerably more. Applicants also reported valuing security, tradition, and benevolence more than non-applicants, and reported valuing stimulation, hedonism, and achievement less than non-applicants. Despite applicants appearing to embellish the degree to which their values aligned with being responsible and considerate workers, invariance testing suggested that the underlying structure of values assessment is largely preserved in job applicant contexts.
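Group differences like those reported above are conventionally expressed as standardized mean differences (Cohen's d with a pooled standard deviation). A minimal sketch; the scale means and SDs below are made-up illustrations, not figures from the study:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical means for one value (e.g. power), with the study's sample sizes
d = cohens_d(mean1=2.1, sd1=0.9, n1=7884,   # applicants
             mean2=2.8, sd2=1.0, n2=1806)   # matched non-applicants
```

A negative d here would indicate applicants scoring lower than matched non-applicants on that value, matching the direction of effect the abstract describes for power and self-direction.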


F1000Research
2021
Vol 10, pp. 1206
Author(s): Masyitah Mahadi

Background: Applicant reactions are defined as the extent to which job applicants regard the selection process as fair and unbiased. Structured interview questions can be future-oriented (Situational Interview; SI) or past-oriented (Patterned Behaviour Description Interview; PBDI). Past research on using the SI or PBDI in selection and their effects on applicant reactions found that reactions are most positive toward the PBDI. Methods: The aim of this study was to investigate the effect of combining PBDI and SI questions (mixed SPBDI) and to compare it with the PBDI alone. The study involved 46 lecturers from the International Islamic University Malaysia (IIUM) and used (a) mixed SPBDI and PBDI interview questions, and (b) an Applicant Reaction Questionnaire based on Organizational Justice theory. The interviews were administered in transcript form; after responding to the interview transcripts, participants completed the applicant reaction questionnaires. The data were then analysed and presented. Results: The results showed a significant difference between the mixed SPBDI and the PBDI, with the PBDI mean (M = 13.61, SD = 1.57) significantly higher than the mixed SPBDI mean (M = 10.89, SD = 1.91), t(46) = 7.22, p < .01. That is, applicants reacted more positively to PBDI interview content than to the mixed SPBDI. Conclusion: This research had a few limitations, such as the interviews being administered as transcripts rather than verbally, as in a real workplace context. It is also limited to studying reactions in terms of perceived fairness only, excluding elements such as organizational effectiveness or applicants' decision making. Nevertheless, this study contributes to theoretical and research development on applicant reactions, and to practical application for organizations in Malaysia.
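Since each participant rated both interview formats, the reported comparison corresponds to a within-subjects t-test. A minimal sketch of the paired-samples t statistic, using tiny invented ratings rather than the study's data:

```python
import math

def paired_t(x, y):
    """Paired-samples t statistic for two equal-length score lists."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Illustrative reaction scores only (not the study's data)
pbdi  = [14, 13, 15, 12, 14, 13]   # ratings of PBDI content
spbdi = [11, 10, 12, 11, 10, 11]   # ratings of mixed SPBDI content
t_stat = paired_t(pbdi, spbdi)
```

A large positive t, as in the study's t(46) = 7.22, indicates the PBDI scores were consistently higher across respondents.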


2021
Vol 11 (1)
Author(s): Felix G. Rebitschek, Gerd Gigerenzer, Gert G. Wagner

This study provides the first representative analysis of error estimations and willingness to accept errors in a Western country (Germany) with regard to algorithmic decision-making (ADM) systems. We examine people’s expectations about the accuracy of algorithms that predict credit default, recidivism of an offender, suitability of a job applicant, and health behavior. We also ask whether expectations about algorithm errors vary between these domains and how they differ from expectations about errors made by human experts. In a nationwide representative study (N = 3086), we find that most respondents underestimated the actual errors made by algorithms and are willing to accept even fewer errors than estimated. Error estimates and error acceptance did not differ consistently for predictions made by algorithms or human experts, but people’s living conditions (e.g. unemployment, household income) affected domain-specific acceptance (job suitability, credit defaulting) of misses and false alarms. We conclude that people have unwarranted expectations about the performance of ADM systems and evaluate errors in terms of potential personal consequences. Given the general public’s low willingness to accept errors, we further conclude that acceptance of ADM appears to be conditional on strict accuracy requirements.


Author(s):  
Valanarasu R

The use of social media, and the digital footprints it leaves, has recently increased around the world. Social media serve as platforms where people communicate their sentiments, emotions, and expectations, and the resulting data are publicly viewable and accessible. The personality of a social media user is predicted from their posts and status updates in order to deliver better accuracy. From this perspective, the proposed research article presents novel machine learning methods for predicting human personality from social media digital footprints. The proposed model may be applied to review any job applicant during COVID-19-era online enrolment at an organisation. Previous personality prediction methods failed because recruiters hold differing perspectives on job applicants. The proposed hybrid machine learning approach also modernises this estimation and reduces prediction time. An artificial-intelligence-based calculation is used to predict the personality of job applicants or any other person. The proposed algorithm is organised around dynamic multi-context information and incorporates account information from multiple platforms such as Facebook, Twitter, and YouTube. Collecting varied datasets from different social media sites increases the prediction rate of the machine learning algorithm, so the accuracy of personality prediction is higher than that of existing methods. Even though a person's reasoning varies from season to season, the proposed algorithm consistently outperforms existing and traditional approaches in predicting a person's mentality.
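The abstract does not specify the prediction pipeline, but personality prediction from posts is often illustrated with lexicon-based scoring before any learned model. A toy sketch only; the trait lexicon and scoring rule below are purely illustrative assumptions, not the paper's method:

```python
from collections import Counter

# Toy lexicon mapping words to traits; real systems learn such weights
# from labelled data. This mapping is invented for illustration.
TRAIT_WORDS = {
    "openness":      {"curious", "art", "novel", "imagine"},
    "extraversion":  {"party", "friends", "talk", "exciting"},
    "agreeableness": {"thanks", "help", "kind", "together"},
}

def score_post(text):
    """Count lexicon hits per trait in one social-media post."""
    tokens = Counter(text.lower().split())
    return {trait: sum(tokens[w] for w in words)
            for trait, words in TRAIT_WORDS.items()}

scores = score_post("So curious about this novel art exhibit with friends")
predicted = max(scores, key=scores.get)  # highest-scoring trait wins
```

Aggregating such per-post scores across a user's Facebook, Twitter, and YouTube activity is one simple way to combine the multi-platform account information the abstract describes.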


2021
pp. 109442812110029
Author(s): Tianjun Sun, Bo Zhang, Mengyang Cao, Fritz Drasgow

With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
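The percentage-cutoff idea in the final sentence can be sketched directly: compute each respondent's rate of endpoint (extreme) responses and flag those above a threshold. The cutoff value and Likert data below are illustrative assumptions, not the study's parameters:

```python
def extreme_rate(responses, low=1, high=5):
    """Fraction of Likert responses at either endpoint of the scale."""
    hits = sum(1 for r in responses if r in (low, high))
    return hits / len(responses)

def flag_fakers(respondents, cutoff=0.6):
    """Flag respondents whose endpoint-use rate exceeds the cutoff."""
    return {rid: extreme_rate(resp) > cutoff
            for rid, resp in respondents.items()}

# Illustrative 1-5 Likert responses for three hypothetical respondents
sample = {
    "honest":   [3, 2, 4, 3, 2, 4, 3, 3],
    "faker":    [5, 5, 1, 5, 5, 5, 1, 5],
    "moderate": [4, 5, 3, 2, 4, 3, 2, 4],
}
flags = flag_fakers(sample)
```

The study itself derives the extreme-responding signal from an item response theory tree (three-process) model rather than raw counts; this sketch only conveys the cutoff-based classification step.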

