Retrieving Skills from Job Descriptions: A Language Model Based Extreme Multi-label Classification Framework

31 citations · 99 Mendeley readers

Abstract

We introduce a deep learning model that learns the set of enumerated job skills associated with a job description. In our analysis of the large-scale government job portal mycareersfuture.sg, we observe that as many as 65% of job descriptions omit a significant number of relevant skills. Our model casts this task as an extreme multi-label classification (XMLC) problem, where the description serves as evidence for the binary relevance of thousands of individual skills. Building upon state-of-the-art language modeling approaches such as BERT, we show that our XMLC method improves over an existing baseline by more than 9% and 7% absolute in recall and normalized discounted cumulative gain (nDCG), respectively. We further show that our approach effectively addresses the missing-skills problem and helps recover relevant skills omitted from job postings by taking into account the structured semantic representation of skills and their co-occurrences through a Correlation Aware Bootstrapping process. To facilitate future research and replication of our work, we have made the dataset and the implementation of our model publicly available.
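The abstract frames skill extraction as extreme multi-label classification over a BERT encoding of the job description, with each skill in a fixed vocabulary scored independently for binary relevance. Below is a minimal sketch of that general setup in PyTorch, not the authors' released implementation; the encoder name, skill-vocabulary size, and decision threshold are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_SKILLS = 3000  # assumed skill-vocabulary size, for illustration only

class SkillXMLC(nn.Module):
    """BERT encoder with a sigmoid (binary-relevance) head over all skills."""
    def __init__(self, encoder_name="bert-base-uncased", num_skills=NUM_SKILLS):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_skills)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] representation of the description
        return self.classifier(cls)         # one logit per skill

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SkillXMLC()
batch = tokenizer(["Seeking a data engineer familiar with Spark, SQL and Airflow."],
                  truncation=True, padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.sigmoid(logits)               # independent relevance score per skill
predicted = (probs > 0.5).nonzero()         # assumed threshold; ranking by score is also common

# Training would minimize a multi-label binary cross-entropy objective:
loss_fn = nn.BCEWithLogitsLoss()

The abstract also mentions a Correlation Aware Bootstrapping step that uses skill co-occurrences to recover skills omitted from postings. The sketch below illustrates only the co-occurrence intuition, with an assumed normalization and threshold rather than the paper's exact procedure.

import numpy as np

def cooccurrence_matrix(label_sets, num_skills):
    """Count how often skill pairs are annotated on the same posting,
    then row-normalize to an empirical P(skill j | skill i present)."""
    C = np.zeros((num_skills, num_skills))
    for skills in label_sets:
        for i in skills:
            for j in skills:
                if i != j:
                    C[i, j] += 1
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1)

def bootstrap_labels(skills, cooc, threshold=0.3):
    """Add skills strongly correlated with any annotated skill (assumed threshold)."""
    augmented = set(skills)
    for i in skills:
        augmented |= set(np.where(cooc[i] > threshold)[0].tolist())
    return augmented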

Citation (APA)

Bhola, A., Halder, K., Prasad, A., & Kan, M. Y. (2020). Retrieving Skills from Job Descriptions: A Language Model Based Extreme Multi-label Classification Framework. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5832–5842). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.513
