Weakly Supervised Context-based Interview Question Generation


Abstract

We explore the task of automatically generating technical interview questions from a given textbook. Such questions differ from the reading-comprehension questions studied in the question generation literature. We curate a context-based interview question dataset for Machine Learning and Deep Learning from two popular textbooks. We first explore using a large generative language model (GPT-3) for this task in a zero-shot setting. We then evaluate smaller generative models, such as BART, fine-tuned on weakly supervised data obtained using GPT-3 and hand-crafted templates. We also deploy an automatic question importance assignment technique to assess a question's suitability for a technical interview; this improves evaluation results along several dimensions. Finally, we dissect the performance of these models on this task and scrutinize the suitability of the questions they generate for use in technical interviews.
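To make the weak-supervision idea concrete, here is a minimal illustrative sketch (not the authors' code) of how hand-crafted templates can turn concepts mined from textbook text into interview-style question-context pairs. The template wordings and the concept input are assumptions for illustration only; the paper's actual templates and pipeline are not reproduced here.

```python
# Illustrative sketch of template-based weak supervision for interview
# question generation. Template wordings are assumptions, not the paper's.

TEMPLATES = [
    "What is {concept}?",
    "Can you explain how {concept} works?",
    "What are the advantages and limitations of {concept}?",
]


def weak_questions(concept: str) -> list[str]:
    """Instantiate each hand-crafted template with a textbook concept."""
    return [t.format(concept=concept) for t in TEMPLATES]


if __name__ == "__main__":
    # Pairing these questions with the textbook passage that mentions the
    # concept yields weakly supervised (context, question) training data.
    for question in weak_questions("gradient descent"):
        print(question)
```

Such template-generated pairs, together with GPT-3 outputs, could then serve as fine-tuning data for a smaller model like BART.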

Citation (APA)

Pal, S., Khan, K., Singh, A. K., Ghosh, S., Nayak, T., Palshikar, G., & Bhattacharya, I. (2022). Weakly Supervised Context-based Interview Question Generation. In GEM 2022 - 2nd Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings of the Workshop (pp. 43–53). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gem-1.4
