Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search


Abstract

Performance estimation of neural architectures is a crucial component of neural architecture search (NAS), and the neural predictor is currently a mainstream estimation method. However, training a predictor from only a few architecture evaluations, as efficient NAS requires, is challenging. In this paper, we propose a graph masked autoencoder (GMAE) enhanced predictor, which reduces the dependence on supervision data through self-supervised pre-training on untrained architectures. We compare our GMAE-enhanced predictor with existing predictors in different search spaces, and experimental results show that our predictor achieves high query utilization. Moreover, the GMAE-enhanced predictor, combined with different search strategies, discovers competitive architectures in different search spaces. Code and supplementary materials are available at https://github.com/kunjing96/GMAENAS.git.
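To make the pre-training idea concrete, the following is a minimal, hypothetical sketch of masked-autoencoder pre-training on architecture graphs. It is not the paper's implementation: the cell encoding (DAG adjacency plus one-hot operation labels), the single GCN-style propagation layer, the mask rate, and all names are illustrative assumptions. The self-supervised objective is the same in spirit: hide the operations of randomly masked nodes and train the model to reconstruct them from graph context, requiring no accuracy labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cell(n_nodes=7, n_ops=5):
    # Toy NAS cell: upper-triangular DAG adjacency and one-hot op labels.
    # (Assumed encoding; the actual search-space encoding may differ.)
    A = np.triu(rng.integers(0, 2, (n_nodes, n_nodes)), k=1).astype(float)
    ops = rng.integers(0, n_ops, n_nodes)
    X = np.eye(n_ops)[ops]
    return A, X

def normalized_adj(A):
    # Symmetric propagation matrix with self-loops, GCN-style.
    A_hat = A + A.T + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def pretrain(cells, n_ops=5, mask_rate=0.3, lr=0.5, epochs=200):
    # One shared weight matrix stands in for the encoder-decoder pair.
    W = rng.normal(0.0, 0.1, (n_ops, n_ops))
    for _ in range(epochs):
        grad = np.zeros_like(W)
        for A, X in cells:
            mask = rng.random(len(X)) < mask_rate
            if not mask.any():
                continue
            Xm = X.copy()
            Xm[mask] = 0.0                 # hide masked nodes' operations
            H = normalized_adj(A) @ Xm     # message passing over the cell
            logits = H @ W
            p = np.exp(logits - logits.max(1, keepdims=True))
            p /= p.sum(1, keepdims=True)
            # Softmax cross-entropy gradient, masked nodes only.
            grad += H.T @ ((p - X) * mask[:, None]) / mask.sum()
        W -= lr * grad / len(cells)
    return W

cells = [random_cell() for _ in range(32)]
W = pretrain(cells)
```

In the paper's setting, the pre-trained encoder would then be fine-tuned as a performance predictor on the few labeled architectures available.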

Citation (APA)

Jing, K., Xu, J., & Li, P. (2022). Graph Masked Autoencoder Enhanced Predictor for Neural Architecture Search. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3114–3120). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2022/432
