Anytime Inference with Distilled Hierarchical Neural Ensembles


Abstract

Inference in deep neural networks can be computationally expensive, and networks capable of anytime inference are important in scenarios where the amount of compute or input data varies over time. In such networks the inference process can be interrupted to provide a result faster, or continued to obtain a more accurate result. We propose Hierarchical Neural Ensembles (HNE), a novel framework to embed an ensemble of multiple networks in a hierarchical tree structure, sharing intermediate layers. In HNE we control the complexity of inference on-the-fly by evaluating more or fewer models in the ensemble. Our second contribution is a novel hierarchical distillation method to boost the predictions of small ensembles. This approach leverages the nested structure of our ensembles to optimally allocate accuracy and diversity across the individual models. Our experiments show that, compared to previous anytime inference models, HNE provides state-of-the-art accuracy-computation trade-offs on the CIFAR-10/100 and ImageNet datasets.
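The idea of a tree-structured ensemble with shared intermediate layers and anytime evaluation can be illustrated with a minimal sketch. This is not the authors' implementation: the tree depth, block sizes, and the NumPy ReLU blocks below are hypothetical stand-ins for real network layers. The key points it shows are that each root-to-leaf path defines one ensemble member, ancestor blocks are computed once and reused across members, and evaluating the first k leaves yields a cheaper (k small) or more accurate (k large) averaged prediction.

```python
# Illustrative sketch of hierarchical-ensemble anytime inference,
# assuming a binary tree of simple ReLU blocks (hypothetical sizes).
import numpy as np

rng = np.random.default_rng(0)
DIM, DEPTH = 8, 3  # feature size and tree depth (assumed for the sketch)

# One weight matrix per tree node, heap-indexed: node i has children
# 2i+1 and 2i+2; the 2**DEPTH leaves sit at the last level.
n_nodes = 2 ** (DEPTH + 1) - 1
weights = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM)
           for _ in range(n_nodes)]

def block(w, x):
    # A shared computation block at one tree node (here: linear + ReLU).
    return np.maximum(w @ x, 0.0)

def leaf_prediction(leaf, x, cache):
    """Evaluate one root-to-leaf path; ancestor outputs are cached so
    they are shared across all leaves that descend from them."""
    node = leaf + 2 ** DEPTH - 1  # heap index of this leaf
    path = []
    while True:               # collect leaf -> root
        path.append(node)
        if node == 0:
            break
        node = (node - 1) // 2
    h = x
    for n in reversed(path):  # evaluate root -> leaf
        if n not in cache:
            cache[n] = block(weights[n], h)
        h = cache[n]
    return h

def anytime_predict(x, k):
    """Average the first k leaf predictors. Larger k costs more compute
    and is expected to be more accurate; interrupting early just means
    averaging fewer leaves."""
    cache = {}
    preds = [leaf_prediction(leaf, x, cache) for leaf in range(k)]
    return np.mean(preds, axis=0)
```

For example, `anytime_predict(x, 1)` runs a single root-to-leaf path, while `anytime_predict(x, 2 ** DEPTH)` evaluates the full ensemble; thanks to the cache, the full evaluation touches each of the `n_nodes` blocks exactly once rather than once per leaf.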

Citation (APA)

Ruiz, A., & Verbeek, J. (2021). Anytime Inference with Distilled Hierarchical Neural Ensembles. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11A, pp. 9463–9471). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17140
