Adapting deep network features to capture psychological representations: An abridged report


Abstract

Deep neural networks have become increasingly successful at solving classic perception problems (e.g., recognizing objects), often reaching or surpassing human-level accuracy. In this abridged report of Peterson et al. [2016], we examine the relationship between the image representations learned by these networks and those of humans. We find that deep features learned in service of object classification account for a significant amount of the variance in human similarity judgments for a set of animal images. However, these features do not appear to capture some key qualitative aspects of human representations. To close this gap, we present a method for adapting deep features to align with human similarity judgments, resulting in image representations that can potentially be used to extend the scope of psychological experiments and inform human-centric AI.
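The adaptation method the abstract alludes to can be sketched as learning a reweighting of deep feature dimensions so that a weighted inner product of two images' feature vectors approximates human similarity ratings. The sketch below is a minimal illustration of that idea, not the authors' exact pipeline: the feature matrix and "true" weights are synthetic stand-ins (in practice the features would come from a CNN's penultimate layer, and the targets from human judgments), and ridge regression on elementwise feature products is one plausible way to fit the per-dimension weights.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: deep features for n images and a simulated human
# similarity matrix generated from hypothetical "psychological" weights.
n, d = 20, 50
F = rng.normal(size=(n, d))           # image features (would be CNN activations)
w_true = rng.uniform(0, 1, size=d)    # hypothetical ground-truth weights
S = (F * w_true) @ F.T                # simulated pairwise similarity judgments

# Each image pair (i, j) becomes one regression example: the elementwise
# product of the two feature vectors predicts the rated similarity s_ij.
i, j = np.triu_indices(n, k=1)
X = F[i] * F[j]
y = S[i, j]

model = Ridge(alpha=1.0, fit_intercept=False)
model.fit(X, y)
w = model.coef_                       # learned per-dimension weights

# Adapted similarity: weighted inner product of the deep features.
S_hat = (F * w) @ F.T
```

With held-out pairs, the learned weights can then be evaluated by how well the adapted similarities correlate with human judgments the model never saw.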

Citation (APA)

Peterson, J. C., Abbott, J. T., & Griffiths, T. L. (2017). Adapting deep network features to capture psychological representations: An abridged report. In IJCAI International Joint Conference on Artificial Intelligence (pp. 4934–4938). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/697
