Doubly sparsifying network


Abstract

We propose the doubly sparsifying network (DSN), drawing inspiration from the double sparsity model for dictionary learning. DSN emphasizes the joint utilization of the problem structure and the parameter structure: it simultaneously sparsifies the output features and the learned model parameters under one unified framework, which yields an intuitive model interpretation, a compact model size, and low complexity. We compare DSN against several carefully designed baselines and verify its consistently superior performance across a wide range of settings. Encouraged by its robustness to insufficient training data, we explore the applicability of DSN to brain signal processing, a challenging interdisciplinary area. DSN is evaluated on two mainstream tasks, electroencephalographic (EEG) signal classification and blood oxygenation level dependent (BOLD) response prediction, and achieves promising results in both cases.
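For intuition, here is a minimal NumPy sketch of the core idea of sparsifying both the features and the parameters in one layer. This is an assumption-laden illustration, not the paper's exact formulation: the class name, the soft-thresholding nonlinearity, the threshold values, and the factorization of the effective weights into a fixed base dictionary times a sparse coefficient matrix (in the spirit of the double sparsity model the abstract refers to) are all illustrative choices.

```python
import numpy as np

def soft_threshold(x, theta):
    # Element-wise soft thresholding: shrinks small entries to exactly zero,
    # which is what induces sparsity in whatever it is applied to.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

class DoublySparseLayer:
    """Hypothetical layer sparsifying both its output and its own weights.

    The effective weight matrix W = B @ A_sparse combines a fixed base
    dictionary B with a coefficient matrix A kept sparse by soft
    thresholding; the output features are sparsified by a second
    soft threshold acting as the layer's nonlinearity.
    """

    def __init__(self, in_dim, out_dim, atoms=32,
                 w_thresh=0.05, f_thresh=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.B = rng.standard_normal((out_dim, atoms)) / np.sqrt(atoms)  # fixed base dictionary
        self.A = rng.standard_normal((atoms, in_dim)) / np.sqrt(in_dim)  # trainable coefficients
        self.w_thresh = w_thresh  # controls parameter (weight) sparsity
        self.f_thresh = f_thresh  # controls output (feature) sparsity

    def forward(self, x):
        A_sparse = soft_threshold(self.A, self.w_thresh)  # sparsify the parameters
        z = self.B @ A_sparse @ x                         # apply effective weights W = B @ A_sparse
        return soft_threshold(z, self.f_thresh)           # sparsify the output features

# Usage: both the coefficient matrix and the features end up mostly zero.
layer = DoublySparseLayer(in_dim=64, out_dim=128)
x = np.random.default_rng(1).standard_normal(64)
y = layer.forward(x)
print("feature sparsity:", np.mean(y == 0.0))
print("weight sparsity:", np.mean(soft_threshold(layer.A, layer.w_thresh) == 0.0))
```

Note the design point this sketch tries to capture: sparsity is imposed in two places at once, on the activations (problem structure) and on the parameters (parameter structure), rather than on either one alone.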

Citation (APA)

Wang, Z., Huang, S., Zhou, J., & Huang, T. S. (2017). Doubly sparsifying network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2017) (pp. 3020–3026). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/421
