Doubly sparsifying network

3 citations · 11 Mendeley readers

Abstract

We propose the doubly sparsifying network (DSN), drawing inspiration from the double sparsity model for dictionary learning. DSN emphasizes the joint utilization of both the problem structure and the parameter structure: it simultaneously sparsifies the output features and the learned model parameters under one unified framework. DSN enjoys intuitive model interpretation, compact model size, and low complexity. We compare DSN against several carefully designed baselines and verify its consistently superior performance across a wide range of settings. Encouraged by its robustness to insufficient training data, we explore the applicability of DSN to brain signal processing, a challenging interdisciplinary area. DSN is evaluated on two mainstream tasks, electroencephalographic (EEG) signal classification and blood oxygenation level dependent (BOLD) response prediction, and achieves promising results in both cases.
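The abstract describes two sparsification targets applied at once: the layer's output features and its learned parameters. A minimal NumPy sketch of that idea (not the authors' actual architecture; the layer, function names, and threshold values below are hypothetical illustrations) applies the soft-thresholding operator to both the weight matrix and the resulting features:

```python
import numpy as np

def soft_threshold(x, theta):
    # Soft-thresholding (proximal operator of the L1 norm):
    # shrinks entries toward zero and zeros out those below theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def doubly_sparse_layer(x, W, theta_param=0.05, theta_feat=0.1):
    # Hypothetical single layer illustrating "double sparsity":
    # 1) sparsify the parameters by soft-thresholding W,
    # 2) compute the linear features,
    # 3) sparsify the features with a second soft-threshold.
    W_sparse = soft_threshold(W, theta_param)
    features = soft_threshold(x @ W_sparse, theta_feat)
    return features, W_sparse

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # a small batch of inputs
W = rng.standard_normal((8, 16)) * 0.1  # dense initial weights
features, W_sparse = doubly_sparse_layer(x, W)
# Both the weights and the features now contain exact zeros.
print((W_sparse == 0).mean(), (features == 0).mean())
```

The compact model size and low complexity claimed in the abstract follow naturally from this construction: zeroed weights need not be stored or multiplied, and zeroed features propagate no computation to the next layer.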


Citation (APA)

Wang, Z., Huang, S., Zhou, J., & Huang, T. S. (2017). Doubly sparsifying network. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 0, pp. 3020–3026). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/421


Readers' Seniority

PhD / Post grad / Masters / Doc: 6 (86%)
Professor / Associate Prof.: 1 (14%)

Readers' Discipline

Computer Science: 5 (63%)
Agricultural and Biological Sciences: 1 (13%)
Physics and Astronomy: 1 (13%)
Business, Management and Accounting: 1 (13%)
