Multi-Head Self-Attention-Based Deep Clustering for Single-Channel Speech Separation

20 citations · 23 Mendeley readers

This article is free to access.

Abstract

Attending to a particular speaker when many people talk simultaneously is known as the cocktail party problem. It remains a difficult task, especially for single-channel speech separation. Inspired by the physiological phenomenon that humans tend to pick out attractive sounds from mixed signals, we propose the multi-head self-attention deep clustering network (ADCNet) for this problem. We combine the widely used deep clustering network with a multi-head self-attention mechanism and investigate how the number of attention heads affects separation performance. We also adopt the density-based canopy K-means algorithm to further improve results. We trained and evaluated our system on two- and three-talker mixtures from the Wall Street Journal (WSJ0) dataset. Experimental results show that the new approach outperforms many advanced models.
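To illustrate the core building block the abstract refers to, here is a minimal NumPy sketch of scaled dot-product multi-head self-attention applied to a sequence of spectrogram frames. It is not the authors' exact ADCNet (the paper's projection weights, embedding dimensions, and clustering stage are omitted), only the generic mechanism whose head count the paper studies:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, W_q, W_k, W_v, n_heads):
    """Scaled dot-product self-attention split across n_heads.

    X: (T, d_model) sequence, e.g. T spectrogram frames.
    W_q, W_k, W_v: (d_model, d_model) projection matrices (hypothetical names).
    Returns the concatenated per-head outputs, shape (T, d_model).
    """
    T, d_model = X.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    outputs = []
    for h in range(n_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        # (T, T) attention weights for this head, scaled by sqrt(d_head).
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        outputs.append(softmax(scores) @ V[:, s])
    return np.concatenate(outputs, axis=-1)

rng = np.random.default_rng(0)
T, d_model, n_heads = 5, 8, 4
X = rng.standard_normal((T, d_model))
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = multi_head_self_attention(X, W_q, W_k, W_v, n_heads)
print(out.shape)  # → (5, 8)
```

In a deep-clustering pipeline, outputs like these would feed an embedding layer whose per-bin embeddings are then grouped by K-means (here, the paper's density-based canopy variant) to assign time-frequency bins to speakers.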

Citation (APA)

Jin, Y., Tang, C., Liu, Q., & Wang, Y. (2020). Multi-Head Self-Attention-Based Deep Clustering for Single-Channel Speech Separation. IEEE Access, 8, 100013–100021. https://doi.org/10.1109/ACCESS.2020.2997871
