Deep neural networks classification via binary error‐detecting output codes


Abstract

One‐hot encoding is the prevalent method used in neural networks to represent multiclass categorical data. Its success stems from its ease of use and its interpretability as a probability distribution when paired with a softmax activation function. However, one‐hot encoding leads to very high‐dimensional vector representations when the cardinality of the categorical data is high. From the coding theory perspective, the minimum Hamming distance between one‐hot codewords equals two, which provides at most single‐error detection and no error‐correction capability. Binary coding offers far more possibilities for encoding categorical data into output codes, mitigating the limitations of one‐hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their ability to separate class information from the checksum part of the codeword, showing that they can not only detect recognition errors by computing a non‐zero syndrome, but also evaluate the truth value of the decision. Experimental results show that the proposed approach achieves results similar to one‐hot encoding with a softmax function in terms of accuracy, reliability, and out‐of‐distribution performance, suggesting a good foundation for future applications, mainly classification tasks with a large number of classes.
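The syndrome check mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it uses a systematic (7,4) linear block code, where the first 4 bits carry the class information and the last 3 bits are the checksum, and flags a recognition error whenever the syndrome is non-zero.

```python
import numpy as np

# Systematic (7,4) linear block code: generator G = [I | P],
# parity-check matrix H = [P^T | I]. The parity submatrix P below
# is an illustrative choice, not taken from the paper.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(message_bits):
    """Map 4 class-information bits to a 7-bit codeword (mod-2 arithmetic)."""
    return (message_bits @ G) % 2

def syndrome(word):
    """Zero syndrome => valid codeword; any non-zero entry => error detected."""
    return (H @ word) % 2

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
print(syndrome(cw))            # all zeros: valid codeword

corrupted = cw.copy()
corrupted[2] ^= 1              # flip one bit of the network's binarized output
print(syndrome(corrupted))     # non-zero: recognition error detected
```

Because every column of H is non-zero, any single-bit error in the binarized network output produces a non-zero syndrome, which is exactly the detection capability the abstract attributes to separating class bits from checksum bits.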

Citation (APA)

Klimo, M., Lukáč, P., & Tarábek, P. (2021). Deep neural networks classification via binary error‐detecting output codes. Applied Sciences (Switzerland), 11(8). https://doi.org/10.3390/app11083563
