Transparent classification with multilayer logical perceptrons and random binarization


Abstract

Models with a transparent inner structure and high classification performance are required to reduce potential risk and provide trust for users in domains like health care, finance, and security. However, existing models struggle to satisfy both properties simultaneously. In this paper, we propose a new hierarchical rule-based model for classification tasks, named Concept Rule Sets (CRS), which has both strong expressive ability and a transparent inner structure. To address the challenge of efficiently learning the non-differentiable CRS model, we propose a novel neural network architecture, the Multilayer Logical Perceptron (MLLP), which is a continuous version of CRS. Using the MLLP and our proposed Random Binarization (RB) method, we can search for a discrete CRS solution in continuous space using gradient descent while ensuring that the discrete CRS behaves almost identically to the corresponding continuous MLLP. Experiments on 12 public data sets show that CRS outperforms state-of-the-art approaches and that the complexity of the learned CRS is close to that of a simple decision tree.
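The abstract's core idea can be illustrated with a minimal sketch: a continuous logical layer whose conjunction/disjunction activations approximate Boolean rules, plus a Random Binarization step that snaps a random subset of weights to {0, 1} during training so the continuous network stays close to the discrete rule set it will ultimately be converted into. The exact activation functions and binarization schedule here are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conjunction(x, w):
    # Continuous AND: the output is high only when every input
    # selected by w (weights near 1) is high. With binary x and w
    # this reduces to an exact logical AND over the selected inputs.
    # (Illustrative form; the paper's activation may differ.)
    return np.prod(1.0 - w * (1.0 - x), axis=-1)

def disjunction(x, w):
    # Continuous OR: the dual of the conjunction above.
    return 1.0 - np.prod(1.0 - w * x, axis=-1)

def random_binarize(w, p=0.5):
    # Random Binarization sketch: with probability p, snap each
    # weight to 0 or 1 (threshold 0.5); otherwise keep it continuous.
    # Training the MLLP under this perturbation keeps the continuous
    # model's behavior close to that of the discrete CRS extracted
    # by thresholding all weights.
    mask = rng.random(w.shape) < p
    return np.where(mask, (w > 0.5).astype(float), w)
```

With binary inputs and fully binarized weights, the two activations reproduce exact Boolean AND/OR, which is what makes the final conversion from MLLP to a discrete CRS lossless in the limit.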

Citation (APA)

Wang, Z., Zhang, W., Liu, N., & Wang, J. (2020). Transparent classification with multilayer logical perceptrons and random binarization. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6331–6339). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6102
