Extracting provably correct rules from artificial neural networks

  • Thrun S

Abstract

Although connectionist learning procedures have been applied successfully to a variety of real-world scenarios, artificial neural networks have often been criticized for exhibiting a low degree of comprehensibility. Mechanisms that automatically compile neural networks into symbolic rules offer a promising way to overcome this practical shortcoming of neural network representations. This paper describes an approach to neural network rule extraction based on Validity Interval Analysis (VI-Analysis). VI-Analysis is a generic tool for extracting symbolic knowledge from Backpropagation-style artificial neural networks. It does this by propagating whole intervals of activations through the network in both the forward and backward directions. In the context of rule extraction, these intervals are used to prove or disprove the correctness of conjectured rules. We describe techniques for generating and testing rule hypotheses, and demonstrate them on some simple classification tasks, including the MONK's benchmark problems. Rules extracted by VI-Analysis are provably correct. No assumptions are made about the topology of the network at hand or about the procedure employed for training it.
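As a rough illustration of the forward half of this idea, the sketch below propagates an axis-aligned input interval through a small sigmoid network and checks whether a conjectured rule's output bound holds for every input in that interval. The weights, the rule, and all function names here are illustrative assumptions, not taken from the paper; the full VI-Analysis described in the abstract additionally refines intervals (e.g., via linear programming) and propagates them backward as well.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def propagate_interval(lower, upper, weights, biases):
    """Propagate an input box [lower, upper] forward through a feedforward
    sigmoid network, returning conservative bounds on each output unit.

    For a linear layer, the minimum of W @ x + b over the box pairs positive
    weights with lower bounds and negative weights with upper bounds (and
    vice versa for the maximum); the monotone sigmoid preserves the ordering.
    """
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        lo, hi = sigmoid(new_lo), sigmoid(new_hi)
    return lo, hi

# Hypothetical 2-2-1 network; the weights and the conjectured rule are made up.
weights = [np.array([[ 4.0, -4.0],
                     [-4.0,  4.0]]),
           np.array([[ 5.0, -5.0]])]
biases  = [np.array([-2.0, -2.0]), np.array([0.0])]

# Conjectured rule: IF x1 in [0.9, 1.0] AND x2 in [0.0, 0.1] THEN output > 0.9.
out_lo, out_hi = propagate_interval([0.9, 0.0], [1.0, 0.1], weights, biases)
if out_lo[0] > 0.9:
    print("rule proven; output lower bound =", out_lo[0])
else:
    print("rule not proven; output bounds =", out_lo[0], out_hi[0])
```

Because the interval bounds are conservative, a rule confirmed this way holds for every input in the stated ranges, which is the sense in which extracted rules are provably correct; failing to confirm a rule does not disprove it, since the bounds may simply be too loose.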

Cite

APA: Thrun, S. B. (1993). Extracting provably correct rules from artificial neural networks. Retrieved from https://pdfs.semanticscholar.org/e0fb/fb6243bd4ca84f906413a656a4090782c8a5.pdf
