Taming large rule models in rough set approaches


Abstract

In knowledge discovery from uncertain data we usually wish to obtain models that have good predictive properties when applied to unseen objects. In several applications, it is also desirable to synthesize models that in addition have good descriptive properties. The ultimate goal, therefore, is to maximize both properties, i.e. to obtain models that are amenable to human inspection and that have high predictive performance. Models consisting of decision or classification rules, such as those produced with rough sets [19], can exhibit both properties. In practice, however, the induced models are often too large to be inspected. This paper reports on two basic approaches to obtaining manageable rule-based models that do not sacrifice their predictive qualities: a priori and a posteriori pruning. The methods are discussed in the context of rough sets, but several of the results are applicable to rule-based models in general. Algorithms realizing these approaches have been implemented in the Rosetta system. Predictive performance of the models has been estimated using accuracy and receiver operating characteristics (ROC). The methods have been tested on real-world data sets, with encouraging results.
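To illustrate the a posteriori approach described in the abstract, the following is a minimal, hypothetical sketch (not the Rosetta implementation): rules are first induced, then filtered by quality measures such as support and accuracy so that the resulting model stays small enough to inspect. All names and thresholds here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a posteriori rule pruning. A rule is kept only if it
# covers enough objects (support) and is accurate enough on the objects
# it covers. Thresholds are illustrative, not the paper's values.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Rule:
    conditions: Dict[str, object]  # attribute -> required value
    decision: object               # predicted class


def matches(rule: Rule, obj: Dict[str, object]) -> bool:
    """A rule covers an object if every condition attribute agrees."""
    return all(obj.get(a) == v for a, v in rule.conditions.items())


def quality(rule: Rule, data: List[Tuple[Dict[str, object], object]]):
    """Return (support, accuracy) of a rule on labelled data."""
    covered = [(obj, label) for obj, label in data if matches(rule, obj)]
    if not covered:
        return 0, 0.0
    correct = sum(1 for _, label in covered if label == rule.decision)
    return len(covered), correct / len(covered)


def prune(rules, data, min_support=2, min_accuracy=0.6):
    """A posteriori pruning: drop rules below the quality thresholds."""
    kept = []
    for r in rules:
        support, accuracy = quality(r, data)
        if support >= min_support and accuracy >= min_accuracy:
            kept.append(r)
    return kept
```

For example, on a toy data set where one rule covers two objects correctly and another covers a single object incorrectly, only the first survives pruning. A priori pruning, by contrast, would constrain rule generation itself so that low-quality rules are never produced.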

Citation (APA)

Ågotnes, T., Komorowski, J., & Løken, T. (1999). Taming large rule models in rough set approaches. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1704, pp. 193–203). Springer Verlag. https://doi.org/10.1007/978-3-540-48247-5_21
