Rough sets in data analysis: Foundations and applications

Abstract

Rough set theory is a paradigm introduced to deal with uncertainty arising from ambiguity of classification caused by incompleteness of knowledge. The idea, proposed by Z. Pawlak in 1982, goes back to a classical view of uncertain and/or inexact notions due to the founder of modern logic, Gottlob Frege: an uncertain notion should possess around it a region of uncertainty, consisting of objects that can be classified with certainty neither into the notion nor into its complement. The central tool for realizing this idea in rough sets is a relation of uncertainty based on the classical notion of indiscernibility due to Gottfried W. Leibniz: objects are indiscernible when no operator applied to each of them yields distinct values. © 2008 Springer-Verlag Berlin Heidelberg.
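The abstract's two ingredients — Leibniz-style indiscernibility and Frege's boundary region — can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the article: the decision table, attribute names, and concept below are hypothetical, chosen to show how lower and upper approximations delimit the region of uncertainty.

```python
# Illustrative rough-set sketch; the table, attributes, and concept
# are hypothetical examples, not taken from the article.

def indiscernibility_classes(table, attributes):
    """Partition objects into classes that agree on every chosen attribute
    (objects are indiscernible when no attribute yields distinct values)."""
    classes = {}
    for obj, values in table.items():
        key = tuple(values[a] for a in attributes)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def approximations(classes, concept):
    """Lower approximation: union of classes wholly inside the concept;
    upper approximation: union of classes that intersect it."""
    lower, upper = set(), set()
    for cls in classes:
        if cls <= concept:
            lower |= cls
        if cls & concept:
            upper |= cls
    return lower, upper

# Hypothetical decision table: o1 and o2 are indiscernible on both
# attributes, while o3 can be told apart.
table = {
    "o1": {"color": "red",  "size": "big"},
    "o2": {"color": "red",  "size": "big"},
    "o3": {"color": "blue", "size": "big"},
}
classes = indiscernibility_classes(table, ["color", "size"])
concept = {"o1", "o3"}               # the notion to approximate
lower, upper = approximations(classes, concept)
boundary = upper - lower             # Frege's region of uncertainty
# lower == {"o3"}; upper == {"o1", "o2", "o3"}; boundary == {"o1", "o2"}
```

Here o1 belongs to the concept and o2 does not, yet the two are indiscernible, so neither can be classified with certainty: both land in the boundary region, exactly the situation the abstract describes.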

Cite (APA)

Polkowski, L., & Artiemjew, P. (2008). Rough sets in data analysis: Foundations and applications. Studies in Computational Intelligence, 122, 33–54. https://doi.org/10.1007/978-3-540-78534-7_2
