Evaluation of error-sensitive attributes


Abstract

Numerous attribute selection frameworks have been developed to improve performance and results in machine learning and data classification (Guyon & Elisseeff, 2003; Saeys, Inza & Larranaga, 2007). The majority of this effort has focused on performance and cost factors, with the primary aim of examining and enhancing the logic and sophistication of the underlying components and methods of specific classification models, such as the many wrapper, filter and cluster algorithms for feature selection that operate either as a data pre-processing step or embedded as an integral part of a specific classification process. Taking a different approach, our research studies the relationship between classification errors and data attributes not before and not during, but after classification: through a post-classification analysis and a proposed attribute-risk evaluation routine, we evaluate the risk levels of attributes and identify those that may be more prone to errors. Possible benefits of this research include helping to develop error-reduction measures and enabling the investigation of specific relationships between attributes and errors in a more efficient and effective way. Initial experiments have shown some supportive results, and the unsupportive results can also be explained by a hypothesis extended from this evaluation proposal. © Springer-Verlag 2013.
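To make the idea of a post-classification attribute-risk evaluation concrete, the sketch below is a minimal, hypothetical illustration only — the toy classifier, the data, the function names and the scoring rule (the gap in error rate between an attribute's values) are all assumptions for illustration, not the authors' actual routine from the paper.

```python
# Hypothetical sketch of a post-classification attribute-risk routine.
# Idea: after a classifier has run, score each attribute by how unevenly
# the misclassifications fall across that attribute's values.

def classify(record):
    # Toy stand-in classifier (assumption): predicts 1 when a1 + a2 >= 1.
    return 1 if record["a1"] + record["a2"] >= 1 else 0

data = [
    # (binary attributes, true label) -- fabricated toy data for illustration
    ({"a1": 1, "a2": 0, "a3": 1}, 1),
    ({"a1": 0, "a2": 1, "a3": 0}, 1),
    ({"a1": 1, "a2": 1, "a3": 1}, 0),  # misclassified by the toy rule
    ({"a1": 0, "a2": 0, "a3": 1}, 0),
    ({"a1": 1, "a2": 0, "a3": 0}, 1),
    ({"a1": 0, "a2": 0, "a3": 0}, 1),  # misclassified by the toy rule
]

def attribute_risk(data):
    """Score each binary attribute a by
    |error rate when a=1  -  error rate when a=0|;
    a larger gap suggests the attribute is more error-sensitive."""
    scores = {}
    for a in data[0][0]:
        groups = {0: [0, 0], 1: [0, 0]}  # value -> [errors, total]
        for rec, label in data:
            v = rec[a]
            groups[v][1] += 1
            if classify(rec) != label:
                groups[v][0] += 1
        rate0 = groups[0][0] / groups[0][1] if groups[0][1] else 0.0
        rate1 = groups[1][0] / groups[1][1] if groups[1][1] else 0.0
        scores[a] = abs(rate1 - rate0)
    return scores
```

On this toy data the routine flags `a2` as the most error-sensitive attribute, since errors concentrate on records where `a2 = 1`; the point is only that the analysis happens entirely after classification, using the observed errors, rather than before or during model construction.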

Citation (APA)

Wu, W., & Zhang, S. (2013). Evaluation of error-sensitive attributes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7867 LNAI, pp. 283–294). https://doi.org/10.1007/978-3-642-40319-4_25
