Bias detection in predictive models using FairML

ISSN: 2249-8958

Abstract

In machine learning, predictive models are widely used in decision-making processes. Policy-makers, auditors, and end users are therefore concerned about whether these models make biased or unfair decisions. A model can produce wrong decisions because of bias, which may be intentional or unintentional discrimination arising from features present in the dataset. Bias arises in many industries that use AI models for prediction, such as banking, housing, education, finance, and insurance. If a feature has high significance for the prediction and is also a protected attribute, such as race, religion, or gender, then that feature can contribute to bias in the prediction. FairML can help address this problem. FairML is a framework used to discover bias in predictive ML models. It consists of four ranking algorithms (Iterative Orthogonal Feature Projection (IOFP), Minimum Redundancy Maximum Relevance (mRMR), Lasso regression, and Random Forest) that estimate the significance of each feature, and these ranking algorithms handle both linear and non-linear dependencies. In this paper we study these feature-ranking algorithms on different prediction models in order to identify the significant features, since prediction models are used in every field.
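
To make the ranking idea concrete, the following is a minimal sketch of two of the four criteria the abstract names: Lasso coefficients for linear dependence and Random Forest importance for non-linear dependence. It uses scikit-learn rather than the FairML package itself, and the dataset, column names, and the "gender" protected attribute are purely illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic, illustrative data: a protected attribute ("gender") that the
# outcome partly depends on, alongside ordinary features.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "age": rng.integers(18, 70, n).astype(float),
    "gender": rng.integers(0, 2, n).astype(float),  # protected attribute
})
y = ((0.00002 * X["income"] + 0.5 * X["gender"]
      + rng.normal(0, 0.3, n)) > 1.2).astype(int)

# Criterion 1: Lasso regression coefficients capture linear dependence
# (features standardized so coefficient magnitudes are comparable).
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
lasso_rank = pd.Series(np.abs(lasso.coef_), index=X.columns)

# Criterion 2: Random Forest impurity importance captures non-linear dependence.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
forest_rank = pd.Series(forest.feature_importances_, index=X.columns)

ranking = pd.DataFrame({"lasso": lasso_rank, "random_forest": forest_rank})
print(ranking.sort_values("random_forest", ascending=False))
# A high rank for the protected attribute under either criterion flags that
# feature as a potential source of bias in the model's predictions.
```

A high significance score for a protected attribute does not by itself prove discrimination, but it indicates where an auditor should look more closely, which is the role the ranking algorithms play in the FairML framework.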

Citation (APA)

Kumawat, V., Bangwal, V., & Lavanya, K. (2019). Bias detection in predictive models using FairML. International Journal of Engineering and Advanced Technology, 8(4), 847–851.
