arXiv Open Access 2020

Fairway: A Way to Build Fair ML Software

Joymallya Chakraborty Suvodeep Majumder Zhe Yu Tim Menzies

Abstract

Machine learning software is increasingly being used to make decisions that affect people's lives. But sometimes the core part of this software (the learned model) behaves in a biased manner, giving undue advantages to a specific group of people (where those groups are determined by sex, race, etc.). This "algorithmic discrimination" in AI software systems has become a matter of serious concern in the machine learning and software engineering communities. Prior work has sought to find "algorithmic bias" or "ethical bias" in software systems, and once such bias is detected, mitigating it is extremely important. In this work, we (a) explain how ground-truth bias in training data affects machine learning model fairness and how to find that bias in AI software, and (b) propose a method, Fairway, which combines pre-processing and in-processing approaches to remove ethical bias from training data and the trained model. Our results show that we can find and mitigate bias in a learned model without significantly damaging that model's predictive performance. We propose that (1) testing for bias and (2) bias mitigation should be a routine part of the machine learning software development life cycle. Fairway offers much support for these two purposes.
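To make the abstract's notion of "testing for bias" concrete, here is a minimal sketch of one common group-fairness check, statistical parity difference. This is an illustration of the kind of routine fairness test the paper advocates, not Fairway's own algorithm; the function name, toy data, and protected-attribute encoding are all hypothetical.

```python
def statistical_parity_difference(labels, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    labels: predicted outcomes (1 = favorable outcome), groups: the
    protected attribute per instance (e.g. sex), privileged: the value
    marking the privileged group. A value near 0 suggests parity;
    a negative value means the unprivileged group receives fewer
    favorable outcomes.
    """
    priv = [y for y, g in zip(labels, groups) if g == privileged]
    unpriv = [y for y, g in zip(labels, groups) if g != privileged]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)


# Hypothetical model predictions for 8 applicants, grouped by sex.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
sexes = ["m", "m", "m", "m", "f", "f", "f", "f"]
spd = statistical_parity_difference(preds, sexes, privileged="m")
print(spd)  # -0.5: the "f" group gets far fewer favorable outcomes
```

A fairness-aware pipeline would compute a metric like this on held-out data after every retraining, alongside accuracy, and flag models whose bias exceeds a chosen threshold.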



Citation Format

Chakraborty, J., Majumder, S., Yu, Z., Menzies, T. (2020). Fairway: A Way to Build Fair ML Software. https://arxiv.org/abs/2003.10354

Journal Information

Year Published: 2020
Language: en
Source Database: arXiv
Access: Open Access ✓