This project extends LimeOut [1]. It aims to tackle process fairness in classification while maintaining (or improving) accuracy.
More precisely, ExpOut incorporates different explainers.
FixOut addresses fairness issues of ML models based on their decision outcomes, showing how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness.
Originally, it was conceived to tackle process fairness of ML models based on decision outcomes (see LimeOut [1]). To do so, it uses an explanation method to assess a model’s reliance on salient or sensitive features, integrated in a human-centered workflow: given a classifier M, a dataset D, a set F of sensitive features, and an explanation method of choice, FixOut outputs a competitive classifier M’ that improves in process fairness as well as in other fairness metrics.
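The workflow above can be sketched in a few lines. This is a minimal, hypothetical illustration of the “feature dropout + ensemble” idea (not the actual FixOut implementation): for a set F of sensitive features, one classifier is trained per dropped feature, plus one with all of F dropped, and their predicted probabilities are averaged. The function names and the use of scikit-learn's `LogisticRegression` are assumptions for the sake of a runnable example.

```python
# Hypothetical sketch of a FixOut-style workflow (illustration only):
# drop sensitive features, train one classifier per reduced feature set,
# and pool the resulting models into an ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data D: columns 0 and 1 play the role of the sensitive set F.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
sensitive = [0, 1]

def feature_dropout_ensemble(X, y, sensitive):
    """Train one model per individually dropped sensitive feature,
    plus one model with all sensitive features dropped."""
    models = []
    drop_sets = [[f] for f in sensitive] + [sensitive]
    for drop in drop_sets:
        keep = [i for i in range(X.shape[1]) if i not in drop]
        clf = LogisticRegression().fit(X[:, keep], y)
        models.append((keep, clf))
    return models

def ensemble_proba(models, X):
    # Ensemble step: average the class probabilities of all models.
    probs = [clf.predict_proba(X[:, keep]) for keep, clf in models]
    return np.mean(probs, axis=0)

models = feature_dropout_ensemble(X, y, sensitive)
proba = ensemble_proba(models, X)
```

The resulting ensemble M’ never sees a sensitive feature in at least one of its members, which is what reduces the model's reliance on F as measured by the explanation method.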
[1] Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. LimeOut: An Ensemble Approach To Improve Process Fairness. XKDD Workshop 2020. ⟨hal-02864059v2⟩
[2] Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. Making ML models fairer through explanations: the case of LimeOut. AIST 2020. ⟨hal-02864059v5⟩