Commit 941f44e6 authored by BERNIER Fabien's avatar BERNIER Fabien

Merge branch 'master' of gitlab.inria.fr:galvesda/expout

parents 0050dff9 3b4c2d4c
BSD 3-Clause License
Copyright (c) 2020, The FIXOut developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# ExpOut
This project is an extension of LimeOut [1]. It aims to tackle process fairness in classification while preserving (or improving) accuracy.
More precisely, ExpOut incorporates different explainers.
# FixOut
FixOut addresses fairness issues of ML models based on decision outcomes, and shows how the simple idea of “feature dropout” followed by an “ensemble approach” can improve model fairness.
Originally, it was conceived to tackle the process fairness of ML models based on decision outcomes (see LimeOut [1]). To do so, it uses an explanation method to assess a model’s reliance on salient or sensitive features, integrated in a human-centered workflow: given a classifier M, a dataset D, a set F of sensitive features, and an explanation method of choice, FixOut outputs a competitive classifier M’ that improves process fairness as well as other fairness metrics.
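The “feature dropout + ensemble” idea can be sketched as follows. This is a minimal illustration, not the actual FixOut implementation: the dataset, the choice of logistic regression, and the sensitive-feature indices are all toy assumptions; the recipe shown (one submodel per dropped sensitive feature, plus one dropping all of them, with averaged probabilities) follows the description above.

```python
# Minimal sketch of "feature dropout + ensemble" (NOT the FixOut code):
# train one copy of the model per sensitive feature with that feature
# removed, plus one with all sensitive features removed, then average
# the submodels' predicted probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                 # toy dataset with 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # label depends on features 0 and 1
sensitive = [3, 4]                            # hypothetical sensitive feature indices

# One drop-set per sensitive feature, plus one dropping all of them.
drop_sets = [[f] for f in sensitive] + [sensitive]
models = []
for dropped in drop_sets:
    keep = [i for i in range(X.shape[1]) if i not in dropped]
    clf = LogisticRegression().fit(X[:, keep], y)
    models.append((keep, clf))

def ensemble_proba(X_new):
    # Average class probabilities over all submodels.
    return np.mean([clf.predict_proba(X_new[:, keep]) for keep, clf in models],
                   axis=0)

preds = ensemble_proba(X).argmax(axis=1)
```

No single submodel ever sees all sensitive features, yet the averaged ensemble remains a competitive classifier on the non-sensitive signal.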
Classifiers available:
* Multilayer Perceptron
* Logistic Regression
* Random Forest
* Bagging
* AdaBoost
* Gaussian Mixture
* Gradient Boosting
Explainers available:
* LIME
* Anchors
* SHAP
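The explainers above answer the question “how much does the model rely on a given feature?” with local explanations. As a rough, self-contained stand-in for that idea (it is not LIME, Anchors, or SHAP), one can measure the mean shift in predicted probability when a feature is permuted; everything below, including the toy data, is illustrative.

```python
# Crude permutation-based reliance score (a stand-in for LIME/SHAP/Anchors,
# not their actual algorithms): how much do predictions move when a
# feature's values are shuffled?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)        # label depends only on feature 0
clf = LogisticRegression().fit(X, y)

def reliance(clf, X, f, rng):
    # Mean absolute shift in P(class 1) after permuting feature f.
    Xp = X.copy()
    Xp[:, f] = rng.permutation(Xp[:, f])
    return np.abs(clf.predict_proba(Xp)[:, 1]
                  - clf.predict_proba(X)[:, 1]).mean()

scores = [reliance(clf, X, f, rng) for f in range(X.shape[1])]
```

A high score for a sensitive feature signals the reliance that feature dropout is meant to remove.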
# Example
`python runner.py --data german.data --trainsize 0.8 --algo mlp --max_features 10 --cat_features 0 2 3 5 6 8 9 11 13 14 16 18 19 --drop 8 18 19 --exp anchors`
# References
[1] Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. LimeOut: An Ensemble Approach To Improve Process Fairness. XKDD Workshop 2020. ⟨hal-02864059v2⟩
[2] Guilherme Alves, Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli. Making ML models fairer through explanations: the case of LimeOut. AIST 2020. ⟨hal-02864059v5⟩
## Dependencies
......