# Fairness Algorithms

Fairness.jl provides various algorithms that can help mitigate bias and improve fairness metrics.

## Introduction

These algorithms are wrappers. As demonstrated in the last section, these wrappers can be composed into a complex pipeline with more than one fairness algorithm. These wrappers can be used only with binary classifiers.

The fairness algorithms are divided into three categories based on the part of the pipeline the algorithm controls. These categories are Preprocessing, Postprocessing and Inprocessing [WIP].

## Preprocessing Algorithms

These are the algorithms that control the training data fed into the machine learning model. This class of algorithms improves the representation of groups in the training data.

### ReweighingSampling Algorithm

`Fairness.ReweighingSamplingWrapper`

— Type`ReweighingSamplingWrapper`

ReweighingSamplingWrapper is a preprocessing algorithm wrapper in which weights are calculated for each group-label combination. Rows are then sampled using the calculated weights. The number of datapoints used to train after sampling from the reweighed dataset can be controlled by `factor`.

`Fairness.ReweighingSamplingWrapper`

— Method`ReweighingSamplingWrapper(classifier=nothing, grp=:class, factor=1, rng=Random.GLOBAL_RNG)`

Instantiates a ReweighingSamplingWrapper, which wraps the classifier with the Reweighing fairness algorithm together with sampling. The sensitive attribute can be specified by the parameter `grp`. `factor` * number_of_samples_in_original_data datapoints are sampled using the calculated weights and then used for training. A negative value or no value for the `factor` parameter instructs the algorithm to use the same number of datapoints as in the original sample.
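A hedged usage sketch, reusing the toy dataset, `ConstantClassifier` and the `:Sex` sensitive attribute from the composability example on this page; the specific `factor` value is illustrative:

```
using Fairness, MLJ

X, y, _ = @load_toydata
model = ConstantClassifier()

# factor=2 samples roughly twice the original number of rows
# from the reweighed dataset before training
wrapped = ReweighingSamplingWrapper(classifier=model, grp=:Sex, factor=2)
mch = machine(wrapped, X, y)
fit!(mch)
ŷ = predict(mch, X)
```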

### Reweighing Algorithm

The model being wrapped with this wrapper needs to support weights. If the model doesn't support training with weights, an error is thrown. In case weights are not supported by your desired model, switch to the ReweighingSampling algorithm. To find the models in MLJ that support weights, execute:

```
using MLJ
models(x-> x.supports_weights)
```

`Fairness.ReweighingWrapper`

— Type`ReweighingWrapper`

ReweighingWrapper is a preprocessing algorithm wrapper in which weights are calculated for each group-label combination. These calculated weights are then passed to the classifier model, which uses them to make training fair.

`Fairness.ReweighingWrapper`

— Method`ReweighingWrapper(classifier=nothing, grp=:class)`

Instantiates a ReweighingWrapper, which wraps the `classifier` with the Reweighing fairness algorithm. The sensitive attribute can be specified by the parameter `grp`. If the `classifier` doesn't support weights while training, an error is thrown.
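A minimal sketch, assuming a classifier for which `supports_weights` is `true` (use the `models` query shown above to find one); `SomeWeightedClassifier` below is a hypothetical placeholder, not a real model name:

```
using Fairness, MLJ

X, y, _ = @load_toydata
# Substitute any model returned by models(x -> x.supports_weights);
# SomeWeightedClassifier is a hypothetical stand-in.
model = SomeWeightedClassifier()

wrapped = ReweighingWrapper(classifier=model, grp=:Sex)
mch = machine(wrapped, X, y)   # weights are computed and passed to the classifier
fit!(mch)
```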

`Fairness._calculateWeights`

— Function. Helper function for ReweighingWrapper and ReweighingSamplingWrapper. `grps` is an array of values of the protected attribute, and `y` is an array of ground-truth values. An array of (frequency) weights is returned from this function.
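The weights follow the standard reweighing scheme from the fairness literature: each (group, label) cell is weighted so that group and label look statistically independent. A sketch of that calculation, as an illustration rather than the package's exact implementation:

```
# w(g, c) = P(g) * P(c) / P(g, c), estimated from counts as
# w(g, c) = count(g) * count(c) / (N * count(g, c))
function reweighing_weights(grps, y)
    N = length(y)
    weights = zeros(Float64, N)
    for i in 1:N
        n_g  = count(==(grps[i]), grps)   # frequency of this group
        n_c  = count(==(y[i]), y)         # frequency of this label
        n_gc = count(j -> grps[j] == grps[i] && y[j] == y[i], 1:N)
        weights[i] = n_g * n_c / (N * n_gc)
    end
    return weights
end
```

Cells that are under-represented relative to independence get weights above 1, over-represented cells get weights below 1.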

## Postprocessing

These are the algorithms that have control over the final predictions. They can tweak final predictions to optimise fairness constraints.

### Equalized Odds Algorithm

`Fairness.EqOddsWrapper`

— Type`EqOddsWrapper`

It is a postprocessing algorithm which uses Linear Programming to optimise the constraints for Equalized Odds.

`Fairness.EqOddsWrapper`

— Method`EqOddsWrapper(classifier=nothing, grp=:class)`

Instantiates an EqOddsWrapper, which wraps the classifier.

### Calibrated Equalized Odds Algorithm

`Fairness.CalEqOddsWrapper`

— Type`CalEqOddsWrapper`

It is a postprocessing algorithm which optimises the constraints for Calibrated Equalized Odds.

`Fairness.CalEqOddsWrapper`

— Method`CalEqOddsWrapper(classifier=nothing, grp=:class, fp_rate=1, fn_rate=1)`

Instantiates a CalEqOddsWrapper, which wraps the classifier.
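A hedged usage sketch. Interpreting `fp_rate` and `fn_rate` as relative weights on the false-positive and false-negative constraints follows the Calibrated Equalized Odds literature and is an assumption, not stated on this page:

```
using Fairness, MLJ

X, y, _ = @load_toydata
model = ConstantClassifier()

# fp_rate=1, fn_rate=0 would emphasise the false-positive constraint only
wrapped = CalEqOddsWrapper(classifier=model, grp=:Sex, fp_rate=1, fn_rate=0)
mch = machine(wrapped, X, y)
fit!(mch)
ŷ = predict(mch, X)
```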

### LinProg Algorithm

This algorithm supports all the metrics provided by Fairness.jl.

`Fairness.LinProgWrapper`

— Type`LinProgWrapper`

It is a postprocessing algorithm that uses the JuMP and Ipopt libraries to minimise error and satisfy the equality of the specified measures for all groups at the same time. Automatic differentiation and gradient-based optimisation are used to find the probabilities with which the predictions are changed for each group.

`Fairness.LinProgWrapper`

— Method`LinProgWrapper(classifier=nothing, grp=:class, measure=nothing, measures=nothing)`

Instantiates a LinProgWrapper, which wraps the classifier and contains the measure to be optimised and the sensitive attribute (`grp`). You can optimise all the fairness metrics in `measures`, or optimise a single metric using the keyword `measure`.
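For example, to equalise a single metric across groups (a sketch; `false_positive_rate` is the standard MLJ classification measure and is assumed to be accepted here):

```
using Fairness, MLJ

X, y, _ = @load_toydata
model = ConstantClassifier()

# Optimise a single fairness metric across groups via the measure keyword
wrapped = LinProgWrapper(classifier=model, grp=:Sex, measure=false_positive_rate)
mch = machine(wrapped, X, y)
fit!(mch)
ŷ = predict(mch, X)
```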

## Inprocessing

These are the algorithms that have control over the training process. They can modify the training process to optimise fairness constraints.

### Penalty Algorithm

`Fairness.PenaltyWrapper`

— Type`PenaltyWrapper`

It is an inprocessing algorithm that wraps a probabilistic classifier. Optimal thresholds for each group in the protected attribute are found to minimise accuracy + alpha*fairness_measure^2. Gradients are used to find the optimal threshold values. `alpha` controls the fairness-accuracy tradeoff.

`Fairness.PenaltyWrapper`

— Method`PenaltyWrapper(classifier=nothing, grp=:class, measure, alpha=1, n_iters=1000, lr=0.01)`

Instantiates a PenaltyWrapper, which wraps the classifier and contains the measure to be optimised, along with the various hyperparameters of the threshold search.
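A hedged sketch with the default hyperparameters spelled out; using `false_positive_rate` as the measure is an assumption for illustration:

```
using Fairness, MLJ

X, y, _ = @load_toydata
model = ConstantClassifier()   # a probabilistic classifier

# alpha controls the fairness-accuracy tradeoff;
# lr and n_iters control the gradient-based threshold search
wrapped = PenaltyWrapper(classifier=model, grp=:Sex,
                         measure=false_positive_rate,
                         alpha=1, n_iters=1000, lr=0.01)
mch = machine(wrapped, X, y)
fit!(mch)
```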

## Composability

Fairness.jl gives you the ability to easily use multiple fairness algorithms on top of each other. A fairness algorithm can be added over another fairness algorithm by simply wrapping the previously wrapped model with the new wrapper. Fairness.jl handles everything else for you! The use of wrappers lets you add as many algorithms as you want.

```
julia> using Fairness, MLJ
julia> X, y, _ = @load_toydata;
julia> model = ConstantClassifier();
julia> wrappedModel = ReweighingSamplingWrapper(classifier=model, grp=:Sex);
julia> wrappedModel2 = EqOddsWrapper(classifier=wrappedModel, grp=:Sex);
julia> mch = machine(wrappedModel2, X, y);
julia> fit!(mch)
[ Info: Training Machine{EqOddsWrapper{ReweighingSamplingWrapper{ConstantClassifier}},…} @921.
[ Info: Training Machine{ReweighingSamplingWrapper{ConstantClassifier},…} @532.
[ Info: Training Machine{ConstantClassifier,…} @565.
Machine{EqOddsWrapper{ReweighingSamplingWrapper{ConstantClassifier}},…} @921 trained 1 time; caches data
  args:
    1:  Source @803 ⏎ `Table{Union{AbstractVector{Multiclass{2}}, AbstractVector{Multiclass{3}}}}`
    2:  Source @010 ⏎ `AbstractVector{Multiclass{2}}`
julia> ŷ = predict(mch, X);
```