Etiq Test Version
Our initial limited release to give you a flavour of our library and dashboard. Not for commercial or production use!

Use cases & limitations

A typical use case for etiq: let's say you are building a predictive model using tabular customer data. You have wrangled your data and tried a few model classes. Now you want to check whether your model is unintentionally discriminating against certain demographic groups, e.g. based on gender, ethnicity or age, and you have access to the demographic labels. This is where you can use the etiq library.
The etiq library provides different kinds of pipelines that plug into your existing pipelines and test them for a specific purpose. The pipelines currently available focus on identifying and mitigating unintended discrimination. Etiq pipelines provide identify methods, repair methods, and metrics to evaluate outcomes, including fairness metrics.
The current free release provides only one etiq pipeline, and usage is limited to models with at most 15 features. This is a teaser library and is not intended for production or commercial settings. As we develop etiq further, we want to understand how you would interact with it, whether it is a useful library, and how to shape it to meet your needs. While we have tested the library, we expect issues and bugs in this iteration stemming from usage we haven't predicted. For more details on the theoretical underpinnings of our methods, see Definitions. We'd like to stress that the fairness literature and methodology form a very wide field with many divergent opinions. Where applicable we will refer to the framework we are using, but some of our approaches are experimental.
In addition to the library, the solution includes a dashboard that presents the results of the different pipelines logged by the library, including the ability to retrieve pipeline results from one session to another.
If you want to access our full solution, including support from us, or to submit comments, feature requests or issues, please join our Slack channel or email us: [email protected]

Quickstart

The Etiq library supports Python versions 3.6, 3.7, 3.8 and 3.9 on Windows, Mac and Linux.
We do not support Apple M1 Macs at the moment.
We recommend using pip to install the Etiq library and its dependencies. To download the library and get started, go to this link.
To import etiq_core:

```python
from etiq_core import *
```
Once you have imported the library, go to the dashboard site, sign up and log in.
To store metrics run from your notebook or other IDE to your dashboard, you will need a token to associate your session with your account. To create this token, go to the Token Management window in your account and click Add New Access Token, then copy and paste it into your notebook.
From your notebook, log in to the dashboard and you're all set. As you log different pipelines and debiasing pipelines and tie them to a project, you'll be able to retrieve them both via the dashboard and via your notebook across sessions.
```python
etiq_login("https://dashboard.etiq.ai/", "<token>")
```
Data about your pipelines and debiasing pipelines is stored on Etiq's AWS instance. Your datasets and models, however, are not stored anywhere, so you can rest assured.
If your security set-up requires a deployment entirely on your own cloud instance or on premises, just get in touch with us: [email protected]
Please don't leave your token lying around: anyone who finds it can use it to retrieve the information stored about your pipelines. Treat it as you would a username/password.
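One simple way to keep the token out of your notebook is to read it from an environment variable. The sketch below is our suggestion, not part of the etiq API, and the variable name ETIQ_TOKEN is an assumption:

```python
import os

# Hypothetical convention: store the dashboard token in an environment
# variable instead of pasting it into the notebook.
token = os.environ.get("ETIQ_TOKEN", "")

if token:
    # etiq_login("https://dashboard.etiq.ai/", token)
    pass
else:
    print("ETIQ_TOKEN is not set; create a token in Token Management first")
```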

Projects

To start using the versioning and dashboard functionality, create a project with a project name. You only have to do this once per session, and all the details logged as part of data pipelines or debias pipelines will be stored. In your dashboard you will see the metrics of all your pipelines and debiasing pipelines, split by project name.
```python
# start the project
our_project = Project(name="TestAdult")
```
For team sharing functionality, reach out to our team: [email protected]

DataPipeline

To follow the example analysis below, download the Adult dataset from https://archive.ics.uci.edu/ml/datasets/adult or load it in the notebook as a Pandas dataframe from the samples included in the library. A demo notebook is available at https://github.com/ETIQ-AI/demo/blob/main/DemoAdultLibrary03.ipynb
```python
data = load_sample('adultdata')
```
The DataPipeline object holds the model we'd like to evaluate, the dataset used to train it, and the fairness metrics most relevant to our project.
Below, we define the parameters for the debiasing process using the BiasParams structure. This allows us to specify the protected category (often a demographic feature you'd like to mitigate bias for) using the protected parameter; who is in the privileged and unprivileged groups (the privileged and unprivileged parameters respectively); and what the positive and negative outcomes are in this dataset (the positive_outcome_label and negative_outcome_label parameters respectively).
```python
debias_param = BiasParams(protected='gender',
                          privileged='Male',
                          unprivileged='Female',
                          positive_outcome_label='>50K',
                          negative_outcome_label='<=50K')
```
Even if your model does not use the specific demographic feature you want to identify bias for, you should include it in the dataset (etiq will automatically exclude it later during any model refitting).
It is important to note that the protected feature is removed from the dataset for the purposes of training a model and is only used to evaluate the model for bias.
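For illustration (this is not etiq code), holding a protected column out of training while keeping it for evaluation looks roughly like this, using a toy pandas frame with made-up column names:

```python
import pandas as pd

# Toy data loosely mirroring the Adult dataset layout; columns are illustrative.
df = pd.DataFrame({
    "age": [25, 40, 31, 52],
    "hours_per_week": [40, 50, 38, 45],
    "gender": ["Male", "Female", "Female", "Male"],  # protected attribute
    "income": [">50K", "<=50K", ">50K", "<=50K"],    # label
})

protected = df["gender"]                   # kept aside to evaluate bias
X = df.drop(columns=["gender", "income"])  # the model never sees the protected column
y = df["income"]

print(list(X.columns))  # ['age', 'hours_per_week']
```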
Specify transforms like Dropna or EncodeLabels to make sure the data are numeric and without missing values.
```python
transforms = [Dropna, EncodeLabels]
```
The DatasetLoader reads in the data, applies any transformations, splits the data into training, validation and test datasets and sets aside the test dataset to avoid data leakage in your analysis. The training and validation datasets are loaded into the Dataset class.
```python
dl = DatasetLoader(data=data,
                   label='income',
                   transforms=transforms,
                   bias_params=debias_param,
                   train_valid_test_splits=[0.8, 0.1, 0.1],
                   names_col=data.columns.values)
```
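To make the train_valid_test_splits argument concrete, here is a standalone sketch of an 0.8/0.1/0.1 split (not the DatasetLoader internals): the test slice is set aside untouched so it cannot leak into model selection.

```python
import random

# 100 toy row indices, shuffled with a fixed seed for reproducibility
rows = list(range(100))
random.Random(42).shuffle(rows)

n = len(rows)
train = rows[:int(0.8 * n)]               # used to fit the model
valid = rows[int(0.8 * n):int(0.9 * n)]   # used for model selection
test = rows[int(0.9 * n):]                # held out until final evaluation

print(len(train), len(valid), len(test))  # 80 10 10
```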
Choose the metrics you want computed for this project.
```python
metrics_initial = [accuracy, equal_opportunity]
```
Each of these metrics measures how well our model performs when classifying the data. For example, the accuracy metric returns the fraction of the training dataset which is correctly classified. The equal_opportunity metric measures the true positive rate, i.e. the proportion of positive outcome labels that are correctly classified. The other available metrics used to evaluate model performance are:
  • true_neg_rate (the proportion of negative outcome labels that are correctly classified)
  • true_pos_rate (the same as equal_opportunity)
  • demographic_parity (the proportion of each group that is assigned the positive outcome label)
  • equal_odds_tpr
  • equal_odds_tnr
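As a rough illustration of what these group-wise metrics capture (these are not the etiq implementations), accuracy and the true positive rate behind equal_opportunity can be computed per group like this:

```python
# Toy per-group metric calculations; 1 is the positive outcome label.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def true_positive_rate(y_true, y_pred, positive=1):
    # equal_opportunity compares this rate between groups
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == positive]
    return sum(p == positive for _, p in pairs) / len(pairs)

# Toy labels/predictions for a privileged and an unprivileged group
y_true_priv, y_pred_priv = [1, 1, 0, 0], [1, 1, 0, 1]
y_true_unpriv, y_pred_unpriv = [1, 1, 0, 0], [1, 0, 0, 0]

print(true_positive_rate(y_true_priv, y_pred_priv))      # 1.0
print(true_positive_rate(y_true_unpriv, y_pred_unpriv))  # 0.5 -> a gap signals potential bias
```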
Load the model you'd like to evaluate with the dataset, or choose one of the classifiers that are already available. For this test release these are DefaultXGBoostClassifier (a wrapper around the XGBoost classifier), DefaultRandomForestClassifier (a wrapper around the random forest classifier from sklearn) and DefaultLogisticRegression (a wrapper around the logistic regression classifier from sklearn).
```python
clf_model = DefaultXGBoostClassifier()
```
You can use a pre-trained model and are not restricted to the model classes we have wrappers for; we provide some widely-used model classes for ease of use.
Models from other libraries (Etiq supports models from XGBoost, LightGBM, PyTorch, TensorFlow, Keras and scikit-learn) may be used by wrapping them in the Model class. We could, for example, create an LGBMClassifier model, train it and use the trained model:
```python
import lightgbm as lgb

lgb_model = lgb.LGBMClassifier()
fitted_lgb = lgb_model.fit(X_train, y_train)
clf_model = Model(model_architecture=lgb_model, model_fitted=fitted_lgb)
```
Now you can create the DataPipeline. The DatasetLoader class will take the data, transform it, split it into training/validation/testing data and load it in. The DataPipeline computes your metrics of interest on the Dataset, using the model you provided.
```python
pipeline_initial = DataPipeline(dataset_loader=dl, model=clf_model, metrics=metrics_initial)
pipeline_initial.run()
```
Remember, your dataset can have as many features as you want, but in this limited release the DataPipeline will only pick up the first 15 features.

DebiasPipeline

DebiasPipeline takes as inputs a data pipeline, an identify and/or repair method, and the metrics you want to use to evaluate your model. Identify methods, as the name suggests, are intended to help you identify bias issues. Repair methods are designed to fix or mitigate the issues identified, and include implementations of algorithms from the fairness literature.
The repair pipeline we currently provide works at the pre-processing level, i.e. it changes the dataset with the aim of mitigating some of its sources of bias. Methods at the in-processing or post-processing stages can be more effective from an optimization point of view, but they may not address some of the issues in the data, which is why pre-processing is a good starting point. Our full solution includes additional pipelines.
An example debiasing pipeline is given below:
```python
identify_pipeline = IdentifyBiasSources(nr_groups=20,  # number of segments, found by grouping similar rows with unsupervised learning
                                        train_model_segment=True,
                                        group_def=['unsupervised'],
                                        fit_metrics=[accuracy, equal_opportunity])

# the DebiasPipeline aims to mitigate sources of bias by applying different types of repair algorithms
# the library offers implementations of repair algorithms described in the academic fairness literature
repair_pipeline = RepairResamplePipeline(steps=[ResampleUnbiasedSegmentsStep(ratio_resample=1)], random_seed=4)

debias_pipeline = DebiasPipeline(data_pipeline=pipeline_initial,
                                 model=clf_model,
                                 metrics=metrics_initial,
                                 identify_pipeline=identify_pipeline,
                                 repair_pipeline=repair_pipeline)
debias_pipeline.run()
```
IdentifyBiasSources is the type of identify pipeline you are using; for this test release it is the only one provided. Similarly, RepairResamplePipeline denotes the type of repair pipeline.
As a convention, anything that is a pipeline type is named <TypeOfPipeline>Pipeline.
The parameters for the identify pipeline available in this release are as follows:
  • group_def=['unsupervised'] - a pipeline method that looks for groups (i.e. segments of the dataset) with issues that could cause bias. In this test version we have released only one option, but our full package has several.
  • nr_groups - how many groups/segments you think your dataset could be split into. Experiment with a few different options depending on how large your dataset is.
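To give a feel for what an unsupervised group definition does (etiq's actual implementation may differ), here is a sketch that segments rows into nr_groups clusters using scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy numeric data with three obvious clumps of similar rows
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)),
               rng.normal(5, 1, (30, 2)),
               rng.normal(10, 1, (30, 2))])

# nr_groups plays the role of n_clusters here
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(sorted(set(segments)))  # [0, 1, 2]
```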
Remember, this is just one of the pipelines we provide, and arguably not our most interesting one. If you'd like to explore our other pipelines, get in touch with us: [email protected]
As with the data pipeline, when running the pipeline we get logs of how it has run:

```
INFO:etiq_core.pipeline.DebiasPipeline36:Starting pipeline
INFO:etiq_core.pipeline.DebiasPipeline36:Start Phase IdentifyPipeline844
INFO:etiq_core.pipeline.IdentifyPipeline844:Starting pipeline
INFO:etiq_core.pipeline.IdentifyPipeline844:Completed pipeline
INFO:etiq_core.pipeline.DebiasPipeline36:Completed Phase IdentifyPipeline844
INFO:etiq_core.pipeline.DebiasPipeline36:Start Phase RepairPipeline558
INFO:etiq_core.pipeline.RepairPipeline558:Starting pipeline
INFO:etiq_core.pipeline.RepairPipeline558:Completed pipeline
INFO:etiq_core.pipeline.DebiasPipeline36:Completed Phase RepairPipeline558
INFO:etiq_core.pipeline.DebiasPipeline36:Refitting model
INFO:etiq_core.pipeline.DebiasPipeline36:Computed metrics for the repaired dataset
INFO:etiq_core.pipeline.DebiasPipeline36:Completed pipeline
```
In the fairness literature, 'mitigation' is the preferred terminology, as these types of issues are hard to remove entirely. Our use of the terms repair and debias refers primarily to mitigation rather than removal.

Output methods

Now that you've checked the logs and the etiq pipeline has run, retrieve the outputs using the following methods:

Metrics

```python
debias_pipeline.get_protected_metrics()
```
Example output:

```python
{'DataPipeline502':
    [{'accuracy': ('privileged', 0.84, 'unprivileged', 0.93)},
     {'equal_opportunity': ('privileged', 0.6901408450704225, 'unprivileged', 0.55)}],
 'DebiasPipeline426':
    [{'accuracy': ('privileged', 0.82, 'unprivileged', 0.91)},
     {'equal_opportunity': ('privileged', 0.6539235412474849, 'unprivileged', 0.65)}]}
```
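Reading that dictionary, one quick way to compare the privileged/unprivileged gap before and after debiasing is sketched below. This is our own helper over the example output, not an etiq method:

```python
# The example output from get_protected_metrics(), pasted in as a literal
results = {
    'DataPipeline502':
        [{'accuracy': ('privileged', 0.84, 'unprivileged', 0.93)},
         {'equal_opportunity': ('privileged', 0.6901408450704225, 'unprivileged', 0.55)}],
    'DebiasPipeline426':
        [{'accuracy': ('privileged', 0.82, 'unprivileged', 0.91)},
         {'equal_opportunity': ('privileged', 0.6539235412474849, 'unprivileged', 0.65)}],
}

def metric_gap(pipeline_metrics, metric_name):
    # absolute privileged-vs-unprivileged difference for one metric
    for entry in pipeline_metrics:
        if metric_name in entry:
            _, priv, _, unpriv = entry[metric_name]
            return abs(priv - unpriv)

before = metric_gap(results['DataPipeline502'], 'equal_opportunity')
after = metric_gap(results['DebiasPipeline426'], 'equal_opportunity')
print(round(before, 3), round(after, 3))  # the gap shrinks after debiasing
```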

Issues found by the pipeline

Our library is intended for you to test your models and see if there are any issues. The pipeline surfaces potential issues; it is then up to you whether you consider them issues for your specific model. For more details on definitions, please see the Definitions tab.
```python
debias_pipeline.get_issues_summary()
```
Example output
To help make sense of the segments, we also provide a profiler method which gives you an idea of the rows found to have specific issues.
```python
debias_pipeline.get_profiler()
```
To understand more about the types of errors this pipeline finds, please use the following method; it will give you definitions and thresholds used.
```python
debias_pipeline.get_thresholds()
```
Stay tuned for our next release where you will be able to customize the definitions and thresholds.

Evaluate method

If you've just built a pipeline using a repair method and want to see whether the issues you identified before have been mitigated, use the evaluate method:
```python
evaluate_debias = EvaluateDebiasPipeline(debias_pipeline=debias_pipeline,
                                         identify_pipeline=identify_pipeline)
evaluate_debias.run()

evaluate_debias.get_issues_summary_before_repair()
evaluate_debias.get_issues_summary_after_repair()
```

Results Retrieval across Sessions

To see all your projects and pipelines from your notebook or IDE use the methods below:
```python
projects = get_all_projects()
projects
```
This should give you an output like the one below:
```
[<ETIQ:Project [1] Default Project>,
 <ETIQ:Project [2] TestAdult>]
```
If you've just opened a new session but want pipelines and debiasing pipelines logged as part of a project from a previous session, find that project's ID and set it as your current project:
```python
set_current_project(projects[1])
```
To see what pipelines are associated with the project, use the methods below:
```python
our_project.get_all_data_pipelines()

# get pipelines by type, using the pipeline type name, e.g. IdentifyBiasSources
our_project.get_all_pipelines_by_type(IdentifyBiasSources)
```