Quickstart
The Etiq library supports Python versions 3.8, 3.9, 3.10, 3.11 and 3.12 on Windows, Mac and Linux.
With the release of Etiq 1.6.0, the package is now compatible with Apple Silicon processors. Due to dependencies in the package you may need to install libomp via Homebrew:
If you haven't already, install Homebrew on your computer: https://brew.sh/
Then run the following in your terminal: brew install libomp
If you're looking to use our Great Expectations integration, please ensure you install great-expectations <= 0.18.19, as breaking changes were introduced in Great Expectations v1.0.0.
If you have any questions please contact us at info@etiq.ai
To start with, go to the dashboard site, sign up and log in. If you want to deploy directly on your AWS instance, just go to our AWS Marketplace listing and deploy from there (note that using Etiq via AWS Marketplace incurs a cost).
If you have purchased version 1.2 via AWS Marketplace please go to this section of the docs.
To start logging tests from your notebook or other IDE to your dashboard, you will need a token to associate your session with your account. To create this token, once in your account, go to the Token Management window and click Add New Access Token. Then copy and paste the token into your notebook.
Download and install Etiq:
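For example, from your terminal (assuming the package's standard PyPI name):

```
pip install etiq
```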
For install considerations please go to this section.
Then import it in your IDE and log to the dashboard:
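A minimal sketch of connecting your session, assuming a login helper that takes the dashboard URL and your token (an assumption; check the API reference for the exact call in your version):

```python
import etiq

# Associate this session with your dashboard account using the access
# token created under Token Management (illustrative call and URL).
etiq.login("https://dashboard.etiq.ai/", "<your-access-token>")
```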
Go to an example notebook or keep reading to get an understanding of the key concepts used in the tool. Don't leave your token lying around: treat it like a username/password, because anyone who finds it can use it to retrieve the information stored about your pipelines.
Data about your test results is stored on Etiq's AWS instance. However, your datasets and models themselves are never stored, so you can rest assured.
If your security set-up requires a deployment entirely on your own cloud instance or on-prem, just get in touch with us at info@etiq.ai.
A project is a collection of snapshots. To start using the versioning and dashboard functionality, create a project and give it a name. You only have to do this once per session, and all the details logged as part of your data pipelines or debias pipelines will be stored under it. Once you go to your dashboard you will be able to see each of your projects and dig deeper into each of them.
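For example (the project helper below is an assumption about the client API; see the Project Key Concept for the exact call):

```python
# Illustrative: open (or create) a named project that your snapshots
# and test results will be logged under.
project = etiq.projects.open(name="Quickstart")
```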
This step is just about logging the relevant information so you can run your tests/scans afterwards. You will need to log your model, your training and test datasets, and the config file which defines key parameters, such as which feature is the predicted one, which features are categorical or continuous, etc.
Depending on which stage of the model build/production process you are at and what type of scans you are running, you will want to log differently:
If you are using Etiq's wrapper around model classes, then essentially you log as you train. You can input your entire dataset (in an appropriate format, e.g. already encoded or already transformed), and in the config you can specify the proportions of the train/validation/test split, e.g. "train_valid_test_splits": [0.8, 0.1, 0.1]
If you have already built your model, then you will need to log a hold-out sample to Etiq as your dataset, and this sample will need to be in a format appropriate for being scored by your scoring function. When you log the split in the config, reflect the fact that your hold-out sample is a validation sample: "train_valid_test_splits": [0.0, 1.0, 0.0]. You can also use this set-up for production-type use cases.
First you will need to load your config file. This file contains parameters which are used when logging the rest of the elements, so make sure you load it before you create your snapshot.
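A sketch of the global approach (the function name is an assumption; see the Config Key Concept for the exact call):

```python
# Illustrative: load the config once and use it for the rest of the session.
etiq.load_config("./config_demo.json")
```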
You can also load your config file in the way shown below. However, we prefer the "modern" way shown above, because a config loaded as below only applies within the with block, whereas the global example above persists until it is overridden.
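A sketch of the scoped approach (the context-manager name is an assumption; check the Config Key Concept):

```python
# Illustrative: this config only applies inside the `with` block.
with etiq.etiq_config("./config_demo.json"):
    # log datasets, models and snapshots here using this config
    ...
```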
Example configs are provided here and also below. For details on what to log to the config, check the Config Key Concept. For details on how to adjust the config for different scan types, check Accuracy, Leakage, Drift, Bias or the relevant notebooks by scan type here.
For example notebooks and config files, just go to our demo repository.
Next, you will log your dataset and your model. For your dataset, please log the test dataset that you used to assess your model. (There are two scans for which your training dataset is needed instead: scan_bias_sources and scan_leakage; for more details see Scan Types.)
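As an illustration, logging a hold-out/test dataset might look like the sketch below (the builder class and its arguments are assumptions; see the Dataset Key Concept and the demo notebooks for the exact API):

```python
import pandas as pd

# Illustrative: the hold-out/test dataset used to assess your model,
# already encoded/transformed so that your model can score it.
test_df = pd.read_csv("test_holdout.csv")

# Builder name is an assumption; a bias-aware dataset also picks up the
# protected/demographic feature defined in your config.
dataset = etiq.BiasDatasetBuilder.dataset(test_df)
```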
If your dataset is not in a format your model can score, the scan will not run!
If you have a use case where you can use the demographic feature in your training dataset, you have the option to leave it in using this clause in the config:
"remove_protected_from_features": false
The default is that the demographic feature is removed before scoring. This is because in regulated use cases you shouldn't train your model on the demographic/protected feature, but the scan still needs information about the demographic if you want to run bias scans.
The 'model_architecture' parameter refers to the model architecture and is optional; the 'model_fitted' parameter refers to the fitted model, however you store it. You can also specify the 'model_fitted' parameter only. A sketch of what this can look like is shown below.
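A minimal sketch of logging a model, assuming a wrapper class along these lines (the etiq.Model name and its parameters are assumptions; check the Model Key Concept):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data purely for illustration; use your own training set.
X_train = np.random.rand(200, 4)
y_train = (X_train[:, 0] > 0.5).astype(int)

# 'model_architecture' is the (unfitted) estimator; 'model_fitted' is the
# trained object you already have, however you store it.
architecture = LogisticRegression()
fitted_model = LogisticRegression().fit(X_train, y_train)

# Wrapper name is an assumption; you can also pass 'model_fitted' only.
model = etiq.Model(model_architecture=architecture, model_fitted=fitted_model)
```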
And create your snapshot:
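For example (method and parameter names are assumptions; see the Snapshot Key Concept):

```python
# Illustrative: a snapshot ties together the config, dataset and model
# you have just logged, and is what the scans run against.
snapshot = project.snapshots.create(
    name="Quickstart snapshot",
    dataset=dataset,
    model=model,
)
```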
Now you are ready to run scans on your snapshot:
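For example (the scan method names below mirror the scan types mentioned in this guide; exact names may vary by version, so check Scan Types):

```python
# Illustrative: run scans against the snapshot; the results are logged
# to the project on your dashboard.
snapshot.scan_accuracy_metrics()
snapshot.scan_bias_metrics()
snapshot.scan_drift_metrics()
```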
The above is an example using an already trained model in pre-production. For a full notebook on this go here.
If you want to use one of Etiq's pre-configured model classes see an example here.
If you want to use the scans in production, just email us at info@etiq.ai. A demo integration with Airflow will be available shortly.
Threshold values in the example config files are for example purposes. Different use cases will require different thresholds. As the AI regulation sector matures we will add corresponding standards and suggested thresholds, but this will never be a hard and fast rule, it will be a suggestion. What might work for one use case will not work for another.
You have the option to add the categorical and continuous features in your config, as per the example below. This is useful for certain types of scans which translate the findings into business rules, but remember to update your config whenever you remove or add features.
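A sketch of what this could look like. Apart from "train_valid_test_splits" and "remove_protected_from_features", which are quoted elsewhere in this guide, the key names, nesting and example values are illustrative; copy the exact schema from the example configs in the demo repository:

```json
{
  "dataset": {
    "label": "income",
    "protected": "gender",
    "remove_protected_from_features": true,
    "train_valid_test_splits": [0.0, 1.0, 0.0],
    "cat_col": ["workclass", "education", "relationship"],
    "cont_col": ["age", "hours-per-week"]
  }
}
```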
If you do not want to scan for bias, or your dataset does not contain information about protected features, you can simply leave that information out of your config. An example config for a data drift use case is below. For more details on this example check the github repo and/or the Drift section.
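A sketch of a drift-style config without any protected-feature keys (again, key names and values are illustrative; see the github repo for a working example):

```json
{
  "dataset": {
    "train_valid_test_splits": [0.0, 1.0, 0.0],
    "cat_col": ["product_type", "region"],
    "cont_col": ["age", "balance", "transaction_count"]
  }
}
```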
Exciting news: Etiq for Spark is now also available. The data and drift tests you know and love, applied to more data than ever before. To install and import it, just run the below:
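A purely hypothetical sketch of what the install and import might look like; the package extra and module path below are assumptions, so take the real commands from the Etiq for Spark section of the docs:

```python
# Hypothetical install command (run in your terminal, not in Python):
#   pip install "etiq[spark]"

# Hypothetical import path; the actual module name may differ.
import etiq.spark
```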
Scans like bias sources and leakage run tests on the training dataset. For more details on how to run these scans, go to their corresponding sections: Leakage and Bias.