Custom Tests
You have multiple ways to customize your tests. You can choose the scan types, scan metrics and thresholds, and you can use the lower-level API to customize the testing pipelines.
Additionally, you can add your own custom metrics: include them in your config and your scans will check for them as well. For a notebook and config example, check our GitHub repo.
At the moment the decorators you can use to build your custom metric are as follows:
  • prediction_values (refers to what the model scores; should be a list)
  • actual_values (refers to the actuals; if your custom metric is for production and no actuals or model are available, the score is used as the actual when provided; should be a list)
  • protected_values (refers to the demographic variable you want to check for bias; if you have multiple demographics, please create a feature with the intersection)
  • positive_outcome (directional; refers to what is considered a positive prediction or outcome, e.g. for a lending model a low risk score or the customer being accepted for the loan; should be a value)
  • negative_outcome (directional; refers to what is considered a negative prediction or outcome, e.g. for a lending model a high risk score or the customer being rejected for the loan; should be a value)
  • privileged_class (refers to the class in the demographic which is privileged, i.e. not protected by the legislation; should be a value)
  • unprivileged_class (refers to the class in the demographic which is not privileged and which is protected by the legislation; should be a value; in future releases we will add functionality for multiple values here)
They follow the parameters available in the config file.
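For illustration, here is a minimal sketch of a custom bias metric that combines several of these decorators. It assumes the other decorators live in the etiq namespace and take a parameter name as a string, just like prediction_values does in the Gini example further down; the metric itself (an absolute demographic parity difference) and all parameter names are hypothetical:

import etiq

@etiq.metrics.bias_metric
@etiq.custom_metric
@etiq.prediction_values('predictions')
@etiq.protected_values('protected')
@etiq.privileged_class('privileged')
@etiq.unprivileged_class('unprivileged')
@etiq.positive_outcome('positive_outcome')
def parity_difference(predictions, protected, privileged, unprivileged, positive_outcome):
    # Positive-prediction rate for each demographic class (hypothetical metric logic)
    priv = [p for p, g in zip(predictions, protected) if g == privileged]
    unpriv = [p for p, g in zip(predictions, protected) if g == unprivileged]
    priv_rate = sum(1 for p in priv if p == positive_outcome) / max(len(priv), 1)
    unpriv_rate = sum(1 for p in unpriv if p == positive_outcome) / max(len(unpriv), 1)
    return abs(priv_rate - unpriv_rate)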
@etiq.metrics.accuracy_metric - refers to logging your metric as an accuracy metric (right now you can log it either as an accuracy or a bias metric, with drift pending)
@etiq.custom_metric - specifies that this is a custom metric
Below is an example of how to add a custom metric (in this example we're adding a Gini index metric to the bias metric scan suite):
from collections import Counter

import etiq

@etiq.metrics.bias_metric
@etiq.custom_metric
@etiq.prediction_values('predictions')
def gini_index(predictions):
    # Gini impurity over the distribution of predicted classes
    class_counts = Counter(predictions)
    num_values = len(predictions)
    sum_probs = 0.0
    for aclass in class_counts:
        sum_probs += (class_counts[aclass] / num_values) ** 2
    return 1.0 - sum_probs
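To see what this metric computes, the same calculation can be run standalone on a toy list of predictions (illustrative only; no Etiq decorators involved):

from collections import Counter

def gini(predictions):
    counts = Counter(predictions)
    n = len(predictions)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini([1, 1, 1, 0]))  # 1 - (0.75**2 + 0.25**2) = 0.375
print(gini([1, 0, 1, 0]))  # maximally mixed two-class case: 0.5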
Choose the area of metrics you want to include your metric in; at the moment you can add a custom metric either to accuracy_metrics or to bias_metrics.
Afterwards, don't forget to update your config file with the metric name and the thresholds you want before you run your scan.
{
  "dataset": {
    "label": "income",
    "bias_params": {
      "protected": "gender",
      "privileged": 1,
      "unprivileged": 0,
      "positive_outcome_label": 1,
      "negative_outcome_label": 0
    },
    "train_valid_test_splits": [0.0, 1.0, 0.0]
  },
  "scan_accuracy_metrics": {
    "thresholds": {
      "accuracy": [0.7, 0.9],
      "true_pos_rate": [0.75, 1.0],
      "true_neg_rate": [0.7, 1.0]
    }
  },
  "scan_bias_metrics": {
    "thresholds": {
      "equal_opportunity": [0.0, 0.2],
      "demographic_parity": [0.0, 0.2],
      "equal_odds_tnr": [0.0, 0.2],
      "equal_odds_tpr": [0.0, 0.2],
      "individual_fairness": [0.0, 0.2],
      "gini_index": [0.3, 0.4]
    }
  }
}
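As a quick illustration of how the thresholds are read (assuming, as the pairs above suggest, that each threshold is a [lower, upper] acceptable range), you can inspect the config with plain Python:

import json

with open("config.json") as f:  # the config shown above
    config = json.load(f)

low, high = config["scan_bias_metrics"]["thresholds"]["gini_index"]
print(f"gini_index passes the bias scan if it falls within [{low}, {high}]")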
At the moment, you can add custom metrics to bias-type scans and accuracy-type scans. Stay tuned for future releases, where we'll include drift.