Dashboard Components
The main way to retrieve your results is via the dashboard.
The dashboard has two main components: a Snapshots List view and a Scans List view. A Project Overview is also being added and will be available in the next release.
Snapshots List View
This view allows you to quickly compare snapshots and see which is performing better. During production, it also helps you spot issues arising with the latest data feed. All the metrics and visuals are aggregations of the scan results:
Snapshot name and date/timestamp
% Passed - out of the scans run on the snapshot, how many found no issues (if a scan finds no issues, it passes; if it finds at least one issue, it fails)
Accuracy % - The value of the accuracy metric on the snapshot
Model Health - red/amber/green scale. This is currently pre-set based on the % of scans passed (red - 30% and below, amber - 30% to 70%, green - 70% and above; see the sketch after this list), though in future iterations we will give you the option to customise it
Accuracy, Data Leakage, Robustness, Drift, Sensitivity, Bias - each is the % of scans passed out of the scans run in the given area
Top issue found - the issue that impacts the biggest % of the sample. E.g. if an accuracy issue impacting the whole sample is found at the same time as a bias issue impacting only one segment, the top issue surfaced in this view would be the accuracy issue; you can check the details of all the other issues in the Scans detail view
Sample Impact % - for the top issue found, the % of the sample impacted
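To make the aggregation concrete, here is a minimal Python sketch of how these snapshot-level figures could be derived from raw scan results. The field names (issue_type, issues_found, sample_impact_pct) are illustrative assumptions, not the dashboard's actual schema, and the behaviour at exactly 30% and 70% is assumed.

```python
# Hypothetical raw scan results for one snapshot. Field names are
# illustrative only, not the product's actual schema.
scan_results = [
    {"issue_type": "accuracy", "issues_found": 1, "sample_impact_pct": 100.0},
    {"issue_type": "bias",     "issues_found": 1, "sample_impact_pct": 12.5},
    {"issue_type": "drift",    "issues_found": 0, "sample_impact_pct": 0.0},
]

# % Passed: a scan passes if it found no issues.
passed = sum(1 for s in scan_results if s["issues_found"] == 0)
pct_passed = 100 * passed / len(scan_results)

# Model Health: pre-set red/amber/green bands on % of scans passed.
# Boundary handling at exactly 30% and 70% is an assumption.
if pct_passed <= 30:
    health = "red"
elif pct_passed < 70:
    health = "amber"
else:
    health = "green"

# Top issue found: among failing scans, the issue impacting the
# largest % of the sample.
failing = [s for s in scan_results if s["issues_found"] > 0]
top = max(failing, key=lambda s: s["sample_impact_pct"], default=None)

print(pct_passed, health, top["issue_type"] if top else None)
# -> 33.33..., "amber", "accuracy"
```

In this example the accuracy issue impacts 100% of the sample while the bias issue impacts only 12.5%, so accuracy is surfaced as the top issue, matching the behaviour described above.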
Scans List View
For each snapshot, this provides a detailed view of the scans performed, broken down by each issue type tested as part of each scan. Fields and definitions are as per the previous view. Additional fields include details on the snapshot itself, as well as the following, modelled in the sketch after this list:
Issue type the scan tested for
Metric used
Threshold used by the scan
Parameters used by the scan, e.g. demographic feature
Whether the issue was found or not
No. of features an issue was found for, if any
No. of segments an issue was found for, if any
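As a rough illustration, the per-scan fields in this view could be modelled as the following Python dataclass. The class name, field names, and example values are hypothetical, not the product's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ScanDetail:
    """Illustrative record for one scan row in the Scans List view."""
    issue_type: str                 # issue type the scan tested for, e.g. "bias"
    metric: str                     # metric used
    threshold: float                # threshold used by the scan
    parameters: dict = field(default_factory=dict)  # e.g. demographic feature
    issue_found: bool = False       # whether the issue was found
    n_features_affected: int = 0    # no. of features an issue was found for
    n_segments_affected: int = 0    # no. of segments an issue was found for

# Hypothetical example row:
example = ScanDetail(
    issue_type="bias",
    metric="equal_opportunity",
    threshold=0.1,
    parameters={"demographic_feature": "gender"},
    issue_found=True,
    n_segments_affected=2,
)
```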
You can also use the Compare functionality to see which tests pass from one version of the model to another.
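Conceptually, Compare lines up scan outcomes across two snapshots and highlights where a previously passing test now fails. The following minimal sketch assumes hypothetical scan names and pass/fail labels; it is not the dashboard's implementation.

```python
# Hypothetical scan outcomes for two model versions.
v1 = {"accuracy_scan": "passed", "bias_scan": "passed", "drift_scan": "failed"}
v2 = {"accuracy_scan": "passed", "bias_scan": "failed", "drift_scan": "passed"}

# Walk every scan present in either version and flag regressions,
# i.e. scans that went from passed to failed.
for scan in sorted(set(v1) | set(v2)):
    before, after = v1.get(scan, "n/a"), v2.get(scan, "n/a")
    marker = " <- regression" if (before, after) == ("passed", "failed") else ""
    print(f"{scan}: {before} -> {after}{marker}")
```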
Instead of logging results to the dashboard, you can also log them to your usual toolkits or retrieve your results in your IDE, as shown in the available notebooks. If you want to use our API to integrate with other tools, just get in touch: info@etiq.ai