Action: Have measures in place to ensure personal data is used in a safe and trusted manner within the AI system

Details of implementation:
- Measures to achieve fairness and reasonableness in decisions during the development and testing stages, e.g. measures relating to bias assessment, ensuring the quality of training data, or the repeatability of results using personal data;
- Safeguards to protect personal data, e.g. pseudonymisation and data minimisation during model development and testing, and ensuring the security of AI Systems before and after their deployment; and
- Where outcomes may have a higher impact on individuals, organisations may also consider providing information on how accountability mechanisms and human agency and oversight have been implemented.
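One way the bias assessment mentioned above is often operationalised is a demographic parity check: comparing positive-outcome rates across groups defined by a sensitive attribute. A minimal sketch follows; the function name, the binary-prediction assumption, and the single sensitive attribute are all illustrative and not prescribed by the guidance.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Illustrative bias check: difference in positive-outcome rates
    between demographic groups. A large gap may warrant further
    fairness review before deployment (threshold is a policy choice)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: group A receives positive outcomes far more often than B.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
```

In practice the same comparison can be repeated for other fairness metrics (e.g. equalised odds), and the results recorded as evidence for the testing stage.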
Action: Provide information on measures to ensure data quality during AI system development

Details of implementation:
Such information includes:
- Steps taken to ensure the quality of personal data in the training dataset (e.g. how representative and how recent it is) to improve model accuracy and performance;
- Whether pseudonymised data was used during model development, or what safeguards were adopted if personal data was used instead;
- Whether it was necessary to use personal data when conducting bias assessments on training datasets;
- If personal data was used, the process or safeguards adopted to secure the testing environment; and
- Whether data minimisation was practised at all stages of model and/or AI System development and testing.
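The pseudonymisation referred to above can be sketched as replacing direct identifiers with keyed hashes before data enters the development environment, so that records remain linkable for training purposes but cannot be re-identified without the key. This is a minimal illustration; the field names, key handling, and truncation length are assumptions, not requirements of the guidance.

```python
import hashlib
import hmac

def pseudonymise(record, id_fields, key):
    """Replace direct identifiers with keyed hashes (HMAC-SHA256) so the
    same individual maps to the same pseudonym across the dataset, while
    the mapping cannot be reversed without the secret key."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(key, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated for readability
    return out

key = b"example-secret-key"  # illustrative; manage real keys via a secrets store
rec = {"national_id": "S1234567A", "age": 34, "postcode": "534321"}
pseudo = pseudonymise(rec, id_fields=["national_id"], key=key)
```

Keeping the key outside the development environment supports data minimisation: developers see only pseudonyms, while the organisation retains the ability to manage the source data separately.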
Action: Make clear and concise information about such policies and practices available to individuals

Details of implementation:
- Make the policy publicly available through the organisation's website, rather than only upon request; and
- If the organisation relies on exceptions to consent (e.g. the Business Improvement and Research Exceptions), provide information about the practices and safeguards adopted to protect the interests of individuals.
Action: Perform self-assessments to assess compliance with AI governance principles and make improvements based on the results

Details of implementation:
- Review the Model AI Governance Framework to understand the principles and put in place the recommended steps; perform self-assessments with the Implementation and Self-Assessment Guide for Organisations; and
- Use technical tools such as AI Verify, where information from the testing report can be included in notifications or written policies, e.g. results of explainability testing can be used to identify the data features that are most likely to influence the recommendation, prediction or decision.
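One common technique behind the explainability testing described above is permutation importance: shuffle one feature at a time and measure how much the model's score degrades, so the features whose shuffling hurts most are the ones most influencing the output. The sketch below is a self-contained illustration of that idea, not AI Verify's actual implementation; the model, data, and metric are toy assumptions.

```python
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Shuffle each feature column and record the average drop in the
    model's score; larger drops indicate more influential features."""
    rng = random.Random(seed)
    base = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(y, [model(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy model that depends only on feature 0, so feature 1 should score ~0.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
accuracy = lambda truth, preds: sum(a == b for a, b in zip(truth, preds)) / len(truth)
imps = permutation_importance(model, X, y, accuracy)
```

Importance scores like these can be summarised in plain language in notifications or written policies, e.g. naming the data features that most influenced a recommendation.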