It’s ubiquitous! Artificial intelligence!
Some view AI as the pinnacle of modern technological achievement, while others perceive it as a threat to jobs, education, and the rights and freedoms of human beings. One thing is certain: AI is here to stay, and we as privacy professionals have a critical role in how our organizations develop, deploy, and manage AI systems that process personal data.
In Part 1 of our AI Governance Insight Series, ‘Privacy Concerns: How Can Your Privacy Program Support AI Governance?’, we looked at the principles that govern a privacy program and how they play a critical role in an AI Governance Model. Now, let’s delve into practical recommendations for updating your privacy program to support AI Governance.
Key Recommendations to Support AI Governance
Below are high-level recommendations for adapting your current privacy program to support AI governance and address data privacy concerns:
Data Inventory/Data Maps:
- Create a data map at the outset of discussions for any data processing activity that may use AI technology.
- Update existing data maps to identify where data processing actions are using AI systems.
- Revise templates to document data flows into, through, and out of AI systems, capturing the legal basis for processing at each stage. Consider how the AI works and the possibility of new data sources, sharing, and storage.
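The data-map guidance above can be sketched as a simple record structure. This is an illustrative sketch only: the class and field names are assumptions, not a standard schema, and should be adapted to your organization's inventory tooling.

```python
from dataclasses import dataclass, field

# Illustrative sketch: field names and values are assumptions,
# not a standard data-inventory schema.

@dataclass
class AIDataFlowStage:
    stage: str               # "into", "through", or "out of" the AI system
    data_categories: list    # e.g., ["name", "ticket text"]
    legal_basis: str         # legal basis for processing at this stage
    recipients: list         # who receives the data at this stage

@dataclass
class AIDataMapEntry:
    activity: str            # the data processing activity
    ai_system: str           # the AI system involved
    data_sources: list       # where the data originates
    stages: list = field(default_factory=list)

# Example entry for a hypothetical processing activity
entry = AIDataMapEntry(
    activity="Customer support chat triage",
    ai_system="Third-party LLM service (hypothetical)",
    data_sources=["CRM", "support tickets"],
)
entry.stages.append(AIDataFlowStage(
    stage="into",
    data_categories=["name", "ticket text"],
    legal_basis="legitimate interests",
    recipients=["AI vendor"],
))
```

Recording one stage record per flow (into, through, out of the AI) makes it straightforward to verify that a legal basis has been captured at every point where data moves.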
Review and Update Your Privacy Program Policies:
As part of your organization's regular policy review cycle, ensure privacy governance and operational documents are revised to address AI, including, but not limited to:
- Online Privacy Notices
- Privacy Program Policy
- Data Subject Rights Policy
- Data Retention Policy
Implement AI Governance:
- Define and document the roles and responsibilities of the privacy office and data privacy professionals as key stakeholders in your organization's AI governance structure.
- Draft a playbook for developing and implementing Trustworthy AI.
Review and Update Your Change Management Process:
- Ensure the seven Privacy by Design (PbD) principles are considered for all new products and services using AI:
- Proactive not reactive
- Privacy as a default setting
- Privacy embedded into design
- Full functionality
- End-to-end security
- Visibility and transparency
- Respect for user privacy
Ensure these principles are understood and followed from the outset of development through deployment of AI systems.
- Review and update your PIA/DPIA processes to address additional risks AI use presents. Consider including additional risk assessment elements, e.g., algorithm impact assessments, model cards and datasheets (as proposed by the machine learning community) as part of your DPIA. Ensure there is a defined escalation process to review identified risks and approve the steps taken to mitigate these.
- Consider using synthetic data when developing, testing, and validating AI systems. Note that creating synthetic data from real personal data is itself processing, and regulatory compliance is still required.
- Ensure that the risk assessment is undertaken by a cross-disciplinary team, e.g., privacy, legal, data engineering, information security.
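The synthetic-data recommendation above can be illustrated with a minimal sketch that generates fully synthetic test records for developing and validating an AI system. All field names and values here are illustrative assumptions; real projects typically use dedicated synthetic-data tooling. As noted above, if synthetic data is derived from real personal data, that derivation is itself processing and must still comply with applicable regulation.

```python
import random

# Minimal sketch: generate fully synthetic records (not derived from
# real personal data) for testing an AI pipeline. Field names are
# illustrative assumptions.

random.seed(42)  # reproducible test fixtures

FIRST_NAMES = ["Alex", "Sam", "Jordan", "Casey"]
DOMAINS = ["example.com", "example.org"]

def synthetic_record(record_id: int) -> dict:
    """Return one synthetic customer record with no real PII."""
    name = random.choice(FIRST_NAMES)
    return {
        "id": record_id,
        "name": name,
        "email": f"{name.lower()}{record_id}@{random.choice(DOMAINS)}",
        "age": random.randint(18, 90),
    }

# A synthetic test set for model development and validation
test_set = [synthetic_record(i) for i in range(100)]
```

Because these records are generated from scratch rather than transformed from real customer data, they can be shared more freely with development and testing teams, though governance review is still advisable.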
Review and Update Your Third Party Risk Management (TPRM) Process:
- Review contracts (upstream and downstream) to ensure they address AI use by third parties.
- Revise risk assessment questionnaires so your organization can identify and act on the risks presented by third parties' use of AI.
Training:
- Revise internal training for employees who process personal information to cover the risks of processing personal data with AI and the requirements to address when considering AI use.
- Identify and provide regular training to individuals involved in developing, deploying, monitoring, and maintaining AI systems.
Conclusion
The potential benefits of utilizing AI technologies in business are significant, but they are not without risk. Developing and deploying the guardrails outlined in this article will take time, but it is ultimately necessary from a regulatory standpoint and desirable for best business practice and customer trust. Bringing your program's compliance up to the standards that will be expected will require up-front investment, resources, and support. Our experienced Myna team can bring together the cross-functional skills of data privacy, data governance, security, and privacy engineering to help you implement and mature your privacy and AI programs.
For information on how Myna can help you with any or all of these requirements, please contact Wills Catling, Director at Myna Partners, at william.catling@levelupconsult.com, and we'd be happy to set up a consultation to hear about your program needs.