Artificial Intelligence (AI) technology is developing rapidly and becoming ever more present in our daily lives, from healthcare to finance and even public services. In response, the drive to ensure responsible AI has prompted significant changes in the regulatory landscape. In particular, the recent passage of the EU AI Act has increased the pressure on organizations to ascertain their legal obligations and ensure that they comply, especially since the EU AI Act applies even to organizations outside the EU in certain instances.
To meet this challenge, businesses and organizations deploying AI technology need internal AI governance frameworks that ensure ethical, safe, and legal AI deployment. AI governance is crucial to mitigating risks whilst maximizing benefits, by setting clear standards and lines of accountability.
What Is AI Governance?
AI Governance can be defined as
“a system of laws, policies, frameworks, practices and processes at international, national and organizational levels that helps various stakeholders implement, manage, oversee and regulate the development, deployment and use of AI technology, whilst managing risks, so the AI aligns with stakeholders’ objectives, is developed and used responsibly and ethically, and complies with applicable legal and regulatory requirements.” (IAPP, 2024)
Considering the vast number of AI laws being introduced, navigating the different legal instruments can leave businesses overwhelmed. This is where data privacy professionals can play a key role in organizations. They are already familiar with establishing and overseeing internal governance from a data protection perspective, and data is key to the functioning of AI systems. They are also used to working across departments and teams to ensure compliance, which places them in a good position to liaise with key stakeholders and secure their buy-in. Privacy professionals are also well suited to oversee AI governance because they typically sit outside day-to-day operations; close operational involvement often hampers the independence needed to ensure compliance effectively.
Here are some examples of the key role privacy professionals can play in AI governance:
- Establishing an AI governance framework: Privacy professionals can put in place several principles that will help their organizations establish AI governance and comply with legal obligations.
- Assessing risk: Privacy professionals are well-placed to assess the potential risks an AI system may pose, depending on where it is developed and deployed.
- Enabling systematic responsible AI testing: Privacy professionals can flag AI systems that require testing for fairness, transparency, accuracy, and safety.
- Checking ongoing compliance: Privacy professionals can help organizations stay up to date with legal developments and obligations, including the obligations to monitor and oversee AI systems and to implement required risk mitigations.
- Addressing the impact across an organization: Privacy professionals can help create and maintain an inventory of AI systems and identify who within the business is accountable for each of them (a minimal sketch of such an inventory follows this list).
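To illustrate the inventory point in the last bullet, here is a minimal sketch of what one entry in a machine-readable AI system register might look like. All field names and values are hypothetical assumptions for the example; an actual register would be shaped by your systems and the laws that apply to them.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an organization's AI system inventory (illustrative)."""
    name: str            # internal name of the system
    purpose: str         # what the system is used for
    owner: str           # accountable business owner
    vendor: str | None   # third-party provider, if any
    personal_data: bool  # whether the system processes personal data
    risk_notes: str      # working risk classification or notes
    last_reviewed: str   # date of the last governance review (ISO format)

# A hypothetical example entry.
inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Rank incoming job applications",
        owner="Head of HR",
        vendor="ExampleVendor Inc.",
        personal_data=True,
        risk_notes="Likely high-risk; employment-related use",
        last_reviewed="2025-01-15",
    ),
]
```

Even a lightweight structure like this makes it easier to answer the two questions above: which AI systems exist, and who is accountable for each.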
Why AI Governance Matters for Organizations
AI governance is essential because it allows businesses to promote:
- Ethical use and fairness
- Safety and risk mitigation
- Accountability and transparency
- Privacy protection, and
- Harm prevention.
Conversely, poor AI governance can lead to undesirable consequences such as fines, legal repercussions, and reputational damage.
Amongst the many high-profile cases, reference can be made to the multiple decisions taken against Clearview AI for unlawfully processing individuals’ data in breach of Article 6 of the GDPR. Clearview AI scraped images from social media without respecting data protection rights under Articles 12, 15, and 17 of the GDPR. The processing was carried out without the individuals’ consent or awareness, and their data was then shared with law enforcement agencies for surveillance purposes. In view of this, authorities in the EU, UK, and Canada took enforcement action: fines were imposed, and Clearview AI was ordered to desist from processing and to delete the individuals’ data. To highlight the extent to which AI governance should be treated with great care and attention, Clearview AI was fined €20 million by the French Supervisory Authority, €20 million by the Italian Supervisory Authority, and €34 million by the Dutch Data Protection Authority.
Key Elements of Effective AI Governance
Here are some key guiding principles you can use to create an effective AI governance framework.
- Transparency and Explainability: Transparency requires you to inform users that AI is being used, whilst explainability requires you to communicate the rationale and use of the AI system to users in a manner that is accessible and easy to understand.
- Fairness, Inclusion, and Equity: AI systems need to be trained well so that their use does not produce biased or discriminatory outcomes.
- Security and Safety: Risk assessments need to be carried out to identify any potential risks and mitigate them. You need to ensure that the AI systems are secure, meaning that they are safe from risks such as hacking and malware attacks as well as model poisoning. AI systems must also be safe for the parties interacting with them, including developers, deployers, and users.
- Human-centricity: The AI system should be of assistance and benefit to humans rather than exploiting their vulnerabilities.
- Privacy and Data Governance: AI systems should respect and comply with privacy and data protection legislation. Privacy principles, such as accountability and data minimization, should be incorporated into AI system design and use. Personal data must be processed lawfully and for specific purposes only. You should implement appropriate measures to remain compliant and to protect the quality and integrity of the data.
- Accountability and Integrity: It is important to have defined roles and responsibilities so that specific humans are held accountable. Oversight of AI systems must be in place and actively maintained. Any errors or unethical results need to be documented and corrected accordingly.
- Robustness and Reliability: AI systems need to be robust enough to deal with potential errors that may occur, and reliable enough to demonstrate consistently successful performance.
How to Create an Internal AI Governance Framework
Below is a high-level outline of the steps required to implement an internal AI governance framework.
STEP 1: DEFINE THE PURPOSE AND SCOPE OF AI GOVERNANCE
As a first step, it is important to reflect on why AI governance is important for your company. This will help you drive the initiative within your organization and obtain stakeholder buy-in. The motivations will most likely include legal compliance, risk management, and protecting your company’s reputation.
To determine the scope of your organization’s AI governance, you will need to identify which AI systems and processes are covered. You will also need to determine which laws apply.
STEP 2: ENSURE ACCOUNTABILITY
Accountability entails making individuals responsible for overseeing AI policies, monitoring compliance, and addressing ethical concerns. You can do this by identifying specific individuals responsible for developing, implementing, and monitoring AI governance, and by having policies in place that define who is responsible for which aspects of AI use, such as data privacy or risk management.
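To make the “who is responsible for what” point concrete, below is a minimal sketch of how such a responsibility map might be recorded. The duty names and role titles are hypothetical examples, not a prescribed allocation.

```python
# Hypothetical mapping of AI governance duties to accountable roles;
# both the duties and the role titles are illustrative.
RESPONSIBILITIES = {
    "ai_policy_ownership": "Chief Privacy Officer",
    "risk_assessments": "AI Governance Lead",
    "data_protection_review": "Data Protection Officer",
    "incident_response": "Head of Information Security",
}

def accountable_for(duty: str) -> str:
    """Return the role accountable for a governance duty, or flag a gap."""
    return RESPONSIBILITIES.get(duty, "Unassigned: escalate to governance board")

# Example: check who owns risk assessments.
print(accountable_for("risk_assessments"))
```

The useful property of writing the map down, in whatever form, is that gaps become visible: any duty without an owner is itself a governance finding.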
Companies also need to demonstrate compliance with their reporting obligations. In doing so, it is good practice to continuously monitor the development, deployment, and use of AI. Appointing experts and external advisors to provide feedback is a great way of demonstrating accountability in this regard.
STEP 3: DEVELOP AN AI POLICY
Companies should establish ethical guidelines comprising universal principles and values to ensure the ethical and responsible development and use of AI systems. These ethical considerations include fairness, transparency, accountability, and privacy.
In their use of AI, companies should:
- ensure that individuals’ data privacy rights are respected, with special attention to data processing activities concerning sensitive data;
- be transparent about the AI’s purpose, data collection, and processing activities;
- have procedures in place to identify and mitigate risks, such as any biases the AI system might present, in order to prevent discrimination and unfair results; and
- ensure ongoing compliance with data protection and AI laws.
STEP 4: ONGOING MONITORING AND ASSESSMENT
Companies should have procedures in place to make sure that they stay informed of new developments that might impact the development, implementation, and use of their AI systems. Furthermore, their AI systems need to be reviewed regularly to confirm that they still meet regulatory standards.
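As a simple illustration of the review point, here is a minimal sketch that flags systems overdue for a governance review. The 180-day interval is an assumption chosen for the example; an appropriate cadence depends on each system’s risk level and the legal requirements that apply to it.

```python
from datetime import date, timedelta

# Hypothetical review cadence; the right interval depends on the system's
# risk level and applicable legal requirements.
REVIEW_INTERVAL = timedelta(days=180)

def overdue_for_review(last_reviewed: date, today: date | None = None) -> bool:
    """Return True if the last governance review is older than the interval."""
    today = today or date.today()
    return today - last_reviewed > REVIEW_INTERVAL

# Example: a system last reviewed on 1 March 2024.
print(overdue_for_review(date(2024, 3, 1)))
```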
Whilst compliance obligations remain high, many organizations may encounter challenges when trying to implement an AI governance program. Here are some of the most common:
- Measurement of Risk: Using third-party software, hardware, and data can complicate risk measurement, since not all organizations use the same risk metrics. There is also no consensus on how to measure AI risk and trustworthiness. Furthermore, new risks are constantly emerging, and risks can change at different stages of AI development, making them more difficult to gauge.
- Risk Tolerance: The amount of risk an AI system poses, and how much of it can be tolerated, is highly dependent on the context, application, and use of the system. A case-by-case approach is needed to study all the relevant factors of the AI’s use and deployment. Additionally, risk tolerance depends on policies and norms that evolve over time and are established not only by AI owners and industries, but also by policymakers.
- Risk Prioritization: Attempting to eliminate all risks, or holding unrealistic expectations, can be counterproductive when deploying and using an AI system. Organizations should follow actionable risk management guidelines for assessing the trustworthiness of each AI system they develop and deploy. Risks must be prioritized and balancing exercises carried out, and organizations may face difficult tradeoff decisions. To take such decisions, a comprehensive risk assessment is needed, considering the AI’s trustworthiness characteristics and the perspectives of the different AI actors (a simple scoring sketch follows this list).
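As a simple illustration of risk prioritization, the sketch below scores risks on a likelihood-times-impact basis. The 1-to-5 scales and the priority thresholds are assumptions for this example, not a standardized risk methodology.

```python
# Illustrative likelihood x impact scoring; scales and thresholds are
# assumptions for this example, not a standardized methodology.
def risk_score(likelihood: int, impact: int) -> int:
    """Score a risk on 1-5 likelihood and 1-5 impact scales."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    return likelihood * impact

def priority(score: int) -> str:
    """Bucket a score into a priority band for triage."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: a likely (4) bias issue with serious impact (4) scores 16 -> "high".
print(priority(risk_score(4, 4)))
```

A scheme like this does not resolve the tradeoffs described above, but it forces them to be made explicitly and recorded consistently across systems.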
Takeaways
Effective AI governance will help ensure that you mitigate risks and remain compliant with applicable legislation in line with accountability, data protection, and transparency requirements. In the process, you will foster trust, attract and retain clients, and prevent harmful use of AI. On the other hand, poor AI governance can lead to undesirable consequences such as fines, legal repercussions, and reputational damage.
To help you achieve AI governance, VeraSafe can assist you in:
- Implementing responsible AI principles throughout the AI lifecycle,
- Conducting AI risk assessments,
- Carrying out ongoing AI monitoring and compliance, and
- Staying updated with AI legal developments.
VeraSafe can provide you with customized solutions that account for your industry’s specific nuances. Get in touch with us to explore the next steps for implementing effective AI governance in your organization.