Opinion

Zooming in on AI - #8: Balancing innovation and compliance - how governance can foster responsible AI

AI-driven technology has emerged as a cornerstone of our present and future daily lives, revolutionising the way transactions and interactions are organised.

With the increased use of AI systems, there is also an increased need for risk awareness and mitigation strategies. Whether an organisation uses off-the-shelf or highly customised AI systems, the success of their development and use will largely be determined by how those systems are governed. Effective governance ensures that AI systems are not only efficient and innovative but also secure and ethically sound.

What is AI governance?

Governance encompasses the frameworks, processes, policies, and tools that guide the research, development, deployment, use, and management of AI in a safe and responsible manner. Its primary aim is to maximise the benefits of AI while preventing potential harm. By serving as guardrails, governance ensures that organisations can align the development and use of AI systems with business, legal, and ethical requirements throughout every stage of the AI lifecycle. 

For governance to be truly effective, it must be a multidisciplinary task. This involves the collaboration of a diverse range of stakeholders from various fields, including AI developers and engineers, end-users, policymakers, legal and compliance experts, and other business teams. This collective effort ensures that all perspectives are considered, leading to more robust and comprehensive governance practices.

Why should we care about AI governance?

AI has a rapidly growing presence in all types of sectors, including healthcare, finance, logistics, retail, education and public services. The capabilities of AI are astonishing, driving efficiency, productivity, and innovation across these fields. However, the rise of AI also introduces new challenges related to accountability and ethics. AI has the potential to cause significant ethical, social, or economic harm to individuals or organisations, and flawed AI systems can cause irreparable damage with serious consequences. 

To address these risks, a robust AI governance structure is essential. Such a structure aims to prevent and mitigate potential harms, striking a balance between maximising opportunities and innovation on one hand and ensuring safety and ethical standards on the other. As machine learning algorithms are trained and deployed to make decisions, governance is crucial for monitoring and mitigating outcomes that may be unfair or unjust. Additionally, effective AI governance helps organisations comply with legal and regulatory requirements, as more governments are enacting AI regulations. 

A well-designed governance system also fosters trust in AI systems and, by extension, in the organisations that use them. This trust can help avoid reputational damage and mitigate economic and financial risks. The governance framework must be flexible enough to evolve alongside AI technology while maintaining a sufficient level of standardisation and process. 

What should AI governance focus on? 

There is no universal agreement on the exact processes and policies that should constitute an AI governance model. A "one size fits all" approach is not feasible in this context. Instead, a successful AI governance system must be customised to align with the specific goals of the AI system within an organisation. By defining these goals, the organisation can then determine the appropriate measurements and actions that will shape its governance system. 

Despite the need for customisation, there are several fundamental components that typically form part of the core of an AI governance model:

  • Data privacy: As AI systems require large datasets to function effectively, protecting personal data and avoiding privacy violations is an essential element of developing or using AI systems. Organisations must adopt privacy-preserving techniques, including privacy by design, especially when handling sensitive data.
  • Security and safety: To ensure that AI systems are safe and trustworthy, maintaining the confidentiality and integrity of the (training) data and the AI system itself is key. Implementing robust security measures is essential to prevent cyberattacks and safeguard the AI system.
  • Bias mitigation and fairness: Preventing (human) biases and discrimination from entering AI systems is another core concern. Data quality plays a vital role in achieving fair and unbiased decision-making: any biases inherent in the training and input data can creep into the AI system’s decision-making process. Additionally, a diverse development team and varied data sampling methodologies can help avoid bias and discrimination in AI-driven decision-making (a minimal illustration of a fairness check follows this list).
  • Transparency and explainability: Clarity about how an AI system is developed and how it makes decisions is key. It helps organisations explain AI-driven outcomes, identify biases and enhance accountability. It also fosters trust in the AI system and, consequently, in the organisation itself.
  • Accountability: The attribution of responsibility for any negative consequences resulting from the use of AI technology is essential. Establishing a governance framework that clearly defines responsibility will foster trust and integrity in the use of AI systems.
  • Regulatory compliance: Compliance with AI regulations across jurisdictions is critical to mitigate regulatory risk. Relevant regimes include the EU AI Act, US state AI laws, the proposed Artificial Intelligence and Data Act in Canada, and China’s Interim Measures for the Management of Generative Artificial Intelligence Services. Adhering to these regulations is vital for organisations to avoid legal and reputational risks. 
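To make the bias-mitigation component more concrete, below is a minimal sketch of one common fairness check: the demographic parity gap, i.e. the difference in approval rates between groups defined by a protected attribute. Everything here (the function name, the groups, the sample decision log) is a hypothetical illustration, not a method prescribed by any regulation or framework discussed in this article.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, where approved is 0 or 1.
    Returns the largest gap in approval rates between groups, plus the rates."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical decision log: (protected group, loan approved?)
    sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap, rates = demographic_parity_gap(sample)
    print(f"approval rate per group: {rates}")   # A: ~0.67, B: ~0.33
    print(f"demographic parity gap:  {gap:.2f}")  # 0.33; compare to a policy threshold
```

In practice, a check like this would run as part of regular monitoring, with the acceptable gap set as an explicit policy threshold rather than left to the development team's discretion.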

Many governments and organisations have issued guidance on such governance topics related to AI. Notable examples include the AI Principles of the OECD, the EU Ethics Guidelines for Trustworthy AI, UNESCO's Recommendation on the Ethics of AI, the G7 guiding principles on AI, Singapore’s Model AI Governance Framework, the National Institute of Standards and Technology's AI Risk Management Framework and the International Organization for Standardization's AI Standards.

How can AI governance be applied in practice?

Successful governance is a continuous process that requires constant attention and adaptation. AI systems change constantly, and shifting outputs can cause severe financial, legal or reputational damage. It is in the organisation’s interest that these outputs do not undermine its credibility.

To strengthen governance practices, specific actions can be taken within an organisation. Such actions may include:

  • Involve stakeholders: These can be both external and internal stakeholders, including developers, employees, regulators, end users, investors, and others. Clear and effective communication with each stakeholder ensures transparency and builds trust in the organisation's AI practices.
  • Determine the appropriate level of human involvement: This means deciding the extent of human oversight required. Depending on the AI system’s objectives, it may be necessary to have a human in the loop to override AI-driven decisions; in other cases, minimal human intervention might be more effective in achieving the system's goals ('human out of the loop'). A simple sketch of such a decision gate follows this list.
  • Set up the necessary policies and procedures: This includes formulating policy standards and operational procedures, which can encompass an AI ethics framework to guide responsible AI use by employees. Additionally, a liability framework should be included to determine accountability in case of negative consequences resulting from AI-driven decisions.
  • Include contractual protections: Incorporating contractual provisions is important, especially when involving a third party for the development or deployment of the AI system. Such protections can include specific warranties and indemnification obligations, as well as third-party due diligence assessments to identify possible external risks.
  • Establish a corporate (AI) ethics board or an AI governance committee: Such a dedicated board or committee can oversee AI initiatives and ensure compliance with the organisation’s standards and values. These bodies typically consist of cross-functional teams with legal, technical, and policy expertise.
  • Create an organisation-wide AI culture: Fostering an AI culture can be achieved by providing advanced training and awareness programmes for employees and staff involved in the AI lifecycle. Focusing on culture, people and their education ensures a successful and responsible use of AI.
  • Conduct comprehensive audits of the AI systems: Such audits are necessary to assess the AI system’s use, identify potential risks or concerns and ensure alignment with AI policies and principles. Any improvement actions resulting from an audit must be implemented and monitored.
  • Establish AI governance metrics (KPIs): What gets measured gets done. AI governance KPIs are crucial for maintaining oversight, control and accountability over the use of the AI system. Effective AI KPIs are specific and measurable and balance quantitative and qualitative assessments.
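As a rough illustration of the 'human in the loop' option described above, the sketch below routes low-confidence AI decisions to a human reviewer while automatically applying high-confidence ones. This is a minimal sketch under assumed conventions: the threshold value, the field names and the review queue are hypothetical policy choices, not a prescribed design.

```python
from dataclasses import dataclass

# Example policy value; in practice this threshold would be set and
# periodically reviewed by the governance committee.
HUMAN_REVIEW_THRESHOLD = 0.85

@dataclass
class Decision:
    case_id: str
    outcome: str       # the AI system's proposed outcome
    confidence: float  # the system's confidence in that outcome

def route(decision: Decision, review_queue: list) -> str:
    """Auto-apply high-confidence outcomes; escalate the rest to a human."""
    if decision.confidence >= HUMAN_REVIEW_THRESHOLD:
        return f"auto-applied: {decision.outcome}"
    review_queue.append(decision)  # a human must approve or override
    return "escalated to human reviewer"

if __name__ == "__main__":
    queue = []
    print(route(Decision("loan-001", "approve", 0.97), queue))  # auto-applied
    print(route(Decision("loan-002", "deny", 0.62), queue))     # escalated
    print("pending human review:", [d.case_id for d in queue])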

AI governance in the boardroom?

AI governance is an integral component of IT governance, which in turn is a crucial aspect of corporate governance. As such, it is essential for AI governance to be a priority for the board of directors. The extent to which the board is involved in the day-to-day governance of AI can vary based on the organisation's size, the complexity of the AI systems, and the specific goals those systems are designed to achieve.

This does not imply that every board member must be an AI specialist. However, it is important for individual board members to have a solid understanding of how AI systems are utilised within the organisation and the impact these systems have. This knowledge is vital for making informed strategic decisions. Board members should be well-educated in the organisation's AI policies, the risks associated with these systems, and the potential internal and external consequences. This understanding enables them to evaluate whether the AI initiatives align with both current and future business objectives.

Furthermore, discussions related to AI should be thoroughly documented in the meeting minutes. This practice ensures that there is a clear record of responsibility and accountability regarding AI governance decisions.

Conclusion 

The necessity for a robust yet flexible AI governance framework has never been more critical. Governance is an integral component of an organisation’s AI ecosystem, ensuring the responsible use of AI. It should be a priority for every policymaker, executive, director, employee, and user of AI.

Organisations that prioritise AI governance with an emphasis on responsible and ethical AI will be better positioned for long-term success. In this context, organisations must go beyond merely establishing their AI governance framework. They need to continuously monitor and measure the effectiveness of these programmes to ensure they function as intended.