Opinion

Data governance: Strategic convergence, opportunities and risks

Published Date
Sep 11 2024
AI is accelerating digital transformation for companies, and data governance is a key pillar of this change, enabling data strategies that unlock the potential of AI and mitigate the risks associated with its use. Data strategies focused on all forms of data, not just personal data, can help companies leverage data to unlock innovation, competitive advantage, productivity and efficiency. In the last decade, we have seen a rising focus on data as an asset, and rapid AI advancements have significantly accelerated this.

To exploit the opportunity, effective governance will require collaboration between a wide range of business functions – technology, data science, legal, risk, ethics, compliance, security and more. At the heart of the opportunities and risks lie key questions about long-term programs for joined-up accountability – to demonstrate a responsible and compliant approach to data governance and to leverage it to improve data standards and quality.

There is a risk that organizations maintain a siloed approach to data governance, with inconsistent standards and approaches to assessing risks and finding solutions. Working with the right partners to exploit data opportunities and effectively manage risks across the digital supply chain is also a crucial component of data governance.

Companies are starting to address the following key questions:

  • Who should be accountable for AI?
  • Which governance mechanisms should we consider when developing and deploying AI?
  • How should data and AI risks be strategically positioned within wider governance frameworks?
  • What is the future of the Chief Privacy Officer (CPO) role?
  • How can Key Performance Indicators (KPIs) be used to assess the value of data governance and its impact?

In this blog, we explore the drivers behind these key questions and how companies are changing their governance in response.

Wider digital regulation will drive convergence in data governance 

Policymakers, businesses and legislators are now looking beyond personal data, seeing all data collected and generated by businesses as a market differentiator and a key factor for economic and productivity growth. This is evidenced in the EU Data Act, the EU Data Governance Act and the UK Digital Information and Smart Data Bill – legislation that seeks to improve the sharing of datasets and the use of interoperable, re-usable formats, enabling data aggregation and analysis and improving the usability, delivery and productivity of digital products and services.

There is also now a broad array of digital regulations addressing digital safety, competitiveness and innovation: the EU has introduced the AI Act, the Digital Services Act (DSA) and the Digital Markets Act (DMA). In the UK, we have the Online Safety Act (OSA) and the Digital Markets, Competition and Consumers Act (DMCCA).

These new regulatory frameworks come with several intersecting provisions around data governance and risk assessment, and issues such as consent cut across the new laws. They also carry escalating fines and enforcement powers: the General Data Protection Regulation’s (GDPR) highest fine level is 4% of global turnover, the DSA’s is 6%, the EU AI Act’s 7% and the DMA’s 10%, while the UK Online Safety Act’s is 10% and the DMCCA’s 5%. With this also comes the risk of multiple investigations and multiple fines for similar data operations. In the EU, the European Commission has already opened major investigations under the DSA and DMA.

While the U.S. is struggling to agree on primary legislation covering privacy, online safety and AI, it is playing a leading role in AI governance through AI standards set by the National Institute of Standards and Technology (NIST) and the White House’s Executive Order on the Safe, Secure and Trustworthy Development and Use of AI.

From differing areas of AI regulation, we can see core principles of AI governance emerging on a global basis: safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress. These principles are underpinned by a need to respect existing human rights frameworks.

The impacts and risks of AI systems are growing – not just privacy risks, but safety and societal impacts, and environmental costs. Businesses are now considering how data ethics, privacy and security relate to their Environmental, Social and Governance (ESG) strategies. For example, privacy and security now feature in many companies’ ESG frameworks. The data governance agenda is driven not just by regulatory requirements, but by wider corporate responsibility.

Data protection and privacy at a crossroads

We’re now some six years into the implementation of GDPR, and it has been twelve years since the text was first published by the European Commission. In this time, we have seen significant growth in investment in data protection compliance, and the public’s awareness of their associated rights has grown. The reputational and financial risks associated with getting data protection wrong have driven it up the risk agenda in organizations. Organizations have invested in long-term privacy management programs to demonstrate their accountability and give practical effect to GDPR’s principles across the data lifecycle.

Data protection governance and the long-established global principles and rights still have a crucial role to play in AI governance and regulation (e.g. in the OECD Guidelines and Council of Europe Convention 108, as well as GDPR). But their interpretation and application to AI will require fresh thinking about proportionality, outcomes and risks – for example, when thinking about accuracy and deletion in relation to generative AI.

Meanwhile, in the rest of the world, the data protection map is rapidly growing, with more major economies passing new laws. India’s new data protection law, the Digital Personal Data Protection Act 2023, will soon take effect, with several innovative features, including consent managers – registered services that act as a single point of contact enabling individuals to give, manage, review and withdraw consent through an accessible, transparent and interoperable platform. The new law has a strong focus on safeguarding children’s privacy.

Data protection will remain an ever-present component of wider data governance, but it will need to evolve to remain relevant to AI and effectively interoperate with other laws. The profession and the role of Data Protection Officers (DPOs) will need to evolve too.

A report, Responsible AI Management: Evolving Practice, Growing Value, was published earlier this year by The Ohio State University in collaboration with the International Association of Privacy Professionals (IAPP). It found the following:

“Privacy experts are most likely to be responsible for Responsible AI Management (RAIM), with others involved as well. Of respondents, 60% said their organization had assigned the RAIM function to a specific person or people. The people performing this function held a variety of titles ranging from Privacy Manager to Data Scientist to Responsible AI Officer. Companies were most likely to assign the RAIM function to individuals with expertise in privacy, at 59.5%. The number of companies that identified more than one person involved in RAIM, and the wide variety of titles those individuals hold, suggests a cross-functional approach to RAIM may be useful.”

Senior liability and accountability

While DPO roles come with defined responsibilities, they do not come with individual liability. But in financial services regulation, the concept of a senior managers’ regime is a well-established principle, under which individuals have defined responsibilities and can be held individually accountable for relevant breaches of legislation. As the risks related to digital technologies grow, this concept is becoming more relevant to digital regulation. The UK Online Safety Act contains liability provisions for senior managers, including specifying criminal offences they could be charged with if they fail to comply with certain requirements.

In the US, the Securities and Exchange Commission (SEC) brought charges against SolarWinds’ Chief Information Security Officer (CISO) over their individual role in security governance and how this contributed to a cyber-attack. This new stance, coupled with new cyber disclosure rules from the SEC, heightens the challenge in this area.

This growing trend highlights the importance of effective governance and audit trails to address liability risks. Equally, organizations will need to ensure the supporting governance in this context does not create a culture of over-caution and a disproportionate focus on risk mitigation.

How is data governance changing?

Core knowledge and existing roles related to data protection governance should be of significant value to organizations – including management of the data lifecycle, data quality, and the assessment of risks and harms. Governance will need to adapt and evolve to work with AI – this will require innovation, in combination with knowledge and experience.

Many laws and regulations related to data demand impact and risk assessments. Organizations will have to ensure a joined-up approach to assessment and understanding of how to design and apply mitigating solutions (e.g. bias mitigation). Governance for responsible innovation will require a multidisciplinary approach – organizations we have worked with highlight the importance of reflecting the full range of organizational insight in the governance process.

Developing a framework that assesses the different types of risk involved in AI development and deployment will be key to informing AI model and data selection, as well as training and mitigation measures. Risk assessments for AI will also need to establish the business’s risk tolerance.

Structures and roles

There will never be a single solution to the ongoing challenge of data governance and digital regulation. Companies will need to develop their approach considering their risk profile, business and operating model, wider compliance governance, size and level of maturity, and sector-based regulatory challenges.

Who should be accountable for AI? Accountability for AI could sit with 1) each existing business function; 2) the CPO; 3) the data protection team; or 4) a new AI department.

In the short-term, we’re observing that a centralized driver or coordinating function (which may be the CPO or privacy team at the moment) is particularly important to ensure that AI risks are being considered at each stage of the lifecycle, at each level of the business and by each relevant team. This encourages engagement in AI both horizontally and vertically across the business.

The role of the coordinating function can be to drive a standard approach to AI risk assessment frameworks – for example, when AI is being deployed, the business works through a set of standard questions covering the inherent business risks of compliance, privacy, cyber, ethics, etc. (as sketched below). This will also help streamline initiatives, minimize parallel workflows and tackle compliance fatigue. In the long term, companies are considering whether the role of the coordinating function can dissolve into each business function, with the aim of integrating AI risk into the first line.
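As a purely illustrative sketch in Python – the domains, question set and weights below are hypothetical, not drawn from any particular framework – such a standard question set could be captured in one shared, reusable structure:

```python
from dataclasses import dataclass

# Hypothetical risk domains a coordinating function might standardize on.
RISK_DOMAINS = ["compliance", "privacy", "cyber", "ethics"]

@dataclass
class AssessmentQuestion:
    domain: str      # one of RISK_DOMAINS
    text: str        # the standard question put to the business
    weight: int = 1  # illustrative weighting of the question's risk signal

# A deliberately small standard question set; real programs would be far richer.
STANDARD_QUESTIONS = [
    AssessmentQuestion("privacy", "Does the system process personal data?", 3),
    AssessmentQuestion("compliance", "Is the use case in scope of the EU AI Act?", 3),
    AssessmentQuestion("cyber", "Is the model supply chain security-reviewed?", 2),
    AssessmentQuestion("ethics", "Could outputs be biased or unfair?", 2),
]

def score_assessment(answers: dict) -> dict:
    """Aggregate 'yes' answers (each indicating risk) into per-domain scores."""
    scores = {domain: 0 for domain in RISK_DOMAINS}
    for q in STANDARD_QUESTIONS:
        if answers.get(q.text, False):
            scores[q.domain] += q.weight
    return scores

# Example: one deployment's answers, keyed by question text.
print(score_assessment({"Does the system process personal data?": True}))
# -> {'compliance': 0, 'privacy': 3, 'cyber': 0, 'ethics': 0}
```

Keeping the questions and scoring in one shared structure is what lets a coordinating function enforce consistency while each business function supplies the answers.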

But it is clear that siloed structures for data governance will create significant inefficiencies and risks of inconsistency, and will reduce opportunities for innovation and collaboration in finding solutions.

Traditionally, many data protection teams are structured within legal, risk, ethics or compliance departments. The roles of the CPO and DPO are defined in distinct ways; the CPO is a privacy leader and champion, working with the C-suite on strategic developments in governance and privacy by design, while the DPO is a legally required role focused on advising on and monitoring compliance, plus acting as an interface between different business functions.

A number of experts in the privacy community have noted that the CPO role may be at a crossroads. It is notable that a number of significant data and technology companies have transformed the role in 2024, bringing it down to product level and linking it with new AI responsibilities. Organizations are also starting to introduce new roles such as Chief Privacy & Data Responsibility Officer or Chief Privacy & Trust Officer. There will be key relationships with the General Counsel, Chief Data Officer (CDO) and Chief Technology Officer (CTO).

The DPO has a formal function defined under the GDPR, including their operational independence and direct reporting lines to the Board. The DPO will, however, now need to operate within a new context of the broader governance required for AI – this will draw on much of the data governance required for data protection compliance, but many questions around AI safety and risk assessment will stretch beyond data protection.

We see a need for strategic coordination, as well as for joining up roles. There is an opportunity for the data protection profession to expand and evolve in order to address broader AI governance requirements.

The role of Chief AI Officer (CAIO) is also on the rise, particularly in the U.S., where President Biden’s Executive Order has mandated the role in the U.S. government, and this is likely to have a wider influence in the private sector as well. CAIOs will develop a strategy that will align AI deployment to organizational goals: the role is likely to focus on using AI to improve workforce efficiency, as well as to identify and develop new revenue streams. The role holder is also likely to be responsible for mitigating ethical, legal and security risks associated with AI.

Organizations are moving towards dedicated committee structures to oversee data governance opportunities and risks, including committees for AI and digital ethics – for example, an AI committee chaired by the CAIO, with the CTO and a Senior Director for Responsible AI also sitting on it.

Metrics and KPIs

Lastly, new challenges for data governance will bring important questions about effective metrics and KPIs. Because of the more intangible nature of data benefits and risks, organizations have often found it challenging to develop effective KPIs for privacy management programs. KPIs for data governance are likely to center around the following components: data quality, people, process and technology.

Effective metrics to measure AI and data governance and compliance include time (measuring the average time from product inception to launch) and cost (costs saved by streamlining processes such as Data Protection Impact Assessments (DPIAs)), as the sketch below illustrates.
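By way of a minimal sketch in Python – the product records, hours, hourly rate and volumes below are hypothetical, purely to show how the two metrics could be computed:

```python
from datetime import date
from statistics import mean

# Hypothetical records of products passing through governance review.
launches = [
    {"product": "chat-assistant", "inception": date(2024, 1, 10), "launch": date(2024, 4, 2)},
    {"product": "doc-summarizer", "inception": date(2024, 2, 5), "launch": date(2024, 5, 20)},
]

def average_time_to_launch(records) -> float:
    """Time KPI: average days from product inception to launch."""
    return mean((r["launch"] - r["inception"]).days for r in records)

def annual_dpia_saving(hours_before, hours_after, hourly_rate, dpias_per_year):
    """Cost KPI: yearly saving from streamlining the DPIA process."""
    return (hours_before - hours_after) * hourly_rate * dpias_per_year

print(f"Average time to launch: {average_time_to_launch(launches):.0f} days")  # 94 days
print(f"Annual DPIA saving: ${annual_dpia_saving(40, 25, 150, 30):,.0f}")      # $67,500
```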

Counting compliance outputs and relying on purely quantitative metrics may provide a partial picture of how an organization’s compliance is performing; however, it can miss wider emerging risks related to the use of new technologies, and fail to capture the value that data governance adds to service and product delivery, and to trust in, and perception of, the business as a digital entity.

There is the challenge of ensuring that reporting to the Board is meaningful and representative of the underlying risks, while also being concise. Data governance KPIs can utilize a risk prioritization matrix that categorizes risks (very high, high, medium and low) based on the potential impact of the residual risk (e.g. financial, reputational and ethical) on the business and individuals – a simple sketch of such a matrix follows.
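A minimal sketch of such a prioritization matrix, assuming a simple additive scoring of financial, reputational and ethical impact (the register, bands and thresholds below are illustrative only):

```python
from collections import Counter

# Hypothetical scoring: each residual risk gets per-dimension impact scores
# (financial, reputational, ethical), each 0-4, summed to 0-12 overall.
BANDS = [(9, "very high"), (7, "high"), (4, "medium"), (0, "low")]

def categorize(financial: int, reputational: int, ethical: int) -> str:
    """Map per-dimension residual-impact scores onto a priority category."""
    total = financial + reputational + ethical
    for threshold, label in BANDS:
        if total >= threshold:
            return label
    return "low"

# Concise Board reporting: count risks per category rather than listing all.
register = [(4, 3, 2), (1, 1, 0), (2, 2, 3), (0, 1, 1)]  # hypothetical risk register
print(Counter(categorize(*scores) for scores in register))
# -> Counter({'low': 2, 'very high': 1, 'high': 1})
```

Reporting the counts per category, rather than every underlying risk, keeps Board reporting concise while still reflecting the shape of the residual risk profile.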

Conclusion

It is clear that an evolution in governance, structures and roles is underway to enable effective and responsible AI use, and to address the compliance and other risks that emerge from it. While there isn’t a single model, all companies undertaking digital transformation using AI will need to consider key questions about how their structure can support collaboration and a joined-up approach to data governance, enabling innovation, compliance and responsible data use for strategic advantage.

At A&O Shearman, we look forward to working with companies to address these opportunities and challenges over the coming months and years. Please contact us if you would like to explore how we could support your data governance journey.