Opinion

An Intelligent or Artificial Response: The Australian Government's Interim Response to the consultation on Safe and Responsible AI in Australia

Published: Feb 2, 2024
Ross Phillipson and Saranpaal Calais from A&O’s Australian tech and cyber team review the Australian government’s interim response to AI regulation in Australia and highlight key issues for international companies seeking to deploy AI in their operations.

The Australian Government recently issued its interim response to the Safe and Responsible AI in Australia discussion paper from 2023. Notably, approximately 20% of the more than 500 submissions were made by individuals. This led the Government to conclude that there is a significant level of public concern about AI and its risks, and that this concern is potentially a barrier to wide-scale adoption in Australia.

This lack of public trust in AI risks compromising Australia’s ability to take advantage of it. AI is predicted to add up to $600 billion a year to Australia’s GDP by 2030, and the Government is motivated to maximize the opportunities that wide-scale adoption offers, not least of which could be a solution to Australia’s well-documented economic productivity conundrum. The Government aims to achieve this through regulatory and policy settings that ensure AI is “safe”. The risk for companies operating in Australia is that those regulatory settings go too far, hampering innovation and the adoption of AI technologies on the basis of an artificially inflated public perception of risk.

Ex Machina or Ex Regulatory

The primary concerns identified in the consultation report can be summarised as follows: 

  1. the current regulatory framework is insufficient to address the risks posed by AI; 
  2. there are unforeseeable risks from new and powerful AI models; and 
  3. there need to be ‘guardrails’ in place for high-risk AI. 

The Government’s response is aimed at “creating a regulatory environment that builds community trust and promotes innovation and adoption while balancing critical social and economic policy goals.” To achieve this, the Government has indicated it will use a risk-based framework to support the safe use of AI and proposes to allow “low-risk AI” to continue to grow largely unconstrained by new regulation, save for existing requirements under privacy, anti-discrimination, consumer protection and similar laws.

As an initial step, the Government must define what constitutes “high-risk AI” and is therefore subject to regulation. Unfortunately, the interim response does not land on a preferred approach; rather, it canvasses approaches taken elsewhere (notably the EU’s AI Act, which provides a non-exhaustive list of “high-risk AI” uses). This is a critical issue, as the definition will determine what falls within scope of the new requirements. It will therefore be crucial to monitor the proposals and understand how this definition (and the accompanying requirements) may affect your current and future use of algorithmic and AI tools.

Foundation for the Future

To manage the risks of “high risk AI”, the Government proposes developing guardrails with a focus on three areas:

  1. Testing: Potential independent review of systems before and after release, with in-flight monitoring and vulnerability reporting obligations being explored.
  2. Transparency: Notification requirements where AI systems are used, and watermarking of AI-generated content. Further proposals include public reporting on the limitations of AI models and their training data, though how this would work within a corporate setting, where trade secret, confidentiality, IP and other legal issues arise, would need to be better understood. 
  3. Accountability: Creation of designated roles tasked with the responsibility of AI safety and requiring specific training for those who are designing and deploying AI applications.

For now, rather than creating new legislation, the Government plans to build on existing laws to address the risks from AI. The Government identified at least ten separate pieces of legislation that are expected to need modification, including:

  • Competition and consumer law: liability for misleading and deceptive conduct arising from the use of AI to generate deepfakes, as well as reforms necessary to manage digital platform risks.
  • Health and privacy laws: clinical safety risks in the use of AI models by health and care organizations and practitioners.
  • Copyright law and broader IP: the use of creative content to train AI models, including remedies for infringement.

It is not clear how these laws will be updated and coordinated to meet the risks of AI – particularly where the Government has not yet defined what amounts to “high-risk AI”. We would expect the Government to begin developing that definition prior to or in parallel with these statutory updates to ensure that the revisions are appropriate and will not require further, frequent amendments. 

While developing the new regulatory settings, the Government will establish a temporary expert advisory group to develop voluntary AI Safety Standards. Initially offered as a toolkit to help organizations develop and deploy AI technology safely and securely, these standards will likely provide a preview of the regulations that will follow.

With “high-risk AI” still undefined, multiple laws in a state of flux, and only a temporary expert advisory group in place, organizations should engage with and understand the proposals and determine how they may impact current and planned initiatives, with a view to ensuring operational sustainability when the law lands.

Navigating the Matrix

While the future regulatory landscape is uncertain for Australia, international commitments made in the recent Bletchley Declaration include collaboration with the international community to develop an interoperable response to the risks. We can therefore look to international efforts for guidance, such as the EU AI Act and Singapore’s regulatory approach (as a consequence of the 2020 Digital Economy Agreement between Australia and Singapore and, specifically, the MoU on AI).

Further, considering Australia’s strategic AUKUS alliance with the US and UK, which includes AI, it is likely Australia will seek to align its approach to facilitate the goals of that alliance. In the absence of US federal legislation, the Biden Administration’s Executive Order on AI is instructive, as is the White House’s Blueprint for an AI Bill of Rights; both include regulatory constructs that could conceivably be adopted in Australia, albeit via a slower legislative approach. These include safety, testing and privacy-protecting requirements, as well as dual-use and export restrictions.

A private member’s bill, reflective of a March 2023 pro-innovation white paper, was introduced in the UK House of Lords in November 2023. It would create an AI Authority to oversee existing regulatory bodies and grant the Secretary of State the authority to regulate AI in accordance with certain principles. The bill is presently only at the second reading stage, and a UK Government response to the white paper consultation has yet to be released.

Unfortunately, the Australian Government’s interim response lacks detail on next steps and timelines. What is evident is that defining “high-risk AI” is an important next step, and this should be an area of focus for any company seeking to develop and deploy AI in Australia. Ensuring Australia’s standards are compatible with international regimes should be a priority for the Government and international companies alike.

Special thanks to paralegal Charlotte Hilton for her contributions to this article.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.
