Opinion

Australia’s privacy regulator, the Office of the Australian Information Commissioner, publishes new guidance on privacy considerations when using artificial intelligence (AI)

The Office of the Australian Information Commissioner (OAIC) has published AI guidance articulating how Australian privacy law applies to AI, together with the OAIC’s overall expectations on AI governance and privacy safeguards for developers of AI products and for businesses using AI technology. This comes in the form of two guidelines published on October 21, 2024:

(1) Guidance on privacy and developing and training generative AI models (Developer AI Guidance) to assist developers to mitigate privacy risks when using personal information to develop and train generative AI; and 

(2) Guidance on privacy and the use of commercially available AI products (Business AI Guidance) to assist businesses using commercially available AI products (or systems),

(together, the AI Guidance).

The AI Guidance concludes that a governance-first approach to AI is the ideal way to manage privacy risks. In practice, this means embedding privacy-by-design into the design and development of any AI product that collects and uses personal information, and implementing an ongoing process to monitor the AI product’s use of personal information throughout its lifecycle. In addition, the AI Guidance suggests that compliance with the Voluntary AI Safety Standard published by the Department of Industry, Science and Resources on September 5, 2024 will help entities subject to the Privacy Act (APP entities) develop and deploy AI systems in Australia in accordance with their obligations under Australian privacy law. 

The key takeaways from the AI Guidance are as follows:

Developer AI Guidance

According to the Developer AI Guidance, developers seeking to use personal information to develop and train AI models are strongly encouraged to consider and address the following from a privacy perspective:  

Ensure accuracy

The OAIC recommends developers take reasonable steps (commensurate with the likely increased level of risk in an AI context) to ensure accuracy in generative AI models, which may include, for example, using high-quality data sets, undertaking appropriate testing, and using safeguards where needed to signpost any important information to users.

Be aware of web-scraping risks

Developers should consider the privacy risks of scraping publicly available personal information online to train their AI models or systems, as such personal information remains subject to Australian privacy laws. The Developer AI Guidance reiterates that personal information should only be collected where reasonably necessary for the developer’s functions and activities.

Consent for sensitive information

To the extent any sensitive information is scraped online or obtained from third-party datasets, developers should seek to obtain consent from the relevant individuals to whom the personal information relates. The Developer AI Guidance gives examples of photographs or recordings of individuals potentially containing sensitive information.  

Purpose and legal basis of collection

Where developers are seeking to use personal information that they already hold for the purpose of training an AI model, and this was not the primary purpose of collection, they need to consider whether they are able to use such personal information for their AI-related activities. To the extent a developer does not have consent for a secondary, AI-related purpose, it must be able to establish that this secondary use would be reasonably expected by the individual. If not, the developer is unable to use such personal information for its AI-related activities. Where consent is being relied upon for processing, individuals should be provided with an ability to withdraw their consent or opt out of such use.

Business AI Guidance

According to the Business AI Guidance, businesses using commercially available AI should consider and address the following from a privacy perspective:  

Conduct due diligence

APP entities should conduct due diligence to ensure the product is suitable for its intended use, considering, for example, how human oversight has been embedded into processes, the potential privacy and security risks, and who will have access to personal information input into or generated by the entity when using the product. 

Include transparent information about the use of AI in privacy policies and in collection notices

Externally facing privacy policies and collection notices should clearly outline when and how AI will access and use an individual’s personal information. It is also recommended that APP entities establish internal policies and procedures for the use of AI products to facilitate transparency and ensure good privacy governance.  

Purpose of collection

In accordance with Australian Privacy Principle (APP) 6, an individual’s personal information should only be used or disclosed for AI: (i) for the primary purpose for which it was collected, or otherwise with consent (where used for a secondary purpose); or (ii) where the individual would reasonably expect the entity to use or disclose their information for the secondary purpose, and that purpose is related (or, for sensitive information, directly related) to the primary purpose. In the latter case, the relevant APP entity should outline the proposed uses in its privacy policies and collection notices to establish that the secondary use was reasonably expected. The Business AI Guidance highlights the importance of distinguishing uses of AI that are facilitative of, or incidental to, a primary purpose (such as the use of personal information as part of customer service where AI is the tool used) from purposes that are directly AI-related (such as the use of personal information to train an AI model). Notably, the OAIC suggests best practice is to seek explicit consent prior to allowing AI to use personal information to train itself, including by providing individuals with the ability to opt out.

Use of personal information in AI systems must comply with APP 3

APP entities must ensure that the generation of personal information by AI is reasonably necessary for their functions or activities and is only done by lawful and fair means. Any inferred, incorrect, or artificially generated information produced by AI models (e.g., hallucinations and deepfakes) may still constitute personal information (and be subject to Australian privacy laws) to the extent an individual can be identified or is reasonably identifiable. 

Refrain from use

APP entities are advised not to enter personal information (particularly sensitive information) into publicly available generative AI tools (e.g., chatbots) due to the significant and complex privacy risks involved.

What does the AI Guidance mean for APP entities?

The OAIC’s recommendation is to take a privacy-by-design approach to the AI lifecycle. Therefore, APP entities developing or using AI systems should take the following steps to ensure privacy law compliance:

(1) Review and update external privacy policies and collection notices to ensure clear and transparent information about how and when AI will use and generate personal information. These obligations complement the recent reforms proposed to the Privacy Act 1988 (Cth) in relation to increasing transparency about the use and disclosure of personal information in automated decision-making processes.

(2) Conduct due diligence to ensure the AI system or product is suitable for the intended use and does not pose any material security risks to the business. To that end, entities should consider how the AI system has been trained, the quality of the data sets used to train it, the steps taken to mitigate bias/discrimination and other risks so that the output is robust, accurate and reliable, and whether (and to what extent) human oversight mechanisms are in place, including who has access to personal information used and processed by the AI system.

(3) Ensure that AI does not use or generate personal information unless reasonably necessary for the APP entity’s activities, and that personal information is only used for a secondary purpose where legally permitted. Consent must always be obtained for the use of sensitive information.

(4) Conduct a privacy impact assessment to assess the potential privacy risks and impacts of the AI system.

(5) Implement an internal AI audit and governance framework to oversee the ongoing use of AI, and train employees on the development and use of the technology.