Hong Kong SFC issues circular on the use of generative AI language models

The Hong Kong Securities and Futures Commission (SFC) has issued a circular that sets forth comprehensive guidelines and expectations for licensed corporations (LCs) regarding the responsible use of generative artificial intelligence language models (AI LMs).

This circular, accompanied by an appendix detailing a non-exhaustive list of risk factors, aims to ensure the responsible adoption and management of AI LMs.

Scope and applicability

The circular is applicable to all LCs that offer services or functionalities powered by AI LMs or use AI LM-based third-party products in relation to their regulated activities, irrespective of whether the AI LM is developed internally, by a group company, an external service provider, or sourced from open platforms. The SFC’s directive underscores the importance of a risk-based approach, allowing LCs to tailor their compliance efforts based on the materiality and risk level of specific AI LM use cases.

Notably, the SFC considers using AI LMs for providing investment recommendations, investment advice or investment research to investors or clients as “high-risk use cases.”

Core principles and requirements

The circular is structured around four core principles, each designed to address different facets of AI LM deployment and management:

1. Senior management responsibilities

Senior management is tasked with ensuring that effective policies and controls are in place throughout the AI LM lifecycle. This includes appointing responsible staff with the requisite expertise to manage AI LMs effectively. For high-risk use cases, LCs must ensure sufficient management oversight and continuous monitoring of their deployment. The industry has welcomed the discretion given to LCs to involve staff from various functions—business, risk, compliance and technology—in managing AI LMs.

2. AI model risk management

LCs are required to establish a comprehensive AI model risk management framework. This framework should cover all stages of model development and management, including design, implementation, customization, training, testing, calibration, validation, approval, ongoing review, monitoring, use and decommissioning. Special attention must be given to mitigating the AI LM’s hallucination risk.

Notably, the circular sets out a non-exhaustive list of AI LM high-risk use cases, including providing investment recommendations, investment advice or investment research to investors or clients. High-risk use cases are subject to additional measures such as human-in-the-loop reviews for factual accuracy, output robustness testing and continuous client disclosures. A blanket requirement to have a human in the loop may be impractical for certain activities, such as communication surveillance; the SFC provides some flexibility for LCs to comply with this requirement on a case-by-case basis.

3. Cybersecurity and data risk management

In light of the evolving cybersecurity threat landscape, LCs must implement robust policies, procedures and internal controls to manage associated risks. This includes conducting periodic adversarial testing, encrypting non-public data both at rest and in transit, and preventing the input of sensitive information into AI LMs. LCs should stay updated on current and emerging cybersecurity threats related to AI LMs.

4. Third-party provider risk management

LCs must exercise due diligence in selecting third-party providers and continuously monitor their performance. This involves assessing supply chain vulnerabilities and data leakage risks for each third-party component of the AI LM architecture. Stringent cybersecurity controls and contingency plans are essential to ensure operational resilience, especially for critical operations. While the SFC has provided examples of its expectations for due diligence and ongoing monitoring of third-party providers, further clarification may still be needed.

Implications and next steps

The circular is effective immediately, but the SFC acknowledges that LCs may require time to update their policies and procedures to comply fully.

LCs should take proactive steps to align with the new requirements, including:

  • Assessing their status as users of AI LMs and the circular’s applicability to their operations, in particular, whether any use cases fall within the high-risk category.
  • Strengthening their AI model risk management frameworks in line with the circular’s requirements and the detailed risk factors in the appendix.
  • Reviewing and updating existing and future contracts with third-party providers of AI LMs or AI LM-based products.

LCs should also consider their notification obligations under the Securities and Futures (Licensing and Registration) (Information) Rules (Information Rules) if AI LMs are to be adopted in high-risk use cases. They are encouraged to discuss their plans with the SFC at an early stage.

This circular marks a significant step towards ensuring the responsible and secure use of generative AI language models in Hong Kong’s financial sector. LCs are encouraged to engage with the SFC early to navigate any potential regulatory challenges and ensure seamless compliance.