
Cyber and AI: NYDFS has entered the chat

On October 16, 2024, the New York Department of Financial Services (“NYDFS”) released an Industry Letter entitled Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks (the “Letter”). The Letter has drawn significant attention. Below, we summarize the key takeaways from the Letter and offer suggestions to regulated entities on putting its guidance into practice.

The most notable aspect of the Letter is that NYDFS is taking the lead in considering the cybersecurity implications of AI technologies (most of the laws, regulations, and guidance to date have focused instead on privacy considerations). This is a welcome development, as AI systems and models generally sit on the same technical infrastructure as other IT systems and are vulnerable to similar attack paths.

The Letter makes clear that 23 NYCRR Part 500 (the “Cybersecurity Regulation” or “Part 500”) applies to, and provides a regulatory and risk framework for, AI. The Letter details some of the risks posed by AI and provides six general categories of controls. These controls do not add new requirements for covered entities beyond what is already mandated by Part 500. Rather, the Letter instructs covered entities on which of those controls should be applied to AI, including:

  • Risk assessments and risk-based programs. NYDFS underscores that assessments and programs should holistically assess the covered entity’s cyber risks, including those posed by AI.
  • Vendor management. NYDFS “strongly recommends” that covered entities consider risks arising from third-party vendors and service providers’ use of AI when conducting diligence on such providers.
  • Access controls. NYDFS stresses that multi-factor authentication (MFA) is one of the most effective access controls. Indeed, the amended Cybersecurity Regulation, announced in 2023, requires MFA for all users attempting to access information systems. Aside from MFA, the Letter recommends that covered entities have “other access controls” in place. Risk assessments may suggest which controls are appropriate based on risk appetite.
  • Cybersecurity training. The Letter recommends that the annual cybersecurity training mandated by Part 500 address both the risks posed by AI and the covered entity’s policies and procedures to mitigate AI risk. Where the covered entity deploys AI directly, training should focus on how to protect AI.
  • Monitoring. In addition to monitoring for security vulnerabilities and user activity, the Letter recommends monitoring AI tools for unusual query behaviors that may indicate an attempt to extract nonpublic information from the entity’s IT systems (a sketch of one approach follows this list).
  • Data management. Here, the Letter focuses on data minimization, stating that covered entities must dispose of nonpublic information that is no longer necessary for business operations, including information used for AI purposes.
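
To make the monitoring recommendation concrete, the sketch below is a minimal, hypothetical illustration of flagging unusual query volume, not an NYDFS-prescribed tool or a complete detection program. It scans application query logs for users whose activity in a rolling window exceeds a baseline threshold, a pattern that can indicate an attempt to extract nonpublic information. The log schema, window, and threshold are all assumptions to be tuned to the entity’s environment.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical log record: (timestamp, user_id, query_text).
# Real deployments would read these from a SIEM or application log.
QueryEvent = tuple[datetime, str, str]

WINDOW = timedelta(hours=1)   # rolling window to evaluate (assumed)
THRESHOLD = 100               # queries per window that triggers review (assumed)

def flag_unusual_query_volume(events: list[QueryEvent]) -> set[str]:
    """Return user IDs whose query count in any rolling window exceeds THRESHOLD."""
    per_user = defaultdict(list)
    for ts, user, _query in events:
        per_user[user].append(ts)

    flagged = set()
    for user, times in per_user.items():
        times.sort()
        start = 0
        for end, ts in enumerate(times):
            # Slide the window forward so it spans at most WINDOW.
            while ts - times[start] > WINDOW:
                start += 1
            if end - start + 1 > THRESHOLD:
                flagged.add(user)
                break
    return flagged
```

In practice, flagged users would feed into the covered entity’s existing alerting and incident-response workflows, so that anomalous AI query activity is triaged alongside other security events monitored under Part 500.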

Putting the guidance into practice

  • Given that covered entities will generally already have the requisite controls in place as part of their existing cybersecurity regime, implementing the guidance in the Letter should not require a wholesale overhaul of systems, policies, and procedures. Companies may nevertheless want to evaluate where a refresh is needed. Conferring with outside counsel during this process can help ensure that covered entities are aligned with market practice and facilitate well-rounded risk evaluations.
  • As a first step, companies may wish to review their most recent risk assessments to ensure AI systems were in scope. That evaluation should include not only what AI technologies are currently being used and for what purpose, but also how the technologies could be misused (and in all those instances, what cybersecurity risks are presented). Moreover, companies should consider where those systems are stored (e.g., in the cloud) and who has access to them.
  • Given the Letter’s focus on vendor risk, companies may also wish to undertake a review of what vendors have access to data, what restrictions on that access are in place, and whether existing contractual terms properly restrict use (and allocate risk for misuse) with vendors.
  • As a matter of best practice, consider reviewing existing policies and procedures to make certain that they expressly and appropriately address AI-related cybersecurity risk. As part of that review, companies may want to assess whether both company-wide and business-specific guidance is appropriate, particularly for AI-related use cases.
  • As a practical matter, monitoring and data management requirements may warrant more careful assessment of existing systems. Particularly where data must be retained for use by an AI system, companies will want to consider how to minimize retention to comply with NYDFS guidance; a sketch of one approach follows this list.
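
As one illustration of the retention point, the sketch below identifies records, including data retained for AI purposes, that have aged past a retention period. It is a minimal sketch under assumed conditions: the record schema (a last_used timestamp, a legal_hold flag) and the retention period are hypotheticals, not NYDFS requirements, and actual retention schedules should be set with counsel.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # illustrative period; set per your retention schedule

def select_records_to_dispose(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Return records (including AI training/inference data) past their retention period.

    Each record is assumed to look like:
        {"id": ..., "last_used": datetime (UTC), "purpose": "ai_training" | ...,
         "legal_hold": bool}
    """
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if not r.get("legal_hold")            # never purge data under legal hold
        and now - r["last_used"] > RETENTION  # no longer necessary for operations
    ]
```

Disposal of the selected records would then run through the entity’s secure-deletion procedures and be documented; the point is simply that AI training and inference data stores belong in the same minimization workflow as any other repository of nonpublic information.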

Prelude to enforcement?

The Letter indicates that cyber risks associated with AI will be an NYDFS enforcement priority going forward. Through the Letter, NYDFS has confirmed that AI is in scope for Part 500 and indicated that it perceives cybersecurity risks attendant to AI usage. NYDFS has also articulated its expectations on proactive risk management.

In this regard, NYDFS will most likely expect to see AI cyber risks addressed in written policies and procedures, including those that require documenting AI/cyber risks and implementing internal controls to identify and mitigate such risks.

Conclusion

The Letter shows that NYDFS knows cyber fundamentals still matter, including access controls, third-party risk management, and data minimization, even when dealing with a new, sophisticated technology. Accordingly, companies should be thinking about tried-and-true attack vectors for AI-powered systems and models.

A final point: the NYDFS guidance set forth in the Letter demonstrates a familiar trend of states regulating AI in the absence of federal regulations or guidance. NYDFS may be among the first state regulators to issue such guidance, but it certainly won’t be the last.
