On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued an industry letter on Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks (the “Industry Letter”). The Industry Letter contains guidance for entities regulated by NYDFS (“Covered Entities”) on assessing and responding to cybersecurity risks related to the use of artificial intelligence (“AI”), specifically the use of AI by threat actors and the risks posed by a Covered Entity’s own AI systems. The Industry Letter does not come as a surprise: NYDFS previously acknowledged, in the Assessment of Public Comments on the revised Proposed Second Amendment to 23 NYCRR Part 500 (the “APC”), the cybersecurity risks associated with AI, and it expects Covered Entities to take those risks into account in their risk assessments and cybersecurity programs. In the APC, NYDFS declined to add new sections regarding AI and large language models, but made clear that this was an area it would continue to monitor in assessing whether to adopt future amendments to 23 NYCRR Part 500 (the “Cybersecurity Regulation”).
The Industry Letter presents a clear signal of NYDFS’ interpretation of existing regulations and insight into its enforcement priorities, as NYDFS suggests the cyberattack landscape is becoming increasingly sophisticated through threat actors’ use of AI and Covered Entities’ deployment of AI systems and tools.
Risks Posed by AI
The Industry Letter highlights the “most concerning threats” relating to cybersecurity, including:
Threat actors’ use of AI. The Industry Letter discusses numerous risks presented by threat actors’ use of AI, including AI-enhanced social engineering, such as phishing attacks and other social engineering schemes using deepfakes. NYDFS also highlights the potential for AI tools to increase the effectiveness, speed, and scale of existing cyberattacks. As we previously discussed in our advisory, AI vs. AI: Recent Developments in the Cyber Landscape, NYDFS echoed that AI tools can aid in the development of malware and enable less sophisticated actors to conduct increasingly sophisticated and effective cyberattacks.
Inherent risks to AI systems/tools. NYDFS also warns of the potential risks posed by a Covered Entity’s own use of AI. AI systems inherently collect vast amounts of nonpublic information (“NPI”), including particularly sensitive information such as biometric data, making these systems a prime target for threat actors. These risks are particularly acute when AI tools require the storage of biometric data (e.g., facial scans, fingerprints, etc.), which threat actors can use to imitate Authorized Users in order to bypass Multi-Factor Authentication (“MFA”) protections.
In addition, the collection of these datasets often involves agreements with vendors and Third-Party Service Providers (“TPSPs”), presenting additional “links in the chain” that can be targeted and compromised by threat actors—potentially exposing sensitive data.
Recommended Controls and Measures to Mitigate AI-related Threats
In the Industry Letter, NYDFS also details extensive recommendations for Covered Entities to mitigate the aforementioned risks. For instance:
Risk Assessments (§ 500.9). Risk assessments are a foundational requirement under the Cybersecurity Regulation, and NYDFS has, on multiple occasions, emphasized that they are a necessary prerequisite to designing and establishing an effective and compliant cybersecurity program. Consistent with the APC’s emphasis on risk assessments taking into consideration the risks associated with AI (APC, p. 3), the Industry Letter doubles down, emphasizing that these assessments must now consider risks based on “deepfakes and other threats posed by AI.” Covered Entities should also address AI risks stemming from the Covered Entity’s own use of AI, AI used by TPSPs and vendors, and potential vulnerabilities in AI applications.
Incident Response (“IR”), Business Continuity, and Disaster Recovery (“BCDR”) Plans (§ 500.16). Covered Entities must also maintain incident response, business continuity, and disaster recovery plans designed to address Cybersecurity Events (as defined in § 500.1(f)) and other disruptions, including those related to AI. We would note that NYDFS clarified in the Second Amendment (and the APC) that “other disruptions” are limited to “cybersecurity-related disruptions,” not all disruptive events. (See § 500.16(a)(2) and the APC, p. 2). That said, it remains unclear how prescriptive a Covered Entity’s IR and BCDR plans must be in addressing AI-related risks to Cybersecurity Events and other disruptions, but NYDFS would seem to expect AI to be addressed in those plans in some regard, to show that the Covered Entity is considering and addressing the risks posed by AI.
Multi-Factor Authentication (§ 500.12). NYDFS emphasized that robust access controls, including MFA (which NYDFS again highlighted as one of the most effective access controls), are an important defensive measure against the threat of AI-enhanced social engineering, particularly the threat of deepfakes. NYDFS uses the Industry Letter as an opportunity to remind (and perhaps clarify for) Covered Entities that the enhanced MFA requirements in the Second Amendment come into effect in a little over one year (November 1, 2025), and that MFA must be in place for “all Authorized Users attempting to access Covered Entities’ Information Systems or NPI, including customers, employees, contractors, and TPSPs.” Specific to AI, the Industry Letter encourages Covered Entities to use authentication factors that cannot be circumvented by AI-enhanced attacks (such as deepfakes), meaning Covered Entities should avoid SMS text authentication and forms of authentication that deepfakes can impersonate, such as voice and video authentication. Instead, NYDFS encourages the use of digital-based certificates (i.e., a file or electronic password that proves the authenticity of a device or user through the use of cryptography and public key infrastructure, such as a transport layer security certificate) and physical security keys (such as FIDO2 security keys). NYDFS also encourages Covered Entities to consider authentication factors that employ liveness detection technology or texture analysis, which can help verify whether fingerprints or other biometric factors come from a live person.
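For illustration, the sketch below shows what FIDO2-based registration can look like in a web application using the standard WebAuthn browser API, the kind of phishing-resistant factor the Industry Letter points toward. This is a minimal sketch under stated assumptions, not a complete implementation: the relying-party details, user identifiers, and the locally generated challenge are placeholders, and in practice the challenge must be issued and verified server-side.

```typescript
// Minimal, illustrative sketch of FIDO2/WebAuthn credential registration in
// the browser. All relying-party and user values are placeholders; in a real
// deployment the challenge is generated and verified by the server.
async function registerSecurityKey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Random challenge for illustration only; production challenges must be
    // issued server-side and checked against the returned attestation.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Covered Entity", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"), // opaque server-assigned ID
      name: "jdoe@example.com",
      displayName: "J. Doe",
    },
    // ES256 (COSE algorithm -7) is broadly supported by FIDO2 authenticators
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],
    authenticatorSelection: {
      userVerification: "required", // e.g., a FIDO2 key protected by a PIN
    },
    timeout: 60_000,
  };
  // The private key never leaves the security key, so there is nothing for a
  // phishing page, intercepted SMS code, or deepfaked voice to capture and replay.
  return navigator.credentials.create({ publicKey });
}
```

The design point is that the credential is bound to the origin and the hardware, which is why this class of factor resists the AI-enhanced impersonation attacks the Industry Letter describes.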
Cybersecurity Training. NYDFS also emphasizes the importance of cybersecurity training for all personnel, notably including Covered Entities’ senior executives and Senior Governing Body members, in combating AI-related threats. NYDFS has emphasized on multiple occasions the importance of cybersecurity awareness training and simulated phishing training, as we have previously covered, and, in the Second Amendment, expanded the scope by explicitly adding “social engineering” as an area that must be covered in cybersecurity awareness training. Now, NYDFS seems to be expanding the training requirement even further, both from a content and a personnel perspective.
- Cybersecurity training for all personnel (§ 500.14(a)(3)). From a content perspective, NYDFS suggests incorporating AI-related threats into the cybersecurity awareness training for all personnel, to ensure “personnel are aware of the risks posed by AI, procedures adopted by the organization to mitigate risks related to AI, and how to respond to AI-enhanced social engineering attacks.” The social engineering component of the training, which can be delivered via simulated phishing and voice and video impersonation exercises, must address deepfake attacks and cover procedures for responding to the unusual requests typical of social engineering attacks (such as requests for urgent money transfers or access to NPI).
- Cybersecurity training for cybersecurity personnel (§ 500.10(a)(2)). Additionally, as part of the requirement to provide cybersecurity personnel with updates and training sufficient to address relevant cybersecurity risks, NYDFS suggests that training should include “how threat actors are using AI in social engineering attacks, how AI is being used to facilitate and enhance existing types of cyberattacks, and how AI can be used to improve cybersecurity.”
- Cybersecurity training for the Senior Governing Body (§ 500.4(d)). Interestingly, NYDFS specifies that it is not just the employees or contractors of the Covered Entity that must be trained on these AI-related risks (as part of the broader cybersecurity awareness training), but also the “Senior Governing Body,” which is the board of directors (or an appropriate committee thereof, an equivalent governing body, or, if neither of those exists, the senior officer(s)) (§ 500.1(q)). It is atypical for a regulator to prescribe specific cybersecurity training for a board of directors, particularly since NYDFS removed the pre-proposed Second Amendment’s requirement that the Senior Governing Body have “sufficient expertise and knowledge” of cybersecurity-related matters, replacing it in the Second Amendment with “sufficient understanding.” (APC, p. 1). It would seem, then, that the Industry Letter suggests the Senior Governing Body could not have a sufficient understanding of the risks posed by AI without being specifically trained on those risks by the Covered Entity.
Third-Party Service Provider and Vendor Management (§ 500.11). The Industry Letter notes that Covered Entities should be mindful of the risks that AI and its uses pose to TPSPs. Covered Entities should maintain policies and procedures that require due diligence before engaging TPSPs, mandate appropriate security controls for TPSPs that have access to a Covered Entity’s systems or NPI, and incorporate additional representations and warranties where TPSPs use AI tools. Covered Entities should also consider including language in TPSP agreements mandating that TPSPs take advantage of enhanced privacy, security, and confidentiality options when using AI products or services.
Monitoring (§§ 500.5(b) and 500.14). The Cybersecurity Regulation already requires Covered Entities to maintain policies and procedures, including a monitoring process, designed to promptly inform the Covered Entity of new security vulnerabilities, and to monitor the activity of users on their systems. NYDFS encourages additional monitoring for Covered Entities that use AI-enabled products or permit personnel to use AI applications, such as generative AI. This includes monitoring for “Privacy Attacks” (highlighted by NIST in § 2.4 of NIST AI 100-2), including unusual queries that could indicate attempts to extract NPI. NYDFS also suggests blocking queries that could expose NPI to a public AI product or system.
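To make the blocking recommendation concrete, below is a minimal, illustrative sketch of a pre-submission filter that screens outbound queries to a public generative AI service for patterns suggestive of NPI. The patterns, thresholds, and function names are our own assumptions for this sketch; an actual deployment would rely on the Covered Entity’s data classification tooling and feed blocked queries into its monitoring process.

```typescript
// Illustrative pre-submission filter for queries bound for a public
// generative AI service. Patterns and names are assumptions for this sketch;
// production filtering would use the entity's own data classification tools.
const NPI_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/ },
  { label: "payment card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
];

function screenQuery(query: string): { allowed: boolean; reason?: string } {
  for (const { label, pattern } of NPI_PATTERNS) {
    if (pattern.test(query)) {
      // Block the query and surface a reason for monitoring and alerting
      return { allowed: false, reason: `possible ${label} detected` };
    }
  }
  return { allowed: true };
}

// Usage: screen each query before forwarding it to the AI service, and log
// blocked queries so the monitoring process can flag unusual activity.
const result = screenQuery("Summarize account 123-45-6789 for me");
if (!result.allowed) {
  console.warn(`Query blocked: ${result.reason}`);
}
```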
Data Management (§ 500.13). The Industry Letter also focuses on the benefits of good data management in reducing the NPI at risk of exposure in the event of a system compromise. Covered Entities should dispose of NPI that is no longer necessary for business operations or other legitimate business purposes, including NPI used for AI training or other purposes. Where the maintenance of data is necessary for the effective functioning of an AI product or system, Covered Entities should identify all systems relying on AI-enabled products and prioritize the implementation of mitigations for systems critical to ongoing business operations.
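As a simple illustration of retention-based disposal, the sketch below flags NPI records used for AI training that have aged past an assumed retention window. The record shape and the 24-month window are hypothetical; actual retention periods would follow the Covered Entity’s data retention and disposal policies.

```typescript
// Illustrative retention sweep for NPI used in AI training. The record shape
// and the 24-month window are hypothetical assumptions for this sketch.
interface TrainingRecord {
  id: string;
  containsNpi: boolean;
  lastBusinessUse: Date; // last date the record served a legitimate purpose
}

const RETENTION_MONTHS = 24;

function recordsToDispose(
  records: TrainingRecord[],
  now: Date = new Date(),
): TrainingRecord[] {
  const cutoff = new Date(now);
  cutoff.setMonth(cutoff.getMonth() - RETENTION_MONTHS);
  // NPI with no legitimate business use within the window is slated for disposal
  return records.filter(
    (r) => r.containsNpi && r.lastBusinessUse.getTime() < cutoff.getTime(),
  );
}
```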
***
As AI enhances and changes the cybersecurity landscape, NYDFS also notes the “substantial cybersecurity benefits” to be gained by integrating AI into cybersecurity tools, particularly with respect to monitoring systems, analyzing trends, and predicting potential threats. The Industry Letter suggests that NYDFS is closely watching the ongoing AI arms race between threat actors’ use of AI to launch cyberattacks and AI-powered defensive tools and, as was the case following prior guidance, the letter may be indicative of NYDFS’ enforcement priorities moving forward.