(2) Model Training

As a general rule, lawyers should not transmit Confidential Information or Sensitive Personal Information to a GAI tool that permits the information to be used for model training. Security researchers have demonstrated serious data security risks when confidential information is used to train models. 29 For example, in one widely discussed study, researchers demonstrated the ability to retrieve verbatim copies of personally identifiable information from an early OpenAI model even when the information was included in only one document in the training data. 30 The indicators in Table 2 show that public GAI tools require users to consent to model training, while tools aligned with the consumer category typically allow users to opt out of model training. GAI tools that are more closely aligned with the business and enterprise categories will have model training turned off by default and include contractual assurances that customer data will not be used for this purpose. In the Technical Addendum included with this Guide, we provide more details on how model training works and discuss potential narrow exceptions to the general rule against allowing Confidential Information to be used to train models.

(3) Data Retention

Lawyers should investigate how long the GAI tool retains information, both after the lawyer deletes it from the user interface and after the service itself is terminated. The ideal data retention setting for GAI tools is Zero Data Retention, meaning that a lawyer's information is never retained by the underlying transformer model and is permanently removed from the provider's servers immediately after the lawyer deletes the data from the user interface. 31 Some GAI tools, despite being licensed as business-class tools and being subject to business-class privacy and security assurances, do not support Zero Data Retention, and therefore may be inappropriate for processing Sensitive Personal Information (especially without informed client consent). 32
Table 2 shows that the retention policies for public- and consumer-aligned GAI tools are typically defined by the third-party provider, and users have limited or no control over how long their data is stored or when it is deleted. By contrast, business- and enterprise-aligned tools provide these controls directly to users or organizational administrators. Lawyers who use GAI tools to process Confidential Information or Sensitive Personal Information should understand that these tools retain data in multiple layers, and each layer might have its own retention

29 Nat'l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1) (July 2024), https://doi.org/10.6028/NIST.AI.600-1.

30 Nicholas Carlini et al., Extracting Training Data from Large Language Models, in Proceedings of the 30th USENIX Security Symposium 2633 (USENIX Ass'n 2021), https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting.

31 See Vertex AI and Zero Data Retention, Google Cloud, https://cloud.google.com/vertex-ai/generative-ai/docs/data-governance (last visited Sept. 14, 2025) (providing a detailed discussion of Zero Data Retention applied to GAI tools).

32 See How We Are Responding to The New York Times' Data Demands in Order to Protect User Privacy, OpenAI (June 5, 2025), https://openai.com/index/response-to-nyt-data-demands (last visited Sept. 14, 2025) (describing OpenAI's various data retention capabilities for public, consumer, and business-class versions in response to the Preservation Order entered in In re OpenAI, Inc., Copyright Infringement Litigation, No. 25-md-3143 (S.D.N.Y. 2025)) (referencing In re OpenAI, Inc., 2025 U.S. Dist. LEXIS 97943 (S.D.N.Y. Mar. 13, 2025)).