settings. We've included a more detailed discussion of layered GAI data retention in the Technical Addendum included with this Guide.

(4) Data Isolation

The level of data isolation in a GAI tool determines how closely your data is stored alongside that of unrelated third parties. Vendors who misconfigure their data isolation settings might expose sensitive information and allow unauthorized access to customer-specific data or resources.33 As the sensitivity of the information being processed by a GAI tool increases, so too should the level of data isolation. Lawyers who use GAI tools to process Confidential Information or Sensitive Personal Information should document the isolation settings for those tools. To help with this, we've included a more detailed discussion of how data isolation works in the Technical Addendum included with this Guide.

As shown in Table 2, a GAI tool that is aligned with the public and consumer categories typically provides only basic, and therefore higher-risk, forms of customer isolation. Business- and enterprise-aligned tools are built with stronger isolation, such as independently audited logical separation or customer-dedicated environments.

(5) Supply Chain Risk

GAI tools often rely on a network of downstream service providers that may process, store, or transmit the data you send to the system. Each subprocessor should apply administrative and technical safeguards appropriate to the sensitivity of the data. Lawyers should identify all subprocessors involved in providing and maintaining the GAI tool, evaluate whether their access to client data is necessary and proper, and confirm that each is bound by confidentiality and security obligations through contractual flow-down provisions.

Table 2 describes public and consumer-aligned GAI tools as having few, if any, supply chain safeguards or published details about their supplier ecosystem. Business-aligned tools tend to provide full transparency into the privacy and security practices of their downstream vendors. Enterprise-aligned tools may also provide regulation-level supply chain protection (such as HIPAA Business Associate Agreements that address specific regulatory requirements for downstream providers).

(6) AI Risk Management Frameworks

Reputable GAI tool providers should publish or directly provide written assurances that they design, maintain, and test their models in accordance with recognized risk management frameworks.34 These frameworks address areas such as data governance, security controls, bias mitigation, transparency, and incident response. By aligning with these standards, providers demonstrate that their systems have been evaluated against widely accepted criteria for safety, reliability, and trustworthiness. Lawyers should request documentation of the provider's risk management practices and confirm that those practices remain aligned with these evolving frameworks. This is an area in which industry standards and official frameworks are not yet well established; accordingly, our Table 2 designations should be understood to illustrate that

33 Multitenancy and Azure OpenAI Service, Microsoft Learn (May 13, 2025), https://learn.microsoft.com/en-us/azure/architecture/guide/multitenant/service/openai (last visited Oct. 7, 2025).
34 Examples include the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework (NIST AI RMF) and the Open Worldwide Application Security Project (OWASP) GenAI Security Project.