Frost Brown Todd LLC

03/27/2026 | Press release | Distributed by Public on 03/27/2026 08:27

Generative AI Disclaimers: A Practitioner’s Guide


This guide covers state laws requiring disclaimers and disclosures for generative artificial intelligence (GenAI). It also discusses court requirements and the ABA Model Rules addressing lawyers' use of GenAI. Finally, the practice note includes a limited model GenAI disclaimer clause that a business can adapt, with the necessary modifications, to its particular circumstances.

Given the proliferation of AI and GenAI products and services available to law firms, corporations, public entities, and everyday users, various laws and rules now require certain disclaimers and disclosures. State laws require certain entities and organizations using consumer-facing AI or GenAI tools to make mandated disclosures and disclaimers to the public about those tools or face penalties. Similarly, some courts require attorneys using GenAI tools in legal pleadings and documents to make their own disclosures. The American Bar Association's Model Rules touch on GenAI-related disclosures from law firms. Finally, businesses are finding they need specific GenAI-related disclaimers for their products and services to minimize liability.

State Laws Requiring GenAI Disclaimers

California

The California AI Transparency Act (S.B. 942), now operative August 2, 2026, requires any generative AI (GenAI) system provider with over one million monthly visitors or users (a "provider") to make available to users a free AI detection tool that can assess whether content was created or altered by the provider's GenAI system. See Cal. Bus. & Prof. Code §§ 22757.1-22757.2. The provider must offer users the option to include a manifest disclosure, clear and conspicuous to the user and permanent or extraordinarily difficult to remove, stating that the content was generated by AI. Cal. Bus. & Prof. Code § 22757.3. Latent disclosures are also required in content created or altered using the provider's GenAI system; the disclosure must include the version of the GenAI system and a time, date, and unique-identifier stamp. Cal. Bus. & Prof. Code § 22757.3. A company licensing the provider's GenAI system must comply with the rules for latent disclosures. Cal. Bus. & Prof. Code § 22757.3(c). Violations subject companies to civil penalties and/or injunctive relief, plus attorneys' fees and costs. Cal. Bus. & Prof. Code § 22757.4. Video games, TV, streaming, and interactive experiences consisting exclusively of non-user-generated content are excluded from the Act. Cal. Bus. & Prof. Code § 22757.5. The Attorney General's office may prosecute violations of the statute. There is no safe harbor provision allowing violations to be cured.

A.B. 853 delays enforcement of the California AI Transparency Act from January 1, 2026, to August 2, 2026. Cal. Bus. & Prof. Code § 22757.6. Effective January 2027, it requires large online platforms (defined as a search engine, social media, file sharing, or mass messaging platform with two million or more non-contributor unique monthly users) to provide a user interface that discloses the availability of system provenance data indicating whether content on the platform was generated or substantially altered by a GenAI system or captured by a capture device (defined as a camera, a mobile phone with a camera or microphone, or a voice recorder). Cal. Bus. & Prof. Code § 22757.3.1. The user interface must conspicuously disclose information sufficient to identify the content's authenticity, origin, history of modification, available provenance data, and the GenAI system or capture device that created or substantially altered the content. Cal. Bus. & Prof. Code § 22757.3.1. Starting January 2027, a GenAI hosting platform or application that permits users to download the source code or model weights of any GenAI system must ensure the disclosures required by Section 22757.3. Cal. Bus. & Prof. Code § 22757.3.2. A capture device manufactured after January 1, 2028, must embed automatic latent disclosures in content captured by the device and must provide users with the option to include latent disclosures that state the time and date of content creation or alteration and the name and manufacturer of the device. Cal. Bus. & Prof. Code § 22757.3.3.

The California Generative AI Data Training Transparency Act applies to any person or entity that produces or substantially modifies an artificial intelligence system or service for use by the public (defined as a "developer"). See Cal. Civ. Code § 3110.

Starting January 1, 2026, the developer of a GenAI system publicly available to Californians must disclose on its website documentation regarding the data used to train the GenAI system, including: the sources or owners of the datasets; the number of data points; the types of data points; whether data protected by copyright, trademark, or patent is included; whether the data includes personal information or aggregate consumer information; any cleaning, processing, or modification of the datasets; the time periods of collection and use of the datasets; and the use of synthetic data. See Cal. Civ. Code § 3111(a) and Cal. Civ. Code § 1798.140. GenAI systems used solely for safety and security purposes, for the operation of aircraft, or for national security, military, or defense purposes are excluded from these requirements. Cal. Civ. Code § 3111(b).

The California Transparency in Frontier Artificial Intelligence Act, enacted in September 2025, is landmark legislation regulating large frontier AI models. See Cal. Bus. & Prof. Code §§ 22757.10-22757.16. Among other requirements, the statute provides that a frontier model developer must disclose on its website certain aspects of its AI risk framework. Cal. Bus. & Prof. Code § 22757.12(a). A "frontier model" is defined as a generative AI model trained using computing power greater than 10^26 integer or floating-point operations, including the computing used for the original training and for subsequent fine-tuning, reinforcement learning, or other material modifications to the preceding foundation model. Cal. Bus. & Prof. Code § 22757.11(i). Specifically, the developer must disclose how its AI risk framework will handle a list of AI risk issues, including best practices to assess and mitigate possible catastrophic risk from use of the frontier AI model, both internally and when used by the public; regularly updating such standards and best practices; using third-party risk assessment and mitigation tools as necessary; cybersecurity practices to prevent unauthorized use of and access to the model; and identifying and responding to critical safety incidents, including if the model circumvents oversight mechanisms. See Cal. Bus. & Prof. Code § 22757.12(a).

"Catastrophic risk" is defined as a foreseeable and material risk that a frontier model will materially contribute to the death or serious bodily injury of more than 50 people, or cause more than $1 billion in damage to property, arising from a single incident in which the model does any of the following: (a) provides expert assistance in the creation of chemical, biological, radiological, or nuclear weapons; (b) engages, without human involvement, in a cyberattack or in conduct that would constitute murder, assault, extortion, or theft if committed by a human; or (c) evades the control of its frontier developer or user. Cal. Bus. & Prof. Code § 22757.11(c)(1). A "critical safety incident" includes unauthorized access, modification, or exfiltration of model weights that results in death or bodily injury; loss of control of a frontier model causing death or bodily injury; the use of deceptive techniques by the frontier model against the frontier developer to subvert control or monitoring in a manner that materially increases catastrophic risk; and harm resulting from a catastrophic risk materializing. Cal. Bus. & Prof. Code § 22757.11(d).

Colorado

The Colorado Consumer Protections for Artificial Intelligence Act, effective June 30, 2026, requires developers of high-risk AI systems to use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination. C.R.S.A. § 6-1-1701 et seq. High-risk AI systems are those that make consequential decisions about a consumer in education, employment, finance/lending, essential government services, healthcare, housing, insurance, or legal services. C.R.S.A. § 6-1-1701.

To ensure compliance, a developer or deployer of a high-risk AI system must satisfy various requirements, including disclosures about the intended and inappropriate uses of the high-risk AI system, the type of data used to train the system, foreseeable limitations of the system and the risks of algorithmic discrimination, how the system was evaluated for performance, how risks were mitigated prior to deployment, the data governance measures used, intended outputs, human-in-the-loop oversight, and monitoring of the system's performance. C.R.S.A. § 6-1-1702. The developer must report any risks and actual incidents of algorithmic discrimination to the Attorney General's office within 90 days. C.R.S.A. § 6-1-1702.

Companies deploying high-risk AI systems must comply with several requirements for deployers, complete an impact assessment of the AI system, and make consumer-facing disclosures. C.R.S.A. § 6-1-1703. These disclosures include notifying consumers that AI is being used to make consequential decisions about them; disclosing the principal reasons for any consequential decision adverse to the consumer and the data used in that decision; providing the opportunity to appeal an adverse decision with human review; and honoring the right to opt out of the processing of personal data where applicable. C.R.S.A. § 6-1-1703. The Attorney General may enforce the statute, including as an unfair trade practice. C.R.S.A. § 6-1-1706.

Utah

The Utah Artificial Intelligence Policy Act ("UAIPA") establishes liability for entities that do not provide clear and conspicuous disclosures to consumers that they are interacting with an AI system. Similar to the Colorado statute, businesses providing services regulated or licensed by the state must prominently disclose to consumers their use of "high-risk" GenAI, including its use for the collection of sensitive personal information and for making significant personal decisions in financial, legal, or healthcare matters. The Attorney General can levy fines and seek an injunction, disgorgement, and attorneys' fees for violations.

Texas

The Texas Responsible Artificial Intelligence Governance Act bars any AI system from discriminating against a protected class of people. It also bars the government use of AI systems that use "social scoring" (i.e., classification of people based on personal attributes, inferred or predicted, to deny benefits or services). See Tex. Bus. & Com. Code § 551.001 et seq.

The Act requires government agencies using artificial intelligence systems to interact with consumers to disclose to each consumer that the consumer is interacting with an artificial intelligence system. Tex. Bus. & Com. Code § 552.051. The disclaimer must be clear and conspicuous and written in plain language. It must avoid the use of a dark pattern, as defined in the Texas Business & Commerce Code. The disclosures may be provided by a link to a separate web page of the AI system developer or deployer.

New York

The New York legislature recently passed the New York Responsible AI Safety and Education ("RAISE") Act. It follows the California Transparency in Frontier Artificial Intelligence Act in requiring frontier model developers to make significant disclosures about the safety of their models and their plans to assess, address, and mitigate possible harms, including crime, bioweapons, and other widespread risks to public safety.

Courts Requiring GenAI Disclaimers

Concerned with the rising use and misuse of GenAI in the practice of law, some courts are mandating that attorneys and pro se parties certify whether they used GenAI tools to draft pleadings and documents filed with the court. Courts are also requiring parties to independently verify the accuracy of the contents of pleadings drafted using GenAI.

The U.S. District Court for the Northern District of Texas was the first to issue a Standing Order on the use of GenAI pleadings and court filings in 2023. Since then, several state and federal courts have issued rules and standing orders about the use of GenAI and relevant disclosure, including the Eastern District of Pennsylvania, the Northern District of Illinois, and the New York Supreme Court.

ABA Rules of Professional Conduct and GenAI Disclaimers

Rule 1.4 of the ABA Model Rules of Professional Conduct addresses attorney communications and requires attorneys to communicate with their clients in a timely manner. Specifically, attorneys must "reasonably consult with the client" about the case as well as "keep the client reasonably informed about the status of the case." In the current context of GenAI legal tools, this rule suggests that lawyers inform their clients when GenAI tools or non-standard AI legal research and document/data management systems (other than Lexis, Westlaw, Relativity, etc.) are used on client matters. It is recommended to disclose the law firm's use of AI and GenAI tools to clients at the time of retention (in the engagement letter), or to obtain the consent of existing clients to the use of such tools in their matters. This not only demonstrates the law firm's use of emerging technologies but also ensures transparency and minimizes the need for clarification later.

Model GenAI Disclosure Clause

"The content in this [document/photograph/graphic/presentation/video/audio/search result] has been [created/altered/revised/modified] using AI, including generative AI. The content may be, or may contain, data or information that is incorrect, misleading, incomplete, or erroneous. Use of or reliance upon the content is at the user's own risk. Users must independently verify the accuracy and veracity of the content for their specific use."

For more information, please contact the author or any attorney with our Business and Commercial Litigation practice group.

*This excerpt from Practical Guidance, a comprehensive resource providing insight from leading practitioners, is reproduced with the permission of LexisNexis. Reproduction of this material, in any form, is specifically prohibited without written consent from LexisNexis.

Frost Brown Todd LLC published this content on March 27, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on March 27, 2026 at 14:27 UTC. If you believe the information included in the content is inaccurate or outdated and requires editing or removal, please contact us at [email protected]