Frost Brown Todd LLC


California Appellate Courts Remind Practitioners to Avoid Citing AI Hallucinations in Legal Briefs

Jan 13, 2026

Artificial intelligence (AI) is being used more frequently in the legal sector. For example, a March 2023 survey by the Thomson Reuters Institute of 443 respondents, primarily from midsized and large U.S. law firms, revealed that 82% of the surveyed lawyers believed AI can readily be applied to legal work, and 51% said it should be. However, 62% of the respondents in the 2023 survey said their law firms had risk concerns about generative AI.

One such concern involves AI errors and hallucinations, the latter of which occur when generative AI models produce factually incorrect, nonsensical, or fabricated information. A 2025 survey by Counselwell and Spellbook of 256 respondents, primarily from in-house legal departments, showed that 49% used AI for legal research, although 78% of the respondents had concerns about AI errors and hallucinations. Among global law firms, a 2025 International Legal Technology Association survey of nearly 600 firms showed that 80% of the respondents were using or exploring generative AI in 2025, with 80% also listing risks relating to confidentiality, misuse, and accuracy among their biggest concerns. Other 2025 surveys suggest that AI use by lawyers ranges from a low of 26% to a high of 96%, although the higher figure came from a more limited pool of 72 respondents. DeepL, a global language AI company, surveyed 1,000 legal professionals across a variety of sectors, and 30% of the respondents identified AI misuse or hallucinations as concerns in high-stakes litigation.

Damien Charlotin, a researcher whose results are available on the internet, has tracked AI misuse cases around the globe. As of December 24, 2025, he had identified 709 cases involving fabrications and misrepresentations that resulted in various levels of punishment; 482 of these "hits" occurred in the United States. Given these rather astounding statistics, it is small wonder that the California Courts of Appeal, which generate a substantial amount of the published caselaw in California, have recently issued several opinions setting a standard of care for California appellate practitioners using generative AI, a standard that probably extends to state court practitioners and Ninth Circuit appellate lawyers as well.

Notable Appellate Decisions on AI Fabrications and Hallucinations

The first leading opinion in this area, Noland v. Land of the Free, L.P., 114 Cal.App.5th 426 (Sept. 12, 2025), was issued by the Second District Court of Appeal, Division 3. There, in response to an order to show cause by the appellate court, the appellant's attorney acknowledged that his briefs were "replete with fabricated legal authority, which he admit[ted] resulted from his reliance on generative AI sources" that "he did not manually verify." (Id. at 443, 441.) The factual record demonstrated that 21 of the 23 cases quoted were false; in addition, the appellant's opening and reply briefs were "peppered" with inaccurate citations offered in support of their appellate propositions. Noland determined that the attorney's conduct was sanctionable because it rendered the appeal frivolous and because "[t]he appeal also unreasonably violate[d] the Rules of Court because it does not support each point with citations to real (as opposed to fabricated) legal authority." (Id. at 447, citing Cal. Rules of Court, rule 8.204(a)(1)(B).)

Establishing a standard of care, the appellate court in Noland stated that, before filing any court document, an attorney must check every case citation, fact, and argument for correctness rather than delegating that role to AI or any other form of technology. (Id. at 446-447, citing Versant Funding LLC v. Teras Breakbulk Ocean Navigation Enterprises, LLC, 2025 WL 1440351, at *4 (S.D. Fla. May 20, 2025).) Although the offending attorney requested that refiling a corrected brief serve as the penalty, the reviewing panel instead "conservatively" sanctioned him $10,000, payable to the court. (Id. at 448.) Opposing counsel might have recovered some fees for filing the respondent's brief, but because that counsel did not alert the appellate court to the AI fabrications, no sanctions were awarded to the respondent; the appellate court also declined to dismiss the appeal and affirmed the judgment below on the merits. (Id.)

Noland was followed by the Fourth District, Division 1's opinion in People v. Alvarez, 114 Cal.App.5th 1115 (Oct. 2, 2025). There, a criminal defendant's attorney cited one nonexistent case and misrepresented the legal propositions of two other cases. (Id. at 1117-1118.) As in Noland, the attorney admitted his lack of professionalism in failing to verify the cases provided to him by AI. (Id. at 1118.) Alvarez sanctioned the attorney $1,500, payable to the appellate court, under the sanctions authority of Code of Civil Procedure section 128.7, subdivision (b)(2). (Id. at 1120.) Consistent with Noland, the Alvarez court found that "attorneys must check every citation to make sure the case exists and the citations are correct." (Id. at 1119.)

Schlichter v. Kennedy, 2025 WL 3204738 (4th Dist., Div. 2 Nov. 17, 2025) involved an attorney who used AI-fabricated legal authority in a petition for writ of supersedeas, which was summarily denied, and in the appellant's subsequent opening brief. The appellate court issued an order to show cause (OSC) as to why sanctions should not be imposed. At the OSC, although admitting he used AI on appeal and apologizing to the appellate court for the "mistakes," the offending attorney took the position, in contrast to the attorneys in Noland and Alvarez, that the spurious citations resulted from clerical errors unrelated to the use of generative AI. The Fourth District, Division 2 found the attorney's explanations not credible and sanctioned him $1,750, payable to the appellate clerk. The panel likewise agreed with Noland and Alvarez that attorneys must check the correctness of AI-generated authority rather than delegating this responsibility to any form of technology.

The latest published California opinion in this area is the Second District, Division 1's per curiam decision in Shayan v. Shakib, 2025 Cal.App. LEXIS 782 (Dec. 1, 2025). The offending attorney there filed an appellant's opening brief containing numerous fabricated quotations falsely attributed to published decisions. The respondent picked up on the fabrications and moved to strike the appellant's opening brief. The appellate court did so, sanctioning the appellant's attorney $7,500, payable to the clerk, but allowing the appellant 10 days to file a corrected opening brief curing only the fabricated citations and quotations in the original brief. Like the attorney in Schlichter, appellate counsel in Shayan unsuccessfully argued that the citation errors were merely clerical. Shayan endorsed the strict liability test from the prior intermediate appellate decisions, signaling zero tolerance for hallucinated case citations, facts, or law. (Quoting Noland, 114 Cal.App.5th at 448-449.)

In line with these decisions, California State Senator Tom Umberg has introduced Senate Bill 574 for consideration by the California Legislature. The bill provides that attorneys must review, verify, and correct any AI-generated work, with sanctions available for violations and possible state bar repercussions for breaching this standard of care. The bill would add Government Code section 6068.1(a)(3)(A)-(B) and Code of Civil Procedure section 128.7(b)(2)(A) to the statutory scheme.

Key Takeaways

  • Fall on your sword. Although you likely will be sanctioned to some extent, the attorneys who paid smaller sums to the court clerk were more candid in admitting their use of AI and in taking responsibility for the fabrications.
  • If you are the responding party on appeal and spot fabrications, ask the appellate court to take action, such as dismissing the appeal or striking the offending brief, or at least point out the infirmities in the respondent's brief. Otherwise, you may fail to preserve the ability to recover frivolous-appeal sanctions for your efforts as respondent.
  • Follow the advisory prompts from the California intermediate appellate courts. On October 31, 2025, the intermediate appellate courts posted an article under News and Announcements titled "Using AI for Your Court Case? Read This First." It contains several advisory warnings, including these: (1) double-check anything AI gives you; (2) ask AI to give sources, and check them yourself; and (3) you must make sure all court papers are correct, even if AI helped.

FBT Gibbons' appellate team has a proven track record of success in appeals involving questions of first impression, bet-the-company judgments, and decisions that shape the rules under which our clients will operate well into the future. For more information, please contact the author or any attorney with the firm's Appellate Practice Group.
