Federal judges and AI
The integration of artificial intelligence (AI) into the US federal judiciary is a complex field that presents both opportunities and challenges. Since 2025, interim internal guidance has permitted the use of AI while calling for clear caution and personal responsibility. At the same time, errors from AI use in the chambers of federal judges have sparked a broad debate about rules, liability, and transparency. A draft of a new Rule of Evidence 707, which would subject "machine-generated evidence" to stricter scrutiny, is open for public comment.
Introduction
"Federal judges AI" describes the tension between US federal courts and artificial intelligence. This includes the use of AI in internal work (research, drafts), in the parties' pleadings, as well as a possible evidentiary item before the court. Generative AI can provide ideas, but it also tends to hallucinate sources and citations. A prominent example of this was the case Mata v. Avianca im Jahr 2023, , in which invented precedents led to sanctions. Chief Justice John Roberts emphasized in late 2023 the opportunities of AI for access to justice, but also the necessity of human judgment and humility in AI usage. The American Bar Association published first ethics guidelines in 2024, , which permit the use of AI in accordance with competence, confidentiality, communication, and appropriate fees.
Regulatory landscape
In May and June 2023, federal courts responded to the Avianca affair. Judge Brantley Starr (N.D. Tex.) was the first to require a "Mandatory Certification" for the use of generative AI in pleadings, demanding human review and confirmation of every AI-generated passage. Similar disclosure requirements followed in Pennsylvania and Illinois. In 2024/25, further courts issued comparable orders; overviews are provided by Stanford Law and by trackers maintained by large law firms.
On July 31, 2025, the Administrative Office of the U.S. Courts (AO) sent interim guardrails to all federal courts. They allow the use and testing of AI but prohibit delegating core judicial decisions to it. AI outputs must be independently verified, and users remain fully responsible. The AO recommends that courts define local tasks for which approved tools may be used. FedScoop reported on these guardrails.
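To make the division of labor concrete, here is a minimal sketch in Python of how a court might encode such a local task definition as an allow-list. The task names and the AIUseRequest structure are invented for illustration and are not the AO's actual format; the sketch merely mirrors the stated guardrails: core decisions are never delegated, and a named human remains responsible.

```python
from dataclasses import dataclass

# Hypothetical local policy: task names and this structure are illustrative
# assumptions, not the AO's actual guidance format.
ALLOWED_TASKS = {"summarize_record", "draft_research_memo", "check_citations"}
CORE_FUNCTIONS = {"decide_motion", "determine_sentence", "weigh_evidence"}

@dataclass
class AIUseRequest:
    task: str
    human_verifier: str  # the person who independently reviews and stays responsible

def permitted(req: AIUseRequest) -> bool:
    """Mirror the interim guardrails: never delegate core decisions,
    and require a named human verifier for any allowed task."""
    if req.task in CORE_FUNCTIONS:
        return False  # delegating core judicial functions to AI is prohibited
    return req.task in ALLOWED_TASKS and bool(req.human_verifier)

print(permitted(AIUseRequest("check_citations", "clerk_jdoe")))  # True
print(permitted(AIUseRequest("decide_motion", "clerk_jdoe")))    # False
```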
In parallel, evidence law is moving toward classifying AI outputs. The draft of a new Federal Rule of Evidence 707 was recommended for publication on June 10, 2025; the comment period runs from August 15, 2025 to February 16, 2026. The aim is to subject machine-generated evidence, where no human expert testifies, to the same reliability standard as expert testimony under Rule 702. A judicial panel had already publicly endorsed this direction on May 2, 2025.
In October 2025, two federal judges, Julien Neals (D.N.J.) and Henry Wingate (S.D. Miss.), confirmed that staff had used ChatGPT or Perplexity for drafting work. The result was faulty orders that had to be withdrawn, and a tightening of internal rules. Reuters and AP News reported on the episode.

[Image: Justice in the digital age: a symbolic depiction of the integration of AI into jurisprudence. Source: the-decoder.com]
Challenges and Opportunities
The mix of openness and caution in dealing with AI has several reasons. First, quality assurance: generative AI can speed up processes, but hallucinated citations jeopardize procedural fairness and create sanction risks; the Avianca case shows that courts do punish Rule 11 violations. Second, confidentiality and IT security: the AO stresses the protection of sensitive data and urges clear procurement and security processes for AI tools. Third, federal diversity as a test bed: the judiciary's administration encourages local experimentation to explore responsible areas of deployment, bundling pilot projects and knowledge sharing. Fourth, clarity in evidence law: the proposed Rule 707 is meant to prevent AI outputs that appear convincing but are methodologically questionable from distorting proceedings.
The federal judiciary's interim internal guidelines of July 31, 2025 show that use is allowed, but delegating core functions to AI is prohibited and independent verification is required. Several federal judges require disclosure or certification when generative AI is used in pleadings, beginning with Brantley Starr on May 30, 2023. The draft of FRE 707 on "machine-generated evidence" has been open for public comment since August 15, 2025. Whether a general disclosure obligation applies before all federal courts is unclear, as such duties are currently judge- or court-specific. The AO guidelines were described to the Senate, but the full text has not been made generally available. The claim that "federal judges let AI render judgments" is false, because the AO explicitly counsels against delegating core functions to AI. The assertion that "Avianca was just media hype, with no consequences" is likewise false, as the court imposed actual sanctions.

[Image: The coexistence of humans and machines in the courtroom: a vision of the future of justice. Source: law.com]
Practical Implications
For everyone working in US federal proceedings, it is crucial to check the locally applicable standing orders on AI before any filing, because disclosure, certification, and review obligations vary. Every AI output must be verified against reliable sources, because Rule 11 sanctions are real. It is important to use privacy-compliant tools, as the AO emphasizes procurement, security, and accountability. From an evidentiary perspective, the development of FRE 707 should be monitored, as the bar for machine-generated outputs will rise.
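As a concrete illustration of that verification step, here is a minimal Python sketch, under simplifying assumptions, that flags reporter citations in a draft that no human has yet verified. The citation pattern, the flag_unverified helper, and the verified list are hypothetical; real verification means reading the cited authority in an official source, not merely matching strings.

```python
import re

# Hypothetical helper: extract federal reporter citations
# (e.g. "573 U.S. 682", "678 F. Supp. 3d 443") from an AI-assisted draft
# and flag any not on a human-verified list. The regex is a deliberately
# simplified assumption, not a complete citation grammar.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|F\. Supp\. \d?d)\s+\d{1,4}\b")

def flag_unverified(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations that appear in the draft but were never verified."""
    found = set(CITATION_RE.findall(draft_text))
    return sorted(found - verified)

if __name__ == "__main__":
    draft = "See Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
    print(flag_unverified(draft, verified={"678 F. Supp. 3d 443"}))  # [] -> all checked
```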
Open questions concern the final form and accessibility of the federal guidelines, as the AO so far refers only to interim guidance and the full text has not yet been published. The final version of FRE 707 after the comment period, and its interplay with Rule 702 in everyday practice, also remain unclear. In addition, it is an open question whether courts will systematically record future AI failures, as the AO currently does not report national statistics outside certain bankruptcy contexts.

[Image: The courtroom of the future: technology as an integral part of legal decision-making. Source: freepik.com]
Conclusion and Outlook
The development shows a clear trajectory: permitted use of AI with clear human responsibility, increasing transparency in pleadings, and growing evidentiary certainty through FRE 707. For practitioners, this means knowing the local requirements, using AI judiciously and sparingly, independently validating every output, and actively tracking developments in evidence and professional conduct law. The Stanford overview, the FRE 707 comment phase, and the ABA ethics guidelines are important points of reference.