Landmark Ruling: California Courts Adopt Generative AI Policies

The Hall of Justice, Los Angeles, CA. Courtesy Tupungato via Adobe Stock

For most people, interactions with the court system are rare, but when they do occur, the consequences can be significant. Generative AI introduces both opportunities and risks into the judicial process. It can improve efficiency by drafting documents, summarizing evidence, and assisting in legal research. Yet without clear safeguards it poses dangers: sensitive information could inadvertently be fed into public AI tools, risking data breaches or misuse; inaccurate, fabricated, or biased AI-generated content could misinform judges, attorneys, or the public, leading to unfair outcomes; and systemic bias embedded in historical legal data could be reinforced. Even minor errors in legal documents can have enduring consequences for a case.

On July 18, 2025, the California Judicial Council approved a landmark framework governing the use of generative artificial intelligence (GenAI) in the state’s judicial branch. The Judicial Council is the policy-making body of the California courts. The framework applies to the entire Judicial Branch of California, the largest court system in the United States, comprising the California Supreme Court, the California Courts of Appeal, and the California Superior Courts. California is both the largest court system in the nation and among the first to adopt such a sweeping GenAI regulatory framework.

The regulation was developed by the Judicial Council’s AI Task Force, established in 2024 by Chief Justice Patricia Guerrero in response to growing public concern about how AI could affect the administration of justice. The AI Task Force is an advisory body charged with developing policy recommendations for the Judicial Council on AI use in the judicial branch. Its mandate was to find a balance between leveraging new technology for efficiency and ensuring the courts remain impartial, secure, and trustworthy.

The Framework for Regulating Generative AI

The AI Task Force’s report to the Judicial Council is titled "Rule and Standard for Use of Generative Artificial Intelligence in Court-Related Work." The report recommends a Rule of Court and a Standard of Judicial Administration, set out on pages 15–19. A Rule of Court is a binding requirement with formal consequences for noncompliance. A Standard of Judicial Administration, by contrast, is an advisory guideline: not strictly enforceable like a rule, it functions as a recommendation of best practices, encouraging consistency while allowing discretion.

Rule of Court 10.430 contains provisions that do the following:

  • Prohibit the entry of confidential, personal identifying, or other nonpublic information into a public GenAI system. For example: driver’s license numbers, dates of birth, Social Security numbers, addresses, phone numbers, medical or psychiatric information, financial information, and content sealed by court order or deemed confidential by court rule (a minimal screening sketch appears after this list).

  • Prohibit the use of GenAI to discriminate against or disparately impact people.

  • Ensure that court staff and judicial officers who create or use GenAI material confirm that the material is accurate and free of erroneous, hallucinated, biased, offensive, or harmful content.

  • Ensure compliance with all applicable laws, court policies, and ethical/professional conduct rules, codes, and policies when using GenAI.

  • Ensure the disclosure of use of or reliance on GenAI if the final version of a written, visual, or audio work provided to the public consists entirely of generative AI outputs.
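
To make the first and last provisions concrete, below is a minimal sketch, in Python, of the kind of pre-submission screen a court could place in front of a public GenAI tool. It is illustrative only and not part of the Judicial Council’s framework: the pattern list, function names, and disclosure wording are all hypothetical, and a production screen would need far broader coverage (names, addresses, medical details, sealed-content checks) than a couple of regular expressions.

    import re

    # Hypothetical illustration only; not part of Rule 10.430 or any court's
    # actual tooling. The patterns cover just two of the data types the rule
    # names, and the disclosure wording is invented for this example.
    CONFIDENTIAL_PATTERNS = {
        "Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def screen_prompt(text: str) -> list[str]:
        """Return the names of confidential data types detected in a draft prompt."""
        return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
                if pattern.search(text)]

    def label_output(text: str, entirely_ai_generated: bool) -> str:
        """Append a disclosure notice when a public-facing work product
        consists entirely of GenAI output (hypothetical wording)."""
        if entirely_ai_generated:
            return text + "\n\n[This document was produced using generative AI.]"
        return text

    hits = screen_prompt("Summarize the motion; the petitioner's SSN is 123-45-6789.")
    if hits:
        # Block submission to the public GenAI system and route for human review.
        print("Blocked: prompt appears to contain:", ", ".join(hits))

In practice, pattern matching of this kind would supplement, not replace, the human review the rule requires, since no automated filter can reliably catch every form of confidential or sealed information.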

Standard of Judicial Administration 10.80 contains similar provisions, covering the use of GenAI by judicial officers for tasks within their adjudicative role, and differs slightly on disclosure: it allows officers to decide whether to disclose to the public that GenAI was used to create content. It also reminds judicial officers to “comply with applicable laws, court policies, and the California Code of Judicial Ethics when using generative AI.” The AI Task Force imposed a standard in addition to a rule because it considered an advisory approach more appropriate for judicial officers’ adjudicative tasks: the standard identifies the major risks of GenAI while allowing judicial officers to use their judgment in mitigating those risks in their work.

Effective September 1, the regulation requires any California court that permits the use of GenAI to implement a written use policy by December 15. Courts can either adopt the Judicial Council’s model policy or create their own, so long as their version meets the regulation’s core requirements. This flexibility leaves local courts best positioned to determine how GenAI can be used responsibly within their specific operational needs.

The policies directly impact approximately 1,800 judges and tens of thousands of court employees across California’s 65 courts, as well as attorneys, clerks, and litigants interacting with the system. Law firms and self-represented litigants will also be indirectly affected, since filings and communications with the courts will now be subject to these oversight measures. The rules are likely to shape how lawyers prepare documents, how court staff handle administrative work, and how judges incorporate emerging technologies into their workflow.

While the rule mandates that courts permitting AI adopt a policy by December 15, enforcement will primarily occur at the local court level. Each court will be responsible for ensuring its staff and judicial officers comply with the safeguards, and for incorporating the requirements into internal procedures. There is no central “AI compliance unit,” so courts must ensure their policies align with the mandatory provisions of Rule 10.430. Since the standard is advisory, adherence in adjudicative contexts will depend on individual judicial officers’ judgment, guided by their ethical duties. Over time, compliance may be reinforced through training programs, audits, and public reporting.

The AI Task Force emphasizes that the regulation is only an initial step, because GenAI technology evolves rapidly. The Council deliberately avoided creating an exhaustive list of approved or prohibited tools. The framework is designed to be updated as courts gain more experience with AI and as risks or opportunities become clearer. The Task Force intends to continue monitoring how AI is used in court settings, gather feedback from judges, staff, and the public, and consider potential revisions to disclosure requirements or safeguards. This approach allows for refinement over time to ensure that policy keeps pace with technological innovation and public expectations.

What Drove California Towards Regulation?

Data Privacy and Security Risks

A central concern prompting the Judicial Council to take action is the potential mishandling of sensitive data when using GenAI. Public AI platforms store and use input data to improve their models. When court staff or judicial officers enter confidential case information into these systems, that data might end up on servers outside the court’s control, where it might be stored indefinitely, accessed by unauthorized parties, or incorporated into future AI outputs.

Court records contain personal details and sensitive information. A single instance of sensitive information entering a public AI system constitutes a breach of privacy, which could in turn harm individuals and erode trust in the judiciary’s ability to safeguard information. Beyond direct breaches, there is also the risk of inadvertent exposure. Even when specific details are not publicly visible, AI systems trained on confidential court data could generate outputs revealing sensitive case information; an AI model could, for example, indirectly disclose the contents of a sealed record in response to a loosely related query.

California’s rules address these risks by explicitly prohibiting the entry of confidential, personal identifying, or other nonpublic information into public generative AI systems. Courts are also tasked with ensuring their AI policies include clear safeguards against these practices. This approach is designed to prevent both intentional misuse and accidental data loss, reinforcing the judiciary’s commitment to protecting privacy.

Ethical Considerations

The California Code of Judicial Ethics requires judges to preserve the integrity and impartiality of the judiciary, ensuring that court processes remain fair, transparent, and accountable to the public. The GenAI policy is grounded in the judicial ethics principles of impartiality, confidentiality, and independence. GenAI poses unique challenges to these obligations because its outputs can be inaccurate, biased, hallucinated, and opaque. Requiring human review of AI-generated content, prohibiting the disclosure of confidential information to public AI systems, and encouraging transparency through disclosure help preserve these ethical foundations while allowing courts to adapt to technological change.

Public confidence in the judiciary depends on knowing that decisions are made through human judgment, not solely by algorithms whose inner workings cannot be examined. Californians expect judicial decisions to be made by people who are trained, accountable, and guided by the law. When courts disclose how and when AI is used, the public can be assured that nothing is hidden and that human judgment remains central.

Transparency ensures that when AI plays a role in court communications or decision-making, the public is aware of it. Accountability ensures that human oversight remains a non-negotiable part of the process, with clear responsibility for verifying accuracy and preventing bias. By requiring written policies and disclosure in key situations, California is setting a precedent for how courts nationwide might balance innovation with the principles of judicial integrity and fairness.

Privacy Protections with National Implications

The new AI court rules align closely with California’s broader push to strengthen privacy protections, including the state’s recent addition of a Privacy Law specialization for attorneys. Both emphasize safeguarding personal information, limiting its use to authorized purposes, and maintaining accountability in how data is handled. While the specialization prepares lawyers to navigate statutes like the CCPA and CPRA across industries, the judicial framework applies those same principles within the courts, explicitly prohibiting the entry of confidential or nonpublic information into public AI systems and requiring oversight to prevent misuse or inadvertent disclosure.

These privacy safeguards are a core part of what makes California’s GenAI framework distinctive and potentially influential on a national scale, and why it could serve as a model for other states. By combining mandatory rules for certain contexts with advisory best practices for others, the approach balances flexibility with consistent protections. While some individual courts in other states have issued local guidelines, and certain federal agencies have begun exploring AI in administrative settings, no other state court system has adopted rules and standards on this scale. As public concern over AI in legal decision-making grows, the California model may serve as a reference point for jurisdictions seeking to maintain public trust while integrating emerging technologies. Over time, the principles of transparency and accountability embedded in these rules could form the foundation for national policy, ensuring that technological advancement strengthens trust in the justice system.

This story was originally reported by L.A. Mag on Aug 11, 2025, where it first appeared.
