SACRAMENTO, California — A federal judge on Tuesday struck down a California law restricting AI-generated, deepfake content during elections — among the strictest such measures in the country — notching a win for Elon Musk and his X platform, which challenged the rules.
But Judge John Mendez also declined to give an opinion on the free speech arguments that were central to the plaintiffs’ case, basing his decision instead on federal rules governing online platforms.
Mendez also said he intended to strike down a second law, which would require labels on digitally altered campaign materials and ads, for violating the First Amendment. The judge’s decisions Tuesday deal a blow to California Gov. Gavin Newsom, who signed the laws last year in a rebuke of Musk, vowing to take action after the tech billionaire and then-Donald Trump supporter shared a doctored video of former Vice President Kamala Harris ahead of the election.
The first law would have blocked online platforms from hosting deceptive, AI-generated content related to an election in the run-up to the vote. It came amid heightened concerns about the rapid advancement and accessibility of artificial intelligence, which allows everyday users to quickly create increasingly realistic images and videos, and about the technology’s potential political impact.
But opponents of the measures, like Musk, also argued the restrictions could infringe upon freedom of expression.
The original challenge was filed by the creator of the video, Christopher Kohls, on First Amendment grounds, with X later joining the case after Musk said the measures were “designed to make computer-generated parody illegal.” The satirical right-wing news website the Babylon Bee and conservative social media site Rumble also joined the suit.
The Harris video had depicted her describing herself as the “ultimate diversity hire.”
Mendez said the first law, penned by Democratic state Assemblymember Marc Berman, conflicted with the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from liability for what third parties post on their sites. “They don’t have anything to do with these videos that the state is objecting to,” Mendez said of sites like X that host deepfakes.
But the judge did not address the First Amendment claims made by Kohls, saying it was unnecessary to do so after striking down the law on Section 230 grounds.
“I’m simply not reaching that issue,” Mendez told the plaintiffs’ attorneys.
Neither Newsom’s office nor the office of California Attorney General Rob Bonta immediately responded to requests for comment. Berman’s office declined to comment and the office of Assemblymember Gail Pellerin, the Democrat who authored the second law, did not immediately respond to a request for comment.
Kristin Liska, arguing on behalf of the California attorney general’s office, noted the Berman law only applied to large platforms with 1 million or more users. She therefore asked Mendez to limit his order to plaintiffs X and Rumble.
Kohls’ attorney Theodore Frank told POLITICO after the hearing he would work with Liska to ensure his client is protected on other sites that were not party to the lawsuit, like Facebook and YouTube.
Liska faced tough questioning from Mendez over the second deepfake law, challenged by the same plaintiffs, that requires platforms to label and take down deepfake videos of politicians around election time.
“I think the statute just fails miserably in accomplishing what it would like to do,” Mendez said, adding he would write an official opinion on that law in the coming weeks.
Laws restricting speech have to pass a strict test, including whether there are less restrictive ways of accomplishing the state’s goals. Mendez questioned whether approaches that were less likely to chill free speech would be better.
“It’s become a censorship law and there is no way that is going to survive,” Mendez added.