News

Red Pilling of Politics – Court Strikes Down California Law on Political Deepfakes

  • Mark Rasch – securityboulevard.com
  • published date: 2025-10-10 00:00:00 UTC


<p>In The Matrix (the first film), Morpheus tells Neo, “You have to understand, most of these people are not ready to be unplugged.” California’s legislature, in passing Assembly Bill 2655 (AB 2655), thought it could hand voters the red pill — forcing large platforms to label or remove AI-generated “materially deceptive” political content so the electorate could distinguish real from fake. But the federal court in Sacramento noted that Congress, in passing Section 230 of the Communications Decency Act, provided broad immunity to ISPs and carriers.</p><h3><strong>Political Deepfakes</strong></h3><p>In February 1972, “Paul Morrison” of Dover, N.H. wrote a letter to the conservative Manchester Union Leader claiming that Morrison had overheard Muskie laughing at a joke about French-Canadians – “Canucks” – being “lumberjacks” who “can’t speak English,” and that Muskie – the Democratic candidate for President – condoned this kind of ethnic slur. The so-called “Canuck letter” was famously an effective political fake organized by political operatives working for the Nixon campaign. Since then, the political fake has evolved into the political deepfake. In 2024, a manipulated audio recording of President Biden telling voters to “stay home” circulated just before New Hampshire’s primary. In the last election, a flood of AI-generated images showed Donald Trump in prison jumpsuits, Kamala Harris laughing at gas prices, and fabricated “endorsements” by celebrities who never spoke. As AI improves, these images and videos become indistinguishable from actual political speech. Even if a majority of people know or suspect that something is a deepfake, the damage may already be done. <br><br>The California legislature responded, passing AB 2655, which required large platforms to remove or label deceptive deepfakes during political campaigns and to provide reporting tools for Californians. 
The idea was to deputize platforms to act as referees, ensuring that false, manipulated speech didn’t sway elections. A federal court in California, however, struck down that law as unconstitutional.</p><h3><strong>Protected Speech</strong></h3><p>The First Amendment protects false speech almost as zealously as true speech. In United States v. Alvarez, 567 U.S. 709 (2012), the Supreme Court struck down the Stolen Valor Act, which criminalized lying about military honors. Justice Kennedy wrote for the plurality:<br><br>“The Court has never endorsed the categorical rule the Government advances: That false statements receive no First Amendment protection. The Government’s content-based restrictions on speech have been permitted, as a general matter, only when confined to the few categories of speech where the law has a long tradition of regulation.” Id. at 718.<br><br>So while fraud, defamation, or incitement can be punished, mere falsity — even egregious falsity — is not enough. A deepfake that makes a candidate look foolish may be offensive, even manipulative, but unless it crosses into defamation or true threats, it’s constitutionally protected.</p><h3><strong>Section 230 and the Clash</strong></h3><p>Perhaps the most famous Internet regulation is Section 230 of the Communications Decency Act. This 1996 law, 47 U.S.C. § 230(c)(1), provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have consistently held that, under Section 230, platforms like X, Meta, and YouTube aren’t legally responsible for what their users post. They can moderate, label, or remove if they want, but the government can’t force them to. AB 2655 did exactly that — requiring platforms to treat third-party deepfakes as their own problem, and compelling them to take specific action.<br><br>In Kohls v. Bonta, No. 2:24-cv-02527-JAM-CKD (E.D. Cal. Aug. 28, 2025), a coalition including The Babylon Bee and other creators challenged the law, and the California Attorney General ultimately agreed that the new law violated Section 230. As a result, the parties stipulated that the statute would not be enforced against providers. Senior District Judge John A. Mendez struck it down, holding:<br><br>“AB 2655 violates and is preempted by Section 230 of the Communications Decency Act of 1996 (47 U.S.C. § 230). Defendants shall not enforce AB 2655, in its entirety, against any ‘provider’ of ‘an interactive computer service.’” Order at 2, <a href="https://cases.justia.com/federal/district-courts/california/caedce/2%3A2024cv02527/453046/100/0.pdf" target="_blank" rel="noopener">available here</a>. 
<br><br>The judge didn’t need to resolve the broader First Amendment questions because Section 230 preemption was enough. But the plaintiffs had argued — convincingly — that the law also amounted to compelled speech, prior restraint, and viewpoint-based regulation, all toxic under constitutional scrutiny.<br><br>However, politicians are not the only ones impacted by deepfakes. In May of this year, Congress passed the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act), Public Law 119-12, which requires online platforms (especially user-generated content platforms) to establish a notice-and-takedown system whereby such content must be removed within a defined period after valid notice – similar to the protections afforded copyrighted materials posted in an infringing manner online. Because TAKE IT DOWN is on equal legal footing with Section 230 (both were passed by Congress), it does not suffer the same infirmities as the state statute – it effectively can compel the takedown of information.<br><br>Unlike AB 2655, the TAKE IT DOWN Act targets a very narrow category of speech — non-consensual intimate imagery — that is widely recognized as harmful, invasive, and minimally valuable. Such images involve both a nonconsensual (tortious) invasion of privacy and continuing harm – similar to, but not identical to, “revenge porn.” While the First Amendment generally prevents the government from compelling an entity to speak or to refrain from speaking, here the platform is compelled not only to restrict its own speech, but to provide a mechanism to restrict the speech of others. Courts addressing the balance of harms between AI-generated intimate images and free speech are likely to rule on the side of restricting the images.<br><br>For political speech – even political AI-generated speech – the balance likely would tip the other way. 
Much such <a href="https://securityboulevard.com/2024/10/californias-deepfake-regulation-navigating-the-minefield-of-ai-free-speech-and-election-integrity/" target="_blank" rel="noopener">AI-generated political speech</a> is itself protected expression. For example, a political operative might find it useful to put an opponent’s words into another speaker’s mouth – having Trump deliver a speech by Biden, or vice versa – to illustrate hypocrisy. This would be protected speech, even if unwelcome. Balancing the regulation of political deepfakes with the preservation of a vibrant marketplace of ideas is one of the hardest challenges facing lawmakers today. On the one hand, AI-generated videos and audio clips that depict candidates making false or inflammatory statements can be profoundly damaging—eroding public trust, inciting unrest, and manipulating voters in the moments when facts matter most. On the other hand, the Supreme Court has long emphasized that political speech, even when harsh, exaggerated, or satirical, occupies the “core” of First Amendment protection. See New York Times Co. v. Sullivan, 376 U.S. 254, 270 (1964). The danger of overregulation is that laws designed to stamp out deception could also sweep up parody, satire, or legitimate political critique—forms of speech essential to democratic debate. Thus, any regulatory regime must be narrowly tailored: targeted at demonstrably harmful and deceptive uses of deepfakes (such as impersonation or incitement), while leaving intact the broad and sometimes messy arena of free political expression that the Constitution has always protected.<br><br>And that ain’t easy. Particularly in the hyperpartisan world in which we currently live. 
Hey, let me see that red pill again.</p>