LLMs and SCOTUS
The quick Constitutional case for AI laissez-faire
Since the New Deal, the Supreme Court has given government almost unlimited power to regulate the economy. Wickard v. Filburn (1942) infamously ruled that a farmer growing wheat to feed his own animals on his own farm was nevertheless engaged in “interstate commerce.” Given this stance, it’s hard to see how the courts could justify any restrictions on government regulation of the rapidly growing Artificial Intelligence industry.
Hard, that is, as long as you call it “the Artificial Intelligence industry.” But all of the top AIs also go by another acronym: LLMs, which of course stands for Large Language Models.* Which makes sense, because the primary output of this industry is just a bunch of words.
So what? While the Supreme Court has given government a virtual carte blanche to regulate the economy for over 80 years, constitutional protection of freedom of expression has probably never been stronger. In Brandenburg v. Ohio (1969), the Supreme Court moved from the classic “clear and present danger” test for permissible regulation of free speech to the even stricter “imminent lawless action” test. By eviscerating obscenity law, Miller v. California (1973) even effectively extended full constitutional protection to almost all pornography, allowing the industry to thrive despite its extreme unpopularity. Though more Americans morally condemn pornography than abortion, the Supreme Court stands with porn.
Upshot: Once you acknowledge the truism that AI output is speech, almost all regulation of AI is ipso facto illegal. Government has no more legal right to regulate AI than it has to regulate the New York Times. Even if you’re a convinced doomer, you have to admit that the danger of existing LLMs is not “clear and present,” much less “imminent.” If the Supreme Court has an iota of consistency, the AI industry will be able — barring an anti-AI amendment to the Constitution — to fend off virtually all regulation with ease.
Does the Supreme Court have an iota of consistency? Based on past performance, the jury is still out. When (not if) AI comes before the Supreme Court, I bet SCOTUS will side with the government against the industry. But hopefully I’m wrong.
P.S. Volokh, Lemley, and Henderson, all top legal scholars, basically agree with me.
* Images and videos are the main non-linguistic AI outputs, but these too enjoy the strongest constitutional protections.



If you’re only looking at chatbot responses, that argument could hold. But agentic AI seems a lot more similar to human agents than to speech, so I strongly doubt that AI agents will be fully protected by the First Amendment.
"Once you acknowledge the truism that AI output is speech, almost all regulation of AI is ipso facto illegal."
Unfortunately, this is drastically overstated, and Volokh/Lemley/Henderson's position is much narrower than this.
At best, only the *output* is speech. Most potential AI regulation targets conduct, not expression. Training data, safety testing, discrimination by AI-driven processes, data retention/privacy regulations? All conduct. The fact that the conduct would lead to expressive output does not shield the conduct from regulation.
And the output is speech, yes, but whose? The creators' and users' speech rights should matter here, as Volokh et al. argue, but the amount and scope of permissible regulation of the output depends on the source. Corporate speech and professional speech enjoy less protection, and it's not yet clear that a speaker will be assigned to the speech in all cases. Then there are time/place/manner restrictions, which could potentially be applied to the deployment context.
Even if one trusts the courts to apply past 1A precedent reasonably, there's still a vast risk surface the 1A can't plausibly be expected to shield, and that's even before we get to agent behavior.