If you’re only looking at chatbot responses, that argument could hold. But agentic AI seems far more akin to a human agent than to speech, so I strongly doubt that AI agents will be fully protected by the First Amendment.
Speech implies speakers. The output of AI lacks that. Thus AI output isn't speech.
Au contraire.
The written word is indeed speech.
Whose written word?
Yes.
Trivially, of course the words could be read aloud.
But jurisprudence has long since established that the published word is speech.
[I believe you were trying to extract the answer: the owners of the AI and/or the users of the AI who prompt it to generate a particular output. See that Volokh, Lemley, and Henderson link Caplan provided.]
"Once you acknowledge the truism that AI output is speech, almost all regulation of AI is ipso facto illegal."
Unfortunately, this is drastically overstated, and Volokh/Lemley/Henderson's position is much narrower than this.
At best, only the *output* is speech. Most potential AI regulation targets conduct, not expression. Training data, safety testing, discrimination by AI-driven processes, data retention/privacy regulations? All conduct. The fact that the conduct would lead to expressive output does not shield the conduct from regulation.
And the output is speech, yes, but whose? The creators' and users' speech rights should matter here, as Volokh et al argue, but the amount and scope of regulation of the output depends on the source. Corporate speech and professional speech have less protection, and it's not yet clear that a speaker will be assigned to the speech in all cases. Then there's time/place/manner restrictions, which could potentially be applied to the deployment context.
Even if one trusts the courts to apply past 1A precedent reasonably, there's still a vast risk surface 1A can't possibly be expected to protect, and that's even before we get to agent behavior.
Can the inputs be regulated? Chips, data centers etc. Seems pretty straightforward to me.
Also, we allow regulation of commercial speech directed at minors. Think Joe Camel.
I think it depends very heavily on what sort of AI regulation we are talking about. If, for example, the government wants to require all AI generated text to be labeled as such, I agree that would violate the free speech rights of users of AI. If, on the other hand, the government wants to prohibit training runs that use more than a specified amount of compute, I don't see that infringing on anyone's free speech rights.
"By eviscerating obscenity law, Miller v. California (1973) even effectively extended full constitutional protection to almost all pornography, allowing the industry to thrive despite its extreme unpopularity."
So extremely unpopular that it's making a lot of money. What's that saying about reported and revealed preferences?
If valid, that sounds like another good argument to throw out the US Constitution and start anew.
People who want to "regulate" AI are living in dream-land: any such attempts will be widely flouted, and enforcement will be sporadic and politically motivated. This kind of thing degrades respect for law in general and therefore morally impoverishes a society.
Well argued piece.
It would still give the government plenty of ability to regulate the chips used in implementing LLMs. That commerce ain't speech.
" AI output is speech"
Whose speech?
AI is doing a lot more than saying things. Claude Code is performing lots of programming services, for example. Even if you stick to just words and images, though, top AI models have recently become so powerful that I think requiring minimum wages for them would actually be the more consistent legal position. For example: https://x.com/itsolelehmann/status/2031308486815133905
Given the regulatory regime that applies to "professional speech", the premise of this essay seems flawed.
For states, sure.
For the federal government, only as it relates directly to commerce. And primarily about "marketing/advertising" and not what one private citizen can say to another.
> For the federal government, only as it relates directly to commerce.
We're both commenting on the same essay that begins by noting that the Commerce Clause authorizes everything the federal government might theoretically feel like doing, right?
> For states, sure.
Is there a weakness in the incorporation of the first amendment against the states? How are they treated differently than the federal government is?
The states have much more leeway to regulate what people do than the feds have. Occupational licensing, and everything downstream of it.
That's true independent of the fact that leftist judges will use the Commerce Clause to justify any fed action.
By the logic of the first sentence of your reply, the Feds can do absolutely anything at all they want to here, and on everything else. Yet that ain't quite today's reality. Some limits exist, even if reality tilts far too much in that direction for the preferences of us classical liberals.
> By the logic of the first sentence of your reply, the Feds can do absolutely anything at all they want to here, and on everything else. Yet that ain't quite today's reality.
That isn't because of the Commerce Clause. It does indeed authorize everything. There are political limits on the American government, just as there are political limits on the Saudi government, but there are no limits related to the Commerce Clause.
Do you have any "hot takes" on intellectual property law, Professor Caplan? That seems to be where a lot of the regulatory/economic debate around AI is right now; it's the basis of the NYT's lawsuit against OpenAI. Should AI companies be allowed to train their bots on established IP and let them reproduce the work? I think it's a tougher question than it seems.
A very clean First Amendment argument. If LLMs are “just producing words,” then regulating them should be as unconstitutional as regulating newspapers, poets, or that one guy in every coffee shop who loudly explains Nietzsche to no one in particular. The real puzzle is whether the Court will classify AI output as speech, commerce, or some newly invented metaphysical category like “expressive computational emanations.” The Court has a long tradition of discovering such categories whenever reality becomes inconvenient. If they stick to their own precedents, AI should be as protected as political dissent and, apparently, most pornography. If they don’t, Wickard v. Filburn will once again rise from the grave to remind us that even feeding your own cows is a federal matter.
Protection of speech is in reality protection of persons making that speech. Courts would be recognizing this reality if they classify AI output as not speech.
That’s one way to frame it, but the Court has never limited First Amendment protection to persons in the biological sense. Corporations, newspapers, broadcasters, and even algorithms used in editorial processes are protected because the Amendment covers speech acts, not the metaphysics of the speaker. The legal question isn’t “Is the speaker a person?” but “Is the government regulating the production or distribution of expressive content?” On that framing, AI output looks a lot like every other tool humans use to generate or disseminate speech.
"Even if you’re a convinced doomer, you have to admit that the danger of existing LLMs is not “clear and present,” much less “imminent.”"
Completely incorrect. People have been warning about this danger for decades, people are already dead, and current forecasting models tend to put AGI, i.e. AI good enough to do AI research on its own, at around 2030. It is absolutely imminent.
"2030. It is absolutely imminent"
Are you an idiot or something?
I'm confused - while I didn't go into detail, current best models from METR and the (now-ironically named) AI 2027 project show that the median best guess for when modern AI becomes entirely self-improving is around 2030-2031.
I won't rehash the entire argument here; merely note that ad hominem attacks are both not good argument and outside the spirit of friendliness Professor Caplan embraces.
I think the confusion here is about the meaning of "imminent". Colloquially it is somewhat imprecise. In the First Amendment context, "imminent" means on a timescale of minutes to hours, not years.
I am somewhat surprised by your assertion that people are already dead. What are you referring to there?
Surely he is referring to suicidal people who used chatbots and subsequently took their own lives.
That aside, he is clearly simply wrong as a matter of jurisprudence re: “imminent.” Even if I find your “hours” claim a bit too narrow, it is surely nothing like 3-4 years.
Ah, that makes sense. I find it demeaning to the person who committed suicide to attribute the suicide to anyone or anything but their own free choice, but a lot of people seem to want to find anyone or anything else to blame.
I agree with you. But your offence is so extreme that insults become justified, despite typically being improper conduct.
Your comment leaves so little room for dissent that it's impossible to take seriously.
"current models tend to" followed by "absolutely imminent" does not inspire confidence nor invite discussion.
"People are already dead" just inspires doubt and ridicule.
Too bad freedom to contract and engage in voluntary exchange with others is not considered speech. Clearly, such interaction cannot occur without speech. Perhaps the SCOTUS should reconsider its historical mistake of granting government operatives the power to regulate all kinds of voluntary exchange.
Of course it isn't. I didn't intend to make a valid argument. The concept of natural law is embedded in the Constitution, but no, it is not part of the American legal system, sadly enough.
Murder-for-hire also can't happen without speech, doesn't mean it is protected by the Free Speech Clause.
No, murder for hire should not be protected as free speech. Neither would any other violations of natural law.
"Natural law" is not a concept in the American legal system. I'm just pointing out that your argument "X can't happen without speech, therefore X is protected speech" is not a valid argument.
Natural law underlies any functional legal system. Without it, any legal system would sink into a morass of self-contradictions.
Oh, humans are quite capable of having and enforcing and benefiting from legal systems without supernatural nonsense. In a country with a separation of church and state, it is a requirement.
As for protecting LLMs under free speech, the Supreme Court has already thrown commercial speech off the bus, copyrights have been denied when the "author" is an animal or a computer, and I can't see the Supremes suddenly seeing the light and undoing all that precedent.
The copyright issue wrt AI is much more complex than headlines have suggested. Much of what people generate with AI will end up copyrightable under current decisions, as long as they're substantially directing/shaping the output and exercising some editorial discretion before fixing it in publication.