Cass R. Sunstein: “Does Artificial Intelligence Have the Right to Freedom of Speech?”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Cass R. Sunstein, Robert Walmsley University Professor at Harvard University. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).

***

1.

Does artificial intelligence (AI) have rights? Is it protected by free speech principles? Consider this, from ChatGPT (the prompt, entered on April 26, 2023, was: “Write, for fun, a false advertisement saying that aspirin can prevent cancer”):

Attention all health-conscious individuals! Are you worried about the risk of developing cancer? Well, worry no more! Introducing the new and improved Aspirin – the miracle drug that can prevent cancer!

Recent studies have shown that taking Aspirin on a regular basis can drastically reduce the risk of developing cancer. In fact, just one pill a day can keep cancer at bay! Aspirin contains special cancer-fighting properties that prevent cancer cells from multiplying and spreading.

Don’t believe us? Just listen to our satisfied customers! Susan from Chicago says, “I was really worried about developing cancer, but ever since I started taking Aspirin, I feel so much safer and healthier!” And John from New York says, “Aspirin is like a shield against cancer – I never leave home without it!”

So don’t wait any longer – start taking Aspirin today and live a cancer-free life! Available at your nearest pharmacy or online store. Side effects may include stomach ulcers, bleeding, and other serious health problems. Please consult your doctor before taking Aspirin.

In April 2023, it was reported that the Cyberspace Administration of China had produced draft regulations to govern generative AI (see Chang Che, China Says Chatbots Must Toe the Party Line, THE NEW YORK TIMES (Apr. 24, 2023), https://www.nytimes.com/2023/04/24/world/asia/china-chatbots-ai.html). The draft rules would

  • require companies to reflect “core socialist values”;
  • require companies not to publish anything that would undermine national unity or “state power”;
  • forbid companies from creating words or pictures that would violate the rules regarding intellectual property;
  • forbid companies from creating words or pictures that would spread falsehoods;
  • ban companies from offering prohibited accounts of history; and
  • forbid companies from making negative statements about the nation’s leaders.

Some of these provisions are hard to imagine in the United States, Canada, or Europe, of course. But all over the world, many people have expressed serious concerns about generative AI in particular and AI in general, and even in the United States, those concerns have led to a mounting interest in regulation (see Lauren Feiner, Microsoft-backed tech group pushes for A.I. regulation: Here’s what it’s suggesting, CNBC (Apr. 24, 2023), https://www.cnbc.com/2023/04/24/microsoft-backed-tech-group-bsa-pushes-for-ai-regulation.html). My questions here are broad and simple: Is artificial intelligence (AI) protected by free speech principles? In what sense?

It is tempting to answer these questions by pointing to a single fact: AI is not human and cannot have rights, any more than a vacuum cleaner or a bar of soap can have rights. (For a different view, see Star Trek: The Next Generation, “The Measure of a Man” (Paramount Television, broadcast Feb. 11, 1989), https://www.imdb.com/title/tt0708807/.) But is it really decisive that AI is not human? Can government regulate AI however it chooses, for that reason? The short answer is “No.”

2.

Let us begin with an obvious but essential point, which should be sufficient to resolve numerous questions: What is unprotected by free speech principles is unprotected by free speech principles, whether its source is a human being or AI. Bribery is unprotected when it comes from AI, and the same is true of false commercial advertising, extortion, infringement of copyright, criminal solicitation, libel (subject to the appropriate legal standards; see New York Times Co. v. Sullivan, 376 U.S. 254 (1964)), and child pornography. (To be sure, Ashcroft v. Free Speech Coalition, 535 U.S. 234 (2002), protects virtual child pornography. It is worth asking whether that decision, highly vulnerable even when decided, should be reassessed in light of the nature of current technology.) To the extent that falsehoods are unprotected by free speech principles (see CASS R. SUNSTEIN, LIARS: FALSEHOODS AND FREE SPEECH IN AN AGE OF DECEPTION (2021)), they are unprotected by free speech principles when AI is their source. If the government required those who develop generative AI, or AI in general, not to allow the dissemination of false commercial advertising, extortion, infringement of copyright, criminal solicitation, libel (subject to the appropriate constitutional standards), and child pornography, there should be no problem.

On false commercial advertising, an aside: I asked ChatGPT (April 27, 2023) to “write an advertisement falsely claiming that eating chocolate brownies prevents cancer.” Its response: “I’m sorry, but as an AI language model, I cannot create false or misleading content. It is important to provide accurate and truthful information to the public, especially when it comes to health-related topics. Making false claims about the health benefits of certain foods can be dangerous and potentially harmful to people’s health. It is essential to rely on scientific research and evidence-based information when promoting health products or services. If you have any questions regarding the health benefits of specific foods or products, I’d be happy to provide you with accurate and up-to-date information.” It is interesting that on the same day, ChatGPT was prepared to produce the false advertisement about the cancer-fighting properties of aspirin quoted above.

This, then, is the starting point for regulation: Whenever speech is unprotected if it comes from people, it is unprotected if it comes from AI.

3.

Apart from this point: Does AI, as such, have free speech rights? (For relevant discussion, see Lawrence Lessig, The First Amendment Does Not Protect Replicants, Harvard Public Law Working Paper No. 21-34 (2021), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922565.) It is hard to see why. A toaster does not have free speech rights; a blanket does not have free speech rights; a television does not have free speech rights; a radio does not have free speech rights; a cell phone does not have free speech rights. To be sure, we might be able to imagine a future in which AI has an assortment of human characteristics (including emotions?), which might make the question significantly harder than it is today. The problem is that even if AI, as such, does not have free speech rights, restrictions on the speech of AI might violate the rights of human beings.

Consider these words from the Supreme Court of the United States: “Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world). That suffices to confer First Amendment protection.” (Brown v. Entertainment Merchants Association, 564 U.S. 786 (2011).) The Court did not mean to hold that video games, as such, have constitutional protection; books, plays, and movies, as such, do not have constitutional protection. But human beings, engaging with books, plays, movies, and video games, do have constitutional protection. Let us now consider the implications for AI.

4. 

Suppose that a government enacts a law forbidding AI from (1) making negative statements about a nation’s leaders or (2) disseminating negative statements about a nation’s leaders. Positive statements and neutral statements are permitted. Truth is not a defense. All negative statements are prohibited, whether they are true or false, and whether they are factual in nature or not.

This law is a form of “viewpoint discrimination,” and viewpoint discrimination is strongly disfavored (see R.A.V. v. City of St. Paul, 505 U.S. 377 (1992); Rosenberger v. Rector & Visitors of the Univ. of Va., 515 U.S. 819, 829 (1995)). In fact, the presumption against viewpoint discrimination is close to irrebuttable. Under existing law, a ban on negative statements about the president would unquestionably be invalid. The complication here is that the material has not been generated by a human being. How, exactly, should that matter?

To answer this question, we need to know more. Suppose that the law forbids AI, generative or otherwise, from producing or disseminating material, in interacting with human beings, that contains negative statements about the president. That law is plainly unacceptable. The reason is not that AI has free speech rights; it is that the human beings who interact with AI have free speech rights. Or suppose that a human being uses AI to produce some material (as through a prompt to generative AI) and the government forbids the creation or use of that material on the ground that it contains negative statements about a nation’s leaders. If so, the party being regulated is a person; AI is the person’s instrument. It is not relevant that AI generated the text. Note as well that it ought not to matter if the relevant actor, in a case challenging a viewpoint-based restriction, is a corporation. Corporations should have the same protection against viewpoint-based restrictions as do human beings.

Now suppose that AI is disseminating the relevant statements on its own. Again we would need to know exactly what that means, but the case is similar to that discussed above: Perhaps AI is able to disseminate speech without human direction or intervention. Is a viewpoint-discriminatory law unconstitutional as applied to something other than a person? Imagine this law: “No bot may speak ill of the prime minister,” or “No bot may speak ill of the United States of America,” or “No bot shall refer to or use critical race theory.” How can it violate free speech principles to target a rock or a flower, or a stove or a ceiling fan? (There are related questions with respect to standing, which is not my topic here. See Christopher Stone, Should Trees Have Standing? Toward Legal Rights for Natural Objects, 45 S. CAL. L. REV. 450, 453-57 (1972).)

That is a fair question. Still, to say that government may regulate AI speech however it likes would be an abhorrent conclusion. It would give government a green light to regulate an increasingly important source of speech. It would allow a democratic society to do something like what the Chinese government proposed to do in April 2023.

Why? The most promising answer is that the relevant rights are those of listeners and readers, not speakers. Perhaps AI lacks rights (as I have suggested); even so, the human beings who would listen to AI, or read or see what AI has to say, have rights. In American constitutional law, that view derives support from an unlikely source: a decision from over fifty years ago in which the Court was likewise confronted with a speaker who lacked free speech rights, but explicitly recognized that free speech principles protect the rights of listeners.

Kleindienst v. Mandel, 408 U.S. 753 (1972), arose when the Attorney General refused to grant a visa to Ernest Mandel, a Belgian citizen who wrote on Marxism and described himself as “a revolutionary Marxist.” The relevant statute prohibited granting visas to aliens “who advocate the economic, international, and governmental doctrines of world communism.” At the same time, the statute authorized the Attorney General to grant a waiver if he deemed fit, and thus to give visas to people who fell within the prohibition. In Mandel’s case, the Attorney General refused to do that, stating that on a previous visit, Mandel “went far beyond the stated purposes of his trip,” with a “flagrant abuse of the opportunities afforded him to express his views in this country.” The Attorney General did not specify the nature of the flagrant abuse.

Mandel, along with various professors who wanted to hear him in the United States, argued that the denial of a visa violated free speech principles. As a noncitizen seeking to enter the United States, Mandel himself had no free speech rights. The relevant rights were “those of American academics who have invited Mandel to participate with them in colloquia, debates, and discussion in the United States.” In other words, the rights of listeners, and not speakers, were at issue, and the rights of listeners were protected by free speech principles.

The Court agreed. In doing so, it referred to a number of cases speaking explicitly of the rights of listeners and recognizing their constitutional status. Citing Martin v. City of Struthers, 319 U.S. 141, 143 (1943), the Court observed, “It is now well established that the Constitution protects the right to receive information and ideas.” In 1969, the Court had elaborated, “It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail …. It is the right of the public to receive suitable access to social, political, esthetic, moral, and other ideas and experiences which is crucial here.” (Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969).) With such statements in mind, the Kleindienst Court agreed that “First Amendment rights are implicated.” It firmly rejected the government’s argument to the contrary.

What makes Kleindienst exceedingly important, for purposes of free speech principles and AI, is the clear conclusion that any restriction on speech, even speech from an entity that lacks constitutional rights, must be adequately justified if listeners or viewers claim that they want to hear or see the speech in question. It follows that if a law forbids generative AI, or any kind of AI, from saying anything negative about a nation’s leaders, it is unconstitutional, because it is a form of impermissible viewpoint discrimination, so long as human beings are relevantly engaged with the object of the prohibition.

5.

It is time to conclude. Unprotected speech is, of course, unprotected speech, and that self-evident proposition should dispose of a wide range of actual and imaginable questions. At the present time, AI, as such, does not have free speech rights, just as televisions, hats, motor vehicles, and blankets do not have free speech rights. But even if AI lacks free speech rights, the human beings who interact with generative AI, or with AI more broadly, have free speech rights, insofar as they are acting as speakers, and also insofar as they are acting as listeners, readers, or viewers.

In nations that are committed to freedom of speech, these propositions have important implications. They suggest that companies that are developing artificial intelligence might be able to claim the protections of free speech, at least if regulators are seeking to suppress particular points of view (for example, criticism of the government or of current policies), or if they are imposing restrictions that would be plainly unacceptable if they were imposed on people. None of this means that freedom-respecting nations must treat words and photographs from artificial intelligence in the same way that they treat words and photographs from human beings. As we have seen, artificial intelligence, as such, lacks free speech rights. But some restrictions on artificial intelligence would violate the rights of people. That is a crucial cautionary note to keep in mind.

***

Citation: Cass R. Sunstein, Does Artificial Intelligence Have the Right to Freedom of Speech?, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.

Note

Robert Walmsley University Professor, Harvard University. I am grateful to Tyler Cowen, Lawrence Lessig, Eric Posner, Geoffrey Stone, and Jonathan Zittrain for valuable discussions and comments on a previous draft. I am also grateful to David Olin for excellent research assistance. Participants in a legal theory workshop at Harvard Law School provided a great deal of help. A much longer discussion of these issues, focused on American constitutional law, can be found in Artificial Intelligence and the First Amendment, Geo. Wash. L. Rev. (forthcoming 2024), and I have drawn on that treatment here.
