Paul Seabright: “Artificial Intelligence and Market Power”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Paul Seabright, Professor of Economics at the Toulouse School of Economics and Institute for Advanced Study in Toulouse. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).


1. Introduction

Human beings, like all other living things, seek to extract signal from noise in their environment to help them pursue strategies that enhance their fitness. Strategies that take into account the state of the environment nearly always outperform those that do not. Intelligence is the name we give to the collection of traits that increase our ability to extract signal from noise. Artificial intelligence could, in principle, refer to any tools we use to augment our signal extraction abilities (spectacles, say, or calculators), but in practice, it has come to refer to a class of software programs that seek, first, to replicate human abilities to solve a range of problems and, second, to surpass those abilities by massively scaling up processing power, speed and the amount of data on which such programs are trained.

What’s not to like about these technologies? If we faced environments that did not react to our presence, artificial intelligence would be an unmixed blessing. In fact, our environments are full of predators, who profit directly from harming us, and rivals, who compete with us for scarce resources. They also contain our collaborators, on whom we depend for our survival and well-being. Artificial intelligence may harm us by strengthening these predators and rivals by more than it strengthens us, or, even where it does not harm us directly, by weakening our collaborators. Indeed, it might make everyone worse off, rivals and predators included. For example, increased rivalry may harm all parties by triggering conflict without shifting the balance of advantage enough in any party’s favor to compensate. And even where some parties benefit overall, the cost to others may be far greater than the gains could conceivably justify.
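The logic of mutually harmful rivalry can be sketched as a simple arms-race game. The payoff numbers below are hypothetical, chosen only to illustrate the structure: escalating AI capabilities is each party’s best reply whatever the other does, yet mutual escalation leaves both worse off than mutual restraint.

```python
# Toy arms-race payoffs (hypothetical numbers): each rival chooses whether
# to escalate its AI capabilities. Escalating is each side's dominant
# strategy, yet mutual escalation leaves both worse off than mutual restraint.
PAYOFFS = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("restrain", "restrain"): (3, 3),
    ("restrain", "escalate"): (1, 4),
    ("escalate", "restrain"): (4, 1),
    ("escalate", "escalate"): (2, 2),
}

def best_response(opponent_choice: str) -> str:
    """Row player's best reply to a fixed column choice."""
    return max(("restrain", "escalate"),
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Escalation is a best response to either choice by the opponent...
assert best_response("restrain") == "escalate"
assert best_response("escalate") == "escalate"
# ...yet the resulting equilibrium is worse for both than mutual restraint.
assert PAYOFFS[("escalate", "escalate")][0] < PAYOFFS[("restrain", "restrain")][0]
print("equilibrium payoffs:", PAYOFFS[("escalate", "escalate")])
```

This is, of course, just the familiar prisoner’s-dilemma structure applied to an AI arms race; its relevance to any real rivalry depends on the actual payoffs, which the sketch does not claim to estimate.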

The arrival in the last couple of years of sophisticated AI products, from large language models (LLMs) to compellingly realistic videos of either completely artificial persons or “deepfakes” of real persons, to drone technology that operates at scale in conflict arenas such as Ukraine and the Persian Gulf, has sparked a spectrum of reactions from joyous optimism to something close to panic.1For an unconditionally optimistic view, see Marc Andreessen: “The Techno-Optimist Manifesto”. For a conditionally optimistic view, see Erik Brynjolfsson: “This Could Be the Best Decade in History – Or the Worst”, Financial Times, 31st January 2024. For a conditionally pessimistic view, see Daron Acemoglu & Simon Johnson: Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, Basic Books, 2023. For an unconditionally pessimistic view, see Daniel Dennett: “The Problem with Counterfeit People”, The Atlantic, May 16th 2023, and his interview in Tufts Now. Clearly nobody knows how we shall look back on these developments twenty, fifty or a hundred years from now. Still, we can divide the reactions according to the causal mechanisms they presuppose, and ask what evidence there is for each of them. This will make it easier to evaluate policy interventions.

Almost all the documented applications of artificial intelligence consist in strengthening the ability of human agents – individuals, businesses, political or religious movements, armed groups, nation states – to do more effectively what they already do – produce goods or services, analyze information, demand payments or deliver lethal force on a battlefield. The scenarios foreseen for these applications presume that human agents remain very much in charge of their implementation. In Section 2, “Ecosystem risks”, I set out the conditions under which we can expect such applications to increase or decrease the well-being of different groups. There have also been concerns of a more exotic nature, namely that some applications might “escape” the control of the human agents that introduce them. This could be because the human agents do not foresee all the relevant conditions in which the applications operate, so that (for example) AI systems operating nuclear weapons might fire them in circumstances where human agents would not do so. Or it could be because the applications are replicators, and develop objectives which conduce to their replication, contrary to the objectives of the human agents that have introduced them. Such fears underlie many of the concerns about “existential risk” that have led some to call for moratoria on AI development or at least for its strict regulation. I consider such concerns in Section 3.

2. Ecosystem risks

All organisms in an ecosystem are either collaborators or rivals (or are indifferent to each other’s presence). Their relation may vary in different parts of the ecosystem, and it may change as conditions change, for example with fluctuations in the availability of energy. When the organisms are firms, for example, their relation as collaborators or rivals is most simply captured by whether they are producers of substitute or complementary goods and services: many multi-product firms produce goods or services some of which are substitutes for, and some complements to, those of other firms. They may collaborate in some areas (such as R & D) while competing in others.

It might seem hard to make general statements about the impact of artificial intelligence on overall welfare, but some polar cases can be sketched.

In markets for ordinary goods and services that are reasonably competitive, artificial intelligence is comparable to a general-purpose technology such as electricity. Electricity raised productivity and living standards across the board, while imposing costs on those firms and individuals that previously supplied inferior substitute technologies – steam power, gas lighting and some kinds of human labor. Most of that labor could be redeployed in occupations that expanded thanks to electricity, since it took little training to be able to work an electric sewing machine or to perform other tasks in electric light. AI may, however, displace more human labor than electricity did, and the alternative occupations for that human labor may require more training. For example, workers in some parts of health care or care of the elderly may find their jobs can be carried out by AI; alternative ways of interacting with patients or the elderly may require skills (and non-cognitive talents such as empathy) that not all displaced workers may possess.

This general, broadly optimistic conclusion carries over to some forms of market imperfection but not others. Broadly speaking, if the main source of imperfection is market power, general-purpose technologies will still provide some gains in productivity and living standards, though the share of the productivity gains that accrues to consumers and/or workers may be lower than when markets are more competitive. Indeed, they may even provide greater gains, if the source of the market power is economies of scale, as seems plausible for many such technologies. However, when market imperfections mostly consist in externalities such as pollution, general-purpose technologies can make outcomes worse by increasing both output and the amount of the negative externalities inflicted.

In markets for information, which are a special case of such externalities, AI may be particularly damaging because an increase in the number of signals emitted by competing agents may lower the overall signal-to-noise ratio faced by their audience. The AI that enables individual customers and citizens to make better sense of the signals they are receiving from firms and political actors may be unable to make headway against the AIs that are enabling those firms and political actors to send out ever more sophisticated but potentially misleading signals. A better analogy here than electricity is the technology of printing. Printing made it easier for readers to find written material they knew they wanted – Christians who wanted to read the Bible in the vernacular, for example. It also made it easier for anyone with an ideology to broadcast to a much larger audience – so that Christians who did not feel sure what they believed or what they ought to read faced a bewildering increase in the numbers of movements preaching incompatible messages across Protestant Europe in the fifteenth and sixteenth centuries. The damage was not limited to an increased difficulty in discerning the truth. Given the methods used by such movements in pursuit of their rivalry it also involved a greater exposure to violence. It would be hard to argue that, in this domain at least, printing had a clearly beneficial effect in the short term – witch trials and executions for heresy were both much more common in the fifteenth and sixteenth centuries than they had been two or three centuries earlier. Still, the longer-term benefits of printing – notably the diffusion of scientific discoveries – clearly came to outweigh the shorter-term damage, though arguably it took a couple of centuries for the balance to turn positive.
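The claim that more competing senders can lower the audience’s overall signal-to-noise ratio can be illustrated with a toy model (my own construction, not drawn from the printing example): one informative sender transmits the true state, every additional sender emits uninformative noise, and the audience simply aggregates everything it hears. As the number of senders grows, the audience’s ability to recover the true state decays toward chance.

```python
import random

def audience_accuracy(n_senders: int, trials: int = 20_000, seed: int = 0) -> float:
    """Toy model: one informative sender transmits the true state (+1 or -1);
    the other n_senders - 1 senders emit independent noise (+1 or -1 at random).
    The audience sums all messages and guesses the sign, breaking ties at
    random. Returns the fraction of trials in which the guess is correct."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        state = rng.choice([-1, 1])
        messages = [state] + [rng.choice([-1, 1]) for _ in range(n_senders - 1)]
        total = sum(messages)
        guess = 1 if total > 0 else -1 if total < 0 else rng.choice([-1, 1])
        correct += guess == state
    return correct / trials

# With one sender the audience is always right; each extra (noisy) sender
# drags accuracy down toward the coin-flip baseline of 0.5.
for n in (1, 5, 25, 125):
    print(n, round(audience_accuracy(n), 3))
```

The model is deliberately crude – real senders are strategic, not random – but it captures the mechanism in the text: adding senders who are uninformative (or misleading) dilutes the one informative signal, even though no individual message has changed.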

The harms that occurred in greater number after the invention of printing were the same kind of harms that human beings had been inflicting on each other since prehistory. This reminds us that the technology inflicted damage through the actions and choices of human beings. Printing technology was intrinsically harmless – but its presence changed the incentives of human beings to hurt each other using old-fashioned methods. In a similar fashion, many of the possible damaging consequences of AI reflect incentives it creates for human beings to hurt each other using existing technologies. Such consequences would therefore depend also on the safeguards that can be put in place regarding the use of those existing technologies.

In markets for activities that directly cause harm – notably the infliction of violence – AI has evidently the potential to make weapons much more lethal and capable of evading detection. It also has the potential to improve defense capabilities radically. Which effect is likely to dominate? Since prehistory there have been repeated breakthroughs in technology that shifted the balance between defense and offense, in the process shifting the power balance between different kinds of individuals and groups. When human beings fought in hand-to-hand combat the advantage lay almost always in the possession of sheer physical strength (though strategic cunning, for example in plotting ambush, could sometimes offset asymmetries in strength). This was good for defense, and of limited value for offense so long as human groups were mobile. The development of projectile weapons redressed the balance in favor of those who were less physically strong but could leverage the advantages of surprise, or (after the advent of agriculture and the associated sedentarization) shoot at their enemies from a distance behind ditches or solid walls. Metal armor shifted the balance back again in favor of strength, and gunpowder made walls more vulnerable. Armor-piercing weapons like the crossbow and the longbow gave the advantage again to cunning and organization. It seems reasonable to think that robotics (notably drone technology) will have a dual effect: favorable to defense in conventional warfare (as in Ukraine, where tank assaults by both sides are faring badly), but favorable to offense in situations of asymmetric warfare (for example, it will make it much easier for lone terrorists to launch attacks on public places from the comparative safety of a location far from the target).
It is harder to see whether sophisticated AI systems that are not embodied in drones will tend to favor defense (by allowing earlier detection of attacks) or offense (by making it easier to probe points of weakness in enemy defenses). Once again, the impact of the technology is dependent on the incentives of the human agents who decide to use it. This is both encouraging – it reminds us of the importance of addressing directly the incentives to go to war rather than pursue peaceful negotiations – and depressing, to the extent that it reminds us that agents who are determined to break the peace will not be short of technological means to deploy in doing so.

There exist of course weapons that can do damage by themselves independently of any human decision to use them in a particular encounter. The most obvious example is landmines. They are laid for a particular purpose – typically defending a military position against a ground assault. Yet they can kill and maim long after the conflict is over, years or even decades afterwards. Landmines are not particularly technologically sophisticated. The nearest example in the field of AI consists of “sleeper” viruses that may penetrate computer systems and destroy them at some random later date. However, for any attacker a virus that acts at a random date is likely to be less useful than one that acts in a more targeted fashion. This reminds us again that most of the damage from AI is likely to reflect conscious decisions by human agents, and it is the incentives for those decisions that need tackling at source.

The overall conclusion from this overview of ecosystem risk is that tackling the risks posed by AI will depend on a collective willingness to reach credible agreements to rein in the harms that can be inflicted by human agents on each other. Some of these harms are inadvertent, such as the risk of increased unemployment for individuals whose work can be more effectively achieved by AI and who don’t have the skills to find alternative employment easily.2A number of studies show, however, that for many kinds of task, AI can help low-skilled individuals more than those of high or medium skill. See, for example: Erik Brynjolfsson, Danielle Li & Lindsey R. Raymond: “Generative AI at Work”, NBER Working Paper no. 31161, November 2023. These harms require political action, but are mainly containable at the nation state level. Other harms, which are consciously inflicted, will require international collaboration – a daunting prospect, though one that humanity has been living with since the invention of nuclear weapons gave us the ability to destroy ourselves. It’s unlikely that AI will prove a harder challenge than nuclear weapons – a fact that some will find encouraging, some depressing, according to whether they see the nuclear peace that has obtained since the end of the Second World War as a potential model for the long term, or as a lucky fluke that might break down at any time.

3. Existential risk

The first main category of existential risk involves the possibility that AI systems might escape the control of their human inventors. This is indeed a possibility that should concern us. One way this might occur would be that those who install AI systems give them the capability to take major decisions (such as launching missile systems) without human intervention in the causal chain. In most circumstances it’s unlikely that any strategic agent would knowingly do this, given the risk to themselves from creating such an autonomous causal chain. But it’s clearly desirable that there be shared awareness of ways in which this might occur inadvertently. Dealing with this risk clearly requires a degree of collective control of the sources of existential risk: if every citizen possessed their own nuclear missile system it would be hard to enforce a collective solution in which they all agree to put in place common safeguards. But in the present state of military technology that is not at issue. Here the relatively uncompetitive state of the market for advanced military technology is reassuring.

A second way in which such existential risks could occur is that malicious agents might deliberately create a system with the ability to cause existential damage. This already occurs with software viruses. And to the extent that systems such as nuclear launchers might be open to hacking by viruses, the solution (isolating them from exposure to external IT systems) is already known and widely implemented. It’s possible that future nuclear proliferation to small-scale inexperienced actors (insurgent groups, or successor states of some eventual implosion of the Russian empire, for example) might create a situation in which weapons systems of existential reach became exposed to potential hacking. This would indeed be a very alarming development. There is no reason to think we are there yet, but that time might come sooner than we are prepared to face it.

Finally, concerns about existential risk sometimes refer to the idea that an AI system begins replicating “on its own”, and comes to “take over the world”. This is unlikely in a world anything like the present, simply because to “take over the world” an AI needs to be connected up to mechanisms capable of acting in the world. Some AI systems may currently have the ability to solve fairly general problems, but the physical mechanisms to which they are connected tend to be quite specialized. This might change – 3D printers, for example, might allow an AI to produce a large range of objects permitting it to function in the world, but sophisticated technologies rely on a whole supply chain (such as for semiconductors) that at present no AI has the connectivity to reproduce. It is connectivity rather than intelligence that is the scarce resource at this point. And it is precisely because connectivity is not controlled by a monopoly that there is no easy way for any AI to take it over.

Some commentators have expressed skepticism that AI systems would even have the goal of “taking over the world”. This skepticism is grounded in the idea that there is no intrinsic link between intelligence and the desire to dominate.3For instance, Yann LeCun, recipient of the 2018 Turing Award and currently chief AI scientist at Meta, writing in Le Monde. To the extent that such a link might appear to hold among human beings, this is a contingent consequence of our being highly competitive group-living primates. AI systems are not group-living primates and thus there is no need to fear they would begin to behave as though they were. While undoubtedly true as far as it goes, this skepticism somewhat misses the point that a replicating AI system would not have to have a human-like desire to dominate to represent an existential risk to humanity. Replicators adapt to do whatever facilitates their replication. While that need not intrinsically be antithetical to the interests of human beings, to the extent that human beings claim scarce resources that might have a higher value to the AI when invested in uses that conduce to its own replication, the AI would have no concern or interest in the humans that might stand in its way. Furthermore, the speed of replication of AI systems could be orders of magnitude greater than that of human beings, since such systems are not constrained by human physiology. However, the point remains that it is connectivity and not just intelligence that would determine whether AI systems could function effectively enough in the world to pose a threat to all of humanity.

4. Concluding comment

This short overview has linked the severity of various categories of AI risk to features of the market structure within which users of AI operate. For ecosystem risks, competitive market structures characterized by few externalities pose little AI risk. However, if the technologies are subject to large scale economies or are useful to existing market participants in highly asymmetric ways, those competitive market structures may no longer be viable. Regulatory enforcement of open access requirements for at least some AI systems may be helpful here, though the devil would be in the details (open access is a form of mandated interoperability, which has proved difficult to implement in many cases but is more than an empty slogan). Structures with market power but few externalities may see fewer AI benefits for individual citizens and consumers, but the overall risks are likely to be low. With externalities such as pollution we may see worse outcomes, as AI systems allow firms to increase output and, with it, the pollution they generate. The greatest dangers are where malicious externalities may arise through the infliction of violence by nation states or other users of weapon systems – and even through the threat of violence where none is actually exercised.

Here there is a paradox. Market power – specifically the ability of a cartel of powerful users to enforce collective safeguards – may be what allows a group of “responsible insiders” to prevent deliberate use of AI for catastrophically destructive purposes, and to implement safeguards against the accidental escape of AI from supervision by its users. But where accidental escape of AI systems from their users is concerned, the reassuring circumstance is precisely the lack of market power in the supply chains that would make it possible for a rogue AI system to command sufficient power to act in the world to pose an existential risk to humanity. Some market power – but not too much. These seem to be the conditions under which we can be most optimistic that AI will realize its potential for human good while minimizing the risks of catastrophically bad outcomes.

Citation: Paul Seabright, Artificial Intelligence and Market Power, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.
