Thibault Schrepel: “Toward A Working Theory of Ecosystems in Antitrust Law: The Role of Complexity Science”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Thibault Schrepel, Associate Professor at the Vrije Universiteit Amsterdam, and Faculty Affiliate at Stanford University CodeX Center. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).

***

Antitrust agencies are increasingly interested in understanding digital ecosystems. [1: European Commission, “Commission Prohibits Proposed Acquisition of ETraveli”; European Commission, “Commission Notice on the Definition of the Relevant Market for the Purposes of Union Competition Law (C/2024/1645).”] As someone who has long advocated for examining ecosystems in antitrust (ah, Ph.D. days…) [2: Schrepel, L’innovation Prédatrice En Droit de La Concurrence.], I can only welcome this development. However, my working hypothesis is that the concept of ecosystems can only be understood through complexity science. Thus far, there appears to be limited interest in complexity among antitrust policymakers and enforcers (although this may be changing with the recent advent of the Dynamic Competition Initiative). [3: “Dynamic Competition Initiative.”] Against this background, this contribution introduces the critical role of complexity science in developing a functional theory of ecosystems in antitrust law (1.) and exposes the perils of ignoring it (2.).

1. Complexity Science: The Gateway to A Theory of Ecosystems

Ecosystem theories without insights from complexity science are at best incomplete. In what follows, I offer several arguments to justify this statement, but first I need to lay out the foundations of complexity science.

At a general level, complexity science is the study of how adaptive systems respond to the context they create. By adaptive systems, complexity researchers typically mean any kind of system (organic, economic, technological, legal…) whose agents or subjects interact with each other. Their interactions create patterns that affect their environment and, in turn, the behavior of each agent. As a result, agents in adaptive systems are constantly confronted with “ill-defined” situations, i.e., situations for which there is no single optimal behavior. [4: Arthur, “Foundations of Complexity Economics.”] Agents adapt to what (temporarily) works in their environment. When they converge on a strategy, that convergence creates opportunities to explore other strategies, which some agents do, and so on. I explore the policy and institutional implications of these dynamics in a forthcoming article for the Journal of Institutional Economics. [5: Schrepel, “The Evolution of Economies, Technologies, and Other Institutions: Exploring W. Brian Arthur’s Insights.”]
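To make “ill-defined” concrete, consider W. Brian Arthur’s El Farol bar problem, a canonical example in complexity economics: agents decide whether to attend a bar that is enjoyable only if fewer than 60% show up, so no single predictive strategy can be optimal for everyone at once. Here is a minimal sketch; the predictor set and all parameters are illustrative, not Arthur’s exact specification:

```python
import random

random.seed(0)
N_AGENTS, CAPACITY, WEEKS = 100, 60, 52

# Each predictor maps the attendance history to a forecast for next week.
PREDICTORS = [
    lambda h: h[-1],                           # same as last week
    lambda h: sum(h[-4:]) / len(h[-4:]),       # average of recent weeks
    lambda h: N_AGENTS - h[-1],                # mirror image of last week
    lambda h: h[-2] if len(h) > 1 else h[-1],  # two weeks ago
]

# Each agent holds two predictors and tracks how wrong each has been.
agents = [{"mine": random.sample(range(len(PREDICTORS)), 2),
           "error": [0.0] * len(PREDICTORS)} for _ in range(N_AGENTS)]

history = [44]  # seed attendance
for week in range(WEEKS):
    attendance = 0
    for agent in agents:
        # Use whichever of this agent's predictors has been most accurate.
        best = min(agent["mine"], key=lambda i: agent["error"][i])
        if PREDICTORS[best](history) < CAPACITY:  # go if it looks uncrowded
            attendance += 1
    for agent in agents:  # agents adapt: score predictors against reality
        for i in agent["mine"]:
            agent["error"][i] += abs(PREDICTORS[i](history) - attendance)
    history.append(attendance)

print(history[-10:])  # no predictor stays best for long; the crowd never settles
```

The point of the exercise: each agent’s best strategy depends on what everyone else does, and what everyone else does depends on the patterns the agents jointly create, which is exactly the adaptive dynamic described above.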

On a more granular level, complexity science has developed — or deepened — several concepts to explain the underlying dynamics of these ecosystems. The first relates to the distinction between negative and positive feedback loops. Negative feedback loops reduce deviations from a set point, thereby promoting stability and preventing extreme changes in the system (think of having a fever: a virus enters the body, the body warms up, expels the virus, and returns to its original state of not being sick). Positive feedback loops are mechanisms by which a change in the system causes further changes, thereby amplifying the initial change (think of childbirth: the body releases oxytocin, which causes contractions, which in turn signal the body to release more oxytocin, which further increases contractions, etc.).
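The contrast is easy to see numerically. A toy sketch (the set point, gains, and starting values are arbitrary, chosen only to make the two regimes visible):

```python
def simulate(x0, set_point, gain, steps=10):
    """Iterate x <- x + gain * (set_point - x) and return the trajectory."""
    trajectory = [x0]
    for _ in range(steps):
        x = trajectory[-1]
        trajectory.append(x + gain * (set_point - x))
    return [round(v, 1) for v in trajectory]

# Negative feedback: deviations from the set point shrink (a fever subsiding).
print(simulate(x0=40.0, set_point=37.0, gain=0.5))
# -> 40.0, 38.5, 37.8, 37.4, ... back toward 37.0

# Positive feedback: flipping the sign of the gain makes each change trigger
# a further change in the same direction, amplifying the initial deviation.
print(simulate(x0=37.5, set_point=37.0, gain=-0.5, steps=6))
# -> 37.5, 37.8, 38.1, 38.7, ... the deviation keeps growing
```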

In the field of economics, W. Brian Arthur has developed a working theory of increasing returns, which are typical examples of positive feedback loops. [6: Arthur, “Competing Technologies, Increasing Returns, and Lock-In by Historical Events.”] Under increasing returns, the more a product or service is used or adopted, the more valuable or efficient it becomes, leading to even greater adoption or use. [7: Note that network effects are one kind of increasing returns. Economies of scale are another. They can be combined.] Growth begets more growth. The companies that benefit from these increasing returns tend to have robust market shares.
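A stylized sketch of the mechanism, combining a network effect with economies of scale (which, as the note above says, can be combined); every number is invented purely for exposition:

```python
def attractiveness(users):
    """Per-user value rises with adoption while unit costs fall with scale;
    both functions are hypothetical, chosen only to illustrate the loop."""
    network_value = 0.01 * users        # network effect: each user adds value
    unit_cost = 100.0 / (1.0 + users)   # scale economies: costs shrink
    return 1.0 + network_value - unit_cost

users = 200.0
for year in range(6):
    # Adoption grows with current attractiveness, and greater adoption
    # raises attractiveness: growth begets more growth.
    users *= 1.0 + max(attractiveness(users), 0.0) / 10.0
    print(year, round(users))
```

Run it and the yearly growth rate itself accelerates, which is the signature of a positive feedback loop rather than ordinary diminishing-returns growth.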

This raises the question of which agents benefit from positive feedback loops in the first place. Again, complexity theorists have come up with conceptual and technical tools to answer it. On a conceptual level — staying in the field of economics — one can often observe non-ergodicity [8: North, “Dealing with a Non-Ergodic World: Institutional Economics, Property Rights, and the Global Environment.”]: small historical events are not averaged away. [9: Arthur, “Competing Technologies, Increasing Returns, and Lock-In by Historical Events.”] This means that random events (e.g., Taylor Swift promoting a GenAI app) can decide the fate of the ecosystem by introducing dynamics in favor of one product, which will then benefit from increasing returns (if any) and will eventually lock the market until disruption occurs. In other words, randomness + timing + skill = success. On a technical level, complexity theorists have developed agent-based modeling to enable the simulation — and granular understanding — of these dynamics. Complexity researchers are also pushing for computational thinking, arguing that computation is less limited than algebra — which can only express balanced quantities, because the left side of an equation must equal the right side — since it can capture chains of events.
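A minimal agent-based sketch in the spirit of Arthur’s competing-technologies model makes both points at once; the payoff numbers are illustrative, not Arthur’s calibration. Two technologies improve with their installed base, agents of two types arrive in random order, and the order of arrival decides the winner:

```python
import random

def adopt(seed, n_agents=1000):
    """Agents lean toward A or B but follow whichever technology's installed
    base currently offers the higher payoff (increasing returns)."""
    rng = random.Random(seed)
    base = {"A": 0, "B": 0}
    for _ in range(n_agents):
        bias = rng.choice([1, -1])           # agent type: leans A or B
        payoff_a = bias + 0.1 * base["A"]    # preference + installed base
        payoff_b = -bias + 0.1 * base["B"]
        base["A" if payoff_a >= payoff_b else "B"] += 1
    return base

# Same rules, different random histories, different winners (non-ergodicity):
for seed in range(5):
    print(seed, adopt(seed))
# Early arrivals tip the balance; increasing returns then lock the market in.
```

Re-running the same rules under different seeds produces different lock-ins, which is non-ergodicity in action: the small events of history are not averaged away, they are amplified.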

2. The Danger of Ignoring Complexity Science: GenAI As A Case Study

Complexity science provides an understanding of the dynamics that underlie adaptive systems. Given that antitrust deals with such adaptive systems — firms and technologies are adaptive [10: Dooley, “A Complex Adaptive Systems Model of Organization Change”; Fleming and Sorenson, “Technology as a Complex Adaptive System: Evidence from Patent Data.”] — agencies cannot afford to ignore the insights of complexity science. Should they be tempted to work on a theory of ecosystems that neglects these insights, they would end up (1) with a static view of ecosystems (at most mentioning that behaviors affect multiple layers or markets), and (2) with false working presumptions, such as the assumption that all agents within the ecosystem are equal and, given perfect knowledge of other agents and effective interventions, can collectively arrive at an optimal state. In short, agencies would approach competitive problems as well-defined, rather than considering how actions, strategies, and expectations constantly adapt to the aggregate patterns they create. [11: Schrepel, “The Evolution of Economies, Technologies, and Other Institutions: Exploring W. Brian Arthur’s Insights.”]

I now turn to generative AI to drive the point home.

The generative AI ecosystem consists of several layers with millions of adaptive agents. As I have explained elsewhere, the first layer is made of infrastructure, i.e., computing power, cloud services, etc. [12: Schrepel, “Generative AI, Pyramids and Legal Institutionalism.”] The second layer is that of AI foundation models. The third comprises the applications, such as ChatGPT. The fourth comprises the users.

As documented in the Network Law Review, antitrust agencies are becoming interested in the field of generative AI, largely to avoid what they perceive as a failure (or, let us say, a mixed success) of their approach against big tech companies. [13: Schrepel, “A Database of Antitrust Initiatives Targeting Generative AI.”] While it is too early to assess the approach agencies will take in the field of generative AI, the first writings and official statements available seem to indicate that they are mostly interested in the infrastructure layer. [14: European Commission, “Competition in Virtual Worlds and Generative AI: Calls for Contributions”; Federal Trade Commission, “FTC Launches Inquiry into Generative AI Investments and Partnerships”; Autorité de la Concurrence, “The Autorité Starts Inquiries Ex Officio into the Generative Artificial Intelligence Sector and Launches a Public Consultation.”] This follows an ecosystem view, they say, because a lack of competition at that layer will affect all the other layers. This assessment is correct, but dangerously incomplete. It is correct because a good AI foundation model that runs on poor infrastructure (and takes two minutes to respond to prompts) is not compelling. [15: Chamath Palihapitiya, “Chamath Palihapitiya on X.”] It is incomplete for at least four reasons.

First, a lack of competition at the infrastructure layer would certainly affect AI foundation models, but the agents at the model layer would respond by investing in infrastructure. This dynamic is already in play. OpenAI is reportedly trying to raise $7 trillion to develop its own chips and computing power. [16: Michelle Cheng, “OpenAI’s Sam Altman Has Huge Chip Ambitions. They Might Not Work.”] Aware of this risk, Nvidia is pushing to steadily lower the cost of training LLMs, from $10 million a few months ago to as little as $400,000, as Alex Pentland and I wrote back in June 2023. [17: Schrepel and Pentland, “Competition Between AI Foundation Models: Dynamics and Policy Recommendations.”]

Second, some agencies seem to be looking at the infrastructure layer from a structuralist point of view, claiming here and there that a few (big tech) companies dominating this layer would be evidence of failure. Well, if that is how we measure success, then we will have failure. The infrastructure layer does not directly benefit from strong increasing returns (its economies of scale are limited by the costs of components), but it interacts with the foundation-model layer, which does. The idea is this: the more users a model has, the more revenue the company running it can generate and spend on access to unique data, thereby improving the model, attracting more users, and so on. A handful of foundation models that benefit the most from increasing returns will then dominate (if they can scale properly), leading to a concentration of the infrastructure that these models rely on. This is what I would like to call ‘increasing returns by proxy’. To be clear, the concentration of the infrastructure layer will not necessarily mean a lack of competition, for the reason explained in the previous paragraph.
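A back-of-the-envelope sketch of ‘increasing returns by proxy’ (three hypothetical models; all parameters invented for illustration): foundation models enjoy a users → revenue → data → quality → users loop, while infrastructure demand merely mirrors model usage, so concentration at the model layer propagates downward even though the infrastructure layer has no feedback loop of its own.

```python
import random

random.seed(3)
quality = [1.0, 1.0, 1.0]        # three hypothetical foundation models
infra_demand = [0.0, 0.0, 0.0]   # compute demand attributable to each

for step in range(200):
    # Users split across models in proportion to quality, with a bit of noise.
    weights = [q * random.uniform(0.95, 1.05) for q in quality]
    shares = [w / sum(weights) for w in weights]
    for i, share in enumerate(shares):
        # Model-layer loop: users -> revenue -> unique data -> better model.
        quality[i] *= 1.0 + 0.1 * share
        # Infrastructure has no loop of its own; its demand merely tracks
        # model usage, so it concentrates "by proxy".
        infra_demand[i] += share

print([round(s, 2) for s in shares])      # model shares pull apart...
print([round(d) for d in infra_demand])   # ...and compute demand follows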

Third, complexity insights suggest that policymakers and regulators should target the practices that deprive firms of the benefits of increasing returns. This is a clear, workable, and predictable agenda. Practices related to prices, for example, may be interesting, but they are not critical to defining dynamism. The same is true of practices related to big data. If anything, the technical literature and market reality converge on one idea: big data is necessary to compete at the foundation level, but it certainly does not define the dynamics in this space. [18: See Schrepel and Pentland for references to the technical literature.] Small companies may have access to enormous amounts of data, and new techniques make it possible to compete with smaller data sets. On the other hand, practices like predatory innovation [19: Schrepel, “Predatory Innovation: The Definite Need for Legal Recognition.”] – where a company updates its products to hurt competitors – are key if they cut off access to users, which Alex Pentland and I have identified as the source of increasing returns in generative AI. [20: Schrepel and Pentland, “Competition Between AI Foundation Models: Dynamics and Policy Recommendations.”]

Fourth, treating the preservation of the dynamics at and between each layer as an ill-defined objective (one that admits no fixed, optimal solution) pushes toward adaptive policymaking and interventions. The AI Act, as I discuss in a recent working paper, is largely non-adaptive: for example, the provisions on high-risk systems cannot be removed if they turn out to be ineffective or even harmful. [21: Schrepel, “Decoding the AI Act: A Critical Guide for Competition Experts.”] This is problematic because, by assuming that problems are well-defined (i.e., that agents will not adapt to regulation and cause new issues elsewhere), the AI Act runs the risk of becoming quickly obsolete. The ability to adapt to how agents respond to new regulations and enforcement strategies is central to making regulations and market interventions effective. To be clear: instead of relying primarily on experience (such as enforcement actions against big tech companies) to write new rules and standards, complexity economics indicates the need for flexible regulations that adapt to current events and observations. [22: Kupers and Colander, Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up.] This is especially relevant considering that the new competitive battleground centers on open-source vs. proprietary models, and that big tech companies do not all play on the same side. Mental models from the 2010s do not fit generative AI well. In practice, adaptive rules imply agreeing on how to measure the success of an intervention, finding ways to collect the relevant data, and implementing mechanisms to adjust the rules to what the data shows, as sketched below.
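In code-like terms, the difference between a static rule and an adaptive one is the presence of a measurement-and-revision loop. A deliberately simplified sketch; the metric, threshold, revision step, and market-response curve are all hypothetical placeholders, not a model of any actual regulation:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    scope: float           # e.g., how strictly a practice is restricted
    success_metric: float  # agreed-upon measure, e.g., entry rate at a layer

def observe_market(intervention: Intervention) -> float:
    """Placeholder for real data collection (market shares, entry, prices).
    Hypothetical response curve: agents adapt, so an overly broad scope
    performs worse than a moderate one."""
    return 1.0 - abs(intervention.scope - 0.4)

# A static rule is written once and never revisited.
static_rule = Intervention(scope=0.9, success_metric=0.0)
static_rule.success_metric = observe_market(static_rule)

# An adaptive rule embeds the loop: measure, compare, adjust.
adaptive_rule = Intervention(scope=0.9, success_metric=0.0)
for review_cycle in range(10):
    adaptive_rule.success_metric = observe_market(adaptive_rule)
    if adaptive_rule.success_metric < 0.8:              # agreed threshold
        adaptive_rule.scope = round(adaptive_rule.scope - 0.1, 2)  # revise

print(round(static_rule.success_metric, 2), round(adaptive_rule.success_metric, 2))
```

The static rule stays stuck with whatever outcome its initial assumptions produce; the adaptive rule converges toward what the observed data rewards. The hard institutional work is in the three ingredients the loop presupposes: an agreed metric, a data pipeline, and a revision mechanism.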

In short, policymaking and enforcement in Generative AI without a strong complexity mindset will be doomed to failure, i.e., aiming at the wrong targets and/or being ineffective.

I hope this short contribution begins to show that a complexity mindset does not require a complicated approach. Complexity science has been around for several decades, and researchers have produced a serious body of scientific literature from which lawyers can derive actionable insights. It is a matter of applying those insights to antitrust. For more publications on this topic, I recommend two pieces [23: Petit and Schrepel, “Complexity-Minded Antitrust”; Schrepel, “The Evolution of Economies, Technologies, and Other Institutions: Exploring W. Brian Arthur’s Insights.”] and ask for your patience. There is more to come.

***

Citation: Thibault Schrepel, Toward A Working Theory of Ecosystems in Antitrust Law: The Role of Complexity Science, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.

References

  • Arthur, W. Brian. “Competing Technologies, Increasing Returns, and Lock-In by Historical Events.” Economic Journal 99 (1989).
  • ———. “Foundations of Complexity Economics.” Nature Reviews Physics 3, no. 2 (2021): 136–45.
  • Autorité de la Concurrence. “The Autorité Starts Inquiries Ex Officio into the Generative Artificial Intelligence Sector and Launches a Public Consultation.” Accessed March 1, 2024. https://www.autoritedelaconcurrence.fr/en/article/autorite-starts-inquiries-ex-officio-generative-artificial-intelligence-sector-and-launches.
  • Chamath Palihapitiya. “Chamath Palihapitiya on X.” X, 2024. https://twitter.com/chamath/status/1754641005851328553.
  • Dooley, Kevin J. “A Complex Adaptive Systems Model of Organization Change.” Nonlinear Dynamics, Psychology, and Life Sciences 1 (1997): 69–97.
  • “Dynamic Competition Initiative,” n.d. https://www.dynamiccompetition.com/.
  • European Commission. “Commission Notice on the Definition of the Relevant Market for the Purposes of Union Competition Law (C/2024/1645).” Accessed March 3, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:C_202401645.
  • ———. “Commission Prohibits Proposed Acquisition of ETraveli.” Accessed March 3, 2024. https://ec.europa.eu/commission/presscorner/detail/en/ip_23_4573.
  • ———. “Competition in Virtual Worlds and Generative AI: Calls for Contributions,” January 8, 2024. https://competition-policy.ec.europa.eu/document/e727c66a-af77-4014-962a-7c9a36800e2f_en.
  • Federal Trade Commission. “FTC Launches Inquiry into Generative AI Investments and Partnerships.” Accessed March 1, 2024. https://www.ftc.gov/news-events/news/press-releases/2024/01/ftc-launches-inquiry-generative-ai-investments-partnerships.
  • Fleming, Lee, and Olav Sorenson. “Technology as a Complex Adaptive System: Evidence from Patent Data.” Research Policy 30, no. 7 (2001): 1019–39.
  • Kupers, Roland, and David Colander. Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up. Princeton University Press, 2014.
  • Michelle Cheng. “OpenAI’s Sam Altman Has Huge Chip Ambitions. They Might Not Work.” Quartz, 2024. https://qz.com/openai-sam-altman-ai-chip-ambitions-1851261305.
  • North, Douglass C. “Dealing with a Non-Ergodic World: Institutional Economics, Property Rights, and the Global Environment.” Duke Envtl. L. & Pol’y F. 10 (1999): 1.
  • Petit, Nicolas, and Thibault Schrepel. “Complexity-Minded Antitrust.” Journal of Evolutionary Economics 33, no. 2 (April 1, 2023): 541–70. https://doi.org/10.1007/s00191-023-00808-8.
  • Schrepel, Thibault. “A Database of Antitrust Initiatives Targeting Generative AI.” Network Law Review, 2024. https://www.networklawreview.org/antitrust-generative-ai/.
  • ———. “Decoding the AI Act: A Critical Guide for Competition Experts.” SSRN, October 23, 2023. https://papers.ssrn.com/abstract=4609947.
  • ———. “Generative AI, Pyramids and Legal Institutionalism.” Concurrences, no. 4-2023 (November 1, 2023).
  • ———. L’innovation Prédatrice En Droit de La Concurrence. Bruylant, 2018.
  • ———. “Predatory Innovation: The Definite Need for Legal Recognition.” Science and Technology Law Review 21 (2018). https://scholar.smu.edu/scitech/vol21/iss1/3.
  • ———. “The Evolution of Economies, Technologies, and Other Institutions: Exploring W. Brian Arthur’s Insights.” Journal of Institutional Economics (forthcoming 2024).
  • Schrepel, Thibault, and Alex Sandy Pentland. “Competition Between AI Foundation Models: Dynamics and Policy Recommendations.” 2023.
