Michal Shur-Ofry: “A Networks-of-Networks Perspective on AI Policy”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Michal Shur-Ofry, Associate Professor of Law at The Hebrew University of Jerusalem, and Visiting Faculty, NYU Information Law Institute (2023). The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).

***

This contribution suggests that the regulatory efforts in the field of AI can benefit from a Networks-of-Networks perspective. Networks-of-Networks is a cutting-edge subfield in the domain of complexity and network theory, which studies the interactions between networks. This body of research highlights the networks' interdependencies and explores the dynamics of failures that cascade among networks, including critical infrastructures such as electricity, health, the internet, and financial networks, potentially leading to large-scale catastrophes.

Embedding this perspective into policymaking in the field of AI suggests that, contrary to the initial regulatory inclination to define the risk of AI systems according to the areas in which they are utilized (e.g., health, or education), the regulatory effort must take into account the interactions of AI systems with additional systems, and the possible interdependencies among such systems. This perspective implies, inter alia, that general purpose models that are broadly accessible through the internet and easily adapt to a broad range of tasks can pose a greater systemic risk relative to models that operate in closed environments, and therefore deserve greater regulatory focus. It also illuminates that market reliance on a small number of models can enhance those systemic risks, thus adding to the literature that analyzes the perils of market concentration in the field of AI.

1. Introduction: The Emerging Regulatory Landscape

The recent advancements in the field of artificial intelligence, particularly in the area of generative AI and large language models, have sparked an intense debate pertaining to the regulation of AI systems. Some of the regulatory effort focuses on identifying “risky” fields in which AI systems are, or are expected to be, utilized. The most prominent example is the European AI Act (still under final deliberations as this article is being written) [1]. The regulatory approach underlying the Act distinguishes between AI systems according to their level of risk, and suggests imposing stricter regulatory obligations on providers of “high-risk” systems. The latter include, inter alia, systems that pose a risk of harm to health and safety, or that may adversely impact fundamental rights [2]. Under the emerging regulation, high-risk systems will be subject, among other things, to obligations concerning data governance, transparency, record keeping, security, and human oversight [3]. A similar approach was recently adopted by the Canadian regulator in its proposed AI legislation, and (given the pioneering nature of the European legislation and the acknowledged ‘Brussels Effect’) additional jurisdictions may follow suit [4].

Where do general purpose AI models fit into this scheme? The initial version of the AI Act did not explicitly refer to general purpose models, ostensibly leaving them completely free of regulatory obligations. Yet, things have changed with the more recent versions of the Act, negotiated after the launch of ChatGPT and the proliferation of additional large language models. The June 2023 Official Version referred to “foundation models”, defined in a way that likely included general purpose generative AI, and maintained that their developers must comply with a series of requirements, including safety requirements and duties to provide information before placing their products on the market (Art. 28b). The recent Informal Version omitted the reference to foundation models, while referring extensively to “general purpose” AI systems that have “the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”, imposing on their developers similar obligations to provide information and documentation (Art. 52c). Both versions of the AI Act refrain from classifying general purpose models as inherently “high risk”. However, the recent Informal Version now acknowledges that general purpose models with “high-impact capabilities” can pose “systemic risk” [5], and subjects the developers of the latter to risk-mitigation duties, among other obligations (Art. 52d & 68a).

At first glance, regulatory distinctions between AI systems according to their level of risk seem like a sound strategy: why not concentrate the regulatory effort on the use of AI in fields in which the risk is evident, such as health or safety? However, recent insights from cutting-edge research in the area of Networks-of-Networks cast doubt on the feasibility of this approach. In the following paragraphs I turn to review this literature.

2. The Networks-of-Networks Perspective

Networks-of-Networks (also referred to as “NetoNets”) is a recent scientific strand that is part of the broader field of complex systems (also referred to as “complex networks”, or simply “Complexity”). In order to understand how this research is relevant for questions of AI regulation, one should begin with a brief explanation of complex systems. Complex systems are systems composed of multiple interacting components: the social system, transportation systems, telecommunications systems, and the internet are a few non-exhaustive examples. The science of complex systems is a multidisciplinary field (deriving, inter alia, from physics, mathematics, and sociology) that studies those systems, their development, their common attributes, and their dynamics [6].

One of the prominent methodological tools used in studying complex systems is network analysis, which entails representing the various systems as networks, composed of “nodes” (the individual components in the system) and “links” (the interactions between the components). To quickly illustrate, in a social system the nodes would be people and the links could be social ties; in the world wide web the nodes would be websites and the links could be hyperlinks; while in a network of academic articles the nodes would be papers and the links could be citations to other papers. Alongside depicting various systems as networks and mapping their structure, the science of complex networks provides a unified terminology and a variety of mathematical metrics to study their traits, and makes it possible to identify commonalities among systems of different kinds [7].
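
To make the node-and-link abstraction concrete, the following minimal sketch uses Python and the open-source networkx library; the papers and citation links are hypothetical, invented purely for illustration:

```python
# A minimal sketch of network representation and analysis, using the
# open-source networkx library. The papers and citations are hypothetical.
import networkx as nx

# Nodes are papers; a directed link "A -> B" means "paper A cites paper B".
citations = nx.DiGraph()
citations.add_edges_from([
    ("paper_A", "paper_B"),
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_D", "paper_C"),
])

# Two standard metrics: how many citations each paper receives (in-degree),
# and how densely connected the system is overall.
print(dict(citations.in_degree()))  # paper_C receives 3 citations: an emerging hub
print(nx.density(citations))
```

The same few lines would serve, with different node and link semantics, for a social network or the web; this uniformity is precisely what allows network science to compare systems of very different kinds.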

One trait, common to many real-world systems, is the presence of hubs, namely nodes with an exceptionally large number of links. To illustrate, in a social system the hubs would be people with an exceptionally high number of social contacts, whereas on the web the hubs would be websites that have a remarkable number of links (Wikipedia is one example). The presence of hubs shortens the distance between random nodes in the network, and is generally considered a property that increases network resilience: even if many of the nodes are malfunctioning, the hubs still enable the quick and efficient flow of “things” such as information or ideas through the network, thus keeping it overall operative [8].
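
The resilience role of hubs can be illustrated with a short simulation. The sketch below is my own illustration rather than a reproduction of any cited study: it grows a scale-free network, whose growth rule naturally produces hubs, randomly disables a share of the nodes, and checks how much of the network remains connected.

```python
# Illustrative sketch: hubs and resilience to random failures.
import random
import networkx as nx

random.seed(42)
# A Barabasi-Albert network: its growth rule produces a few highly linked hubs.
G = nx.barabasi_albert_graph(n=1000, m=2)

# Randomly disable 30% of the nodes, mimicking uncoordinated local failures.
failed = random.sample(list(G.nodes()), k=300)
G.remove_nodes_from(failed)

# The largest connected component measures how much of the network stays operative.
largest = max(nx.connected_components(G), key=len)
print(f"{len(largest)} of {G.number_of_nodes()} surviving nodes remain connected")
```

Because random failures rarely hit the few hubs, most of the surviving network typically remains connected; a targeted attack on the hubs would yield a very different picture.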

While most research on complex networks has so far concentrated on patterns and dynamics emerging from the interactions within a single network, real-life networks rarely operate in isolation. Rather, many systems are connected to and interact with other systems: the electricity system is connected to the internet, which is connected to the health system, to telecommunication systems, and so forth [9]. Against this background, the past decade has seen the emergence of a research strand within Complexity titled Networks-of-Networks, which focuses on the dynamics and interactions between networks [10]. This rapidly developing body of research is still in its infancy, and it is too early to draw definitive conclusions, let alone summarize its scientific findings. Nevertheless, there is already substantial evidence indicating that the interactions between different networks can lead to the emergence of new, unexpected vulnerabilities that can percolate from network to network, and may eventually result in cascading failures. In network parlance, “the failure of nodes in one network leads to the failure of dependent nodes in other connected networks, which in turn may cause further damage to the first network”, a dynamic that can eventually lead to large-scale failures with potentially catastrophic consequences [11].
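
This dynamic can be sketched in code. The toy model below follows the spirit of the mutual-percolation models in this literature, in a deliberately simplified form that does not reproduce any specific paper: two networks are coupled by one-to-one dependency links, and a node remains functional only if it belongs to the largest connected cluster of its own network and its dependent counterpart in the other network survives as well.

```python
# Toy sketch of a cascading failure between two interdependent networks,
# in the spirit of mutual-percolation models (simplified for illustration).
import random
import networkx as nx

random.seed(1)
N = 500
A = nx.erdos_renyi_graph(N, 0.01)  # say, a power grid
B = nx.erdos_renyi_graph(N, 0.01)  # say, its internet-based control network
# Dependency assumption: node i in A and node i in B need each other to function.

alive = set(range(N)) - set(random.sample(range(N), k=100))  # initial local failure

while True:
    # A node functions only within the largest connected cluster of its own network...
    giant_A = max(nx.connected_components(A.subgraph(alive)), key=len, default=set())
    # ...and only if its dependent counterpart in the other network functions too.
    giant_B = max(nx.connected_components(B.subgraph(giant_A)), key=len, default=set())
    if giant_B == alive:
        break            # no new failures: the cascade has stopped
    alive = giant_B      # new failures feed back into both networks

print(f"{len(alive)} of {N} nodes survive the cascade")
```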

One prominent real-life example is the power outage that occurred in Italy in September 2003. The failure of the electricity network caused internet failures, leading to the malfunctioning of an internet-based control system that supervised the electricity system. The failure of the control system thus aggravated the failure of the power system, which in turn caused further stoppages of the internet, and so forth. The failures in these networks percolated to additional interdependent networks, leading to the collapse of the railway network, healthcare systems, and financial services networks [12]. Another example is the failure of the electricity system in Cyprus as a result of an explosion in 2011, which led to the collapse of the country’s water supply system, due to the strong interdependencies between the two networks [13]. Likewise, a failure of a node in a transportation network (say, a sea port) can cause a fuel shortage, which can lead to power failures, which in turn can cause further closures of sea ports.

These examples demonstrate that understanding the dynamics between networks is vital for guarding critical infrastructure, the disruption of which can cause significant economic and social harms. Because most infrastructures, from power grids and transportation networks to energy systems, health systems, and the internet, exhibit a networked structure, and because these networks are often connected to and interact with each other, understanding the risks and vulnerabilities entailed in those interdependencies is crucial for any policymaking seeking to protect them [14].

Insights from Complexity research further clarify that the cause of wide-ranging calamities that involve several networks need not be a huge failure in one of the interdependent networks. Rather, complex networks are characterized by nonlinear responses. Nonlinearity implies that small-size failures do not necessarily result in small-scale problems. Instead, because of the interdependencies among the networks’ nodes, small failures can accumulate and eventually cause abrupt and large-scale catastrophes [15]. More specifically, Networks-of-Networks research has shown that even small local events can trigger feedback mechanisms among the interdependent networks, yielding self-amplifying processes that may eventually cause a substantial adverse change in the interconnected systems [16]. Likewise, network simulations indicate that in interdependent networks that have a spatial structure (such as an electricity network or the internet infrastructure), a small failure in one network may lead to a catastrophic cascading failure [17].

Additional research suggests that such interdependent networks may be more vulnerable to small failures than a single network: while in a single system the number of nodes that need to fail before the system collapses is substantial, in the case of interdependent networks the malfunction of a small number of nodes can result in a cascading failure of the entire interdependent system. In other words (and without drawing definitive conclusions given the stage of research), this scholarship implies that interdependency among networks may increase the fragility of the entire interdependent system. Initial evidence further suggests that the previously mentioned hubs, which are generally considered to increase the robustness of a single network, may actually increase the vulnerability of the system when such a cascade of failures occurs in interdependent networks [18].
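
A small numerical experiment, again only an illustrative sketch built on the same toy model, captures both this fragility and the nonlinearity discussed above: sweeping the size of the initial failure, a single network tends to degrade gradually, whereas the interdependent pair holds up and then collapses abruptly once a critical failure size is crossed.

```python
# Illustrative sweep (toy model, not a published result): how the share of
# functional nodes responds to the size of an initial random failure, in a
# single network versus a pair of interdependent networks.
import random
import networkx as nx

random.seed(2)
N = 500  # nodes per network; edge probability 0.01 gives an average degree of ~5

def giant(graph: nx.Graph, nodes: set) -> set:
    # Largest connected cluster among the given nodes (empty set if none remain).
    return max(nx.connected_components(graph.subgraph(nodes)), key=len, default=set())

def single_survivors(k: int) -> float:
    G = nx.erdos_renyi_graph(N, 0.01)
    alive = set(range(N)) - set(random.sample(range(N), k=k))
    return len(giant(G, alive)) / N

def interdependent_survivors(k: int) -> float:
    A = nx.erdos_renyi_graph(N, 0.01)
    B = nx.erdos_renyi_graph(N, 0.01)
    alive = set(range(N)) - set(random.sample(range(N), k=k))
    while True:  # the mutual-percolation cascade from the earlier sketch
        stable = giant(B, giant(A, alive))
        if stable == alive:
            return len(alive) / N
        alive = stable

for k in (100, 200, 300, 400):
    print(k, round(single_survivors(k), 2), round(interdependent_survivors(k), 2))
# Typical outcome: the single network shrinks gradually, while the
# interdependent pair collapses abruptly past a critical failure size.
```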

3. Implications for AI Policy

What are the implications of this perspective for AI policy? While research in Networks-of-Networks is still developing, and the question of how to reduce the vulnerabilities of interconnected systems is an ongoing endeavor [19], the insights accumulated to date already provide crucial guidelines for AI regulation.

Prominently, the Networks-of-Networks literature clarifies that, because different interconnected systems affect each other, no system can be viewed in isolation when assessing vulnerabilities. This understanding is vital for AI regulatory policy. Recent advancements in the field of AI have made it clear that many AI systems will not be operating in isolation. Rather, general purpose AI systems are increasingly embedded in different complex systems. Large language models, for example, are already integrated into search engines, email systems, and major websites [20]. AI is already used in financial systems, health systems, and transportation systems.

While some of those systems are clearly “high risk” under the classification of the proposed AI Act, others, such as the world wide web or regular email networks, do not clearly fall within that classification. Yet, the Networks-of-Networks perspective instructs that it would be wrong to assume that the use of AI in such contexts is benign and risk-free. Rather, because of the interconnectedness among systems, an AI-related error in a system that is not classified as high-risk can easily percolate into other, high-risk, systems, leading to eventual cascades.

Take, for example, the prevalent case of large language models. One of the systemic risks that has emerged with the vast diffusion of these models is their propensity to produce errors, inaccuracies, misinformation, and hallucinations [21]. Due to the massive use of these models, and their embeddedness in search engines, email, and multiple other applications, errors generated by these models can percolate (and indeed already percolate) into the web or other training datasets, and can further spread to multiple other interconnected systems that are in fact high-risk, such as health or transportation systems. Moreover, due to the nonlinearity of complex systems discussed above, errors can accumulate in a self-amplifying process: false information that finds its way into the world wide web or other datasets can then serve as training material for the next generation of large language models, where it could be afforded greater prominence, and more severely affect additional systems [22].
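
This feedback dynamic can be captured in a deliberately stylized recurrence. All the numbers below are assumptions chosen for illustration, not empirical estimates; the point is only that once some share of each generation’s errors re-enters the next generation’s training data, the error rate compounds rather than staying flat.

```python
# Deliberately stylized illustration; all parameters are assumptions.
error_rate = 0.01    # share of false statements in the current web/training data
reabsorbed = 0.9     # assumed growth factor: errors that re-enter training data
                     # and resurface, with added prominence, per model generation

for generation in range(1, 6):
    error_rate = min(1.0, error_rate * (1 + reabsorbed))
    print(f"generation {generation}: error share ~ {error_rate:.3f}")
# 0.019, 0.036, 0.069, 0.130, 0.248: small errors compound instead of staying flat.
```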

Proposing detailed and comprehensive revisions to the extensive AI Act, whose provisions have yet to be finalized, is beyond the scope of this article. Nevertheless, the following paragraphs broadly sketch some non-exhaustive implications of the foregoing discussion. First, despite the intuitive appeal, the ability of the regulation to effectively draw dichotomous distinctions between “high-risk” and “other” AI systems is, at best, questionable.

Second, and assuming that the current approach is not overhauled in the immediate future, it is crucial to maintain flexibility in the interpretation of the “high-risk” categories, so as to allow courts and regulators to take Networks-of-Networks dynamics into account when determining the risks actually entailed in a certain system. Notably, the proposed AI Act sets out criteria for assessing “whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights” (Art. 7.2). The foregoing analysis indicates that the level of interconnectedness of the system with additional systems is a relevant factor that should be considered in this context: the more interconnected a system is, the more it is exposed to risks stemming from vulnerabilities of interconnected networks; conversely, closed systems with no (or minimal) interactions with other systems are less vulnerable (and less risky) in this respect. A Networks-of-Networks perspective, then, counsels adding the level of interconnectedness as a criterion for risk assessment under the Act.

Third, the AI regulatory framework should incorporate flexible schemes that will allow regulators to adapt and respond in a timely manner to risks stemming from interdependencies among networks, when such risks materialize [23]. Thus, for example, regulators should ensure that the mechanisms for adapting and adding to the “high-risk” categories, already embedded in the proposed Act (e.g., Art. 7), are sufficiently efficient to enable such swift adaptation.

Finally, the Networks-of-Networks perspective provides an additional angle to current analyses of market concentration in the field of general AI models. It illuminates that market reliance on a small number of models can enhance the systemic risks described above, if such models generate errors and failures. Referring to our previous discussion, those few powerful models might be analogized to hubs, whose vulnerabilities may undermine the robustness of various interdependent networks [24]. This perspective thus lends certain support to recent scholarly proposals to focus the regulatory efforts on the most powerful players in the field of general foundation models, even if such models do not qualify as “high-risk” [25].

4. Conclusion

Networks-of-Networks is an evolving scientific endeavor, and this paper does not purport to exhaust all its potential implications for AI policy. Nevertheless, the insights this science already provides cast significant doubt on the feasibility of a reductionist regulatory approach that does not sufficiently acknowledge interdependencies among systems. It indicates that AI models that are broadly accessible through the internet and easily adapt to a broad range of tasks can sometimes pose a greater systemic risk than AI models that operate in closed environments, and therefore deserve greater regulatory scrutiny. It further reinforces existing calls to limit market concentration in the field of general-purpose AI models. Any AI policy that aspires to protect critical infrastructure should take these insights into account.

***

Citation: Michal Shur-Ofry, A Networks-of-Networks Perspective on AI Policy, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.

Note

I thank Ofer Malcai, Thibault Schrepel, and Volker Stocker for useful input and comments.

References

  • [1] This piece is being published when the AI Act is in an interim phase. The last official version of the proposed Act dates back to June 2023: Regulation of The European Parliament and of The Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 21 April 2021; Artificial Intelligence Act Amendments adopted by the European Parliament on 14 June 2023 on The Proposal For a Regulation of The European Parliament and of The Council on Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)), https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf (hereafter: “June 2023 Official Version”). An updated version was agreed upon in December 2023, but has not yet been published, and it is unclear whether the drafting has been finalized. An informal version of the update was leaked by a journalist and can be found here (hereafter: “the recent Informal Version”). The formal final version is expected to be published only in February 2024. I therefore refer here both to the June 2023 Official Version and to the recent Informal Version, while acknowledging that additional changes may transpire when the AI Act is officially published. Nevertheless, the fundamental principles relevant to the present analysis have not materially changed between the different versions, and will likely underlie the final Act as well.
  • [2] June 2023 Official Version and recent Informal Version, Art. 6 & 7, and Annex III.
  • [3] Art. 8-15.
  • [4] For the Canadian proposal, see Bill C-27 (Can.), An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, available at https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading (November 2022). For the term “Brussels Effect”, see ANU BRADFORD, THE BRUSSELS EFFECT: HOW THE EUROPEAN UNION RULES THE WORLD (2020).
  • [5] For the definition of “systemic risks”, see recital 60(m) of the recent Informal Version.
  • [6] See, e.g., MELANIE MITCHELL, COMPLEXITY: A GUIDED TOUR 13 (2009); REUVEN COHEN & SHLOMO HAVLIN, COMPLEX NETWORKS: STRUCTURE, ROBUSTNESS AND FUNCTION 1 (2010).
  • [7] Id.
  • [8] See, e.g., COHEN & HAVLIN, supra note 6; Sergey V. Buldyrev et al., Catastrophic Cascade of Failures in Interdependent Networks, 464 NATURE 1025 (2010); Stephen Borgatti et al., Network Analysis in the Social Sciences, 323 SCIENCE 892 (2009); Wei Li et al., Cascading Failures in Interdependent Lattice Networks: The Critical Role of the Length of Dependency Links, 108 PHYS. REV. LETT. 228702 (2012).
  • [9] In fact, the internet itself can be regarded as a network comprised of interconnected networks. See, e.g., Stocker, V., Smaragdakis, G., Lehr, W., & Bauer, S., The Growing Complexity of Content Delivery Networks: Challenges and Implications for the Internet Ecosystem, 41(10) TELECOMMUNICATIONS POLICY 1003 (2017); Stocker, V., Knieps, G., and Dietzel, C., The Rise and Evolution of Clouds and Private Networks – Internet Interconnection, Ecosystem Fragmentation, 49th RESEARCH CONFERENCE ON COMMUNICATION, INFORMATION AND INTERNET POLICY (TPRC) (2021), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3910108.
  • [10] See, e.g., GREGORIO D’AGOSTINO & ANTONIO SCALA EDS., NETWORKS OF NETWORKS: THE LAST FRONTIER OF COMPLEXITY (2014).
  • [11] Dror Y. Kenett et al., Network of Interdependent Networks: Overview of Theory and Applications, in GREGORIO D’AGOSTINO & ANTONIO SCALA EDS., NETWORKS OF NETWORKS: THE LAST FRONTIER OF COMPLEXITY 28 (2014).
  • [12] Vittorio Rosato et al., Modelling Interdependent Infrastructures Using Interacting Dynamical Models, 4 INT’L. J. CRITICAL INFRASTRUCTURES 63 (2008); Buldyrev et al., supra note 8.
  • [13] See, e.g., C. Abbey et al., Powering Through the Storm: Microgrids Operation for More Efficient Disaster Recovery, in 12 IEEE POWER AND ENERGY MAGAZINE, 67 (2014).
  • [14] See, e.g., D’AGOSTINO & SCALA, supra note 10; Bagheri, E. et al., An Agent-Based Service-Oriented Simulation Suite for Critical Infrastructure Behaviour Analysis, 2(4) INT. J. OF BUSINESS PROCESS INTEGRATION AND MANAGEMENT 312 (2007). For regulatory policies on infrastructure protection, see, for example, Council Directive 2008/114/EC on the Identification and Designation of European Critical Infrastructures and the Assessment of the Need to Improve their Protection, https://www.legislation.gov.uk/eudr/2008/114/body/2020-12-31.
  • [15] Ilya Prigogine and Peter M. Allen, The Challenge of Complexity, in WILLIAM C. SCHIEVE AND PETER M. ALLEN, EDS., SELF ORGANIZATION AND DISSIPATIVE STRUCTURES: APPLICATIONS IN THE PHYSICAL AND SOCIAL SCIENCES, 7 (1982). For policy implications of this trait, see Michal Shur-Ofry, Access to Error, 34 CARDOZO AELJ 357, 367 (2016); Michal Shur-Ofry, IP and the Lens of Complexity, 54 IDEA 55, 95 (2013).
  • [16] Bonamassa et al., Interdependent Superconducting Networks, NATURE PHYSICS (2023); cf. Noam Kolt, Algorithmic Black Swans, 101 WASHINGTON UNIVERSITY L. REV. 16 (forthcoming, 2024), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4370566 (observing that AI-related catastrophic risks are not caused by a single factor but can “arise from the interaction of complex sociotechnical systems”).
  • [17] Amir Bashan et al., The Extreme Vulnerability of Interdependent Spatially Embedded Networks, 9 NATURE PHYSICS 667 (2013). See also Bagheri et al., supra note 14 (“Not only individual break down of networks raise concerns, but their mutual reliance is even more threatening, since a failure in one network can ripple down to additional networks”).
  • [18] Buldyrev et al., supra note 8.
  • [19] For some recent works, see C.D. Brummitt, R.M. D’Souza, E.A. Leicht, Suppressing Cascades of Load in Interdependent Networks, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES (2012); Antonio Majdandzic et al., Multiple Tipping Points and Optimal Repairing in Interacting Networks, NATURE COMM. 1 (2016); Michael M. Danziger et al., Dynamic Interdependence and Competition in Multilayer Networks, 15 NATURE PHYSICS 178 (2019).
  • [20] See, e.g., Nico Grant, Google Connects A.I. Chatbot Bard to YouTube, Gmail and More Facts, NYTIMES (September 19, 2023), https://www.nytimes.com/2023/09/19/technology/google-bard-ai-chatbot-youtube-gmail.html; Frederic Lardinois, Microsoft Launches the New Bing, with ChatGPT Built In, TECHCRUNCH (February 7, 2023), https://techcrunch.com/2023/02/07/microsoft-launches-the-new-bing-with-chatgpt-built-in/; Jonny Wilis, Microsoft Readies to Revolutionise the Workplace with ChatGPT, UCTODAY (January 19, 2023), https://www.uctoday.com/unified-communications/microsoft-readiesto-revolutionise-the-workplace-with-chatgpt/.
  • [21] E.g., Ziwei Ji et al., Survey of Hallucination in Natural Language Generation, ACM COMPUT. SURV. (November 2022), https://doi.org/10.1145/3571730.
  • [22] See, e.g., Eric Ulken, Generative AI Can Bring Wrongness at Scale, NIEMANLAB, https://www.niemanlab.org/2022/12/generative-ai-brings-wrongness-at-scale/; Ethan Perez et al., Discovering Language Model Behaviors with Model-Written Evaluations, ARXIV (Dec. 19, 2022), https://arxiv.org/pdf/2212.09251.pdf.
  • [23] For a detailed analysis of the need to incorporate responsive regulation as part of AI governance schemes, see Noam Kolt and Michal Shur-Ofry, Lessons from Complexity Theory for AI Governance (manuscript on file with the author).
  • [24] Supra, note 18 and the accompanying text.
  • [25] See Thibault Schrepel and Alex Pentland, Competition between AI Foundation Models: Dynamics and Policy Recommendations (June 28, 2023), https://ssrn.com/abstract=4493900. To a certain extent, the new reference under the recent Informal Version of the AI Act to models with “high impact capabilities”, which can trigger risk-mitigation duties, may constitute a step in the right direction.
