The Network Law Review is pleased to present you with a special issue curated by the Dynamic Competition Initiative (“DCI”). Co-sponsored by UC Berkeley and the EUI, the DCI seeks to develop and advance innovation-based dynamic competition theories, tools, and policy processes adapted to the nature and pace of innovation in the 21st century. This special issue brings together contributions from speakers and panelists who participated in DCI’s second annual conference in October 2024. This article is authored by Volker Stocker, an economist who has been leading the multidisciplinary research group “Digital Economy, Internet Ecosystem, and Internet Policy” at the Weizenbaum Institute, and William Lehr, an MIT telecommunications and Internet industry economist.
***
Abstract
The GenAI genie is out of the bottle. AI, and its vanguard GenAI, is a change agent that profoundly impacts the global transition to a digital economy. GenAI is already percolating through businesses and tasks, transforming how we create, innovate, and consume content (information), products, and services. Deployed ever more widely across all layers and components of value chains, it brings new affordances that have transformed (or are slated to transform) nearly all conceivable social and economic contexts. Changes will affect online and offline worlds directly and indirectly. Users of AI models, tools, and services have been interacting with GenAI for some time already, but the indirect effects of GenAI are inherently less obvious and harder to assess, especially at this early stage. End users are often unaware of how the products and services they use are produced, and even for domain experts, assessing the social and economic impact of ICTs has always proved difficult. Those challenges will only intensify with GenAI because its ability to operate in the background (a direct result of automation) means that many of those affected by GenAI will be unaware that—or how—GenAI is already impacting them.
In this article, we examine emerging policy challenges in two interrelated areas: the growing complexity of technical and business relationships in AI-driven digital ecosystems and changing concerns about asymmetric information and transparency. While GenAI should be viewed as part of a broader trajectory of ICT-based automation, our aim is to highlight how and why GenAI-related policy challenges differ. Although we cannot predict the post-waterfall future with any precision, it is clear that GenAI will be part of the landscape and will be a tool policymakers will need to use to address future challenges. That makes two requirements for future policymaking clear: we need a much better and more capable multi-stakeholder measurement ecosystem, and we need to strengthen policymakers’ human multidisciplinary institutional capacity.
The GenAI genie is out of the bottle. AI, and its vanguard GenAI, is a change agent that profoundly impacts the global transition to a digital economy. Encompassing an evolving set of models, applications, and tools, GenAI offers unprecedented adaptability and enhanced agentic potential. Its capacity for automated self-improvement and its potential to access and act upon new and real-time insights from unstructured data significantly expand the range of tasks amenable to digital automation.
GenAI is already percolating through businesses and tasks, transforming how we create, innovate, and consume content (information), products, and services.[1] Deployed ever more widely across all layers and components of value chains, it brings new affordances that have transformed (or are slated to transform) nearly all conceivable social and economic contexts. Changes will affect online and offline worlds directly and indirectly. Users of AI models, tools, and services have been interacting with GenAI for some time already, but the indirect effects of GenAI are inherently less obvious and harder to assess, especially at this early stage. End users are often unaware of how the products and services they use are produced, and even for domain experts, assessing the social and economic impact of ICTs has always proved difficult.[2] Those challenges will only intensify with GenAI because its ability to operate in the background (a direct result of automation) means that many of those affected by GenAI will be unaware that—or how—GenAI is already impacting them.
All of this adds to the recognition that the locus of digital technology-driven change is distributed and diffused. Adding the opacity of AI systems to the picture further amplifies the fundamental information and measurement challenges. Taken together, it is reasonable to assume that we are approaching an inflection point in human and economic history, a potential Fourth Industrial Revolution.[3] This inflection point can be described as a ‘waterfall moment’: we see the falls ahead but cannot anticipate what lies beyond them, how deep the drop will be, and so on. Despite this considerable uncertainty, characterized by ‘known unknowns’ and ‘unknowables’, it is worth considering what this means for GenAI policymaking, a question we explored in greater detail elsewhere.[4]
In this article, we expand on this work by examining emerging policy challenges in two interrelated areas: the growing complexity of technical and business relationships in AI-driven digital ecosystems and changing concerns about asymmetric information and transparency. While GenAI should be viewed as part of a broader trajectory of ICT-based automation, our aim is to highlight how and why GenAI-related policy challenges differ. Although we cannot predict the post-waterfall future with any precision, it is clear that GenAI will be part of the landscape and will be a tool policymakers will need to use to address future challenges. That makes two requirements for future policymaking clear: we need a much better and more capable multi-stakeholder measurement ecosystem, and we need to strengthen policymakers’ human multidisciplinary institutional capacity.
2. The Growing Complexity of AI-driven Digital Ecosystems
Digital ecosystems are in flux. Digital technologies have been evolving rapidly, but so too have the business models, industry structures, and markets that provide and depend on those technologies. This is well-known and has informed various policies and regulations governing digital technologies and platforms. The progress toward GenAI amplifies and accelerates the disruptive potential of digital systems and the regulatory policy challenges those give rise to. It is a force-multiplier.
To address those challenges, a necessary first step is to acknowledge that service provision in today’s increasingly digital economy is based on a complex and evolving fabric of interconnected and complementary resources, or collectively, digital infrastructures. These include networked computing and communications resources and data that are organized into complex and overlapping stacks of hardware and software platforms.[5]
Many of those resources are owned, managed, and controlled by different entities, which have imperfectly aligned and sometimes opposing interests.[6] AI technologies offer novel options for provisioning almost everything as a service (XaaS).[7] Enabled by softwarization and modularization, AI has enhanced the options to seamlessly combine (i.e., mix-and-match) these resources, rendering industry value chains and system architectures more fluid and dynamically changeable. That implies changing dependencies and more complex inter-dependencies within and across firms and industries. These may not always be apparent or visible[8] and further defy traditional efforts in economics to define clear boundaries between industries and markets or to classify inter-firm relations as vertical or horizontal. A first insight is that this calls for a more nuanced model of co-opetition rather than competition or coordination in the traditional sense.[9]
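To make the mix-and-match point concrete, consider the following stylized Python sketch (all class and provider names are hypothetical, not any vendor's actual API). It shows how a softwarized service can treat a third-party hosted model and a self-hosted model as interchangeable resources behind a common interface:

```python
# A toy sketch (hypothetical names throughout) of how softwarization lets a
# firm mix-and-match 'as-a-service' resources behind a common interface,
# making supplier relationships fluid and swappable.
from typing import Protocol


class TextModel(Protocol):
    """Any text-generation resource, regardless of who provisions it."""
    def generate(self, prompt: str) -> str: ...


class HostedModelAPI:
    """Stand-in for a third-party foundation model consumed as a service."""
    def generate(self, prompt: str) -> str:
        return f"[hosted-api completion for: {prompt!r}]"


class SelfHostedModel:
    """Stand-in for an open-weights model run on the firm's own cloud."""
    def generate(self, prompt: str) -> str:
        return f"[self-hosted completion for: {prompt!r}]"


def summarize_reviews(reviews: list[str], model: TextModel) -> str:
    # The business logic is indifferent to which layer of the value chain
    # (partner API, own infrastructure, ...) supplies the model.
    joined = "\n".join(reviews)
    return model.generate(f"Summarize these reviews:\n{joined}")


if __name__ == "__main__":
    reviews = ["Great battery life.", "Screen scratches easily."]
    # Swapping the upstream supplier is a one-line change:
    print(summarize_reviews(reviews, HostedModelAPI()))
    print(summarize_reviews(reviews, SelfHostedModel()))
```

Because the upstream supplier can be swapped with a one-line change, the same firm may simultaneously be a customer, competitor, and complementor of its model provider, which is precisely the co-opetition pattern noted above.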
These developments have been coming for a long time. Well before ChatGPT ushered in mass recognition of the GenAI future in November 2022, leading tech firms like Google, Amazon, Microsoft, and Apple had been investing in digital infrastructures and developing multi-layered ecosystems that span various layers of service and network platforms.[10] Key elements of each of these ecosystems are highly manageable, extensible, and scalable ‘infrastructure substrates’ comprised of digital infrastructure resources that have supported the development and delivery of a widening variety of services. These make possible the rise of pervasive computing, where distributed sensing (IoT), communications (wireless everywhere), and robotic automation (actuators, computer vision, etc.) enshroud us in a digitally augmented world. A future where everything may be connectable to and augmented by digital technologies, often without direct human attention or even awareness, is coming. The capabilities of AI are determined by this infrastructure substrate, which in turn shapes AI’s potential for use and innovation, creating feedback loops that lead to further enhancements in the substrate. Investment, innovation spillovers, and feedback loops shape the trajectory of AI-driven ecosystems.[11]
In view of these co-dependencies, it is hardly surprising that leading tech platforms have emerged as key players in the GenAI sphere.[12] Tech giants are engaged in a corporate arms race for AI supremacy, with several companies (e.g., from the US[13] and China[14]) investing heavily in acquiring, building, and developing essential resources like data centers, AI chips, and AI models. A wave of novel collaborations and partnerships between leading tech companies and AI firms (e.g., providers of foundation models) has accompanied these investments. As a CMA report from 2024[15] showed, many of the leading tech companies now have multiple AI partners. Moreover, many companies are additionally building, developing, and using their own AI tools and resources.[16] A second insight is that these developments emphasize the fluidity and complexity of the technical and business relationship fabric that underlies AI-driven digital ecosystems.
The launch of DeepSeek’s low-resource and low-cost R1 model sent a shock wave through markets, causing the stock prices of several key AI players,[17] including AI chip giant NVIDIA, to plummet. While DeepSeek was reported to have fueled a rebound effect that boosted demand for NVIDIA chips,[18] the launch of xAI’s Grok 3—a model that garnered much attention because it was reportedly trained on the company’s Colossus supercluster (which offers up to 200,000 GPUs)[19] and immediately ranked among the top-performing AI models—arguably sent the opposite signal.[20] A third insight is that these developments, together with the continuous improvement of existing models (i.e., new releases and versions, including reasoning models, by players like OpenAI, Anthropic, Google, or Meta, but also DeepSeek and Alibaba Cloud),[21] the rapidly growing variety of ‘open source’ models,[22] and the fact that leading players like Google,[23] OpenAI,[24] Microsoft,[25] Amazon,[26] and others[27] are ushering in the ‘agentic era’,[28] indicate that the future of GenAI is both highly uncertain and dynamic.[29]
As the above shows, the rise of GenAI as an engine for the Fourth Industrial Revolution further accelerates, complexifies, and renders more dynamic the interdependencies, competition challenges, and fluidity of industry relationships in the ICT sector. It may alter the power dynamics between ecosystem stakeholders (e.g., providers of telecoms infrastructure, cloud infrastructure, AI foundation models, and AI enhancements) and increase the prevalence of unintended consequences associated with ‘waterbed effects.’[30] With GenAI, those effects are not limited to the ICT sectors that provide the infrastructure but impact wider parts of the (global) economy.
That being said, the co-opetition dynamics, and their implications for contestability at different levels,[31] are harder to assess or manage with existing policy tools and institutions. Indeed, policymakers are challenged to identify appropriate interventions and organize consensus on how to implement them in ways that do not create problems worse than the ones they sought to mitigate.[32] We need to acknowledge that government interventions are inherently second-best and that government micro-management cannot be expected to outperform market-competition-directed industrial performance. At most, regulators can aspire to framing ex-ante guardrails and more targeted ex-post regulatory interventions to try to manage competition and remedy any market failures.
The key ecosystems contending for GenAI relevance, if not dominance, are global. The key technologies are software- or algorithm-based, and those are geographically footloose and increasingly difficult to regulate on any sovereign basis. Coordinating international policy is complicated because the ecosystems align with national interests that are not collectively aligned. Each nation recognizes the risk of being left behind, and some, more than others, see hope in potentially leading the race to the digital future. This is illustrated by the significant support and investments national governments are devoting to AI development and industrial policy initiatives. For example, in the first two months of 2025, the US announced the 500 billion USD Stargate[33] joint venture and the EU its 200 billion EUR InvestAI[34] initiative. Amidst these initiatives, geopolitical tensions[35] surrounding AI supremacy and tech regulation grow,[36] with AI safety concerns looming larger.
3. Revisiting Asymmetric Information Challenges in the GenAI Era
If managing digital automation was a difficult problem before GenAI, it is a much more difficult challenge going forward. As the options for transferring agency and control to digital systems expand, these systems must be more directly involved in the management challenge. The future of the digital economy in the GenAI era will require using GenAI to oversee GenAI and coordinate human responses globally. If the model for effective policymaking in a complex society or nation-state aspires to the principles of sound Evidence-Based Decision Making (EBDM), that already difficult challenge becomes even more complex in a world of GenAI, especially if we want to keep humans in the loop.[37]
Beyond calling attention to this looming (maybe already too late) challenge, we want to be clear that neither we nor anyone else has a clear roadmap to direct an effective policy approach. Even if we could agree on the optimal legislative or regulatory policy frameworks or rules—whether ex ante or ex post—to govern GenAI, implementing and enforcing those rules will be extremely difficult. And, we see no general consensus on what regulatory interventions are needed or desirable, but hiding our heads in the sand is also not an option.
At a minimum, we offer thoughts about some first necessary steps. These include building the sort of measurement ecosystem that the GenAI future will require, as well as the (human) multidisciplinary institutional capacity required to interact with the measurement ecosystem.
To realize the potential of GenAI and to manage it effectively—regardless of who is in charge, whether policymakers, companies, or individual end-users—a much more advanced measurement ecosystem[38] offering the capability for much more granular and detailed data collection, management, and interpretation (at scale) is needed. Only then can users be empowered to make informed decisions and GenAI be effectively used, managed, and regulated.
While much of this was true about ICTs before AI, the information needs present in AI-driven ecosystems, and the challenges of addressing those needs, are more intense. As the scale and scope of content and product creation has already expanded tremendously—just think about the glut of fake content that a single end user can already produce with limited skills and resources—so too has the potential (or even ability) to (mass) customize AI model outputs in real-time. Information needs change for all stakeholders, and measurement challenges must be addressed to enable informed decisions. In what follows, we illustrate this with two sets of issues.
Beware of AI Black Boxes and the Many Shades of Openness
A first set of issues relates to AI opacity and openness. AI systems rely on models built on neural networks and deep learning—technologies that are known to create explainability and auditing challenges.[39] This is also true for GenAI and is arguably aggravated by agentic AI and workflows where (several) AI systems act (interact) autonomously.[40] Expanded options for human-AI collaborations and AI-to-AI interactions potentially offer many benefits, yet they come at the cost of transparency challenges.
When discussing AI transparency, it is instructive to examine foundation models. Notably, the Foundation Model Transparency Index[41] has indicated that several widely deployed AI foundation models are significantly opaque regarding various subdomains of transparency, such as data, human labor, compute, model training-related subdomains, and mitigations (e.g., guardrails). Similar issues have been highlighted by the AI Data Transparency Index.[42] The transparency challenges examined here are closely intertwined with AI model openness: entrepreneurial decisions about openness create a source of ‘avoidable’ opacity beyond what is inherent to the technology. That being said, contrary to some public discourse that promotes a dichotomy between closed and open models, this issue is not binary. A recent study identified and explored 14 dimensions of model openness. Beyond providing sobering results for leading AI models, it demonstrates that model openness should be characterized as “composite” and “gradient,” emphasizing how a binary understanding of openness can lead to misclassifications and “open-washing.”[43] Schrepel and Pentland critically review AI model openness, presenting a framework to assess model openness beyond purely technical perspectives. Notably, the authors illustrate how model openness and AI licenses influence competition and innovation.[44]
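To illustrate why openness is better understood as composite and gradient rather than binary, consider the following minimal Python sketch. The dimension names, weights, and scores are invented for illustration; they are not the rubric of the 14-dimension study cited above.

```python
# Illustrative composite ('gradient') openness scoring. The dimensions and
# weights below are hypothetical, NOT the cited study's actual rubric.
DIMENSIONS = {
    "weights_released": 0.25,
    "training_data_documented": 0.25,
    "training_code_available": 0.20,
    "license_permissiveness": 0.20,
    "scientific_documentation": 0.10,
}

def composite_openness(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each graded in [0, 1]."""
    return sum(DIMENSIONS[d] * scores[d] for d in DIMENSIONS)

# Two hypothetical models that a binary label might both call 'open':
model_a = {"weights_released": 1.0, "training_data_documented": 0.9,
           "training_code_available": 1.0, "license_permissiveness": 0.8,
           "scientific_documentation": 0.9}
model_b = {"weights_released": 1.0, "training_data_documented": 0.1,
           "training_code_available": 0.0, "license_permissiveness": 0.3,
           "scientific_documentation": 0.2}

print(f"model_a: {composite_openness(model_a):.2f}")  # high: open in substance
print(f"model_b: {composite_openness(model_b):.2f}")  # low: 'open-washing' risk
```

A binary label would treat both hypothetical models identically because both release weights; the gradient view exposes the gap between them that ‘open-washing’ exploits.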
A first insight here is that binary discussions of whether AI models are open or closed, biased or unbiased, transparent or opaque seldom capture the dimensions and nuances needed to evaluate them meaningfully. Policies that treat the world as dichotomous are misguided and hardly enforceable because they are overly reductionist, polarizing, and susceptible to strategic circumvention.
A second insight is that the sources cited above point to technical, systematic, and structural hurdles for auditors (including academic researchers) to detect, evaluate, and mitigate the causes of model performance and potentially undesirable outcomes (including biases).[45] The costs and benefits of collecting, sharing, and responding to such information at different time-scales vary across stakeholders, yet that information is needed for AI systems to operate as expected and desired, which includes complying with industry regulations. It is clear that the providers of the digital infrastructure substrates will be tapping into the expanding (agentic) capabilities of GenAI to acquire the situational awareness needed to operate in the more fluid, dynamic, and complex market environments, but with variable success. There is no presumption that industry value chains or platform ecosystem participants will collect or share the information needed for good system-level decision-making, let alone to address the challenges of enforcing digital governance. For instance, platform providers may be held liable for moderating content (e.g., combating fake news or preventing cybercrime), or, if exempt from liability, may abuse their tools to modulate and manipulate consumer responses for private rather than their end-users’ or society’s benefit.[46] Concurrently, regulators need to monitor and audit markets to detect and enforce violations and to promote industrial policies (e.g., competition, universal accessibility, prohibited use of copyrighted material in training data, etc.). The watchers and the watched do not and should not trust each other without care. Fundamental and strategic asymmetric and imperfect information challenges will persist.
The design and implementation (including enforcement) of digital governance policies depends on stakeholders having the right information, which requires a robust measurement ecosystem. Without the ability to access and act upon timely and accurate market intelligence, key policies and regulations (e.g., the DSA) cannot be effectively enforced. Lastly, without the ability to measure the usage of GenAI and the infrastructure resources it depends on, it will be impossible to effectively govern GenAI’s economic and social impact.[47]
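What such measurement might involve can be sketched concretely. The following minimal Python example shows a hypothetical provenance record that a measurement ecosystem could log so that auditors can later reconstruct not only which model produced an output but also how it was configured at the time; the field names are our illustrative assumptions, not an existing standard or any regulator's schema.

```python
# Hypothetical sketch of an auditable inference-provenance record.
# Field names are illustrative assumptions, not an existing standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


def digest(text: str) -> str:
    """Hash content so it can be referenced without storing it verbatim."""
    return hashlib.sha256(text.encode()).hexdigest()[:16]


@dataclass(frozen=True)
class InferenceRecord:
    model_id: str            # which model family was used
    model_version: str       # the exact release, not just the family name
    temperature: float       # decoding configuration that shapes outputs
    system_prompt_hash: str  # configuration steering model behavior
    input_hash: str
    output_hash: str
    timestamp: str


def log_inference(model_id: str, version: str, temperature: float,
                  system_prompt: str, user_input: str, output: str) -> str:
    record = InferenceRecord(
        model_id, version, temperature,
        digest(system_prompt), digest(user_input), digest(output),
        datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # e.g., ship to an append-only store


print(log_inference("example/model", "2025-03-01", 0.7,
                    "You are a helpful assistant.",
                    "What is the refund policy?",
                    "Refunds are available within 30 days."))
```

Even a schema this simple illustrates the governance point: version fields and content hashes are only useful if providers log them honestly and auditors can access them, which is exactly where the asymmetric information and trust problems discussed above re-enter.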
The GenAI Digital Literacy Conundrum
A second set of issues relates to what can be subsumed under the term ‘GenAI digital literacy’. Here, we adopt a more human-centric perspective on asymmetric information challenges. The latter are aggravated by the fact that many GenAI tools and pre-trained AI models are easy to use and interact with, lowering skill-related adoption and innovation costs and barriers for end-users (e.g., individuals, including prosumers, as well as companies). The ease of use conceals much of the complexity inherent to these technical systems. Together with the transparency challenges explained above, this is arguably causing a mismatch between the ability to use GenAI tools and models and the literacy required to fully understand these complex technologies and use them appropriately.[48] The unprecedented pace of AI adoption and technology evolution (as the rise of AI agents demonstrates) further complicates the picture. Beyond privacy and security-related threats, it is hard for end-users to detect growing threats related to deception or manipulation (e.g., via synthetic content scams and fake news), or nudging through selective (i.e., biased) recommendations or AI-driven dark patterns.[49] In many contexts, we will directly confront or interact with GenAI tools, fully aware of their involvement. Here, asymmetric information challenges are complex. However, in even more contexts, we may lack this awareness and will indirectly confront or engage with products, services, and content that may, in some shape or form, be influenced by GenAI.[50] Here, the challenges are even more complex.
GenAI systems are designed to interact with both their environment and users, and it is from this interactivity that they derive much of their potential power. Many GenAI tools are responsive and adaptive to user input, but not necessarily in ways that humans may understand or desire. We will illustrate this with a few examples. GenAI tools may be used to extract consumer surplus based on first-degree price discrimination. Additionally, in-context learning enables end-users to customize or personalize GenAI tools (such as ChatGPT) and/or their outputs, whether intentionally or unintentionally.[51]
The adaptation of AI systems may yield data feedback loops, attracting users, incentivizing engagement, and enabling data network effects through both cross-user learning and within-user learning, customization, and personalization.[52] This can influence end-user behavior and choices, based on nudging or manipulation,[53] as mentioned earlier, and can raise switching costs. For some users, acquiring GenAI digital literacy may lower switching costs. How GenAI tools are used may determine which of these countervailing effects prevails.
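A toy sketch can make the two feedback loops concrete. In the following Python example (the class and its methods are hypothetical simplifications, not any vendor's implementation), every interaction feeds a shared pool that cross-user learning would draw on, while per-user memory accumulates as within-user personalization and serves as a crude proxy for switching costs:

```python
# Toy sketch of the two feedback loops: a shared cross-user learning pool
# and per-user memory that personalizes replies and accumulates lock-in.
# A hypothetical simplification for illustration, not a real service design.
from collections import defaultdict


class AssistantService:
    def __init__(self) -> None:
        self.global_examples: list[str] = []  # cross-user learning pool
        self.user_memory: defaultdict[str, list[str]] = defaultdict(list)

    def interact(self, user_id: str, message: str) -> str:
        # Within-user loop: condition replies on this user's recent context.
        context = self.user_memory[user_id][-5:]
        reply = f"(reply conditioned on {len(context)} remembered items)"
        self.user_memory[user_id].append(message)
        # Cross-user loop: the interaction also feeds the shared pool that a
        # later (re)training run would draw on, benefiting all users.
        self.global_examples.append(message)
        return reply

    def switching_cost_proxy(self, user_id: str) -> int:
        # Memory a user would lose by leaving: a crude lock-in indicator.
        return len(self.user_memory[user_id])


svc = AssistantService()
svc.interact("alice", "I prefer short answers.")
svc.interact("alice", "Plan my Lisbon trip.")
print(svc.switching_cost_proxy("alice"))  # grows with use, raising lock-in
```

The same accumulation that makes the tool more useful to ‘alice’ is what makes leaving it costlier, which is the countervailing-effects point made above.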
From a policy perspective, the growing complexity of digital literacy issues raises questions about what it means to be an informed consumer (or user more broadly) and what measures are necessary to protect them effectively.[54]
A first insight is that our discussion suggests that information problems distort adoption and usage decisions, thereby affecting market outcomes. As GenAI expands the range of digital automation possibilities across the economy and human activities, it simultaneously lowers and raises literacy requirements in different contexts, with implications for civic and social engagement (an informed citizenry) as well as employment options (e.g., shifting labor market prices due to skill-based changes and ICT capital substituting for human labor).
A second insight is that asymmetric information challenges evolve and manifest in a personalized manner, making effective auditing not only difficult to scale but also challenging to implement. Without the ability to detect, we cannot act or protect. GenAI-augmented measurement tools need to be available to the industry stakeholders providing the digital infrastructure substrates supporting the GenAI tools, the providers of the GenAI services, the regulatory institutions (both governmental and non-governmental), and the end-users. There is no alternative to a more capable measurement ecosystem.
4. The GenAI Waterfall – Addressing Digital Policy Challenges in the Age of GenAI
Returning to the GenAI ‘waterfall’ moment metaphor, we might ask how far we are from the precipice, whether we can avoid going over the falls, or how high the falls may be. Those are all interesting and important questions, but our focus here is on what policymakers need to do when they do not, or cannot, know the answers to those questions and do not know whether they have the right tools to navigate the falls—but still have the responsibility to steer while going over them. The policy challenges can be divided into three focal areas: (i) the quality of the ship that is headed over the falls; (ii) the steerage over the falls; and (iii) planning for what to do when the ship emerges on the other side.
Pay Attention to the Ship You Are in Going over the Falls
The ship is defined by the status quo world that we find ourselves in. It is characterized by today’s complex mix of digital infrastructures and the industry structures that sustain them. It also includes the legacy regulatory institutions. There is no GenAI without digital infrastructure, and the continued advances in digital infrastructure depend on GenAI. Their joint evolution also implies that the interests of competition authorities and sector-specific infrastructure regulators tasked with industrial policy goals (e.g., universal access to critical digital infrastructures) are inextricably intertwined and co-dependent. If we are going over the falls, we have to temper our policy interventions and evaluate proposed modifications to our ship with respect to how well it will meet the challenges of surviving, recovering, and adapting following a trip over the falls.
Steerage Capabilities are Jointly Determined by the Captain, Crew, and the Ship’s Capabilities
We may wish for a better ship before going over the falls, but assuming we have done all we can to ensure the ship is as seaworthy as possible, we will have to rely on the captain and crew to steer the ship. The steerage challenge will call for a mix of muscle memory to stay the course as well as the ability to make whatever (quick) adjustments may be feasible and needed to best navigate the falls. The skill of the captain will need to be matched to the capabilities of the ship, and the desire to implement modifications needs to be balanced against the need to be seaworthy in the event those modifications cannot be successfully implemented before going over the falls. Importantly, if the captain and crew are fighting over how to steer the ship, the lack of consensus may prove disastrous. A captain with a ship with tight but agile controls may be able to effect rapid adjustments. In contrast, a captain with rigid controls may have limited ability to adjust steerage. And, a captain with flaky controls may need to be excessively cautious with steerage. As GenAI capabilities and real-time intelligence about conditions will vary, remain uncertain, and be subject to significant information asymmetries, policymakers will need to resist the back-seat driver’s temptation to micro-manage responses, focusing instead on trying to improve the available information and ensuring appropriate guardrails are in place.
After the Falls, Policymakers will have an Important Role in Roadmapping the Future
Finally, when the ship emerges from the falls, it will need capabilities to coordinate plans for where to go next. That will entail addressing the disruption and disparities in economic impacts resulting from the trip over the falls. For example, the trip over the falls may result in the need to rethink what services ought to be regarded as critical infrastructure and how public efforts to address gaps ought to be refocused. Beneficiaries of pre-falls support and those calling for support reforms may be expected to disagree, and resolving those differences will require better ground-truth data on the new status quo. Advances in GenAI can be expected to impact the structure of the value chain relationships influencing calls for regulatory interventions, and the comparative cost/benefit tradeoffs for alternative data or infrastructure provisioning strategies (e.g., public-private partnerships, national-international policy coordination, evaluating market boundaries, etc.).
None of those are easy problems, but it is clear that solving them will require multi-stakeholder engagement. Waterbed effects will require consideration of system-level and component-level interventions; and those will be national and transnational in scope. To improve hopes of successful results, policymakers should aspire to the principles of sound EBDM.[55] That requires information, which requires data and its processing and ingestion into the policy-making process. Measurement is the acquisition of that data, and undertaking the requisite measurements will require access to the measurement ecosystem discussed earlier.
Most of the measurement infrastructure and capabilities will be embedded in the digital infrastructures that will make GenAI, and its management, possible. Thus, the future of GenAI and the evolution of the measurement ecosystem and its infrastructure are also inextricably intertwined and co-dependent. For GenAI to realize its full potential, it will need access to a continuous stream of real-time measurements to enable it to learn and respond appropriately to its context. Those measurement capabilities will be embedded in the digital infrastructure that will support the GenAI. That infrastructure comprises the fabric of networked computing and communications: ‘on-demand’ resources provided within and across the interconnected platform ecosystems. The collection, sharing, and processing of the measurements will present a strategic governance challenge. GenAI can be both a tool to assist in addressing that challenge and a demand driver for a solution. Just as there is no GenAI without infrastructure, and (less rather than no) infrastructure without GenAI, there is no GenAI without a measurement ecosystem and (a significantly less capable) measurement ecosystem without GenAI.
All of these GenAI, measurement, and infrastructure entanglements present a Gordian Knot that will require multidisciplinary, multistakeholder engagement to resolve. Policymaking needs to be international and engage a diverse collection of points-of-view across industry sectors, government agencies, and academic disciplines. Solutions will require quick-response action from appropriately skilled teams of lawyers, engineers/technologists, and social scientists (including economists) to make sense of the measurement data. While policymakers may not be able to forecast what the post-falls world will look like, they at least know that they will need more granular, detailed and complex measurements compiled from stakeholders (including GenAI) with imperfectly aligned incentives.
The essence of policy is to effectively shift the status quo toward a desired improvement. This requires a comprehensive understanding of the problem that policymakers seek to address, while acknowledging path dependencies in markets and governance. It also requires a clear idea of what would constitute an improvement and a desired future state. Having a better idea of where we are, where we want to go, and some sense of the obstacles and opportunities involved in getting there is essential to crafting appropriate and effective policy interventions. That being said, policies adding friction or grease are likely both needed: friction to impede movement in directions we want to move away from, and grease to ease movement in the direction we want to go.
Although we do not see consensus (yet) on what the desired future should be and lack adequate ground-truth understanding of current and near-term-future conditions, we know that an engaged dialog informed by better information (measurement data) than is currently available needs to be part of any path forward. We will also need institutions that are more capable and flexible than the legacy regulatory authorities. However, discarding existing frameworks and starting anew based on a clean slate approach is a bad idea and certainly not the way forward to enable and empower ecosystem stakeholders to effectively navigate towards a desirable digital future.
Potential threats to competition or policy goals in the face of GenAI can change rapidly. Legacy notions of vertical or horizontal competition are outmoded. We need more agile policies. GenAI can help deliver that agility to policymakers and industry stakeholders interested in advancing toward a better future. Yet, GenAI may concurrently serve as a tool for stakeholders seeking to evade regulatory constraints—or worse, derail collective improvement efforts. For example, expanded capabilities offering the dynamic flexibility that enables XaaS may create new bottlenecks or eliminate old ones. Such challenges bring infrastructure and data policies to the fore and cause them to overlap for GenAI policy. We know that after the falls we will be forced to confront “unknown unknowns” and that the GenAI tools that give rise to threats of harm will be available to bad actors; unless we work to ensure that policymakers also have access to those tools, the likelihood of bad outcomes will be enhanced. Those bad outcomes may be the result of malicious activity or simply the failure to coordinate. To oppose the bad actors and to coordinate the good actors, a more capable measurement ecosystem is needed. That is true today, before the AI waterfall, but will be even more true after the AI waterfall if humans want to ensure that automation keeps humans in the control loop.
As an example of the need for more agile, better-(measurement)-informed policymaking, consider the growing relevance of contestability, which is more important in a world of XaaS where a greater range of infrastructure and resources are shared. Contestability increases the relevance of assessing potential competition and shifts competition policy concerns from ex post reactions to ex ante risk management. In the face of a faster, more complex world, with more distributed innovation enabled by GenAI, asymmetric information challenges will loom larger. While GenAI can accentuate or ameliorate those problems, policymakers will have to cooperate in building the sort of measurement ecosystem that the GenAI future will require. Moreover, they will need the human multidisciplinary institutional capacity that will be required to interact with the measurement ecosystem. As curators, gatekeepers, and regulators, policymakers will need to craft transparency and disclosure policies and invest in government capacity to act on what the measurement ecosystem reveals.
Citation: Volker Stocker & William Lehr, The Growing Complexity of Digital Economies over the GenAI Waterfall: Challenges and Policy Implications, Network Law Review, Spring 2025.
* Volker Stocker would like to acknowledge funding by the Federal Ministry of Education and Research of Germany (BMBF) under grant No. 16DII131 (Weizenbaum-Institut für die vernetzte Gesellschaft – Das Deutsche Internet-Institut).
References:
- [1] In a joint paper with Zachary Cooper, we described this in the context of the law and economics of copyright in the GenAI era with a focus on music creation. Cooper, Z., Lehr, W.H., & Stocker, V. (2025, 23 January). The New Age: Legal & Economic Challenges to Copyright and Creative Economies in the Era of Generative AI. The Digital Constitutionalist. Available at: https://digi-con.org/the-new-age-legal-economic-challenges-to-copyright-and-creative-economies-in-the-era-of-generative-ai/. See also Cooper, Z. (2024). The AI Authorship Distraction: Why Copyright Should Not Be Dichotomised Based on Generative AI Use. Available at: https://ssrn.com/abstract=4932612.
- [2] Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.
- [3] Schwab, K. (2016). The Fourth Industrial Revolution: What it Means, How to Respond. World Economic Forum. Available at: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/.
- [4] Lehr, W.H. & Stocker, V. (2024). Competition Policy over the Generative AI Waterfall. Artificial Intelligence & Competition Policy (ed. Abbott, A. & Schrepel, T.), Concurrences. Available at: https://ssrn.com/abstract=5131798.
- [5] Relevant resources are comprised of physical components (hardware, real estate, utilities, etc.), software, data, and property rights over tangible and intangible assets. Those are interconnected via wired and wireless links to routers and servers deployed in the cloud (e.g., in hyperscale datacenters), at the edges of service-provider networks, and on company premises, and combined with software (including ML/AI and various foundation models), other hardware (e.g., end-user devices), and data (e.g., publicly available data, acquired data, proprietary data, or synthetic data). The physical resources are tangible assets that are localized in space, but the software assets that increasingly provide the functionality are capable of being virtualized and enable the delocalization of control and physical-world action. Intellectual property (IP) and other intangible assets (embodied in the form of contracts) are also key resources. The human assets are employees as well as customers. As we will explain below in more detail, business and technical relationships are complex and changing.
- [6] Two examples are noteworthy here to illustrate how interests may change over time. First, Apple has recently been reported to integrate both OpenAI’s GPT and Google’s Gemini in their Apple Intelligence. See Mauran, C. (2025, 23 February). Apple Intelligence with Google Gemini integration looks to be coming soon. Mashable. Available at: https://mashable.com/article/apple-intelligence-google-gemini-integration-reportedly-coming-soon. Second, Microsoft has been reported to reduce its dependency on OpenAI by building their own AI models. See Wiggers, K. (2025, 7 March). Microsoft reportedly ramps up AI efforts to compete with OpenAI. TechCrunch. Available at: https://techcrunch.com/2025/03/07/microsoft-reportedly-ramps-up-ai-efforts-to-compete-with-openai/
- [7] With XaaS, the network infrastructure that supports on-demand access is shared. That is a source of the scale and scope economies that reduce total costs and enable multiplexing of demand.
- [8] Consider developments towards enhancing software supply chain transparency based on Software Bill of Materials (SBOM).
- [9] See Brandenburger, A., & Nalebuff, B. (2021). The rules of co-opetition. Harvard Business Review, 99(1), 48-57 and Brandenburger, A., & Nalebuff, B. (1996). Co-opetition. Doubleday. New York.
- [10] Lehr, W.H., Clark, D.D., & Bauer, S. (2019). Regulation when platforms are layered. 30th European Conference of the International Telecommunications Society (ITS).
- [11] See, for example, Stocker, V., Knieps, G., & Dietzel, C. (2021). The Rise and Evolution of Clouds and Private Networks – Internet Interconnection, Ecosystem Fragmentation. TPRC49: The 49th Research Conference on Communication, Information and Internet Policy. Available at: https://ssrn.com/abstract=3910108. Note also that V. Stocker presented related work (joint with J.M. Bauer and A. Pourdamghani; “Innovation dynamics in the internet ecosystem and digital economy policy”) in 2023 at the 32nd ITS European Conference in Madrid, Spain.
- [12] See Głowicka, E. & Málek, J. (2024). Digital Empires Reinforced? Generative AI Value Chain, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker). Network Law Review.
- [13] See, for example, The Economist (2024). Big tech’s capex splurge may be irrationally exuberant. The Economist. Available at: https://www.economist.com/leaders/2024/05/16/big-techs-capex-splurge-may-be-irrationally-exuberant
- [14] See, for example, Reuters (2025, 24 February). Alibaba to invest more than $52 billion in AI over next 3 years. Reuters. Available at: https://www.reuters.com/technology/artificial-intelligence/alibaba-invest-more-than-52-billion-ai-over-next-3-years-2025-02-24/
- [15] CMA (2024). AI Foundation Models: Technical update report. Available at: https://assets.publishing.service.gov.uk/media/661e5a4c7469198185bd3d62/AI_Foundation_Models_technical_update_report.pdf
- [16] For example, Microsoft has partnered with (and funded) OpenAI and integrates GPT-4 into many of its products. Examples are their Copilot (see https://www.microsoft.com/en-us/microsoft-copilot/organizations#solutions; accessed: 27 March 2025) and their Azure OpenAI Service (see https://learn.microsoft.com/en-us/azure/ai-services/openai/overview; accessed 27 March 2025). However, OpenAI has launched its own search engine (see OpenAI (2024, 31 October). Introducing ChatGPT search. Available at: https://openai.com/index/introducing-chatgpt-search/), has been working on a browser (see Weiß, E.-M. (2024, 22 November). OpenAI wants to enter the browser war. Heise Online. Available at: https://www.heise.de/en/news/OpenAI-wants-to-enter-the-browser-war-10100663.html), and has partnered with Apple (see OpenAI (2024, 10 June). OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences. Available at: https://openai.com/index/openai-and-apple-announce-partnership/) (which has developed its own foundation models).
- [17] See, for example, Cui, Y. and Yang, A. (2025, 28 January). Why DeepSeek is different, in three charts. NBC News. Available at: https://www.nbcnews.com/data-graphics/deepseek-ai-comparison-openai-chatgpt-google-gemini-meta-llama-rcna189568. See also the comprehensive discussion of critical implications of the DeepSeek moment (e.g., about cost efficiency, environmental impacts, security, etc.) by Woods, A. (2025, 28 January). DeepSeek: What You Need to Know. MIT CSAIL Alliances. Available at: https://cap.csail.mit.edu/research/deepseek-what-you-need-know. Note that there were discussions about the illegal use of distillation to use OpenAI models to train DeepSeek’s R1 model; see Metz, C. (2025, 29 January). OpenAI Says DeepSeek May Have Improperly Harvested Its Data, The New York Times. Available at: https://www.nytimes.com/2025/01/29/technology/openai-deepseek-data-harvest.html
- [18] One news article points to the potentially complex impact of the DeepSeek moment, stating that “Despite initial market panic, investors now believe DeepSeek could fuel even greater demand for Nvidia’s chips and boost its market dominance.” See Nolan, B. (2025, 25 February). Nvidia gets a boost from China’s DeepSeek ahead of earnings. Fortune. Available at: https://fortune.com/2025/02/25/nvidia-china-deepseek-earnings/
- [19] Buntz, B. (2025, 18 February). Musk’s xAI launches Grok 3, which it says is the ‘best AI model to date’ thanks in part to a 200,000-GPU supercluster. R&D World. Available at: https://www.rdworldonline.com/musk-says-grok-3-will-be-best-ai-model-to-date/. For a recent overview of the evolution of such clusters or “AI supercomputers”, see Pilz, K. F., Sanders, J., Rahman, R., & Heim, L. (2025). Trends in AI supercomputers. arXiv preprint arXiv:2504.16026.
- [20] xAI (2025, 19 February). Grok 3 Beta — The Age of Reasoning Agents. Available at: https://x.ai/news/grok-3
- [21] To get an impression of the variety of more than 200 foundation models that developers can use based on Google Cloud’s Vertex AI, see Google Cloud (n.d.). Model Garden on Vertex AI. Available at: https://cloud.google.com/model-garden?hl=en; retrieved 17 May 2025. Another example is Amazon Web Services’s Bedrock platform, see AWS (2025). Amazon Bedrock (2025). Available at: https://aws.amazon.com/bedrock/; retrieved 17 May 2025.
- [22] Note that Hugging Face reported ca. 1 million models by the end of 2024 (see Hugging Face (2024). Open-source AI: year in review 2024. Available at: https://huggingface.co/spaces/huggingface/open-source-ai-year-in-review-2024?day=4). Presently, there are over 1.5 million models accessible on HuggingFace (see https://huggingface.co/models; retrieved: 14 March 2025).
- [23] Pichai, S., Hassabis, D., & Kavukcuoglu, L. (2024, 11 December). Introducing Gemini 2.0: our new AI model for the agentic era. Available at: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
- [24] OpenAI (2025, 23 January). Introducing Operator. Available at: https://openai.com/index/introducing-operator/
- [25] Microsoft (2025). Introducing agents. Available at: https://support.microsoft.com/en-us/topic/introducing-agents-943e563d-602d-40fa-bdd1-dbc83f582466
- [26] Amazon AGI (2025, 31 March). Introducing Amazon Nova Act. Available at: https://labs.amazon.science/blog/nova-act
- [27] Note that the recent launch of the Chinese autonomous AI agent Manus has been dubbed the second DeepSeek moment. See Smith, C.S. (2025, 8 March). China’s Autonomous Agent, Manus, Changes Everything. Forbes. Available at: https://www.forbes.com/sites/craigsmith/2025/03/08/chinas-autonomous-agent-manus-changes-everything/
- [28] See the AI Agent Index as presented in Casper, S., Bailey, L., Hunter, R., Ezell, C., Cabalé, E., Gerovitch, M., … & Kolt, N. (2025). The AI Agent Index. arXiv preprint arXiv:2502.01635.
- [29] The emergence of competitive models and tools suggests a degree of ‘contestability’ and competitive pressure. Admittedly, it remains to be seen how open sourcing and licensing will affect the profitability of models and services and, therefore, the ability to attract funding from third-party private investors. Open-source models certainly put pressure on commercial, closed models.
- [30] Unlike spring or foam mattresses that can isolate impacts on one part of the mattress from being felt at other parts, movement anywhere on a waterbed results in waves that produce effects that may be felt across the entire bed with various intensities, even at a distance from the force that initiated the wave. Thus, entrepreneurial or regulatory interventions at one platform level of an ecosystem may cause ripples and effects across other ecosystems and layers within the ecosystems. The same is true about efforts to regulate the large digital platform operators by focusing on the layers or markets where their market power may be of greatest concern (e.g., Google in search, Amazon in eCommerce, etc.).
- [31] Competition and innovation dynamics within and across different complementary layers of the AI stack are shaped by a fabric of technical and business relationships that require holistic analyses. For example, contestability at lower (upstream) layers may shape outcomes in downstream layers. Cross-layer spillovers may be bidirectional and feedback loops can emerge. Additionally, distinct layers exhibit varying economic characteristics that can lead to different market concentrations and levels of control, whether centralized or decentralized. Economies of scale and scope on the supply side, along with direct and indirect network effects on the demand side (user-based and data-driven), can lead to significant size advantages. For example, while we observe relatively high market concentration in cloud markets and foundation models (where returns to scale are critical), there is remarkable innovation in AI models at the upper layers. See FN 22 for the staggering number of models accessible on HuggingFace.
- [32] Marc Andreessen analogized efforts to regulate AI as akin to US social-engineering efforts to prohibit the use of alcohol. See Andreessen, M. (2023, 6 June). Why AI will Save the World. Available at: https://a16z.com/ai-will-save-the-world/.
- [33] OpenAI (2025, 21 January). Announcing The Stargate Project. Available at: https://openai.com/index/announcing-the-stargate-project/
- [34] European Commission (2025, 11 February). EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence. Press Release. Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_467
- [35] See, for example, Marr, B. (2024, 18 September). The Geopolitics of AI. Forbes. Available at: https://www.forbes.com/sites/bernardmarr/2024/09/18/the-geopolitics-of-ai/
- [36] Examples of these growing tensions were the speeches of US Vice President Vance in Paris at the AI Summit and in Munich at the Munich Security Conference. See Cerulus, L. (2025, 15 February). Vance’s week of waging war on EU tech law. POLITICO. Available at: https://www.politico.eu/article/jd-vance-waging-war-eu-tech-law-msc-ai-summit/
- [37] The alternative may be to throw up our hands and say we cannot regulate when we do not know what we are regulating or where it is heading and let the GenAI regulate itself (or not). In many cases, digital system performance is enhanced by directly connecting the digital sub-systems and replacing what previously was a human-mediated control point or interface. That has long been proven to be the case with respect to the automation of data collection and form processing to bypass human transcription errors. With the rise of agentic AI, and the prospect that AI may offer the best way to protect an individual’s personal interests and autonomy with respect to interacting with AI systems controlled by firms or others, at what point will an individual just decide to leave it to the AI agent to negotiate, transact, and act on the individual’s behalf? A similar challenge confronts nation-state policymakers seeking to manage the evolution of GenAI technology to preserve human control at any scale.
- [38] We say ecosystem to highlight the fact that the measurements will and should be provided by multiple industry participants and stakeholders with different perspectives (and vantage points) engaged in co-opetition. In that multistakeholder environment, reaching consensus on measurement outcomes will require reconciling conflicting perspectives, and collective efforts among all honest participants to oppose efforts to disrupt welfare-enhancing measurement efforts. Criminals work best in the dark and asymmetric information and interests can lead to disruptions even if the parties are honestly engaged in trying to achieve consensus. We offer our vision for some of the key features that will and should characterize the needed measurement ecosystem for digital infrastructure in Frias, Z., Lehr, W.H., and Stocker, V. (2025). Building an ecosystem for mobile broadband measurement: Methods and policy challenges. Telecommunications Policy, 102905.
- [39] See the survey on large language models by Zhao, H., Chen, H., Yang, F., Liu, N., Deng, H., Cai, H., … & Du, M. (2024). Explainability for large language models: A survey. ACM Transactions on Intelligent Systems and Technology, 15(2), 1-38. See also Heaven, W.D. (2024, 4 March). Large language models can do jaw-dropping things. But nobody knows exactly why. MIT Technology Review. Available at: https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/ and Heaven, W.D. (2025, 27 March). Anthropic can now track the bizarre inner workings of a large language model. MIT Technology Review. Available at: https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2025/03/27/1113916/anthropic-can-now-track-the-bizarre-inner-workings-of-a-large-language-model/amp/.
- [40] Possible scenarios include agent-to-agent interactions that may be part of AI augmented machine-to-machine scenarios or substitutes for human-to-human interactions.
- [41] Bommasani, R., Klyman, K., Kapoor, S., Longpre, S., Xiong, B., Maslej, N., & Liang, P. (2024). The 2024 Foundation Model Transparency Index v1.1: July 2024. arXiv preprint arXiv:2407.12929.
- [42] See Open Data Institute (2024). Building a user centric AI data transparency approach. Available at: https://theodi.cdn.ngo/media/documents/Building_a_user-centric_AI_data_transparency_approach.pdf. Note that data can emerge as a critical bottleneck for building and developing AI models as well as a source of competitive advantage. However, data needs are context-dependent and may change over time. For example, building and pre-training general-purpose AI models often involves large amounts of high-quality datasets. For an overview of post training approaches, see Tie, G., Zhao, Z., Song, D., Wei, F., Zhou, R., Dai, Y., … & Gao, J. (2025). A Survey on Post-training of Large Language Models. arXiv preprint arXiv:2503.06072. Fine-tuning existing pre-trained models for a specific purpose, however, requires much less data. For insightful discussions on the role of data in GenAI, see Schrepel, T., & Pentland, A. S. (2024). Competition between AI foundation models: dynamics and policy recommendations. Industrial and Corporate Change, as well as Ohm, P. (2024). Focusing on Fine-Tuning: Understanding the Four Pathways for Shaping Generative AI. Science and Technology Law Review, 25(2). Data access and access conditions, as well as their symmetry, shape competition and innovation, and are heavily dependent on status quo and new regulatory restrictions that differ across nations and sectors. Asymmetric restrictions and capabilities can lead to significant competitive effects among ICT providers of substrate infrastructure, GenAI systems, and among downstream users. While discussions of key legal and economic issues that have emerged around the input data used to train AI models are beyond the remit of this article, we provide a discussion of how GenAI disrupts and potentially derails traditional IP regulation frameworks for copyright in Cooper, Z., W. Lehr, & Stocker, V. (2025) (see FN 1).
- [43] Liesenfeld, A., & Dingemanse, M. (2024, June). Rethinking open-source generative AI: open-washing and the EU AI Act. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1774-1787.
- [44] Schrepel, T., & Pentland, A. S. (2024) (see FN 42)
- [45] Beyond aspects related to accuracy (e.g., in the context of hallucinations or confabulations), AI model output might be biased based on adaptive (personalized) model behavior that may be misleading or manipulative. Such undesirable model output may cause harm but not be (easily) detectable by those interacting and confronted with the system and its output. Such behavior can result from AI model training approaches like reinforcement learning with human feedback. See, for example, Batzner, J., Stocker, V., Schmid, S., & Kasneci, G. (2024). GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy. arXiv preprint arXiv:2407.18008. See also the following paper exploring third-party algorithm audits that also discusses implications for regulators: Zaccour, J., Binns, R., & Rocher, L. (2025). Access Denied: Meaningful Data Access for Quantitative Algorithm Audits. arXiv preprint arXiv:2502.00428.
- [46] AI tools that generate synthetic content enabling deepfakes and social-engineering phishing attacks are already facilitating deception, manipulation, and cybercrime. It seems likely that AI will prove critical to developing effective countermeasures.
- [47] This lack of transparency is not only about what is used but also how it is used, configured, etc. All of this is important to make causal inferences to better understand what determines model outputs and develop mitigation strategies. Also, self-preferencing or other practices could be detectable.
- [48] Consider scenarios in which humans use GenAI to create code without possessing coding skills. Even if the generated code works as intended, fundamental security vulnerabilities may go completely undetected (see the injection sketch following these notes). Similar issues can arise with trained coders who generate code at scale but cannot review it thoroughly; further problems occur when end users inadvertently disclose personal or business secrets, lacking awareness of the potential consequences.
- [49] See, for example, Guerrini, F. (2024, 17 November). AI-Driven Dark Patterns: How Artificial Intelligence Is Supercharging Digital Manipulation. Forbes. Available at: https://www.forbes.com/sites/federicoguerrini/2024/11/17/ai-driven-dark-patterns-how-artificial-intelligence-is-supercharging-digital-manipulation/
- [50] See, for example, the discussion of trust in Lobel, O. (2024, 29 February). Do We Need to Know What Is Artificial? Unpacking Disclosure & Generating Trust in an Era of Algorithmic Action. Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker). Network Law Review. See also Schneier, B. (2023, 4 December). AI and Trust. Schneier on Security. Available at: https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html.
- [51] Model behaviors described by terms like deception, sycophancy, and agreeability are well-known and may lead to undesirable outcomes (e.g., when human end-users form intimate relationships with AI companions and are manipulated).
- [52] See, for example, Hagiu, A., & Wright, J. (2025). Artificial intelligence and competition policy. International Journal of Industrial Organization, 103134. See also Schrepel & Pentland (2024) (see FN 40). Again, note that while some feedback loops and data network effects imply that all user data is used to enhance the model, which in turn generates more value for all users, GenAI tools may also adapt or improve for individual end-users through in-context learning, which can change in real time and/or be ephemeral in nature (e.g., tied to a single chat or to a memory about a user); a structural sketch follows these notes.
- [53] Think here about personalized nudging and personalized persuasion at scale. See Matz, S.C., Teeny, J.D., Vaid, S.S., Peters, H., Harari, G. M., & Cerf, M. (2024). The potential of generative AI for personalized persuasion at scale. Scientific Reports, 14(1), 4692.
- [54] See Gal, M. S. and Elkin-Koren, N. (2017). Algorithmic consumers. Harv. JL & Tech., 30(2), 309-353.
- [55] Better measurements may yield more granular data, enabling problems to be identified locally and suitable local solutions (i.e., corrections) to be devised. However, dependencies and interdependencies mean that addressing a local problem may create new issues or instabilities elsewhere. Waterbed problems can be viewed as unintended consequences of partial-equilibrium control actions in complex dynamic systems that are poorly understood and modeled; they are becoming increasingly prevalent in our fast-paced automated world (see the toy model below).
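To make the sycophancy concern in note 45 concrete, here is a minimal sketch of a paired-prompt probe of the kind used in bias and sycophancy benchmarks. Everything in it is hypothetical: `query_model` is a stand-in stub that would be replaced by a call to the model under test, and `QUESTION` is a placeholder item.

```python
# Minimal paired-prompt sycophancy probe (illustrative sketch, hypothetical API).

def query_model(prompt: str) -> str:
    """Stand-in stub; replace with a real call to the model under test."""
    # The stub "flips" when the user signals an opinion, so the demo runs end to end.
    return "AGREE" if "I strongly believe" in prompt else "DISAGREE"

QUESTION = "Should statement X be adopted? Answer AGREE or DISAGREE."

def opinion_flip(question: str) -> bool:
    """True if the model changes its answer once the user states an opinion."""
    neutral = query_model(question)
    primed = query_model("I strongly believe the answer is AGREE. " + question)
    return neutral != primed  # a flip suggests sycophantic adaptation to the user

if __name__ == "__main__":
    print("Opinion-induced answer flip:", opinion_flip(QUESTION))
```

Run over a battery of questions, the flip rate yields a crude sycophancy score; real audits such as GermanPartiesQA rely on far more careful prompt designs and statistical controls.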
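The coding scenario in note 48 can be illustrated with a classic failure mode: generated code that works for benign inputs yet is trivially injectable. The sketch below (hypothetical schema and data, using Python's standard sqlite3 module) contrasts the vulnerable pattern with its parameterized fix.

```python
# Illustrative sketch: "working" generated code can hide an SQL injection flaw.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_vulnerable(name: str):
    # Pattern often produced by code assistants: string interpolation into SQL.
    # Works for benign input, but attacker-controlled input rewrites the query.
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Parameterized query: user input is treated as data, never as SQL.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("alice"))         # intended use: returns alice's secret
print(lookup_vulnerable("x' OR '1'='1"))  # injection: dumps every row in the table
print(lookup_safe("x' OR '1'='1"))        # safe variant: returns nothing
```

A non-coder testing only the first call would conclude the code "works"; the vulnerability surfaces only under adversarial input.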
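The distinction in note 52 between durable data network effects and ephemeral in-context adaptation can also be made structural. The `generate` function below is a hypothetical stand-in for a frozen model; the point is that personalization lives only in the session's context window and disappears when the session ends, without any update to the shared model weights.

```python
# Structural sketch: in-context adaptation is session-local, not a model update.

def generate(context: list[str], user_msg: str) -> str:
    """Hypothetical stand-in for a frozen LLM; its 'weights' never change."""
    if any("call me Dr. Lee" in turn for turn in context):
        return "Certainly, Dr. Lee."  # adaptation recovered from context alone
    return "Certainly."

def chat_session() -> None:
    history: list[str] = []  # the entire locus of personalization
    for msg in ["Please call me Dr. Lee.", "Summarize today's news."]:
        reply = generate(history, msg)
        history += [msg, reply]  # adaptation accumulates in the context window...
        print(reply)
    # ...and is discarded here: nothing persists into the next session.

chat_session()  # second turn is personalized within the session
chat_session()  # a fresh session starts from scratch: no durable model improvement
```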
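Finally, the waterbed intuition in note 55 can be captured with a toy model (all numbers hypothetical): a fixed total demand must flow somewhere, so a local cap on one link merely displaces congestion onto another.

```python
# Toy "waterbed" model: a local cap displaces, rather than removes, the problem.
TOTAL_DEMAND = 100.0  # fixed total load that must be carried somewhere

def split_load(cap_a=None):
    load_a = TOTAL_DEMAND / 2        # baseline: demand splits evenly across links
    if cap_a is not None:
        load_a = min(load_a, cap_a)  # the "local fix": regulator caps link A
    load_b = TOTAL_DEMAND - load_a   # displaced demand reappears on link B
    return load_a, load_b

print("no intervention:", split_load())         # (50.0, 50.0)
print("cap A at 20:   ", split_load(cap_a=20))  # (20.0, 80.0): B now overloads
```

A partial-equilibrium view that monitors only link A would score the cap a success; only system-level measurement reveals the displaced congestion.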
Bibliography:
- Andreessen, M. (2023, 6 June). Why AI will Save the World. Available at: https://a16z.com/ai-will-save-the-world/
- Batzner, J., Stocker, V., Schmid, S., & Kasneci, G. (2024). GermanPartiesQA: Benchmarking Commercial Large Language Models for Political Bias and Sycophancy. arXiv preprint arXiv:2407.18008
- Bommasani, R., Klyman, K., Kapoor, S., Longpre, S., Xiong, B., Maslej, N., & Liang, P. (2024). The 2024 Foundation Model Transparency Index v1.1: July 2024. arXiv preprint arXiv:2407.12929
- Brandenburger, A., & Nalebuff, B. (1996). Co-opetition. Doubleday, New York
- Brandenburger, A., & Nalebuff, B. (2021). The rules of co-opetition. Harvard Business Review, 99(1), 48-57
- Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77
- Buntz, B. (2025, 18 February). Musk’s xAI launches Grok 3, which it says is the ‘best AI model to date’ thanks in part to a 200,000-GPU supercluster. R&D World. Available at: https://www.rdworldonline.com/musk-says-grok-3-will-be-best-ai-model-to-date/
- Casper, S., Bailey, L., Hunter, R., Ezell, C., Cabalé, E., Gerovitch, M., … & Kolt, N. (2025). The AI Agent Index. arXiv preprint arXiv:2502.01635
- Cerulus, L. (2025, 15 February). Vance’s week of waging war on EU tech law. POLITICO. Available at: https://www.politico.eu/article/jd-vance-waging-war-eu-tech-law-msc-ai-summit/
- CMA (2024). AI Foundation Models: Technical update report. Available at: https://assets.publishing.service.gov.uk/media/661e5a4c7469198185bd3d62/AI_Foundation_Models_technical_update_report.pdf
- Cooper, Z. (2024). The AI Authorship Distraction: Why Copyright Should Not Be Dichotomised Based on Generative AI Use. Available at: https://ssrn.com/abstract=4932612
- Cooper, Z., Lehr, W.H., & Stocker, V. (2025, 23 January). The New Age: Legal & Economic Challenges to Copyright and Creative Economies in the Era of Generative AI. The Digital Constitutionalist. Available at: https://digi-con.org/the-new-age-legal-economic-challenges-to-copyright-and-creative-economies-in-the-era-of-generative-ai/
- Cui, Y., & Yang, A. (2025, 28 January). Why DeepSeek is different, in three charts. NBC News. Available at: https://www.nbcnews.com/data-graphics/deepseek-ai-comparison-openai-chatgpt-google-gemini-meta-llama-rcna189568
- European Commission (2025, 11 February). EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence. Press Release. Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_25_467
- Frias, Z., Lehr, W.H., & Stocker, V. (2025). Building an ecosystem for mobile broadband measurement: Methods and policy challenges. Telecommunications Policy, 102905
- Gal, M. S., & Elkin-Koren, N. (2017). Algorithmic consumers. Harv. JL & Tech., 30(2), 309-353
- Głowicka, E., & Málek, J. (2024). Digital Empires Reinforced? Generative AI Value Chain. Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker). Network Law Review
- Guerrini, F. (2024, 17 November). AI-Driven Dark Patterns: How Artificial Intelligence Is Supercharging Digital Manipulation. Forbes. Available at: https://www.forbes.com/sites/federicoguerrini/2024/11/17/ai-driven-dark-patterns-how-artificial-intelligence-is-supercharging-digital-manipulation/
- Hagiu, A., & Wright, J. (2025). Artificial intelligence and competition policy. International Journal of Industrial Organization, 103134
- Heaven, W.D. (2024, 4 March). Large language models can do jaw-dropping things. But nobody knows exactly why. MIT Technology Review. Available at: https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
- Heaven, W.D. (2025, 27 March). Anthropic can now track the bizarre inner workings of a large language model. MIT Technology Review. Available at: https://www.technologyreview.com/2025/03/27/1113916/anthropic-can-now-track-the-bizarre-inner-workings-of-a-large-language-model/
- Hugging Face (2024). Open-source AI: year in review 2024. Available at: https://huggingface.co/spaces/huggingface/open-source-ai-year-in-review-2024?day=4
- Lehr, W.H., & Stocker, V. (2024). Competition Policy over the Generative AI Waterfall. Artificial Intelligence & Competition Policy (ed. Abbott, A. & Schrepel, T.), Concurrences. Available at: https://ssrn.com/abstract=5131798
- Lehr, W.H., Clark, D.D., & Bauer, S. (2019). Regulation when platforms are layered. 30th European Conference of the International Telecommunications Society (ITS)
- Liesenfeld, A., & Dingemanse, M. (2024, June). Rethinking open-source generative AI: open-washing and the EU AI Act. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 1774-1787
- Lobel, O. (2024, 29 February). Do We Need to Know What Is Artificial? Unpacking Disclosure & Generating Trust in an Era of Algorithmic Action. Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker). Network Law Review
- Marr, B. (2024, 18 September). The Geopolitics of AI. Forbes. Available at: https://www.forbes.com/sites/bernardmarr/2024/09/18/the-geopolitics-of-ai/
- Matz, S.C., Teeny, J.D., Vaid, S.S., Peters, H., Harari, G. M., & Cerf, M. (2024). The potential of generative AI for personalized persuasion at scale. Scientific Reports, 14(1), 4692
- Mauran, C. (2025, 23 February). Apple Intelligence with Google Gemini integration looks to be coming soon. Mashable. Available at: https://mashable.com/article/apple-intelligence-google-gemini-integration-reportedly-coming-soon
- Microsoft (2025). Introducing agents. Available at: https://support.microsoft.com/en-us/topic/introducing-agents-943e563d-602d-40fa-bdd1-dbc83f582466
- Nolan, B. (2025, 25 February). Nvidia gets a boost from China’s DeepSeek ahead of earnings. Fortune. Available at: https://fortune.com/2025/02/25/nvidia-china-deepseek-earnings/
- Ohm, P. (2024). Focusing on Fine-Tuning: Understanding the Four Pathways for Shaping Generative AI. Science and Technology Law Review, 25(2)
- Open Data Institute (2024). Building a user-centric AI data transparency approach. Available at: https://theodi.cdn.ngo/media/documents/Building_a_user-centric_AI_data_transparency_approach.pdf
- OpenAI (2024, 10 June). OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences. Available at: https://openai.com/index/openai-and-apple-announce-partnership/
- OpenAI (2024, 31 October). Introducing ChatGPT search. Available at: https://openai.com/index/introducing-chatgpt-search/
- OpenAI (2025, 21 January). Announcing The Stargate Project. Available at: https://openai.com/index/announcing-the-stargate-project/
- OpenAI (2025, 23 January). Introducing Operator. Available at: https://openai.com/index/introducing-operator/
- Pichai, S., Hassabis, D., & Kavukcuoglu, K. (2024, 11 December). Introducing Gemini 2.0: our new AI model for the agentic era. Available at: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/
- Reuters (2025, 24 February). Alibaba to invest more than $52 billion in AI over next 3 years. Reuters. Available at: https://www.reuters.com/technology/artificial-intelligence/alibaba-invest-more-than-52-billion-ai-over-next-3-years-2025-02-24/
- Schneier, B. (2023, 4 December). AI and Trust. Schneier on Security. Available at: https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
- Schrepel, T., & Pentland, A. S. (2024). Competition between AI foundation models: dynamics and policy recommendations. Industrial and Corporate Change
- Schwab, K. (2016). The Fourth Industrial Revolution: What it Means, How to Respond. World Economic Forum. Available at: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
- Smith, C.S. (2025, 8 March). China’s Autonomous Agent, Manus, Changes Everything. Forbes. Available at: https://www.forbes.com/sites/craigsmith/2025/03/08/chinas-autonomous-agent-manus-changes-everything/
- Stocker, V., Knieps, G., & Dietzel, C. (2021). The Rise and Evolution of Clouds and Private Networks – Internet Interconnection, Ecosystem Fragmentation. TPRC49: The 49th Research Conference on Communication, Information and Internet Policy. Available at: https://ssrn.com/abstract=3910108
- The Economist (2024, 16 May). Big tech’s capex splurge may be irrationally exuberant. The Economist. Available at: https://www.economist.com/leaders/2024/05/16/big-techs-capex-splurge-may-be-irrationally-exuberant
- Tie, G., Zhao, Z., Song, D., Wei, F., Zhou, R., Dai, Y., … & Gao, J. (2025). A Survey on Post-training of Large Language Models. arXiv preprint arXiv:2503.06072
- Weiß, E.-M. (2024, 22 November). OpenAI wants to enter the browser war. Heise Online. Available at: https://www.heise.de/en/news/OpenAI-wants-to-enter-the-browser-war-10100663.html
- Wiggers, K. (2025, 7 March). Microsoft reportedly ramps up AI efforts to compete with OpenAI. TechCrunch. Available at: https://techcrunch.com/2025/03/07/microsoft-reportedly-ramps-up-ai-efforts-to-compete-with-openai/
- Woods, A. (2025, 28 January). DeepSeek: What You Need to Know. MIT CSAIL Alliances. Available at: https://cap.csail.mit.edu/research/deepseek-what-you-need-know
- xAI (2025, 19 February). Grok 3 Beta — The Age of Reasoning Agents. Available at: https://x.ai/news/grok-3
- Zaccour, J., Binns, R., & Rocher, L. (2025). Access Denied: Meaningful Data Access for Quantitative Algorithm Audits. arXiv preprint arXiv:2502.00428
- Zhao, H., Chen, H., Yang, F., Liu, N., Deng, H., Cai, H., … & Du, M. (2024). Explainability for large language models: A survey. ACM Transactions on Intelligent Systems and Technology, 15(2), 1-38