This article examines the problem of statutory obsolescence in the regulation of rapidly evolving technologies, with a focus on GDPR and generative AI. It shows how core GDPR provisions on lawful processing, accuracy, and erasure prove difficult—if not impossible—to apply to AI systems, generating legal uncertainty and divergent national enforcement. The analysis highlights how comprehensive,...
The emerging regulatory landscape in the field of AI will substantially influence the construction of collective memory and play a critical role in shaping our “future’s past”. This contribution takes a close look at the traits of collective memory, maps its potential frictions with the emerging AI field, and demonstrates how multiple AI governance schemes—from...
As the speed and controllability of generative AI increase to the point where any media can be changed in real time at the whim of the consumer, content is becoming interactive – each image its own canvas, each song its own instrument. Counter-intentionally, copyright appears to drive this new form of production towards...
Artificial intelligence is not simply a hard regulatory problem, but a fundamentally different kind of object – a hyperobject. Like global warming or the internet, AI operates at scales of space, time, and complexity that exceed the conceptual and institutional boundaries of conventional regulatory systems. Traditional law and economics frameworks that are designed for bounded,...
This essay examines how the EU’s Digital Markets Act unintentionally reshapes the foundations of generative AI. By barring gatekeepers from relying on “legitimate interest” as a legal basis for processing personal data, the DMA creates a fragmented two-tier regime: smaller firms retain flexibility, while Europe’s largest AI deployers face structural limits on scale. The piece argues...
The Network Law Review is pleased to present a special issue entitled “The Law & Technology & Economics of AI.” This issue brings together multiple disciplines around a central question: What kind of governance does AI demand? A workshop with all the contributors took place on May 22–23, 2025, in Hong Kong, hosted by Adrian...
AI systems now perform core compliance tasks once reserved for humans. Prof. Ohm argues that this will drive the marginal cost of regulatory compliance toward zero. The claim is grounded in the nature of compliance work, a walkthrough of automatable duties under the EU AI Act, and a Hamburg court’s embrace of LLM-enabled “machine-readable” opt-outs....
Regulators across the globe are rushing to ramp up their regulatory frameworks to rein in Big Tech and AI. Brussels has seized first-mover status and already rolled out a dense corpus of digital acts. The move has stirred controversy. While firms and users may face higher costs, doubts about regulatory effectiveness linger....
This paper explores patterns from the Stanford Computational Antitrust project’s fourth annual report, which includes contributions from 25 agencies worldwide. The findings reveal that most agencies are converging around similar technological solutions, particularly large language models and machine learning tools, and face common challenges related to explainability, data security, and organizational adaptation. The analysis suggests...
The European Union’s AI Act is one of the first attempts to regulate artificial intelligence technologies. Given that previous European digital regulations, such as the GDPR, have influenced laws all over the world, can we expect to see the AI Act become a global standard? This article argues that there are good reasons to believe...
Many of Europe’s economies are hampered by waning innovation, which is in part attributable to the European financial system’s aversion to funding innovative enterprises and initiatives. Specifically, Europe’s innovation finance ecosystem lacks scale, plurality, and risk appetite. These problems could be addressed by new and creative approaches and technologies for financing dynamism...
The year 2024 marked a pivotal moment as antitrust agencies accelerated their efforts in generative AI (see our database). AI partnerships have drawn significant attention in the space, with the CMA, DOJ, European Commission, and other agencies launching investigations and issuing their first decisions. These agencies have been addressing AI partnerships on a case-by-case basis—a...
Dear readers, the Network Law Review is delighted to present you with this month’s guest article by Richard N. Langlois, Professor of Economics at the University of Connecticut. * This paper emerged from a keynote talk at the Third Annual Mercatus Center Antitrust Forum on January 18, 2024, at George Mason University, in Arlington, Virginia. It was...
Context: In Aldous Huxley’s Brave New World, the World State is governed by ten men known as World Controllers. They oversee a society where stability, order, and happiness are preserved at the expense of individuality, freedom, and genuine human emotion. The following piece is satirical and should be interpreted accordingly. It has been written in...
This short article serves as an introduction to the working paper by Thibault Schrepel and Jason Potts entitled “Measuring the Openness of AI Foundation Models: Competition and Policy Implications” *** Antitrust agencies are showing a strong interest in AI foundation models and Generative AI (“GenAI”) applications. They want to ensure that the AI ecosystem remains...