Toward Compliance Zero: AI and the Vanishing Costs of Regulatory Compliance

The Network Law Review is pleased to present a special issue entitled “The Law & Technology & Economics of AI.” This issue brings together multiple disciplines around a central question: What kind of governance does AI demand? A workshop with all the contributors took place on May 22–23, 2025, in Hong Kong, hosted by Adrian Kuenzler (HKU Law School), Thibault Schrepel (Vrije Universiteit Amsterdam), and Volker Stocker (Weizenbaum Institute). They also serve as the editors.


1. Toward Compliance Zero

Thanks to recent advances in artificial intelligence, we can now automate tasks that we have never before been able to automate.[2] Work that was expensive has become cheap;[3] tasks that once required a lot of time can now be accomplished nearly instantaneously.[4] We can often now bypass the messy bottleneck of human labor.[5] These developments have placed a bullseye on the back of knowledge and creative workers across industries, making us fear for the future prospects of lawyers, computer programmers, artists, authors, and educators.[6]

Another kind of worker hasn’t often been mentioned in these prognostications, yet there is no reason to omit them from the list of the disrupted: compliance experts.[7] Any person who makes a living by helping companies, government agencies, and individuals comply with the burdens of government regulation will also soon be replaced by new forms of AI that can automate their work. These range from highly paid workers at name-brand firms like PricewaterhouseCoopers, McKinsey, or Covington, to workaday lawyers and consultants toiling in relative obscurity.[8]

I am not indulging in fantasies of Artificial General Intelligence (AGI). My prediction is based on an understanding of three factors: the work required for regulatory compliance, the AI advances that have been made to date, and modest, grounded predictions of what AI will be able to do in the near term. Compliance entails precisely the kind of work that the latest advances in AI have disrupted. It involves, among other things, interpreting complex regulatory texts, staying apprised of changes to the rules, and performing a series of knowledge-bound tasks such as assessing risk, summarizing documents, writing reports, and making disclosures. AI can assist with or single-handedly accomplish all of these tasks.[9]

If modest predictions of current and near-future AI capability come to pass, then AI automation will drive the cost of regulatory compliance down so close to zero that, going forward, we should consider most regulations to impose no compliance burden whatsoever.

This development will be a boon for investment and innovation. The era of crippling government compliance costs is over.

The automation of compliance will give rise to low-cost consultancies that use AI to specialize in helping companies comply with a broad set of government regulations. Cheap and ready access to these specialists will mean that small businesses and startups, not only deep-pocketed giants, will enjoy the benefits of near-zero-cost compliance.

This will also be a boon for those promulgating, defending, and supporting new and aggressive forms of regulation. It will remove one of the most significant objections that can be lodged against a regulation: the high cost of compliance. For as long as we have had regulations, critics have assailed them for the waste and inefficiency they create.[10] Going forward, we can dismiss arguments like these as false and outdated, relics of a bygone era. Regulatory compliance is now basically free.

2. The Road to Compliance Zero

In this short essay, I cannot offer a complete proof that we are heading for compliance zero. Instead, I present significant supporting evidence, confident that future work will more fully prove the thesis. I offer three forms of evidence. First, the compliance zero thesis is a simple extension of broader contemporary claims that have been made about AI’s looming impact on knowledge work. Second, I survey the compliance obligations of the EU AI Act, demonstrating how they require mostly the kind of work that can be automated by large language models and agentic AI. Third, I point to a recent judicial opinion from a court in Hamburg, Germany, which provides one concrete example of how large language models can now automate a once burdensome compliance task, namely recognizing the “no scraping allowed” opt-out wishes of website owners.

2.1. AI and Knowledge Work

It is very important to clarify what I mean by “compliance burden.” I am talking about the effort and work required to comply with a law or regulation. I mean what might also be called paperwork, office work, or due diligence. But this goes beyond rote or mechanistic paperwork—I also mean sophisticated and complex steps that have, until now, required human interpretation, judgment, coordination, and organization.

This definition of compliance burden excludes the changes in behavior, system design, and organizational design mandated, encouraged, or incentivized by a law or regulation. In other words, it does not cover a law or rule’s direct regulatory mechanisms. For example, a ban on facial recognition technology (FRT) may impose an expensive burden on police departments to dismantle preexisting FRT systems and to find alternative means of conducting investigations, but these are not the kinds of paperwork burdens I mean.

Begin with two propositions: (1) most compliance work counts as legal work; and (2) compliance tasks tend to be on the simpler, more rote, less creative end of the spectrum of types of tasks that lawyers perform. Think simple contract review versus complex strategic advising.

Consider evidence to support these propositions. Several U.S.-based law schools now offer degree programs for compliance.[11] Compared to a full JD program, these tend to be shorter (one year versus three), are often held exclusively online, require courses drawn from a limited subset of the law school curriculum, and are targeted at students who do not need to pass the bar or practice law.[12]

If these propositions are mostly true, meaning compliance work tends to be a simplified subset of legal work, then every argument that has been made in the past few years about how AI will begin to automate legal work will have even greater force when it comes to compliance work.

The emerging literature debating and measuring whether AI will replace lawyers is already vast and rapidly expanding.[13] Controlled studies suggest that large language models can pass the bar exam[14] and ace some law school exams.[15] Researchers have conducted randomized controlled experiments in which law students completed typical legal work both with and without AI tools; these experiments suggest that the tools speed up legal tasks, make legal workers more productive, and produce higher-quality work on most tasks.[16]

The conventional wisdom is that AI will replace humans for some legal work, although many hold out hope that work requiring judgment and subtlety and creativity cannot be automated, at least not given the state of the art of AI.[17] Compliance work tends to be on the automatable end of this spectrum.

Others have noted the likelihood that AI can and will be used for compliance tasks.[18] Many vendors pitch their products and services specifically for regulatory compliance.[19] Some use the label RegTech, particularly for compliance in the FinTech industry.[20] Recent reports suggest that Meta will begin to automate “up to 90% of all risk assessments.”[21]

2.2. Compliance Under the EU AI Act

To make this argument more concrete, consider a recently enacted law that has been decried as cripplingly burdensome:[22] the European Union’s AI Act.[23] To be clear, my claim extends beyond rules that regulate AI, and I could have illustrated the point with many other types of law, such as data protection, consumer protection, securities, or tax law. One reason to highlight an AI regulation is that the companies covered by these rules will also be the companies best positioned to harness the power of AI to reduce their regulatory burden.

Every single supposedly burdensome compliance obligation or requirement introduced by the AI Act can be partially or entirely automated given recent advances in frontier models.

To start, the EU AI Act is a long and complex text. It is as long as a typical novel.[24] But a novel it is not; the language is dense and jargon-laden. Reading it once through would take a typical lawyer hours. Comprehending and organizing the Act’s many interlocking requirements would require weeks of study by teams of lawyers. Modern LLMs excel at summarizing long texts. The entire EU AI Act can fit within most cutting-edge LLMs’ “context windows,” allowing a model to produce a passable and accurate summary of the Act’s obligations in seconds.[25] To be fair, some provisions of the AI Act have been criticized as vague, contradictory, or ambiguous,[26] problems that an LLM cannot alone remedy.
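As a rough illustration of how little effort this now takes, consider a minimal sketch in Python. It assumes the OpenAI client library, a local plain-text copy of the Act, and a particular long-context model; the file name and model choice are placeholders, and any frontier model with a comparable context window would serve:

```python
# A minimal sketch: ask a long-context LLM to summarize the AI Act's
# obligations. The file name and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

act_text = open("eu_ai_act.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; any model whose context window fits the Act
    messages=[
        {"role": "system",
         "content": "You are a regulatory compliance analyst."},
        {"role": "user",
         "content": "Summarize every compliance obligation the following "
                    "regulation imposes on providers of high-risk AI "
                    "systems, citing articles:\n\n" + act_text},
    ],
)

print(response.choices[0].message.content)
```

A task that once consumed weeks of team effort reduces, at least in first draft, to a single API call.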

Moving beyond simply understanding the rules, the central focus of the AI Act is on technical documentation, record keeping, and transparency.[27] For systems deemed “high risk,”[28] the Act obligates companies to track dozens of categories of information, such as the “intended purpose” of the system, “the forms in which the AI system is placed on the market or put into service,” “the methods and steps performed for the development of the AI system,” “the design specification of the system,” and “the description of the system architecture.”[29] Large language models excel at analyzing texts, internally representing their subtle meaning, and summarizing them in human-understandable prose.[30]

The AI Act obligates companies to analyze and summarize large datasets, such as training datasets.[31] Once again, for high-risk systems, it requires “data governance and management practices” concerning “design choices,” “data collection processes,” “an assessment of the availability, quantity, and suitability of the datasets that are needed,” and a number of other analyses needed to assess bias in the system.[32] LLMs are also quite capable of automating the analysis of large amounts of data.[33]

Many obligations in the AI Act require the production of long and detailed texts: assessments, risk summaries, compliance summaries, and the like.[34] Providers of high-risk systems must generate “technical documentation” to permit the government to “assess the compliance” of their system under the Act.[35] They must also produce “instructions for use” for deployers, conveying “concise, complete, correct, and clear information that is relevant, accessible, and comprehensible to deployers.”[36] LLMs are famously capable of producing coherent text quickly.[37]

Some AI Act provisions require the generation of images and videos. For example, the AI Act obligates providers and deployers of AI systems “to ensure, to their best extent, a sufficient level of ‘AI literacy’ of their staff and other persons” dealing with their AI systems.[38] One way to comply is by delivering tailored training and education materials for employees.[39] A European Commission FAQ opines that this might require more than simple instructions for use, in favor of “trainings and guidance.”[40] Many companies have produced video tutorials for their staff to comply with this requirement.[41] Image and video generation models are increasingly capable of generating visuals based on user inputs and prompts.[42]

The AI Act will require the evaluation and creation of computer source code and configuration code. For example, provisions of the law require cybersecurity assessments, remediation, and countermeasures.[43] This obligates companies to put in place systems “to prevent, detect, respond to, resolve and control for attacks” in their systems.[44] LLMs are quite proficient at interpreting, modifying, and writing this kind of technical code.[45] Of course, cybersecurity requires far more than detection, and it is one area in which LLM assistance will only be partial; we will still need human expertise to fully comply with parts of the AI Act, at least for now.

Other advances in AI will help stitch together these disparate tasks. So-called reasoning models excel at breaking complex tasks into discrete pieces.[46] Mixture-of-experts architectures route different parts of a problem to specialized subnetworks, helping find better solutions.[47] Increasingly advanced tools orchestrate multiple AI agents, each performing a different task, combining their outputs into a pipeline of work.[48]
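To make the orchestration point concrete, here is a deliberately simplified sketch of such a pipeline, in which each “agent” is just a role-prompted LLM call whose output feeds the next stage. The helper function, role names, and prompts are hypothetical illustrations, not any vendor’s actual product:

```python
# A deliberately simplified agentic pipeline: each "agent" is a
# role-prompted LLM call; real frameworks add tool use, memory, and
# verification. All role names and prompts here are hypothetical.
from openai import OpenAI

client = OpenAI()

def run_agent(role: str, task: str, context: str = "") -> str:
    """One pipeline stage: an LLM call with a role-specific prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed; any capable model would do
        messages=[
            {"role": "system", "content": f"You are a {role}."},
            {"role": "user", "content": f"{task}\n\n{context}"},
        ],
    )
    return response.choices[0].message.content

# Each stage's output becomes the next stage's input, stitching
# discrete compliance tasks into a single workflow.
duties = run_agent("regulatory analyst",
                   "List the documentation duties the EU AI Act imposes "
                   "on providers of high-risk AI systems.")
gaps = run_agent("compliance auditor",
                 "Flag any gaps between these duties and our system "
                 "description.", context=duties)
draft = run_agent("technical writer",
                  "Draft technical documentation that addresses these "
                  "gaps.", context=gaps)
print(draft)
```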

2.3. Kneschke v. LAION

In addition to this back-of-the-envelope analysis of the EU AI Act, consider a more discrete example drawn from real-world legal practice. A trial judge in Hamburg, Germany, recently demonstrated how courts will interpret the law to recognize that AI models can automate legal compliance in ways that would have felt like science fiction only a few years ago. Understanding the analysis requires a short primer on European Union and German copyright law.

The case involved the activity of a nonprofit organization called LAION.[49] LAION built a dataset containing the URL locations of billions of images by analyzing data crawled by another group, called Common Crawl.[50] Many of the images associated with LAION’s URLs were protected under German copyright law. LAION makes this list of URLs (but not the images themselves) freely available as an internet download,[51] and many image generation and multimodal models, such as Stable Diffusion, have used the LAION list to find and download millions of images.[52] These images have served as crucial training data for these models,[53] and these downloads have been characterized as massive infringements of copyright.[54]

A group of the owners of some of these copyrights sued LAION in German court for infringement.[55] Focus only on one narrow issue litigated in this case: a caveat to an exception to an exception to the ordinary prohibition against infringement.

The exception is the Text and Data Mining (TDM) exception.[56] It is a defense to copyright infringement to scrape images for the purpose of text and data mining.[57] LAION asserted that because its list is primarily useful for TDM, it can claim this exception.[58]

The TDM exception has a significant exception to the exception, at least for TDM that cannot qualify as “scientific research purposes.”[59] That exception to the exception applies when the copyright owner invokes a “reservation of rights” alongside the copyrighted content.[60] In other words, it creates an “opt out” opportunity for copyright owners to the TDM exception.[61]

Finally, the caveat: the copyright owner must convey this reservation of rights to the web scraper in a “machine readable” way.[62] This phrase had already received considerable expert commentary and analysis.[63] Much of that commentary focused on the so-called robots.txt protocol, a widely but informally recognized internet standard.[64] Under the protocol, the owner of a webpage may place a file called robots.txt on its web server and, within that file, specify which web-scraping bots are welcome and which are not.[65] Well-behaved (i.e., standard-compliant) scraping bots will respect the wishes expressed in a robots.txt file, as in the example below. The Internet Engineering Task Force (IETF), the standards body that eventually formalized the robots.txt protocol, has actively been debating whether to modify it to better serve the TDM opt-out purpose.[66]
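For illustration, a robots.txt file of the kind the standard contemplates might look like this; the bot name GPTBot (OpenAI’s crawler) is just one example of a crawler a site owner might single out:

```
# Bar one named AI-training crawler from the entire site...
User-agent: GPTBot
Disallow: /

# ...while allowing all other bots everywhere except /private/.
User-agent: *
Disallow: /private/
```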

Regardless, many of the images in the LAION dataset were not protected by a machine-readable robots.txt entry.[67] In the Hamburg litigation, the copyright owners argued that they had nevertheless expressed their reservations of rights through “terms of service” documents that prohibited scraping for certain purposes, including for training machine learning models.[68] Although terms of service are written in human language (in many cases, English) and ostensibly published for human comprehension, the plaintiffs argued that these documents were sufficiently machine readable to qualify as a TDM reservation of rights under German copyright law.[69]

The Hamburg trial court agreed, expressing the kind of logic that is at the core of this essay.[70] “Machine readable,” within the meaning of the act, includes anything that is possible using “state-of-the-art technologies,” the court said.[71] Today’s state of the art includes large language models embedded in web-scraping systems, tools that seem amply capable of noticing the presence of a terms-of-service hyperlink, following that link to download the terms, finding the part of the terms that addresses web scraping, and accurately interpreting whether scraping for TDM purposes has been forbidden.[72]
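A minimal sketch of the four-step check just described might look like the following; the link-finding heuristic, model, and prompt are all illustrative assumptions, not the court’s (or any party’s) actual system:

```python
# A sketch of the ToS check the court contemplated: find the
# terms-of-service link, fetch the terms, and ask an LLM whether they
# reserve rights against text-and-data-mining (TDM) scraping. The
# heuristics, model, and prompt are illustrative assumptions.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

def tdm_scraping_forbidden(page_url: str) -> bool:
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Step 1: notice the presence of a terms-of-service hyperlink.
    link = next((a for a in soup.find_all("a", href=True)
                 if "terms" in a.get_text().lower()), None)
    if link is None:
        return False  # no terms found; no reservation detected

    # Step 2: follow the link and download the terms.
    terms_html = requests.get(urljoin(page_url, link["href"]),
                              timeout=10).text
    terms_text = BeautifulSoup(terms_html, "html.parser").get_text()

    # Steps 3 and 4: have the LLM locate and interpret the scraping clause.
    answer = client.chat.completions.create(
        model="gpt-4o",  # assumed; any capable model would do
        messages=[{"role": "user", "content":
                   "Do these terms of service forbid scraping this site "
                   "for text-and-data-mining or AI-training purposes? "
                   "Answer YES or NO, then explain briefly.\n\n"
                   + terms_text}],
    ).choices[0].message.content
    return answer.strip().upper().startswith("YES")
```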

The court found that this kind of ability was well within the capabilities of recent advances in LLMs.[73] It noted the special irony of these particular defendants disclaiming this power, hoisting them on their own AI petard:[74] “it would, in the Chamber’s view, present a certain inconsistency in valuation to allow the development of increasingly powerful text-understanding and text-generating AI models through the [TDM] exception . . . , while simultaneously not requiring the use of already existing AI models within the [reservation of rights provision.]”[75]

2.4. To the AI Skeptics

Some readers will still be skeptical. For every believer in the unbounded potential of AI, there is an equally confident skeptic convinced that such claims are overinflated hype. The skeptics point to the persistent tendency of LLMs to hallucinate, the brittleness of guardrails, and the tech industry’s track record of overpromising and underdelivering. LLMs are not ready for prime time, they argue, not even for relatively constrained tasks like regulatory compliance.

I am not so skeptical. I have been convinced that LLMs are quite capable of performing complex tasks that could not have been automated a decade ago, and that they will begin to replace the human labor used for many knowledge-industry tasks.

But perhaps the skeptics are right; time will tell. If future developments reveal that LLMs are too error-prone and irredeemably hallucinatory to conduct even somewhat rote compliance work, that would disprove this essay’s core thesis, but it would nevertheless strengthen arguments in favor of beefed-up regulation of the AI industry, which is a principal takeaway of this essay. For if AI is too error-prone for even simple compliance work, we surely must regulate it to prevent the inevitable harms those errors will cause.

3. What This Means, and Doesn’t Mean

Advances in AI will tend to drive the costs of regulatory compliance toward zero well beyond the regulation of AI itself. In every sphere of industrial regulation, many once labor-intensive, impossible-to-automate compliance tasks can now be performed by AI, and we will soon see a dramatic reduction in the amount of time and money spent on compliance. From tax to climate to consumer protection and beyond, the ability of LLMs to gather, process, interpret, and generate texts; the ability of image and video generation models to generate visuals; and the ability of agentic AI systems to perform complex, multi-step data tasks will streamline the way companies comply with regulation.

To be clear, this does not mean that we can stop debating the efficacy or wisdom of new regulation. Quite the contrary. It will simply have the salutary effect of removing one major objection to new regulation: compliance costs. Saying that a regulation poses no significant costs of compliance is not the same as saying it is a good law. We will be liberated to debate the non-compliance-cost-based arguments for or against a new regulation: whether it addresses an important problem, whether it is sufficiently tailored to that problem, whether it improperly selects winners and losers, and whether it assigns a proper role to government. I predict that many proposed regulations will fall in this gantlet of considerations. The difference is that we can stop debating whether a proposal imposes an undue compliance burden: going forward, the answer will always be “no.”

We must also be mindful that the promise of compliance zero could lead us to expand the regulatory state in ways we should resist. If we are not judicious, regulating with compliance zero might tend to produce laws that are more complex, more technocratic, and further beyond human comprehension than the laws we create when humans perform compliance. After all, the complexity of any legal regime can now be met by the complexity of AI-powered compliance responses. Within an increasingly byzantine call-and-response of regulation and compliance, we may create warrens of arbitrage, fraud, and unfair competition. The trick is to recognize the way automation reduces the burden of compliance without using it as an excuse to maximize complexity. In other words, the fact that companies can comply with any law with minimal burden should not liberate us from the goal of crafting human-scale, explainable, interpretable, frictionful regulation.

This short essay will not be the final word on these matters; there is much left for future work. First, although we are sliding asymptotically toward compliance zero, we will never reach truly cost-free compliance. Setting up agentic workflows to do compliance work will cost something, and running a large system of AI agents imposes real costs, both financial and environmental. We need to learn to measure and accurately characterize these irreducible costs. Second, some regulations will impose compliance work that will indeed be difficult to automate, despite AI. For example, regulations that require humans to work with one another, say a rule that requires board scrutiny, review by an oversight committee, or consultation with users or the public, cannot be fully automated, by definition. Poorly drafted or ambiguous regulations will also resist automation. We should study the kinds of compliance burdens that cannot be automated; they will teach us both about regulation and about AI.

The burden of regulatory compliance will soon be nothing but a bad memory of days past, a story the old-timers tell the young ones about how the law used to operate. Thanks to advances in AI, most compliance work will soon be automated. Liberated from worrying about the costs imposed by our rules, our challenge is to write rules that are effective, tailored, and wise.

Paul Ohm[1]

Citation: Paul Ohm, Toward Compliance Zero: AI and the Vanishing Costs of Regulatory Compliance, The Law & Technology & Economics of AI (ed. Adrian Kuenzler, Thibault Schrepel & Volker Stocker), Network Law Review, Summer 2025.

References:

About the author

Paul Ohm is a Professor of Law at the Georgetown University Law Center in Washington, D.C. In his research, service, and teaching, Professor Ohm builds bridges between computer science and law, utilizing his training and experience as a lawyer, policymaker, computer programmer, and network systems administrator. His research focuses on information privacy, computer crime law, surveillance, technology and the law, and artificial intelligence and the law. Professor Ohm has published landmark articles about the failure of anonymization, the Fourth Amendment and new technology, and broadband privacy. His work has defined fields of scholarly inquiry and influenced policymakers around the world.
