The Proof Is In the Digital Enforcement Pudding

Welcome to DigiConsumers, a trimonthly exploration of recent cases and decisions concerning digital consumers in European law & policy. Authored by Catalina Goanta, an Associate Professor of Law specializing in the field, this series goes through the latest and most important developments in the space. Enjoy!


Technology Regulation: A Moving Target

Recent years have seen a tremendous rise in the volume of technology regulation in the European Union, with the Digital Services package, the AI Act, and the Data Act as mere examples of the waves of digital regulation affecting the Internet and its netizens. Content moderation, data governance, and digital services are increasingly being scrutinized for systemic harms, transparency, and commercial accountability. This reflects an attempt to signal to the world that (Internet) technology companies do not reign supreme in the Digital Single Market, and that when faced with serious risks, the European regulator takes a skeptical approach to innovation. Many of these instruments have attracted the interest of researchers, policy-makers, and civil society from across the Atlantic, leading some US research institutions, such as AI Now, to release policy briefs lobbying the EU for even stronger regulation.

In addition to such initiatives, which aim to tackle the very technologies (e.g., AI) or the technology-related practices of companies in relevant industries (e.g., content moderation), the rise of digitalization as a driver of regulatory reform shows how legislation that once had nothing to do with the digital realm may be patched with amendments tackling the impact of technology. For instance, the French government recently launched a public inquiry to understand public sentiment relating to the influencer industry, in order to further regulate emerging consumer protection issues. Similarly, the Dutch government has announced plans to further govern child labour in the form of influencer activities. Make no mistake: it is not as if France and the Netherlands have no explicit regulation on consumer protection and misleading advertising, or on child labour and entertainment, both of which tackle practices that well pre-date the Internet and its discontents. However, these are mere examples of how technology regulation is omnipresent, whether in a more sui generis form that addresses technology itself or in the form of attempts to control perceived digitalization externalities arising across the entire legal board. As with everything, this is not really new: Gunther Teubner was writing about EU legislative inflation as early as 1987 (Gunther Teubner, Juridification of Social Spheres: A Comparative Analysis in the Areas of Labor, Corporate, Antitrust, and Social Welfare Law (Berlin/New York: De Gruyter, 1987), pp. 6-9; see also Dace Šulmane, ‘”Legislative Inflation” – An Analysis of the Phenomenon in Contemporary Legal Discourse’, Baltic Journal of Law & Politics 4:2 (2011): 78-101).
Without empirical insights, it is hard to say whether we are living through a new era of EU law in which the legislator wants to overcompensate for the regulatory subsidies given to the technology sector over the past three decades, or whether this is a mere cycle of regulatory inflation (i.e., a growing number of laws) due to a passing state of risk and uncertainty.

Yet the omnipresence of technology regulation is not matched by an equally strong, harmonized, and institutionally capable administrative setup to tackle the actual enforcement of all the standards embedded in the current regulatory monsoon.

A Superficial Line between Public and Market Surveillance

In an earlier publication for the Stanford Computational Antitrust initiative, Jerry Spanakis and I proposed looking at digital enforcement from the perspective of market surveillance, which has a long-standing and positive connotation in the European Union. Grounded in product safety legislation, market surveillance has had a simple goal: removing consumer hazards, such as unsafe or illegal products that may not be distributed within the internal market. But widespread public attention to work such as Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’, together with other sources of anti-tech solutionism rhetoric (ranging from very reasonable to not so reasonable), has saturated the notion of surveillance with negative connotations. However, surveillance is becoming increasingly complex, and while some of it is problematic and unaccountable, a blanket fight against surveillance does not help with the enforcement issues arising out of the myriad laws that have been passed, and are still waiting to be passed, in Brussels and in national parliaments across the EU.

As I describe with Jerry in our paper, surveillance used to be neatly split between public and private: states would follow citizens (e.g., by issuing and tracking individual documentation in public records), and companies would follow consumers (e.g., by tracking consumer preferences through loyalty cards, focus groups, and surveys). This delineation is no longer neat, as states increasingly rely on businesses to surveil citizens (e.g., by outsourcing algorithmic systems for public decision-making), and they may also end up monitoring consumers when market surveillance is deployed on unclear categories of legal subjects (e.g., the fine line between a trader and a consumer in the influencer economy).

We see that much attention has focused on public surveillance (e.g., algorithms used for allocating teachers on short-term contracts in Italy, or Spain’s sick leave benefit fraud detection algorithm). These incidents and resulting analyses have led to the development of frameworks around transparency and accountability for situations where automated decision-making is used by, for instance, public authorities. In the Netherlands, my colleagues Janneke Gerards, Mirko Schäfer, Arthur Vankan and Iris Muis proposed an Impact Assessment for Human Rights and Algorithms, which the government now uses. This is a very important step in understanding what exactly is needed, from an institutional perspective, if public authorities decide to deploy automated decision-making.

However, we see that for a lot of the technology regulations mentioned at the beginning of this piece, new silos are forming in public policy and research alike. The Digital Services Act has a famous provision covering data access for a wide range of stakeholders (draft Article 31 and new Article 40), which Paddy Leerssen has extensively covered. The same Article mentions that “providers of very large online platforms or of very large online search engines shall provide the Digital Services Coordinator of establishment or the Commission, at their reasoned request and within a reasonable period specified in that request, access to data that are necessary to monitor and assess compliance with this Regulation” (paragraph 1). In other words, this is an example of mandating market surveillance in the course of investigations and enforcement actions as per the DSA’s enforcement chapter.

What I believe needs to become increasingly clear is that the DSA enforcement authorities, as well as the authorities enforcing legislation on data protection, child influencers, and any other field of law where digitalization is involved in the monitoring and enforcement of legal standards, will bump into the same issues as the municipalities and governments that conducted public surveillance: transparency and accountability on the side of the enforcing authorities. Beyond these aspects, it is important to consider the lack of administrative law harmonization, as well as the massive discrepancies between different agencies and fields of enforcement, where institutional technology literacy may be more or less present.

It is therefore important that discussions around digital enforcement cross the superficial public/private divide, as well as the different legislative silos that may have been built around these concepts. When we talk about public enforcement, we are talking about public administration authorities that need to be better funded, prepared, and coordinated to ensure effective digital enforcement.

Enforcement Technology

This is not to say that it is irrelevant to continue addressing developments in specific fields. For consumer protection in particular, Prof. Christine Riefa and tech policy expert Liz Coll have dubbed this trend ‘enforcement technology’: the marriage of digital technology and enforcement efforts, which they define as a “[…] broad term for the use of technological innovations by enforcement agencies to help deliver enforcement activities. This could include market surveillance, such as scanning for misleading pricing or fake advertising; investigative activity which uses machine learning to interrogate company documentation; preventative measures such as reviewing consumer contracts for unfair clauses before they reach the market. In the future it may have the potential to directly execute or enable an enforcement action such as a warning, takedown or sanction.”

Their current UNCTAD project reflects a series of reports and events aimed at providing a multistakeholder forum for the discussion of the challenges and promises of enforcement technology. The second event in the series, ‘Introducing EnfTech: A Technological Approach to Consumer Law Enforcement’ took place on 20 April 2023.

Catalina Goanta


Citation: Catalina Goanta, The Proof Is In the Digital Enforcement Pudding, Network Law Review, Spring 2023.
