Personalized Competition Law: The New Frontier of AI Market Governance

The Network Law Review is pleased to present a special issue entitled “The Law & Technology & Economics of AI.” This issue brings together multiple disciplines around a central question: What kind of governance does AI demand? A workshop with all the contributors took place on May 22–23, 2025, in Hong Kong, hosted by Adrian Kuenzler (HKU Law School), Thibault Schrepel (Vrije Universiteit Amsterdam), and Volker Stocker (Weizenbaum Institute). They also serve as the editors.

**

Abstract

Artificial Intelligence technologies prompt several doctrinal shifts in competition law. For AI market governance, this means moving toward personalized enforcement. Rather than applying one-size-fits-all legal tests, regulators may need to tailor rules and liability standards by sector, by actor, or by the sophistication of algorithms in use. This approach requires greater transparency, context-sensitive oversight, and documentation of algorithmic logic to facilitate audits, especially in high-stakes fields like healthcare and insurance, where discriminatory outcomes carry heightened societal risk. Personalized enforcement acknowledges that not all AI applications pose the same risks, balancing innovation incentives with safeguards against exclusion and discrimination in highly dynamic, data-driven markets.

*

1. Introduction

Artificial intelligence (AI) requires a fundamental shift in how we conceptualize data-driven markets and, consequently, apply competition law. AI can lead to tacit collusion among firms by learning how to coordinate prices without engaging in explicit communication. AI can also increase transparency and facilitate rapid price adjustments under circumstances where tacit collusion was previously inconceivable. Moreover, where AI is utilized to collect and analyze widespread customer data, it can promote sophisticated price discrimination by tailoring prices to individual consumers based on willingness to pay, loyalty, or specific product attributes.

These two issues – tacit collusion and discrimination – can bring about artificially inflated prices for consumers and lead to the exclusion of competitors, all without direct human involvement. In sensitive sectors like healthcare or insurance, the use of algorithms can cause unfair market valuations and undermine the competitive process.

This short contribution argues that the progressive use of AI technologies requires us to reconsider how we conceptualize data-driven markets and how we apply competition law, because tacit collusion and price discrimination are typically not deemed illegal even though their outcomes can be highly undesirable.

Specifically, the contribution highlights three distinct points: First, it surveys the existing scholarly debate around algorithmic collusion and discrimination; second, it shows that AI turns two of competition law’s most fundamental assumptions on their head; and third, it charts some possible paths forward for legal doctrine in the digital economy. The following provides a summary of these points and asks what AI’s implications for market governance are.[1]

2. AI Upends the Foundations of Competition Law

Legal disputes around algorithmic collusion and discrimination often revolve around the ability of AI in the market context to raise prices, exclude competitors, and cause harm to consumers without human intervention.[2] Furthermore, there are sector-specific concerns around discrimination, such as in health or finance, where the use of algorithms can result in unfair market valuations.[3] These issues pose serious challenges for competition authorities and courts and raise fundamental questions about the economics of AI, the nature of rivalry, and the extent to which market behavior is inherently tied to human activity.[4]

Scholars increasingly argue that AI renders tacit collusion and exclusion more prevalent and realistic, and that the law should take a stricter posture toward such conduct. However, current legal doctrine struggles to address algorithmic collusion and exclusion because it focuses on contractual relationships and market conduct rather than on market outcomes.[5]

Current legal doctrine is ill-suited to incorporate competition law liability for AI-driven market behavior. In particular, AI confronts competition law with a fundamental tension:

  • If competition law aims to prohibit AI because AI increasingly leads to harmful market outcomes, competition law needs to reject the conventional assumption that firms behave as rational, profit-maximizing actors.
  • Conversely, if the use of AI in the marketplace is presumed lawful but some of its outcomes are nonetheless condemned, competition law must revisit its existing tests for establishing infringement, which associate market conduct with performance. Under this alternative, liability would be based purely on the outcomes generated by AI, rather than on the intent or nature of the conduct, effectively creating an outcome-centered regime of competition law liability.

The tension stands for a fundamental choice: either discard the rationality assumption or abandon conduct-based assessments in favor of outcome-oriented competition law liability.

3. The Assumption of Rational and Profit-Maximizing Firm Behavior

The assumption of rational and profit-maximizing firm behavior constitutes the most basic constraint on the reach of competition law liability. Violations of competition rules should not extend to conduct that would be rational or profitable absent the tendency of such conduct to harm or suppress competition.[6] This well-established principle prevents competition law from condemning reasonable business practices when there is ambiguity in assessing firm behavior. If the object or effect of an undertaking’s conduct is indeterminate, the assumption of rational and profit-maximizing firm behavior ensures that economically plausible and legitimate business strategies remain lawful.

The principle is vital in evaluating the most contentious aspects of collusive and exclusionary behavior. Consider competitors who independently and simultaneously adjust their prices without any explicit agreement – a scenario long legitimized by the assumption that each is responding to relevant economic information, rather than coordinating unlawfully.[7]

That logic worked well in an economy where firms faced substantial risks when engaging in competition, and where competition law could assume a significant degree of uncertainty. Uncertainty prevents firms from predicting the behavior of rivals and helps to maintain competitive tension. If firms could entirely anticipate the decisions of their competitors, markets would regularly verge on tacit collusion and exclusion, reducing competition and harming consumers.[8]

AI significantly mitigates much of the risk of market failure – the risk that something is produced or offered that may not be in demand. By analyzing vast amounts of market data, AI can enhance the ability of firms to understand the conduct of competitors and enable them to predict consumer preferences precisely. If firms employ AI for specific tasks, AI can autonomously learn to coordinate prices beyond oligopolistic settings, foreclose markets without the need for unwarranted discrimination, and, through rapid detection of and response to market conditions, reduce the uncertainties that accompany competition.

The consequence is that AI-driven collusion and exclusion may become more likely, even if firms do not intend to act in anticompetitive ways. This is because AI-driven collusion and exclusion are often the result of advanced analytics rather than deliberate misconduct.[9] However, condemning AI simply based on undesirable market outcomes would force firms to ignore profit-maximizing incentives and act contrary to their motivations. As firms progressively use AI systems designed solely to maximize profits, it becomes difficult to classify their conduct as unlawful under existing legal standards.
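The mechanism described above – independent profit-maximizing agents drifting toward coordinated outcomes without any communication – can be illustrated with a deliberately minimal simulation. This is not the article's model; it is a toy sketch loosely inspired by the Q-learning experiments cited in note [2], with an invented price grid, demand function, and learning parameters. Each agent observes only its rival's last price and maximizes its own profit.

```python
import random

# Toy sketch (illustrative assumptions throughout): two independent
# Q-learning pricing agents in a repeated duopoly. Neither agent is
# instructed to collude; each only maximizes its own profit, and there
# is no communication channel between them.
PRICES = [1.0, 1.5, 2.0]           # hypothetical discrete price grid
COST = 0.5                         # hypothetical marginal cost
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def demand(own, rival):
    """Toy demand: all buyers go to the cheaper firm; split when tied."""
    if own < rival:
        return 1.0
    if own == rival:
        return 0.5
    return 0.0

def run(episodes=20000, seed=0):
    rng = random.Random(seed)
    # Each agent's state is the rival's last price index: Q[agent][state][action]
    q = [[[0.0] * len(PRICES) for _ in PRICES] for _ in range(2)]
    state = [0, 0]
    last_actions = [0, 0]
    for _ in range(episodes):
        actions = []
        for i in range(2):
            if rng.random() < EPS:                      # explore
                actions.append(rng.randrange(len(PRICES)))
            else:                                       # exploit
                row = q[i][state[i]]
                actions.append(row.index(max(row)))
        profits = [
            (PRICES[actions[i]] - COST) * demand(PRICES[actions[i]], PRICES[actions[1 - i]])
            for i in range(2)
        ]
        for i in range(2):
            new_state = actions[1 - i]                  # observe rival's price
            best_next = max(q[i][new_state])
            q[i][state[i]][actions[i]] += ALPHA * (
                profits[i] + GAMMA * best_next - q[i][state[i]][actions[i]]
            )
            state[i] = new_state
        last_actions = actions
    return last_actions

final = run()
print("final prices:", [PRICES[a] for a in final])
```

Whether such agents settle above the competitive price depends heavily on the parameters and demand specification; the point of the sketch is only that nothing in the code instructs coordination, which is precisely why conduct-based legal tests struggle with this setting.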

Even if condemnations of AI could be justified as a policy measure, the practical complexities of implementing such measures would likely be tremendous.[10] Should such a policy involve outright bans on the use of AI in a firm’s operation? Should it involve criteria to define specific classes of AI that are concerning from a competition law perspective?

Perhaps the most sensible approach from a competition law perspective is to ground liability not on a particular type of conduct but on the fact that firms must be ‘aware’ that the use of AI may facilitate collusion and exclusion.[11] Recent advancements in AI are rapidly pushing firms toward fully delegating pricing and strategic choices to autonomous AI systems. Importantly, these systems are not explicitly instructed to engage in collusion or exclusion; rather, they are merely set to maximize profits. Without explicit agreement or intent, or even any regard for the survival of competitors, autonomous AI systems can achieve noncompetitive outcomes under conditions in which human actors have long failed to do so.

4. The Relationship Between Conduct and Performance

Algorithmic collusion and discrimination also upend competition law’s basic tests for establishing infringement. Findings of anticompetitive behavior typically depend on a particular kind of conduct, and existing legal tests regularly focus on two elements: alleged anticompetitive conduct restricting competition; and, as a result, harm done to consumer welfare, demonstrated through adverse effects on price, quality, choice, or innovation.

This analytical framework falters in the context of AI. As firms deploy advanced, autonomous AI systems to maximize profits, collusion and exclusion become increasingly probable—even under circumstances where this was previously inconceivable. AI systems eliminate sources of irrational decision-making, offer improved possibilities for demand forecasting, and enable firms to rapidly detect and penalize competitors’ deviations from profit-maximizing equilibrium strategies. Similarly, AI’s data advantages and forecasting capabilities permit undertakings to systematically extract surplus from consumers with heterogeneous willingness to pay.[12]

The ability of AI to bring about collusion and exclusion in a wide range of circumstances arises from (1) the capacity of AI to reduce the risk of market failure by pursuing profit-maximizing strategies with remarkable persistence, speed, and foresight; and (2) the AI’s comprehension of consumer and competitor behavior through data-driven pattern recognition, learning, and adaptation. The more data and computing power the AI has, the more likely it is to bring about collusion and exclusion.

Yet collusion and exclusion – taken on their own – are not probative of anticompetitive conduct. They could just as easily result from independent acts of rational computers with abundant information that are immediately responsive to quick and subtle variations in market fluctuations. In practice, courts often infer infringement by correlating market outcomes – like uniform or discriminatory pricing – with circumstantial evidence such as particular market structures or cost asymmetries. AI undermines this framework: collusive or exclusionary outcomes may merely reflect neutral responses to market data, rather than unlawful coordination or discrimination.

What authorities and courts therefore really want to know, to demonstrate a competition law violation, is not simply whether collusive or exclusionary conduct can be inferred from particular cost and market structures, but whether the AI engaged in an unlawful act of concertation or discrimination. This turns out to be challenging, however, because AI behaves in fundamentally different ways than human actors, and the ‘reasoning’ behind AI-driven business strategies is typically not accessible. If authorities and courts nonetheless seek to condemn uniform or discriminatory firm behavior when AI is involved, outcome-based liability will grow in importance, and the significance of conduct-based assessments will decline.

An ‘awareness-based’ approach may thus gain traction – requiring that firms not engage in any conduct that is ‘known,’ or ‘should be known,’ to facilitate collusion or exclusion. The mere fact that competition law infringements rest on a requirement of ‘awareness’ gives firms an incentive to be concerned about the use of AI – particularly black box algorithms, where the AI’s decision-making process is opaque.[13]

Still, the scope of the ensuing competition law liability will be slim and will need to be significantly rethought to effectively address anticompetitive harms. As firms progressively rely on AI agents programmed solely to maximize profits, it becomes difficult to characterize their behavior as unlawful. The distinction between legitimate business conduct and unlawful behavior may no longer be consistently discernible.

Nor is this merely analogous to liability for an employee’s use of a firm’s computer. Firms are held accountable for their employees’ behavior because such conduct is attributable to the undertaking itself.[14] But as long as rational firm behavior can be assumed in AI-driven markets, the traditional basis for competition law liability is no longer workable. As a result, competition law infringements in the era of AI will turn more on implementing appropriate compliance mechanisms than on identifying particular forms of anticompetitive conduct.

5. The Way Forward for Competition Law

Competition law will likely assume a very different shape in the context of AI. The growing deployment of AI may even prompt a reconsideration of what competition law protects and to what end. Businesses will have a strong motivation to leverage AI to maximize profits and increase efficiency through precise targeting. Conversely, there will likely be a pushback against the adverse market effects of AI, not only from consumers and the public, but also, and potentially even more, from competitors lacking access to advanced technologies.[15]

Regulating AI in the market context alone will not solve the underlying competition problem if authorities or courts in a specific case cannot demonstrate a clear understanding of the AI’s working logic or effectively monitor the algorithm’s behavior in real time. Authorities and courts will increasingly be required to make inferences of unlawful behavior based on the mere use and capacity of AI. Alternatively, competition law would need to establish a framework of presumptions to corroborate the premise that AI facilitated collusion or exclusion. Perhaps a more refined approach involves recognizing some of the benefits of AI in the market context, by enabling companies with high fixed costs to charge higher prices to less price-sensitive customers while offering lower prices to price-sensitive consumers. This could increase product affordability overall and result in a more diverse range of products tailored to different market segments.

The bottom line is that any resulting competition law liability will need to strike a balance between lowering the threshold for legal intervention and developing a multiplicity of standards and presumptions that allow competition law infringements to be established with a greater degree of personalization. This suggests that AI-driven business strategies may eventually require different rules for different actors – personalized competition law.[16] Just as firms have understood how to take advantage of the long tail of niche demand, authorities and courts might be required to divide undertakings into niche objects of regulation. This involves applying different tests to different actors in a manner that accounts for specific practices across several different situations. The goal of such an approach would be to ensure that the enforcement of competition law remains effective in markets where new technologies allow firms to engage in dynamic and highly personalized practices.

Personalization can be implemented through a tiered system based on market share or algorithmic sophistication, with dominant firms facing stricter scrutiny, such as limitations on AI use or outcome-based liability, and smaller players being exempt from compliance burdens due to their limited control over large datasets – unless their AI use harms competition. Industry-specific tailoring – varying rules by market or by consumer vulnerability – would also allow authorities to engage technology and legal experts on the competition law implications of AI. Importantly, personalized competition law does not mean there are no basic principles. For instance, uniform requirements for all firms using AI – such as mandates to document algorithmic logic for retrospective audits – can be established.[17] Additionally, the spirit of competition law, including the prohibition of anticompetitive conduct, must still be upheld. Although biased algorithms can lead to unfair market valuations in sectors like healthcare and insurance, personalized competition law allows for a framework tailored to the specificities of these sectors. And when it comes to the enforcement of competition law, authorities will increasingly need to integrate AI into their instruments and tools to address the legal challenges that arise from it.[18]
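To make the tiered idea concrete, one could imagine a scrutiny lookup of the kind a personalized regime might formalize. The tiers, thresholds, and criteria below are invented for illustration only; the article does not propose specific cut-offs.

```python
# Hypothetical illustration (thresholds and tier names are invented):
# a personalized regime might key the applicable scrutiny level to a
# firm's market position and its degree of algorithmic autonomy.
def scrutiny_tier(market_share: float, autonomous_pricing: bool) -> str:
    """Map a firm's profile to a hypothetical enforcement tier."""
    if market_share >= 0.4 and autonomous_pricing:
        return "strict"      # e.g. outcome-based liability, AI-use limits
    if market_share >= 0.4:
        return "heightened"  # dominant, but human-supervised pricing
    if autonomous_pricing:
        return "baseline"    # e.g. duty to document algorithmic logic
    return "exempt"          # limited data control, no autonomous AI

print(scrutiny_tier(0.55, True))   # dominant firm with autonomous pricing
```

The design point is that such a scheme preserves uniform floor rules (the "baseline" documentation duty applies to every firm deploying autonomous pricing) while concentrating the heaviest obligations on the actors most capable of producing the harms discussed above.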

6. Conclusion

In the not-too-distant future, authorities and courts may find ways to condemn firm conduct where algorithmic decision-making facilitates anticompetitive behavior such as tacit collusion and exclusion, even absent explicit agreements or exclusionary intent. This means that competition law’s basic doctrines are ill-suited to a context where AI transforms competitive behavior and market outcomes in unanticipated ways. Against this backdrop, competition law must rethink, retool, and shift its framework to effectively balance innovation with the prevention of anticompetitive harm, navigating legal uncertainty and the need for timely enforcement.

Adrian Kuenzler

Citation: Adrian Kuenzler, Personalized Competition Law: The New Frontier of AI Market Governance, The Law & Technology & Economics of AI (ed. Adrian Kuenzler, Thibault Schrepel & Volker Stocker), Network Law Review, Fall 2025.

References

  • [1] For a detailed analysis, see Adrian Kuenzler, ‘Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law’, in Yeşim M. Atamer and Alexander Hellgardt (eds.), The Oxford Handbook of Regulatory Contract Law (Oxford: Oxford University Press, 2026), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5296869.
  • [2] J Miklós-Thal and C Tucker, ‘Collusion by Algorithm: Does Better Demand Prediction Facilitate Coordination Between Sellers?’ (2019) 65 Management Science, 1552; E Calvano, G Calzolari, V Denicolò and S Pastorello, ‘Artificial Intelligence, Algorithmic Pricing, and Collusion’ (2020) 110 American Economic Review, 3267; T Klein, ‘Autonomous Algorithmic Collusion: Q-Learning Under Sequential Pricing’ (2021) 52 RAND Journal of Economics, 538; J O’Connor and NE Wilson, ‘Reduced Demand Uncertainty and the Sustainability of Collusion: How AI Could Affect Competition’ (2021) 54 Information Economics and Policy, 100882; HT Normann and M Sternberg, ‘Do Machines Collude Better than Humans?’ (2021) 12 Journal of European Competition Law and Practice, 765; HT Normann and M Sternberg, ‘Human-Algorithm Interaction: Algorithmic Pricing in Hybrid Laboratory Markets’ (2023) 152 European Economic Review, 104347; A Rhodes and J Zhou, ‘Personalized Pricing and Competition’ (2024) 114 American Economic Review, 2141; F Marty and T Warin, ‘Deciphering Algorithmic Collusion: Insights from Bandit Algorithms and Implications for Antitrust Enforcement’ (2025) 3 Journal of Economy and Technology, 34.
  • [3] A Gautier, A Ittoo and P Van Cleynenbreugel, ‘AI Algorithms, Price Discrimination and Collusion: A Technological, Economic and Legal Perspective’ (2020) 50 European Journal of Law and Economics, 405; MS Gal, ‘Limiting Algorithmic Coordination’ (2023) 38 Berkeley Technology Law Journal, 173; MS Gal and DL Rubinfeld, ‘Algorithms, AI, and Mergers’ (2024) Antitrust Law Journal, 683; DA Crane, ‘Antitrust After the Coming Wave’ (2024) 99 New York University Law Review, 1187.
  • [4] In Re: RealPage, Inc., Rental Software Antitrust Litigation (No. II), No. 3:23-MD-03071 (M.D. Tenn. 2023); Cornish-Adebiyi et al v. Caesars Entertainment, Inc. et al No. 1:2023-cv-02536 (D.N.J. 2024); Duffy v. Yardi Systems Inc. et al, No. 2:2023-cv-01391 (W.D. Wash. 2024).
  • [5] See sources quoted supra notes 2–3.
  • [6] H Hovenkamp, ‘Rationality in Law and Economics’ (1992) 60 George Washington Law Review, 293; CR Leslie, ‘Rationality Analysis in Antitrust’ (2010) 158 University of Pennsylvania Law Review, 261.
  • [7] WK Viscusi, JE Harrington and JM Vernon, Economics of Regulation and Antitrust (4th ed. Cambridge MA: MIT Press, 2005).
  • [8] See FA Hayek, ‘The Use of Knowledge in Society’ (1945) 35 American Economic Review, 519.
  • [9] E Calvano, C Calzolari, V Denicolò and S Pastorello, ‘Algorithmic Pricing: What Implications for Competition Policy?’ (2019) 55 Review of Industrial Organization, 155; KT Hansen, K Misra and MM Pai, ‘Frontiers: Algorithmic Collusion: Supra-competitive Prices via Independent Algorithms’ (2021) 40 Marketing Science, 1.
  • [10] See R Picciotto, ‘Rent-Setting Algorithms Find Legal Lifeline’ (2025) Wall Street Journal, May 27, 2025, Section Real Estate; W Lehr and V Stocker, ‘Competition Policy over the Generative AI Waterfall’ in A Abbott and T Schrepel (eds), Artificial Intelligence and Competition Policy (Paris: Concurrences, 2024), 335-357.
  • [11] T Chen, ‘Competition Law and AI’ in E Lim and P Morgan (eds), The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge: Cambridge University Press, 2024), 472-491; CJEU Case C-74/14, Eturas, ECLI:EU:C:2016:42.
  • [12] OECD, Algorithms and Collusion: Competition Policy in the Digital Age, OECD Roundtables on Competition Policy Papers, May 17, 2017.
  • [13] J Rodu and M Baiocchi, ‘When Black Box Algorithms Are (Not) Appropriate’ (2023) 9 Observational Studies, 79.
  • [14] CJEU Case C-22/98, Criminal Proceedings Against Becu, ECLI:EU:C:1999:419; CJEU Case C-413/13, FNV v Netherlands, ECLI:EU:C:2014:2411; Guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union to horizontal co-operation agreements (2023/C 259/01), paras 379, 401.
  • [15] T Schrepel and A Pentland, ‘Competition Between AI Foundation Models: Dynamics and Policy Recommendations’ (2024) Industrial and Corporate Change, 1.
  • [16] O Ben-Shahar and A Porat, Personalized Law. Different Rules for Different People (Oxford: Oxford University Press, 2021); see Kuenzler, supra note 1.
  • [17] AJ Casey and A Niblett, ‘A Framework for the New Personalization of Law’ (2019) 86 University of Chicago Law Review, 333; TB Gillis and JL Spiess, ‘Big Data and Discrimination’ (2019) 86 University of Chicago Law Review, 459.
  • [18] T Schrepel and T Groza, Computational Antitrust Worldwide: Fourth Cross-Agency Report (Stanford Computational Antitrust Project, 2025).
About the author

Adrian Kuenzler is Associate Professor at the University of Hong Kong Faculty of Law and Affiliate Fellow at the Information Society Project, Yale Law School. His research focuses on technology, innovation policy and competition, and examines problems in antitrust, intellectual property and consumer law from a comparative and interdisciplinary perspective. Adrian graduated from the University of Zürich (M.A., Ph.D.) and from Yale Law School (LL.M., J.S.D.). He has served as a Professor in the Faculty of Law at Zürich University and has held visiting academic positions at New York University School of Law, the Max Planck Institute for Research on Collective Goods, Yale Law School, ETH Zürich, the European University Institute, the Weizenbaum Institute for the Networked Society and Oxford University. Adrian has held visiting professorship positions at Universidad de San Andrés (Buenos Aires) and the University of Münster. He has also been a Robert S. Campbell Visiting Fellow at Magdalen College, Oxford.
