Implementing the European AI Act: Balancing Horizontal Consistency with Sector-Specific Requirements

The Network Law Review is pleased to present a special issue entitled “The Law & Technology & Economics of AI.” This issue brings together multiple disciplines around a central question: What kind of governance does AI demand? A workshop with all the contributors took place on May 22–23, 2025, in Hong Kong, hosted by Adrian Kuenzler (HKU Law School), Thibault Schrepel (Vrije Universiteit Amsterdam), and Volker Stocker (Weizenbaum Institute). They also serve as the editors.

**

Abstract

The European Union’s AI Act (AIA) is the world’s first comprehensive, horizontal regulation of artificial intelligence. While this horizontal approach seeks to ensure consistent rules across sectors and to prevent regulatory fragmentation, its implementation must contend with sector-specific requirements and highly heterogeneous use cases. This article outlines five key challenges for implementing horizontal AI regulation: ensuring proportionate risk mitigation for AI as a general-purpose technology, balancing trade-offs between competing policy goals, keeping governance frameworks adaptive to rapidly evolving technologies, allocating shared responsibilities for risk mitigation along the AI value chain, and maintaining coherence with other horizontal and sector-specific regulations. Building on these challenges, the article discusses implications for the institutional governance framework: accounting for sector-specific requirements while maintaining mechanisms for cross-sector alignment, establishing institutions for agile and effective post-deployment risk mitigation, and fostering continuous engagement between AI providers and regulatory authorities.

*

1. Introduction

With the AI Act (AIA), the European Union has adopted a landmark regulation and the world’s first comprehensive AI law. As a horizontal regulation, the AIA spans a broad and diverse range of domains and use cases. While this approach seeks to ensure consistent rules across sectors and to prevent the sectoral fragmentation of regulation, its implementation faces significant challenges due to sector-specific requirements and the heterogeneity of use cases across application domains. This paper outlines several key challenges in implementing horizontal AI regulation and examines their implications for the establishment of an effective institutional governance framework.

2. Key Implementation Challenges for Horizontal AI Regulation

2.1. Challenge 1: Proportionate Risk Mitigation for AI as a General-Purpose Technology

AI is now widely recognized as a general-purpose technology (Brynjolfsson et al., 2019), meaning that it is pervasive, can be improved upon over time, and can spawn complementary technologies (Bresnahan & Trajtenberg, 1995). The rise of generative AI in particular (Lehr & Stocker, 2025), and the effectiveness of pre-trained general-purpose models, demonstrate the broad applicability of AI across heterogeneous use cases and diverse application domains (McAfee, 2024). As such, AI exhibits several idiosyncratic characteristics that distinguish it from more traditional information technology and software applications, such as its capability for “machine learning” and a higher degree of autonomy. It is because of these distinct features, and the potentially associated risks to human safety and fundamental rights (NIST, 2023), that European policymakers have deemed a horizontal regulatory approach to AI both adequate and necessary.

However, the specific risks arising from the use of AI technologies differ significantly in relevance depending on the particular use cases. Moreover, effective risk mitigation critically depends on the specific context, as, for example, users may differ in their level of expertise and the degree of control they have over the system. Consequently, a strictly uniform approach to implementing horizontal AI regulation, applied to a general-purpose technology that is widely used across sectors with varying characteristics and risk profiles, risks being both ineffective and disproportionate.

For example, when requiring appropriate accuracy for high-risk AI systems (as mandated by Article 15(1) AIA), both the metric used to measure accuracy and the threshold for what is considered appropriate should be determined with regard to the specificities of the use case (Schnurr, 2025). Otherwise, uniform mandatory thresholds may be either too strict for some applications, thus stifling potential innovation and welfare benefits, or too lenient for others, thereby undermining the goal of effective protection against risks from AI. Similarly, the need, benefit, and technical feasibility of interpretability requirements for black-box AI systems differ significantly between application areas. Whereas interpretability may effectively mitigate risks in health applications, it may be neither appropriate nor feasible in autonomous driving settings.
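To make the point about use-case-specific accuracy requirements concrete, the following minimal sketch illustrates how a conformity check might select both the metric and the threshold by application domain. All metric names and threshold values are hypothetical illustrations, not figures drawn from the AIA or any harmonized standard.

```python
# Minimal sketch: use-case-specific accuracy metrics and thresholds.
# All metric names and threshold values are hypothetical illustrations,
# not requirements taken from the AIA or any standard.

from dataclasses import dataclass
from typing import Dict

@dataclass
class AccuracyRequirement:
    metric_name: str          # which metric counts as the relevant accuracy measure
    threshold: float          # what level is considered "appropriate"
    higher_is_better: bool = True

# Hypothetical sector-specific requirements: a medical triage system might be
# judged on recall (missed cases are costly), a credit-scoring system on
# calibration error (where lower values are better).
REQUIREMENTS: Dict[str, AccuracyRequirement] = {
    "medical_triage": AccuracyRequirement("recall", 0.95),
    "credit_scoring": AccuracyRequirement("expected_calibration_error", 0.05,
                                          higher_is_better=False),
}

def meets_requirement(use_case: str, measured: Dict[str, float]) -> bool:
    """Check a measured metric against the use-case-specific threshold."""
    req = REQUIREMENTS[use_case]
    value = measured[req.metric_name]
    return value >= req.threshold if req.higher_is_better else value <= req.threshold

# The same measured scores may pass in one domain but fail in another.
scores = {"recall": 0.93, "expected_calibration_error": 0.04}
print(meets_requirement("credit_scoring", scores))  # True under these assumptions
print(meets_requirement("medical_triage", scores))  # False: recall below 0.95
```

The point of the sketch is not the particular numbers but the structure: both the metric and the threshold are parameters that vary by use case, rather than constants fixed at the horizontal level.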

Standards or regulatory guidelines, which could serve as important instruments for facilitating implementation and reducing uncertainty for AI providers, are therefore difficult to establish within the broad scope of horizontal regulations such as the AIA.

2.2. Challenge 2: Balancing Trade-offs Between Competing Policy Goals of AI Regulation

The implementation of abstract horizontal rules and principles into an operationalizable compliance framework for specific sectors will not only critically determine the effectiveness and proportionality of these rules (see Challenge 1), but will also influence how trade-offs between the different policy goals of AI regulation are balanced. On the one hand, AI regulations such as the AIA are driven by the objective of safeguarding against harmful effects of AI and protecting health, safety, and fundamental rights. On the other hand, these same regulations are also intended to foster innovation (see Recital 1 AIA). Pursuing both policy goals entails implementation trade-offs, as a stricter interpretation of provisions and more demanding risk mitigation requirements can reduce the potential harms of AI systems but also increase compliance costs and constrain opportunities for technological development and entrepreneurial experimentation.

At a high level, the AIA aims to balance this trade-off by differentiating between different risk levels, with regulatory obligations increasing in proportion to the likelihood and severity of anticipated risks. However, as the AIA categorizes high-risk AI systems according to relatively broad application domains (see Annex I and Annex III AIA in particular), the type and level of risks may vary significantly even within the high-risk category. Therefore, striking an appropriate balance between competing policy goals will be difficult to achieve through uniform implementation requirements on the broad level of the AI Act. In particular, there is a key concern that requirements for risk mitigation will be too stringent for certain domains, because they are considered necessary to protect against risks in other domains. For instance, as noted above in relation to interpretability requirements for AI systems, post-hoc explanation techniques may be justified and effective in medical contexts to ensure the interpretability of AI systems’ outputs and to prevent harmful outcomes from misleading AI-generated advice. However, such techniques may offer little added value or may even be technically unfeasible in other domains, such as autonomous driving, where decisions must be made instantaneously and human oversight is unlikely to contribute additional knowledge complementary to the AI system. Furthermore, technical risk mitigation methods are often subject to inherent trade-offs themselves, such as the well-documented tension between accuracy and explainability in interpretable machine learning (DARPA, 2016). Therefore, the appropriateness and effectiveness of these methods must be evaluated with reference to specific application contexts (Crook et al., 2023; Schnurr, 2025). This implies that the implementation of requirements for AI systems must go beyond the general risk categories defined by the AIA and account for the specific characteristics of different sectors.

2.3. Challenge 3: An Adaptive Governance Framework for Rapidly Evolving AI Technologies

Another key challenge to the effective implementation of horizontal AI regulation lies in the rapid pace of technological progress and the emergence of potentially disruptive innovations, which can swiftly change the underlying technologies of AI systems and the associated risks. This has been illustrated by the emergence of general-purpose AI models, particularly large language models such as those underlying OpenAI’s ChatGPT, during the negotiations of the AIA draft regulation, which led to the inclusion of new provisions that deviate from the regulation’s general risk-based framework (Larouche, 2025). More recently, the release of the DeepSeek-R1 AI model and its reported performance have challenged prevailing assumptions regarding the computational resources required to train highly capable general-purpose AI models. As a consequence, risk mitigation provisions that focus on a small number of upstream providers of general-purpose AI models (as assumed by the AIA provisions on general-purpose AI models with systemic risks) may be undermined by the broad availability of general-purpose AI models with lower computational requirements. Therefore, the types and severity of risks associated with AI systems can shift quickly, calling for timely adjustments to regulatory requirements.

Moreover, not only do these risks evolve quickly, but the technical approaches available to mitigate them also advance at a similar pace. Trustworthy AI and explainable AI, among other areas, have become highly dynamic fields of research in which the state of the art is continually progressing (Arrieta et al., 2020; Kaur et al., 2023). This not only leads to the development of new technical methods for mitigating risks, but also generates empirical insights into their use and adoption by human users and operators of high-risk AI systems, which are important for evaluating the practical suitability and effectiveness of these methods.

Furthermore, as is typical for a general-purpose technology, AI applications are expected to become increasingly specialized, driving sector-specific innovations. While AI functions as a cross-cutting technology, its development and innovation trajectories are likely to diverge across different application areas. Consequently, governance frameworks must accommodate these diverse innovation paths, along with the distinct risks and specialized mitigation strategies associated with each.

The rapid pace of innovation in AI technologies presents a further challenge for ex-ante risk identification before the deployment of AI systems. In general, risks associated with AI arise because the formal problem specification that an AI system is supposed to address is necessarily incomplete (Doshi-Velez & Kim, 2017). As a result, ex-ante testing can account for only a subset of the possible outcomes the AI system may generate during inference once deployed. Hence, establishing appropriate standards and thresholds for the ex-ante testing of high-risk AI systems is difficult in itself. This challenge is further exacerbated when there are significant changes to the technologies underlying an AI system, making a comprehensive identification of all relevant risks even more problematic. Therefore, agile post-deployment risk mitigation approaches should be recognized as a necessary complement to ex-ante risk measures (see also Section 3.2).

2.4. Challenge 4: The AI Value Chain and Shared Responsibilities for Risk Mitigation

As AI continues to diffuse across various applications and economic sectors, it becomes evident that AI systems are frequently not provided by monolithic actors, but are instead supplied through ecosystems and value chains comprising different economic actors and roles in AI-driven value creation. At a high level, one can distinguish between the provider of a (general-purpose) AI model, the provider of the AI system built on top of that AI model, and the deployer who puts the system into use (Larouche, 2025). In many cases, value chains will involve even more actors and more fine-grained roles.

As a result of this distributed value creation, responsibility for mitigating the risks associated with an AI system is shared among various economic actors, including model providers, system providers, and system deployers. After deployment, risks typically materialize at the level of the deployer, whereas corrective measures are often most effectively implemented by the model or system provider. Effective risk mitigation therefore requires both coordination and a willingness to cooperate among actors across the value chain. Without such collaboration, risks may go unnoticed by providers or may be inadequately addressed, even when identified by deployers. In the worst case, this can render risk mitigation ineffective, despite each actor fulfilling its individual compliance obligations. From a regulatory perspective, this underscores the need for institutional arrangements and incentives that promote and facilitate cooperation across the AI value chain (see also Section 3.2).

2.5. Challenge 5: Coherence with Other Horizontal and Sector-Specific Regulations

A comprehensive regulatory framework governing AI inevitably intersects with other existing horizontal laws, such as data protection, consumer protection, and liability law. For example, data protection authorities and courts in Europe have been applying the General Data Protection Regulation (GDPR) and related data protection laws to AI for several years. This established body of precedent will now need to operate in concert with the AIA (see, e.g., Metikoš & Ausloos, 2025). Moreover, there are several overlapping, although not necessarily congruent, provisions between the GDPR and the AIA (Hacker, 2024; Wolff et al., 2023). For example, both the AIA and the GDPR stipulate a right to explainability, which now applies concurrently and may require harmonized interpretation, for example with respect to its scope of application or the required level of specificity in explanations (Metikoš & Ausloos, 2025; Nisevic et al., 2024). Similarly, consumer protection law may intersect with specific provisions of the AIA, for example through the specification of a provider’s obligations regarding high-risk AI systems under Article 16 AIA, raising challenges for compliance and the coherent application of overlapping legal regimes.

Horizontal AI regulation further interacts with sector-specific regulation. This is particularly evident in the case of the AIA, which explicitly considers the existence of sector-specific product safety regulations as a key criterion for classifying AI systems as high-risk (see Art. 6(1)(a) and Annex I AIA). While the AIA makes specific references to these sector-specific regulations in several provisions, the requirements it imposes on high-risk systems may nonetheless diverge from those set out in individual sector-specific regulations and their corresponding implementation frameworks. This may create regulatory tensions and significant compliance challenges (Hacker, 2024).

3. Implications for the Institutional Governance Framework for AI Regulation

The challenges outlined above have significant implications for the implementation of horizontal AI regulation, such as the European AIA. This section discusses potential institutional and procedural approaches to support effective implementation in response to these challenges.

3.1. Consideration of Sector-Specific Requirements and Mechanisms for Cross-Sector Alignment

Although the AIA establishes a new horizontal legal framework, its implementation and enforcement rely heavily on existing sector-specific institutions, making a certain degree of sectoral fragmentation likely inevitable (Larouche, 2025). For example, market surveillance authorities, which will be tasked with both pre-market conformity and post-market surveillance duties to enforce the AIA at the national level, will likely be represented by sector-specific agencies. Given the need to account for sector-specific requirements and the peculiar conditions of individual application domains when translating the AIA’s abstract rules and principles into practice, this may in fact support more effective and proportionate implementation. Tailoring horizontal provisions to sectoral contexts can in principle also help prevent overregulation by allowing for a more nuanced differentiation of required risk mitigation measures within the broad category of high-risk AI systems. Moreover, this approach holds particular promise for reconciling potential tensions between the horizontal AI regulation and sector-specific provisions, especially through the development of regulatory guidelines tailored to the individual sectors and the relevant use cases.

More generally, standards and regulatory guidelines should serve as key instruments to promote legal certainty and facilitate compliance. As foreseen by the AIA, adherence to standards can offer a presumption of conformity for AI providers and deployers. However, given the challenges outlined above, developing standards that are both practically implementable and broadly applicable across diverse sectors will be difficult at the horizontal level. Therefore, standards and guidelines should be established at the sectoral level, possibly complementing more abstract standards and guidelines at the horizontal level (Schnurr, 2025). Issuing regulatory guidelines and agreeing on standards at the sectoral level would also enable more agile responses to technological advancements and innovations that significantly impact risk assessment and risk mitigation.

At the same time, the likelihood of sectoral fragmentation calls for institutions capable of coordinating implementation frameworks across sectors by facilitating communication and exchange of ideas and experiences among sectoral authorities. The AIA acknowledges the need for such coordination and foresees the establishment of a European AI Board, intended to facilitate coordination among authorities across member states (Art. 65 AIA). However, regarding the national level, the AIA only states that member states shall facilitate coordination between the various sector-specific market surveillance authorities without further reference to specific institutions or procedures (Art. 74(10) AIA). Given the need for cross-sector coordination, establishing such institutions and empowering them to convene the different national market surveillance authorities and align their enforcement efforts under the AIA appears crucial.

Ideally, such institutions would ensure that bottom-up implementation approaches developed within individual sectors are aligned with overarching principles, thereby promoting coherence across the broader regulatory framework. This is particularly important for adjacent sectors, where the same organizations may operate across multiple domains and where uncoordinated implementation could result in parallel frameworks that increase compliance burdens. Furthermore, these institutions could serve as forums for the exchange of experiences and lessons learned from sector-specific implementation efforts, thereby fostering cross-sectoral learning and continuous improvement.

3.2. An Institutional Framework for Agile and Effective Post-Deployment Risk Mitigation

To effectively protect against risks and potential harms from AI, while minimizing adverse effects on innovation, post-deployment monitoring and mitigation of emerging harms should be a priority in the implementation of AI regulation. These harms may include safety hazards, discrimination against specific subgroups, restrictions on free speech, and threats to personal rights posed by inauthentic AI-generated content, and they may only become apparent after an AI system has been deployed. This requires the establishment of institutions and procedures that ensure the quick dissemination of information on AI system failures and emerging risks among the responsible actors across the AI value chain, as well as regulatory authorities.

Such institutions and procedures may be established through industry-driven codes of conduct or by regulatory authorities. For example, dedicated offices could serve as centralized points of contact for collecting information about incidents involving AI systems and their associated harms. In addition, these entities should support the accessibility and distribution of critical technical updates and fixes to deployers of an affected high-risk AI system, once such measures are made available by the model or system provider. Already today, institutions exist that identify, define, and catalog cybersecurity vulnerabilities (for example, the MITRE Corporation’s Common Vulnerabilities and Exposures database), which have become critical resources for the broader digital economy (cf. Scroxton, 2025). These existing structures could be empowered to serve as facilitating bodies for the post-deployment risk mitigation of high-risk AI systems.
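As a rough illustration of what such a centralized incident catalog might record, the following sketch outlines a hypothetical AI incident entry, loosely inspired by CVE-style records; the fields, identifiers, and severity scale are illustrative assumptions rather than an existing or proposed standard.

```python
# Sketch of a hypothetical AI incident record for a centralized registry,
# loosely modeled on CVE-style catalog entries. All fields, identifiers, and
# the severity scale are illustrative assumptions, not an existing standard.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AIIncidentRecord:
    incident_id: str              # e.g. "AI-2025-0042" (hypothetical numbering scheme)
    reported_on: date
    affected_system: str          # the high-risk AI system concerned
    model_provider: str           # actor typically best placed to ship a fix
    system_provider: str
    observed_harm: str            # short description of the materialized risk
    severity: str                 # e.g. "low" / "medium" / "high"
    mitigation_available: bool = False
    notified_deployers: List[str] = field(default_factory=list)

# Example: a deployer reports discriminatory outputs; the registry entry makes
# the affected system, the responsible providers, and the notification status
# visible to the actors along the value chain and to authorities.
record = AIIncidentRecord(
    incident_id="AI-2025-0042",
    reported_on=date(2025, 3, 1),
    affected_system="resume-screening-v2",
    model_provider="ExampleModelCo",
    system_provider="ExampleHRTech",
    observed_harm="systematically lower scores for one applicant subgroup",
    severity="high",
)
print(record.incident_id, record.severity)
```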

Furthermore, institutions established by regulatory authorities to support industry-wide exchange of identified risks and best practices for risk mitigation could enhance effective implementation. A well-functioning post-deployment risk mitigation framework, enabled by information-sharing and update-distribution mechanisms, can, in turn, alleviate the burden of ex-ante risk identification and assessment, which may present significant barriers to innovation and competition. Providing authorities with transparency through continuous risk monitoring (for example, by reporting key metrics on a high-risk AI system’s accuracy or incident rates over time), along with the capacity for timely intervention when risks materialize, is then vital to enable a more permissive, innovation-friendly ex-ante regulatory approach.
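A minimal sketch of such continuous risk monitoring, assuming hypothetical metrics, reporting periods, and alert thresholds agreed between provider and authority, might look as follows:

```python
# Sketch of continuous post-deployment risk monitoring: report key metrics per
# period (here: accuracy and incident rate) and flag periods in which agreed
# thresholds are breached. Metrics and thresholds are hypothetical assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class MonitoringReport:
    period: str               # e.g. "2025-Q1"
    accuracy: float           # measured on a representative post-deployment sample
    incidents_per_10k: float  # reported incidents per 10,000 uses

ACCURACY_FLOOR = 0.90     # hypothetical agreed minimum accuracy
INCIDENT_CEILING = 2.0    # hypothetical agreed maximum incidents per 10,000 uses

def flag_for_intervention(reports: List[MonitoringReport]) -> List[str]:
    """Return the reporting periods in which an agreed threshold was breached."""
    return [
        r.period
        for r in reports
        if r.accuracy < ACCURACY_FLOOR or r.incidents_per_10k > INCIDENT_CEILING
    ]

history = [
    MonitoringReport("2025-Q1", accuracy=0.93, incidents_per_10k=1.1),
    MonitoringReport("2025-Q2", accuracy=0.88, incidents_per_10k=2.4),
]
print(flag_for_intervention(history))  # ['2025-Q2'] under these assumptions
```

Such periodic reports would give authorities the transparency needed for timely intervention without requiring every possible failure mode to be anticipated ex ante.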

To further support ex-ante risk identification in application settings with very high risks (such as those involving potential harm to individuals’ health) and promote legal certainty for AI providers and deployers, collaborative efforts between industry and authorities should aim to develop robust testbeds and regulatory sandboxes. For example, this could include randomized testing procedures for specific high-risk applications, conducted by regulatory authorities, with successful completion providing a presumption of conformity for the respective AI system.

3.3. Continuous Engagement between AI Providers and Regulatory Authorities

As highlighted by Larouche (2025), there is a need for continuous engagement and dialogue between AI providers and regulatory authorities if AI regulation is to follow a “responsible approach”, as envisioned by the AIA. Adopting such an intermediate approach, which is neither fully protective nor fully permissive, requires AI providers to internalize and actively engage with the societal risks associated with AI. In this context, the design of liability rules, particularly fault-based liability provisions for individuals harmed by AI systems, represents an important governance mechanism, as such rules directly shape the economic incentives for AI providers to account for and mitigate potential harms. Against this backdrop, it is noteworthy that the European Commission has recently announced the withdrawal of its proposed AI Liability Directive (Duffourc, 2025), thereby leaving AI-related liability to be governed by general fault-based liability regimes and the revised Product Liability Directive (see also Buiten et al., 2021).

At the same time, regulatory authorities must develop the necessary competencies (especially technical expertise in AI systems and methods for ensuring trustworthy AI) to engage meaningfully in such a dialogue. In particular, they must build technical expertise to assess and anticipate how the design of IT artifacts influences regulatory outcomes, given that these artifacts mediate the effect of regulatory rules on actual practices (Fast et al., 2023). Ongoing engagement between authorities and AI providers is especially important in light of rapidly evolving AI technologies, which will likely necessitate significant revisions to the implementation framework over time.

4. Conclusions

This paper has outlined five key challenges in implementing horizontal AI regulation based on the European AIA. These challenges concern the need for proportionate risk mitigation for AI as a general-purpose technology, managing trade-offs between competing policy objectives, ensuring the adaptability of governance frameworks to keep pace with technological developments, clarifying shared responsibilities across the AI value chain, and achieving coherence with other horizontal and sector-specific regulatory regimes.

Addressing these governance challenges requires not only legal and technical solutions but also institutional innovation. To that end, the paper has identified several key implications for regulatory design and practice. First, there is a need to incorporate sector-specific requirements while maintaining mechanisms for cross-sector alignment. Second, the institutional framework should place particular emphasis on supporting agile and effective post-deployment risk mitigation. Third, continuous engagement between AI providers and regulatory authorities is essential to ensure a successful “responsible approach” to AI governance, as envisioned by the AIA.

Together, these insights aim to support the effective implementation of AI regulation by balancing horizontal consistency with sector-specific requirements in a rapidly evolving technological landscape.

Daniel Schnurr

Citation: Daniel Schnurr, Implementing the European AI Act: Balancing Horizontal Consistency with Sector-Specific Requirements, The Law & Technology & Economics of AI (ed. Adrian Kuenzler, Thibault Schrepel & Volker Stocker), Network Law Review, Fall 2025.

References

About the author

Daniel Schnurr is a professor of information systems and holds the Chair of Machine Learning and Uncertainty Quantification at the University of Regensburg. Previously, he led the Data Policies research group at the University of Passau. He received his Ph.D. in Information Systems from the Karlsruhe Institute of Technology in 2016, where he also completed his B.Sc. and M.Sc. in Information Engineering and Management. He was a visiting student at the Singapore Management University and the John Molson School of Business at Concordia University in Montreal, Canada. Daniel Schnurr has served as a research associate at the Chair of Information & Market Engineering at the Karlsruhe Institute of Technology and as a post-doctoral researcher at the Chair of Internet and Telecommunications Business at the University of Passau.
