William Lehr & Volker Stocker: “AI Regulation and 6G: The Measurement Ecosystem Challenge”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by William Lehr, an economist and research scientist at CSAIL/MIT and an associated researcher in the research group “Digital Economy, Internet Ecosystem, and Internet Policy” at the Weizenbaum Institute, and Volker Stocker, an economist who heads the research group “Digital Economy, Internet Ecosystem, and Internet Policy” at the Weizenbaum Institute. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).

***

1. Introduction

Artificial intelligence (AI) and its economic impacts have been emerging over decades. Even though the public launch of OpenAI’s ChatGPT in November 2022 caused a frenzy about generative AI that arguably prompted a surge in public interest in AI regulation, policymakers on both sides of the Atlantic have been grappling for some time with the challenge of creating a comprehensive policy framework that addresses AI’s broad economy-wide impacts. Importantly, such policy frameworks seek to address adjustment-cost issues and to safeguard against potential abuses. Two recent efforts are illustrative: on the one hand, the recent approval of the EU’s AI Act;1See Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM (2021) 206 final (Apr. 21, 2021), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 on the other hand, the US White House AI Executive Order.2See Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, U.S. White House, October 30, 2023, available at https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Both efforts stake out ambitious goals but offer scant detail on how those goals might precisely be achieved. Even among experts, there is little consensus on the pace, direction, and necessity of AI innovation, or on the ideal “systems architecture” for integrating AI algorithms, data, and applications into our socio-economic fabric. Consequently, there is limited agreement on the design of appropriate policies.

Regardless of how AI or the AI policy discourse evolves, it is obvious that policymakers have a role to play in managing the AI-powered transition towards our digital future. With this in mind, two things are clear. First, legacy sector- and domain-specific regulatory frameworks will be on the front lines of regulating AI. How they adapt will shape the trajectory along which AI technology and markets evolve, as well as the need for, and design of, novel policies directed at AI. Second, ensuring effective enforcement of policies, dispute resolution, etc., relies on the capability to monitor markets, the conduct of multiple ecosystem actors, and performance (e.g., customer experience). In the age of AI, this gives rise to complex and fluid measurement challenges. Addressing these challenges is critical, as we will explain, and calls for fresh assessments of the instruments and the transparency needed to achieve functioning markets; moreover, the aggregation and analysis of measurement data add to the complexity of the overall measurement challenge.

We explore the (symbiotic) relationship between AI and next-generation 6G networks. We explain that a healthy multi-stakeholder measurement ecosystem will be crucial for evidence-based regulation to have a hope of success, and we elucidate the challenges such an ecosystem will need to confront in an AI-powered future, with application to focal areas of concern for telecommunications policy.3Focusing on telecommunications regulatory policy provides a useful example for focusing attention on the importance of sector/issue-specific measurement challenges. Moreover, given its ICT resource dependence, AI’s future will bear a special relationship to the networked computing infrastructure (i.e., connectivity and clouds) that has been and remains the focus of telecommunications and Internet policy. The article challenges us to envision what the AI future is expected to enable, how that depends on the future of next-generation networks as embodied in the aspirational capabilities of 6G, and how that will affect measurement and policymaking.

2. AI+6G Future – Shaping Our Digital Future

AI technologies have been emerging for decades. These technologies, along with the more recent development of generative AI (hereafter: GenAI)4Generative AI systems co-exist with other types of AI (e.g., symbolic AI) and present “…a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. AI technologies attempt to mimic human intelligence in nontraditional computing tasks like image recognition, natural language processing (NLP), and translation.” (Amazon Web Services, 2024). They include so-called foundation models (FMs) that can process huge amounts of (unstructured and unlabeled) data via Machine Learning (ML) technologies. A subset of FMs are Large Language Models (LLMs), which power generative AI applications like ChatGPT. LLMs make use of ML technologies like deep learning (DL) that are based on (deep) neural nets (Amazon Web Services, 2024). Originally, as the name suggests, LLMs ingested and processed natural language (i.e., text) inputs to “independently” produce natural language outputs. In the meantime, leading LLMs have become much more capable and increasingly multimodal: many models can already ingest and produce natural language as well as images, audio, video, etc. did not so much create new ICT roles in augmenting or substituting for human roles in an ever-widening range of tasks as offer new and better ways for ICTs to contribute. GPT-type Large Language Models (LLMs) that power ChatGPT and similar GenAI software applications are just another step along the continuing trajectory of ever-more-capable software ICT applications.5GPT is short for Generative Pre-Trained Transformer, a widely adopted class of LLMs. Prominent examples include OpenAI’s GPT-4, Google’s Gemini, and Meta’s LLaMA. In fact, augmentation and automation processes (substituting machine technology for a human role in a task) had been progressing even before there were digital computers.

Over the better part of a century, our ICT infrastructures have continuously evolved to become ever more pervasive, ubiquitous, and capable. Today, AI and 6G are the “poster-children” visions for our digital future, particularly Smart-X software applications and next-generation networked computing infrastructures; the latter include the public Internet and non-public networks, as well as broadband access networks and clouds. Here, Smart-X refers to any context in which ICTs can augment the task or tasks in virtually any domain of economic activity. Thus, X can refer to Smart Cities, Supply-Chains, FinTech, HealthTech, Grids, etc.6See Lehr (2022a) and Oughton & Lehr (2022) for further elaboration. Together, AI and 6G are expected to provide the ICT resources and tools for the automation or augmentation of virtually any human task, enabling (some hope) the seamless integration of physical and virtual environments. Such integration is expected, for example, to support ever-more-capable Virtual Reality (VR), Augmented Reality (AR), robotic automation, and Digital Twin analytic support systems.7VR/AR promise their most significant benefits in immersive application scenarios (e.g., in education) and in future idealized visions of Digital Twins: digital objects or “worlds” that can realistically model the physical objects (e.g., machines) humans use or the worlds (e.g., cities) they inhabit, enabling arbitrary exploration of what-if scenarios without the time, resource, reversibility, or many of the other costs and constraints that limit opportunities to explore strategic choices in the physical world. And with Internet of Things (IoT) augmentation, such Digital Twins could interact with the physical world to apply generated insights to control tasks in the physical world. Some refer to this vision as the Fourth Industrial Revolution.8The term was coined by Klaus Schwab, founder of the World Economic Forum, in 2016 (Schwab, 2016). See also Lehr (2022a).

Although we are far from being able to realize the capabilities promised by this vision, we are already well advanced along the path toward automation, and we already experience some of the economic implications that further realization of this vision anticipates. For example, there is increasing evidence that AI has the potential to be a General-Purpose Technology capable of driving sustained economic growth and innovation, and of transforming industries, markets, and business processes.9We explore AI as what economists refer to as a General-Purpose Technology and discuss the attendant measurement and policy challenges in Uhl, Lehr, & Stocker (2023). Note that the General-Purpose Technology concept shares its abbreviation with Generative Pre-Trained Transformers but is a distinct concept, coined by economists Timothy Bresnahan and Manuel Trajtenberg (Bresnahan & Trajtenberg, 1995) to characterize a group of disruptive and pervasive technologies that have an economy-wide impact and drive innovation feedback loops. Fundamentally, this would alter human-machine interactions (expanding the spectrum for human + ICT/AI) within and across sectors and the global economy. If AI is indeed a General-Purpose Technology, then even if we do not know where it will eventually take the economy and society, we do know that it will call for significant and widespread adjustments that will incur large one-time adjustment costs. We also know that those adjustments and implications will be incurred asymmetrically across individuals, firms, sectors, and space. On the one hand, this will give rise to new Digital Divides that need to be tackled.10It is important to note that the ICT resources, skills, and infrastructure necessary to develop, adopt, or otherwise integrate novel AI technologies are unevenly distributed. This renders digital divides a dynamic and multidimensional proposition. On the other hand, the full effects will only be realized over long time periods, likely measured in decades, and shaped by ongoing technological evolution.

Of special concern to economists will be Solow-Paradox issues related to challenges in measuring input and output quantity and price effects. Insights into how these evolve over time and across space and applications (contexts) will inform our understanding of economic growth, innovation, competition, and equity policies. Disentangling the AI-related impacts from other effects will be challenging. While this may be of secondary importance for many questions, it will be important for evidence-based policymaking, whether directed at AI or at contexts shaped substantially by AI.11Measuring the impact on productive output of inputs whose effects are diffuse and may occur at multiple layers in the production chain is difficult in both quantity and price terms. It is easier for tangible goods (“widgets”), but much more difficult for service goods and intangibles (e.g., knowledge goods). The latter are often inferred in terms of intellectual property metrics (e.g., value of copyrights, patent activity, etc.). These measurement challenges have bedeviled efforts to understand the productivity impacts of computer technology since their invention, and the increased complexity of assessing the impact of a potential GPT like AI makes the challenge more difficult, although AI may also offer tools to attack the challenge (e.g., by facilitating enhanced data collection and analysis). GenAI, with its potential to address unstructured data, might prove especially helpful. The complex implications for international trade, factor markets (whether for labor, intellectual property, or raw materials like radio frequency (RF) spectrum), and industrial organization (e.g., value chains, make-vs-buy, and investment decisions) already provide evidence of the need for AI to help solve some of the challenges that AI is itself contributing to.
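To see why these Solow-Paradox concerns are fundamentally measurement concerns, it helps to recall the textbook growth-accounting decomposition (a standard formulation, not one specific to our argument). With output $Y$, capital $K$, labor $L$, and capital’s income share $\alpha$, total factor productivity growth is inferred as a residual:

$$\frac{\Delta A}{A} \;\approx\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}$$

Because AI’s contribution shows up only in the residual $A$, any mismeasurement of output prices or of intangible inputs (e.g., data, software, organizational capital) biases the inferred productivity effect directly.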

For AI-powered Smart-X applications to realize the ambitious visions for the Fourth Industrial Revolution, advanced networked computing and complementary infrastructure resources are required, and the 6G standards effort aims to support exactly such visions. This includes enabling ubiquitous (anywhere, everywhere, always), on-demand connectability to digital communications, computing, and storage resources for all things (not just humans).12See Lehr & Stocker (2023) and Lehr et al. (2023). That means increased demand for high-performance access to (edge) cloud computing resources.13The location of computing resources matters (e.g., for performance, cybersecurity, sustainability, or privacy/data residency reasons) but often is not the most important factor (see Lehr & Stocker, 2023; Lehr et al., 2023). Over time, and increasingly in the future, computing resources have been and will continue migrating to the network edges. Indeed, AI is itself a key technology of 6G.

Thus, the futures of AI and 6G are symbiotically bound through reciprocal feedback loops. The potential market demand for AI-powered Smart-X applications drives the expected demand for the performance capabilities of 6G networks and for AI application development efforts. At the same time, without affordable, on-demand access to compute resources via 6G, the capabilities of and market demand for AI apps will be limited.14Ambitious AI-powered applications like AR/VR that may provide real-time monitoring, recommendations, or automation for tasks (e.g., AV/UAV control) that may occur anywhere/anytime may require significant computing resources subject to stringent cost, availability/reliability, security/privacy, and performance constraints. Those include extremely low-latency performance as well as access to resources that are not available on the end-device and are provided via shared infrastructure. For example, networks that are enhanced with 6G capabilities expand the set of AI applications (e.g., ambitious AR applications) that can be deployed, thereby helping to encourage further AI innovation to exploit those capabilities. Meanwhile, AI applications can make it easier for users to configure complex ICT resources to take advantage of Smart-X applications (whether AI or non-AI-based), thereby helping to drive demand for expanded network capacity and 6G capabilities. At the same time, AI applications can facilitate more dynamic sharing of network resources, thereby helping to lower the costs of providing ICT resources to computation/traffic-intensive AI (and other) applications.

AI will be used extensively in future digital environments, both at edge interfaces between end-users (machine or human) and throughout the networked ICT resources. Much of the AI will automate tasks that are not directly observed, or even observable, by end-users, as they may not be consumer-facing but located in upstream parts of the value chain. That is already the case for many of the non-AI ICTs pervading our world. Indeed, the ability to augment human endeavors in ways that do not require direct human attention or intervention, to work tirelessly 24/7, and to undertake repetitive tasks more reliably, faster, and at lower total resource cost than humans are key economic motivations for automation.

What AI helps accomplish is to extend the scope and frontiers of tasks that can be automated and to lower the costs of automation (which in many cases will depend mostly on the application of non-AI ICTs). More narrowly, what emerging GenAI systems enable are new capabilities for automating the collection and analysis of vast amounts of new (e.g., previously untappable) data and, potentially, for using that data to adapt systems and to optimize their performance and capabilities in a data- and demand-driven fashion. From a high-level perspective, decision-making processes are changing and becoming increasingly evidence-based.

3. The Measurement Challenge

Firms need real-time situational awareness to efficiently manage real-time systems. Policymakers critically require the capacity to detect and act: they need to be able to detect when bad (good) things are happening and to be capable (in knowledge, authority, institutional capacity, etc.) of acting to reduce (amplify) the harms (benefits). Put simply, the widespread infrastructure-like impact of AI running on 6G networked computing will pose numerous governance and measurement challenges that will require all-hands-on-deck responses.

In this context, Evidence-Based Decision Making (EBDM) has been heralded as a best-practice process model for how to make scientifically-sound decisions in complex, multi-stakeholder environments.15It has been widely used for medical decision-making and healthcare policy (see Shafaghat et al., 2022 or Brownson et al., 1999), and was adopted into U.S. law in 2019 to establish standards for decision-making by government agencies (see “Foundations for Evidence-Based Policymaking Act of 2018,” Pub. L. No. 115-435, 132 Stat. 5529 (2019), available at https://uscode.house.gov/statutes/pl/115/435.pdf). An illustrative statement of what EBDM entails is provided in the following from a U.S. regulatory agency tasked with implementing national radio-frequency (RF) spectrum reforms:

“The methodology will incorporate best practices, developed through the new collaborative framework, for conducting technical and economic analyses that are data-driven, science-based, and peer-reviewed. Best practices will include, at a minimum, greater transparency around reported findings to the extent practicable (subject to information security restrictions). Using best practices developed through collaboration between Federal and non-Federal stakeholders, and in compliance with existing law and policy, will serve to ensure better acceptance and fewer disputes over findings.”16The quote is from the NTIA’s plan for implementing the US National Spectrum Strategy (available at: https://www.ntia.gov/issues/national-spectrum-strategy/spectrum-strategy-pillars/pillar-two/objective-two), following up on President Biden’s Executive Memorandum “Modernizing United States Spectrum Policy and Establishing a National Spectrum Strategy” (issued November 13, 2023, available at https://www.whitehouse.gov/briefing-room/presidential-actions/2023/11/13/memorandum-on-modernizing-united-states-spectrum-policy-and-establishing-a-national-spectrum-strategy/).

In the increasingly complex and diverse marketplaces and industry value chain constellations that are engaged in building the AI+6G future, meeting the needs of EBDM confronts difficult measurement challenges. Those are well-demonstrated by the extensive research literature and policy debates over how to measure and report broadband performance.17In the following, we draw on and expand our discussion in Frias et al. (2023). See also Bauer et al. (2010, 2020) and Lehr (2022b). Moreover, the earlier discussion should make clear that there will be a complex range of capabilities and needs for real-time network performance measurements to monitor and manage end-to-end services over future 6G networks. The latter will offer a range of dynamic, mix-and-match options for meeting end-users’ demands for good Quality-of-Experience (QoE) in the consumption of services.

In environments characterized by an evolving range of services delivered via a fluid variety of value chain constellations, there will not be any single best source/vantage point, metric, or measurement methodology for the 6G world and 6G services. Multiple valid vantage points for measuring performance, and for aggregating (summarizing) those measurements to address different questions of interest, will be possible and should be considered. Therefore, what we need is a multi-stakeholder measurement ecosystem for the AI/6G ecosystem.18The term ecosystem is often reserved for biological systems of interacting organisms and their physical environment. Although the measurement infrastructures (sensors, algorithms, metrics, networks, etc.) and the ecosystem actors and stakeholders (firms, collections of individuals, governments, etc.) who collect, share, and manage the measurement data used for decision-making are not a biotic system in the usual sense, the complex framework of interacting elements that we have in mind is analogous to such a system. In highly competitive markets for consumer goods, we already have such a measurement ecosystem: government reports, consumer advocates, market researchers, consumers’ own reports, academics, and the vendors of the goods themselves all contribute measurement data.
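To make the point about vantage points concrete, consider the following minimal sketch (in Python; the vantage-point names and latency values are purely illustrative, not real measurement data) of how the same service can look quite different depending on where it is measured and how the measurements are summarized:

```python
from collections import defaultdict
from statistics import median, quantiles

# Hypothetical latency samples (ms) for one service, collected from
# different vantage points in the delivery chain. All values invented.
samples = [
    ("on-device", 24.1), ("on-device", 31.7), ("on-device", 88.0),
    ("on-device", 26.9), ("on-device", 29.3),
    ("in-network", 9.8), ("in-network", 11.2), ("in-network", 10.4),
    ("in-network", 12.9), ("in-network", 10.1),
    ("cloud-edge", 3.1), ("cloud-edge", 2.8), ("cloud-edge", 3.4),
    ("cloud-edge", 2.9), ("cloud-edge", 3.2),
]

by_vantage = defaultdict(list)
for vantage, latency_ms in samples:
    by_vantage[vantage].append(latency_ms)

# The choice of summary statistic already embeds a policy question:
# medians describe typical experience; high percentiles describe the
# tail that matters most for real-time applications.
for vantage, values in sorted(by_vantage.items()):
    p95 = quantiles(values, n=20)[-1]  # approximate 95th percentile
    print(f"{vantage:>10}: median={median(values):5.1f} ms  p95={p95:5.1f} ms")
```

The same end-to-end service yields very different numbers at each vantage point, and neither the median nor the tail is the “right” answer in general; which summary matters depends on the regulatory or commercial question being asked.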

For 6G performance, we should aspire to the same diversity and liveliness of measurement that we see in those consumer-goods markets. In such an ecosystem, there will be plenty of disagreement about what the right measurement, or the right interpretation of measurements, is. There will be strategic behavior and maneuvering by stakeholders, not only to gain a competitive advantage but also to influence how the ecosystem evolves and to shape it toward their desired ends. These problems are not new. That they are more complicated and harder to address is a measure of our success in realizing the economic growth and capabilities of the Information Age. If we were dealing only with plain old telephony services over black phones controlled by a single monopoly provider’s network, the challenge and problems would be easier, but that is neither the world we have nor the world we would want.

In addition to driving demand for more measurements and rendering the measurement challenge more complicated, 6G and AI also contribute significant capabilities that help address the challenge.

First, building and operating 6G networks will be impossible without substantial help from AI and lots of embedded measurement capabilities. Service providers will need the measurements to make their services work as intended and to enable the sort of fine-grained, dynamic customization and shared utilization that is envisaged (and hoped for). That implies that a substantial fraction of the costs of putting the requisite measurement infrastructure and capabilities in place will be incurred as a shared cost, which means that the incremental costs of implementing new measurements will be lower. Today, many measurements that we might want to undertake would require too much new investment and time to put in place to be economically or technically feasible. The architectures and technologies of modern 6G networks will significantly expand measurement capabilities (and softwarization helps here, too).19Softwarization is short-hand for the ongoing process in ICT systems of moving functionality that used to reside in hardware into software. It also means more flexibility and agility in networking, since the vendor lock-in between hardware and software is broken, rendering devices and networks “programmable” (see, for example, Shukla & Stocker, 2019). In addition to the economic implications of innovating in software instead of hardware, softwarization enables virtualization and delocalization. The former makes it feasible to offer customized slices of resources to individual users or uses, while the latter makes it feasible to separate in geo-space where functionality is located, where control decisions are made, and where actions occur. The move to Network Function Virtualization (NFV) and the rise of Software Defined Networking (SDN) were part of the softwarization of mobile networks during the 4G/5G transition. For further discussion, see, for example, Bouras et al. (2017), Oughton & Lehr (2022), Lehr et al. (2021), and Wijethilaka & Liyanage (2021).

Second, AI tools are already helping with the analysis of the Big Data that 6G networks will generate and need to process. Real-time traffic management is just one example. Market metrics and intelligence are also important and are increasingly a source of revenue for service providers and ancillary online businesses. Indeed, much of the antitrust regulatory focus on digital platforms is directed at the potential competitive and privacy abuses that different participants in the Internet ecosystem may make of measurement data characterizing end-user online behavior.
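As a toy illustration of the flavor of such analysis, here is a minimal rolling z-score anomaly detector over a synthetic traffic trace (all values are invented; production traffic-management systems are far more sophisticated and typically ML-based):

```python
import random
from statistics import mean, stdev

random.seed(7)

# Synthetic per-minute traffic volumes (Gbps) with one injected surge.
traffic = [10.0 + random.gauss(0, 0.5) for _ in range(60)]
traffic[45] += 8.0  # the kind of event an operator wants flagged

WINDOW = 15       # minutes of history used as the baseline
THRESHOLD = 4.0   # z-score beyond which we flag an anomaly

for t in range(WINDOW, len(traffic)):
    history = traffic[t - WINDOW:t]
    z = (traffic[t] - mean(history)) / stdev(history)
    if abs(z) > THRESHOLD:
        print(f"minute {t}: {traffic[t]:.1f} Gbps (z={z:.1f}) -> investigate")
```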

4. Policy Implications

Focusing on the implications of AI+6G for legacy telecommunications policy is instructive. Accepted national priorities for telecoms policymakers in the US and abroad include expanding the broadband Internet’s reach and accessibility (i.e., universal service policy); ensuring that the fabric of networks and services can (aspire to) deliver seamless support for ever-more-demanding applications (i.e., interconnection/competition/interoperability policies); and making sure that national resources necessary for the infrastructure to function are managed efficiently (e.g., RF spectrum management). AI has a role to play in each of these areas.

4.1. Universal Service Policy

The AI+6G future requires policymakers to expand their notion of what needs to be provided to ensure that the substance of universal service policy goals is met. Updating universal service goals and obligations to encompass not only broadband but also access to the computing resources needed by more interactive and immersive AI apps will be necessary to address the aforementioned new Digital Divides.

In addition to moving the goal posts for universal service policy, AI is assisting in executing those policies. For example, ML techniques are being applied to remote sensing data (from satellite imaging) to identify underserved communities (of which unserved communities are the most important). This has proved especially important in addressing the connectivity challenges of the vast population of unconnected people in less developed parts of the world.20See, for example, Oughton (2022).
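A schematic sketch of how such a pipeline might look (the features, thresholds, and labels below are entirely hypothetical; real studies such as Oughton (2022) build far richer models on actual imagery and survey data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-settlement features derived from satellite imagery:
# [building density, nighttime luminosity, distance to fiber (km)].
X_train = rng.random((200, 3)) * np.array([1.0, 60.0, 80.0])
# Hypothetical ground truth from field surveys: 1 = underserved.
y_train = ((X_train[:, 1] < 15.0) & (X_train[:, 2] > 40.0)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score unlabeled settlements to prioritize universal-service funding.
X_new = rng.random((5, 3)) * np.array([1.0, 60.0, 80.0])
print(model.predict_proba(X_new)[:, 1])  # P(underserved) per settlement
```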

4.2. Interconnection, Competition, and Interoperability

A key responsibility of telecommunications regulators is to promote competition among digital service providers. Effectively managing competition, while also ensuring the economic and technical viability of good-QoE end-to-end services, requires that multiple competing service providers be able to interconnect and interoperate. Those providers are engaged in more flexible and fluid (B2B) relations that are better characterized as co-opetition than by traditional models of vertical or horizontal competition. Safeguarding and fostering competition in the complex ecosystem we see today, and will certainly see in the future, requires reconciling (legacy) Internet Service Provider (particularly telecoms) and digital platform regulation, as well as a fundamental re-examination of industry structure.21For example, legacy regulatory frameworks were premised largely on more vertically-integrated business models in siloed industry structures (e.g., telecoms for telephony, cable networks for entertainment broadcasting). In contrast, today’s market structures comprise a much more complicated array of industry players at the edge (e.g., edge device competition between Apple’s iOS and Android devices) competing within a more complicated and flatter topology of access ISPs and edge content and application providers (e.g., broadband access providers and providers of social media, streaming media, and other content or online services), along with a host of service providers operating at different layers. The establishing legislation for many national regulators is based on definitions and characterizations of industry structure and relations that are outmoded. See, for example, Lehr et al. (2019a, 2019b) and Lehr & Sicker (2018) for further discussion.

AI and AI-enabled automation will prove necessary in the more dynamic world of 6G, where many more networks and technologies will be called upon to dynamically reconfigure and adapt interconnections in (near) real-time at inter-provider and end-user interfaces. The speed with which adjustments will be made will preclude direct human involvement in many contexts. In other situations, the complexity will make AI necessary to render the interactions between the humans or, in many cases, the machines on either side of the interconnected resources sufficiently comprehensible to be actionable.

4.3. Spectrum Sharing

The final example where AI will find application is in enabling more dynamic spectrum sharing. All wireless services need access to the RF spectrum, and spectrum is a renewable shared resource (like bandwidth): RF capacity that is not used in some dimension of spectrum space (time, location, frequency, direction, etc.) is available for other uses. Although technologies exist to share RF much more intensively than it is shared today, sustaining much more co-existence in the same RF space than is currently allowed or feasible, these technologies are not widely deployed, for regulatory and economic reasons.

As with the interconnection challenges, sharing the spectrum among multiple devices and networks will require significant automation. Controlling and managing that automation will require significant on-device and cloud-based computation. AI is already widely employed in research and development efforts to characterize co-existence capabilities. For example, GenAI applications are being used to analyze RF measurement data.22See Jagannath et al. (2022) and Adesina et al. (2019). Also, see “BAE Systems Wins DARPA Contract to Apply Machine Learning to the RF Spectrum,” November 27, 2018, available at: https://seapowermagazine.org/bae-systems-wins-darpa-contract-to-apply-machine-learning-to-the-rf-spectrum/.
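For intuition about what “analyzing RF measurement data” involves at its simplest, here is a stylized energy-detection check over synthetic IQ samples (the noise floor and margin are illustrative placeholders; deployed spectrum-sensing pipelines layer ML classifiers on top of such primitives):

```python
import numpy as np

rng = np.random.default_rng(42)

def band_occupied(iq, noise_floor_db=-90.0, margin_db=10.0):
    """Flag a band as occupied when its average power exceeds the
    assumed noise floor by a safety margin (thresholds illustrative)."""
    power_db = 10.0 * np.log10(np.mean(np.abs(iq) ** 2))
    return power_db > noise_floor_db + margin_db

n = 4096
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) * 1e-5
tone = 1e-3 * np.exp(2j * np.pi * 0.1 * np.arange(n))  # active transmitter

print(band_occupied(noise))         # expected: False (band idle)
print(band_occupied(noise + tone))  # expected: True  (band in use)
```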

Automating the matching of diverse radios to enable shared co-existence in an RF spectrum band may seem like an obvious challenge for AI to tackle; what is less obvious is why such automation does not apply equally to the matching problems encountered in other areas where AI will likely play a role, and how to identify and regulate those capabilities. For example, at another extreme, consider the matching challenge confronting digital dating applications (e.g., Tinder). Like RF matching, it is typically spatially and temporally localized, the criteria for a successful match are arbitrarily complex, and there is significant value at stake if inappropriate matches are long-lived.23A bad date is usually not as bad as a bad marriage. A permanent assignment of spectrum to a less valuable use (e.g., below-1GHz spectrum to over-the-air TV instead of mobile broadband) is much more costly to society than incidental interference between TV and mobile broadband services. Of course, catastrophic examples could occur in either case. Obviously, the worlds of spectrum management for 6G and Smart-dating applications of the future are very different, but the underlying AI may not differ in ways that matter for regulatory policy, and regulations based on identifying the targeted use of the AI may prove wholly ineffective. To understand whether that is the case, and to recognize when action is needed (potentially avoiding path-dependency lock-in), a more capable measurement ecosystem is required.
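To see how thin the line between these “different” domains can be, note that both reduce, at their core, to an assignment problem. A minimal sketch (using scipy’s Hungarian-algorithm solver; the score matrix is invented) in which nothing tells the solver whether rows are radios and columns spectrum bands, or users and candidate matches:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical compatibility scores (higher is better). Rows could be
# radios and columns spectrum bands -- or dating-app users and candidates.
scores = np.array([
    [0.9, 0.2, 0.4],
    [0.1, 0.8, 0.3],
    [0.5, 0.6, 0.7],
])

# Find the one-to-one assignment that maximizes total compatibility.
rows, cols = linear_sum_assignment(scores, maximize=True)
for r, c in zip(rows, cols):
    print(f"match {r} -> {c} (score {scores[r, c]:.1f})")
print("total score:", scores[rows, cols].sum())
```

The regulatory question is thus less about the algorithm than about the stakes and reversibility of the matches it produces.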

What each of these examples illustrates is how AI is simultaneously creating new challenges and establishing itself as a solution to those challenges – a story that is common (albeit with important sector/domain-specific wrinkles) across virtually all policy domains. The fundamental tools (e.g., better matching techniques) developed for one task (RF sharing) may easily find application in other tasks (dating apps). At the same time, the implications of using different technologies in different usage contexts will be hard to target, because a fundamental characteristic of today’s ICT systems is the multiplicity of technical options for task automation. The issue is what task is automated and how, not which ICT technology is used to accomplish that goal (including whether it is AI or non-AI digital tech).

5. Conclusion

The prior discussion described the joint co-dependencies between the evolution of 6G and AI that must be navigated for future visions of the Fourth Industrial Revolution to be realized. There is no single roadmap, but many roadmaps. The path to the future is uncertain, but it is littered with path dependencies. Key to navigating that path will be a healthy measurement ecosystem. The need for that ecosystem existed before ChatGPT, but, like climate change, it is a problem we would have been better off starting to address effectively sooner. We did not, and we now find ourselves in an over-heated global policy debate. That is seldom the best environment for good policymaking. Navigating it will be challenging, and it will certainly require better and more information if we are to have evidence-based decision-making.

Policymakers have a variety of tools at their disposal to help steer the path of AI+6G evolution: they can subsidize investment in public goods; they can establish regulations and enforce compliance; and they can help inform and mediate multistakeholder decision-making. In a digital world, sovereign authority is more difficult to assert, so global coordination efforts loom larger. At the same time, digital technologies also extend the capabilities of centralized authorities (whether governments, companies, or coalitions of stakeholders) to monitor and exert control in new ways. The threats of a surveillance economy and the chaos of inadequate mechanisms for the coordinated management of externalities are the two rocks between which global governance must steer.

Policymakers should use their tools to help build the healthy measurement ecosystem that will be needed to collectively navigate the AI+6G future. In building that ecosystem, governments need to recognize that they are seldom the most knowledgeable or the most capable actors to serve as authoritative arbiters of measurement disputes, but they can and should participate in all of the relevant stages. Regulators can and should assume multiple roles, as:

  • Designers of measurement infrastructure for use within government and to help establish best practices (potentially through the judicious use of their purchasing power and specification of standards-based approaches);
  • Orchestrators and Curators of measurements and data, aggregating, summarizing, and presenting the measurements from good/trusted sources (and in their role as curators, helping to exclude bad and misleading measurements), consistent with EBDM practices;
  • Facilitators, establishing rules for the sharing of measurement data to promote appropriate levels of sharing and openness that balance the needs of intellectual property, privacy, and security, while subsidizing measurements that have public-goods characteristics. The judicious application of transparency and disclosure mandates (e.g., truth-in-advertising, labeling requirements, etc.) will prove important here.

To conclude, we want to underscore two final points. First, addressing the above challenge is inherently multidisciplinary. Meaningful progress toward establishing and interpreting performance measurement for AI+6G will necessarily require significant input from highly-trained technical experts. However, the growing capabilities of the technologies have increasingly rendered purely technical constraints less important than legal or economic constraints, and adequately addressing the latter requires more capacity building to establish and develop the multidisciplinary collaborative expertise that will be needed. Second, and of special relevance to the ICT sector, policymakers will need to rethink the legacy separation between infrastructure and data management policies.24Infrastructure policies focus on the provision of the asset-heavy networking infrastructure and the rules needed to make it work.

In the U.S., for example, the Federal Communications Commission (FCC) has been the agency principally tasked with regulating that infrastructure. Data management policies focus on information access, which tends to be the domain of intellectual property, disclosure/transparency, and privacy policies. Although the FCC’s concerns overlap with those issues, responsibility for and regulatory authority over them is more distributed. For example, the Federal Trade Commission (FTC) is the primary Federal regulatory body when it comes to privacy regulation, while intellectual property enforcement is mostly left to the courts and the legislature, which have established frameworks for allocating property rights. The future of AI will depend on data-hungry Smart-X applications, which will in turn depend on the 6G infrastructure available to support them. The ownership, location, and management of that infrastructure will need to be consistent with data management policies promoting transparency, economic efficiency, privacy, and security.25For example, data localization laws may constrain how data centers are provisioned and how higher-level applications are designed and implemented. Differential infrastructure policies across nations and markets will impact which Smart-X applications can be supported and at what cost.

Citation: William Lehr & Volker Stocker, AI Regulation and 6G: The Measurement Ecosystem Challenge, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.

Note

Volker Stocker would like to acknowledge funding by the Federal Ministry of Education and Research of Germany (BMBF) under grant no. 16DII131 (Weizenbaum-Institut für die vernetzte Gesellschaft—Das Deutsche Internet-Institut).

References

  • Adesina, D., J. Bassey, and L. Qian (2019), “Practical Radio Frequency Learning for Future Wireless Communication Systems”, MILCOM 2019 – 2019 IEEE Military Communications Conference (MILCOM), Norfolk, VA, USA, 2019, pp. 311-317, doi: 10.1109/MILCOM47813.2019.9020807
  • Amazon Web Services (2024), “What is Generative AI?”, available at: https://aws.amazon.com/what-is/generative-ai/
  • Bauer, S., W. Lehr, and D. Clark (2020), “Gigabit Broadband Measurement Workshop Report”, ACM SIGCOMM Computer Communications Review, 50(1), available at: https://dl.acm.org/doi/pdf/10.1145/3390251.3390259.
  • Bauer, S., D. Clark, and W. Lehr (2010), “Understanding Broadband Speed Measurements”, TPRC2010: Research Conference on Communication, Information and Internet Policy, Washington, DC, available at: http://ssrn.com/abstract=1988332
  • Bouras, C., A. Kollia, and A. Papazois (2017), “SDN & NFV in 5G: Advancements and challenges”, 2017 20th Conference on Innovations in Clouds, Internet and Networks (ICIN), pp. 107-111, doi: 10.1109/ICIN.2017.7899398
  • Bresnahan, T.F. and M. Trajtenberg (1995), “General purpose technologies ‘Engines of growth’?”, Journal of Econometrics, 65(1), pp. 83-108.
  • Brownson, R., J. Gurney, and G. Land (1999), “Evidence-Based Decision Making in Public Health”, Journal of Public Health Management and Practice, 5(5), pp.86–97. http://www.jstor.org/stable/44967847.
  • Frias, Z., W. Lehr, V. Stocker, and L. Mendo (2023), “Measuring NextGen Mobile Broadband: Challenges and Research Agenda for Policymaking”, 32nd European International Telecommunications Society Conference (EuroITS2023), June 19-20, Madrid, available at https://www.econstor.eu/handle/10419/277959
  • Jagannath, A., J. Jagannath, P. Pattanshetty, and V. Kumar (2022), “A comprehensive survey on radio frequency (RF) fingerprinting: Traditional approaches, deep learning, and open challenges”, Computer Networks, 219, available at https://doi.org/10.1016/j.comnet.2022.109455
  • Lehr, W. and V. Stocker (2023), “Next-generation networks: Necessity of edge sharing”, Frontiers in Computer Science, available at https://doi.org/10.3389/fcomp.2023.1099582.
  • Lehr, W., D. Sicker, D. Raychaudhuri, and V. Singh (2023), “Edge Computing: digital infrastructure beyond broadband connectivity”, TPRC51: Annual Research Conference on Communications, Information and Internet Policy, September 22-23 American University, Washington DC., available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4522089
  • Lehr, W. (2022a), “5G and AI Convergence, and the Challenge of Regulating Smart Contracts”, Europe’s Future Connected: Policies and Challenges for 5G and 6G Networks, edited by E. Bohlin and F. Cappelletti, European Liberal Forum (ELF), pp. 72-80, available at: https://liberalforum.eu/publication/europes-future-connected-policies-and-challenges-for-5g-and-6g-networks/
  • Lehr, W., D. Clark, S. Bauer, A. Berger, and P. Richter (2019a), “Whither the Public Internet?” Journal of Information Policy 9, pp. 1-42. doi:10.5325/jinfopoli.9.2019.000.
  • Lehr, W. (2022b). “The Changing Context for Broadband Evaluation”, Transforming Everything?: Evaluating Broadband’s Impacts Across Policy Areas, edited by K. Mossberger, E. Welch, and Y. Wu, Oxford University Press, 2022
  • Lehr, W., D. Clark, S. Bauer, and K. Claffy (2019b), “Regulation when Platforms are Layered”, TPRC47: Research Conference on Communications, Information and Internet Policy, Washington DC, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3427499
  • Lehr, W., F. Queder, and J. Haucap (2021), “5G: A new future for Mobile Network Operators, or not?”, Telecommunications Policy, 45(3), doi:https://doi.org/10.1016/j.telpol.2020.102086
  • Lehr, W. and D. Sicker (2018), “Communications Act 2021”, Journal of High Technology Law, 18(2), pp. 270-330, available at: https://sites.suffolk.edu/jhtl/home/publications/volume-xviii-number-2/
  • Oughton, E. J. and W. Lehr (2022), “Surveying 5G Techno-Economic Research to Inform the Evaluation of 6G Wireless Technologies”, IEEE Access, 10, pp. 25237-25257, doi: 10.1109/ACCESS.2022.3153046.
  • Oughton, E. (2022), “Policy options for digital infrastructure strategies: A simulation model for broadband universal service in Africa”, Policy Research Working Paper #10263, World Bank Group, available at: https://documents1.worldbank.org/curated/en/099337012142213309/pdf/IDU02966dd930cfde041c308669055e3b8e316ad.pdf
  • Schwab, K. (2016). “The Fourth Industrial Revolution: What It Means, How to Respond”, World Economic Forum, available at: https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
  • Shafaghat, T., P. Bastani, M. Nasab, M. Bahrami, M. Montazer, M. Zarchi, and S. Edirippulige (2022), “A framework of evidence-based decision-making in health system management: a best-fit framework synthesis”, Archives of Public Health, 80(1), pp.1-20.
  • Shukla, A. and V. Stocker (2019), “Navigating the Landscape of Programmable Networks: Looking beyond the Regulatory Status Quo”, TPRC47: Research Conference on Communications, Information and Internet Policy, Washington D.C., USA, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3427455
  • Uhl, M., W. Lehr, and V. Stocker (2023), “AI as a General Purpose Technology – Same but Different?”, 16th ITS Asia-Pacific 2023 Conference, Presentation, November 26-28, Bangkok.
  • Wijethilaka, S. and M. Liyanage (2021), “Survey on network slicing for Internet of Things realization in 5G networks”, IEEE Communications Surveys & Tutorials, 23(2), pp. 957-994, doi: 10.1109/COMST.2021.3067807.
