Jason Potts: “Sources of Innovation in Generative AI”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Jason Potts, Distinguished Professor of Economics at RMIT University, Melbourne, Australia. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).

***

1. Introduction

Innovation drives competition, so when designing competition policy for an industry we need to understand the sources of innovation in that industry. This note considers the institutions that shape innovation in generative AI and examines how they affect competition policy. I argue that generative AI is a highly complex and inherently distributed technology, both in the capabilities it requires and the resources it must assemble. It is also manifestly a general-purpose technology: it does not sit in any one industry or sector, and its competitive dynamics are significantly affected by the discovery of new uses.

A key insight, then, is that the sources of innovation in generative AI come both from industrial or corporate innovation, which is supported by ‘dynamic capabilities’ (Teece et al 1997), and from ‘user innovation’ (von Hippel 2017), for which I propose a new analogous concept, ‘hyper-capabilities’, that provides a target for competition policy. An instance of hyper-capabilities is token-gated community governance, which has been experimentally developed in web3 and might usefully be applied to enable complex distributed ownership in generative AI.

2. Public policy and generative AI

Generative AI is shaping up as the most disruptive and transformative new general-purpose technology of our age. The technology is the cumulative result of decades of research and development by individuals and teams that have built extremely powerful and often valuable capabilities and products. Some of this work circulates in public (open-source code), but also through publications, open forums, institutions (universities), labour markets (hires) and financial markets (acquisitions). Still, many elements remain closed, protected by intellectual property and corporate secrecy (algorithms, private training data) and by tacit knowledge (especially in training). So its development through innovation is both open and closed, simultaneously public and private. It is institutionally complex.

It is also technologically complex. Across this mix of open and closed innovation, the technology stack itself exists as a complex technical ecosystem of neural networks and foundation models, especially large language models (LLMs), all computing across a layered and distributed infrastructure of devices and internet, operating systems and security, data centres and cloud, and over vast galaxies of digital training data. Generative AI is not a single technology, or even a single industry, and ownership and governance are extremely complex and varied at each layer of the stack.

Moreover, the disruptive transformation of this innovative new technology is occurring at global scale, touching perhaps every major technology company, and with few exceptions is poised to affect every industry and job on earth (Agrawal et al 2023). As such, the rapid arrival of an extremely powerful, novel and institutionally and technologically complex new general-purpose technology is certainly among the most important public policy concerns of our time.

However, so far, the loudest and most prominent AI public policy concern has been safety and regulation (the latest flashpoint being the OpenAI boardroom coup of November 2023). In the public choice theory of regulation (Stigler 1971), technology regulation is generally targeted at the companies that make and sell the technologies, or the licensed professionals who deliver them, and so fits well into a model of industry policy in which regulations are used to support politically favoured groups, often large companies or unions, while providing the appearance of consumer (and voter) benefit. Many recent AI-safety calls for regulation would seem to fit this analysis.

But among those concerned with economic and geopolitical consequences, the prime public policy issue is competition, and specifically the prospect of industrial concentration into powerful AI agglomerations that control entire industries or markets. This is a worrying prospect for governments, who thus seek to limit or co-opt that power. But the coercive power of governments to forcibly acquire (nationalisation) or control (backdoors, prime contracting) is limited by the need to maintain incentives to develop the technology in the first place, which is plainly a product of the market. As such, competition policy is a powerful instrument to shape generative AI technology in the service of both government needs and wants and (often distinctly) the benefit of consumer welfare.

3. Dynamic competition policy and generative AI

Competition policy today is increasingly interested in how firms build and use AI (i.e. smart algorithms). As this attention develops, generative AI (i.e. foundation models, such as LLMs) will likely become the main regulatory focus, due to the exponentially growing capabilities and value of the technology itself, and the ever-growing scale and importance (both economic and political) of the companies creating and deploying it.

Many issues in competition policy applied to AI are of the ‘simple economics of digital technology’ type – e.g. scale economies (in R&D), network effects, platform economics, bundling, and so on – all familiar concerns from earlier antitrust regulation in, for instance, telecommunications and operating systems (Petit 2017). But the Schumpeterian theory of how dynamic competition works in technology markets – as argued, for instance, by Teece et al (1997), Teece (2018, 2021) and Petit and Teece (2021) – also applies here: innovation and competition are endogenous with respect to ‘dynamic capabilities’.

The ‘dynamic capabilities critique’ is that the economic model of antitrust emphasised only the static elements of competition (e.g. concentration, entry barriers, price competition) rather than a real-world (i.e. Schumpeterian, innovation- and capabilities-based) understanding of competition as a process and a system. Generative AI firms (including the technology majors, such as Microsoft and Amazon, and the raft of new VC-funded startups, including foundation spin-outs such as OpenAI) likewise seek to strategically create, through investment and acquisition, the sorts of dynamic capabilities needed for innovation. These innovations have already disrupted, and will likely continue to disrupt, existing market structures and rents, and thereby existing competitive positions and business models, and so drive competition. Competition policy can try to constrain anti-competitive behaviour in the burgeoning generative AI industry, but in doing so it must be careful not to harm the innovation capabilities that drive competition in the long run.

The Teece critique focuses on firms. Consider two additional, external sources of innovation, also accelerated by generative AI, that also drive competition: (1) user innovation, which comes from individuals, households and lead users; and (2) governance innovation, from blockchain technologies. Neither originates in the firm (both are mostly in the commons), so they do not fall within the same ambit of concern with market structure, managerial decision-making and investment consequences on which modern competition policy is focused. But, I claim, these two additional factors – user innovation in generative AI, and governance innovation over generative AI – do matter because, as prime sources of innovation, they affect dynamic capabilities in firms and opportunities for competition in markets.

4. User innovation in generative AI

‘Schumpeterian innovation’ is an evolutionary form of entrepreneurial competition, while ‘von Hippel innovation’ is local problem-solving largely without firms or markets (von Hippel 2017, Potts 2023). The bridge between these two types of innovation in the economy is that von Hippel ‘user innovation’ feeds back into Schumpeterian ‘producer innovation’ by discovering new uses and sources of value for particular technologies. The prime economic resource in user innovation is local and tacit knowledge about problems and about the range and properties of acceptable solutions. User innovation reveals potential demand and possible solutions to firms, who then develop these ‘markets for one’ along various dimensions of merit. User innovation feeds into producer innovation, disrupting markets and fomenting competition.

User innovation does not affect generative AI markets and industries through the building and development of foundation models, as that requires specialist economic capabilities. But it can have powerful effects on deployment and on the discovery of uses. Generative AI tools, such as LLMs, are general-purpose technologies, so the total social value of the innovation is a function of the discovery of new uses, many of which, as the history of technology attests, may bear little relation to what its designers and builders imagined or intended. That discovery most efficiently takes place close to consumer problems, due to sticky and tacit knowledge (Mollick and Euchner 2023).
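To put this point schematically (an illustrative formalisation of the claim above, not a formula drawn from the cited literature): write N(t) for the number of distinct uses discovered by time t, and v_i for the social value of use i, so that the total social value of the technology is V(t) = v_1 + v_2 + … + v_N(t). Producer innovation chiefly raises the v_i of uses already known; user innovation grows N(t) itself. This is why the discovery of uses, and not only the improvement of models, drives the social value of a general-purpose technology.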

Examples of user innovation in the application and deployment of generative AI to solve individual problems include developing a home-schooling syllabus or rewriting emotionally fraught communications with colleagues (see this Reddit forum). Employees with deep knowledge of their own workflow can find innovative applications to automate its more mundane or time-consuming parts (such as report writing), and in many content-production tasks can use generative AI tools to get a workable first draft of a letter, advertising copy, working code, or a strategic plan. These are all instances of finding new and often quite specific uses of a generic tool that, once discovered, can be replicated by others (innovation diffusion) and further developed to build new capabilities and businesses.

The valuable innovation resources here are the use cases themselves and, for instance, the prompt libraries that generate effective solutions. These can be effectively pooled in the commons, as in the Reddit forum mentioned above, or they can form the basis of new specialisations that become tacit craft knowledge that can be contracted. These discoveries, while often cheap to make, are potentially highly disruptive of existing markets and industries, creating new opportunities for competition and market entry.

How should competition policy support this? The goal should be to support user-innovation discovery that feeds back into, for instance, lowering the costs of investment in new markets and services, thereby lowering the cost of market entry. This works by facilitating the transfer of information from user innovators to other consumers and producers. A critical role for competition policy here is to minimise the costs of regulatory compliance and other hurdles, many of which are barriers to competition that generate rents for incumbents. Many of the new types of tasks, roles and services that generative AI can deliver are blocked by, for example, occupational or service licensing, or by regulatory compliance costs. This is particularly the case in the knowledge professions, such as the provision of legal or financial advice, but barriers exist in many domains (such as advertising copywriting, or public sector administration) that are partially protected through institutional inertia. Regulatory reform and cultural adaptation will be required to reap the competition benefits of generative AI.

5. Hyper-capabilities

The target of competition policy for user-generated innovation is what I will call hyper-capabilities.[1] Hyper-capabilities are the user innovation analogue of Teece’s ‘dynamic capabilities’ in firms, but refer to the total network of distributed capabilities for local problem-solving in the form of user innovation. They form an extended penumbra of the innovation ecosystem, and thus exist across many agents who may be distributed across many firms or organisations, networks or even households. Hyper-capabilities are to user innovation what dynamic capabilities are to industrial innovation: they are the objective phenomena with which competition policy should be centrally concerned, yet they are not easily observed or measured statically, but can be inferred from their dynamic effects.

Hyper-capabilities are the total network and pool of individual capabilities among households and lead users who, having access to generative AI toolsets and toolkits, can draw upon a powerful knowledge base (i.e. the training set and foundation model). They are ‘hyper’ because they are extremely large in cultural time and semantic space, but with effects that are also partially withdrawn and hidden (as in ‘hyperobjects’).

Hyper-capabilities are not public goods, nor are they administratively mapped or registered, so they are opaque and hard to see. But the claim here is that they exist (in the same way dynamic capabilities exist, as capabilities for change and adaptation) and have powerful indirect effects on competition, as they grind away at figuring out local substitutions and novel problem solutions. A particular and often critical consideration concerns how firms might seek to support and develop these hyper-capabilities. Ways to do so might include offering developer support that extends into the user innovation community, building open-access libraries, or otherwise supporting the maintenance of hyper-capabilities as properties of free innovation in the commons (von Hippel 2017, Potts et al 2024).

A general theory of the endogeneity of innovation and competition, as applied to antitrust considerations over generative AI, needs to account not only for Teece’s dynamic capabilities (properties of firms), but also for their user innovation analogue, which I propose we call hyper-capabilities (properties of the innovation commons).

6. Innovation in governance over generative AI

An instance of a hyper-capability is community governance of a technology. Indeed, one of the most difficult and fraught ongoing challenges of generative AI is ownership and governance. Who owns a model? What rights does ownership entail? What is the value of those rights? Who owns and has governance rights over the training sets of the model? Who (or what) owns the prompts used to develop and deploy the model to create specific value? The standard way to manage this governance problem is with property rights, usually vested in a corporation. The firm that builds the model owns the model, and the governance structure over the generative AI works through the corporate governance of the firm – i.e. the principal-agent problem between suppliers of finance and the management they appoint, intermediated through a board, and within the broader context of public regulation and societal culture and ethics. Governance matters because it structures incentives over decision-making, which powerfully shape competition.

The challenge with generative AI has been that the critical factors that combine to produce value – the model architecture, the training set, the compute, the (trained) foundation model, the learning and fine-tuning, the implementations and APIs, the specialist skills and capabilities, the governance function and integration with external stakeholders (the world and all of the future!) – do not necessarily sit neatly or comfortably inside a private corporate form with hierarchical governance. But generative AI (a species of software) also sits poorly in a public ownership model, i.e. owned and controlled by the state, and regulated by politics. Unfortunately, it also sits poorly in a not-for-profit foundation or trust, as the OpenAI fiasco in late 2023 illustrated all too clearly. It is not obvious what the best, most aligned or efficient governance structure is for owning, building and deploying generative AI, and we likely haven’t found it yet. That means there is social value in ongoing governance innovation.

A possible source of governance innovation comes from recent blockchain and crypto experiments with smart contract-based Decentralised Autonomous Organisations (DAOs) and new mechanisms for token-governed voting.[2]

Web3 technologies bring innovative consequences for the finance and ownership of new projects, which, because they are distributed, require non-centralised decision-making and governance tools for a protocol to adapt and to continue to develop and upgrade. It is possible that many of the governance tools and models that have been developed in crypto, which are battle-hardened for distributed digital environments with malicious actors, can be effectively ported to provide safe and innovative new governance tools for generative AI.

One use-case scenario is token-gated community voting, open to all stakeholders, on the governance of upgrades and changes to the model, drawing on a large design space of mechanisms (rather than the current model of just a corporate board plus nation-state regulation). Additionally, training and learning, as well as fine-tuning, might be built into an economic system through smart contract calls on data sets, with crypto payments. Prompt libraries could have the same type of token-gated access and platform distribution.
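To make this concrete, below is a minimal Python sketch of token-gated proposal, voting and prompt-library access logic. It is illustrative only: the token ledger, thresholds and names are assumptions of this sketch rather than the interface of any existing DAO framework, and a real deployment would implement the same logic as an on-chain smart contract.

    # Illustrative sketch of token-gated governance over a generative AI model.
    # The ledger is an in-memory dict; a real system would read balances on-chain.
    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        description: str              # e.g. "fine-tune model on licensed corpus"
        votes_for: int = 0
        votes_against: int = 0
        voters: set = field(default_factory=set)

    class TokenGatedGovernance:
        def __init__(self, balances, proposal_threshold=100, access_threshold=10):
            self.balances = balances                  # stakeholder -> token balance
            self.proposal_threshold = proposal_threshold
            self.access_threshold = access_threshold
            self.prompt_library = {}                  # name -> prompt text
            self.proposals = []

        def propose(self, who, description):
            # Gate: only stakeholders above the threshold may open proposals.
            if self.balances.get(who, 0) < self.proposal_threshold:
                raise PermissionError(f"{who} holds too few tokens to propose")
            proposal = Proposal(description)
            self.proposals.append(proposal)
            return proposal

        def vote(self, who, proposal, support):
            # Gate: any token holder may vote once, weighted by balance.
            weight = self.balances.get(who, 0)
            if weight == 0:
                raise PermissionError(f"{who} holds no governance tokens")
            if who in proposal.voters:
                raise ValueError(f"{who} has already voted")
            proposal.voters.add(who)
            if support:
                proposal.votes_for += weight
            else:
                proposal.votes_against += weight

        def passed(self, proposal):
            return proposal.votes_for > proposal.votes_against

        def read_prompt(self, who, name):
            # Gate: prompt-library access requires a minimum token balance.
            if self.balances.get(who, 0) < self.access_threshold:
                raise PermissionError(f"{who} holds too few tokens for access")
            return self.prompt_library[name]

    # Usage: three stakeholder groups vote on a fine-tuning proposal.
    gov = TokenGatedGovernance({"lab": 500, "data_coop": 300, "user_guild": 250})
    upgrade = gov.propose("lab", "fine-tune foundation model on licensed corpus")
    gov.vote("data_coop", upgrade, support=True)
    gov.vote("user_guild", upgrade, support=False)
    print(gov.passed(upgrade))   # True: 300 tokens for vs 250 against

The design choice to gate both decision rights (voting) and resource access (the prompt library) on the same token illustrates how ownership of the model, its training inputs and its use-case knowledge could be held by a distributed community rather than a single corporate board.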

7. Conclusion

To hold competition policy to the long-run consumer welfare standard is to hew to the innovation and competition endogeneity thesis that “competition drives innovation, but innovation also drives competition”. We operationalise the endogeneity thesis by recognising that a better approach to competition policy requires a broader approach to innovation. I have indicated here two further dimensions of innovation through which to consider how the goals and instruments of competition policy apply to generative AI:

(1) user innovation (broadly, connecting the work of David Teece and Eric von Hippel), which introduces the concept of ‘hyper-capabilities’ (the user innovation equivalent of ‘dynamic capabilities’);

(2) innovation in governance (broadly, new mechanisms from web3), as a key instance of a hyper-capability.

Competition policy is already moving to target AI, seeing it through the lens of algorithmic governance and corporate compliance. The standard concerns will be to inhibit some firms from getting too big and engaging in threatening, exploitative or anticompetitive conduct. A further concern is whether the underlying technologies will be deemed protected, or in the national strategic interest, and so whether industrial dynamics will skew toward heavily controlled oligopolies (like telecommunications, or defence contracting). Such an outcome will likely inhibit innovation, with negative downstream consequences for competition and consumer welfare.

But competition policy can move to favour and support innovation by supporting the capabilities that innovation requires. Support for user innovation will help accelerate the discovery and development of new use cases and sources of value, which will lower costs for new market entrants, and thus drive competition. Open standards, and efforts to place as much of this epochal technology as possible in the commons, are desirable goals that support innovation and competition. Advances in innovation governance facilitate better institutional structures of ownership and decision-making, which help align the complex incentives at issue with this technology.

Jason Potts

Citation: Jason Potts, Sources of Innovation in Generative AI, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.

  • [1] As per concepts such as hyperreality (Baudrillard 1982), hypertext (Berners-Lee 1991), and hyperobjects (Morton 2013).
  • [2] On crypto governance, see, e.g. Lumineau et al (2021), Davidson and Potts (2022).

References

  • Agrawal, A., Gans, J., Goldfarb, A. (2023) ‘Artificial intelligence adoption and system‐wide change.’ Journal of Economics & Management Strategy.
  • Baudrillard, J. (1982) Simulacra and Simulation. University of Michigan Press.
  • Bommasani, R., Hudson, D., Adeli, E., Altman, R., Arora, S., von Arx, S., … & Liang, P. (2021) ‘On the opportunities and risks of foundation models.’ arXiv:2108.07258.
  • Davidson, S., Potts, J. (2022) ‘Corporate governance in a crypto-world.’ Available at SSRN 4099906.
  • Kealey, T., Ricketts, M. (2014) ‘Modelling science as a contribution good.’ Research Policy, 43(6): 1014-1024.
  • Lalley, S., Weyl, E. G. (2018) ‘Quadratic voting: How mechanism design can radicalize democracy.’ American Economic Review, Papers and Proceedings, 108(1): 33-37.
  • Lumineau, F., Wang, W., Schilke, O. (2021) ‘Blockchain governance: A new way of organizing collaborations?’ Organization Science, 32(2): 500-521.
  • Mollick, E., Euchner, J. (2023) ‘The transformative potential of generative AI.’ Research-Technology Management, 66(4).
  • Morton, T. (2013) Hyperobjects: Philosophy and Ecology after the End of the World. University of Minnesota Press.
  • Petit, N. (2017) ‘Antitrust and artificial intelligence: a research agenda’ Journal of European Competition Law & Practice, 8(6): 361-362.
  • Petit, N., Teece, D. (2021) ‘Innovating big tech firms and competition policy: favoring dynamic over static competition.’ Industrial and Corporate Change, 30(5): 1168-1198.
  • Potts, J. (2019) Innovation Commons: The origin of economic growth. Oxford University Press: Oxford.
  • Potts, J. (2023) ‘von Hippel innovation.’ SSRN.
  • Potts, J., Harhoff, D., Torrance, A., von Hippel, E. (2024) ‘Profiting from data commons: Theory, evidence, and strategy implications.’ Strategy Science.
  • Stigler, G. (1971) ‘The theory of economic regulation’ Bell Journal of Economics and Management Science, 2(1): 3-21.
  • Teece, D. J. (2018) ‘Business models and dynamic capabilities.’ Long Range Planning, 51(1): 40-49.
  • Teece, D. J., Pisano, G., Shuen, A. (1997) ‘Dynamic capabilities and strategic management.’ Strategic Management Journal, 18(7): 509-533.
  • von Hippel, E. (2006) Democratizing Innovation. MIT Press.
  • von Hippel, E. (2017) Free Innovation. MIT Press.
