Flavio Calvino and Chiara Criscuolo: “Generative AI And Productivity: Challenges, Opportunities And The Role of Policy”

The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.

This contribution is signed by Flavio Calvino, an Economist at the Directorate for Science, Technology and Innovation of the OECD, and Chiara Criscuolo, head of the Productivity, Innovation and Entrepreneurship Division in the Directorate for Science, Technology and Innovation at the OECD. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).


1. Introduction

The rapid developments of generative artificial intelligence (henceforth, “generative AI”) have the potential to significantly reshape economies and societies, with pervasive impacts on more and more aspects of everyday life. For instance, text generated by Large Language Models (LLMs) such as OpenAI’s ChatGPT or, more recently, by multimodal models such as Google’s Gemini appears to bring unprecedented potential for a wide range of tasks and activities.

By generating new content based on training data and in response to inputs (e.g., prompts), generative AI can revolutionise industries, reshape ways of working and allow the emergence of new business models, with significant potential for growth and well-being. However, this comes with potentially significant risks for human rights and democratic values.

Generative AI is indeed at the centre of economic and policy debates around the world. Issues related to its regulation are at the core of policy discussions, including for instance the debates around the EU AI Act, the US Executive Order on AI, the UK AI Safety Summit, and efforts by several international organisations or fora, such as the OECD 2019 AI principles, the UN’s new high-level advisory body on AI, or the G7 Hiroshima AI process.

Beyond policy discussions, recent research has increasingly focused on the impacts of generative AI on economic outcomes, notably on the exposure of occupations, sectors or geographical areas to generative AI, and more broadly on the implications for labour markets, but also on the links between the use of generative AI and workers’ and firms’ productivity.

In this short paper, we briefly contextualise the most recent evidence on the role of generative AI for productivity, building upon the emerging literature on the economic implications of AI. We then discuss key issues related to the potential of generative AI for productivity growth as well as its risks for workers, businesses and society at large, highlighting how policymakers can play a critical role in fostering an inclusive and sustainable digital transformation in the age of generative AI. We finally point to some areas for possible future policy-relevant research on the economic implications of generative AI, especially focusing on analysis at the microeconomic level.

2. Context and recent evidence

In a recent update (additional discussion of the updates to the OECD definition of an AI system is available at https://oecd.ai/en/wonk/ai-system-definition-update), the OECD defines an AI system as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

An emerging body of literature has increasingly focused on the role of AI systems for productivity (see for instance Brynjolfsson et al., 2021; Calvino and Fontanelli, 2023a, 2023b; McElheran et al., 2023; Zolas et al., 2020; and a more detailed review in Calvino et al., 2022). A complementary stream of research has focused on the role of AI in the labour market (see for instance Aghion et al., 2019; Acemoglu et al., 2022a; Babina et al., 2023) and in the workplace (see Lane et al., 2023, for further discussion). As data about the use of AI have recently emerged, a few studies have leveraged information from ICT surveys, online job postings, and intellectual property records (IPRs, notably patents), often linked to firm-level data, to analyse the characteristics of firms using AI and assess the links between AI use and productivity (see the references above, as well as Alderucci et al., 2021; Damioli et al., 2021; Acemoglu et al., 2022b; Czarnitzki et al., 2023; a more detailed review is also available in Calvino and Fontanelli, 2023a).

Key findings from this stream of research include the fact that AI users still tend to represent a small proportion of the overall business population, and AI developers an even smaller one. AI users appear to be larger and, to some extent, younger firms; they also tend to be ex-ante more productive than other firms, so that observed productivity premia might not be entirely credited to the use of AI (Calvino and Fontanelli, 2023a). In addition, complementary assets, such as ICT skills, digital infrastructure and digital capabilities, appear to play a key role in this context, pointing at least to some extent to selection of more digital and productive firms into AI use. This suggests that complementary investments in intangibles, which may take time to fully materialise, are critical to leverage the potential of a general-purpose technology such as AI, consistent with the J-curve hypothesis proposed by Brynjolfsson et al. (2021).

However, a few studies focusing more closely on AI developers (see Calvino and Fontanelli, 2023b, on firms that develop AI in house; see also Alderucci et al., 2021, and Damioli et al., 2021, on AI-patenting firms) find more significant links between AI and productivity, possibly also beyond the selection dynamics described above, suggesting that developers may in fact already be in a position to realise such returns, as they leverage more sophisticated human and technological capital.

Since the boom in generative AI and the release of more sophisticated LLMs, the literature has started to focus more closely on the economic implications of generative AI (key references, based on information available at the time of writing, are discussed in the following paragraphs). Generative AI indeed appears to have great potential to be a general-purpose technology, given its substantial improvements in accuracy over time and its ability to generate language (and beyond) that is relevant to several domains.

In this context, recent research has mapped the extent to which occupations (and tasks within occupations) are exposed to generative AI, using for instance occupational dictionaries (such as O*NET for the United States). In particular, Felten et al. (2023) highlight that telemarketing as well as several education-related occupations are the most exposed to advances in language modelling. Relatedly, Eloundou et al. (2023) highlight that around 80% of the workforce in the United States could have at least 10% of tasks affected by LLMs, with higher-income jobs facing greater exposure. Focusing on both advanced economies and emerging markets, Pizzinelli et al. (2023) also suggest that women and highly educated workers face greater occupational exposure to AI.
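To fix ideas, the logic of such task-based exposure measures can be illustrated with a toy calculation: rate each occupation’s tasks as exposed or not, then compute the employment-weighted share of workers whose occupation crosses a given exposure threshold. The occupations, task counts and employment weights below are purely hypothetical and are not taken from the cited papers.

```python
# Toy illustration of a task-based exposure measure (hypothetical data).
# For each occupation: (tasks rated as exposed, total tasks, employment in millions).
occupations = {
    "telemarketer":       (9, 10, 0.5),
    "teacher":            (6, 12, 3.0),
    "software_developer": (5, 10, 1.5),
    "landscaper":         (0,  8, 1.0),
}

def exposed_share(task_threshold: float = 0.10) -> float:
    """Employment-weighted share of workers in occupations where the
    fraction of exposed tasks is at least `task_threshold`."""
    total = sum(emp for _, _, emp in occupations.values())
    exposed = sum(
        emp
        for n_exposed, n_tasks, emp in occupations.values()
        if n_exposed / n_tasks >= task_threshold
    )
    return exposed / total

# Share of (hypothetical) workers with at least 10% of tasks exposed.
print(f"{exposed_share(0.10):.0%}")
```

Raising the threshold shrinks the exposed share, which is one reason headline figures such as “80% of workers with at least 10% of tasks affected” depend heavily on the chosen cut-off.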

A few recent analyses have focused, in experimental settings, on the impact of generative AI on workers’ productivity. Focusing on customer support agents, Brynjolfsson et al. (2023) show that AI-based conversational assistants significantly increase the number of issues resolved per hour, especially for novice and low-skilled workers, with minimal impact on experienced or high-skilled ones, suggesting – at least in this case – that generative AI could decrease returns (in terms of productivity) to experience and tenure. Based on mid-level professional writing tasks, Noy and Zhang (2023) show that using ChatGPT substantially increases (individual-level) productivity, reducing task time and increasing quality. They also highlight higher benefits for lower-ability workers, as well as substitution for workers’ efforts and evidence of tasks moving away from rough-drafting. Focusing on software developers, Peng et al. (2023) suggest that access to GitHub Copilot, an AI pair programmer, allows developers – especially the less experienced ones – to complete programming tasks significantly faster (see also Kreitmeir et al., 2023, for evidence on the effects of Italy’s ChatGPT ban on developers’ productivity). Based on an experiment on management consultants, Dell’Acqua et al. (2023) highlight instead how AI capabilities create a “jagged technological frontier”, with AI improving productivity for tasks within the capabilities of AI systems, especially at the bottom of the skills distribution, while decreasing the quality of solutions when tasks are outside the current capability of AI. Taking a different perspective, Eisfeldt et al. (2023) focus on the role of generative AI for firm value, highlighting that US publicly traded companies with higher exposure to generative AI earned higher excess returns, despite substantial heterogeneity across and within sectors.

The early evidence referenced above confirms the pervasiveness of generative AI, with a high share of workers – notably those completing cognitive tasks – significantly exposed to this technology. Focusing on selected groups of workers, it also highlights the positive implications of generative AI for individual productivity, especially for lower-skilled and less experienced workers, when the tasks carried out are within the capabilities of AI systems. This highlights the opportunities that generative AI brings for productivity growth. The next section relies on the evidence reported here and on evidence from the ongoing digital transformation to discuss potential policy challenges that may shape how generative AI affects the productivity of the business sector.

3. Challenges, opportunities and the role of policy 

Building upon the evidence presented above and stressing differences between predictive and generative AI, this section provides a more speculative discussion about potential challenges that may prevent fully realising the productivity returns to generative AI and discusses the role of policymakers in this context, distinguishing between AI developers and AI users.

Differently from predictive AI, using generative AI may appear – at least at first sight – to rely less on complementary assets. In fact, a device connected to the internet already allows users – with limited additional assets – to generate content leveraging large language models (see also McAfee et al., 2023).

Distinguishing between developers and users of generative AI nevertheless appears key, with different policy challenges emerging for firms that train or maintain foundation models (developers) and for those that are end users of such models (users). Although this distinction may become more blurred depending, e.g., on the emergence of customised language models or the broader availability of training capabilities, it appears relevant at the time of writing.

Focusing on developers, a first key policy challenge relates to data, including privacy and Intellectual Property Rights (IPRs) issues related both to model training and to content generation. A second challenge relates to competition, in particular the tendency towards increased concentration in development due to high fixed costs to train and operate models, economies of scope, first-mover advantages and barriers to entry, e.g., related to data – not only its size but, increasingly, its uniqueness (see further discussion in Vipra and Korinek, 2023, and in Schrepel and Pentland, 2023). Schrepel and Pentland (2023) rightly note that while, thanks to technological developments, access to ever larger datasets is becoming less important, access to unique data remains critical for two main reasons: first, to provide answers to users’ specific questions, and second, because unique proprietary data can provide a strong comparative advantage in training foundation models to players that own the data and can block access to it. A third policy challenge relates to environmental sustainability, considering the high computational power needed to train and operate large language models, and the related emissions (see also OECD, 2022).

Focusing instead on users, IPRs also remain a key challenge when using content created with generative AI, together with issues related to data confidentiality – e.g., in industries that need to comply with specific regulations such as healthcare. Limited reproducibility, opacity and complexity of models, “hallucinations” and uncertainties about accuracy may also be barriers preventing the diffusion of this technology among users.

From a broader policy perspective, human capital is a key factor in leveraging the productivity potential of generative AI. While generative AI may be widely accessible and appears to already increase workers’ productivity (as suggested by the literature surveyed above), critical thinking will remain central to understanding when and how to use generative AI, as well as to developing and leveraging it for longer-term gains. (Early evidence from online vacancies in the United States suggests that leading AI employers exhibited a higher demand for AI professionals combining technical expertise with leadership, innovation, and problem-solving skills, underscoring the importance of a broad skill mix in this field; see Borgonovi et al., 2023.) Such skills – both for workers and managers – will be crucial to understand when a certain task is within the capabilities of AI systems, to assess the outputs provided by generative AI, and to determine how those outputs can augment and complement workflows or production processes more generally. Such skills will also be key not only to realising short-term gains from generative AI, but also to leveraging the technology while limiting deskilling and strengthening absorptive capacity, elements critical to longer-term gains.

Furthermore, the extent to which generative AI may replace tasks that were not previously automatable brings unprecedented challenges for labour markets and inequalities. In this context, an ongoing policy debate is focusing on the extent to which more or less automation is desirable, the differences between automation in some tasks and augmentation in others, the role of policy to steer AI development and the promises and perils of human-like AI (see Agrawal et al., 2023; Acemoglu and Johnson, 2023; Brynjolfsson, 2022), including singularity and existential risks (e.g., Nordhaus, 2021; Jones, 2023).

In this context, policymakers can play a key role in fostering an inclusive and sustainable digital transformation in the age of generative AI.

Regulation will remain critical in promoting a use of AI that is innovative and trustworthy, that remains ethical, and that respects human rights and democratic values. In this context, since the 2019 Recommendation on AI, the OECD has led efforts on intergovernmental standards on AI, notably with the OECD AI principles (see OECD, 2023a), which focus on responsible stewardship of trustworthy AI, human-centred values and fairness, transparency, accountability, and the robustness, security and safety of AI systems.

Based on such principles, key recommendations for policymakers include investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and promoting international co-operation for trustworthy AI (see also Lorenz et al., 2023 for further discussion and policy considerations on generative AI).

Boosting human capital, including through strengthening STEM as well as education systems more broadly, training and ensuring fair and smooth transitions for displaced workers, appear critical elements to building human capacity and preparing for the labour market transformations brought by generative AI.

Focusing on developing a rich set of skills that includes technical, socio-emotional and ethical aspects, in both the public and private sectors, could help provide governments and firms with the tools needed to balance ethical and fairness concerns with the economic growth benefits deriving from the rapid pace of change triggered by generative AI (for a discussion of the role of socio-emotional, ethical and STEM skills for AI, see OECD, 2023b).

Balancing trade-offs between regulation and innovation, tackling competition and environmental challenges in development, and addressing broader risks of bias, dis- or misinformation, privacy violations and IPR infringement will likely remain central areas of policy discussion in the international debate around generative AI.

4. Concluding remarks

As generative AI increasingly reshapes economies and societies, early evidence focusing on selected occupations has pointed to its positive implications for workers’ productivity, especially for lower-skilled workers, when the tasks carried out are within the capabilities of AI systems.

While generative AI brings substantial opportunities for productivity growth, a number of challenges still remain. Some of the critical ones relate to the risks of generative AI (such as disinformation, bias, or those related to IPRs and surveillance), to human capital, labour markets and inequalities, as well as to the environmental and competition implications of AI development.

While it is still hard to predict how generative AI will affect aggregate productivity (see also Brynjolfsson and Unger, 2023), policymakers can play a critical role in fostering an inclusive and sustainable digital transformation in the age of generative AI. Key policy areas include investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and promoting international co-operation for trustworthy AI.

To better understand the macroeconomic implications of generative AI, it appears relevant to focus on its micro-drivers, characterising the patterns of use of generative AI and their implications.

Better understanding the impacts of generative AI on productivity growth indeed requires exploring the characteristics of firms that develop or use it, characterising the markets in which they operate, and assessing the links between generative AI and complementary assets and, where possible, its direct relation with firm productivity. Further attention could also be devoted to the actual role of generative AI for specific occupations, beyond occupational exposure to generative AI.

Future research at the microeconomic level could further assess the extent to which the use of generative AI is associated with longer-term gains, considering the role of absorptive capacity and how specific skills, such as those related to critical thinking, may help in understanding when and how to use generative AI. Further research may also focus on challenges related to skills obsolescence, and on the role of managerial skills in redefining business models to reap the benefits of generative AI.


Citation: Flavio Calvino and Chiara Criscuolo, Generative AI And Productivity: Challenges, Opportunities And The Role of Policy, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.


The views expressed here are those of the authors and cannot be attributed to the OECD or its member countries. Contacts: Flavio.Calvino@oecd.org, Chiara.Criscuolo@oecd.org.


  • Acemoglu, D., D. Autor, J. Hazell and P. Restrepo (2022a), “Artificial intelligence and jobs: Evidence from online vacancies”, Journal of Labor Economics 40(S1), 293–340.
  • Acemoglu, D., G. W. Anderson, D. N. Beede, C. Buffington, E. E. Childress, E. Dinlersoz, L. S. Foster, N. Goldschlag, J. Haltiwanger, Z. Kroff, P. Restrepo, N. Zolas (2022b). “Automation and the workforce: A firm-level view from the 2019 annual business survey”, in Basu, S., L. Eldridge, J. Haltiwanger and E. Strassner (Eds.), Technology, Productivity, and Economic Growth, University of Chicago Press, Chicago, IL.
  • Acemoglu D. and S. Johnson (2023), Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, John Murray Press, London, UK.
  • Aghion, P., C. Antonin and S. Bunel (2019), “Artificial intelligence, growth and employment: The role of policy”, Economie et Statistique / Economics and Statistics (510-511-5), 149–164.
  • Agrawal, A., J. S. Gans and A. Goldfarb (2023), “Do we want less automation?”, Science, 381(6654), 155-158. https://www.science.org/doi/abs/10.1126/science.adh9429.
  • Alderucci, D., L. Branstetter, E. Hovy, A. Runge and N. Zolas (2020), “Quantifying the Impact of AI on Productivity and Labor Demand: Evidence from U.S. Census Microdata”, paper presented at the 2020 ASSA meeting, mimeo.
  • Babina, T., A. Fedyk, A. X. He and J. Hodson (2023). “Artificial intelligence, firm growth, and product innovation”, Journal of Financial Economics (forthcoming), https://doi.org/10.1016/j.jfineco.2023.103745.
  • Borgonovi, F., et al. (2023), “Emerging trends in AI skill demand across 14 OECD countries”, OECD Artificial Intelligence Papers, No. 2, OECD Publishing, Paris, https://doi.org/10.1787/7c691b9a-en.
  • Brynjolfsson, E., D. Rock and C. Syverson (2021), “The Productivity J-Curve: How Intangibles Complement General Purpose Technologies”, American Economic Journal: Macroeconomics, Vol. 13/1, pp. 333-372, https://doi.org/10.1257/mac.20180386.
  • Brynjolfsson, E. (2022), “The Turing trap: The promise and peril of human-like artificial intelligence”, Daedalus, 151(2), 272-287.
  • Brynjolfsson, E., D. Li and L. Raymond (2023), “Generative AI at work”, Working Paper 31161, National Bureau of Economic Research, https://www.nber.org/papers/w31161.
  • Brynjolfsson, E. and G. Unger (2023), “The macroeconomics of Artificial Intelligence”, Finance and Development (F&D), December 2023, https://www.imf.org/en/Publications/fandd/issues/2023/12/Macroeconomics-of-artificial-intelligence-Brynjolfsson-Unger.
  • Calvino, F. and C. Criscuolo (2022), “Gone digital: Technology diffusion in the digital era”, in Z. Qureshi and C. Woo (Eds.), Shifting Paradigms: Growth, Finance, Jobs, and Inequality in the Digital Economy, Brookings Institution Press, Washington, DC.
  • Calvino, F., et al. (2022), “Identifying and characterising AI adopters: A novel approach based on big data”, OECD Science, Technology and Industry Working Papers, No. 2022/06, OECD Publishing, Paris, https://doi.org/10.1787/154981d7-en.
  • Calvino, F. and L. Fontanelli (2023a), “A portrait of AI adopters across countries: Firm characteristics, assets’ complementarities and productivity”, OECD Science, Technology and Industry Working Papers, No. 2023/02, OECD Publishing, Paris, https://doi.org/10.1787/0fb79bb9-en.
  • Calvino, F. and L. Fontanelli (2023b), “Artificial intelligence, complementary assets and productivity: evidence from French firms”, LEM Working Paper No. 2023/35, Laboratory of Economics and Management (LEM), Sant’Anna School of Advanced Studies, Pisa, Italy, https://www.lem.sssup.it/WPLem/files/2023-35.pdf.
  • Czarnitzki, D., G. P. Fernández and C. Rammer (2023), “Artificial intelligence and firm-level productivity”, Journal of Economic Behavior & Organization, 211, 188-205.
  • Damioli, G., V. V. Roy, and D. Vertesy (2021), “The impact of artificial intelligence on labor productivity”, Eurasian Business Review, 11(1), 1–25.
  • Dell’Acqua, F., E. McFowland, E. R. Mollick, H. Lifshitz-Assaf, K. Kellogg, S. Rajendran, L. Krayer, F. Candelon and K. R. Lakhani (2023), “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality”, Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013, available at SSRN: https://dx.doi.org/10.2139/ssrn.4573321.
  • Eisfeldt, A. L., G. Schubert and M. B. Zhang (2023), “Generative AI and firm values”, NBER Working Paper No. 31222, National Bureau of Economic Research, Cambridge, MA, https://dx.doi.org/10.3386/w31222.
  • Eloundou, T., S. Manning, P. Mishkin and D. Rock (2023), “GPTs are GPTs: An early look at the labor market impact potential of large language models”, arXiv.org (2303.10130), https://arxiv.org/abs/2303.10130.
  • Felten, E. W., M. Raj and R. Seamans (2023), “Occupational heterogeneity in exposure to generative AI”, Available at SSRN, https://dx.doi.org/10.2139/ssrn.4414065.
  • Jones, C. I. (2023), “The AI dilemma: Growth versus existential risk”, NBER Working Paper No. 31837, National Bureau of Economic Research, Cambridge, MA, https://dx.doi.org/10.3386/w31837.
  • Kreitmeir, D. and P. Raschky (2023), “The Unintended Consequences of Censoring Digital Technology – Evidence from Italy’s ChatGPT Ban”, available at SSRN, https://dx.doi.org/10.2139/ssrn.4422548.
  • Lane, M., M. Williams and S. Broecke (2023), “The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers”, OECD Social, Employment and Migration Working Papers, No. 288, OECD Publishing, Paris, https://doi.org/10.1787/ea0a0fe1-en.
  • Lorenz, P., K. Perset and J. Berryhill (2023), “Initial policy considerations for generative artificial intelligence”, OECD Artificial Intelligence Papers, No. 1, OECD Publishing, Paris, https://doi.org/10.1787/fae2d1e6-en.
  • McAfee, A., D. Rock and E. Brynjolfsson (2023), “How to capitalize on generative AI”, Harvard Business Review, November-December 2023, available at https://hbr.org/2023/11/how-to-capitalize-on-generative-ai.
  • McElheran, K. et al. (2023). “AI Adoption in America: Who, What, and Where”, NBER Working Paper No. 31788, National Bureau of Economic Research, Cambridge, MA, https://doi.org/10.3386/w31788.
  • Nordhaus, W. D. (2021), “Are we approaching an economic singularity? Information technology and the future of economic growth”, American Economic Journal: Macroeconomics, 13(1), 299-332.
  • Noy, S. and W. Zhang (2023), “Experimental evidence on the productivity effects of generative artificial intelligence”, Science, 381(6654), 187–192.
  • OECD (2022), “Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint”, OECD Digital Economy Papers, No. 341, OECD Publishing, Paris, https://doi.org/10.1787/7babf571-en.
  • OECD (2023a), “AI language models: Technological, socio-economic and policy considerations”, OECD Digital Economy Papers, No. 352, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.
  • OECD (2023b), OECD Skills Outlook 2023: Skills for a Resilient Green and Digital Transition, OECD Publishing, Paris, https://doi.org/10.1787/27452f29-en.
  • Peng, S., E. Kalliamvakou, P. Cihon and M. Demirer (2023), “The impact of ai on developer productivity: Evidence from GitHub copilot”, arXiv (2302.06590), https://arxiv.org/pdf/2302.06590.pdf.
  • Pizzinelli, C., A. Panton, M. Mendes Tavares, M. Cazzaniga and L. Li (2023), “Labor Market Exposure to AI: Cross-country Differences and Distributional Implications”, IMF Working Paper 2023(216), A001, https://doi.org/10.5089/9798400254802.001.A001.
  • Schrepel, T. and A. Pentland (2023), “Competition between AI Foundation Models: Dynamics and Policy Recommendations”, MIT Connection Science Working Paper, 1-2003, available at SSRN: https://ssrn.com/abstract=4493900.
  • Vipra, J. and A. Korinek (2023), “Market concentration implications of foundation models”, arXiv preprint arXiv:2311.01550, https://doi.org/10.48550/arXiv.2311.01550.
  • Zolas, N. et al. (2020), “Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey”, NBER Working Paper No. 28290, National Bureau of Economic Research, Cambridge, MA, https://doi.org/10.3386/w28290.
