The small equity research shop Citrini recently sent a panic through financial markets when it outlined a scenario in which artificial intelligence (AI) ends most white-collar employment by 2028, with dire consequences for the broader economy. But this forecast is surely too pessimistic in some respects. Outside a few sectors, like software, frictions to adoption and sheer inertia will probably slow the pace of change. This has always been the case. For example, although automated telephone exchanges were possible in the 1920s, the last human telephone operator in the United States was not replaced until the 1980s.
Moreover, the technology itself is always only one variable. There also must be processes and structures around it to assure customers of reliable service. This is where incumbents have an advantage over challengers, even if they do not use the latest technology.
And even if incumbents are displaced, the new opportunities created by AI-induced cost reductions and productivity enhancements need not lead only to more AI. They may also require the work of humans – as with the internet and the rise of influencers.
Still, in some ways, the Citrini post is not pessimistic enough. Even setting aside the possibility that we might all become slaves to some AI overlord, the broader economic outcomes depend on how good AI gets and how quickly, the pace of adoption by users, who profits from it, and how society reacts. Given all these variables, some extreme scenarios are indeed conceivable.
Consider, for example, a future in which a few differentiated platforms (say, Anthropic or Meta) reach a level of generalized AI that allows them to outpace the competition and steadily charge high prices to user firms. These dominant platforms would generate enormous profits, augmenting the incomes of their employees (who will be few, because AI will cull their ranks) and their shareholders. At the same time, the many firms relying on their services would be willing to pay, because AI would raise their own productivity, allowing them to shed more white-collar workers.
These unemployed workers would then look for work in adjacent industries where AI has not yet rendered their skills useless. But if those jobs are few, they will join lines for work as gardeners, waiters and shop assistants, further depressing wages for these occupations. Assuming that AI displaces cognitive tasks before skilled physical ones, machinists, plumbers, and masons may still have work until robots become sophisticated enough. But over time, competition for those jobs will also increase as white-collar workers retrain. The pain will spread, and only the AI platforms and their investors will benefit. Or will they?
Before answering that, consider another “competitive” scenario in which no platforms “win” because there is little differentiation between ChatGPT 33.2, Gemini 25 and all the others. Although this scenario may still be devastating for white-collar jobs, prices for AI will be low, and the productivity benefits will flow through the economy, as will the resulting profits. Spared from expending enormous sums on AI, user firms could cut prices and expand production to meet the increased demand, implying more jobs elsewhere. There would be far less pain than in the first scenario, because lower-priced goods and services would allow pre-existing worker savings to go further.
Not only do current trends suggest that this second scenario is more likely than an AI oligopoly, but the government could take steps to ensure that it materializes, for example, through AI price regulations or a refusal to protect AI model builders from those who copy them. Would-be AI oligarchs should not assume that society will defend their enormous profits even as their products cause widespread job losses and hardship.
Of course, AI incumbents will lobby aggressively, corrupting some legislators to block regulation. They will mount public campaigns, using their many channels of influence to argue (not entirely incorrectly) that regulation will be ham-handed, harming efficiency and innovation while benefiting geopolitical rivals. But if the AI-induced pain is indeed widespread, the political impetus for intervention will remain strong.
Even if the state fails to ensure competitive AI prices, it can tax oligopolistic AI providers, their employees, and their shareholders to compensate those affected. The difficulty here lies in targeting. How do you identify those earning supernormal profits from AI? How do you support those harmed, given how hard it has been to assist trade-affected workers in the past? And how do you distinguish between a technologically displaced worker and a worker laid off because of adverse business conditions or incompetence?
To avoid some of these questions, there will probably be a push for generous unemployment support regardless of the cause – a first step toward an eventual universal basic income. But this raises another problem, because even if fiscally strapped governments can raise sufficient revenues, there will still be many jobs that require human workers. Overly generous unemployment benefits will therefore push up the wages employers have to offer to coax workers out of unemployment, further reducing job creation.
Ultimately, there are no easy public responses to the problem of large-scale but not universal unemployment. Societies will have to experiment creatively, improving the safety net somewhat, while encouraging businesses to create jobs and reskill workers where possible. At the same time, if any of the AI platforms racing to achieve a near-monopoly does reach its goal, government policy reactions will almost certainly impair its profits. How, then, will these companies’ massive and still-growing debts be serviced? Will a financial crisis follow?
The best we can hope for is a Goldilocks scenario where the AI rollout is not so fast that workers cannot learn how to augment their jobs with AI, rather than being displaced; and where the AI industry is not too oligopolistic, so that the benefits accrue to society more broadly. Imaginative commentaries like the Citrini post force us to think about what might happen if the AI story turns out differently. Now is the time to map out the possible scenarios and start preparing for them.
Akhil Rajan also contributed to this commentary.
Raghuram G. Rajan is a professor of finance at the University of Chicago Booth School of Business and a former governor of the Reserve Bank of India and chief economist of the International Monetary Fund.
Copyright: Project Syndicate