AI Is Eliminating Jobs at Scale. Where Is the Safety Net?
The Case for an AI Transition Fund
She is fifty-two years old, has worked as an accountant for twenty-five years, and is good at her job. She knows the software, understands the regulations, has built relationships with clients over decades. She is also, by any reasonable assessment, about to become redundant - not because her employer is struggling, but because a system that costs a few hundred dollars a year can now do most of what she does, faster and without holidays or health insurance.
What happens to her?
In the traditional political narrative, the answer is roughly: she retrains, finds something new, adapts. The economy has always created new jobs to replace the old ones. The industrial worker became the service worker. The typist became the data entry clerk. Change is disruptive but the system self-corrects.
Today, I find this argument increasingly difficult to believe - and I suspect that in a few years, almost no one will be making it with a straight face.
The standard reassurance rests on a historical pattern that may simply not apply this time. Previous waves of automation replaced human muscle or narrow, repetitive cognitive tasks, and the jobs that emerged in their place were typically ones requiring broader human judgment, creativity, or social intelligence. The implicit assumption was that there would always be a residual domain of human capability that machines could not reach.
That assumption is being dismantled, systematically and rapidly. The frontier of what AI and robotics can do is no longer advancing along one narrow track - it is advancing across virtually every domain simultaneously, from legal reasoning to medical diagnosis, from creative work to complex logistics. The question of whether new jobs will emerge to replace those lost is no longer a question about historical patterns; it is a question about whether human cognitive labour will retain any comparative advantage at all in an economy where the cost of artificial intelligence continues to fall at the pace it has fallen over the past decade.
I do not think anyone can answer that question with certainty. But I am personally sceptical that the answer will be reassuring - and I think the consequences of being wrong, if we assume optimism and it turns out to be unjustified, are catastrophic in a way that justifies treating the risk seriously now rather than waiting for the evidence to accumulate.
Because here is what is already clear, even before we reach the more speculative territory: the transition, whatever its ultimate endpoint, is going to be enormously painful for a very large number of people.
The fifty-two-year-old accountant is not going to retrain as a prompt engineer. The fifty-eight-year-old paralegal, the forty-five-year-old journalist, the sixty-year-old logistics coordinator - these are not people for whom “learn new skills and adapt” is a realistic or dignified response to the elimination of the work they have built their lives around. And they are not a marginal group. They are tens of millions of people across Europe and North America alone, with hundreds of millions more across the Global South who will face the same disruption without even the threadbare safety nets that exist in wealthier countries. And all of this before we consider the robotics wave coming right behind the AI one.
The social and political consequences of mass displacement without adequate support are not difficult to predict, because we have seen them before. The deindustrialisation of the 1980s, which was far slower and more geographically contained than what is coming, produced decades of political instability, the hollowing out of communities, and ultimately the populist backlash that is still reshaping Western democracies today. What is coming is faster, broader, and hits a wider range of workers across the income distribution.
We are, in other words, building toward a social crisis of considerable magnitude - and doing almost nothing to prepare for it.
What is needed is straightforward in principle, even if politically difficult in practice: an AI Transition Fund, designed to provide meaningful support to workers displaced by automation, funded by those who are capturing the gains from that automation.
The immediate priority is unemployment support - not the thin, time-limited benefits that exist in most countries, but serious income replacement for workers whose professions are being structurally eliminated, on a timescale that reflects the reality of mid-career displacement rather than the assumption of a quick return to employment. Alongside that, and in the hope that new work does emerge, real investment in retraining - not the perfunctory courses that pass for workforce development in most countries today, but substantive programmes built around what the labour market of the next decade will actually require.
And beyond that, as a longer-term horizon that policy needs to begin preparing for now rather than later: if the optimists are wrong about job creation, if the displacement turns out to be structural rather than transitional, then the conversation about universal basic income can no longer be treated as a utopian distraction. It becomes the only serious response to an economy in which human labour has been priced out of large parts of the market by technology. UBI is not the first step here - the immediate crisis demands immediate, easier-to-deploy tools. But any honest account of where this trajectory leads has to acknowledge that it sits at the end of the road if the jobs do not come back.
How do you fund such a Fund? The answer should follow the logic of who benefits.
The companies deploying large language models and AI systems at scale are capturing productivity gains of extraordinary magnitude while externalising the social costs of displacement onto workers, communities, and public budgets. A levy on corporate AI usage - structured around the scale of deployment and the labour displacement it generates - is not a punitive measure but a straightforward application of the polluter-pays principle to a different kind of externality. Those who profit from the disruption should contribute to managing its consequences.
Beyond that, the extraordinary concentration of wealth that AI is accelerating - in the hands of the owners of the most powerful models and the infrastructure on which they run - makes the case for higher taxes on both income and wealth at the very top more urgent, not less. The billionaires being minted by this transformation are not a natural phenomenon; they are the product of specific technological and regulatory choices, and the proceeds of that wealth can and should be partially redirected toward those bearing the costs.
This is not a radical argument. It is the same logic that built the welfare state in the wake of industrial capitalism - the recognition that transformative economic change generates winners and losers, and that a functioning society requires mechanisms to distribute the gains more broadly than the market, left to itself, will do.
The technology is not waiting for politics to catch up. The displacement is happening now, at a pace that will only accelerate, and the window for building the institutional response before the social costs become unmanageable is not infinite.
The question is not whether an AI Transition Fund is necessary. It is whether the political will to build one can be assembled before the crisis makes the absence of it impossible to ignore.