Yves here. This VoxEU article on the systemic risk posed by AI comes as even the popular media is getting worried about precisely this type of exposure. However, a new piece at Time bizarrely depicts the real risk as panic, as opposed to key breakdowns that are not, or potentially even cannot be, readily fixed. Contrast Time’s cheery take with the more comprehensive, and as a result more sobering, assessment from the VoxEU experts. From Time, in The World Is Not Prepared for an AI Emergency:
Picture waking up to find the internet flickering, card payments failing, ambulances heading to the wrong address, and emergency broadcasts you are no longer sure you can trust. Whether caused by a model malfunction, criminal use, or an escalating cyber shock, an AI-driven crisis could move across borders quickly.
In many cases, the first signs of an AI emergency would likely look like a generic outage or security failure. Only later, if at all, would it become clear that AI systems had played a material role.
Some governments and companies have begun to build guardrails to manage the risks of such an emergency. The European Union AI Act, the United States National Institute of Standards and Technology risk framework, the G7 Hiroshima process and international technical standards all aim to prevent harm. Cybersecurity agencies and infrastructure operators also have runbooks for hacking attempts, outages, and routine system failures. What is missing is not the technical playbook for patching servers or restoring networks. It is the plan for preventing social panic and a breakdown in trust, diplomacy, and basic communication if AI sits at the center of a fast-moving crisis.
Preventing an AI emergency is only half the job. The missing half of AI governance is preparedness and response. Who decides that an AI incident has become an international emergency? Who speaks to the public when false messages are flooding their feeds? Who keeps channels open between governments if normal lines are compromised?…
We do not need new, complicated institutions to oversee AI—we simply need governments to plan in advance.
You will see, by contrast, that the VoxEU article sees AI risks as multi-fronted, often by virtue of AI’s potential to amplify existing hazards, and that much remains to be done in the way of prevention.
By Stephen Cecchetti, Rosen Family Chair in International Finance, Brandeis International Business School, Brandeis University, and Vice-Chair, Advisory Scientific Committee, European Systemic Risk Board; Robin Lumsdaine, Crown Prince of Bahrain Professor of International Finance, Kogod School of Business, American University, and Professor of Applied Econometrics, Erasmus School of Economics, Erasmus University Rotterdam; Tuomas Peltonen, Deputy Head of the Secretariat, European Systemic Risk Board; and Antonio Sánchez Serrano, Senior Lead Financial Stability Expert, European Systemic Risk Board. Originally published at VoxEU
While artificial intelligence offers substantial benefits to society, including accelerated scientific progress, improved economic growth, better decision making and risk management, and enhanced healthcare, it also generates significant concerns regarding risks to the financial system and society. This column discusses how AI can interact with the main sources of systemic risk. The authors then propose a mix of competition and consumer protection policies, complemented by adjustments to prudential regulation and supervision, to address these vulnerabilities.
In recent months we have observed sizeable corporate investment in developing large-scale models – those where training requires more than 10²³ floating-point operations – such as OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot and Google’s Gemini. While OpenAI does not publish exact numbers, recent reports suggest ChatGPT has roughly 800 million weekly active users. Figure 1 shows the sharp increase in the release of large-scale AI systems since 2020. The fact that people find these tools intuitive to use is surely one reason for their speedy, widespread adoption. In part because these tools slot seamlessly into existing day-to-day platforms, companies are working to integrate them into their processes.
Figure 1 Number of large-scale AI systems released per year
Notes: Data for 2025 run through 24 August. The white box in the 2025 bar extrapolates the data available to that date to a full-year figure.
Source: Our World in Data.
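For readers wondering what the 10²³ cut-off means in practice, the sketch below uses the common back-of-the-envelope rule that training compute is roughly 6 × parameters × training tokens (as in Kaplan et al. 2020). The parameter and token counts are hypothetical illustrations, not figures disclosed for any of the systems named above.

```python
# A minimal sketch of the standard training-compute rule of thumb:
# total FLOPs ≈ 6 × (number of parameters) × (training tokens).
# The parameter and token counts below are hypothetical examples,
# not disclosed figures for any named system.

LARGE_SCALE_THRESHOLD = 1e23  # the 10^23 FLOP cut-off used in the column

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens

examples = {
    "hypothetical 7B-parameter model, 2T tokens": training_flops(7e9, 2e12),
    "hypothetical 70B-parameter model, 5T tokens": training_flops(70e9, 5e12),
}

for name, flops in examples.items():
    status = "large-scale" if flops > LARGE_SCALE_THRESHOLD else "below the threshold"
    print(f"{name}: ~{flops:.1e} FLOPs ({status})")
```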
A growing literature examines the implications for financial stability of AI’s rapid development and widespread adoption (see, among others, Financial Stability Board 2024, Aldasoro et al. 2024, Daníelsson and Uthemann 2024, Videgaray et al. 2024, Daníelsson 2025, and Foucault et al. 2025). In a recent report of the Advisory Scientific Committee of the European Systemic Risk Board (Cecchetti et al. 2025), we discuss how the properties of AI can interact with the various sources of systemic risk. Identifying related market failures and externalities, we then consider the implications for financial regulatory policy.
The Development of AI in Our Societies
Artificial intelligence – encompassing both advanced machine-learning models and the more recently developed large language models – can solve large-scale problems quickly and change how we allocate resources. General uses of AI include knowledge-intensive tasks such as (i) aiding decision making, (ii) simulating large networks, (iii) summarising large bodies of information, (iv) solving complex optimisation problems, and (v) drafting text. There are numerous channels through which AI can create productivity gains, including automation (or deepening existing automation), helping humans complete tasks more quickly and efficiently, and allowing us to complete new tasks (some of which have not yet been imagined). However, current estimates of the overall productivity impact of AI tend to be quite low. In a detailed study of the US economy, Acemoglu (2024) estimates the impact on total factor productivity (TFP) to be in the range of 0.05% to 0.06% per year over the next decade. Since TFP grew on average about 0.9% per year in the US over the past quarter century, this is a very modest improvement.
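To see why that counts as modest, a quick compounding calculation (simple arithmetic on the figures quoted above, not a computation from the column itself) compares the implied cumulative gains over a decade:

```python
# Illustrative arithmetic only: compare Acemoglu's (2024) estimated AI
# contribution to TFP growth with the ~0.9% per year US average cited above,
# each compounded over ten years.

years = 10
ai_boost = 0.0006         # 0.06% per year, upper end of the cited range
baseline_growth = 0.009   # ~0.9% per year, past quarter-century average

cumulative_ai = (1 + ai_boost) ** years - 1
cumulative_baseline = (1 + baseline_growth) ** years - 1

print(f"AI contribution over {years} years: ~{cumulative_ai:.1%}")        # ~0.6%
print(f"Baseline TFP growth over {years} years: ~{cumulative_baseline:.1%}")  # ~9.4%
```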
Estimates suggest a diverse impact across the labour market. For example, Gmyrek et al. (2023) analyse 436 occupations and identify four groups: those least likely to be impacted by AI (mainly composed of manual and unskilled workers), those where AI will augment and complement tasks (occupations such as photographers, primary school teachers or pharmacists), those where it is difficult to predict (amongst others, financial advisors, financial analysts and journalists), and those most likely to be replaced by AI (including accounting clerks, word processing operators and bank tellers). Using detailed data, the authors conclude that 24% of clerical tasks are highly exposed to AI, with an additional 58% having medium exposure. For other occupations, roughly one-quarter of tasks have medium exposure.
AI and Sources of Systemic Risk
Our report emphasises that AI’s ability to process immense quantities of unstructured data and interact naturally with users allows it to both complement and substitute for human tasks. However, using these tools comes with risks. These include difficulty in detecting AI errors, decisions based on biased results because of the nature of training data, overreliance resulting from excessive trust, and challenges in overseeing systems that may be difficult to monitor.
As with all uses of technology, the issue is not AI itself, but how both firms and individuals choose to develop and use it. In the financial sector, uses of AI by investors and intermediaries can generate externalities and spillovers.
With this in mind, we examine how AI might amplify or alter existing systemic risks in finance, as well as how it might create new ones. We consider five categories of systemic financial risks: liquidity mismatches, common exposures, interconnectedness, lack of substitutability, and leverage. As shown in Table 1, AI’s features that can exacerbate these risks include:
- Monitoring challenges where the complexity of AI systems makes effective oversight difficult for both users and authorities.
- Concentration and entry barriers resulting in a small number of AI providers creating single points of failure and broad interconnectedness.
- Model uniformity in which widespread use of similar AI models can lead to correlated exposures and amplified market reactions.
- Overreliance and excessive trust arising when superior initial performance leads people to place too much trust in AI, increasing risk taking and hindering oversight.
- Speed of transactions, reactions, and enhanced automation that can amplify procyclicality and make it harder to stop self-reinforcing adverse dynamics.
- Opacity and concealment in which AI’s complexity can diminish transparency and facilitate intentional concealment of information.
- Malicious uses where AI can enhance the capacity for fraud, cyber-attacks and market manipulation by malicious actors.
- Hallucinations and misinformation where AI can generate false or misleading information, leading to widespread misinformed decisions and subsequent market instability.
- History constraints where AI’s reliance on past data makes it struggle with unforeseen ‘tail events’, potentially leading to excessive risk taking.
- Untested legal status in which the ambiguity around legal responsibility for AI actions (e.g. the right to use data for training and liability for advice provided) can pose systemic risks if providers or financial institutions face AI-related legal setbacks.
- Complexity that makes the system inscrutable, so that it is difficult to understand AI’s decision-making processes, which can then trigger runs when users discover flaws or unexpected behaviour.
Table 1 How current and potential features of AI can amplify or create systemic risk
Notes: Titles of existing features of AI are red if they contribute to four or more sources of systemic risk and orange if they contribute to three. Potential features of AI are coloured orange to show that they are not certain to occur in the future. In the columns, sources of systemic risk are coloured red when they relate to ten or more features of AI and orange if they relate to more than six but fewer than ten features of AI.
Source: Cecchetti et al. (2025).
Capabilities we have not yet seen, such as the creation of a self-aware AI or complete human reliance on AI, could further amplify these risks and create additional challenges arising from a loss of human control and extreme societal dependency. For the time being, these remain hypothetical.
Policy Response
In response to these systemic risks and the associated market failures (fixed costs and network effects, information asymmetries, bounded rationality), we believe it is important to review competition and consumer protection policies as well as macroprudential policies. Regarding the latter, key policy proposals include:
- Regulatory adjustments such as recalibrating capital and liquidity requirements, enhancing circuit breakers, amending regulations addressing insider trading and other types of market abuse, and adjusting central bank liquidity facilities.
- Transparency requirements that include adding labels to financial products to increase transparency about AI use.
- ‘Skin in the game’ and ‘level of sophistication’ requirements so that AI providers and users bear appropriate risk.
- Supervisory enhancements aimed at ensuring adequate IT and staff resources for supervisors, increasing analytical capabilities, strengthening oversight and enforcement, and promoting cross-border cooperation.
In every case, it is important that authorities engage in the analysis required to obtain a clearer picture of the impact and channels of influence of AI, as well as the extent of its use in the financial sector.
In the current geopolitical environment, the stakes are particularly high. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk. The result would be more frequent bouts of financial stress that require costly public sector intervention. Finally, we should emphasise that the global nature of AI makes it important that governments cooperate in developing international standards to avoid actions in one jurisdiction creating fragilities in others.
See original post for references
