Time to rethink AI exposure, deployment, and strategy
This week, Yann LeCun, Meta’s recently departed Chief AI Scientist and one of the fathers of modern AI, set out a technically grounded view of the evolving AI risk and opportunity landscape at an evidence session of the UK Parliament’s All-Party Parliamentary Group on Artificial Intelligence (APPG AI). This post is built around LeCun’s testimony to the group, with quotations drawn directly from his remarks.
His remarks are relevant for investment managers because they cut across three domains that capital markets often consider separately, but should not: AI capability, AI control, and AI economics.
The dominant AI risks are no longer centered on who trains the largest model or secures the most advanced accelerators. They are increasingly about who controls the interfaces to AI systems, where information flows reside, and whether the current wave of LLM-centric capital expenditure will generate acceptable returns.
Sovereign AI risk
“This is the biggest risk I see in the future of AI: capture of information by a small number of companies through proprietary systems.”
For states, this is a national security concern. For investment managers and corporates, it is a dependency risk. If research and decision-support workflows are mediated by a narrow set of proprietary platforms, trust, resilience, data confidentiality, and bargaining power weaken over time.
LeCun identified “federated learning” as a partial mitigant. In such systems, a shared model is trained without the underlying data ever leaving its owners; participants train locally and exchange only model parameters or parameter updates with a central coordinator.
In principle, this allows a resulting model to perform “…as if it had been trained on the entire set of data…without the data ever leaving (your domain).”
This is not a lightweight solution, however. Federated learning requires trusted orchestration between participants and the coordinating model, as well as secure cloud infrastructure at national or regional scale. It reduces data-sovereignty risk, but does not remove the need for sovereign cloud capacity, reliable energy supply, or sustained capital investment.
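To make the mechanism concrete, here is a minimal sketch of federated averaging in Python. It is an illustration under simplifying assumptions, not a production design: the three “institutions”, their data, and the plain averaging step are hypothetical, and a real deployment would add secure aggregation, authentication, and the trusted orchestration described above.

```python
# Minimal federated averaging (FedAvg) sketch: participants train locally and
# exchange only model parameters, never raw data. Everything here is a toy,
# hypothetical setup for illustration.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One participant refines the shared weights on its private data
    (simple linear regression via gradient descent)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three hypothetical institutions, each holding data that never leaves its domain.
true_w = np.array([2.0, -1.0])
private_datasets = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    private_datasets.append((X, y))

# The coordinator only ever sees parameters, not data.
global_w = np.zeros(2)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in private_datasets]
    global_w = np.mean(local_weights, axis=0)  # federated averaging step

print("Recovered weights:", global_w.round(2))  # close to [2.0, -1.0]
```

The point to note is that only weight vectors cross organizational boundaries; the raw datasets never do, which is what gives the approach its appeal from a sovereignty standpoint.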
AI Assistants as a Strategic Vulnerability
“We cannot afford to have those AI assistants under the proprietary control of a handful of companies in the US or coming from China.”
AI assistants are unlikely to remain simple productivity tools. They will increasingly mediate everyday information flows, shaping what users see, ask, and decide. LeCun argued that concentration risk at this layer is structural:
“We are going to need a high diversity of AI assistants, for the same reason we need a high diversity of news media.”
The risks are primarily state-level, but they also matter for investment professionals. Beyond obvious misuse scenarios, a narrowing of informational perspectives through a small number of assistants risks reinforcing behavioral biases and homogenizing analysis.
Edge Compute Does Not Remove Cloud Dependence
“Some will run on your local device, but most of it will have to run somewhere in the cloud.”
From a sovereignty perspective, edge deployment may keep some workloads on local devices, but it does not eliminate jurisdictional or control issues:
“There is a real question here about jurisdiction, privacy, and security.”
LLM Capability Is Being Overstated
“We are fooled into thinking these systems are intelligent because they are good at language.”
The issue is not that large language models are useless. It is that fluency is often mistaken for reasoning or world understanding — a critical distinction for agentic systems that rely on LLMs for planning and execution.
“Language is simple. The real world is messy, noisy, high-dimensional, continuous.”
For investors, this raises a familiar question: How much current AI capital expenditure is building durable intelligence, and how much is optimizing user experience around statistical pattern matching?
World Models and the Post-LLM Horizon
“Despite the feats of current language-oriented systems, we are still very far from the kind of intelligence we see in animals or humans.”
LeCun’s concept of world models focuses on learning how the world behaves, not merely how patterns in language correlate. Where LLMs optimize for next-token prediction, world models aim to predict consequences. This distinction separates surface-level pattern replication from models that are more causally grounded.
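The difference in objectives can be sketched schematically. The toy arrays below are hypothetical stand-ins for far richer architectures; the contrast is simply between scoring the next symbol in a sequence and scoring a predicted consequence against what actually happened.

```python
# Schematic contrast of the two training objectives; not an implementation of
# any specific architecture. All data and the toy "dynamics model" are made up.
import numpy as np

rng = np.random.default_rng(1)

# --- LLM-style objective: predict the next token in a symbol sequence. ---
vocab_size = 4
logits = rng.normal(size=vocab_size)           # model's scores for each candidate token
next_token = 2                                  # the token that actually followed
probs = np.exp(logits) / np.exp(logits).sum()
next_token_loss = -np.log(probs[next_token])    # cross-entropy over symbols

# --- World-model-style objective: predict the consequence of an action. ---
state = rng.normal(size=8)                      # current (latent) state of the world
action = rng.normal(size=2)                     # an action taken in that state
W = rng.normal(size=(8, 10))                    # toy dynamics: next_state ~ f(state, action)
predicted_next_state = W @ np.concatenate([state, action])
observed_next_state = rng.normal(size=8)        # what actually happened
world_model_loss = np.mean((predicted_next_state - observed_next_state) ** 2)

print(f"next-token loss: {next_token_loss:.3f}, world-model loss: {world_model_loss:.3f}")
```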
The implication is not that today’s architectures will disappear, but that they may not be the ones that ultimately deliver sustained productivity gains or investment edge.
Meta and the Open Platform Risk
LeCun acknowledged that Meta’s position has changed:
“Meta used to be a leader in providing open-source systems.”
“Over the last year, we’ve lost ground.”
This reflects a broader industry dynamic rather than a simple strategic reversal. While Meta continues to release models under open-weight licenses, competitive pressure and the rapid diffusion of model architectures, highlighted by the emergence of Chinese research groups such as DeepSeek, have reduced the durability of purely architectural advantage.
LeCun’s concern was not framed as a single-firm critique, but as a systemic risk:
“Neither the US nor China should dominate this space.”
As value migrates from model weights to distribution, platforms increasingly favor proprietary systems. From a sovereignty and dependency perspective, this trend warrants attention from investors and policymakers alike.
Agentic AI: Ahead of Governance Maturity
“Agentic systems today have no way of predicting the consequences of their actions before they act.”
“That’s a very bad way of designing systems.”
For investment managers experimenting with agents, this is a clear warning. Premature deployment risks hallucinations propagating through decision chains and actions executing in poorly governed loops. While technical progress is rapid, governance frameworks for agentic AI remain underdeveloped relative to professional standards in regulated investment environments.
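One concrete governance pattern is to gate every proposed agent action on its estimated impact before anything executes. The sketch below is a hypothetical illustration of that pattern, not any particular framework’s API; the Action type, the impact scores, and the threshold are all assumptions.

```python
# Minimal approval-gate sketch: low-impact actions run automatically,
# anything material is held for human review before execution.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    impact: float                 # estimated materiality of the action (0..1)
    execute: Callable[[], str]

def run_agent_step(proposed: Action, impact_threshold: float = 0.3) -> str:
    """Gate a proposed action: execute it only if its impact is below the threshold."""
    if proposed.impact >= impact_threshold:
        return f"HELD for review: {proposed.name} (impact={proposed.impact:.2f})"
    return proposed.execute()

# Hypothetical actions an assistant might propose in a research workflow.
actions = [
    Action("summarize_filing", impact=0.05, execute=lambda: "summary produced"),
    Action("rebalance_portfolio", impact=0.90, execute=lambda: "orders sent"),
]

for a in actions:
    print(run_agent_step(a))
```

The design choice is deliberately conservative: the agent never executes a consequential action on its own judgment, which is one way to compensate for systems that, as LeCun notes, cannot predict the consequences of their actions before acting.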
Regulation: Applications, Not Research
“Do not regulate research and development.”
“You create regulatory capture by big tech.”
LeCun argued that poorly targeted regulation entrenches incumbents and raises barriers to entry. Instead, regulatory focus should fall on deployment outcomes:
“Whenever AI is deployed and may have a big impact on people’s rights, there needs to be regulation.”
Conclusion: Maintain Sovereignty, Avoid Capture
The immediate AI risk is not runaway general intelligence. It is the capture of information and economic value within proprietary, cross-border systems. Sovereignty, at both state and firm level, is central, and that implies a safety-first, low-trust approach to deploying LLMs in your organization.
LeCun’s testimony shifts attention away from headline model releases and toward who controls data, interfaces, and compute. At the same time, much current AI capital expenditure remains anchored to an LLM-centric paradigm, even as the next phase of AI is likely to look materially different. That combination creates a familiar environment for investors: elevated risk of misallocated capital.
In periods of rapid technological change, the greatest danger is not what technology can do, but where dependency and rents ultimately accrue.
