The AI Trilemma: How to Regulate a Revolutionary Technology

Foreign Affairs
Regulating AI involves navigating a trilemma between national security, economic security, and societal security, requiring prioritized, practical policy choices.

Summary

The initial global push for AI regulation following ChatGPT's release has stalled, driven by powerful incentives for rapid deployment and geopolitical competition, particularly with China. The author argues that proponents must move past a broad agenda and confront the "AI Trilemma": the trade-off among national security (military advantage), economic security (AI competitiveness), and societal security (mitigating harms such as job loss and misuse). Pursuing any two of these objectives compromises the third.

The article dismisses the "singularity" concept as an unsound basis for policy: superintelligence development faces significant physical and institutional hurdles, making gradual emergence more likely. Practical policy must therefore focus on achievable goals, avoiding unrealistic measures such as freezing technological advancement or banning open-weight models outright, given enforcement difficulties and geopolitical realities.

The proposed path forward embraces necessary compromises: a "risk tax" on private AI labs to fund safety research (a public good), and an empowered government body, such as the proposed CAISI, backed by a national data repository and vested with authority to veto the release of dangerous frontier models. While these steps impose modest economic costs, they significantly enhance societal safety and could eventually pave the way for broader international agreements, much as past nonproliferation efforts did.

(Source: Foreign Affairs)