The trap Anthropic built for itself
Summary
Anthropic was blacklisted by the Trump administration from federal contracts, including a potential $200 million deal, after its CEO refused to allow the company's AI to be used for mass domestic surveillance or autonomous lethal weapons. Max Tegmark of the Future of Life Institute argues this crisis is a trap the company, along with rivals like OpenAI and Google DeepMind, built for itself by consistently lobbying against binding safety regulations and instead promising self-governance.
Tegmark points out that all major AI labs have recently abandoned key safety commitments — Anthropic, for instance, dropped its promise not to release powerful systems without being certain of their safety — after successfully lobbying for a regulatory vacuum, leaving AI with less oversight than sandwiches. With no external rules in place, governments are free to demand dangerous uses, as the Pentagon episode shows.
He dismisses the 'race with China' argument, noting that China is banning certain AI applications out of concern for internal stability. Tegmark concludes that uncontrollable superintelligence is itself a national security threat, analogous to the nuclear arms race in which no one wins. He remains cautiously optimistic that this incident could force the industry to accept external regulation, holding AI companies to the same standard as other industries that must demonstrate safety before releasing powerful products.
(Source: TechCrunch)