'Too Dangerous to Release' Is Becoming AI's New Normal

TIME
Leading AI firms are increasingly restricting access to their most powerful models due to concerns over dual-use risks like bioterrorism and cyberattacks.

Summary

Major AI companies such as OpenAI and Anthropic are shifting toward restricted release strategies for their most advanced models, including GPT-Rosalind and Claude Mythos. These models, which excel in high-stakes fields like biology and cybersecurity, are now limited to 'trusted partners' to mitigate the risk of misuse. Experts and policymakers are debating whether private corporations or governments should set these access standards, especially since the dual-use nature of AI makes it difficult to balance security with scientific advancement. Moreover, the prospect that open-source AI will eventually replicate these capabilities threatens to bypass current corporate safety controls, underscoring a growing need for formal government oversight.

(Source: TIME)