Introducing the OpenAI Safety Fellowship
Summary
OpenAI is launching the OpenAI Safety Fellowship, a pilot program that supports external researchers, engineers, and practitioners in conducting high-impact research on the safety and alignment of advanced AI systems. The program will run from September 14, 2026, through February 5, 2027, focusing on priority areas such as safety evaluation, ethics, robustness, scalable mitigations, and agentic oversight.
Participants will receive a monthly stipend, compute support, and mentorship from OpenAI experts. Workspace is available at Constellation in Berkeley, though fellows may also work remotely. The fellowship aims to produce substantial research outputs, such as papers, benchmarks, or datasets. Applicants from a range of backgrounds, including computer science and the social sciences, are encouraged to apply, with priority placed on research ability and technical judgment.
(Source: OpenAI)