- Miles Brundage, former Head of Policy Research and Senior Advisor for the AGI Readiness team, has resigned from OpenAI.
- He wants to work in the non-profit sector, where he will have greater freedom to publish. OpenAI has now disbanded the AGI Readiness team.
- Brundage urges OpenAI employees to speak their minds and raise concerns inside the company.
After Mira Murati resigned from OpenAI last month, Miles Brundage has now left the company in another high-profile exit. Brundage served as the Head of Policy Research at OpenAI and was recently made Senior Advisor for the AGI Readiness team. He was instrumental in safely deploying AI models on ChatGPT and led red teaming efforts at OpenAI. The “System Card” now published for OpenAI models exists thanks to his vision.
In his X post, Brundage says, “I think I’ll have more impact as a policy researcher/advocate in the non-profit sector, where I’ll have more of an ability to publish freely and more independence.”
As I have noted in my piece on OpenAI’s internal conflicts, the company’s shift toward profit-driven products over AI research and safety is pushing many longtime researchers to leave. OpenAI is also working to make its non-profit board toothless and restructure itself as a for-profit corporation. This radical shift in OpenAI’s culture is forcing many to quit.
He further mentions, “OpenAI has a lot of difficult decisions ahead, and won’t make the right decisions if we succumb to groupthink,” urging OpenAI employees to raise concerns inside the company.
More crucially, with Miles Brundage’s departure, OpenAI is disbanding the AGI Readiness team. Its members will be absorbed by other teams, and some projects will move to the Mission Alignment team. Brundage has shared his thoughts in more detail on Substack.
Apart from that, The New York Times interviewed a former OpenAI researcher, Suchir Balaji, who says the company broke copyright law in training its AI models. Balaji quit the company in August because “he no longer wanted to contribute to technologies that he believed would bring society more harm than benefit.”
With such high-profile exits, what do you think about OpenAI’s new direction? Is it going to take AI safety seriously or focus more on shipping commercial products? Let us know in the comments below.