- In a recent blog, OpenAI revealed that the company is training its next frontier model.
- OpenAI anticipates that the resulting system will unlock new AI capabilities, bringing us closer to AGI.
- In view of this, the company has formed a Safety and Security Committee to oversee critical safety and security decisions.
OpenAI has been mired in controversy recently. After Ilya Sutskever, chief scientist at OpenAI, left the company, Jan Leike, the superalignment head at OpenAI, also resigned after reaching a “breaking point” with the leadership over compute for safety research. Now, to allay safety concerns, OpenAI has formed a Safety and Security Committee.
“This new committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects,” says OpenAI in its blog. Along with that, OpenAI reveals that it “has recently begun training its next frontier model.”
OpenAI says that the next frontier model (unlikely to be called GPT-5) will redefine the boundary of AI capabilities, and the company anticipates “the resulting systems to bring us to the next level of capabilities on our path to AGI.”
Prior to this announcement, at the tail end of the GPT-4o launch event, OpenAI CTO Mira Murati said that the next big thing is coming. And it seems OpenAI is looking to release its next frontier model sometime in 2024 itself.
As for the Safety and Security Committee, it includes Sam Altman, Bret Taylor, Adam D’Angelo, and Nicole Seligman. Technical members from OpenAI will also be part of the committee. Over the next 90 days, the committee will evaluate the upcoming models and share its recommendations with the OpenAI board.
It must also be noted that OpenAI recently removed the ‘Sky’ voice from ChatGPT, which sounded eerily similar to Scarlett Johansson’s voice in the movie ‘Her’. OpenAI might be facing a potential lawsuit amid growing concerns over the ethical practices of the company, led by CEO Sam Altman.