Why Was OpenAI’s Sam Altman Fired? These New Details Worry Me

For better or worse, Sam Altman-led OpenAI is almost always in the headlines. Last year, Altman was fired from the company, only to be reinstated a few days later. More recently, there was quite the kerfuffle over the hot AI startup allegedly using a voice resembling actress Scarlett Johansson’s for GPT-4o’s new conversational mode, without her consent.

While that controversy has yet to subside, OpenAI has taken the internet by storm for all the wrong reasons, all over again. Ex-OpenAI board members have now brought to light the actual reasons behind Altman’s firing last year, hinting at why it perhaps should have stayed that way.

From Non-Profit to For-Profit?

OpenAI started out as a non-profit, with the vision of making AGI (Artificial General Intelligence) accessible and beneficial to all of humanity. While it eventually added a for-profit arm to raise the required funding, it was the non-profit mission that dominated the company’s ethos.

Under Altman’s leadership, however, the profit-making vision has started taking over instead. At least, that’s what ex-board members Helen Toner and Tasha McCauley suggest. An exclusive interview with Toner on the TED AI Show is currently making the rounds on the internet.

Toner says:

“When ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter. Sam didn’t inform the board that he owned the OpenAI startup fund, even though he was constantly claiming to be an independent board member with no financial interest in the company.”

This hits like a truck, especially since ChatGPT was basically the inflection point of the AI frenzy we’re seeing today. Keeping such an important launch hidden from the board itself is undeniably shady.

She further states that Altman fed the board “inaccurate information” on “multiple occasions”, masking the safety processes at work behind the company’s AI systems. As a result, the OpenAI board had no real picture of how well these safety processes even worked in the first place. You can listen to the complete podcast here.


No Safety for the AI Trigger

Building AI responsibly should always be one of the topmost priorities for these companies, especially since things can go “horribly wrong”. Ironically, that’s not my phrasing; it comes straight from Altman’s own mouth.

Notably, this falls in line with Musk’s side of the story. Not too long ago, Elon Musk sued OpenAI, claiming that the company had abandoned its original mission and become profit-oriented.

In an op-ed for The Economist, the ex-board members voice their concern that Sam Altman’s return led to the departure of safety-focused talent, dealing a serious blow to OpenAI’s self-governance.

They also believe that government intervention is needed for AI to be built responsibly. Following the controversy, OpenAI formed a Safety and Security Committee, stating that the new committee “is responsible for making recommendations on critical safety and security decisions for all OpenAI projects”, with its first recommendations due in 90 days.

And, guess what? This vital committee includes Sam Altman too. While I don’t want to believe all the accusations, if they’re true, we’re in serious trouble. I don’t think any of us want Skynet to become a reality.

On top of that, a week ago, Jan Leike, the co-head of Superalignment at OpenAI, resigned over safety concerns and has since joined rival firm Anthropic. He didn’t leave silently, though, laying out his side of the story in detail on his X handle.

Of all the things he said, “OpenAI must become a safety-first AGI company” was another hard pill to swallow, for it clearly implies that the company is not currently on that trajectory.

He also emphasizes that we really need to buckle up and “figure out how to steer and control AI systems much smarter than us.” That’s not the only reason Leike left, however. He also wrote:

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

A Toxic Exit for Employees

While Toner and other ex-OpenAI folks have been publicly revealing shocking facts about the company lately, they also suggest that they “can’t say everything”.

Last week, a Vox report revealed how departing OpenAI employees were made to sign strict non-disclosure and non-disparagement agreements, a breach of which could cost them all their vested equity in the company. We’re talking millions of dollars here, and I don’t think anyone would want to lose that.

Specifically, the agreement prevents former OpenAI employees from criticizing the company or talking to the media. Altman took to X to say that he didn’t know of this clause in OpenAI’s NDA, but I don’t think anyone buys it.

Even if we take Altman at his word, it goes to show how disorganized a body as important as OpenAI is, which only lends further weight to all those accusations.

Is the Future of AI in the Wrong Hands?

It’s sad that the very board members who once signed on to the company’s vision are now against it. Whether or not their candor has anything to do with Altman removing them upon his return to the company, if these accusations are to be believed, they’re quite frightening.

We have several movies and TV shows that showcase how AI can get out of hand. Moreover, it’s not just OpenAI chasing AGI. Industry giants like Google DeepMind and Microsoft are also injecting AI into almost all of their products and services. This year’s Google I/O even hilariously tallied the number of times “AI” was said throughout the event: more than 120.

On-device AI is the next big step forward, and we’re already seeing early implementations of it with the Recall feature on next-gen Copilot+ PCs. That raised a whole lot of privacy concerns too, since the feature continuously takes screenshots of your screen to build a local vector index.

In other words, AI is here to stay, whether you like it or not. What truly matters is how responsibly we develop and use it, ensuring that it serves us rather than governs us. So, is the future of AI in the wrong hands? The question feels urgent when AI labs are pulling out all the stops to give these systems more power and data, and, lest you forget, AI is multimodal now.

What do you think of these new revelations? Do they keep you up at night like they do me? Let us know your opinion in the comments below.
