OpenAI board first learned about ChatGPT from Twitter, according to former member

Helen Toner, former OpenAI board member, speaks on stage during the Vox Media 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023.

In a recent interview on “The Ted AI Show” podcast, former OpenAI board member Helen Toner said that the OpenAI board was unaware of ChatGPT’s existence until members saw it on Twitter. The interview also revealed details about the company’s internal dynamics and the events surrounding CEO Sam Altman’s surprise firing and subsequent rehiring last November.

OpenAI released ChatGPT publicly on November 30, 2022, and its sudden, massive popularity set OpenAI on a new trajectory, shifting its focus from being an AI research lab to a more consumer-facing technology company.

“When ChatGPT came out in November 2022, the board was not informed about it in advance. We learned about ChatGPT on Twitter,” Toner said on the podcast.

Toner’s revelation about ChatGPT appears to highlight a significant disconnect between the board and the company’s day-to-day operations, shedding new light on allegations that Altman was “not consistently candid in his communications with the board” following his firing on November 17, 2023. Altman and OpenAI’s new board later said that Altman’s mishandling of efforts to remove Toner from OpenAI’s board, after her criticism of the company’s release of ChatGPT, played a key role in his firing.

“Sam did not inform the board that he owned the OpenAI Startup Fund, even though he repeatedly claimed to be an independent board member with no financial interest in the company,” she said. “On multiple occasions, he gave us inaccurate information about the small number of formal safety processes the company did have in place, meaning it was basically impossible for the board to know how well those safety processes were working or what might need to change.”

Toner also shed light on the circumstances that led to Altman’s temporary ouster. She said that two OpenAI executives had reported instances of “psychological abuse” to the board, providing screenshots and documentation to support their claims. The allegations, relayed by Toner, suggest that Altman’s leadership style fostered a “toxic atmosphere” at the company:

In October of last year, we had this series of conversations with these executives, where the two of them suddenly started telling us about their own experiences with Sam that they hadn’t felt comfortable sharing before, telling us how they couldn’t trust him, about the toxic atmosphere he was creating. They used the phrase “psychological abuse,” telling us they didn’t think he was the right person to lead the company, telling us they didn’t believe he could or would change, that there was no point in giving him feedback, no point in trying to work through these issues.

Despite the board’s decision to fire Altman, he began the process of returning to his position just five days later, following a letter to the board signed by over 700 OpenAI employees. Toner attributed this quick turnaround to employees who believed the company would collapse without him, adding that they also feared retaliation from Altman if they did not support his return.

“The second thing I think is really important to know, that has really gone underreported, is how scared people are to go against Sam,” Toner said. “They had experienced him retaliating against people … for past instances of being critical.”

“They were really afraid of what might happen to them,” she continued. “So some employees started saying, you know, wait, I don’t want the company to fall apart. Like, let’s bring Sam back. It was very difficult for those people who had had terrible experiences to actually say that … if Sam stayed in power, as he ultimately did, it would make their lives miserable.”

In response to Toner’s statements, current OpenAI board chairman Bret Taylor issued a statement to the podcast: “We are disappointed that Ms. Toner continues to revisit these issues… The review concluded that the prior board’s decision was not based on concerns about product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Even given that review, Toner’s main argument is that OpenAI has not been able to police itself despite claims to the contrary. “The OpenAI saga shows that trying to do good and regulating yourself isn’t enough,” she said.
