OpenAI’s new safety team is led by board members, including CEO Sam Altman


OpenAI has created a new Safety and Security Committee less than two weeks after the company dissolved the team tasked with protecting humanity from the existential dangers of AI. This latest iteration of the group responsible for OpenAI's safety guardrails will include two board members and CEO Sam Altman, raising questions about whether the move is anything more than self-regulated theater amid a breakneck race for profit and dominance alongside partner Microsoft.

The Safety and Security Committee, formed by the OpenAI board, will be led by board members Bret Taylor (chair), Nicole Seligman, Adam D'Angelo and CEO Sam Altman. The new team follows the high-profile resignations of co-founder Ilya Sutskever and Jan Leike, which raised more than a few eyebrows. Their former group, the Superalignment team, was created only last July.

After his resignation, Leike wrote on X (Twitter) on May 17th that although he believed in the company's core mission, he left because the two sides (product and safety) had "reached a breaking point." Leike added that he was "concerned that we aren't on a trajectory" to adequately address safety issues as AI grows more intelligent. He wrote that the Superalignment team had recently been "sailing against the wind" within the company and that "safety culture and processes have taken a backseat to shiny products."

It would be a safe bet that a company accused of prioritizing "shiny products" over safety, while trying to weather the PR blow of high-profile safety departures, would not create a new safety team led by the very people racing toward those shiny products. Yet that is exactly what OpenAI has done.


Jan Leike, former head of OpenAI's Superalignment team (Jan Leike / X)

The safety-related departures earlier this month weren't the only recent news from the company. OpenAI also launched (and quickly withdrew) a new voice model that sounded remarkably like two-time Oscar nominee Scarlett Johansson. The Jojo Rabbit actor later revealed that Sam Altman had asked for permission to use her voice to train an AI model, and that she had declined.

In a statement to Engadget, Johansson's team said she was shocked that OpenAI would cast a voice talent who sounded eerily similar to her after she had declined to give permission. The statement added that Johansson's "closest friends and news outlets could not tell the difference."

OpenAI also backtracked on the non-disparagement agreements it had demanded of departing executives, changing its tune to say it would not enforce them. Before that, the company had forced departing employees to choose between speaking out against the company and keeping their vested equity.

The Safety and Security Committee plans to “evaluate and further develop” the company’s processes and safeguards over the next 90 days. The group will then share its recommendations with the entire board. Once the entire leadership team has reviewed its findings, it will “publicly share an update on the recommendations received in a manner consistent with safety and security.”

In a blog post announcing the new Safety and Security Committee, OpenAI confirmed that the company is currently training its next model to succeed GPT-4. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company wrote.


