The OpenAI team tasked with protecting humanity is no more


In the summer of 2023, OpenAI created its “Superalignment” team, whose goal was to steer and control future artificial intelligence systems that could be so powerful they might destroy humanity. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company is “integrating the group more deeply across its research efforts to help the company meet its safety goals.” But a series of posts by Jan Leike, one of the team’s recently departed leaders, revealed internal tensions between the safety team and the larger company.

In a statement posted on X on Friday, Leike said the Superalignment team had been fighting for the resources it needed to get its research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.


Leike’s departure earlier this week came just hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team, he also helped co-found the company. Sutskever’s move comes six months after he was involved in the board’s decision to fire CEO Sam Altman for not being “consistently candid” with directors. Altman’s brief ouster sparked an internal revolt, with nearly 800 employees signing a letter threatening to quit if he wasn’t reinstated. Five days later, Altman was back as OpenAI’s CEO, and Sutskever signed a letter saying he regretted his actions.

When it announced the Superalignment team, OpenAI said it would dedicate 20 percent of its computing power over the next four years to solving the challenge of controlling the powerful AI systems of the future. “[Getting] this right is critical to achieving our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team had been struggling for compute, which made its crucial AI safety research increasingly difficult to carry out. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached a “breaking point” with OpenAI’s leadership amid disagreements over the company’s core priorities.

The Superalignment team has seen other departures in recent months as well. In April, OpenAI reportedly dismissed two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by another co-founder, John Schulman, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4, one of OpenAI’s flagship large language models, will replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company started a new “Preparedness” team to address potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.

Update, May 17, 2024, 3:28 PM ET: In response to a request for comment on Leike’s claims, OpenAI’s PR team pointed Engadget to Sam Altman’s tweet, in which he said he would have more to say in the coming days.






