In the summer of 2023, OpenAI established the "Superalignment" team to direct and control artificial intelligence systems that might one day become powerful enough to cause the extinction of humanity. However, the team disbanded in less than a year.
OpenAI said in a statement to Bloomberg that it aims to "help the company achieve its safety goals by integrating the group more deeply into its research efforts." But a series of posts from Jan Leike, one of the team's leaders who recently resigned, revealed internal tensions between the safety team and the rest of the company.
In a statement on X on Friday, Leike said the Superalignment team had been struggling to secure the resources it needed to conduct its research. "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a back seat to shiny products," Leike said.
OpenAI did not immediately respond to Engadget's request for comment.
Leike's resignation earlier this week came just hours after OpenAI chief scientist Ilya Sutskever announced he was leaving the company. Sutskever was not only one of the leaders of the Superalignment team but also one of the company's co-founders. Six months ago, he was involved in the decision to fire CEO Sam Altman for not being "consistently candid" with the board. Altman's brief dismissal sparked an outcry within the company, with nearly 800 employees signing a letter vowing to resign if Altman was not brought back. Five days later, Altman returned as CEO of OpenAI, and Sutskever signed a letter expressing regret for his actions.
In response to a request for comment on Leike's claims, OpenAI's public relations team referred Engadget to a tweet from Sam Altman, in which he said he would make a statement within the next few days.
When it established the Superalignment team, OpenAI committed to dedicating 20 percent of its computing power over the next four years to the problem of controlling future powerful artificial intelligence systems. "For the past few months, my team has been sailing against the wind," Leike wrote, adding that he had reached "a breaking point" due to disagreements with OpenAI's leadership over the company's core priorities.
There have been other departures from the Superalignment team in recent months. In April, OpenAI reportedly fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for leaking information.
OpenAI said in a statement to Bloomberg that future safety efforts will be led by another co-founder, John Schulman, whose research focuses on large language models. Jakub Pachocki, the executive who led the development of GPT-4, will replace Sutskever as chief scientist.
The Superalignment team wasn't OpenAI's only team focused on AI safety. In October, the company launched a new "Preparedness" team to guard against potential "catastrophic risks" from AI systems, addressing issues such as cybersecurity and chemical, nuclear, and biological threats.
Source link: https://www.teknolojioku.com/yapay-zeka/openainin-insanligi-koruma-ekibi-artik-yok-6655a0938aa7dd01f401b65f