OpenAI has launched a new committee to oversee critical safety and security decisions regarding its projects and operations. The committee consists of CEO Sam Altman and other senior executives, a composition that has raised serious questions about its independence.
OpenAI staffed its new safety committee with names from within the company
OpenAI’s new Safety and Security Committee includes company CEO Sam Altman, board members Bret Taylor, Adam D’Angelo and Nicole Seligman, chief scientist Jakub Pachocki, head of preparedness Aleksander Madry, head of safety systems Lilian Weng, head of security Matt Knight and head of alignment science John Schulman.
The committee will evaluate OpenAI’s safety processes and safeguards over the next 90 days. Once the evaluation concludes, its findings and recommendations will be presented to the board of directors, and some of them will be made public.
OpenAI has experienced several high-profile departures from its safety teams in the past few months, with former employees questioning the company’s priorities on AI safety. Daniel Kokotajlo resigned in April after losing confidence that the company would handle increasingly capable AI responsibly.
In May, OpenAI co-founder and chief scientist Ilya Sutskever left the company following disagreements with CEO Sam Altman. Reports claimed that Altman’s push to launch AI products quickly at the expense of safety research played a role in Sutskever’s departure.
More recently, Jan Leike, a former DeepMind researcher who was involved in the development of ChatGPT and InstructGPT, left his position because he felt OpenAI was not treating safety and security issues with sufficient seriousness.
AI policy researcher Gretchen Krueger also left the company with similar concerns and called on OpenAI to increase its accountability and transparency.
While OpenAI has publicly advocated for AI regulation, it has also sought to shape those regulations, allocating substantial resources to lobbying for this purpose. In addition, Altman sits on the newly established Artificial Intelligence Safety and Security Board of the US Department of Homeland Security.
To counter criticism of the committee’s insider-heavy composition, OpenAI announced that it will also bring in external experts, including cybersecurity expert Rob Joyce and former US Department of Justice official John Carlin. However, the company provided no details about the size of this external group or its influence on the committee.
Bloomberg columnist Parmy Olson noted that internal oversight boards like this rarely provide true accountability. OpenAI has said the committee will address “valid criticisms” of its work, but who decides which criticisms count as valid remains an open question.
In a 2016 statement, CEO Sam Altman said that external representatives would be given an important role in OpenAI’s governance. That plan never materialized, and it now seems unlikely to.
Source link: https://shiftdelete.net/openai-guvenlik-komitesi