OpenAI stands out as one of the leading names in the race to develop artificial intelligence that matches human intelligence. However, employees at the company’s $80 billion research laboratory have not been shy about publicly voicing serious concerns about safety. In a recent report in The Washington Post, an anonymous source claimed that OpenAI rushed through safety testing and celebrated its product launches before verifying that those products were safe, making these concerns even more evident.
One employee at the company summed up the situation this way: “They planned the post-launch party before knowing whether the launch was safe or not. We could not complete the process successfully.” Claims like this suggest serious gaps in OpenAI’s safety practices.
OpenAI’s safety problems are not limited to claims from anonymous sources. Current and former employees signed an open letter demanding better safety and transparency practices following the recent disbandment of the company’s safety team. That letter came shortly after the departure of co-founder Ilya Sutskever. Jan Leike, a leading OpenAI researcher, resigned soon afterward, saying that the company’s safety culture and processes had taken a backseat to flashy products.
OpenAI’s safety policies and the criticism they face
OpenAI’s charter treats safety as a core element, stating that if another organization reaches AGI (artificial general intelligence) first, OpenAI will assist that effort to advance safety rather than compete with it. The company says it is committed to solving the safety problems inherent in such a large and complex system. The warnings from employees, however, suggest that safety is no longer a priority in the company’s culture and structure.
For its part, OpenAI disputes that characterization. “We’re proud of our track record of delivering the most capable and safest AI systems, and we believe in our scientific approach to addressing risk,” company spokesperson Taya Christianson told The Verge. “Given the importance of this technology, rigorous discussion is critical, and we will continue to engage with governments, civil society, and other communities around the world in line with our mission.”
According to OpenAI and others studying this emerging technology, the risks are significant. “Current frontier AI development poses immediate and increasing risks to national security,” a report commissioned by the US State Department said. “The rise of advanced AI and AGI (artificial general intelligence) has the potential to destabilize global security, similar to the introduction of nuclear weapons.”
Internal discussions and future plans at OpenAI
Alarm bells have been ringing at OpenAI since last year’s boardroom coup that briefly ousted CEO Sam Altman. The board said Altman had been dismissed for not being “consistently candid in his communications,” and the investigation that followed did little to reassure staff.
OpenAI spokesperson Lindsey Held told The Washington Post that the GPT-4o launch “didn’t cut corners” on safety. However, another unnamed company representative acknowledged that the review timeline had been compressed to a single week. “We are rethinking our whole way of doing it,” that representative said, adding that this was “just not the best way to do it.”
OpenAI announced this week that it will collaborate with Los Alamos National Laboratory to explore how advanced AI models like GPT-4o can safely assist bioscientific research. The announcement repeatedly emphasized Los Alamos’ own safety record. It was also reported that OpenAI has created an internal scale to track the progress of its large language models toward artificial general intelligence.
This week’s safety-focused announcements look like defensive window dressing in the face of mounting criticism of OpenAI’s safety practices. Clearly, OpenAI is in a difficult position right now, but public relations efforts alone will not be enough to protect society. What really matters is the potential impact on people beyond the Silicon Valley bubble if OpenAI does not keep developing AI under strict safety protocols. The average person has no say in the development of privately controlled AGI, and no choice about how well protected they are from OpenAI’s creations.
“AI tools could be revolutionary,” U.S. FTC Chair Lina Khan told Bloomberg last November. But she noted concerns that, at present, the critical inputs to these tools are “controlled by a relatively small number of companies.”
If the numerous allegations about its safety protocols are true, they raise serious questions about how well suited OpenAI is to its role as custodian of AGI. Allowing a single group in San Francisco to control potentially society-changing technology is alarming, and the demand for transparency and safety is now greater than ever.
Source link: https://www.teknoblog.com/openai-seffaflik-guvenlik/