
OpenAI's Safety and Security Committee is mostly OpenAI evaluating itself

OpenAI has come under even more scrutiny after its Board agreed to create a Safety and Security Committee composed of board members (Altman included) and employees. The Committee will provide recommendations to the Board, which has the final word on their implementation.

by Ellie Ramirez-Camara

OpenAI recently announced the creation of its Safety and Security Committee led by directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). The committee members who are not also board members are all OpenAI insiders: Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). The Committee's first task will be to "evaluate and further develop OpenAI’s processes and safeguards over the next 90 days." After that period, the Committee will present its recommendations to the full Board, which will then decide on the recommendations that will be implemented "in a manner that is consistent with safety and security."

The blog post announcing the Safety and Security Committee's creation mentions that OpenAI plans to retain and consult external experts, such as cybersecurity expert Rob Joyce and former U.S. Department of Justice official John Carlin. Details about this external group's exact nature, size, and influence are sparse, making it hard to determine the experts' role in OpenAI's search for recommendations "on critical safety and security decisions for OpenAI projects and operations." In an X post, Bloomberg columnist Parmy Olson noted that the Safety and Security Committee seems to be another instance of "a tried and tested approach to self-regulation in tech that does virtually nothing in the way of actual oversight."

OpenAI has recently come under fire for some of its actions related to rights protection and safety and security oversight. Last week, the company published a blog post detailing the casting process behind the voices for ChatGPT's Voice Mode, emphasizing that none were cast for their similarity to a celebrity's voice. This was partly motivated by Scarlett Johansson's public claims that OpenAI had sought to imitate her voice after the actress declined several invitations to have her voice featured. OpenAI has since removed the voice from its service. The debate has gone on regardless, with many agreeing that the incident showcases how little consideration big tech firms give to upholding copyright and intellectual property protections.

On the security and oversight front, the Safety and Security Committee comes after a series of departures of essential members of OpenAI's safety team, with many voicing concerns about the direction the company is heading. Daniel Kokotajlo (ex-governance team member), Ilya Sutskever (co-founder and former chief scientist), and Jan Leike (former safety researcher) have all resigned, citing disagreement with company decisions they saw as prioritizing profit over safety and security. Likewise, AI policy researcher Gretchen Krueger and at least five other employees have recently left the company, as have former board members Helen Toner and Tasha McCauley, who have publicly stated that they do not trust AI companies, including OpenAI, to govern themselves.
