OpenAI Establishes Safety and Security Committee to Oversee Development of Next Generation AI Models

OpenAI logo seen on screen with ChatGPT website displayed on mobile in this illustration (Jonathan Raa/NurPhoto via Getty Images)

In a move to address the growing concerns surrounding AI safety and security, OpenAI has announced the formation of a new governance body tasked with overseeing the development of its future AI models, including the successor to GPT-4.

CIO reports that OpenAI, one of the leading artificial intelligence research companies, is trying to quell concerns about its responsible development of AI by establishing a Safety and Security Committee within its board of directors. The committee’s primary objective is to evaluate the processes and safeguards the company has in place as it begins developing its next frontier model, which is expected to bring it closer to artificial general intelligence (AGI) – an AI system capable of matching or exceeding human performance across a wide range of tasks.

Sam Altman, chief executive officer of OpenAI Inc., speaks with members of the media during the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, on July 12, 2023 (David Paul Morris/Bloomberg via Getty Images)

The creation of the safety committee comes on the heels of several high-profile departures and controversies surrounding OpenAI’s approach to AI safety. Notably, the company’s former chief scientist, Ilya Sutskever, who led the “superalignment” team focused on long-term risks, recently left the company, followed by Jan Leike, who co-led the same team.

The Safety and Security Committee will be led by OpenAI CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman. Other key members include Aleksander Madry, Head of Preparedness; Lilian Weng, Head of Safety Systems; John Schulman, Head of Alignment Science; Matt Knight, Head of Security; and Jakub Pachocki, the newly appointed chief scientist.

Within 90 days, the committee will share with the full board of directors its recommendations on how the company is handling AI risks in its model development. OpenAI has indicated that it may later publicly share any adopted recommendations “in a manner that is consistent with safety and security.”

The formation of the committee reflects OpenAI’s recognition of ongoing concerns among the industry and the general public about AI safety. As the company stated in its blog post, “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

OpenAI’s progress on the next version of GPT comes as competition in the AI space intensifies. Elon Musk’s xAI recently announced a $6 billion funding round at a reported $24 billion valuation, as the Tesla CEO aims to challenge the startup he once championed. Meanwhile, OpenAI faced controversy when it released a virtual assistant with a voice resembling that of actress Scarlett Johansson without her consent.

Read more at CIO here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Authored by Lucas Nolan via Breitbart News, May 29, 2024