
OpenAI forms safety committee ahead of its next model release

May 28



Just weeks after OpenAI disbanded a team focused on AI safety, the company established a new committee aimed at enhancing safety and security. The company also announced on Tuesday, May 28, that it has begun training its next AI model.

In a blog post, OpenAI said the new committee will be led by CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman.


The Safety and Security Committee’s first task will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. After that, the group will share its recommendations with OpenAI’s full board, which will then decide how to act on them.

This follows the exits earlier in May of OpenAI safety executive Jan Leike and company co-founder Ilya Sutskever, both of whom were part of the company’s Superalignment team, which was dedicated to foreseeing and stemming potential issues caused by advanced AI.

In a thread on X about his departure, Leike criticized the company, saying he “reached a breaking point” and “over the past years, safety culture and processes have taken a backseat to shiny products.” 

OpenAI said the Safety and Security Committee will also work with the company’s technical and policy experts and other cybersecurity officials.

Also in Tuesday’s blog post, OpenAI confirmed it has started training its next large language model, the successor to its current GPT-4. The new model is expected to be unveiled later this year.

“While we are proud to build and release models that are industry-leading on both capabilities and safety,” OpenAI’s post said, “we welcome a robust debate at this important moment.”


[KARAH RUCKER]

OPEN A-I IS STARTING A NEW COMMITTEE – AIMED AT ENHANCING SAFETY AND SECURITY.

IN A BLOG POST TUESDAY… OPEN A-I SAYING THE NEW COMMITTEE WILL BE LED BY C-E-O SAM ALTMAN AND OTHER COMPANY EXECUTIVES.
AMONG THE COMMITTEE’S TOP PRIORITIES: EVALUATING AND FURTHER DEVELOPING OPEN A-I’S PROCESSES AND SAFEGUARDS.
THAT’LL TAKE PLACE OVER THE NEXT 90 DAYS.
AFTER THAT, IT’LL SHARE RECOMMENDATIONS WITH OPEN A-I’S FULL BOARD… WHICH WILL THEN DECIDE HOW TO MOVE FORWARD WITH THOSE RECOMMENDATIONS.

THIS COMES JUST WEEKS AFTER OPEN A-I GOT RID OF ITS “SUPER ALIGNMENT” TEAM FOCUSED ON A-I SAFETY AND FOLLOWS THE EXITS EARLIER THIS MONTH OF OPEN A-I SAFETY EXECUTIVE JAN (YAHN) LEIKE (LIE-kuh) AND COMPANY CO-FOUNDER ILYA (ILL-EE-YAH) SUTSKEVER (SOOT-SKEH-VER).

IN A THREAD ON X ABOUT HIS DEPARTURE — LEIKE (LIE-kuh) CRITICIZED THE COMPANY, SAYING, “OVER THE PAST YEARS, SAFETY CULTURE AND PROCESSES HAVE TAKEN A BACKSEAT TO SHINY PRODUCTS.”

ALSO IN ITS BLOG POST — OPEN A-I CONFIRMED IT HAS STARTED TRAINING ITS NEXT LARGE LANGUAGE MODEL, WHICH WILL BE THE SUCCESSOR TO ITS CURRENT “G-P-T 4” TECHNOLOGY, WHICH DRIVES THE COMPANY’S CHATBOT – CHAT-G-P-T.

YOU CAN FIND OUR LATEST STORIES ON AI BY DOWNLOADING THE STRAIGHT ARROW NEWS APP TO YOUR MOBILE DEVICE.