OpenAI is training a new AI system to succeed GPT-4 as the company seeks a reputational reboot. Development of the model will be overseen by a new safety and security committee that includes CEO Sam Altman and three other board directors.
The San Francisco-based company is under pressure to demonstrate its commitment to safety. Jan Leike and Gretchen Krueger, two safety researchers who recently resigned, have publicly criticized what they describe as OpenAI’s poor safety culture on X, formerly Twitter.
OpenAI Trains ‘Next Frontier Model’
In a blog post published on Tuesday, the company said:
“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to artificial general intelligence (AGI).”
OpenAI did not say when the GPT-4 successor would be released or what it could do. The company, valued at $80 billion, made a first-mover bet with its hugely popular AI chatbot, ChatGPT, at a time when established rivals such as Google held back to weigh the reputational risks.
While OpenAI is a frontrunner in the generative artificial intelligence race, closely contested by Anthropic, Google, Microsoft and others, the firm has not always withstood ethical scrutiny.
In a post announcing his resignation from OpenAI on X, Leike revealed that “over the past years, safety culture and processes have taken a backseat to shiny products.”
Krueger wrote that the company needs “to do more to improve foundational things like decision-making processes, accountability, transparency, policy enforcement, and the care with which we use our own technology.”
Last week, a European Union task force added to the criticism, reporting that ChatGPT, the company’s flagship product, falls short of the bloc’s data accuracy standards.
Firm Pitches AI Safety as Key Selling Point
Meanwhile, OpenAI’s recent update to its GPT-4 model immediately ran into a high-profile spat, with actress Scarlett Johansson accusing the company of unauthorized use of her voice.
Now, the company is working to clean up its image as it pitches safety as the selling point of its upcoming AI program. The outfit is also keen to be seen by regulators as a responsible AI developer. In its blog post, OpenAI added:
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”
The new safety and security committee will be led by directors Bret Taylor (chair), Sam Altman, Adam D’Angelo and Nicole Seligman. The firm’s heads of preparedness, safety systems, alignment science and security, along with its chief scientist, will also sit on the committee.
OpenAI said that over the next three months the committee will “evaluate and further develop processes and safeguards” and report its findings back to the three other directors.
The firm dissolved its safety team earlier this month following the departure of team leader and OpenAI co-founder Ilya Sutskever.
Cryptopolitan Reporting by Jeffrey Gogo