OpenAI’s new “Preparedness Framework” aims to help protect against “catastrophic risks” when developing high-level AI systems, according to the company.

OpenAI, an artificial intelligence (AI) developer, has announced the implementation of its “Preparedness Framework,” which includes the formation of a special team to evaluate and predict risks.

On December 18, the company announced in a blog post that its new “Preparedness team” will serve as a link between OpenAI’s safety and policy teams.

It claims that these teams, which act as a sort of checks-and-balances system, will help protect against “catastrophic risks” posed by increasingly powerful models. OpenAI has stated that it will only deploy its technology if it is deemed safe.

The upgraded plan calls for the safety reports to be reviewed by the new advisory group before being forwarded to the OpenAI board and company executives.

The new plan gives the board the authority to overturn safety decisions, even though the executives are still in charge of making the final calls.

This follows a flurry of changes at OpenAI in November, including Sam Altman’s sudden ouster and subsequent reinstatement as CEO. Following Altman’s return, the company issued a statement introducing its new board, which now consists of Larry Summers, Adam D’Angelo, and Bret Taylor as chair.

OpenAI made ChatGPT available to the public in November 2022, and interest in AI has surged since then, alongside growing concerns about the dangers the technology may pose to society.

In July, leading AI developers, including OpenAI, Microsoft, Google, and Anthropic, established the Frontier Model Forum, a body for industry self-regulation and oversight of responsible AI development.

In October, US President Joe Biden signed an executive order outlining new AI safety standards for companies developing and deploying high-level models.

Prominent AI developers, such as OpenAI, were invited to the White House before Biden’s executive order, where they pledged to create safe and transparent AI models.


OpenAI’s creation of the Preparedness team and the board’s empowered role mark a significant step toward responsible AI development. While questions and challenges remain, these actions signal a commitment to addressing the potential risks of powerful AI technologies and to prioritizing safety over unchecked progress. The team’s efforts, and the industry’s response to them, will determine the long-term impact of these measures on the future of AI development.