# OpenAI Updates AI Safety Framework to Better Track Catastrophic Risks
OpenAI announced an update to its Preparedness Framework, the company's system for identifying and mitigating severe risks from advanced AI systems.
The framework serves as OpenAI's internal rulebook for measuring dangerous capabilities in frontier AI models before they're deployed. It focuses on catastrophic risks in areas like cybersecurity, biological threats, persuasion, and model autonomy.
While OpenAI didn't detail specific changes in its announcement, the update signals the company's ongoing effort to stay ahead of emerging risks as AI systems become more powerful. The original framework, introduced in December 2023, established risk thresholds and required safety testing before new models could be released.
The timing is notable given that OpenAI continues to develop increasingly capable systems. The framework requires the company to track when models cross defined danger thresholds and to implement corresponding safeguards before deployment.
**Why it matters:**