# OpenAI Backs New Framework to Verify AI Safety Claims
OpenAI has announced its contribution to a comprehensive report aimed at making AI development more transparent and verifiable. The initiative brings together 58 experts from 30 organizations, including leading research institutions like the Centre for the Future of Intelligence, Mila, and the Center for Security and Emerging Technology.
The report outlines 10 specific mechanisms that AI developers can use to back up claims that their systems meet safety, security, fairness, and privacy standards. Rather than simply asserting that their AI is safe, companies would provide concrete evidence that can be independently verified.
This matters because AI systems are increasingly making important decisions affecting people's lives, yet their inner workings often remain opaque. The framework gives policymakers, users, and advocacy groups practical tools to evaluate whether AI developers are following through on their promises.
The multi-stakeholder approach is significant: it's not just tech companies setting their own rules.