# OpenAI Expands Independent Safety Testing for AI Systems
OpenAI announced it is strengthening its approach to AI safety by working with independent experts to evaluate its frontier AI systems through third-party testing.
The AI company, known for ChatGPT and GPT-4, said external evaluations will help validate the safeguards built into its models while increasing transparency around how it assesses their capabilities and risks.
This move represents a shift toward more open scrutiny of advanced AI systems. Rather than relying solely on internal assessments, OpenAI is inviting outside experts to probe its models for potential safety issues before and after deployment.
**Why it matters:** As AI systems become more powerful, concerns about their potential risks have intensified among researchers, policymakers, and the public. Independent testing addresses a key criticism of AI labs: that they operate with insufficient external oversight.
Third-party evaluations can identify vulnerabilities that internal teams might overlook.