OpenAI · 1 min read

# OpenAI Launches SafetyKit Powered by GPT-5 for Advanced Content Moderation

OpenAI announced SafetyKit, a new safety system that uses the company's latest GPT-5 model to improve content moderation and compliance enforcement at scale.

The tool represents a significant upgrade over OpenAI's previous safety systems, offering what the company describes as "greater accuracy" in identifying and managing risky content. SafetyKit is designed to help organizations automate their content moderation workflows while holding a higher bar than legacy systems.

**What's New**

SafetyKit marks the first major application of OpenAI's GPT-5 model, which the company positions as its "most capable" AI to date. The system focuses on scaling "risk agents" – automated systems that can identify problematic content, enforce platform policies, and ensure regulatory compliance.
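To make the "risk agent" idea concrete, here is a minimal, purely illustrative sketch of what such a pipeline could look like: content is scored against named policies and routed to an enforcement action. OpenAI has not published SafetyKit's API, so every name below (`risk_agent`, `Verdict`, the keyword matcher standing in for a model call) is a hypothetical placeholder, not the actual product interface.

```python
# Hypothetical sketch of a "risk agent" pipeline: score content against
# per-policy rules and route each hit to an action. The keyword matcher
# below is a stand-in for a real model-based classifier; none of these
# names come from OpenAI's SafetyKit.
from dataclasses import dataclass


@dataclass
class Verdict:
    policy: str    # which policy was checked, e.g. "fraud"
    flagged: bool  # whether the content matched that policy
    action: str    # "allow" or "review"


def risk_agent(text: str, policies: dict[str, list[str]]) -> list[Verdict]:
    """Check `text` against each policy's term list and return one verdict per policy."""
    lowered = text.lower()
    verdicts = []
    for name, terms in policies.items():
        hit = any(term in lowered for term in terms)
        verdicts.append(Verdict(name, hit, "review" if hit else "allow"))
    return verdicts


if __name__ == "__main__":
    rules = {"fraud": ["wire me money"], "self-harm": ["hurt myself"]}
    for v in risk_agent("Please wire me money today", rules):
        print(v)
```

In a production system the per-policy check would be a model call rather than keyword matching, and flagged items would flow into compliance and appeals queues; the routing structure, though, is the part the article's "risk agents" description implies.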

**Why It Matters**

As AI-generated content proliferates, platforms face growing pressure to moderate at a scale human reviewers alone cannot match.