# OpenAI Launches Multimodal Moderation API Built on GPT-4o
OpenAI announced a major upgrade to its Moderation API, introducing a new model powered by GPT-4o that can detect harmful content across both text and images.
The enhanced moderation system marks a significant improvement in accuracy for identifying problematic content, giving developers more powerful tools to keep their platforms safe. Unlike the previous text-only version, this multimodal approach allows the API to analyze images alongside text, providing more comprehensive content screening.
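As a rough illustration of how text and images can be screened together, here is a minimal sketch using OpenAI's Python client. The model name `omni-moderation-latest` and the input shape follow OpenAI's public documentation for the upgraded endpoint; the forum-post text and image URL are placeholder values, and the live API call is only attempted when an API key is present.

```python
import os

# Build a multimodal moderation input: a text item and an image URL item,
# screened together in a single request.
def build_moderation_input(text, image_url):
    return [
        {"type": "text", "text": text},
        {"type": "image_url", "image_url": {"url": image_url}},
    ]

moderation_input = build_moderation_input(
    "Check out this post from our community forum.",   # placeholder text
    "https://example.com/user-upload.png",              # placeholder image
)

# Only call the API if credentials are configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=moderation_input,
    ).results[0]
    print("flagged:", result.flagged)
    print("categories:", result.categories)
```

The response reports a per-category verdict alongside an overall `flagged` boolean, so a platform can apply different policies (block, review queue, allow) per category rather than a single pass/fail.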
This upgrade matters for any developer building user-generated content platforms, social networks, or community features. With more accurate detection of harmful material, companies can better protect users while reducing false positives that might incorrectly flag legitimate content.
The GPT-4o foundation means the moderation model benefits from OpenAI's latest language understanding capabilities, potentially catching nuanced forms of harmful content that earlier systems might have missed.