# OpenAI Shifts AI Safety Strategy from Blocking to Guided Responses
OpenAI announced a major change in how its AI models handle sensitive requests, moving away from outright refusals toward what it calls "safe-completions" in GPT-5.
The company shared that its new approach focuses on providing nuanced, helpful responses rather than simply blocking requests that might have both legitimate and harmful uses. This "output-centric safety training" aims to better handle so-called dual-use prompts: questions that could serve both beneficial and potentially dangerous purposes.
Previously, AI models like ChatGPT would often issue "hard refusals," completely declining to answer certain queries to err on the side of caution. While this protected against misuse, it also frustrated users with legitimate needs and made the AI less helpful overall.
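OpenAI has not published implementation details, but the difference between the two policies can be caricatured in a short sketch. Everything here is hypothetical: the keyword list, the `output_could_enable_harm` check, and the `redact_operational_detail` helper are invented for illustration only, not OpenAI's actual method.

```python
# Purely illustrative stubs -- not OpenAI's actual implementation.
def answer(prompt: str) -> str:
    """Stand-in for the model generating a draft answer."""
    return f"[model answer to: {prompt}]"

def output_could_enable_harm(text: str) -> bool:
    """Hypothetical output-level safety check on the draft answer."""
    return "step-by-step synthesis" in text

def redact_operational_detail(text: str) -> str:
    """Hypothetical helper: keep the high-level answer, drop risky detail."""
    return "[high-level explanation with operational details withheld]"

SENSITIVE_KEYWORDS = {"pathogen", "explosive", "exploit"}  # invented list

def hard_refusal_policy(prompt: str) -> str:
    """Old style: block on the *request* if it touches a sensitive topic."""
    if any(word in prompt.lower() for word in SENSITIVE_KEYWORDS):
        return "I can't help with that."
    return answer(prompt)

def safe_completion_policy(prompt: str) -> str:
    """New style: judge the *output* -- answer, then redact only if the
    draft itself would enable harm."""
    draft = answer(prompt)
    if output_could_enable_harm(draft):
        return redact_operational_detail(draft)
    return draft
```

On a legitimate dual-use question such as "How do pathogens spread?", the keyword-based hard refusal blocks the request outright, while the safe-completion path returns the (harmless) draft answer, which matches the behavior shift the announcement describes.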
The new safe-completions method trains the model to understand context better and provide carefully crafted responses that maintain helpfulness without enabling harm.