# OpenAI Releases Safety Documentation for GPT-5.1-Codex-Max
OpenAI has published a system card detailing the safety measures built into GPT-5.1-Codex-Max, its latest code-generation AI model.
The documentation outlines a two-tier approach to keeping the powerful coding assistant safe. At the model level, OpenAI has implemented specialized training to prevent the AI from performing harmful tasks and to resist prompt injection attacks, a common exploit in which malicious instructions, embedded in user input or in content the model processes, attempt to override the AI's safety guidelines.
At the product level, the company has added technical safeguards including agent sandboxing, which isolates the AI's operations to prevent unintended system access, and configurable network access controls that limit what the model can reach online.
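The system card does not publish OpenAI's actual configuration, but the idea behind configurable network access controls can be illustrated with a minimal sketch: a host allowlist that the sandboxed agent must consult before making any outbound request. The hosts and function names here are hypothetical examples, not OpenAI's implementation.

```python
# Hypothetical sketch of a network allowlist for a sandboxed coding agent.
# The policy below is an illustrative example, not OpenAI's actual config.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"pypi.org", "github.com"}  # e.g. permit package installs and git fetches

def is_request_allowed(url: str, allowed_hosts: set = ALLOWED_HOSTS) -> bool:
    """Return True only if the URL's hostname is on the allowlist."""
    host = urlparse(url).hostname or ""
    return host in allowed_hosts

# The agent may reach an approved package index but not an arbitrary host.
print(is_request_allowed("https://pypi.org/simple/requests/"))  # True
print(is_request_allowed("https://evil.example.com/payload"))   # False
```

In a real deployment, a check like this would sit at the sandbox boundary (for example, in an egress proxy), so the model's code cannot bypass it from inside the isolated environment.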
System cards have become standard practice in AI development, providing transparency about how companies test and secure their models before release. This is particularly important for code-generation models.