# OpenAI Introduces Security Features to Protect ChatGPT from Prompt Injection Attacks
OpenAI announced two new security features for ChatGPT: Lockdown Mode and Elevated Risk labels, designed to help organizations defend against emerging AI security threats.
The features specifically target prompt injection attacks and AI-driven data exfiltrationâtwo growing concerns as businesses integrate AI chatbots into their workflows. Prompt injection occurs when malicious actors manipulate AI systems through carefully crafted inputs to bypass safety measures or extract sensitive information.
Lockdown Mode appears to provide enhanced security controls for enterprise users, while Elevated Risk labels likely warn users when interactions may pose security concerns. These tools give organizations better visibility and control over how employees interact with AI systems.
The announcement comes as companies increasingly worry about sensitive data leaking through AI conversations. Employees might inadvertently share confidential information with chatbots, or attackers could use sophisticated