# OpenAI Tackles Prompt Injection Attacks on AI Systems
OpenAI has published new guidance on prompt injections, describing them as a "frontier security challenge" for artificial intelligence systems.
Prompt injections are attacks where malicious actors manipulate AI models by embedding hidden instructions in content the AI processes. For example, an attacker could hide commands in a webpage or document that trick an AI assistant into ignoring its safety guidelines or leaking sensitive information.
The company announced it is addressing this vulnerability through three main approaches: advancing research into detection methods, training models to be more resistant to manipulation, and building protective safeguards directly into their products.
**Why it matters:** As AI assistants become more integrated into daily workflowsâreading emails, browsing websites, and accessing personal dataâprompt injection attacks pose serious privacy and security risks. Unlike traditional software vulnerabilities that can be patched, prompt injections exploit the fundamental way language models interpret instructions,