# OpenAI Explores Trading Computing Power for Better AI Security
OpenAI has announced research on a new approach to making AI systems more resistant to adversarial attacks by using additional computing power during inference.
The concept, shared via the company's official Twitter account, focuses on "trading inference-time compute for adversarial robustness." In simpler terms, this means AI models can become more secure against malicious inputs by spending more processing time when generating responses.
Adversarial attacks, in which carefully crafted inputs trick models into producing incorrect or harmful outputs, are a significant concern in AI security. Traditional approaches try to build robustness into the model during training, but OpenAI's research suggests an alternative path.
By allocating more computational resources at inference time, when the model is actually being used, systems can better detect and defend against these attacks. This trade-off is particularly relevant as AI systems are deployed in sensitive applications where security is paramount.
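The announcement does not detail the exact mechanism, but one simple way extra inference-time compute can buy robustness is to query a model several times and aggregate the answers. The toy sketch below is purely illustrative, not OpenAI's method: `noisy_model` stands in for a model that an adversarial input can flip on any single pass, and a majority vote over more samples drives the error rate down.

```python
import random


def noisy_model(x, flip_prob=0.4, rng=None):
    """Toy stand-in for a model under adversarial pressure:
    an adversarial input flips the correct answer with
    probability flip_prob on any single forward pass."""
    rng = rng or random
    correct = x % 2  # ground-truth label for this toy task
    return correct if rng.random() > flip_prob else 1 - correct


def robust_answer(x, samples, rng=None):
    """Spend more inference-time compute: query the model
    `samples` times (odd, to avoid ties) and majority-vote."""
    votes = [noisy_model(x, rng=rng) for _ in range(samples)]
    return max(set(votes), key=votes.count)


def error_rate(samples, trials=2000, seed=0):
    """Fraction of trials where the voted answer is still wrong."""
    rng = random.Random(seed)
    wrong = sum(robust_answer(x, samples, rng) != x % 2
                for x in range(trials))
    return wrong / trials
```

Under these assumptions, `error_rate(1)` sits near the single-pass flip probability, while `error_rate(31)` is substantially lower: the attacker must now win a majority of independent passes, which is the basic compute-for-robustness trade the research describes.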