# OpenAI Shows Self-Play Training Enables AI to Learn Complex Physical Skills Naturally
OpenAI announced a breakthrough in AI training methodology: artificial intelligence systems can learn sophisticated physical movements through competitive self-play, without requiring explicitly programmed instructions for each skill.
The research team discovered that when AI agents compete against themselves in simulated environments, they spontaneously develop complex abilities including tackling, ducking, faking, kicking, catching, and diving for balls. This approach eliminates the need for researchers to manually design training scenarios for each specific skill.
The key advantage of self-play is its automatic difficulty adjustment: the AI always faces an opponent at precisely the right skill level to promote continuous improvement. As the agent gets better, so does its opponent, creating an endless learning curve.
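The mechanism can be illustrated with a toy sketch, not OpenAI's actual system: a fictitious-play agent in rock-paper-scissors whose opponent is a periodically refreshed snapshot of the agent itself. All class and function names here are hypothetical, chosen only to show how the opponent's skill automatically tracks the learner's.

```python
from collections import Counter

# Each move and the move it defeats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

class FictitiousAgent:
    """Best-responds to the empirical distribution of moves it has seen."""

    def __init__(self):
        # Smoothed counts of opponent moves observed so far.
        self.seen = Counter({move: 1 for move in BEATS})

    def act(self):
        # Predict the opponent's most frequent move and play its counter.
        predicted = max(self.seen, key=self.seen.get)
        return next(m for m, loses in BEATS.items() if loses == predicted)

    def observe(self, opponent_move):
        self.seen[opponent_move] += 1

def self_play(rounds=3000, snapshot_every=100):
    agent = FictitiousAgent()
    opponent = FictitiousAgent()  # frozen copy, refreshed periodically
    for t in range(rounds):
        agent.observe(opponent.act())
        if t % snapshot_every == 0:
            # Replace the opponent with a snapshot of the current agent,
            # so the difficulty always tracks the learner's own skill.
            opponent.seen = agent.seen.copy()
    return agent.seen
```

Because the opponent is always a recent copy of the learner, the agent is never matched against a hopelessly stronger or weaker player; in this toy game the two strategies chase each other through the rock-paper-scissors cycle, the same automatic-curriculum effect the article describes.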
This finding builds on OpenAI's previous success using self-play to master Dota 2, a complex multiplayer video game. The consistency of results across both domains suggests that self-play is a broadly applicable training technique rather than one tied to any single task.