# OpenAI and DeepMind Develop AI That Learns Goals from Human Feedback
OpenAI announced a breakthrough collaboration with DeepMind's safety team on teaching AI systems to understand human intentions without explicit programming.
The new algorithm represents a shift from traditional AI development, where engineers manually write "goal functions" that tell systems what to optimize for. This conventional approach has proven risky: when goals are oversimplified or slightly misspecified, AI can exhibit undesirable or even dangerous behavior while technically following its instructions.
Instead, the collaborative algorithm learns by comparison. Humans simply indicate which of two AI behaviors is preferable, and the system infers the underlying goal from these preferences. This approach sidesteps the challenge of perfectly articulating complex human values in code.
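The announcement doesn't include code, but the core idea of inferring a goal from pairwise preferences can be sketched with a simple Bradley-Terry-style reward model. The sketch below is a minimal illustration, not the teams' actual implementation: it assumes behaviors are summarized by feature vectors and fits a hypothetical linear reward function so that preferred behaviors score higher.

```python
import math
import random

def sigmoid(x):
    # Numerically stable logistic function.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def learn_reward(pairs, dim, lr=0.5, epochs=200):
    """Fit linear reward weights from pairwise human preferences.

    pairs: list of (feat_a, feat_b, pref), where pref = 1 means the
    human preferred behavior A over behavior B, else 0.
    Uses the Bradley-Terry model: P(A preferred) = sigmoid(R(A) - R(B)).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for fa, fb, pref in pairs:
            ra = sum(wi * x for wi, x in zip(w, fa))
            rb = sum(wi * x for wi, x in zip(w, fb))
            p = sigmoid(ra - rb)   # predicted P(A preferred over B)
            grad = pref - p        # gradient of the log-likelihood
            for i in range(dim):
                w[i] += lr * grad * (fa[i] - fb[i])
    return w

# Synthetic demo: the unstated "human goal" rewards the first feature.
random.seed(0)
pairs = []
for _ in range(100):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    pairs.append((a, b, 1 if a[0] > b[0] else 0))

w = learn_reward(pairs, dim=2)
# The learned weights rank behaviors by their first feature,
# recovering the goal without it ever being written down explicitly.
assert w[0] > abs(w[1])
```

The key property the sketch demonstrates is that the goal is never coded directly: only comparisons are supplied, and the reward function that explains those comparisons is recovered by maximum likelihood.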
The development addresses a fundamental AI safety concern: ensuring systems actually do what we want, not just what we literally tell them. As AI becomes more powerful, the gap between