# OpenAI Outlines Two-Pronged Strategy for AI Alignment Research
OpenAI has shared its core approach to ensuring artificial intelligence systems remain aligned with human values and intentions.
The AI research company announced it is focusing on two key areas: improving how AI systems learn from human feedback, and developing AI tools that can help humans better evaluate AI outputs. The ultimate goal is ambitious yet practical: creating an aligned AI system capable enough to help researchers solve all remaining AI alignment challenges.
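The announcement does not include implementation details, but the first pillar, learning from human feedback, is commonly built on reward models trained from pairwise human preferences. The sketch below is a minimal, illustrative example of that training step, assuming PyTorch; the `RewardModel` class, the `preference_loss` function, and the random stand-in embeddings are hypothetical simplifications, not OpenAI's actual code.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Hypothetical reward model: scores a response with a scalar.

    In practice this would be a full language model; a linear head over
    fixed-size embeddings keeps the sketch self-contained and runnable.
    """
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the reward of the response humans
    # preferred above the reward of the response they rejected.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy training step on random stand-in embeddings (placeholder data).
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

chosen = torch.randn(16, 128)    # embeddings of human-preferred responses
rejected = torch.randn(16, 128)  # embeddings of human-rejected responses

optimizer.zero_grad()
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
```

A reward model trained this way can then steer a policy model via reinforcement learning, which is what makes the quality of the underlying human feedback so central to the approach.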
This strategy represents a bootstrapping approach to one of AI's most critical problems. Rather than attempting to solve every alignment issue manually, OpenAI aims to build an AI assistant that becomes a partner in alignment research itself. The company is essentially working to create a tool that can help verify its own safety and that of future systems.
The announcement comes as AI capabilities rapidly advance, making alignment research increasingly urgent. By improving human feedback mechanisms, OpenAI hopes to maintain meaningful human oversight even as AI systems grow more capable.