# OpenAI Says Social Scientists Critical for AI Safety Research
OpenAI has published a paper calling for social scientists to join AI safety efforts, marking a significant shift in how the company approaches alignment research.
The organization argues that technical AI alignment algorithms cannot succeed without expertise in human psychology, rationality, emotion, and cognitive biases. While engineers can build systems that theoretically align with human values, social scientists are needed to understand how real people actually think and behave when interacting with AI.
This represents an important acknowledgment that AI safety is not purely a technical problem. Advanced AI systems must work with humans as they are, complete with irrationality, emotional responses, and systematic biases, not with idealized versions of human decision-making.
OpenAI plans to hire social scientists for full-time positions focused on this work, aiming to foster deeper collaboration between machine learning researchers and experts in psychology, sociology, and related fields.
The move signals growing recognition that aligning AI with human values is an interdisciplinary challenge, not one that machine learning research can solve alone.