# OpenAI Warns of AI Language Models Being Weaponized for Disinformation
OpenAI announced a collaborative research effort examining how large language models could be exploited to spread disinformation at scale.
The AI company partnered with Georgetown University's Center for Security and Emerging Technology and Stanford Internet Observatory on the year-long study. The collaboration brought together 30 experts including disinformation researchers, machine learning specialists, and policy analysts at an October 2021 workshop.
The resulting report identifies specific threats that language models pose to the information environment. These AI systems could be misused to generate convincing fake content, automate disinformation campaigns, or create personalized misleading narratives at unprecedented scale.
**Why it matters:** As language models become more sophisticated and accessible, bad actors could leverage them to flood social media and news platforms with AI-generated propaganda. The report introduces a framework for analyzing such threats and implementing safeguards against this kind of misuse.