# OpenAI Teaches AI Models to Say "I Don't Know"
OpenAI announced a new development in making AI systems more honest about what they don't know. The company is now teaching its language models to express uncertainty using plain language.
Instead of confidently stating incorrect information, these updated models can now indicate when they're unsure about an answer. This addresses one of the biggest problems in AI: hallucinations, where models generate false information while appearing completely confident.
The change means AI assistants might respond with phrases like "I'm not certain about this" or "This information may not be accurate" when they lack confidence in their knowledge. This transparency helps users make better decisions about whether to trust or verify the AI's responses.
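The behavior described above can be pictured as a simple abstention rule: serve the answer when an internal confidence score clears a threshold, and hedge otherwise. This is purely an illustrative sketch, not OpenAI's actual implementation; the function name, scores, and threshold are all invented for the example.

```python
# Hypothetical sketch of confidence-based hedging.
# The threshold and scores are made up for illustration only.
CONFIDENCE_THRESHOLD = 0.75

def answer_with_uncertainty(answer: str, confidence: float) -> str:
    """Return the answer as-is when confident, otherwise prefix a hedge."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return f"I'm not certain about this, but: {answer}"

# A high-confidence answer passes through unchanged;
# a low-confidence one gets the uncertainty phrasing.
print(answer_with_uncertainty("Paris is the capital of France.", 0.98))
print(answer_with_uncertainty("The bridge opened in 1932.", 0.40))
```

In a real system the confidence signal would come from the model itself (for example, calibrated probabilities), not a hand-set number, but the user-facing effect is the same: uncertain answers arrive with an explicit caveat.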
**Why it matters:** This development could significantly reduce the spread of AI-generated misinformation. When models can communicate their own limitations, users are less likely to be misled by convincing but incorrect answers.