# OpenAI's AI Can Now Help Humans Spot Mistakes in AI-Generated Content
OpenAI announced a breakthrough in AI oversight: they've developed models that can identify and describe flaws in AI-generated summaries, making it significantly easier for humans to catch errors.
The company trained specialized "critique-writing" models that point out problems in text summaries. When human reviewers were shown these AI-generated critiques, they found substantially more flaws than when reviewing the content alone.
A key finding: larger AI models are better at critiquing their own work than smaller ones. Interestingly, the research shows that increasing model size improves critique-writing ability more than it improves the original summary-writing task itself.
**Why it matters:** As AI systems become more capable and handle increasingly complex tasks, human oversight becomes harder. This research suggests a promising path forward: using AI to help humans supervise AI. Rather than humans evaluating AI outputs unaided, AI-written critiques can direct reviewers to flaws they would otherwise miss.