# AI Models Get Worse Before They Get Better, OpenAI Research Reveals
OpenAI has documented a puzzling phenomenon called "double descent" that challenges conventional wisdom about training artificial intelligence models.
The research shows that AI performance doesn't improve in a straight line, as one might expect. Instead, models follow a surprising pattern: they get better, then worse, then better again as they grow larger, train on more data, or train for longer.
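The shape is easy to reproduce in a toy setting. The sketch below is a standard illustration using random Fourier features and minimum-norm least squares; it is not OpenAI's experimental setup, and every constant in it is illustrative. As the number of random features grows past the size of the training set, test error typically falls, spikes, then falls again:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: y = sin(2*pi*x) plus observation noise.
n_train, n_test, noise = 40, 500, 0.2
x_train = rng.uniform(-1.0, 1.0, n_train)
x_test = rng.uniform(-1.0, 1.0, n_test)
y_train = np.sin(2 * np.pi * x_train) + noise * rng.standard_normal(n_train)
y_test = np.sin(2 * np.pi * x_test)

for n_features in (5, 10, 20, 40, 80, 160, 640):
    # Random Fourier features phi_j(x) = cos(w_j * x + b_j), drawn once
    # per model size and shared between train and test.
    w = rng.normal(0.0, 10.0, n_features)
    b = rng.uniform(0.0, 2 * np.pi, n_features)
    phi_train = np.cos(np.outer(x_train, w) + b)
    phi_test = np.cos(np.outer(x_test, w) + b)
    # lstsq returns the minimum-norm fit; once n_features exceeds n_train,
    # that implicit bias is what pulls test error back down and produces
    # the second descent.
    coef, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)
    mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"{n_features:4d} features -> test MSE {mse:.4f}")
```

The spike near 40 features, where the model has just enough capacity to memorize the 40 training points, is the "worse" phase; beyond it, larger models generalize better again.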
This counterintuitive behavior appears across multiple AI architectures, including convolutional neural networks (CNNs), ResNets, and transformers, the technology behind ChatGPT and similar systems. The effect is typically masked in production systems through careful regularization techniques that smooth out the performance dip.
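Regularization's masking effect is just as easy to sketch. Adding a small ridge (L2) penalty to the fit above, reusing the data and feature generator from the previous block, typically flattens the spike at the interpolation threshold; the penalty strength here is illustrative, not drawn from any production recipe:

```python
lam = 1e-3  # illustrative ridge strength; larger values flatten the curve more

for n_features in (5, 10, 20, 40, 80, 160, 640):
    w = rng.normal(0.0, 10.0, n_features)
    b = rng.uniform(0.0, 2 * np.pi, n_features)
    phi_train = np.cos(np.outer(x_train, w) + b)
    phi_test = np.cos(np.outer(x_test, w) + b)
    # Closed-form ridge solution: (Phi^T Phi + lam * I)^{-1} Phi^T y.
    # The penalty damps the ill-conditioned directions that cause the
    # test-error spike near n_features == n_train.
    A = phi_train.T @ phi_train + lam * np.eye(n_features)
    coef = np.linalg.solve(A, phi_train.T @ y_train)
    mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"{n_features:4d} features (ridge) -> test MSE {mse:.4f}")
```

A system tuned this way shows a smooth learning curve, which is why the double-descent bump rarely surfaces in practice.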
**Why it matters:** This discovery has significant implications for AI development. It suggests that abandoning a model during its "worse" phase could mean missing out on breakthrough performance.