# OpenAI Adds Vision Fine-Tuning to GPT-4o API
OpenAI announced today that developers can now fine-tune GPT-4o using both images and text through its fine-tuning API, marking a significant expansion of customization options for the company's flagship multimodal model.
Previously, fine-tuning was limited to text-only training data. This update allows developers to train GPT-4o on custom image datasets alongside text, enabling the model to better understand specific visual contexts relevant to their applications.
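As a sketch of what an image-and-text training example looks like, the snippet below builds one line of a fine-tuning JSONL file in the standard chat format, with the image supplied as an `image_url` content part. The exact schema is defined in OpenAI's fine-tuning documentation; the prompt text, image URL, and filename here are placeholders.

```python
import json

# One training example in chat format. Images are passed as "image_url"
# content parts (a hosted URL or a base64 data URI); all names and
# values below are illustrative placeholders.
example = {
    "messages": [
        {"role": "system", "content": "You classify product photos."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/widget.jpg"},
                },
            ],
        },
        {"role": "assistant", "content": "Model X-100 widget."},
    ]
}

# The fine-tuning API expects one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```

From there, the usual flow is to upload the file with `purpose="fine-tune"` and start a job against a GPT-4o model snapshot via the fine-tuning jobs endpoint, as described in OpenAI's docs.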
The enhancement matters for businesses and developers building specialized AI applications that require visual understanding. Companies can now teach GPT-4o to recognize industry-specific images, interpret custom diagrams, or analyze visual data unique to their domain, all while maintaining the model's powerful language capabilities.
Potential use cases include medical imaging analysis, retail product recognition, manufacturing quality control, and document processing with complex layouts. By training