# OpenAI Enables GPT-4o Vision Fine-Tuning for Smarter Map Applications
OpenAI announced that developers can now fine-tune GPT-4o's vision capabilities to build more intelligent mapping applications, marking a significant expansion of the company's customization options for its flagship multimodal AI model.
The announcement, shared on OpenAI's official Twitter account, highlights that the vision fine-tuning feature allows developers to train GPT-4o to better understand and interpret visual map data. This means companies building navigation apps, geographic information systems, or location-based services can now customize the model to recognize specific map features, landmarks, or spatial patterns relevant to their use cases.
Previously, GPT-4o offered vision capabilities out of the box, but fine-tuning was limited primarily to text-based tasks. This update enables organizations to adapt the model's visual understanding to their specific needs, potentially improving accuracy for specialized visual tasks.
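To give a sense of what such customization involves: OpenAI's fine-tuning API accepts chat-formatted training examples in JSONL, and vision fine-tuning extends this by allowing image inputs in user messages. The sketch below builds one hypothetical training record for a mapping use case; the image URL, question, and answer are illustrative placeholders, not real data or an official sample.

```python
import json

def make_training_example(image_url: str, question: str, answer: str) -> str:
    """Serialize one chat-format training record (one JSONL line) that
    pairs a map image with the desired model response."""
    record = {
        "messages": [
            {
                "role": "user",
                "content": [
                    # Text part of the user turn
                    {"type": "text", "text": question},
                    # Image part: the map tile the model should learn to read
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            },
            # Target output the fine-tuned model should produce
            {"role": "assistant", "content": answer},
        ]
    }
    return json.dumps(record)

# Placeholder values for illustration only
line = make_training_example(
    "https://example.com/maps/tile_42.png",
    "What landmark is highlighted on this map tile?",
    "The highlighted landmark is the central railway station.",
)
print(line)
```

A training file would contain many such lines, each teaching the model to associate a specific kind of map imagery with the desired interpretation.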