One of the most notable announcements made during the 2025 GTC was Mistral Forge: a platform designed by Mistral AI to help enterprises train AI models on their own proprietary data, making their responses context-aware and therefore more helpful in enterprise use cases that depend on an organization's internal (and private) knowledge base.

Like other AI labs, such as Anthropic and OpenAI, Mistral is operating under the assumption that AI models fail in enterprise settings mostly because they are shipped (and deployed) without an adequate understanding of a company's internal terminology, workflows, or institutional knowledge. Forge aims to fix that by letting organizations train models from scratch using their own documentation, codebases, compliance policies, and operational records.

Common approaches to enriching AI models with private data, such as retrieval-augmented generation (RAG) or fine-tuning, start with a fully trained model and merely "layer" the new information on top of the model's prior knowledge. While effective, these methods do not offer granular control and may result in models that still display unwanted behaviors. By enabling full model training on internal data, Mistral Forge grants companies deeper control over model behavior and reduces dependence on third-party providers. In Mistral's words: "this approach allows organizations to treat AI models not simply as external tools, but as strategic assets that evolve alongside their knowledge, processes, and expertise."
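To make the "layering" idea concrete, here is a minimal, purely illustrative sketch of the RAG pattern: retrieve relevant private documents at query time and prepend them to the prompt, leaving the base model itself untouched. The keyword-overlap retriever and the document strings below are hypothetical simplifications (production systems use embedding-based retrieval), not anything from Mistral Forge.

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase and strip punctuation so word overlap is comparable."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q = _tokens(query)
    scored = sorted(documents, key=lambda d: len(q & _tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Layer retrieved private context on top of a frozen base model's prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal knowledge base entries.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "On-call rotation schedules live in our internal wiki.",
]
print(build_prompt("What is the refund policy?", docs))
```

The key contrast with Forge's approach: here the base model's weights never change, so the private knowledge lives only in the prompt, whereas full training bakes it into the model itself.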

Mistral Forge seems like a natural step for a company that has focused primarily on developing AI solutions tailored to enterprise clients rather than on widespread adoption by the general public. This strategy seems to be paying off for Mistral: CEO Arthur Mensch says the company is on track to surpass $1 billion in annual recurring revenue this year.