
Mistral’s Magistral release pairs an enterprise-grade proprietary model with a 24B-parameter open-source version, underscoring its bid to lead both the high-performance AI market and the open innovation community at once.
French artificial intelligence startup Mistral has launched a new family of large language models (LLMs) called Magistral, marking its formal entry into the fast-growing domain of reasoning-capable AI. This new generation of models is designed not only to handle basic language tasks but also to tackle more complex cognitive challenges through deliberate thought and reflection.
The Magistral release includes two key offerings: Magistral Medium, a proprietary model intended for enterprise use, and Magistral Small, a 24-billion parameter model released as open source under the permissive Apache 2.0 license. This dual-pronged strategy underscores Mistral’s ambition to dominate both the open innovation community and the high-performance enterprise AI market.
Magistral Small represents a return to Mistral’s open-source roots after facing criticism for leaning into closed models like its Medium 3, which launched in May 2025 as a fully proprietary product. By adopting the Apache 2.0 license for Magistral Small, Mistral has opened the doors for unrestricted commercial and non-commercial use. The license allows developers and companies to modify, integrate, and deploy the model freely, without licensing costs or vendor lock-in. This move is likely to restore goodwill within the developer ecosystem and renew trust among those concerned that Mistral was drifting toward a more closed business model, similar to that of OpenAI or Anthropic.
Enterprise-ready AI with precision
Mistral also demonstrated that its latest models can compete with some of the biggest players in the AI field. On the AIME-24 benchmark, which tests mathematical reasoning, Magistral Medium scored 73.6% accuracy. When enhanced with majority voting—a method where multiple outputs are generated and the most common answer is selected—the score increased dramatically to 90%. The model also performed strongly on other rigorous tests, including GPQA Diamond, which focuses on graduate-level question answering, and LiveCodeBench, which evaluates programming tasks. These results position Magistral Medium as a top-tier model in the reasoning category, capable of going head-to-head with offerings from DeepSeek and other major AI labs.
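The majority-voting technique described above is straightforward to sketch. The snippet below is an illustrative outline, not Mistral's evaluation code: `sample_fn` stands in for a stochastic model call, and the fixed list of outputs is hypothetical data used for demonstration.

```python
from collections import Counter

def majority_vote(sample_fn, prompt, n):
    """Majority voting: generate n candidate answers for the same prompt
    and keep the answer that appears most often."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stand-in for a stochastic model call; in real use each call would
# sample the LLM at nonzero temperature.
fixed_outputs = iter(["42", "41", "42", "42", "40", "42", "42", "42"])
result = majority_vote(lambda p: next(fixed_outputs), "What is 6 * 7?", n=8)
print(result)  # prints 42
```

Because occasional wrong answers rarely agree with each other while correct answers tend to coincide, voting over many samples filters out sporadic mistakes, which is consistent with the jump from 73.6% to 90% reported above.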
While Magistral Small caters to open-source developers, Magistral Medium is targeted squarely at enterprises with mission-critical demands. It is now accessible through Mistral’s Le Chat interface and La Plateforme API, and is being made available on Amazon SageMaker, with support for Google Cloud, Azure AI, and IBM watsonx on the way. In terms of pricing, Magistral Medium is positioned as a premium product: it costs $2 per million input tokens and $5 per million output tokens, making it more expensive than Mistral’s previous models but still significantly more affordable than competitors like Claude Opus 4 from Anthropic, which costs considerably more for equivalent output.
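To make the quoted rates concrete, here is a minimal cost estimate using the $2 and $5 per-million-token figures above; the token counts for the example request are hypothetical.

```python
# Quoted Magistral Medium rates, expressed per token.
INPUT_RATE = 2.00 / 1_000_000   # $ per input token
OUTPUT_RATE = 5.00 / 1_000_000  # $ per output token

def request_cost(input_tokens, output_tokens):
    """Estimate the cost of one API request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical workload: a 2,000-token prompt and a 1,000-token answer.
cost = request_cost(2_000, 1_000)
print(f"${cost:.4f} per request")  # prints $0.0090 per request
```

Note that reasoning models tend to emit long chains of thought, so output tokens (the more expensive side of the ledger) often dominate the bill.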
Transparent, fast, and multilingual AI
In a market where many AI models operate as “black boxes,” Mistral is prioritizing transparency with the Magistral line. The models are designed to show their chain of reasoning, making their logical steps visible and traceable. This is particularly valuable in industries like law, healthcare, and finance, where interpretability and auditability are essential. Furthermore, Mistral emphasized the multilingual proficiency of Magistral, highlighting strong performance across several major global languages, including French, German, Spanish, Arabic, Russian, Italian, and Simplified Chinese.
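For auditability of the kind described above, a consuming application needs to separate the reasoning trace from the final answer. The sketch below assumes the trace is delimited by explicit tags (`<think>`/`</think>` here are an assumption for illustration, not Mistral's documented output format).

```python
import re

def split_reasoning(text, open_tag="<think>", close_tag="</think>"):
    """Split a model response into (reasoning trace, final answer).
    The tag names are assumed for illustration; adapt them to the
    actual delimiters your model emits."""
    pattern = re.escape(open_tag) + r"(.*?)" + re.escape(close_tag)
    match = re.search(pattern, text, flags=re.DOTALL)
    if not match:
        return None, text.strip()  # no trace found; return answer only
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

sample = "<think>73.6 is less than 90, so voting helps.</think>Majority voting raises accuracy."
trace, answer = split_reasoning(sample)
print(trace)   # the auditable reasoning steps
print(answer)  # the user-facing answer
```

Logging the trace separately lets regulated teams review the model's stated logic without exposing it to end users.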
Performance is another critical selling point. With the introduction of “Think mode” and “Flash Answers,” Magistral Medium delivers token throughput up to ten times that of competing models, enabling real-time interactions even for complex reasoning tasks. These advancements significantly enhance the usability of the models in fast-paced environments such as live customer service, trading desks, and real-time data analysis.
Magistral is not just about raw performance—it also supports a broad array of use cases, from scientific simulations and legal research to creative writing and software development. In demonstrations, the model has successfully generated complex physics simulations and produced creative content with both coherence and imaginative flair. Through Magistral, Mistral is signaling that the future of AI reasoning will not only be powerful but also more open, versatile, and inclusive than ever before.