
Mistral Unveils New Line of AI Models
French AI startup Mistral has introduced a new line of Mistral 3 models with open weights.
Introducing the Mistral 3 family of models: Frontier intelligence at all sizes. Apache 2.0. Details in 🧵
— Mistral AI (@MistralAI) December 2, 2025
The series of 10 neural networks includes one large advanced LLM with multimodal and multilingual capabilities. The other nine are smaller, capable of operating autonomously, and tailored for specific tasks.
“Our clients sometimes happily start with a very large model that requires no tuning. After integration, they realize it is expensive and slow, so they turn to us for smaller solutions,” Mistral co-founder and chief scientist Guillaume Lample noted in an interview.
Founded by former DeepMind and Meta employees, the startup positions itself as a key European competitor to AI giants from the US and China. The company has raised a total of $2.7 billion with a valuation of $13.7 billion.
According to Lample, initial tests may give the impression that Mistral’s smaller models lag behind their counterparts. However, he called such conclusions misleading: large neural networks do show high metrics “out of the box,” but the true effectiveness of a solution only becomes clear after fine-tuning.
“In many cases, results comparable to closed-source models can be achieved, or even surpassed,” he said.
Swarm of Experts
The flagship Large 3 is built on a Granular Mixture of Experts architecture. Unlike the classic Mixture of Experts approach, which routes each token to a handful of large “experts,” the granular design divides the network into many small specialized modules.
Of the model’s 675 billion total parameters, only 41 billion are active at a time. Combined with a context window of 256,000 tokens, this configuration makes the model fast on large documents and efficient at agentic tasks.
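The idea behind sparse routing can be illustrated in a few lines. The following is a minimal sketch, not Mistral's implementation: a router scores every expert per input, only the top-k experts actually run, and their outputs are blended by softmax weights. The dimensions and the numbers of experts here are toy values chosen for illustration.

```python
import numpy as np

def moe_layer(x, experts_w, gate_w, k=4):
    """Sparse mixture-of-experts forward pass for one token.

    x: (d,) input vector; experts_w: (E, d, d) expert weight matrices;
    gate_w: (E, d) router weights. Only the top-k experts are evaluated.
    """
    logits = gate_w @ x                      # one router score per expert
    topk = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()                 # softmax over selected experts
    # Only k of E experts run: this is the "active" vs "total" parameter gap
    return sum(w * (experts_w[e] @ x) for w, e in zip(weights, topk))

rng = np.random.default_rng(0)
d, E, k = 16, 32, 4                          # 32 small experts, 4 active
x = rng.normal(size=d)
experts = rng.normal(size=(E, d, d)) / np.sqrt(d)
gate = rng.normal(size=(E, d))
y = moe_layer(x, experts, gate, k)
active_frac = k / E
print(f"output dim: {y.shape[0]}, active experts: {k}/{E} ({active_frac:.0%})")
```

A "granular" design in this framing simply means a large E with small per-expert matrices, so the router can mix many narrow specialists while the compute per token stays bounded by k.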
The remaining nine neural networks are models of three sizes — 14 billion, 8 billion, and 3 billion parameters. There are three configuration options:
- Base — a pre-trained base model;
- Instruct — optimized for chat, conversations, and workflows;
- Reasoning — tuned for complex logical and analytical tasks.
This range provides developers and enterprises with flexibility in selecting models according to their needs.
According to Mistral, Ministral 3 matches or even surpasses competitors with open weights. It is also more efficient and generates fewer tokens for equivalent tasks.
Lample stated that the model can run on a single graphics processor. This allows it to operate on commodity hardware: from local servers and laptops to robots and edge devices.
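Back-of-the-envelope arithmetic shows why a small model fits on one GPU. The sketch below (illustrative, not an official sizing guide) computes weight-only memory for a dense model at common precisions; real usage is higher once the KV cache and activations are included.

```python
def model_size_gb(params_billion, bits_per_param):
    """Approximate weight-only memory for a dense model.

    Excludes KV cache and activations, so actual GPU memory use is higher.
    """
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal gigabytes

for bits in (16, 8, 4):
    print(f"3B params @ {bits}-bit ≈ {model_size_gb(3, bits):.1f} GB")
```

At 16-bit precision a 3-billion-parameter model needs about 6 GB of weights; quantized to 4 bits it drops to roughly 1.5 GB, which is why such models are plausible targets for laptops and edge devices.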
This is important for enterprises that store data on their own servers, remote robotics teams, and students working offline.
“Part of our mission is to ensure AI accessibility for everyone, especially people without internet. We do not want the technology to be controlled by just a few large labs,” Lample emphasized.
Physical AI
Mistral is increasingly focusing on physical AI. Since the beginning of the year, the company has been integrating compact models into robots, drones, and vehicles.
The firm collaborates with:
- Singapore’s Home Team Science and Technology Agency on specialized models for robotics, cybersecurity, and fire-safety systems;
- German defense technology startup Helsing for integrating technologies into drones;
- Automaker Stellantis to enhance automotive AI assistants.
Back in July, SAP SE CEO Christian Klein stated that Europe does not need a surge of new data centers to compete in the AI field.