Chinese AI startup DeepSeek has unveiled its V3.2 and V3.2-Speciale models, claiming performance comparable to frontier systems like GPT-5 and Gemini 3 Pro while maintaining lower costs and full accessibility through open-source licensing.
Chinese artificial intelligence company DeepSeek has introduced two new models—V3.2 and V3.2-Speciale—marking its latest bid to challenge global AI leaders. The company says the upgraded models deliver performance on par with systems such as GPT-5, Claude Sonnet 4.5 and Gemini 3 Pro across tasks including coding, tool use and reasoning, while remaining openly accessible for developers.
The V3.2-Speciale variant recorded particularly strong results, achieving gold-medal-level scores on evaluations based on the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI), signalling major strides in advanced problem-solving capabilities.
Three technical breakthroughs power the V3.2 lineup
According to DeepSeek, the enhanced performance stems from three core innovations underpinning the V3.2 architecture: DeepSeek Sparse Attention (DSA), a scalable reinforcement learning framework and a large-scale agentic task-synthesis pipeline.
The company claims DSA dramatically lowers computational load while maintaining accuracy, particularly for long-context tasks. The mechanism splits the attention computation into two components, so that each token attends to a small, dynamically selected subset of the context rather than to every preceding token, making long-sequence processing markedly more efficient.
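The sketch below gives a rough sense of how a two-stage mechanism of this kind can reduce cost. It is an illustrative example rather than DeepSeek's published implementation: the low-dimensional indexer projections, the top-k selection, and the dimensions are assumptions chosen for clarity.

```python
# Illustrative two-stage sparse attention (assumed design, not DeepSeek's actual code):
# a cheap "indexer" scores all cached tokens, then exact attention runs only over the
# top-scoring subset instead of the full context.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(query, keys, values, index_query, index_keys, top_k=64):
    """query: (d,), keys/values: (T, d), index_query: (d_idx,), index_keys: (T, d_idx)."""
    # Stage 1: lightweight relevance scores over all T cached tokens (the indexer).
    scores = index_keys @ index_query              # (T,)
    top = np.argsort(scores)[-top_k:]              # keep only the top_k most relevant tokens

    # Stage 2: exact attention restricted to the selected subset,
    # cost O(top_k * d) instead of O(T * d).
    attn = softmax(keys[top] @ query / np.sqrt(keys.shape[-1]))
    return attn @ values[top]

# Toy usage: 4096 cached tokens, model dimension 128, indexer dimension 16.
T, d, d_idx = 4096, 128, 16
rng = np.random.default_rng(0)
out = sparse_attention(rng.normal(size=d), rng.normal(size=(T, d)),
                       rng.normal(size=(T, d)), rng.normal(size=d_idx),
                       rng.normal(size=(T, d_idx)))
print(out.shape)  # (128,)
```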
Both models use DeepSeek’s Mixture-of-Experts (MoE) transformer architecture, incorporating 671 billion total parameters with 37 billion active per token. DeepSeek noted that Sparse Attention is the only structural modification introduced during continued pre-training, allowing most of the original architecture to remain intact.
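Those figures mean only a small fraction of the network is exercised for any given token. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Sketch using only the parameter counts cited in this article.
total_params = 671e9    # total parameters across all experts
active_params = 37e9    # parameters activated per token
print(f"Active per token: {active_params / total_params:.1%}")  # ~5.5%
```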
The V3.2 release also updates the model’s chat template with improved tool-calling protocols and a new “thinking with tools” feature designed to enhance reasoning workflows.
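DeepSeek's actual template is not detailed here, but tool calling in open models typically follows a message structure along these lines; the tool name, arguments, and schema below are purely hypothetical and shown only to illustrate the workflow.

```python
# Hypothetical tool-calling exchange in the OpenAI-compatible style many open models expose.
# The function, arguments, and values are invented for illustration, not DeepSeek's template.
tool_definition = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

messages = [
    {"role": "user", "content": "What's the weather in Hangzhou?"},
    # Under a "thinking with tools" workflow, the model may interleave reasoning
    # with tool calls like this before producing its final answer.
    {"role": "assistant", "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Hangzhou"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 21}'},
]
print(messages[-1])
```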
DeepSeek builds on momentum from earlier releases
The company first captured global attention in January with its V3 and R1 models, which gained traction for achieving high-tier performance while remaining open source—an approach that allowed developers to freely build, customise and deploy the models.
With the V3.2 series, DeepSeek aims to strengthen its position as a credible competitor to premium, proprietary AI systems, while advancing its strategy of delivering cutting-edge capabilities at significantly reduced computational cost.