Open source on the rise: DeepSeek R1-0528 aims to rival the most advanced artificial intelligence systems

On May 28, 2025, DeepSeek, an emerging AI player in China, unveiled DeepSeek R1-0528, a powerful update to its open-source model that aims to compete with top-performing proprietary models such as GPT-4.5 (OpenAI) and Gemini 2.5 Pro (Google). Based on a Mixture-of-Experts (MoE) architecture, the R1-0528 model features 236 billion parameters, with routing that activates 13 billion per query, enabling efficiency comparable to large dense models while keeping inference costs under control [1].
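To make the routing idea concrete, here is a minimal, illustrative sketch of top-k expert routing in PyTorch. The layer sizes, expert count, and top-k value are toy numbers chosen for readability, not DeepSeek's actual configuration.

```python
# Minimal sketch of Mixture-of-Experts routing (illustrative only; not
# DeepSeek's implementation). All dimensions here are toy values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep top-k experts
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run; the rest of the layer's
        # parameters stay idle for this token.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

The key point is that the router selects only a few experts per token, so most of the layer's parameters sit idle for any given query; this is why a 236-billion-parameter MoE can have per-token costs closer to a much smaller dense model.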

This version was pre-trained on a massive corpus of 6 trillion tokens spanning multilingual, scientific, and technical data. DeepSeek is clearly betting on a high-performance, open-source foundation model that can be adapted for vertical, industrial, or educational applications.

DeepSeek R1-0528 stands out on several public benchmarks, such as those tracked by Hugging Face's Open LLM Leaderboard [2].

These results demonstrate a high level of technical sophistication for an open-source model, which is often perceived as less capable than proprietary solutions. By leveraging the MoE architecture, DeepSeek optimizes adaptability, scalability, and inference speed.
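A back-of-envelope calculation, taking the parameter counts quoted above at face value, shows where the inference savings come from: a common rough proxy for per-token compute is the number of active parameters.

```python
# Rough per-token compute comparison, using the article's figures as
# given (not independently verified). Active parameters are a common
# proxy for per-token FLOPs.
total_params = 236e9   # total parameters (as quoted above)
active_params = 13e9   # parameters activated per query (as quoted above)

print(f"Active fraction: {active_params / total_params:.1%}")
# -> Active fraction: 5.5%
```

On these figures, each token engages roughly 5.5% of the model's weights, so the per-token cost sits closer to that of a ~13B dense model while the full parameter pool remains available as specialized experts.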

The flexibility of DeepSeek R1-0528 opens up new possibilities across several sectors, from vertical industry use cases to education.

Startups in Asia and Europe are already testing the model, whose weights are freely downloadable from Hugging Face and can be served with open-source inference engines such as vLLM.
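As a sketch of what such a test might look like, the snippet below serves the model locally with vLLM. The Hugging Face model id and the tensor-parallel degree are assumptions to verify against the official model card; a model of this size requires a multi-GPU node.

```python
# Hedged sketch: local inference with vLLM. The model id
# "deepseek-ai/DeepSeek-R1-0528" and tensor_parallel_size=8 are
# assumptions to check against the model card and your hardware.
from vllm import LLM, SamplingParams

llm = LLM(model="deepseek-ai/DeepSeek-R1-0528", tensor_parallel_size=8)
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(
    ["Explain mixture-of-experts routing in two sentences."], params
)
print(outputs[0].outputs[0].text)
```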

The open-source release of DeepSeek R1-0528 is part of a broader strategy for technological self-reliance. Against a backdrop of AI models being concentrated in the hands of a few major U.S. companies, this type of initiative strengthens the sovereign and interoperable development capabilities of public and private sector actors.

Like Mistral AI and Falcon LLM, DeepSeek contributes to an ecosystem in which open standards and transparent models are set to play a central role in the democratization and regulation of AI.

DeepSeek also reaffirms its commitment to responsible AI.

Rather than pitting innovation against regulation, DeepSeek advocates for open cooperation, in which technological performance operates within a transparent, well-documented framework that complies with the growing requirements of the European AI Act [3].

The arrival of DeepSeek R1-0528 reflects a broader trend: the rise of high-performance open-source models. Far from undermining the industry giants, these alternatives foster innovation, healthy competition, and the acceleration of industry standards.

As Meta continues to iterate on its Llama models and Mistral announces a future multilingual MoE, open source is emerging as a credible and strategic path for companies seeking to integrate generative AI into their processes while maintaining control and transparency.

1. DeepSeek. (2025). DeepSeek-V2 repository. https://github.com/deepseek-ai/DeepSeek-V2

2. Hugging Face. (2025). Open LLM Leaderboard. https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard

3. European Commission. (2024). Regulatory framework for AI. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
