An unexpected move: OpenAI returns to open source
Since 2019, OpenAI had moved away from its initial commitment to open source, opting instead for a more commercial and controlled strategy centered on its proprietary models (GPT-3, GPT-4, and later GPT-4o). It was therefore with some surprise that the AI community greeted, in July 2025, the launch of GPT-OSS, a family of models released under an open-source license [1].
Against a backdrop shaped by the rise of open-source competitors such as Mistral, Meta (LLaMA), and DeepSeek, this decision marks a tactical shift for OpenAI and reignites the debate over the role of open source in the development of advanced artificial intelligence.
GPT-OSS: What exactly is it?
The GPT-OSS family includes two models:
- GPT-OSS-1.0B, a small model with 1 billion parameters
- GPT-OSS-3.6B, a mid-sized model with 3.6 billion parameters
These models were trained on filtered multilingual corpora, with a particular focus on data diversity and the safety of the generated content. Although compact, they achieve competitive results on several standard evaluation benchmarks [2]:
| Benchmark | GPT-OSS-3.6B | Reference model (similar size) |
|---|---|---|
| MMLU | 61.2% | Mistral-7B (63.9%) |
| ARC (reasoning) | 74.4% | LLaMA-3-8B (75.1%) |
| GSM8K (mathematics) | 66.0% | DeepSeek Coder-6B (68.5%) |

Results as of July 2025, from independent evaluations (EleutherAI, LMSys) [3].
The models are available on GitHub and Hugging Face, along with comprehensive documentation and integration examples for open-source frameworks such as LangChain.
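Assuming the weights follow standard Hugging Face conventions, loading one of the models for local text generation might look like the sketch below. The repository id `openai/gpt-oss-3.6b` is hypothetical, not confirmed by the release; substitute the id published on the official model card.

```python
# Minimal sketch of loading a GPT-OSS checkpoint with the Hugging Face
# `transformers` library. The repository id used below is hypothetical.

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a simple completion-style prompt.
    GPT-OSS models are described as base (text-only) models, so no
    chat template is assumed here."""
    return f"Instruction: {instruction}\nResponse:"

if __name__ == "__main__":
    # The heavy imports and the model download are kept inside the main
    # guard so the prompt helper above can be reused without them.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "openai/gpt-oss-3.6b"  # hypothetical repository id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)

    inputs = tokenizer(
        build_prompt("Summarize the MIT license in one sentence."),
        return_tensors="pt",
    )
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because these are base models rather than instruction-tuned assistants, prompts generally work better in a plain completion style, as shown here.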
A model of measured openness
Although released under an MIT license, the GPT-OSS models are not intended to compete with GPT-4o. They do not include vision, speech generation, or advanced fine-tuning capabilities. These are base models focused on text understanding and generation.
OpenAI notes that these models are provided for research, prototyping, and security testing purposes, particularly to promote auditability and responsible experimentation. The weight files are available without restriction, but large-scale use requires explicit attribution.
This strategic decision reflects an attempt to balance a measure of transparency with risk management, amid intense regulatory pressure.
Implications for the AI community
The release of GPT-OSS opens up several concrete possibilities:
- Access to compact yet powerful models, suitable for embedded or on-premises use cases
- Reproducibility of academic experiments, thanks to a fully documented model
- Easy integration into open-source processing pipelines (transformers, NLP tools, specialized chatbots)
For many researchers and developers, these models offer an acceptable balance between performance, size, and flexibility. GPT-OSS could therefore be used as a foundation for educational projects, industry-specific assistants, or robustness testing.
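As an illustration of the embedded or on-premises scenario mentioned above, a local service could wrap one of the compact models with the `transformers` pipeline API. This is a hedged sketch: the repository id `openai/gpt-oss-1.0b` is illustrative, and the prompt-stripping helper is an added convenience, not part of any official example.

```python
# Hedged sketch: running a compact GPT-OSS model locally via the
# `transformers` pipeline API. The repository id below is hypothetical.

def strip_prompt(generated: str, prompt: str) -> str:
    """Remove the echoed prompt from a completion-style generation,
    returning only the newly generated text."""
    if generated.startswith(prompt):
        return generated[len(prompt):].lstrip()
    return generated

if __name__ == "__main__":
    # The heavy dependency is imported lazily so the helper stays reusable.
    from transformers import pipeline

    generator = pipeline("text-generation", model="openai/gpt-oss-1.0b")  # hypothetical id
    prompt = "Open-source language models are useful because"
    result = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    print(strip_prompt(result, prompt))
```

Running entirely on local hardware is what makes this pattern attractive for the industry-specific assistants and robustness testing mentioned above: no data leaves the machine.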
Ethical and Strategic Issues
Open source in AI is not neutral. It creates tensions between:
- the need for scientific transparency, often cited by the academic community
- the risk of misuse or abuse, particularly for generating spam, manipulative content, or systems that are difficult to audit
- competitive pressure, under which closed players risk appearing opaque or inaccessible
In this context, OpenAI appears to be seeking to rebalance its image by offering limited access without compromising the monetization of its flagship models.
This move can also be seen as an indirect response to calls from institutions and regulators for more auditable AI, in line with the European AI Act.
A tactical move in a global race?
The launch of GPT-OSS comes just a few weeks after Meta released the LLaMA 3 models, Mistral expanded access to Mixtral, and DeepSeek made its Coder and Flow variants available. Against this backdrop, OpenAI’s lack of an open-source offering was becoming increasingly difficult to justify.
GPT-OSS is therefore part of a hybrid strategy:
- base open-source models for researchers and developers
- proprietary models (GPT-4, GPT-4o) for the commercial API and premium services
- targeted partnerships with cloud and infrastructure providers (Microsoft Azure)
This phased rollout model could signal a lasting coexistence between closed-source and open-source AI, each catering to different needs.
Learn more
See also: Meta invests €14.8 billion in Scale AI: a decisive step toward superintelligence.
This article examines how Meta is funding an ambitious data infrastructure that complements open-source approaches such as GPT-OSS.
References
1. OpenAI (2025). Introducing GPT-OSS Models. https://openai.com/research/gpt-oss
2. LMSys (2025). Open LLM Benchmark Leaderboard. https://huggingface.co/spaces/lmsys/chatbot-arena
3. EleutherAI (2025). GPT-OSS Evaluation and Comparison. https://www.eleuther.ai

