
Perplexity AI offers a revenue-sharing model with media outlets: toward a new model of collaboration

The rise of generative artificial intelligence has profoundly transformed the way information is produced, disseminated, and accessed. Tools such as ChatGPT, Gemini, and Perplexity allow users to obtain synthetic responses to complex queries, often based on journalistic or encyclopedic content. This development is causing growing tensions with news publishers, who condemn the use of their articles without authorization or compensation.

Several lawsuits are already underway between media groups and major AI companies. The New York Times, for example, has filed a lawsuit against OpenAI and Microsoft, alleging that they used its archives to train proprietary models without compensation [1]. Against this backdrop of conflict, the startup Perplexity AI stands out by adopting a novel strategy: offering revenue sharing with content publishers.

Founded in 2022, Perplexity AI is developing a conversational search engine based on advanced language models. Unlike other AI assistants, Perplexity consistently cites its sources and links to the original articles. This focus on transparency and verifiability has won over a segment of discerning users, particularly in academic and journalistic circles.

In June 2025, the company announced its intention to take things a step further by offering a revenue-sharing model based on the advertising revenue generated by clicks on its AI-generated responses. The goal is clear: to establish a formal contractual relationship with publishers, recognizing the value of their content in the responses generated by the engine.

According to information provided by Perplexity, the principle is straightforward: when a publisher's content is cited in an AI-generated answer, that publisher receives a share of the advertising revenue the answer generates.

This system draws inspiration from the revenue-sharing and redistribution models already used by platforms such as YouTube and Spotify. It aims to create a win-win relationship between generative AI and the media, breaking away from the model of one-sided exploitation.
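To make the mechanism concrete, here is a minimal sketch of how such a pro-rata revenue split might be computed. Perplexity has not published its actual formula; the function name, the 50/50 platform split, and the click-based attribution are all illustrative assumptions.

```python
def split_ad_revenue(ad_revenue, clicks_by_publisher, platform_share=0.5):
    """Split an answer's ad revenue between the platform and the
    publishers whose articles it cited, pro rata by clicks.

    Hypothetical model: the real parameters are not public.
    """
    # Portion of the revenue set aside for publishers.
    pool = ad_revenue * (1 - platform_share)
    total_clicks = sum(clicks_by_publisher.values())
    if total_clicks == 0:
        # No clicks on cited sources: nothing to redistribute.
        return {pub: 0.0 for pub in clicks_by_publisher}
    # Each publisher's payout is proportional to the clicks it drove.
    return {
        pub: pool * clicks / total_clicks
        for pub, clicks in clicks_by_publisher.items()
    }

# Example: $100 of ad revenue, two cited publishers.
payouts = split_ad_revenue(100.0, {"Publisher A": 30, "Publisher B": 70})
print(payouts)  # {'Publisher A': 15.0, 'Publisher B': 35.0}
```

In practice, a real scheme would also have to handle impressions without clicks, minimum payout thresholds, and disputes over which source actually contributed to an answer, which is where much of the complexity lies.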

This model opens up interesting possibilities, particularly for publishers seeking new revenue streams. But several limitations also become apparent, from the modest sums likely to be redistributed to the difficulty of attributing value across sources.

This debate stems from the ongoing legal uncertainty surrounding the use of journalistic data by language models, and several key questions remain unanswered.

These issues are all the more critical given that AI models are set to play an increasingly significant role in shaping public opinion, providing access to information, and shaping democratic discourse.

Perplexity AI’s proposal is neither a one-size-fits-all model nor a definitive solution. It does, however, represent a concrete attempt to move beyond the stark opposition between AI and the media by introducing a framework of collaboration and economic coexistence.

By taking a proactive approach, the startup also aims to set itself apart from industry giants, which are often perceived as opaque or out of touch with the realities of the publishing world. This approach could inspire other players, such as Anthropic, Mistral, or even Meta, in their search for more responsible models.

Against a backdrop of increasing regulation of generative AI (the AI Act, content licensing, digital regulation), the ability to establish value-sharing mechanisms could become a criterion for technological and social legitimacy.

To further explore the impact of generative AI on our society, check out this insightful article: “Regulating Without Stifling Innovation: The Dilemma Facing Emerging Countries Amid the Rapid Expansion of AI”
It examines how AI regulation can be designed to promote innovation and inclusion while preventing abuses—a key challenge when considering revenue sharing between AI and the media.

1. The New York Times Company. (2024). Complaint against OpenAI and Microsoft for copyright infringement.
https://nytimes.com/legal/openai-lawsuit
