How is Artificial Intelligence transforming the process of composing music?
Google is strengthening its position at the intersection of artificial intelligence and artistic creation with the release of Lyria 2 and the Music AI Sandbox platform. The goal is clear: offer powerful generative tools while respecting artists’ creative intent. As generative AI permeates every corner of cultural production, music is becoming a new frontier for algorithmic experimentation. But unlike fully autonomous generation systems, Google is betting on assisted co-creation, a model that keeps human agency at the heart of the value chain.
A generative AI platform designed for musicians
The Music AI Sandbox is an experimental platform available to a select group of artists. It offers a series of modules designed to enrich, transform, or restructure an existing track using artificial intelligence. Features range from automatic harmonization to stylistic variation, including the generation of interludes and the modification of tempo and instrumentation. The platform is based on Lyria 2, an audio model developed by DeepMind and Google Research, capable of generating high-fidelity music with a level of temporal and structural precision rarely achieved [1].
Unlike previous models, Lyria 2 can handle complex polyphony, vocal synchronization, and the preservation of musical intent. It is no longer just about producing plausible sound, but about co-creating a musical work that is intelligible, coherent, and emotionally resonant.
A technological leap forward: Lyria 2 and the mastery of musical structure
One of the main challenges in musical AI is medium- and long-term coherence: the ability to structure a piece with recurring motifs, gradual variations, and thematic continuity. This is where Lyria 2 marks a major breakthrough. Thanks to transformer architectures optimized for audio data and hierarchical musical representations, the model is able to generate long sequences without losing direction [2].
According to Google, Lyria 2 is capable of generating up to 90 seconds of continuous music without structural breaks, while incorporating specific user-defined parameters such as style, instrument, mood, or intensity. This conditional generation allows artists to balance creative freedom with expressive control.
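Google has not published a public API for Lyria 2, so any concrete interface is speculation. Purely as an illustration of what conditional generation means in practice, the hypothetical sketch below bundles the user-defined parameters named in the article (style, instruments, mood, intensity) into a validated request object; every field name is an assumption, not Google's.

```python
from dataclasses import dataclass, field

@dataclass
class GenerationRequest:
    """Hypothetical conditional-generation request; field names are illustrative only."""
    style: str                      # e.g. "neo-soul", "neoclassical"
    instruments: list[str] = field(default_factory=list)
    mood: str = "neutral"
    intensity: float = 0.5          # 0.0 (subtle) .. 1.0 (maximal)
    duration_s: int = 90            # the article's reported upper bound for coherent output

    def validate(self) -> "GenerationRequest":
        # Clamp-free validation: reject out-of-range values instead of silently fixing them.
        if not 0.0 <= self.intensity <= 1.0:
            raise ValueError("intensity must be in [0, 1]")
        if self.duration_s > 90:
            raise ValueError("duration exceeds the reported 90 s coherence limit")
        return self

req = GenerationRequest("neo-soul", ["rhodes", "upright bass"], "wistful", 0.4).validate()
```

The point of the sketch is the shape of the contract, not the names: conditional generation means the artist constrains the model up front rather than curating raw output afterward.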
Augmented creation and co-creation with AI
The first artists to experiment with the platform explored a variety of uses. Some used AI to enhance song sketches, while others used it to generate instrumental loops over which to improvise. Among the documented cases:
- Creating vocal harmonies based on a manually recorded melody line.
- Stylistic adaptation of a pop song into a jazz or neoclassical version, while preserving the harmonic structure.
- Automatic composition of a musical bridge, integrated into an existing verse-chorus structure.
Experiments like these show that the AI is treated not as a threat to creativity but as a prototyping accelerator, able to generate viable musical alternatives quickly.
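The first use case above, harmonization, can be illustrated with no AI at all. The toy function below is plain music theory in Python, not Lyria 2: it harmonizes a melody a diatonic third below, under the simplifying assumption that the melody stays in C major.

```python
# C-major scale as MIDI pitch classes: C D E F G A B
SCALE = [0, 2, 4, 5, 7, 9, 11]

def third_below(midi_note: int) -> int:
    """Return the note a diatonic third below (C major only)."""
    octave, pc = divmod(midi_note, 12)
    degree = SCALE.index(pc)        # raises ValueError on non-diatonic notes
    new_degree = degree - 2         # two scale steps down = an interval of a third
    if new_degree < 0:              # wrapped past C: drop an octave
        new_degree += 7
        octave -= 1
    return octave * 12 + SCALE[new_degree]

melody = [60, 64, 62, 67]           # C4 E4 D4 G4
harmony = [third_below(n) for n in melody]   # A3 C4 B3 E4
```

A real vocal-harmonization model does far more (key detection, voice leading, timbre transfer), but the sketch shows the core mapping such a tool automates.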
Toward the democratization of advanced musical tools
Google positions its platform as an inclusive technology. By making advanced production tools accessible without the need for extensive musical training, Music AI Sandbox is helping to foster a new generation of digital creators. Much like the no-code movement in software development, this trend reflects a profound shift in the way artistic content is created.
According to a survey conducted by MIDiA Research, more than 35% of independent musicians had incorporated at least one AI tool into their creative process by 2024, compared to just 12% in 2022 [3]. This surge reflects the growing interest in flexible, fast-paced platforms geared toward sound experimentation.
Ethical framework and intellectual property issues
The widespread adoption of music-generation tools inevitably raises questions about regulation and ethics. Google states that Lyria 2 was trained exclusively on licensed or public-domain data. Additionally, an inaudible watermark is embedded in the generated files to detect the use of AI, even after modifications [4]. While this process improves traceability, it does not yet provide a definitive solution to the debates over copyright ownership.
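Google's actual watermark is a learned scheme whose internals are not public, so the following is only a deliberately simplified spread-spectrum sketch of the general idea: a key-derived pseudorandom pattern is added at inaudible amplitude, and detection correlates the signal against that same pattern, which survives small edits.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Add a key-derived pseudorandom +/-1 pattern at low amplitude."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.0025) -> bool:
    """Correlate with the key's pattern; unmarked audio correlates near zero."""
    pattern = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * pattern)) > threshold

rng = np.random.default_rng(0)
clean = rng.normal(0, 0.1, 48_000)          # one second of noise-like "audio"
marked = embed_watermark(clean, key=42)
edited = marked + rng.normal(0, 0.001, marked.shape)   # light post-edit
```

Here `detect_watermark(marked, 42)` and `detect_watermark(edited, 42)` both return True while `detect_watermark(clean, 42)` returns False, illustrating why correlation-based marks survive modification; production systems add robustness to compression, resampling, and cropping that this toy ignores.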
Who owns a song co-produced by AI? The artist who provided the parameters? The company that developed the model? These gray areas require a new legal framework that keeps pace with rapidly evolving practices.
A new era for sound design?
With Lyria 2 and Music AI Sandbox, Google is positioning music creation within a framework of symbiosis between artificial intelligence and human intuition. The proposed algorithmic assistance model does not eliminate creativity, but rather provides it with new avenues of expression. This technical shift nevertheless raises fundamental questions about aesthetics, authenticity, and artistic responsibility. Will the future of music be collective, dialogic, shared between humans and machines? Or will we have to relearn how to distinguish spontaneous inspiration from calculated suggestion?
References
1. Google DeepMind. (2024). Introducing Lyria: Google DeepMind’s music generation model.
https://deepmind.google/discover/blog/introducing-lyria
2. Greshake Tzovaras. (2024). How Google’s AI is composing musical structures with Lyria 2.
https://www.technologyreview.com/2024/04/15/ai-google-lyria
3. MIDiA Research. (2024). AI in Music: Tracking the Rise of Artificial Creativity.
https://www.midiaresearch.com/reports/ai-in-music-2024
4. Google Research Blog. (2024). Music watermarking and ethical AI in generative models.
https://blog.google/technology/research/music-ai-sandbox-watermark

