Making Music with AI

Music composition has not been left untouched as generative AI took the world by storm. New AI tools for content generation have set off a wave of innovation, pushing the boundaries of creativity, and making music with AI is fast becoming the next big thing.

Among the myriad applications of AI, music generation has emerged as a particularly fascinating area. A slew of popular AI music generators such as AIVA, Soundraw, and Amper Music popped up on the scene. Let us look at some of these AI tools used to generate music.


Making Music with AIVA:


Pros:

  • Sophisticated Compositional Capabilities: AIVA excels at generating complex and sophisticated musical compositions across various genres, giving composers rich and diverse musical material to work with.
  • User-Friendly Interface: It offers an intuitive, user-friendly interface that is accessible to experienced composers and novices alike. Its straightforward controls allow for seamless interaction and customization.
  • Adaptive Learning: It leverages adaptive learning algorithms to refine its compositions over time based on user feedback, ensuring continual improvement and adaptation to individual preferences and styles.


Cons:

  • Limited Customization Options: While AIVA offers a wide range of musical styles and genres, its customization options may be somewhat limited compared to other AI music generators, restricting composers’ ability to fine-tune compositions to their exact specifications.
  • Costly Subscription Model: It operates on a subscription-based model, which may pose a financial barrier for some users, particularly independent musicians or hobbyists who may prefer one-time purchase options.
  • Dependency on Internet Connectivity: It relies on internet connectivity, which may pose challenges in environments with limited or unreliable internet access and hinders composers’ ability to use the tool in all settings.

Making Music with Soundraw:



Pros:

  • Real-Time Collaboration: Soundraw facilitates real-time collaboration between multiple users, allowing composers to work together remotely on musical projects and exchange ideas seamlessly.
  • Versatility in Musical Styles: It offers a diverse range of musical styles and genres, catering to the preferences and creative inclinations of a wide spectrum of composers and musicians.
  • Integration with Existing Software: It integrates with existing music production software and platforms, enabling composers to incorporate AI-generated compositions into their existing workflows with ease.


Cons:

  • Complex Learning Curve: Soundraw’s interface and functionality may present a steep learning curve for new users, requiring significant time and effort to master its full potential and capabilities.
  • Limited Customization Features: While it offers a variety of musical styles, composers may find its customization features somewhat limited, restricting their ability to tailor compositions to specific preferences or requirements.
  • Subscription-Based Pricing: It operates on a subscription-based pricing model, which may deter users who prefer one-time purchase options or free alternatives.


AI-Generated Tunes with Amper Music:


Pros:

  • Quick and Easy Composition: Amper Music enables composers to generate high-quality musical compositions quickly and effortlessly, making it ideal for time-sensitive projects or creative experimentation.
  • Personalized Recommendations: It provides personalized recommendations and suggestions based on user preferences and input, helping composers refine and enhance their compositions with tailored guidance.
  • Seamless Integration: Like Soundraw, it integrates with popular music production software and platforms, allowing composers to incorporate AI-generated compositions into their existing workflows seamlessly.


Cons:

  • Limited Creative Control: While Amper Music offers convenience and efficiency in composition, composers may find its creative control somewhat limited compared to traditional composition methods, particularly for intricate or nuanced musical arrangements.
  • Costly Subscription Model: It operates on a subscription-based pricing model, which may pose financial challenges for independent musicians or hobbyists.
  • Dependency on AI Algorithms: Composers relying solely on Amper Music may become overly dependent on AI algorithms, potentially limiting their exploration of traditional composition techniques and creative expression.

These AI music generators showcase the potential of AI in music creation. What they lack, however, is the granularity and control composers desire: users typically input a prompt, hit a button, and hope for the best, relinquishing a degree of creative agency in the process. This gap motivated a new tool, the Anticipatory Music Transformer.


What is the Anticipatory Music Transformer?

The Anticipatory Music Transformer (AMT) is a groundbreaking tool that seeks to redefine the collaborative dynamic between composers and AI in music composition. It offers composers unprecedented control and ownership over the creative process, particularly in the realm of symbolic music.

Unlike traditional AI music generators, the Anticipatory Music Transformer operates on the principle of anticipation: it enables composers to predict and influence upcoming musical elements.

This approach facilitates a co-creation process in which composers iteratively collaborate with the AI model, dictating which parts of the composition they wish to craft themselves and which they delegate to the AI.
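As a rough illustration of the anticipation idea (a toy sketch, not the AMT’s actual implementation), one can think of user-fixed “control” events being interleaved into the event stream slightly ahead of when they sound, so the model always knows what the composer has pinned down before it generates the material around it:

```python
# Toy illustration of anticipatory interleaving (hypothetical code,
# not the real AMT). Events are (onset_time, pitch) pairs; user-fixed
# events act as controls that are surfaced ahead of generated events
# by a small anticipation window.

DELTA = 1.0  # anticipation window in seconds: controls appear DELTA early

def interleave(generated, controls, delta=DELTA):
    """Merge two time-ordered event lists so that each control event
    precedes any generated event occurring within `delta` of it."""
    merged = []
    gi, ci = 0, 0
    while gi < len(generated) or ci < len(controls):
        take_control = ci < len(controls) and (
            gi >= len(generated)
            or controls[ci][0] - delta <= generated[gi][0]
        )
        if take_control:
            merged.append(("control", controls[ci]))
            ci += 1
        else:
            merged.append(("event", generated[gi]))
            gi += 1
    return merged
```

For example, a user-fixed note at t = 2.5 with delta = 1.0 has an effective position of 1.5, so it is surfaced between generated events at t = 0.0 and t = 2.0, letting the model condition on it before writing the notes leading up to it.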


What does the creator of the AMT say about making music with AI?

John Thickstun, the researcher behind the Anticipatory Music Transformer, describes it as a “composer’s helper” and emphasizes its role in augmenting rather than replacing human creativity.

As a former cellist with a passion for music theory, Thickstun envisioned a tool that would empower composers to harness AI’s capabilities while retaining control over the artistic direction.

“I’m intrigued by the possibilities that tools like this could open up for more people to get involved in music composition.”

John Thickstun

The Anticipatory Music Transformer was developed by a team of experts, including Stanford postdoctoral scholar John Thickstun, Stanford HAI Research Engineering Lead David Hall, Carnegie Mellon Assistant Professor of Computer Science Chris Donahue, and Center for Research on Foundation Models Director Percy Liang.


How does it work?

Built upon the generative pre-trained transformer (GPT) architecture, the Anticipatory Music Transformer enables composers to shape the composition process interactively, guiding the AI model towards desired musical outcomes. By focusing on symbolic music rather than audio, the model ensures greater controllability and interactivity, opening up new possibilities for musical expression.
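To see what “symbolic music” means in practice, here is a minimal sketch of how note events might be flattened into a token stream that a GPT-style model predicts one token at a time. The three-tokens-per-note layout and the 10 ms time grid are illustrative assumptions, not the AMT’s actual vocabulary:

```python
# Minimal sketch of a symbolic-music token encoding (hypothetical
# vocabulary, not the AMT's actual scheme). Each note becomes three
# integer tokens: quantized onset time, quantized duration, MIDI pitch.

TIME_RES = 100  # time steps per second (10 ms quantization grid)

def encode(notes):
    """Flatten (onset_sec, dur_sec, pitch) notes into a token list."""
    tokens = []
    for onset, dur, pitch in notes:
        tokens += [round(onset * TIME_RES), round(dur * TIME_RES), pitch]
    return tokens

def decode(tokens):
    """Invert encode(): group tokens back into note triples."""
    return [
        (tokens[i] / TIME_RES, tokens[i + 1] / TIME_RES, tokens[i + 2])
        for i in range(0, len(tokens), 3)
    ]
```

Because every token is a discrete symbol rather than raw audio, a composer can pin down, reorder, or regenerate individual notes, which is what makes the symbolic representation so much more controllable than waveform generation.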

While the Anticipatory Music Transformer represents a significant leap forward in AI-assisted music composition, its creators acknowledge that there is still work to be done to seamlessly integrate the tool into existing music sequencing software. Nevertheless, they remain committed to realizing the vision of democratizing music composition and making it more accessible to aspiring musicians and composers.