Adobe has introduced a new artificial intelligence tool, Project Music GenAI Control, which generates music from text prompts and offers editing controls in the same environment so users can customize the results.

Adobe’s latest generative AI experiment aims to help people create and customize music without professional audio experience.

Users start by entering a text prompt describing a style of music, such as “happy dance” or “sad jazz.” The tool’s integrated editing controls then let users customize the result by adjusting its repeating patterns, tempo, intensity, or structure. Clips can be remixed, and audio can be generated as a seamless loop, which is useful for anyone who needs backing tracks or background music for content creation.
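Adobe describes these controls only at the level of a user interface and has published no API, so the following sketch is purely illustrative: a plain-Python model of the kind of parameters the described workflow implies (a text prompt plus adjustable tempo, intensity, structure, and looping). Every name in it is invented for this example and does not correspond to anything Adobe has released.

```python
from dataclasses import dataclass

@dataclass
class GenerationRequest:
    # Hypothetical model of the controls described for Project Music GenAI Control.
    prompt: str                       # e.g. "happy dance" or "sad jazz"
    tempo_bpm: int = 120              # playback tempo
    intensity: float = 0.5            # 0.0 (calm) .. 1.0 (energetic)
    structure: str = "verse-chorus"   # coarse arrangement hint
    loop: bool = False                # render as a repeatable background loop

# The described workflow: generate once from a prompt, then tweak the same
# request instead of re-prompting from scratch.
request = GenerationRequest(prompt="happy dance")
request.tempo_bpm = 128
request.intensity = 0.8
request.loop = True   # e.g. for a looping background track
print(request)
```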

According to Adobe, the tool can also adjust generated audio based on a reference melody and extend the length of clips, for example to make a track long enough for a fixed-length animation or a podcast segment. Adobe has not yet shown what the interface for editing the generated audio looks like.

Adobe says public-domain content was used for Project Music GenAI Control’s public demo, but it’s unclear whether the tool will let users upload their own audio directly as reference material, or how far clips can be extended.

Google’s MusicLM and Meta’s open-source AudioCraft currently offer similar functionality, letting users create audio from text prompts alone, but neither supports editing the generated music. That means you either keep regenerating the audio from scratch until you get the result you want, or make the changes yourself in separate audio editing software.
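For comparison, Meta’s AudioCraft already exposes this prompt-only workflow in Python through its MusicGen models. The sketch below follows AudioCraft’s published example usage; exact model names and call signatures may vary between releases. Note that the text prompt is the only creative input, which is the limitation the article describes.

```python
# Prompt-only music generation with Meta's open-source AudioCraft (MusicGen).
# Based on AudioCraft's published example usage; model names and signatures
# may differ slightly between releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=15)  # clip length in seconds

# The text prompt is the only control: there is no built-in way to edit the
# tempo, intensity, or structure of a clip after it has been generated.
descriptions = ['happy dance', 'sad jazz']
wav = model.generate(descriptions)

for idx, one_wav in enumerate(wav):
    # Saves {idx}.wav with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```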

Adobe’s new tool is not yet available to the public, and a release date has not been announced.

Source: The Verge