“IT’S CHEATING.” That’s the response you’ll hear from self-proclaimed music purists talking about technological innovation in song creation. Sampling, synthesizers, drum machines, Auto-Tune—all have been derided as lazy ways to make chart-topping hits because they take away the human element. (With apologies to Vanilla Ice, Gary Numan, Prince, and T-Pain.)
The new argument among fans and musicians will be about the use of artificial intelligence in songwriting. According to several estimates, in the next decade, between 20% and 30% of the top 40 singles will be written partially or totally with machine-learning software. Today, recording pros can use A.I.-powered programs to cue an array of instrumentation (from full orchestral arrangements to hip-hop beats), then alter it by mood, tempo, or genre (from heavy metal to bluegrass).
“It’s like the future of self-driving cars,” says Leonard Brody, entrepreneur and cofounder of Creative Labs, a joint venture with Creative Artists Agency that invests in programs to help audio creators get their works delivered to the public. “Level 1 is an artist using a machine to assist them. Level 2 is where the music is crafted by a machine but performed by a human. Level 3 is where the whole thing is machines.”
A.I. claiming ownership of a third of the top 40 may be surprising to the casual listener, but it’s a low bar for Drew Silverstein, CEO of Amper, an A.I.-based music composition software company in New York City. Amper’s product allows musicians to create and download “stems”—unique portions of a track like a guitar riff or a hi-hat cymbal pattern—and rework them. Silverstein sees predictive tools as an evolution in the process of music creation. “Starting from quill and parchment centuries ago, then moving into analog and tape and mobile [devices]—A.I. is really just the next step,” he says.
Silverstein isn’t the only one with that view. Large technology companies also offer A.I.- powered tools and services for music-making. Among them: IBM Watson Beat, Google Magenta’s NSynth, Sony’s Flow Machines, and Spotify’s Creator Technology Research Lab. The resources, intended for use by artists and labels, use algorithms to analyze libraries of songs and sales charts to predict what may have the best chance of charting (and when).
Though the latest developments in A.I. are helping fuel its use in popular music, it’s not really a new idea. More than two decades ago, David Bowie helped create the Verbasizer, a program for Apple’s Mac that randomized portions of his inputted text sentences to create new ones with new meanings and moods—an advanced version of a cut-up technique he used, writing out ideas, then physically slicing and rearranging them to see what stuck. Bowie made use of the Verbasizer for his 1995 album Outside. “What you end up with is a real kaleidoscope of meanings and topic and nouns and verbs all sort of slamming into each other,” said the influential pop star in a 1997 documentary featuring the tool.
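The cut-up idea behind the Verbasizer is simple enough to sketch: break source sentences into phrase fragments, shuffle them, and recombine them into new lines. A minimal Python illustration follows; the fragment sizes and recombination rules here are assumptions for demonstration, since the Verbasizer's actual method was never published.

```python
import random

def cut_up(sentences, seed=None):
    """Naive cut-up sketch: slice each sentence into short phrase
    fragments, shuffle them all together, and recombine them into
    new lines. Chunk sizes (2-3 words) and line length (3 fragments)
    are arbitrary illustrative choices, not the Verbasizer's."""
    rng = random.Random(seed)
    fragments = []
    for sentence in sentences:
        words = sentence.split()
        i = 0
        while i < len(words):
            n = rng.choice([2, 3])          # slice off a 2-3 word phrase
            fragments.append(" ".join(words[i:i + n]))
            i += n
    rng.shuffle(fragments)                   # juxtapose unrelated phrases
    # Reassemble shuffled fragments into roughly sentence-length lines.
    lines, line = [], []
    for frag in fragments:
        line.append(frag)
        if len(line) == 3:
            lines.append(" ".join(line))
            line = []
    if line:
        lines.append(" ".join(line))
    return lines
```

Run on a couple of plain sentences, every word of the input survives, but the phrases collide in new orders, which is the "kaleidoscope of meanings" effect Bowie described.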
Like-minded artists insist A.I.-assisted songwriting is a boon, not a threat. Taryn Southern, a singer and former American Idol contestant who released her debut album, I Am AI, last year, composed the eight-song work with Amper, Watson Beat, and other software, plus human help.
“A person who’s been trained on guitar since they were 8 years old is going to be masterful,” says Southern. “It would take them an hour to bang out a song. For people who don’t have that skill set, it could take weeks.” As with arguments against synths and samples, “It’s not putting anyone out of work, just making them work differently,” she says.
Producer, songwriter, and Black Eyed Peas member Will.i.am has another take: There’s nothing artificial about music created by A.I. “When you say ‘artificial intelligence’ to compose music, what part of it is helping creative songwriters? Is the A.I. helping you compose? Distribute? Who’s listening? How much money will it make? No, bro. That’s a new machine-learning tool”—and nothing more.
For artists and their reps, money—for everything from production costs to copyright and royalties—is a key issue. Southern, for example, shares writing credits on her album with Amper. But the software let her redirect funds that would conventionally have gone to human songwriters, session musicians, and studio time toward a management team, publicists, and videographers—other essential components of the modern professional entertainer's operation.
Or, as Will.i.am puts it: “Michael Jackson, Quincy Jones, Luther Vandross—think about all of those composers. Microphones, engineers, and tape cost money.” In other words, A.I. can’t replicate the innate talent of those songwriters, let alone the complicated recording processes they used to create their best-known works.
So don’t expect artificial intelligence to write the next “Space Oddity” anytime soon. But an artist with the right chops and ingenuity might get there faster with A.I.—even if, as Luddites and futurists surely agree, this universe will never see another David Bowie.
This article originally appeared in the November 1, 2018 issue of Fortune.