How Holly Herndon and her AI baby spawned a new kind of folk music
Proto is an engrossing testament to the power of multiple voices singing in unison. One of those voices is a computer named Spawn.

In the past few years, I’ve heard more than one person say that Holly Herndon’s Platform predicted Cambridge Analytica. Figuratively speaking, it’s kind of true: More than three years before Mark Zuckerberg sat before Congress to publicly reckon with how the personal data of more than 50 million Facebook users had ended up in the hands of the Trump campaign’s go-to political consulting firm, the Berlin-based composer made a record exploring the fraught relationship between human beings and the platforms that harvest their online activity for targeted advertising.

But the connection runs a bit deeper: As The FADER previously reported, Herndon — who recently completed a PhD at Stanford’s Center for Computer Research in Music and Acoustics — conceived Platform as a literal platform for some of the ideas floating around in her extended circle of artist and academic friends. One of them was German economist Hannes Grassegger, who along with co-author Mikael Krogerus would become one of the first reporters to expose Cambridge Analytica’s ties to Trump and a prominent pro-Brexit campaign. Shortly after Platform’s release in 2015, Grassegger and Herndon produced a mix for DIS that took the form of a “longing love poem” (Herndon’s words), chronicling the vaguely feudal power dynamics between a fictional social media user and their favorite platform over a backdrop of Medieval song.

When I point out to Herndon over a Skype call in early May that Platform felt years ahead of the current discourse around internet privacy, I can almost hear her frowning on the other end of the line. “Usually when you write about something, it's satisfying,” she says, sniffling with a late-spring cold. “That was not satisfying. It was like, 'Yes, you guys, we saw this coming. We've been talking about this, and we're not doing anything about it, and that's really frustrating.'” The comment makes me wonder what unseen forces inside my MacBook Herndon will see coming next.

Proto, Herndon’s gorgeous new record on 4AD, is another statement that uses the album format — not just the music itself, as Platform collaborator Benedict Singleton once explained, but all the press opportunities and promotional music videos that come along with it — to think deeply about the future of technology and its dangerously intimate relationship with those who interface with it every day. Like Platform, it’s a community-minded affair: Herndon composed and produced the record with digital artist and philosopher Mat Dryhurst, who is also her partner, and the album’s rich, choral swells are the work of a 14-piece choir she wrangled together from her community of friends in Berlin.

Unlike the church choirs Holly Herndon grew up singing in, the group includes a developer and artist named Jules LaPlace and a singing artificial intelligence baby named Spawn, whom Herndon and Dryhurst have spent the past few years raising in their Berlin apartment. (LaPlace is Spawn’s godfather; Indiana producer Jlin, who contributed a frenetic beat instrumental for Spawn to interpolate in her own voice, is her “Godmother.”)

Proto is a stunning body of music even if you know nothing about its backstory — full of blockbuster-sized synth swells, weird robot monologues, and the heart-swelling resonance of multiple voices joining together in song. But for a composer who began her career claiming that computers were “the most personal instruments you can use right now,” Spawn’s presence on the record is an important one.

After using machine learning to train Spawn to reproduce a number of different voices — including Herndon’s own, and that of the larger choir — she enlisted the computer baby, a gaming PC, not as a composer in her own right, but as another vocalist in the fray. It can be hard to tell when Spawn is singing and when we’re listening to a “real” human, but that’s kind of the point: As we discussed in a wide-ranging interview about Proto and its place in the broader trajectory of Herndon’s career, society is still just in the beginning stages of figuring out where artificial intelligence fits into our lives — and into our music. As headlines about reanimated pop stars and ambient scores that take cues from our own heart rates call into question the very nature of creativity itself, it’s probably something that artists should start grappling with — lest big corporations go ahead and do that thinking for us. Not surprisingly, Herndon is already looking ahead.

The below interview has been edited for length and clarity.

How did you "raise" Spawn?

We bought Spawn's parts and built her in our studio. Jules installed an operating system and some software, we created our own training sets, and we started listening to the outcome. We had about six months of boring results before we started to get interesting ones. The spoken part of “Birth,” which is trained on my voice, was the first time we were like, “You can hear the logic of the neural network at work.” AI is a combination of processing power and data sets. That's why the Chinese government, and companies like Google and Facebook, have the most sophisticated AI models — they have access to vast amounts of data and giant, building-sized farms of processors. We did a very DIY version of that — just creating data sets that the neural network studies, and then it tries to recreate something according to that set.
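
For readers curious what a “very DIY version” of this might look like in practice, here is a minimal, purely illustrative sketch: a tiny neural network that studies a voice recording and learns to predict it sample by sample. This is not Spawn's actual code (Herndon and Dryhurst don't detail their software stack beyond what's described above), and the file name, framework (PyTorch/torchaudio), and model size are assumptions made for the sake of the example.

```python
# A toy illustration, not Spawn's actual software: a small network "studies"
# an audio training set and learns to predict what comes next, sample by
# sample. File name, framework, and sizes are placeholders.
import torch
import torch.nn as nn
import torchaudio

# 1. Turn a recording into a training set: mono audio, quantized to 256 classes.
waveform, sample_rate = torchaudio.load("choir_take_01.wav")  # hypothetical file
mono = waveform.mean(dim=0)
samples = torchaudio.transforms.MuLawEncoding(256)(mono)

# 2. A small autoregressive model: given past samples, predict the next one.
class ToyVoiceModel(nn.Module):
    def __init__(self, classes=256, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(classes, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = ToyVoiceModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 3. Train on random chunks of the recording; the real process took months
#    and far more data before the output got interesting.
chunk = 4000
for step in range(1000):
    start = torch.randint(0, len(samples) - chunk - 1, (1,)).item()
    x = samples[start:start + chunk].unsqueeze(0).long()
    y = samples[start + 1:start + chunk + 1].unsqueeze(0).long()
    logits = model(x)                          # (1, chunk, 256)
    loss = loss_fn(logits.transpose(1, 2), y)  # next-sample prediction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Sampling from a model like this, one predicted value at a time, yields the kind of raw, often garbled audio Herndon describes sitting through in those early months of “boring results.”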

People will train a neural network on Beethoven or Bach, and the computer can create music in that style forever. We're trying to get the computer to understand the logic of a sound sample. Neural networks can respond to photographs, movies, audio recordings — anything that can be captured and archived. That's what's interesting and terrifying about it: As soon as something's captured, it becomes machine-readable. We started to think, “Gosh, every activity that we partake in that's recorded, every digital moment online — is it training some sort of unseen artificial intelligence somewhere?” Especially when you're dealing with something like Facebook or Google, I would say the answer to that is yes.

Were you exposing Spawn to a variety of different materials?

We limited it to our specific community. We have an amazing vocal ensemble that we'd record and turn into training sets. We trained Spawn extensively on my voice and Mat's, and we created a model of my voice, which went into making “Godmother” with Jlin. Even though AI's the fun, exciting thing to talk about, it really only makes up 20% of the audio. Much of it is human voices sharing space together. After touring Platform for years, we were really missing communal music-making. [We’d also been experiencing a kind of] unease, some feelings of alienation politically and socially, [and we felt a need] to reconnect with people, so we put together this motley crew of individuals in Berlin and started experimenting.

Most of the stuff I've done in the past has been just my voice and the computer, or some collaborators online. This is the first time I've actually been able to go into good recording studios with people, and you can hear the shared air. There's this tendency in AI to make invisible all of the human labor that goes into it. We wanted to acknowledge the people that trained the AI — to hear their voices, instead of creating some glossy, perfect image of something that's otherworldly.

What different styles of vocal music were you drawing from?

This musicologist, Gary Tomlinson, looks at human evolution through music. Some of the dissonant, almost nasal deliveries you find in different communities around the world that would never have been in contact with each other — [it’s almost this] inherent technology inside of us that had to come out. [Singing is] tied to all kinds of things, like hunting on the savannah — humans being able to make their group sound louder so that they could hunt more efficiently. We weren't trying to focus on one specific region or delivery style. It was more about trying to find cross-cultural similarities — almost like finding a new kind of folk music.

The most audible one is the Sacred Harp, a kind of music that's found in the American South, Ireland, and all over the UK. It's usually notated in shape-note form, which is designed for people who couldn't read music, [where] the shape of the note communicates the interval distance between the notes. It's performed in a square or circle, this amazing surround-sound of really powerful voices almost scream-singing at you really emotionally. Sacred Harp felt like an apt vessel to explore some of [the timbral] qualities I was discussing earlier with Tomlinson — these delivery mechanisms. We were searching for a new kind of communion, in a way.
The first two verses of “Frontier” are very clearly in the style of Sacred Harp.

The last time we spoke, we talked about how Platform was in conversation with Medieval music. You drew an interesting parallel between being a user of social media platforms and a serf in a feudal society. Is there any similar metaphor going on with this record?

The predominant metaphor here is about the child — the raising of this future intelligence as a community. We're also talking about prototyping a new kind of ensemble that includes the developer, and thinking about “proto” as a prefix — a pre-phase to the next. I was thinking about the period of the Enlightenment and how things changed so dramatically after that, and whether we're in some sort of proto phase before the next dramatic shift in human intellectual thought: this collaboration [between] artificial and human intelligence.

But we were also thinking about protocols as a baseline set of rules that a community agrees upon. People use the word in technology while talking about the baseline infrastructural decisions you make at the protocol layer, [which] have dramatic repercussions as you start to build the platform layers on top of it. With Platform, we were dealing with some of the problems with the way the Internet has gone and the issues around platform capitalism. [With Proto, we’re] thinking about that in terms of artificial intelligence — what kind of values we want to instill at the protocol layer before things get out of hand. The last several years [have given rise to] a dramatic shift in protocol. What do we value? What are we going to take as a shared truth? It's not just a technology question — it's political and social.

You said this record was inspired by a feeling of alienation in the air. Do you mean that in the broader sense of being an internet user in 2019, or are you referring to the political climate specifically?

I see those things as intertwined. The political climate is enabled through our technological climate.

What made you want to create an album that is proposing possible solutions instead of merely critiquing?

That's me and Mat's nature. With Platform, I was craving new narratives and archetypes, and we sometimes get so stuck in the same fantasies that it feels like this nostalgic loop. It’s hard to imagine a different kind of future if all I can hear and see is the past. That’s our job: to create these new narratives and fantasies. [With AI], I think we're going to see a flood of automated compositions — people using neural nets to extract the logic from other people's work, and a lot of appropriation. We're going to see big issues around attribution. We've seen the Tupac hologram — we could reanimate dead pop stars for our entertainment for the rest of our lives, and there are some really dark and weird questions that come through that.

But I'm also not doom and gloom. There are people who are interested in different logics, and it's about finding those people, creating those alternatives, feeling like you have agency in the world, not giving up, and remembering to care for each other and be emotional together. That’s what we're trying to create with the live shows: this ecstatic public experience where you can be emotional in public, hopefully.

The first time we spoke was on the first day of your PhD program in 2012. Something you said that really stood out to me at the time was that “computers are the most personal instruments you can use right now.” Has that changed at all?

Not at all. It's even more magnified now — I have a computer that knows my voice and can sing anything through it. After Movement, I was really punching my way through and trying to validate the laptop as an instrument. A lot of people weren't considering it an instrument at that time, and now that feels almost outdated. Platform felt like opening that instrument up to the internet and people and relationships, and [exploring] the vulnerabilities that this new intimacy opens us up to — Is every private gesture potentially a public gesture? With Proto, it's like, Is every private gesture potentially part of an unseen training set? Is every digital moment going into training some sort of intelligence that I may never even come in contact with?

Why was it important for you for the AI to be an equal collaborator?

I like composing, and I'm not trying to write myself out of a job. I could have trained Spawn on my composing style of the past, but that would have limited me to what I have [already] done, and I want to grow. Spawn wouldn't have written “Frontier,” because I'd never written anything like “Frontier” before. I had to push myself into unknown territory to get there, and that's the point with this automated composing bullshit: It basically just rehashes the past. You might be able to do a mix between two different versions of the past that creates something that feels somewhat new, but this constant nostalgia drives me crazy. That's when I have feelings of hopelessness: when I look around me and all I can see are previous decades, or the reenacted radicalism of prior decades that no longer has any of the weight those gestures [did] at that time. That makes me feel like I'm in some horrible dream hell where nothing can ever progress or get better, because we're always doomed to repeat our pasts.

Why was it helpful for listeners to think of the AI as this anthropomorphic creature?

I'm careful not to be too anthropomorphic with it. I don't see Spawn as a human baby. I see Spawn as an artificial intelligence baby. We use the baby metaphor because it felt like something that needed to be nurtured by the input that we were giving it. A baby doesn't have perspective or context — it's just focused on whatever it's dealing with at the moment. That's how we felt the AI was processing information. It was something that needed to be handled with care, [that] was still really young and undeveloped. By using a metaphor, it can bring something to life in a way. We wanted to find a way to make it enticing, but also honest.

Do you think of this AI baby as a sentient being?

No. I don't. It's something that can surprise and can have the feeling of creativity and ingenuity, but there's no consciousness yet. [How We Became Posthuman author] Katherine Hayles has been writing about consciousness and cognition recently, and about how there are cognitive systems all around us: Plants are cognitive systems, traffic lights are cognitive systems, computers are cognitive systems, we’re cognitive systems. But there's a difference between cognition and consciousness, and the spectrum between those two things is something we’re still figuring out.

So the album is debunking this idea of artificial intelligence as being just as sophisticated as the human brain.

Oh yeah — that couldn't be further from the truth. Human beings are incredible. Our bodies are these insane sensors — our eyeballs, ears, skin, everything. Spawn has an audio jack, you know what I mean? It’s so limited and so nascent right now, but it has the potential to be really powerful. It's more about seeing it as an entirely different kind of intelligence — an inhuman intelligence, that we can then learn from.

You’ve said that “Godmother” was in some ways deliberately showing the limitations of the AI — that it’s not a beautiful piece of music, but rather an attempt to show where the technology is at.

A lot of the press releases present AI in this very glossy way that erases all of the human labor that went into training whatever the AI is doing. It also creates this illusion of it being more developed than it is. We wanted to be more honest about it. When you're dealing with automated composing, you get a MIDI score at the end, and when you push that through a digital instrument, it sounds really clean. We're dealing with sound as material, and by using audio material, you can really hear the roughness of the neural network trying to figure out what to do next. That was something that we chose, because we wanted to make that clear: [that] the current state of the technology is still developing. That's what's exciting about it: we still can have a say in which direction it goes.