
Emily St. John Mandel is the celebrated Canadian novelist and essayist whose six books include her latest, Sea of Tranquility, and Station Eleven, which has been translated into 26 languages and adapted into a hit TV show for HBO Max.

Mandel spoke with The Globe for an episode of The Globe and Mail podcast Machines Like Us.

So much of our present technology was imagined in science fiction first. I wonder if that’s something you think about, because AI really is that now, right? We’re building this thing while imagining what it could be, but that imagining wasn’t done by us; it was done by science-fiction writers. How do you think about that connection?

It’s really interesting, because science fiction can absolutely be a blueprint. What’s fascinating is the way that’s kind of neutral; it’s not always a positive or always a negative. I’ve lived in the United States, you know, since my early 20s. The political situation in the United States is so dark, like I cannot overstate how bleak it feels sometimes. And there’s this new kind of terrible strain following the fall of Roe v. Wade, which was the American Supreme Court decision that protected abortion across the country, where it feels to me sometimes like the far fringes of the Republican Party read The Handmaid’s Tale and they were like, “We want that. Gilead sounds awesome.” So that’s kind of the darkest possible vein, where it feels like an entire political party was inspired by a dystopia.

On the positive side, I feel that AI is something of a neutral too: I understand the practical applications, which I think could be incredibly beneficial. If AI could read test results in a more accurate way, so that cancers are caught sooner, that would be worth almost anything. At the same time, I do some TV writing. My union, the WGA (Writers Guild of America), was on strike for much of last year, and AI was the sticking point.

And we’ve all experienced the implications.

[laughs] Exactly. The current TV season is rough.

It’s horrible.

It’s horrible out there. One of the major sticking points had to do with the minimum size of a writers’ room. And at first I didn’t get it. And then someone explained to me: no, the point is AI. For these studios to save money, there was this kind of not-very-secret line of thinking among studios and streamers that perhaps the ideal size for a writers’ room wouldn’t be five or seven or 10 people, which is what it is, generally speaking; it might just be AI plus a showrunner.

And do they need a physical space at all?

Could be Zoom, right? You know, it could be ChatGPT plus a guy in his basement. And look, I’m sure you could create some mediocre TV with that formula. Maybe it’s more that I can’t trust the originality. It’s like, whose words am I actually reading? It’s just all the stolen work that’s kind of been fed into the machine.

Do you think there’s a limit, then, to the quality of things that can be produced with these technologies?

At the moment? I think, yeah, I do, on their own. You know, showrunners, [...] they’re typically very experienced writers. So I fully believe that if an AI were to produce a very mediocre piece of writing, a showrunner could probably make it good. But there’s a human cost here. It’s just increasing wealth disparities. It means that a hypothetical seven or eight or nine writers are out of a job. It’s the thing that nobody asked for except the billionaires; that’s how it feels in my industry.

In Sea of Tranquility, a character is talking about what it’s like to live on a moon colony, and says that it was neither dystopian nor utopian. It just sort of was. So do you think we spend too much time thinking about these radical utopian or dystopian futures?

For me, that’s a very plausible kind of future, just this idea that it’s probably not amazing all the time or horrible all the time, but it’s mostly in this kind of in-between, and with that as the backdrop, then it kind of becomes about how we respond to these technologies. And you know, what it’s like travelling through time, for example, or living in the sort of simulated atmosphere of a moon colony.

In Sea of Tranquility, you raise the possibility that we’re living in a simulation. And one of the characters says, a life lived in a simulation is still a life. What did you mean by that?

Well, what is a simulation? It would be crazy to say that the lives we live in these cities are somehow less real than the lives that our ancestors lived kind of in wilderness. So then, if you extrapolate that forward, if a life lived in New York or Toronto or Montreal is not less real than a life lived in the country, then I think a life lived in the artificial bubble of a moon colony is not less real than, you know, a life lived in Montreal or Toronto or New York. I think a life lived in a simulation is still a life. I think we’re still us. And as I was thinking about that, what I found myself thinking about a lot is: what are the practical implications of all of our reality being a simulation or not? And I eventually came to the same conclusion my character does, which is that I don’t think it matters.

And I think part of simulation theory is that it’s binary. [...] What I worry a bit more about is our transition to virtual things, whether it’s more of our lives being mediated on social media, or more and more of us engaging with AIs and artificial life forms, and I feel that slow transition doesn’t necessarily give us a chance to think about what we might be losing in the process. Is that something you worry about, too?

So that occurred to me as something that I guess now we have to start worrying about. You know, if we’re in a Zoom meeting, are we actually speaking to people, or are we speaking to their AI avatars at this point?

I think we’re still probably in a place where we can generally know ... But we’re right on the edge of losing that capacity, and that might just be a moment we’re in, right?

Yeah, so five years from now, it could be absolutely impossible to tell the difference, and oh my God, the loneliness of that. Like, imagine realizing you’re the only human in the Zoom meeting and nobody else is there. That’s just kind of devastating, actually [laughs].

This interview has been condensed and edited.

Taylor Owen is the host of Machines Like Us, founding director of The Center for Media, Technology and Democracy, and an Associate Professor in the Max Bell School of Public Policy at McGill University.

Listen to the full episode on Machines Like Us
