Daniel Kwan Has a Plan to Tackle AI’s Hollywood Takeover and It Requires “Unprecedented” Action

Daniel Kwan, one half of the Oscar-winning filmmaking duo behind Everything Everywhere All at Once, joined technology futurist Jaron Lanier for a rich discussion presented by Nicolas Berggruen’s Studio B in West Hollywood on Friday evening. The hourlong conversation centered on a single question — Can human storytelling survive the algorithm? — and the dialogue helped surface many of the hot-button issues facing Hollywood (and humanity) amid the rise (and threat) of artificial intelligence technology.

The most powerful moment of the night came when Kwan presented a passionate answer to another question that neither he nor Lanier asked but that everyone in the industry is wrestling with at this critical moment: How should Hollywood respond to AI? When he finished his answer, Kwan got an enthusiastic round of applause from the 100 or so guests seated inside a private residence at Sierra Towers, among them filmmaker Michael Mann; Jonathan Wang, Kwan’s Oscar-winning producer and pal on Everything Everywhere All at Once; producer Lawrence Bender; manager-producer David Unger; and actress turned Oscar whisperer Colleen Camp.

“One of the experiences I’ve been having in Hollywood while talking to people from all different levels is that there’s this feeling that this tech is inevitable. It’s coming and it’s infiltrating our industry and there’s nothing we can do about it,” explained Kwan, describing conversations he said are permeated by fear because people are scared to talk about something they know less about than those who created it. “But I want to remind the people here who are filmmakers, you are experts. You’re experts in storytelling. You’re experts in filmmaking. You’re experts for your own industry. If you’re a doctor, you’re an expert in [medicine]. If you’re a teacher, you’re an expert in education. We cannot allow the tech industry to set the terms for our industries.”

Instead, Kwan argued that Hollywood as a whole should help set the terms of adoption. Accomplishing that, he said, will require “unprecedented” action: every facet of the industry, from studios and agencies to unions and the Academy of Motion Picture Arts and Sciences, joining hands to form a unified front against the takeover. Now is the time “to put your hands on the steering wheel, because if you don’t, they will,” he said.

“We have to understand that AI is fundamentally incompatible with our institutions, both on a global scale but also within our industry. AI is going to blow through our systems and our institutions in a way that they cannot handle, and so we have to collectively upgrade those institutions and systems. For the film industry, that means we have to do something unprecedented, which is we have to bring together the studios, the unions, the Academy, agencies, basically everyone, as a unified front against the tech industry,” he continued. “We’re putting a line in the ground against another industry that is an invasive species. We have to basically say, ‘Look, if you want us to adopt your technology into our pipelines, you’re going to have to meet us on our terms, and that means you’re going to have to help us upgrade our institutions.’”

The process would also require the tech companies to help Hollywood navigate what many see as an inevitable yet potentially catastrophic casualty — “job loss and job transition” — as well as to consult with the industry on “data dignity, creative, copyright and intellectual property.” Then Kwan turned to what he said was his most important point: “What are we going to do about deepfakes and misinformation and the stuff that you are polluting our consensus truth with?”

Kwan cited fake sex tapes featuring politicians as one example, though there are countless others he could have mentioned. Because the technology is so advanced and the potential damage to society so severe, Kwan offered a “controversial” suggestion: some AI tools should require a license or registration to use, much like a gun. “You can do real damage with a photorealistic image or video,” he added.

Because of the harmful implications, Kwan said he’s already drawn a line in the sand when it comes to his own career. “I would love to use this generative AI, I think it’s so fun. It’s so interesting. I played with it, but I will not use it for my career until we do something about this technology,” he said. “We are just silently letting them take over and not even putting up a fight, not just for our industry but for the world.”

Kwan called this moment “the tip of the spear,” and suggested that if Hollywood can take action, then many other industries, like education, could follow suit. “I feel like this is what we need to be doing right now before it gets too late. Once it gets integrated, once it takes over everything, we will be in the same place we are with social media, which is so entrenched in our economy that we can’t regulate it without ruining our economy. I just want to give you guys power back again and remember, you are the experts of your industry and you should take that to heart,” he said.

Kwan and Lanier made for interesting conversation partners. Kwan, of course, is a veteran and visionary auteur alongside longtime collaborator Daniel Scheinert (hence the nickname The Daniels). Together, they won three Oscars for their A24 hit Everything Everywhere All at Once, including best picture. Kwan is known to have spent the past couple of years connecting with tech-world insiders to learn more about AI.

Lanier, on the other hand, hails from Silicon Valley and offered the view from the tech industry’s side of the table. He holds many titles, including virtual reality pioneer, musician, author and tech futurist, and he currently serves as prime unifying scientist in Microsoft’s office of the chief technology officer. Lanier made it clear during the conversation that he was there to share personal views and not those of Microsoft. He is, however, dabbling in Hollywood: As he mentioned during the conversation, he’s developing an AI project with Natasha Lyonne and Asteria, the AI venture she co-founded with partner Bryn Mooser. The film, Uncanny Valley, will be directed by Lyonne from a script she wrote with Brit Marling, and both are on board to star. It follows a teenage girl who becomes unmoored by a hugely popular AR video game in a parallel present. Lanier is helping develop the game elements and doing some writing on it, he said.

“I always say it’s like I’m a tourist in Vanuatu,” Lanier quipped of his trip to Hollywood. “I know it’s about to be underwater, but I just want to enjoy it.”

The news is flooded with AI headlines on a daily basis as advancements arrive at what feels like lightning speed. Days before Kwan and Lanier sat down, President Donald Trump and his White House administration unveiled America’s AI action plan, dubbed “Winning the AI Race” and organized into three pillars: accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security. Critics were quick to point out that the plan, which rolls back regulations, seems to favor tech companies over concerns about AI safety and potential job losses. In response, close to 100 organizations (including the Writers Guild of America East) banded together to create the People’s Action Plan to lobby on behalf of “public well-being, shared prosperity, a sustainable future and security for all.” The coalition mirrors the kind of unified front Kwan called for in his comments.

In addition to data dignity, algorithm regulations and the controversies around First Amendment protections for the use of AI, Kwan and Lanier discussed verbiage. Specifically, how AI is commonly referred to as a noun. “I really object to the usual framing of the conversation because it goes astray right at the start when somebody says, ‘Will AI destroy us? Will it eat us? Will AI save us? Will it cure all diseases? What will AI do?’ But the thing is, AI isn’t a thing out there. It is a collaboration of us. It’s trained on your data,” Lanier explained. “These things are not out there as alien entities, like, ‘We wonder what they’ll do and we have to pray to them and we have to hope that they’ll be good to us.’ No, no, no. It’s our thing. There’s nothing there but people. The moment you treat it as a noun, you’ve already lost.”

Friday’s event marked the end of the third edition of Vaster Than Empires, a speculative storytelling retreat hosted by Berggruen’s Studio B and its Future Humans program. The weeklong retreat brought together a roster of 10 emerging screenwriters to explore another existential question: What will life become? Studio B — the in-house media arm of the Berggruen Institute, founded by the philanthropist and investor whom The New York Times has referred to as the “homeless billionaire” and a “philosopher king” — is led by founders Alex Gardels and Nathalia Ramos An, with Nick Goddard serving as producer and head of development.

The Kwan and Lanier pairing follows similar conversations Studio B has backed in a salon series that previously matched Eric Schmidt with Ashton Kutcher, Reid Hoffman with J.J. Abrams, and Yuval Noah Harari with Joseph Gordon-Levitt. But unlike those, Friday night’s event ended with a surprise musical performance featuring Lanier on the khaen, a free-reed mouth organ from Laos. Before Lanier blew into the bamboo pipes, Kwan offered final (profound) words to close the conversation.

“To learn about [AI] is to basically have to say goodbye to the future you thought was going to happen. And that requires a mourning process. I just want to acknowledge that for some of you this is normal and fine, you’ve heard all this stuff and it’s not a big deal. But for others, this is really hard stuff to handle,” Kwan said. “The fact that Hollywood might not exist anymore, the fact that all these institutions might be replaced by something else, we’re saying goodbye to the future, and there’s a little bit of grief in that. Who knows if it’s going to be good? Who knows if it’s going to be bad? But it is fundamentally going to change. Give yourself the space.”

Kwan said it took him “many months” to absorb it himself, and that anyone who has worked in the AI space has been forced to go through the five stages of grief on their own. “In order to move to action, you have to move to that last stage of acceptance. I want to offer that up to you, even if it’s not hitting you right now. It might hit you tomorrow. It might hit you next year, who knows?”
