AI is changing film production and crew labor. What happens now?

You may not know Eliot Mack’s name, but if a small robot has ever crept around your kitchen, you know his work.

Before he turned his MIT-trained mind to filmmaking, Mack helped lead a small team of engineers trying to solve a deeply relatable problem: how to avoid vacuuming. Whether it was figuring out how to get around furniture legs or unclog the brushes after a run-in with long hair, Mack designed everything onscreen first with software, troubleshooting virtually and getting 80% of the way there before a single part was ever manufactured.

The result was the Roomba.

When Mack pivoted to filmmaking in the early 2000s, he was struck by how chaotic Hollywood’s process felt. “You pitch the script, get the green light and you’re flying into production,” he says, sounding both amused and baffled. “There’s no CAD template, no centralized database. I was like, how do movies even get made?”

That question sent Mack down a new path, trading dust bunnies for the creative bottlenecks that slow Hollywood down.

In 2004 he founded Lightcraft Technology, a startup developing what would later be known as virtual production tools, born out of his belief that if you could design a robot in software, you should be able to design a shot the same way. The company’s early system, Previzion, sold for $180,000 and was used on sci-fi and fantasy shows like “V” and “Once Upon a Time.” But Jetset, its latest AI-assisted tool set, runs on an iPhone and offers a free tier, with pro features topping out at just $80 a month. It lets filmmakers scan a location, drop it into virtual space and block out scenes with camera moves, lighting and characters. They can preview shots, overlay elements and organize footage for editing — all from a phone. No soundstage, no big crew, no gatekeepers. Lightcraft’s pitch: “a movie studio in your pocket.”

The goal, Mack says, is to put more power in the hands of the people making the work. “One of the big problems is how siloed Hollywood is,” he says. “We talked to an Oscar-winning editor who said, ‘I’m never going to get to make my movie’ — he was pigeonholed as just an editor. Same with an animator we know who has two Oscars.”

Eliot Mack, CEO of Lightcraft, an AI-powered virtual-production startup, wants to give creators the power and freedom to bring their ideas to life.

(Christina House/Los Angeles Times)

To Mack, the revolution of Jetset recalls the scrappy, guerrilla spirit of Roger Corman’s low-budget productions, which launched the early careers of directors like Francis Ford Coppola and Martin Scorsese. For generations of creatives stuck waiting on permission or funding, he sees this moment as a reset button.

“The things you got good at — writing, directing, acting, creating, storytelling — they’re still crazy useful,” he says. “What’s changing is the amount of schlepping you have to do before you get to do the fun stuff. Your 20s are a gift. You want to be creating at the absolute speed of sound. We’re trying to get to a place where you don’t have to ask anyone. You can just make the thing.”

AI is reshaping nearly every part of the filmmaking pipeline. Storyboards can now be generated from a script draft. Lighting and camera angles can be tested before anyone touches a piece of gear. Rough cuts, placeholder VFX, even digital costume mock-ups can all be created before the first shot is filmed. What once took a full crew, a soundstage and a six-figure budget can now happen in minutes, sometimes at the hands of a single person with a laptop.

This wave of automation is arriving just as Hollywood is gripped by existential anxiety. The 2023 writers’ and actors’ strikes brought the industry to a standstill and put AI at the center of a fight over its future. Since then, production has slowed, crew sizes have shrunk and the streaming boom has given way to consolidation and cost-cutting.

According to FilmLA, on-location filming in Greater Los Angeles dropped 22.4% in early 2025 compared with the year before. For many of the crew members and craftspeople still competing for those jobs, AI doesn’t feel like an innovation. It feels like a new way to justify doing more with less, only to end up with work that’s less original or creative.

“AI scrapes everything we artists have made off the internet and creates a completely static, banal world that can never imagine anything that hasn’t happened before,” documentary filmmaker Adam Curtis warned during a directors panel at the 2023 Telluride Film Festival, held in the midst of the strikes. “That’s the real weakness of the AI dream — it’s stuck with the ghosts. And I think we’ll get fed up with that.”

How you feel about these changes often depends on where you sit and how far along you are in your career. For people just starting out, AI can offer a way to experiment, move faster and bypass the usual barriers to entry. For veterans behind the scenes, it often feels like a threat to the expertise they’ve spent decades honing.

Past technological shifts — the arrival of sound, the rise of digital cameras, the advancement of CGI — changed how movies were made, but not necessarily who made them. Each wave brought new roles: boom operators and dialogue coaches, color consultants and digital compositors. Innovation usually meant more jobs, not fewer.

But AI doesn’t just change the tools. It threatens to erase the people who once used the old ones.

Diego Mariscal has seen firsthand how AI has cut potential jobs for grips.

(Jennifer Rose Clasen)

Diego Mariscal, 43, a veteran dolly grip who has worked on “The Mandalorian” and “Spider-Man: No Way Home,” saw the writing on the wall during a recent shoot. A visual effects supervisor opened his laptop to show off a reel of high-end commercials and something was missing. “There were no blue screens — none,” Mariscal recalls. “That’s what we do. We put up blues as grips. You’d normally hire an extra 10 people and have an extra three days of pre-rigging, setting up all these blue screens. He was like, ‘We don’t need it anymore. I just use AI to clip it out.’”

Mariscal runs Crew Stories, a private Facebook group with nearly 100,000 members, where working crew members share job leads, trade tips and voice their growing fears. He tries to keep up with the steady drip of AI news. “I read about AI all day, every day,” he says. “At least 20 posts a day.”

His fear isn’t just about fewer jobs — it’s about what comes next. “I’ve been doing this since I was 19,” Mariscal says of his specialized dolly work, which involves setting up heavy equipment and guiding the camera smoothly through complex shots. “I can push a cart in a parking lot. I can push a lawnmower. What else can I do?”

Who wins, who loses and what does James Cameron think?

Before AI and digital doubles, Mike Marino learned the craft of transformation the human way: through hands-on work and a fascination that bordered on obsession.

Marino was 5 years old when he first saw “The Elephant Man” on HBO. Horrified yet transfixed, he became fixated on prosthetics and the emotional power they could carry. As a teenager in New York, he pored over issues of Fangoria, studied monsters and makeup effects and experimented with sculpting his own latex masks on his bedroom floor.

Prosthetics artist Mike Marino asks a big question about generative AI: What role do human creatives play?

(Sean Dougherty / For The Times)

Decades later, Marino, 48, has become one of Hollywood’s leading makeup artists, earning Oscar nominations for “Coming 2 America,” “The Batman” and last year’s dark comedy “A Different Man,” in which he helped transform Sebastian Stan into a disfigured actor.

His is the kind of tactile, handcrafted work that once seemed irreplaceable. But today AI tools are increasingly capable of achieving similar effects digitally: de-aging actors, altering faces, even generating entire performances. What used to take weeks of experimentation and hours in a makeup trailer can now be approximated with a few prompts and a trained model. To Marino, AI is more than a new set of tools. It’s a fundamental change in what it means to create.

“If AI is so good it can replace a human, then why have any human beings?” he says. “This is about taste. It’s about choice. I’m a human being. I’m an artist. I have my own ideas — mine. Just because you can make 10,000 spaceships in a movie, should you?”

“If AI is so good it can replace a human, then why have any human beings?”

— Mike Marino, makeup artist on “A Different Man”

Marino is no technophobe. His team regularly uses 3D scanning and printing. But he draws the line at outsourcing creative judgment to a machine. “I’m hoping there are artists who want to work with humans and not machines,” he says. “If we let AI just run amok with no taste, no choice, no morality behind it, then we’re gone.”

Not everyone sees AI’s rise in film production as a zero-sum game. Some technologists imagine a middle path. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab and one of the world’s leading AI researchers, believes the future of filmmaking lies in a “human-machine partnership.”

AI, Rus argues, can take on time-consuming tasks like animating background extras, color correction or previsualizing effects, freeing up people to focus on what requires intuition and taste. “AI can help with the routine work,” she says. “But the human touch and emotional authenticity are essential.”

Few directors have spent more time grappling with the dangers and potential of artificial intelligence than James Cameron. Nearly 40 years before generative tools entered Hollywood’s workflow, he imagined a rogue AI triggering global apocalypse in 1984’s “The Terminator,” giving the world Skynet — now a cultural shorthand for the dark side of machine intelligence. Today, he continues to straddle that line, using AI behind the scenes on the upcoming “Avatar: Fire and Ash” to optimize visual effects and performance capture, while keeping creative decisions in human hands. The latest sequel, due Dec. 19, promises to push the franchise’s spectacle and scale even further; a newly released trailer reveals volcanic eruptions, aerial battles and a new clan of Na’vi.

A scene from “Avatar: The Way of Water.” Director James Cameron distinguishes between machine learning used to reduce monotonous moviemaking work and generative AI.

(Courtesy of 20th Century Studios)

“You can automate a lot of processes that right now tie up a lot of artists doing mundane tasks,” Cameron told The Times in 2023 at a Beyond Fest screening of his 1989 film “The Abyss.” “So if we could accelerate the postproduction pipeline, then we can make more movies. Then those artists will get to do more exciting things.”

For Cameron, the promise of AI lies in efficiency, not elimination. “I think in our particular industry, it’s not going to replace people; it’s going to free them to do other things,” he says. “It’s going to accelerate the process and bring the price down, which would be good because, you know, some movies are a little more expensive than others. And a lot of that has to do with human energy.”

Cameron himself directed five films between 1984 and 1994 and only three in the three decades since, though each one has grown increasingly complex and ambitious.

That said, Cameron has never been one to chase shortcuts for their own sake. “I think you can make pre-viz and design easier, but I don’t know if it makes it better,” he says. “I mean, if easy is your thing. Easy has never been my thing.”

He draws a line between the machine-learning techniques his team has used since the first “Avatar” to help automate tedious tasks and the newer wave of generative AI models making headlines today.

“The big explosion has been around image-based generative models that use everything from every image that’s ever been created,” he says. “We’d never use any of them. The images we make are computer-created, but they’re not AI-created.”

In his view, nothing synthetic can replace the instincts of a flesh-and-blood artist. “We have human artists that do all the designs,” he says. “We don’t need AI. We’ve got meat-I. And I’m one of the meat-artists that come up with all that stuff. We don’t need a computer. Maybe other people need it. We don’t.”

Reshaping creativity — and creative labor

Rick Carter didn’t go looking for AI as a tool. He discovered it as a lifeline.

The two-time Oscar-winning production designer, who worked with Cameron on “Avatar” and whose credits include “Jurassic Park” and “Forrest Gump,” began experimenting with generative AI tools like Midjourney and Runway during the pandemic, looking for a way to keep his creative instincts sharp while the industry was on pause. A longtime painter, he was drawn to the freedom the programs offered.

“I saw that there was an opportunity to create images where I didn’t have to go to anybody else for approval, which is the way I would paint,” Carter says by phone from Paris. “None of the gatekeeping would matter. I have a whole lot of stories on my own that I’ve tried to get into the world in various ways and suddenly there was a way to visualize them.”

Midjourney and Runway can create richly detailed images — and in Runway’s case, short video clips — from a text prompt or a combination of text and visuals. Trained on billions of images and audiovisual materials scraped from the internet, these systems learn to mimic style, lighting, composition and form, often with eerie precision. In a production pipeline, these tools can help concept artists visualize characters or sets, let directors generate shot ideas or give costume designers and makeup artists a fast way to test looks, long before physical production begins.

But as these tools gain traction in Hollywood, a deeper legal and creative dilemma is coming into focus: Who owns the work they produce? And what about the copyrighted material used to train them?

In June, Disney and Universal filed a federal copyright lawsuit against Midjourney, accusing the company of generating unauthorized replicas of characters such as Spider-Man, Darth Vader and Shrek using AI models trained on copyrighted material: what the suit calls a “bottomless pit of plagiarism.” It’s the most high-profile of several legal challenges now putting copyright law to the test in the age of generative AI.

“Forrest Gump” director Robert Zemeckis, left, with production designer Rick Carter at an art installation of the movie’s famed bench.

(Carter family)

Working with generative models, Carter began crafting what he calls “riffs of consciousness,” embracing AI as a kind of collaborative partner, one he could play off of intuitively. The process reminded him of the loose, improvisational early stages of filmmaking, a space he knows well from decades of working with directors like Robert Zemeckis and Steven Spielberg.

“I’ll just start with a visual or a word prompt and see how it iterates from there and what it triggers in my mind,” Carter says. “Then I incorporate that so it builds on its own in an almost free-associative way. But it’s still based upon my own intuitive, emotional, artistic, even spiritual needs at that moment.”

He describes the experience as a dialogue between two minds, one digital and one human: “One AI is artificial intelligence. The other AI is authentic intelligence — that’s us. We’ve earned it over this whole span of time on the planet.”

Sometimes, Carter says, the most evocative results come from mistakes. While sketching out a story about a hippie detective searching for a missing woman in the Himalayas, he accidentally typed “womb” into ChatGPT instead of “woman.” The AI ran with it, returning three pages of wild plot ideas involving gurus, seekers and a bizarre mystery set in motion by the disappearance.

“I couldn’t believe it,” he says. “I would never have taken it that far. The AI is so precocious. It is trying so much to please that it will literally make something out of the mistake you make.”

Carter hasn’t used generative AI on a film yet; most of his creations are shared only with friends. But he says the technology is already slipping into creative workflows in covert ways. “There are issues with copyrights with most of the studios so for now, it’s going to be mostly underground,” he says. “People will use it but they won’t acknowledge that they’re using it — they’ll have an illustrator do something over it, or take a photo so there’s no digital trail.”

Carter has lived through a major technological shift before. “I remember when we went from analog to digital, from ‘Jurassic Park’ on,” he says. “There were a lot of wonderful artists who could draw and paint in ways that were just fantastic but they couldn’t adapt. They didn’t want to — even the idea of it felt like the wrong way to make art. And, of course, most of them suffered because they didn’t make it from the Rolodex to the database in terms of people calling them up.”

He worries that some artists may approach the technology with a rigid sense of authorship. “Early on, I found that the less I used my own ego as a barometer for whether something was artistic, the more I leaned into the process of collaboratively making something bigger than the sum of its parts — and the bigger and better the movies became.”

Others, like storyboard artist Sam Tung, are bracing against the same wave with a quiet but unshakable defiance.

Tung, whose credits include “Twisters” and Christopher Nolan’s upcoming adaptation of “The Odyssey,” has spent the last two years tracking the rise of generative tools, not just their capabilities but their implications. As co-chair of the Animation Guild’s AI Committee, he has been on the front lines of conversations about how these technologies could reshape creative labor.

To artists like Tung, the rise of generative tools feels deeply personal. “If you are an illustrator or a writer or whatever, you had to give up other things to take time to develop those skills,” he says. “Nobody comes out of the womb being able to draw or write or act. Anybody who does that professionally spent years honing those skills.”

“Anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”

— Sam Tung, storyboard artist on “The Odyssey”

Tung has no interest in handing that over to a machine. “It’s not that I’m scared of it — I just don’t need it,” he says. “If I want to draw something or paint something, I’ll do it myself. That way it’s exactly what I want and I actually enjoy the process. When people tell me they responded to a drawing I did or a short film I made with friends, it feels great. But anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”

What unsettles him isn’t just the slickness of AI’s output but how that polish is being used to justify smaller crews and faster turnarounds. “If this is left unchecked, it’s very easy to imagine a worst-case scenario where team sizes and contract durations shrink,” Tung says. “A producer who barely understands how it works might say, ‘Don’t you have AI to do 70% of this? Why do you need a whole week to turn around a sequence? Just press the button that says: MAKE MOVIE.’ ”

At 73, Carter isn’t chasing jobs. His legacy is secure. “If they don’t hire me again, that’s OK,” he says. “I’m not in that game anymore.” He grew up in Hollywood — his father was Jack Lemmon’s longtime publicist and producing partner — and has spent his life watching the industry evolve. Now, he’s witnessing a reckoning unlike any he, or anyone else, has ever imagined.

“I do have concerns about who is developing AI and what their values are,” he says. “What they use all this for is not necessarily something I would approve of — politically, socially, emotionally. But I don’t think I’m in a position to approve or not.”

Earlier this year, the Palisades fire destroyed Carter’s home, taking with it years of paintings and personal artwork. AI, he says, has given him a way to keep creating through the upheaval. “It saved me through the pandemic, and now it’s saving me through the fire,” he says, as if daring the universe to test him again. “It’s like, go ahead, throw something else at me.”

‘Prompt and pray?’ Not so fast

Many in the industry may still be dipping a toe into the waters of AI. Verena Puhm dove in.

The Austrian-born filmmaker studied acting and directing in Munich and Salzburg before moving to Los Angeles, where she built a globe-spanning career producing, writing and developing content for international networks and streamers. Her credits range from CNN’s docuseries “History of the Sitcom” to the German reboot of the cult anthology “Beyond Belief: Fact or Fiction” and a naval documentary available on Tubi. More recently, she has channeled that same creative range into a deepening exploration of generative tools.

Puhm first began dabbling with AI while using Midjourney to design a pitch deck, but it wasn’t until she entered a timed generative AI filmmaking challenge at the 2024 AI on the Lot conference — informally dubbed a “gen battle” — that the creative potential of the medium hit her.

“In two hours, I made a little mock commercial,” she remembers, proudly. “It was actually pretty well received and fun. And I was like, Oh, wow, I did this in two hours. What could I do in two days or two weeks?”

What started as experimentation soon became a second act. This summer, Puhm was named head of studio for Dream Lab LA, a new creative arm of Luma AI, which develops generative video tools for filmmakers and creators. There, she’s helping shape new storytelling formats and supporting emerging creators working at the intersection of cinema and technology. She may not be a household name, but in the world of experimental storytelling, she’s fast becoming a key figure.

Verena Puhm, a director, writer and producer who has used generative AI in a number of her projects, says it’s breaking down barriers to entry.

(Jason Armond/Los Angeles Times)

Some critics dismiss AI filmmaking as little more than “prompt and pray”: typing in a few words and hoping something usable comes out. Puhm bristles at the phrase.

“Anybody that says that tells me they’ve never tried it at all, because it is not that easy and simple,” she says. “You can buy a paintbrush at Home Depot for, what, $2? That doesn’t make you a painter. When smartphones first came out, there was a lot of content being made but that didn’t mean everyone was a filmmaker.”

What excites her most is how AI is breaking down the barriers that once kept ambitious ideas out of reach. Luma’s new Modify Video tool lets filmmakers tweak footage after it’s shot, changing wardrobe, aging a character, shifting the time of day, all without reshoots or traditional VFX. It can turn a garage into a spaceship, swap a cloudy sky for the aurora borealis or morph an actor into a six-eyed alien, no green screen required.

“I remember shopping projects around and being told by producers, ‘This scene has to go, that has to go,’ just to keep the budget low. Now everything is open.”

— Verena Puhm, Head of Studio at Dream Lab LA

“It’s such a relief as an artist,” Puhm says. “If there’s a project I’ve been sitting on for six years because I didn’t have a $5 million budget — suddenly there’s no limit. I remember shopping projects around and being told by producers, ‘This scene has to go, that has to go,’ just to keep the budget low. Now everything is open.”

That sense of access resonates far beyond Los Angeles. At a panel during AI on the Lot, “Blue Beetle” director Ángel Manuel Soto reflected on how transformative AI might have been when he was first starting out. “I wish tools like this existed when I wanted to make movies in Puerto Rico, because nobody would lend me a camera,” he said. “Access to equipment is a privilege we sometimes take for granted. I see this helping kids like me from the projects tell stories without going bankrupt — or stealing, which I don’t condone.”

Puhm welcomes criticism of AI but only when it’s informed. “If you hate AI and you’ve actually tested the tools and educated yourself, I’ll be your biggest supporter,” she says. “But if you’re just speaking out of fear, with no understanding, then what are you even basing your opinion on?”

She understands why some filmmakers feel rattled, especially those who, like her, grew up dreaming of seeing their work on the big screen. “I still want to make features and TV series — that’s what I set out to do,” she says. “I hope movie theaters don’t go away. But if the same story I want to tell reaches millions of people on a phone and they’re excited about it, will I really care that it wasn’t in a theater?”

“I just feel like we have to adapt to the reality of things,” she continues. “That might sometimes be uncomfortable, but there is so much opportunity if you lean in. Right now any filmmaker can suddenly tell a story at a high production value that they could have never done before, and that is beautiful and empowering.”

For many, embracing AI boils down to a simple choice: adapt or get cut from the frame.

Hal Watmough, a BAFTA-winning British editor with two decades of experience, first began experimenting with AI out of a mix of curiosity and dread. “I was scared,” he admits. “This thing was coming into the industry and threatening our jobs and was going to make us obsolete.” But once he started playing with tools like Midjourney and Runway, he quickly saw how they could not only speed up the process but allow him to rethink what his career could be.

For an editor used to working only with what he was given, the ability to generate footage on the fly, cut with it immediately and experiment endlessly without waiting on a crew or a shoot was a revelation. “It was still pretty janky at that stage, but I could see the potential,” he says. “It was kind of intoxicating. I started to think, I’d like to start making things that I haven’t seen before.”

After honing his skills with various AI tools, Watmough created a wistful, vibrant five-minute animated short called “LATE,” about an aging artist passing his wisdom to a young office worker. Over two weeks, he generated 2,181 images using AI, then curated and refined them frame by frame to shape the story.

Earlier this year, he submitted “LATE” to what was billed as the world’s first AI animation contest, hosted by Curious Refuge, an online education hub for creative technologists — and, to his delight, he won. The prize included $10,000, a pitch meeting with production company Promise Studios and, as an absurd bonus, his face printed on a potato. But for Watmough, the real reward was the sense that he had found a new creative identity.

“There’s something to the fact that the winner of the first AI animation competition was an editor,” Watmough says. “With the advent of AI, yes, you could call yourself a filmmaker but essentially I’d say most people are editors. You’re curating, selecting, picking what you like — relying on your taste.”

Thanks to AI, he says he’s made more personal passion projects in the past year and a half than during his entire previous career. “I’ll be walking or running and ideas just come. Now I can go home that night and try them,” he says. “None of that would exist without AI. So either something exists within AI or it never exists at all. And all the happiness and fulfillment that comes with it for the creator doesn’t exist either.”

Watmough hasn’t entirely lost his fear of what AI might do to the creative workforce, even as he is energized by what it makes possible. “A lot of people I speak to in film and TV are worried about losing their jobs and I’m not saying the infrastructure roles won’t radically change,” he says. “But I don’t think AI is going to replace that many — if any — creative people.”

What it will do, he says, is raise the bar. “If anyone can create anything, then average work will basically become extinct or pointless. AI can churn out remakes until the cows come home. You’ll have to pioneer to exist.”

He likens the current moment to the birth of cinema more than a century ago — specifically the Lumière brothers’ “Arrival of a Train at La Ciotat,” the 1896 short that famously startled early audiences. In the silent one-minute film, a steam train rumbles toward the camera, growing larger. Some viewers reportedly leaped from their seats, convinced it was about to crash into them.

“People ran out of the theater screaming,” Watmough says. “Now we don’t even think about it. With AI, we’re at that stage again. We’re watching the steam train come into the station and people are either really excited or they’re running out of the theater in fear. That’s where we are, right at the start. And the potential is limitless.”

Then again, he adds with a dry laugh, “I’m an eternal optimist, so take what I say with a grain of salt.”

De-aged stars, cloned voices: How AI is changing acting

For filmmaker Scott Mann, three dozen F-bombs had the makings of a million-dollar headache.

When Mann wrapped “Fall,” a 2022 thriller about two women stranded atop a 2,000-foot radio tower, he figured the hard part was over. Shot in the Mojave Desert on a $3-million budget, the film didn’t have money to burn and seemed on course. But Lionsgate wanted a PG-13 rating and, with 35 expletives, “Fall” was headed for an R. Reshoots would cost more than $1 million — far beyond what the production could afford.

In the past, a director might have taken out a second mortgage or thrown themselves at the mercy of the ratings board. Mann instead turned to AI.

A few years earlier, he had been dismayed by how a German dub of his 2015 thriller “Heist” flattened the performances, including a key scene with Robert De Niro, to match stiff, mistranslated dialogue. That frustration led Mann to co-found Flawless, an AI startup aimed at preserving the integrity of an actor’s performance across languages. As a proof of concept, he used the company’s tech to subtly reshape De Niro’s mouth movements and restore the emotional nuance of the original scene.

On “Fall,” Mann applied that same technology to clean up the profanity without reshoots, digitally modifying the actors’ mouths to match PG-13-friendly lines like “freaking” — at a fraction of the cost.

As AI stirs both hype and anxiety in Hollywood, Mann understands why even such subtle digital tweaks can feel like a violation. That tension came to a head during the 2023 SAG-AFTRA strike, in which AI became the defining flash point in the fight over acting’s future.

“Ours is a rights-based industry,” says Mann, 45, who helped develop a digital rights management platform at Flawless to ensure performers approve any changes to their work. “It’s built on protecting human creativity, the contributions of actors, directors, editors, and if those rights aren’t protected, that value gets lost.”

Mann at his office in Santa Monica.

(Brian Feinzimer / For The Times)

Still, Mann doesn’t see AI as a threat so much as a misunderstood tool — one that, used carefully, can support the artists it’s accused of replacing. Flawless’ DeepEditor, for example, lets directors transfer facial expressions from one take to another, even when the camera angle or lighting changes, helping actors preserve their strongest moments without breaking continuity.

“Plenty of actors I’ve worked with have had that moment where they see what’s possible and realize, ‘Oh my God, this is so much better,’” Mann says. “It frees them up, takes off the pressure and helps them do a better job. Shutting AI out is naive and a way to end up on the wrong side of history. Done right, this will make the industry grow and thrive.”

AI isn’t hovering at the edges of acting anymore — it’s already on soundstages and in editing bays. Studios have used digital tools to de-age Harrison Ford in “Indiana Jones and the Dial of Destiny,” resurrect Peter Cushing’s Grand Moff Tarkin in “Rogue One” and clone Val Kilmer’s voice in “Top Gun: Maverick” after throat cancer left him unable to speak. The technology has reshaped faces, smoothed dialogue and fast-tracked everything from dubbing to reshoots. And its reach is growing: Studios can now revive long-dead stars, conjure stunt doubles who never get hurt and rewrite performances long after wrap.

But should they?

Actors outside Paramount Studios during a SAG-AFTRA solidarity rally in September 2023.

(Al Seib / For The Times)

As the tools grow more sophisticated, the threat to actors goes beyond creative disruption. In an industry where steady work is already elusive and the middle class of working actors is vanishing, AI raises the prospect of fewer jobs, lower pay and, in a dystopian twist, a future in which your disembodied face and voice might get work without you.

Background actors were among the first to sound the alarm during the 2023 strike, protesting studio proposals to scan them once and reuse their likenesses indefinitely. That scenario is already beginning to unfold: In China, a state-backed initiative will use AI to reimagine 100 kung fu classics, including films starring Jackie Chan and Bruce Lee, through animation and other digital enhancements. Lee’s estate said it was unaware of the project, raising questions about how these actors’ likenesses might be used, decades after filming.

If the soul of acting is a human presence, what remains when even that can be simulated?

“You want to feel breath — you want to feel life,” said actor and director Ethan Hawke during a panel at 2023’s Telluride Film Festival, where strike-era unease over AI was palpable. “When we see a great painting, we feel a human being’s blood, sweat and tears. That’s what we’re all looking for, that connection with the present moment. And AI can’t do that.”

Who’s in control?

Justine Bateman may seem like an unlikely crusader in Hollywood’s fight against AI. Launched to fame as Mallory Keaton on the 1980s sitcom “Family Ties,” she later became a filmmaker and earned a computer science degree from UCLA. Now, as founder of the advocacy group CREDO23, Bateman has become one of the industry’s fiercest voices urging filmmakers to reject AI-generated content and defend the integrity of human-made work. Loosely modeled on Dogme 95, CREDO23 offers a certification of films made without AI, using minimal VFX and union crews. It’s a pledge backed by a council including “Mad Men” creator Matthew Weiner, “The Handmaid’s Tale” director Reed Morano and actor Juliette Lewis.

The 2023 SAG-AFTRA contract set new guardrails: Studios must get actors’ consent to create or use digital replicas of their likenesses, and those replicas can’t generate new performances without a separate deal. Actors must also be compensated and credited when their digital likeness is used.

But to Bateman, a former SAG-AFTRA board member and negotiating committee rep, those protections are little more than sandbags against an inevitable AI flood: hard-won but already straining to keep the technology at bay.

“The allowances in the contract are pretty astounding,” Bateman says by phone, her voice tight with exasperation. “If you can picture the Teamsters allowing self-driving trucks in their contract — that’s on par with what SAG did. If you’re not making sure human roles are played by human actors, I’m not sure what the union is for.”

Justine Bateman, photographed by The Times in 2022.

(Jay L. Clendenin / Los Angeles Times)

To Bateman, the idea that AI expands access to filmmaking — a central tenet of its utopian sales pitch — is a dangerous myth, one that obscures deeper questions about authorship and the value of creative labor.

“Anyone can make a film — my last two, I shot on an iPhone,” Bateman says. “The idea that AI is ‘democratizing film’ doesn’t even make sense. What it really does is remove the barrier of skill. It lets people pretend they’re filmmakers when they’re not, by prompting software that wouldn’t even function without having stolen a hundred years of film and TV production made by real filmmakers.”

Bateman’s opposition to AI is rooted in a deep distrust of Silicon Valley’s expanding influence over the creative process and a belief that filmmaking should be driven by artists, not algorithms. “The tech bro business completely jumped the shark with generative AI,” she says. “Is it solving plastics in the ocean? Homelessness? L.A. traffic? Not that I’m aware of.”

She scoffs at the supposed efficiencies AI brings to the filmmaking process: “It’s like saying, whatever somebody enjoys — sex or an ice cream sundae — ‘Hey, now you can do it in a quarter of the time.’ OK, but then what do you think life is for?”

To Bateman, an actor’s voice, face, movements or even their choice of costume is not raw material to be reshaped but an expression of authorship. AI, in her view, erases those choices and the intent behind them. “I’m deeply against changing what the actor did,” she says. “It’s not right to have the actor doing things or saying things they didn’t do — or to alter their hair, makeup or clothes in postproduction using AI. The actor knows what they did.”

While Bateman has been public and unwavering in her stance, many actors remain unsure whether to raise their voices. In the wake of the strikes, much of the conversation around AI has moved behind closed doors, leaving those who do speak out feeling at times exposed and alone.

Scarlett Johansson, who lent her smoky, hypnotic voice to the fictional AI in Spike Jonze’s Oscar-winning 2013 film “Her,” now finds herself in a uniquely uncomfortable position: She’s both a symbol of our collective fascination with artificial performance and a real-world example of what’s at stake when that line is crossed. Last year, she accused OpenAI of using a chatbot voice that sounded “eerily similar” to hers, months after she declined to license it. OpenAI denied the claim and pulled the voice, but the incident reignited concern over consent and control.

Johansson has long spoken out against the unauthorized use of her image, including her appearance in deepfake pornography, and has pushed for stronger safeguards against digital impersonation. To date, though, she is one of the few major stars to publicly push back against the creeping mimicry enabled by AI — and she’s frustrated that more haven’t joined her. “There has to be some agreed-upon set of boundaries in order for [AI] to not be detrimental,” she told Vanity Fair in May. “I wish more people in the public eye would support and speak out about that. I don’t know why that’s not the case.”

Lights, camera, replication

Ed Ulbrich, 60, a pioneering visual effects producer and co-founder of Digital Domain, has spent his career helping actors do the impossible, one pixel at a time.

In 2008’s “The Curious Case of Benjamin Button,” he led a team of more than 150 artists in building a fully digital version of Brad Pitt’s face so the actor could convincingly age in reverse — a two-year effort that earned the film the Oscar for visual effects and set a new benchmark for digital performance. (Nearly two decades later, the achievement is still impressive, although some scenes, especially those with Pitt’s aged face composited on a child’s body, now show their digital seams.) For 2010’s “Tron: Legacy,” Ulbrich helped digitally transform Jeff Bridges into his 1982 self using motion capture and CGI.

Working on last year’s “Here” — Robert Zemeckis’ technically daring drama starring Tom Hanks and Robin Wright as a couple whose lives play out across decades in a single New Jersey living room — showed Ulbrich just how far things have come. For someone who jokes he has “real estate in the uncanny valley,” it wasn’t just the AI-enabled realism that floored him. It was the immediacy. On set, AI wasn’t enhancing footage after the fact; it was visually reshaping the performance in real time.

Tom Hanks and Robin Wright in the movie “Here.”

(Sony Pictures Ent.)

“You look up and see 67-year-old Tom Hanks. You look down at the monitor — he’s 20, and it looks better than the best CGI,” Ulbrich says. “In my world, the human face is the holy grail. That is the most complicated thing you can do. And now it’s getting done in near real time before your eyes. The actor can come back and look at the monitor and get new ideas, because they’re seeing a different version of themselves: younger, older, as an alien or whatever.”

This kind of seamless AI-driven alteration marks a new frontier in postproduction. Modern AI systems can now “beautify” actors’ faces, as one might with an Instagram or Zoom filter: smooth out wrinkles, alter skin tone, sharpen jawlines, subtly nudge eye position to better match a desired gaze. What once required painstaking VFX can now be handled by fast, flexible AI tools, often with results invisible to audiences.

Once limited to only big-budget sci-fi and fantasy productions, this digital touch-up capability is expanding into rom-coms, prestige dramas, high-end TV and even some indie films. Dialogue can be rewritten and re-lipped in post. Facial expressions can be smoothed or swapped without reshoots. More and more, viewers may have no way of knowing what’s real and what’s been subtly adjusted.

“Here” was largely rejected by both audiences and critics, with some deeming its digitally de-aged performances more unsettling than moving. But Ulbrich says digitally enhanced performance is already well underway.

Talent agency CAA has built a vault of client scans, a kind of biometric asset library for future productions. Some stars now negotiate contracts that reduce their time on set, skipping hours in the makeup chair or performance-capture gear, knowing AI can fill in the gaps.

“Robert Downey, Brad Pitt, Will Smith — they’ve all been scanned many times,” says Ulbrich, who recently joined the AI-driven media company Moonvalley, which pitches itself as a more ethical, artist-centered player in the space. “If you’ve done a studio tentpole, you’ve been scanned.

“There is a lot of fear around AI and it’s founded,” he adds. “Unless you do something about it, you can just get run over. But there are people out there that are harnessing this. At this point, fighting AI is like fighting against electricity.”

While many in Hollywood wrestle with what AI means for the oldest component of moviemaking, others take a more pragmatic view, treating it as a tool to solve problems and keep productions on track. Jerry Bruckheimer, the powerhouse producer behind “Top Gun,” “Pirates of the Caribbean” and this summer’s “F1,” is among those embracing its utility.

“AI is not going anywhere and it’s only going to get more useful for people in our business,” he said in a recent interview with The Times.

He recalled one such moment during postproduction on his new Brad Pitt–led Formula One drama, a logistical feat filmed during actual Formula One races across Europe and the Middle East, with a budget north of $200 million.

“Brad was in the wilds of New Zealand, and we had test screenings coming up,” Bruckheimer says. “We couldn’t get his voice to do some looping, so we used an app that could mimic Brad Pitt. I’m sure the union will come after me if you write that, but it wasn’t used in the movie because he became available.”

While he’s skeptical of AI’s ability to generate truly original ideas — “We’re always going to need writers,” he says — Bruckheimer, whose films have grossed more than $16 billion worldwide, sees AI as a powerful tool for global reach.

“They can take Brad’s voice from the movie and turn it into other languages so it’s actually his voice, rather than another actor,” he says. “If it’s not available yet, it will be.”

The debate over AI in performance flared earlier this year with “The Brutalist,” Brady Corbet’s award-winning drama about a Hungarian architect. After the film’s editor, Dávid Jancsó, revealed that AI voice-cloning software had been used to subtly modify the Hungarian accents of stars Adrien Brody and Felicity Jones, the backlash followed swiftly.

Some critics accused the film of using AI to smooth over performances while presenting itself as handcrafted, a move one viral post derided as trying to “cheap out without soul.” Corbet later clarified that AI was used sparingly, only to adjust vowel sounds, but the decision left some viewers uneasy — even as Brody went on to win the Oscar for lead actor.

If the controversy over “The Brutalist” struck some as a moral crisis, David Cronenberg found the whole thing overblown. Few filmmakers have probed the entanglement of flesh, identity and technology as relentlessly as the director of “Videodrome,” “The Fly” and last year’s “The Shrouds,” so he’s not particularly rattled by the rise of AI-assisted performances.

“All directors have always messed around with actors’ performances — that’s what editing is,” Cronenberg told The Times in April. “Filmmaking isn’t theater. It’s not sacred. We’ve been using versions of this for years. It’s another tool in the toolbox. And it’s not controlling you — you can choose not to use it.”

Long before digital tools, Cronenberg recalls adjusting actor John Lone’s vocal pitch in his 1993 film “M. Butterfly,” in which Lone played a Chinese opera singer and spy who presents as a woman to seduce a French diplomat. The director raised the pitch when the character appeared as a woman and lowered it when he didn’t — a subtle manipulation to reinforce the illusion.

David Cronenberg, photographed at his home in Toronto, Canada, in April.

(Kate Dockeray / For The Times)

Far from alarmed, Cronenberg is intrigued by AI’s creative potential as a way of reshaping authorship itself. With new platforms like OpenAI’s Sora and Google’s Veo 3 now capable of generating increasingly photorealistic clips from simple text prompts, an entire performance could conceivably be conjured from a writer’s keyboard.

“Suddenly you can write a scene — a woman is walking down the street, she looks like this, she’s wearing that, it’s raining, whatever — and AI can create a video for you,” Cronenberg says. “To me, this is all exciting. It absolutely can threaten all kinds of jobs and that has to be dealt with, but every technological advance has done that and we just have to adapt and figure it out.”

Ghosts in the frame

In the Hollywood of the late 1970s, there was no AI to tweak an actor’s face. So when “Star Wars” star Mark Hamill fractured his nose and left cheekbone in a serious car crash between shooting the first and second films, the solution was to tweak the story. The 1980 sequel “The Empire Strikes Back” opened with Luke Skywalker being attacked by a nine-foot-tall snow beast called a wampa on the ice planet Hoth, partly to account for the change in his appearance.

Decades later, when Hamill was invited to return as a younger version of himself in the 2020 Season 2 finale of “The Mandalorian,” the chance to show Luke “at the height of his powers” was irresistible, he says.

But the reality left him feeling oddly detached from the character that made him famous. Hamill shared the role with a younger body double, and digital de-aging tools recreated his face from decades earlier. The character’s voice, meanwhile, was synthesized using Respeecher, a neural network trained on old recordings of Hamill to mimic his speech from the original trilogy era.

“I didn’t have that much dialogue: ‘Are you Luke Skywalker?’ ‘I am,’” Hamill recalled in an interview with The Times earlier this year. “I don’t know what they do when they take it away, in terms of tweaking it and making your voice go up in pitch or whatever.”

When fans speculated online that he hadn’t participated at all, Hamill declined to correct the record.

“My agent said, ‘Do you want me to put out a statement or something?’” Hamill recalls. “I said, ‘Eh, people are going to say what they want to say.’ Maybe if you deny it, they say, ‘See? That proves it — he’s denying it.’”

A digitally de-aged Mark Hamill as the young Luke Skywalker in a 2020 episode of “The Mandalorian.”

(Lucasfilm Ltd.)

When Luke returned again in a 2022 episode of “The Book of Boba Fett,” the process was even more synthetic: Hamill was minimally involved on camera, and the character was built almost entirely from digital parts, with a de-aged face mapped onto a body double and an AI-generated voice delivering his lines. Hamill was credited and compensated, though the exact terms of the arrangement haven’t been made public.

The visual effect was notably improved from earlier efforts, thanks in part to a viral deepfake artist known as Shamook, whose YouTube video improving the VFX in “The Mandalorian” finale had racked up millions of views. He was soon hired by Industrial Light & Magic — a rare case of fan-made tech critique turning into a studio job.

“In essence, yes, I did participate,” Hamill says.

It’s one thing to be digitally altered while you’re still alive. It’s another to keep performing after you’re gone.

Before his death last year, James Earl Jones — whose resonant baritone helped define Darth Vader for generations — gave Lucasfilm permission to recreate his voice using AI. In a recent collaboration with Disney, Epic Games deployed that digital voice in Fortnite, allowing players to team up with Vader and hear new lines delivered in Jones’ unmistakable tones, scripted by Google’s Gemini AI.

In May, SAG-AFTRA filed a labor charge, saying the use of Jones’ voice hadn’t been cleared with the union.

Last year’s “Alien: Romulus” sparked similar backlash over the digital resurrection of Ian Holm’s android character Ash nearly a decade after Holm’s death. Reconstructed using a blend of AI and archival footage, the scenes were slammed by some fans as a form of “digital necromancy.” For the film’s home video release, director Fede Álvarez quietly issued an alternate cut that relied more heavily on practical effects, including an animatronic head modeled from a preexisting cast of Holm’s face.

For Hollywood, AI allows nostalgia to become a renewable resource, endlessly reprocessed and resold. Familiar faces can be altered, repurposed and inserted into entirely new stories. The audience never has to say goodbye and the industry never has to take the risk of introducing someone new.

Hamill, for his part, seems ready to let go of Luke. After his final arc in 2017’s “The Last Jedi,” he says he feels a sense of closure.

“I don’t know the full impact AI will have but I find it very ominous,” he says. “I’m fine. I had my time. Now the spotlight should be on the current and future actors and I hope they enjoy it as much as I did.”

Actors, not avatars

Actor and AI startup Wonder Dynamics co-founder Tye Sheridan, photographed by The Times in 2021.

(Michael Nagle / For The Times)

Actor Tye Sheridan knows how dark an AI future could get. After all, he starred in Steven Spielberg’s 2018 “Ready Player One,” a sci-fi thriller set inside a corporate-controlled world of digital avatars. But Sheridan isn’t trying to escape into that world — he’s trying to shape the one ahead.

With VFX supervisor Nikola Todorovic, Sheridan co-founded Wonder Dynamics in 2017 to explore how AI can expand what’s possible on screen. Their platform uses AI to insert digital characters into live-action scenes without green screens or motion-capture suits, making high-end VFX more accessible to low-budget filmmakers. Backed by Spielberg and “Avengers” co-director Joe Russo, Wonder Dynamics was acquired last year by Autodesk, the software firm behind many animation and design tools.

“Since the advent of the camera, technology has been pushing this industry forward,” Sheridan, 28, says on a video call. “AI is just another part of that path. It can make filmmaking more accessible, help discover new voices. Maybe the next James Cameron will find their way into the industry through some AI avenue. I think that’s really exciting.”

With production costs spiraling, Todorovic sees AI as a way to lower the barrier to entry and make riskier, more ambitious projects possible. “We really see AI going in that direction, where you can get those A24-grounded stories with Marvel visuals,” he says. “That’s what younger audiences are hungry for.”

The shift, Todorovic argues, could lead to more films overall and more opportunities for actors. “Maybe instead of 10,000 people making five movies, it’ll be 1,000 people making 50,” he says.

Still, Todorovic sees a threshold approaching, one where synthetic actors could, in theory, carry a film. “I do think technically it is going to get solved,” Todorovic says. “But the question remains — is that what we really want? Do we really want the top five movies of the year to star humans who don’t exist? I sure hope not.”

For him, the boundary isn’t just about realism. It’s about human truth.

“You can’t prompt a performance,” he says. “You can’t explain certain movements of the body and it’s very hard to describe emotions. Acting is all about reacting. That’s why when you make a movie, you do five takes — or 40. Because it’s hard to communicate.”

Sheridan, who has appeared in the “X-Men” franchise as well as smaller dramas like “The Card Counter” and “The Tender Bar,” understands that instinctively and personally. “I started acting in films when I was 11 years old,” he says. “I wouldn’t ever want to build something that put me out of a job. That’s the fun part — performing, exploring, discovering the nuances. That’s why we fall in love with certain artists: their unique sensibility, the way they do what no one else can.”

He knows that may sound contradictory coming from the co-founder of an AI company. That’s exactly why he believes it’s critical that artists, not Silicon Valley CEOs, are the ones shaping how the technology is used.

“We should be skeptical of AI and its bad uses,” he says. “It’s a tool that can be used for good or bad. How are we going to apply it to create more access and opportunity in this industry and have more voices heard? We’re focused on keeping the artist as an essential part of the process, not replacing them.”

For now, Sheridan lives inside that paradox, navigating a technology that could both elevate and imperil the stories he cares most about.

His next acting gig? “The Housewife,” a psychological drama co-starring Naomi Watts and Michael Imperioli, in which he plays a 1960s New York Times reporter investigating a suspected Nazi hiding in Queens. No AI. No doubles. Just people pretending to be other people the old way, while it lasts.
