Hollywood-AI battle heats up as OpenAI and studios clash over copyrights and consent

A year after tech firm OpenAI roiled Hollywood with the release of its Sora AI video tool, Chief Executive Sam Altman was back — with a potentially groundbreaking update.

Unlike the generic images Sora could initially create, the new program allows users to upload videos of real people and put them into AI-generated environments, complete with sound effects and dialogue.

In one video, a synthetic Michael Jackson takes a selfie video with an image of “Breaking Bad” star Bryan Cranston. In another, a likeness of SpongeBob SquarePants speaks out from behind the White House’s Oval Office desk.

“Excited to launch Sora 2!” Altman wrote on social media platform X on Sept. 30. “Video models have come a long way; this is a tremendous research achievement.”

But the enthusiasm wasn’t shared in Hollywood, where the new AI tools have created a swift backlash. At the core of the dispute is who controls the copyrighted images and likenesses of actors and licensed characters — and how much they should be compensated for their use in AI models.

The Motion Picture Assn. trade group didn’t mince words.

“OpenAI needs to take immediate and decisive action to address this issue,” Chairman Charles Rivkin said in a statement Monday. “Well-established copyright law safeguards the rights of creators and applies here.”

By the end of the week, multiple agencies and unions, including SAG-AFTRA, chimed in with similar statements, marking a rare moment of consensus in Hollywood and putting OpenAI on the defensive.

“We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2,” Varun Shetty, OpenAI’s vice president of media partnerships, said in a statement. “Many are creating original videos and excited about interacting with their favorite characters, which we see as an opportunity for rightsholders to connect with fans and share in that creativity.”

For now, the skirmish between well-capitalized OpenAI and the major Hollywood studios and agencies appears to be just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business.

“The question is less about if the studios will try to assert themselves, but when and how,” said Anthony Glukhov, senior associate at law firm Ramo, of the clash between Silicon Valley and Hollywood over AI. “They can posture all they want; but at the end of the day, there’s going to be two titans battling it out.”

Before it became the focus of ire in the creative community, OpenAI quietly tried to make inroads into the film and TV business.

The company’s executives went on a charm offensive last year, reaching out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology.

This year, the San Francisco-based AI startup took a more assertive approach.

Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they would need to explicitly declare which pieces of intellectual property — including licensed characters — they wanted opted out of being depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said.

OpenAI disputes the claim and says that it was always the company’s intent to give actors and other public figures control over how their likeness is used.

The response was immediate.

Beverly Hills talent agency WME, which represents stars such as Michael B. Jordan and Oprah Winfrey, told OpenAI its actions were unacceptable, and that all of its clients would be opting out.

Creative Artists Agency and United Talent Agency also argued that their clients had the right to control and be compensated for their likenesses.

Studios, including Warner Bros., echoed the point.

“Decades of enforceable copyright law establishes that content owners do not need to ‘opt out’ to prevent infringing uses of their protected IP,” Warner Bros. Discovery said in a statement. “As technology progresses and platforms advance, the traditional principles of copyright protection do not change.”

Unions, including SAG-AFTRA — whose members were already unsettled by the recent appearance of a fake, AI-generated composite named Tilly Norwood — also expressed alarm.

“OpenAI’s decision to honor copyright only through an ‘opt-out’ model threatens the economic foundation of our entire industry and underscores the stakes in the litigation currently working through the courts,” newly elected President Sean Astin and National Executive Director Duncan Crabtree-Ireland said in a statement.

The dispute underscores a clash of two very different cultures. On one side is the brash, Silicon Valley “move fast and break things” ethos, where asking for forgiveness is seen as preferable to asking for permission. On the other is Hollywood’s eternal wariness over the effect of new technology, and its desire to retain control over increasingly valuable intellectual property rights.

“The difficulty, as we’ve seen, is balancing the capabilities with the prior rights owned by other people,” said Rob Rosenberg, a partner with law firm Moses and Singer LLP and a former Showtime Networks general counsel. “That’s what was driving the entire entertainment industry bonkers.”

Amid the outcry, Altman posted on his blog days after the Sora 2 launch that the company would give rights holders more granular controls and was working on a way to compensate them for video generation.

OpenAI said it has guardrails to block the generation of well-known characters and a team of reviewers who are taking down material that doesn’t follow its updated policy. Rights holders can also request removal of content.

The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content it needs, legal experts said.

Existing law is clear — a copyright holder has full control over their copyrighted material, said Ray Seilie, entertainment litigator at law firm Kinsella Holley Iser Kump Steinsapir.

“It’s not your job to go around and tell other people to stop using it,” he said. “If they use it, they use it at their own risk.”

Disney, Universal and Warner Bros. Discovery have previously sued AI firms MiniMax and Midjourney, accusing them of copyright infringement.

One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don’t believe a flat fee works.

“Bring monetization that is not a one size fits all,” said Dan Neely, chief executive of Chicago-based Vermillio, which works with Hollywood talent and studios to protect how their likenesses and characters are used in AI. “That’s what will move the needle for talent and studios.”

Visiting journalist Nilesh Christopher contributed to this report.

Real estate investor denies improper use of Shohei Ohtani’s likeness

Lawyers for a Hawaii real estate investor and broker who sued Shohei Ohtani and his agent denied any improper use of the Dodgers star’s likeness for a development project and alleged the agent was trying to deflect blame for cost overruns at the player’s home.

Ohtani and Nez Balelo of CAA Baseball were sued Aug. 8 in Hawaii Circuit Court for the First Circuit by developer Kevin J. Hayes Sr., real estate broker Tomoko Matsumoto, West Point Investment Corp. and Hapuna Estates Property Owners. They accused Ohtani and Balelo of “abuse of power” that allegedly resulted in tortious interference and unjust enrichment impacting a $240 million luxury housing development on the Big Island’s coveted Hapuna Coast.

Hayes and Matsumoto had been dropped from the development deal by Kingsbarn Realty Capital, the joint venture’s majority owner.

The amended complaint filed Tuesday added Creative Artists Agency and CAA Sports as defendants.

“Balelo and CAA sought to deflect blame by scapegoating Hayes for the cost overruns on Ohtani’s home — overruns caused entirely by defendants’ own decisions,” the complaint said.

“The allegations as we clarified them make very clear that there was never a breach of the endorsement agreement, the video that was posted on the website promoting specifically this project was sent to Balelo and CAA and another adviser to Ohtani, Mark Daulton, and they were aware of it and never objected to it,” said Josh Schiller, a lawyer for Hayes and the suing entities.

In a motion to dismiss filed Sept. 14, attorneys for Ohtani and Balelo said “plaintiffs exploited Ohtani’s name and photograph to drum up traffic to a website that marketed plaintiffs’ own side project development.”

“This is a desperate attempt to avoid dismissal of a frivolous complaint and, as we previously said, to distract from plaintiffs’ myriad of failures and their blatant misappropriation of Shohei Ohtani’s rights,” Laura Smolowe, a lawyer for Ohtani and Balelo, said in a statement. “Nez Balelo has always prioritized Mr. Ohtani’s best interests, including protecting his name, image, and likeness from unauthorized use.”

Lawyers for Hayes and the plaintiffs claimed they kept Balelo and CAA informed.

“Before the website went live, Hayes submitted a link to the entire site — including its promotional aspects — by email to Balelo and Terry Prince, the director of legal and business affairs at CAA Sports LLC,” the amended complaint said. “It remained online with no material changes for 14 months before Balelo suddenly objected and threatened litigation — weaponizing the issue in order to create pretext for yet another set of demands and concessions.”

“The sudden demand that Kingsbarn terminate plaintiffs was instead a retaliatory measure against Hayes for resisting the constant and improper demands of Balelo and (Ohtani),” the complaint added. “Defendants further calculated that, with plaintiffs removed, they could more easily extract financial concessions from the project and enrich themselves at plaintiffs’ expense.”

Shohei Ohtani’s lawyers claim he was victim in Hawaii real estate deal

Dodgers star Shohei Ohtani and his agent, Nez Balelo, moved to dismiss a lawsuit filed last month accusing them of causing a Hawaii real estate investor and broker to be fired from a $240-million luxury housing development on the Big Island’s Hapuna Coast.

Ohtani and Balelo were sued Aug. 8 in Hawaii Circuit Court for the First Circuit by developer Kevin J. Hayes Sr. and real estate broker Tomoko Matsumoto, West Point Investment Corp. and Hapuna Estates Property Owners, who accused them of “abuse of power” that allegedly resulted in tortious interference and unjust enrichment.

Hayes and Matsumoto had been dropped from the development deal by Kingsbarn Realty Capital, the joint venture’s majority owner.

In papers filed Sunday, lawyers for Ohtani and Balelo said that in 2023 Hayes and Matsumoto acquired rights, under an endorsement agreement, for a joint venture in which they owned a minority stake to use Ohtani’s name, image and likeness to market the venture’s real estate development at the Mauna Kea Resort. The lawyers said Ohtani was a “victim of NIL violations.”

“Unbeknownst to Ohtani and his agent Nez Balelo, plaintiffs exploited Ohtani’s name and photograph to drum up traffic to a website that marketed plaintiffs’ own side project development,” the lawyers wrote. “They engaged in this self-dealing without authorization, and without paying Ohtani for that use, in a selfish and wrongful effort to take advantage of their proximity to the most famous baseball player in the world.”

The lawyers claimed Hayes and Matsumoto sued after “Balelo did his job and protected his client by expressing justifiable concern about this misuse and threatening to take legal action against this clear misappropriation.” They called Balelo’s actions “clearly protected speech.”

In a statement issued after the suit was filed last month, Kingsbarn called the allegations “completely frivolous and without merit.”

Ohtani is a three-time MVP on the defending World Series champion Dodgers.

“Nez Balelo has always prioritized Shohei Ohtani’s best interests, including protecting his name, image, and likeness from unauthorized use,” a lawyer for Ohtani and Balelo said in a statement. “This frivolous lawsuit is a desperate attempt by plaintiffs to distract from their myriad of failures and blatant misappropriation of Mr. Ohtani’s rights.”

Lawyers for Hayes and Matsumoto did not immediately respond to a request for comment.

De-aged stars, cloned voices: How AI is changing acting

For filmmaker Scott Mann, three dozen F-bombs had the makings of a million-dollar headache.

When Mann wrapped “Fall,” a 2022 thriller about two women stranded atop a 2,000-foot radio tower, he figured the hard part was over. Shot in the Mojave Desert on a $3-million budget, the film didn’t have money to burn and seemed on course. But Lionsgate wanted a PG-13 rating and, with 35 expletives, “Fall” was headed for an R. Reshoots would cost more than $1 million — far beyond what the production could afford.

In the past, a director might have taken out a second mortgage or thrown themselves at the mercy of the ratings board. Mann instead turned to AI.

A few years earlier, he had been dismayed by how a German dub of his 2015 thriller “Heist” flattened the performances, including a key scene with Robert De Niro, to match stiff, mistranslated dialogue. That frustration led Mann to co-found Flawless, an AI startup aimed at preserving the integrity of an actor’s performance across languages. As a proof of concept, he used the company’s tech to subtly reshape De Niro’s mouth movements and restore the emotional nuance of the original scene.

On “Fall,” Mann applied that same technology to clean up the profanity without reshoots, digitally modifying the actors’ mouths to match PG-13-friendly lines like “freaking” — at a fraction of the cost.

As AI stirs both hype and anxiety in Hollywood, Mann understands why even such subtle digital tweaks can feel like a violation. That tension came to a head during the 2023 SAG-AFTRA strike, in which AI became the defining flash point in the fight over acting’s future.

“Ours is a rights-based industry,” says Mann, 45, who helped develop a digital rights management platform at Flawless to ensure performers approve any changes to their work. “It’s built on protecting human creativity, the contributions of actors, directors, editors, and if those rights aren’t protected, that value gets lost.”

Mann at his office in Santa Monica. (Brian Feinzimer / For The Times)

Still, Mann doesn’t see AI as a threat so much as a misunderstood tool — one that, used carefully, can support the artists it’s accused of replacing. Flawless’ DeepEditor, for example, lets directors transfer facial expressions from one take to another, even when the camera angle or lighting changes, helping actors preserve their strongest moments without breaking continuity.

“Plenty of actors I’ve worked with have had that moment where they see what’s possible and realize, ‘Oh my God, this is so much better,’” Mann says. “It frees them up, takes off the pressure and helps them do a better job. Shutting AI out is naive and a way to end up on the wrong side of history. Done right, this will make the industry grow and thrive.”

AI isn’t hovering at the edges of acting anymore — it’s already on soundstages and in editing bays. Studios have used digital tools to de-age Harrison Ford in “Indiana Jones and the Dial of Destiny,” resurrect Peter Cushing’s Grand Moff Tarkin in “Rogue One” and clone Val Kilmer’s voice in “Top Gun: Maverick” after throat cancer left him unable to speak. The technology has reshaped faces, smoothed dialogue and fast-tracked everything from dubbing to reshoots. And its reach is growing: Studios can now revive long-dead stars, conjure stunt doubles who never get hurt and rewrite performances long after wrap.

But should they?

Actors outside Paramount Studios during a SAG-AFTRA solidarity rally in September 2023. (Al Seib / For The Times)

As the tools grow more sophisticated, the threat to actors goes beyond creative disruption. In an industry where steady work is already elusive and the middle class of working actors is vanishing, AI raises the prospect of fewer jobs, lower pay and, in a dystopian twist, a future in which your disembodied face and voice might get work without you.

Background actors were among the first to sound the alarm during the 2023 strike, protesting studio proposals to scan them once and reuse their likenesses indefinitely. That scenario is already beginning to unfold: In China, a state-backed initiative will use AI to reimagine 100 kung fu classics, including films starring Jackie Chan and Bruce Lee, through animation and other digital enhancements. Lee’s estate said it was unaware of the project, raising questions about how these actors’ likenesses might be used, decades after filming.

If the soul of acting is a human presence, what remains when even that can be simulated?

“You want to feel breath — you want to feel life,” said actor and director Ethan Hawke during a panel at 2023’s Telluride Film Festival, where strike-era unease over AI was palpable. “When we see a great painting, we feel a human being’s blood, sweat and tears. That’s what we’re all looking for, that connection with the present moment. And AI can’t do that.”

Who’s in control?

Justine Bateman may seem like an unlikely crusader in Hollywood’s fight against AI. Launched to fame as Mallory Keaton on the 1980s sitcom “Family Ties,” she later became a filmmaker and earned a computer science degree from UCLA. Now, as founder of the advocacy group CREDO23, Bateman has become one of the industry’s fiercest voices urging filmmakers to reject AI-generated content and defend the integrity of human-made work. Loosely modeled on Dogme 95, CREDO23 offers a certification of films made without AI, using minimal VFX and union crews. It’s a pledge backed by a council including “Mad Men” creator Matthew Weiner, “The Handmaid’s Tale” director Reed Morano and actor Juliette Lewis.

The 2023 SAG-AFTRA contract set new guardrails: Studios must get actors’ consent to create or use digital replicas of their likenesses, and those replicas can’t generate new performances without a separate deal. Actors must also be compensated and credited when their digital likeness is used.

But to Bateman, a former SAG-AFTRA board member and negotiating committee rep, those protections are little more than sandbags against an inevitable AI flood: hard-won but already straining to keep the technology at bay.

“The allowances in the contract are pretty astounding,” Bateman says by phone, her voice tight with exasperation. “If you can picture the Teamsters allowing self-driving trucks in their contract — that’s on par with what SAG did. If you’re not making sure human roles are played by human actors, I’m not sure what the union is for.”

Justine Bateman, photographed by The Times in 2022. (Jay L. Clendenin / Los Angeles Times)

To Bateman, the idea that AI expands access to filmmaking — a central tenet of its utopian sales pitch — is a dangerous myth, one that obscures deeper questions about authorship and the value of creative labor.

“Anyone can make a film — my last two, I shot on an iPhone,” Bateman says. “The idea that AI is ‘democratizing film’ doesn’t even make sense. What it really does is remove the barrier of skill. It lets people pretend they’re filmmakers when they’re not, by prompting software that wouldn’t even function without having stolen a hundred years of film and TV production made by real filmmakers.”

Bateman’s opposition to AI is rooted in a deep distrust of Silicon Valley’s expanding influence over the creative process and a belief that filmmaking should be driven by artists, not algorithms. “The tech bro business completely jumped the shark with generative AI,” she says. “Is it solving plastics in the ocean? Homelessness? L.A. traffic? Not that I’m aware of.”

She scoffs at the supposed efficiencies AI brings to the filmmaking process: “It’s like saying, whatever somebody enjoys — sex or an ice cream sundae — ‘Hey, now you can do it in a quarter of the time.’ OK, but then what do you think life is for?”

To Bateman, an actor’s voice, face, movements or even their choice of costume is not raw material to be reshaped but an expression of authorship. AI, in her view, erases those choices and the intent behind them. “I’m deeply against changing what the actor did,” she says. “It’s not right to have the actor doing things or saying things they didn’t do — or to alter their hair, makeup or clothes in postproduction using AI. The actor knows what they did.”

While Bateman has been public and unwavering in her stance, many actors remain unsure whether to raise their voices. In the wake of the strikes, much of the conversation around AI has moved behind closed doors, leaving those who do speak out feeling at times exposed and alone.

Scarlett Johansson, who lent her smoky, hypnotic voice to the fictional AI in Spike Jonze’s Oscar-winning 2013 film “Her,” now finds herself in a uniquely uncomfortable position: She’s both a symbol of our collective fascination with artificial performance and a real-world example of what’s at stake when that line is crossed. Last year, she accused OpenAI of using a chatbot voice that sounded “eerily similar” to hers, months after she declined to license it. OpenAI denied the claim and pulled the voice, but the incident reignited concern over consent and control.

Johansson has long spoken out against the unauthorized use of her image, including her appearance in deepfake pornography, and has pushed for stronger safeguards against digital impersonation. To date, though, she is one of the few major stars to publicly push back against the creeping mimicry enabled by AI — and she’s frustrated that more haven’t joined her. “There has to be some agreed-upon set of boundaries in order for [AI] to not be detrimental,” she told Vanity Fair in May. “I wish more people in the public eye would support and speak out about that. I don’t know why that’s not the case.”

Lights, camera, replication

Ed Ulbrich, 60, a pioneering visual effects producer and co-founder of Digital Domain, has spent his career helping actors do the impossible, one pixel at a time.

In 2008’s “The Curious Case of Benjamin Button,” he led a team of more than 150 artists in building a fully digital version of Brad Pitt’s face so the actor could convincingly age in reverse — a two-year effort that earned Ulbrich and three colleagues an Oscar for visual effects and set a new benchmark for digital performance. (Nearly two decades later, the achievement is still impressive, although some scenes, especially those with Pitt’s aged face composited on a child’s body, now show their digital seams.) For 2010’s “Tron: Legacy,” Ulbrich helped digitally transform Jeff Bridges into his 1982 self using motion capture and CGI.

Working on last year’s “Here” — Robert Zemeckis’ technically daring drama starring Tom Hanks and Robin Wright as a couple whose lives play out across decades in a single New Jersey living room — showed Ulbrich just how far things have come. For someone who jokes he has “real estate in the uncanny valley,” it wasn’t just the AI-enabled realism that floored him. It was the immediacy. On set, AI wasn’t enhancing footage after the fact; it was visually reshaping the performance in real time.

Tom Hanks and Robin Wright in the movie “Here.” (Sony Pictures Ent.)

“You look up and see 67-year-old Tom Hanks. You look down at the monitor — he’s 20, and it looks better than the best CGI,” Ulbrich says. “In my world, the human face is the holy grail. That is the most complicated thing you can do. And now it’s getting done in near real time before your eyes. The actor can come back and look at the monitor and get new ideas, because they’re seeing a different version of themselves: younger, older, as an alien or whatever.”

This kind of seamless AI-driven alteration marks a new frontier in postproduction. Modern AI systems can now “beautify” actors’ faces, much as an Instagram or Zoom filter would: smooth out wrinkles, alter skin tone, sharpen jawlines, subtly nudge eye position to better match a desired gaze. What once required painstaking VFX can now be handled by fast, flexible AI tools, often with results invisible to audiences.

Once limited to big-budget sci-fi and fantasy productions, this digital touch-up capability is expanding into rom-coms, prestige dramas, high-end TV and even some indie films. Dialogue can be rewritten and re-lipped in post. Facial expressions can be smoothed or swapped without reshoots. More and more, viewers may have no way of knowing what’s real and what’s been subtly adjusted.

“Here” was largely rejected by both audiences and critics, with some deeming its digitally de-aged performances more unsettling than moving. But Ulbrich says the digital enhancement of performances is already well underway.

Talent agency CAA has built a vault of client scans, a kind of biometric asset library for future productions. Some stars now negotiate contracts that reduce their time on set, skipping hours in the makeup chair or performance-capture gear, knowing AI can fill in the gaps.

“Robert Downey, Brad Pitt, Will Smith — they’ve all been scanned many times,” says Ulbrich, who recently joined the AI-driven media company Moonvalley, which pitches itself as a more ethical, artist-centered player in the space. “If you’ve done a studio tentpole, you’ve been scanned.

“There is a lot of fear around AI and it’s founded,” he adds. “Unless you do something about it, you can just get run over. But there are people out there that are harnessing this. At this point, fighting AI is like fighting against electricity.”

While many in Hollywood wrestle with what AI means for the oldest component of moviemaking, others take a more pragmatic view, treating it as a tool to solve problems and keep productions on track. Jerry Bruckheimer, the powerhouse producer behind “Top Gun,” “Pirates of the Caribbean” and this summer’s “F1,” is among those embracing its utility.

“AI is not going anywhere and it’s only going to get more useful for people in our business,” he said in a recent interview with The Times.

He recalled one such moment during post-production on his new Brad Pitt–led Formula One drama, a logistical feat filmed during actual Formula One races across Europe and the Middle East, with a budget north of $200 million.

“Brad was in the wilds of New Zealand, and we had test screenings coming up,” Bruckheimer says. “We couldn’t get his voice to do some looping, so we used an app that could mimic Brad Pitt. I’m sure the union will come after me if you write that, but it wasn’t used in the movie because he became available.”

While he’s skeptical of AI’s ability to generate truly original ideas — “We’re always going to need writers,” he says — Bruckheimer, whose films have grossed more than $16 billion worldwide, sees AI as a powerful tool for global reach.

“They can take Brad’s voice from the movie and turn it into other languages so it’s actually his voice, rather than another actor,” he says. “If it’s not available yet, it will be.”

The debate over AI in performance flared earlier this year with “The Brutalist,” Brady Corbet’s award-winning drama about a Hungarian architect. After the film’s editor, Dávid Jancsó, revealed that AI voice-cloning software had been used to subtly modify the Hungarian accents of stars Adrien Brody and Felicity Jones, the backlash followed swiftly.

Some critics accused the film of using AI to smooth over performances while presenting itself as handcrafted, a move one viral post derided as trying to “cheap out without soul.” Corbet later clarified that AI was used sparingly, only to adjust vowel sounds, but the decision left some viewers uneasy — even as Brody went on to win the Oscar for lead actor.

If the controversy over “The Brutalist” struck some as a moral crisis, David Cronenberg found the whole thing overblown. Few filmmakers have probed the entanglement of flesh, identity and technology as relentlessly as the director of “Videodrome,” “The Fly” and last year’s “The Shrouds,” so he’s not particularly rattled by the rise of AI-assisted performances.

“All directors have always messed around with actors’ performances — that’s what editing is,” Cronenberg told The Times in April. “Filmmaking isn’t theater. It’s not sacred. We’ve been using versions of this for years. It’s another tool in the toolbox. And it’s not controlling you — you can choose not to use it.”

Long before digital tools, Cronenberg recalls adjusting actor John Lone’s vocal pitch in his 1993 film “M. Butterfly,” in which Lone played a Chinese opera singer and spy who presents as a woman to seduce a French diplomat. The director raised the pitch when the character appeared as a woman and lowered it when he didn’t — a subtle manipulation to reinforce the illusion.

David Cronenberg, photographed at his home in Toronto, Canada, in April. (Kate Dockeray / For The Times)

Far from alarmed, Cronenberg is intrigued by AI’s creative potential as a way of reshaping authorship itself. With new platforms like OpenAI’s Sora and Google’s Veo 3 now capable of generating increasingly photorealistic clips from simple text prompts, an entire performance could conceivably be conjured from a writer’s keyboard.

“Suddenly you can write a scene — a woman is walking down the street, she looks like this, she’s wearing that, it’s raining, whatever — and AI can create a video for you,” Cronenberg says. “To me, this is all exciting. It absolutely can threaten all kinds of jobs and that has to be dealt with, but every technological advance has done that and we just have to adapt and figure it out.”

Ghosts in the frame

In the Hollywood of the late 1970s, there was no AI to tweak an actor’s face. So when “Star Wars” star Mark Hamill fractured his nose and left cheekbone in a serious car crash between shooting the first and second films, the solution was to tweak the story. The 1980 sequel “The Empire Strikes Back” opened with Luke Skywalker being attacked by a nine-foot-tall snow beast called a wampa on the ice planet Hoth, partly to account for the change in his appearance.

Decades later, when Hamill was invited to return as a younger version of himself in the 2020 Season 2 finale of “The Mandalorian,” the chance to show Luke “at the height of his powers was irresistible,” he says.

But the reality left him feeling oddly detached from the character that made him famous. Hamill shared the role with a younger body double, and digital de-aging tools recreated his face from decades earlier. The character’s voice, meanwhile, was synthesized using Respeecher, a neural network trained on old recordings of Hamill to mimic his speech from the original trilogy era.

“I didn’t have that much dialogue: ‘Are you Luke Skywalker?’ ‘I am,’” Hamill recalled in an interview with The Times earlier this year. “I don’t know what they do when they take it away, in terms of tweaking it and making your voice go up in pitch or whatever.”

When fans speculated online that he hadn’t participated at all, Hamill declined to correct the record.

“My agent said, ‘Do you want me to put out a statement or something?’” Hamill recalls. “I said, ‘Eh, people are going to say what they want to say.’ Maybe if you deny it, they say, ‘See? That proves it — he’s denying it.’”

A digitally de-aged Mark Hamill as the young Luke Skywalker in a 2020 episode of “The Mandalorian.” (Lucasfilm Ltd.)

When Luke returned again in a 2022 episode of “The Book of Boba Fett,” the process was even more synthetic. Hamill was minimally involved on camera, and the character was built almost entirely from digital parts: a de-aged face mapped onto a body double, with an AI-generated voice delivering his lines. Hamill was credited and compensated, though the exact terms of the arrangement haven’t been made public.

The visual effect was notably improved from earlier efforts, thanks in part to a viral deepfake artist known as Shamook, whose YouTube video improving the VFX in “The Mandalorian” finale had racked up millions of views. He was soon hired by Industrial Light & Magic — a rare case of fan-made tech critique turning into a studio job.

“In essence, yes, I did participate,” Hamill says.

It’s one thing to be digitally altered while you’re still alive. It’s another to keep performing after you’re gone.

Before his death last year, James Earl Jones — whose resonant baritone helped define Darth Vader for generations — gave Lucasfilm permission to recreate his voice using AI. In a recent collaboration with Disney, Epic Games deployed that digital voice in Fortnite, allowing players to team up with Vader and hear new lines delivered in Jones’ unmistakable tones, scripted by Google’s Gemini AI.

In May, SAG-AFTRA filed a labor charge, saying the use of Jones’ voice hadn’t been cleared with the union.

Last year’s “Alien: Romulus” sparked similar backlash over the digital resurrection of Ian Holm’s android character Ash nearly a decade after Holm’s death. Reconstructed using a blend of AI and archival footage, the scenes were slammed by some fans as a form of “digital necromancy.” For the film’s home video release, director Fede Álvarez quietly issued an alternate cut that relied more heavily on practical effects, including an animatronic head modeled from a preexisting cast of Holm’s face.

For Hollywood, AI allows nostalgia to become a renewable resource, endlessly reprocessed and resold. Familiar faces can be altered, repurposed and inserted into entirely new stories. The audience never has to say goodbye and the industry never has to take the risk of introducing someone new.

Hamill, for his part, seems ready to let go of Luke. After his final arc in 2017’s “The Last Jedi,” he says he feels a sense of closure.

“I don’t know the full impact AI will have, but I find it very ominous,” he says. “I’m fine. I had my time. Now the spotlight should be on the current and future actors and I hope they enjoy it as much as I did.”

Actors, not avatars

Actor and AI startup Wonder Dynamics co-founder Tye Sheridan, photographed by The Times in 2021. (Michael Nagle / For The Times)

Actor Tye Sheridan knows how dark an AI future could get. After all, he starred in Steven Spielberg’s 2018 “Ready Player One,” a sci-fi thriller set inside a corporate-controlled world of digital avatars. But Sheridan isn’t trying to escape into that world — he’s trying to shape the one ahead.

With VFX supervisor Nikola Todorovic, Sheridan co-founded Wonder Dynamics in 2017 to explore how AI can expand what’s possible on screen. Their platform uses AI to insert digital characters into live-action scenes without green screens or motion-capture suits, making high-end VFX more accessible to low-budget filmmakers. Backed by Spielberg and “Avengers” co-director Joe Russo, Wonder Dynamics was acquired last year by Autodesk, the software firm behind many animation and design tools.

“Since the advent of the camera, technology has been pushing this industry forward,” Sheridan, 28, says on a video call. “AI is just another part of that path. It can make filmmaking more accessible, help discover new voices. Maybe the next James Cameron will find their way into the industry through some AI avenue. I think that’s really exciting.”

With production costs spiraling, Todorovic sees AI as a way to lower the barrier to entry and make riskier, more ambitious projects possible. “We really see AI going in that direction, where you can get those A24-grounded stories with Marvel visuals,” he says. “That’s what younger audiences are hungry for.”

The shift, Todorovic argues, could lead to more films overall and more opportunities for actors. “Maybe instead of 10,000 people making five movies, it’ll be 1,000 people making 50,” he says.

Still, Todorovic sees a threshold approaching, one where synthetic actors could, in theory, carry a film. “I do think technically it is going to get solved,” Todorovic says. “But the question remains — is that what we really want? Do we really want the top five movies of the year to star humans who don’t exist? I sure hope not.”

For him, the boundary isn’t just about realism. It’s about human truth.

“You can’t prompt a performance,” he says. “You can’t explain certain movements of the body and it’s very hard to describe emotions. Acting is all about reacting. That’s why when you make a movie, you do five takes — or 40. Because it’s hard to communicate.”

Sheridan, who has appeared in the “X-Men” franchise as well as smaller dramas like “The Card Counter” and “The Tender Bar,” understands that instinctively and personally. “I started acting in films when I was 11 years old,” he says. “I wouldn’t ever want to build something that put me out of a job. That’s the fun part — performing, exploring, discovering the nuances. That’s why we fall in love with certain artists: their unique sensibility, the way they do what no one else can.”

He knows that may sound contradictory coming from the co-founder of an AI company. That’s exactly why he believes it’s critical that artists, not Silicon Valley CEOs, are the ones shaping how the technology is used.

“We should be skeptical of AI and its bad uses,” he says. “It’s a tool that can be used for good or bad. How are we going to apply it to create more access and opportunity in this industry and have more voices heard? We’re focused on keeping the artist as an essential part of the process, not replacing them.”

For now, Sheridan lives inside that paradox, navigating a technology that could both elevate and imperil the stories he cares most about.

His next acting gig? “The Housewife,” a psychological drama co-starring Naomi Watts and Michael Imperioli, in which he plays a 1960s New York Times reporter investigating a suspected Nazi hiding in Queens. No AI. No doubles. Just people pretending to be other people the old way, while it lasts.

Vin Diesel says Paul Walker’s Brian O’Conner could return in ‘Fast & Furious 11’

Vin Diesel says the planned finale of the long-running “Fast & Furious” franchise will come with an unexpected passenger.

Speaking at Fuel Fest, an automotive event in Pomona over the weekend, Diesel told fans that the final “Fast & Furious” film will bring back one of the series’ most beloved characters: Paul Walker’s Brian O’Conner. The longtime on-screen partner to Diesel’s Dominic Toretto, O’Conner last appeared in 2015’s “Furious 7,” which was completed after Walker’s death in a car accident in 2013 at age 40.

The franchise — known for its blend of street racing, elaborate heists and outsized action — has grown into one of the most successful of all time, with more than $7 billion at the global box office.

“Just yesterday I was with Universal Studios,” Diesel said in a video from the event. “The studio said to me, ‘Vin, can we please have the finale of ‘Fast & Furious’ [in] April 2027?’ I said, ‘Under three conditions’ — because I’ve been listening to my fanbase.”

Those conditions, he said, were to bring the franchise back to L.A., return to its street-racing roots and reunite Dom and Brian.

“That is what you’re going to get in the finale,” Diesel promised.

How the production might accomplish that reunion remains unclear. When Walker died during the making of “Furious 7,” the filmmakers turned to a mix of archived footage, digital effects and performances by Walker’s brothers, Caleb and Cody, who served as stand-ins for unfinished scenes. Artists at Weta Digital created more than 300 visual-effects shots to map Walker’s likeness onto his brothers’ bodies, often piecing together dialogue from existing recordings. The film’s farewell — showing Brian and Dom driving side by side before splitting onto separate roads — became one of the franchise’s most memorable and emotional moments, widely seen as a tribute to Walker’s legacy.

A return for Brian O’Conner would join a growing list of posthumous digital performances in major franchises — a practice that continues to stir debate over where the line should be drawn. In 2016’s “Rogue One: A Star Wars Story,” Peter Cushing’s Grand Moff Tarkin was recreated through a mix of motion capture, CGI and archival material, decades after Cushing’s death. In 2019, “The Rise of Skywalker” relied on previously unused footage and digital stitching to return Carrie Fisher’s Leia to the screen three years after the actress’ passing.

And in last year’s “Alien: Romulus,” the late Ian Holm’s likeness was recreated as an android using AI and digital effects, with the approval of his estate — a choice that sparked controversy and led to more practical effects being used in the film’s home release.
