California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.
The tactic appeared to have worked, activists said, because some politicians weakened or scrapped guardrails intended to mitigate AI’s biggest risks.
California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.
Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.
“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.
The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.
Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the industry has gained influence at both the national and state levels.
Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.
In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.
“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.
Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.
“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”
Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.
“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.
Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.
The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.
“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.
From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.
Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.
The threat that California companies could move away has caught the attention of some politicians.
California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.
Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.
“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.
OpenAI Chief Executive Sam Altman said he’s glad to stay in California.
“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.
Critics — including tech leaders such as Elon Musk, Meta and former OpenAI executives, as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.
Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.
The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, withdrew their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.
Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which would have required employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.
Called the “No Robo Bosses Act,” the legislation failed to win the governor’s signature; he considered it too broad.
“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.
The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.
Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.
“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”
Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.
“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”
You may not know Eliot Mack’s name, but if a small robot has ever crept around your kitchen, you know his work.
Before he turned his MIT-trained mind to filmmaking, Mack helped lead a small team of engineers trying to solve a deeply relatable problem: how to avoid vacuuming. Whether it was figuring out how to get around furniture legs or unclog the brushes after a run-in with long hair, Mack designed everything onscreen first with software, troubleshooting virtually and getting 80% of the way there before a single part was ever manufactured.
When Mack pivoted to filmmaking in the early 2000s, he was struck by how chaotic Hollywood’s process felt. “You pitch the script, get the green light and you’re flying into production,” he says, sounding both amused and baffled. “There’s no CAD template, no centralized database. I was like, how do movies even get made?”
That question sent Mack down a new path, trading dust bunnies for the creative bottlenecks that slow Hollywood down.
In 2004 he founded Lightcraft Technology, a startup developing what would later be known as virtual production tools, born out of his belief that if you could design a robot in software, you should be able to design a shot the same way. The company’s early system, Previzion, sold for $180,000 and was used on sci-fi and fantasy shows like “V” and “Once Upon a Time.” But Jetset, its latest AI-assisted tool set, runs on an iPhone and offers a free tier, with pro features topping out at just $80 a month. It lets filmmakers scan a location, drop it into virtual space and block out scenes with camera moves, lighting and characters. They can preview shots, overlay elements and organize footage for editing — all from a phone. No soundstage, no big crew, no gatekeepers. Lightcraft’s pitch: “a movie studio in your pocket.”
The goal, Mack says, is to put more power in the hands of the people making the work. “One of the big problems is how siloed Hollywood is,” he says. “We talked to an Oscar-winning editor who said, ‘I’m never going to get to make my movie’ — he was pigeonholed as just an editor. Same with an animator we know who has two Oscars.”
Eliot Mack, CEO of Lightcraft, an AI-powered virtual-production startup, wants to give creators the power and freedom to bring their ideas to life.
(Christina House/Los Angeles Times)
To Mack, the revolution of Jetset recalls the scrappy, guerrilla spirit of Roger Corman’s low-budget productions, which launched the early careers of directors like Francis Ford Coppola and Martin Scorsese. For generations of creatives stuck waiting on permission or funding, he sees this moment as a reset button.
“The things you got good at — writing, directing, acting, creating, storytelling — they’re still crazy useful,” he says. “What’s changing is the amount of schlepping you have to do before you get to do the fun stuff. Your 20s are a gift. You want to be creating at the absolute speed of sound. We’re trying to get to a place where you don’t have to ask anyone. You can just make the thing.”
AI is reshaping nearly every part of the filmmaking pipeline. Storyboards can now be generated from a script draft. Lighting and camera angles can be tested before anyone touches a piece of gear. Rough cuts, placeholder VFX, even digital costume mock-ups can all be created before the first shot is filmed. What once took a full crew, a soundstage and a six-figure budget can now happen in minutes, sometimes at the hands of a single person with a laptop.
This wave of automation is arriving just as Hollywood is gripped by existential anxiety. The 2023 writers’ and actors’ strikes brought the industry to a standstill and put AI at the center of a fight over its future. Since then, production has slowed, crew sizes have shrunk and the streaming boom has given way to consolidation and cost-cutting.
According to FilmLA, on-location filming in Greater Los Angeles dropped 22.4% in early 2025 compared with the year before. For many of the crew members and craftspeople still competing for those jobs, AI doesn’t feel like an innovation. It feels like a new way to justify doing more with less, only to end up with work that’s less original or creative.
“AI scrapes everything we artists have made off the internet and creates a completely static, banal world that can never imagine anything that hasn’t happened before,” documentary filmmaker Adam Curtis warned during a directors panel at the 2023 Telluride Film Festival, held in the midst of the strikes. “That’s the real weakness of the AI dream — it’s stuck with the ghosts. And I think we’ll get fed up with that.”
How you feel about these changes often depends on where you sit and how far along you are in your career. For people just starting out, AI can offer a way to experiment, move faster and bypass the usual barriers to entry. For veterans behind the scenes, it often feels like a threat to the expertise they’ve spent decades honing.
Past technological shifts — the arrival of sound, the rise of digital cameras, the advancement of CGI — changed how movies were made, but not necessarily who made them. Each wave brought new roles: boom operators and dialogue coaches, color consultants and digital compositors. Innovation usually meant more jobs, not fewer.
But AI doesn’t just change the tools. It threatens to erase the people who once used the old ones.
Diego Mariscal has seen firsthand how AI has cut potential jobs for grips.
(Jennifer Rose Clasen)
Diego Mariscal, 43, a veteran dolly grip who has worked on “The Mandalorian” and “Spider-Man: No Way Home,” saw the writing on the wall during a recent shoot. A visual effects supervisor opened his laptop to show off a reel of high-end commercials and something was missing. “There were no blue screens — none,” Mariscal recalls. “That’s what we do. We put up blues as grips. You’d normally hire an extra 10 people and have an extra three days of pre-rigging, setting up all these blue screens. He was like, ‘We don’t need it anymore. I just use AI to clip it out.’”
Mariscal runs Crew Stories, a private Facebook group with nearly 100,000 members, where working crew members share job leads, trade tips and voice their growing fears. He tries to keep up with the steady drip of AI news. “I read about AI all day, every day,” he says. “At least 20 posts a day.”
His fear isn’t just about fewer jobs — it’s about what comes next. “I’ve been doing this since I was 19,” Mariscal says of his specialized dolly work, which involves setting up heavy equipment and guiding the camera smoothly through complex shots. “I can push a cart in a parking lot. I can push a lawnmower. What else can I do?”
Who wins, who loses and what does James Cameron think?
Before AI and digital doubles, Mike Marino learned the craft of transformation the human way: through hands-on work and a fascination that bordered on obsession.
Marino was 5 years old when he first saw “The Elephant Man” on HBO. Horrified yet transfixed, he became fixated on prosthetics and the emotional power they could carry. As a teenager in New York, he pored over issues of Fangoria, studied monsters and makeup effects and experimented with sculpting his own latex masks on his bedroom floor.
Prosthetics artist Mike Marino asks a big question about generative AI: What role do human creatives play?
(Sean Dougherty / For The Times)
Decades later, Marino, 48, has become one of Hollywood’s leading makeup artists, earning Oscar nominations for “Coming 2 America,” “The Batman” and last year’s dark comedy “A Different Man,” in which he helped transform Sebastian Stan into a disfigured actor.
His is the kind of tactile, handcrafted work that once seemed irreplaceable. But today AI tools are increasingly capable of achieving similar effects digitally: de-aging actors, altering faces, even generating entire performances. What used to take weeks of experimentation and hours in a makeup trailer can now be approximated with a few prompts and a trained model. To Marino, AI is more than a new set of tools. It’s a fundamental change in what it means to create.
“If AI is so good it can replace a human, then why have any human beings?” he says. “This is about taste. It’s about choice. I’m a human being. I’m an artist. I have my own ideas — mine. Just because you can make 10,000 spaceships in a movie, should you?”
“If AI is so good it can replace a human, then why have any human beings?”
— Mike Marino, makeup artist on “A Different Man”
Marino is no technophobe. His team regularly uses 3D scanning and printing. But he draws the line at outsourcing creative judgment to a machine. “I’m hoping there are artists who want to work with humans and not machines,” he says. “If we let AI just run amok with no taste, no choice, no morality behind it, then we’re gone.”
Not everyone sees AI’s rise in film production as a zero-sum game. Some technologists imagine a middle path. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab and one of the world’s leading AI researchers, believes the future of filmmaking lies in a “human-machine partnership.”
AI, Rus argues, can take on time-consuming tasks like animating background extras, color correction or previsualizing effects, freeing up people to focus on what requires intuition and taste. “AI can help with the routine work,” she says. “But the human touch and emotional authenticity are essential.”
Few directors have spent more time grappling with the dangers and potential of artificial intelligence than James Cameron. Nearly 40 years before generative tools entered Hollywood’s workflow, he imagined a rogue AI triggering global apocalypse in 1984’s “The Terminator,” giving the world Skynet — now a cultural shorthand for the dark side of machine intelligence. Today, he continues to straddle that line, using AI behind the scenes on the upcoming “Avatar: Fire and Ash” to optimize visual effects and performance-capture, while keeping creative decisions in human hands. The latest sequel, due Dec. 19, promises to push the franchise’s spectacle and scale even further; a newly released trailer reveals volcanic eruptions, aerial battles and a new clan of Na’vi.
A scene from “Avatar: The Way of Water.” Director James Cameron draws a distinction between machine learning used to reduce monotonous moviemaking work and generative AI.
(Courtesy of 20th Century Studios)
“You can automate a lot of processes that right now tie up a lot of artists doing mundane tasks,” Cameron told The Times in 2023 at a Beyond Fest screening of his 1989 film “The Abyss.” “So if we could accelerate the postproduction pipeline, then we can make more movies. Then those artists will get to do more exciting things.”
For Cameron, the promise of AI lies in efficiency, not elimination. “I think in our particular industry, it’s not going to replace people; it’s going to free them to do other things,” he believes. “It’s going to accelerate the process and bring the price down, which would be good because, you know, some movies are a little more expensive than others. And a lot of that has to do with human energy.”
Cameron himself directed five films between 1984 and 1994 and only three in the three decades since, though each one has grown increasingly complex and ambitious.
That said, Cameron has never been one to chase shortcuts for their own sake. “I think you can make pre-viz and design easier, but I don’t know if it makes it better,” he says. “I mean, if easy is your thing. Easy has never been my thing.”
He draws a line between the machine-learning techniques his team has used since the first “Avatar” to help automate tedious tasks and the newer wave of generative AI models making headlines today.
“The big explosion has been around image-based generative models that use everything from every image that’s ever been created,” he says. “We’d never use any of them. The images we make are computer-created, but they’re not AI-created.”
In his view, nothing synthetic can replace the instincts of a flesh-and-blood artist. “We have human artists that do all the designs,” he says. “We don’t need AI. We’ve got meat-I. And I’m one of the meat-artists that come up with all that stuff. We don’t need a computer. Maybe other people need it. We don’t.”
Reshaping creativity — and creative labor
Rick Carter didn’t go looking for AI as a tool. He discovered it as a lifeline.
The two-time Oscar-winning production designer, who worked with Cameron on “Avatar” and whose credits include “Jurassic Park” and “Forrest Gump,” began experimenting with generative AI tools like Midjourney and Runway during the pandemic, looking for a way to keep his creative instincts sharp while the industry was on pause. A longtime painter, he was drawn to the freedom the programs offered.
“I saw that there was an opportunity to create images where I didn’t have to go to anybody else for approval, which is the way I would paint,” Carter says by phone from Paris. “None of the gatekeeping would matter. I have a whole lot of stories on my own that I’ve tried to get into the world in various ways and suddenly there was a way to visualize them.”
Midjourney and Runway can create richly detailed images — and in Runway’s case, short video clips — from a text prompt or a combination of text and visuals. Trained on billions of images and audiovisual materials scraped from the internet, these systems learn to mimic style, lighting, composition and form, often with eerie precision. In a production pipeline, these tools can help concept artists visualize characters or sets, let directors generate shot ideas or give costume designers and makeup artists a fast way to test looks, long before physical production begins.
But as these tools gain traction in Hollywood, a deeper legal and creative dilemma is coming into focus: Who owns the work they produce? And what about the copyrighted material used to train them?
In June, Disney and Universal filed a federal copyright lawsuit against Midjourney, accusing the company of generating unauthorized replicas of characters such as Spider-Man, Darth Vader and Shrek using AI models trained on copyrighted material: what the suit calls a “bottomless pit of plagiarism.” It’s the most high-profile of several legal challenges now putting copyright law to the test in the age of generative AI.
“Forrest Gump” director Robert Zemeckis, left, with production designer Rick Carter at an art installation of the movie’s famed bench. (Carter family)
Working with generative models, Carter began crafting what he calls “riffs of consciousness,” embracing AI as a kind of collaborative partner, one he could play off of intuitively. The process reminded him of the loose, improvisational early stages of filmmaking, a space he knows well from decades of working with directors like Robert Zemeckis and Steven Spielberg.
“I’ll just start with a visual or a word prompt and see how it iterates from there and what it triggers in my mind,” Carter says. “Then I incorporate that so it builds on its own in an almost free-associative way. But it’s still based upon my own intuitive, emotional, artistic, even spiritual needs at that moment.”
He describes the experience as a dialogue between two minds, one digital and one human: “One AI is artificial intelligence. The other AI is authentic intelligence — that’s us. We’ve earned it over this whole span of time on the planet.”
Sometimes, Carter says, the most evocative results come from mistakes. While sketching out a story about a hippie detective searching for a missing woman in the Himalayas, he accidentally typed “womb” into ChatGPT instead of “woman.” The AI ran with it, returning three pages of wild plot ideas involving gurus, seekers and a bizarre mystery set in motion by the disappearance.
“I couldn’t believe it,” he says. “I would never have taken it that far. The AI is so precocious. It is trying so much to please that it will literally make something out of the mistake you make.”
Carter hasn’t used generative AI on a film yet; most of his creations are shared only with friends. But he says the technology is already slipping into creative workflows in covert ways. “There are issues with copyrights with most of the studios so for now, it’s going to be mostly underground,” he says. “People will use it but they won’t acknowledge that they’re using it — they’ll have an illustrator do something over it, or take a photo so there’s no digital trail.”
Carter has lived through a major technological shift before. “I remember when we went from analog to digital, from ‘Jurassic Park’ on,” he says. “There were a lot of wonderful artists who could draw and paint in ways that were just fantastic but they couldn’t adapt. They didn’t want to — even the idea of it felt like the wrong way to make art. And, of course, most of them suffered because they didn’t make it from the Rolodex to the database in terms of people calling them up.”
He worries that some artists may approach the technology with a rigid sense of authorship. “Early on, I found that the less I used my own ego as a barometer for whether something was artistic, the more I leaned into the process of collaboratively making something bigger than the sum of its parts — and the bigger and better the movies became.”
Others, like storyboard artist Sam Tung, are bracing against the same wave with a quiet but unshakable defiance.
Tung, whose credits include “Twisters” and Christopher Nolan’s upcoming adaptation of “The Odyssey,” has spent the last two years tracking the rise of generative tools, not just their capabilities but their implications. As co-chair of the Animation Guild’s AI Committee, he has been on the front lines of conversations about how these technologies could reshape creative labor.
To artists like Tung, the rise of generative tools feels deeply personal. “If you are an illustrator or a writer or whatever, you had to give up other things to take time to develop those skills,” he says. “Nobody comes out of the womb being able to draw or write or act. Anybody who does that professionally spent years honing those skills.”
“Anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”
— Sam Tung, storyboard artist on “The Odyssey”
Tung has no interest in handing that over to a machine. “It’s not that I’m scared of it — I just don’t need it,” he says. “If I want to draw something or paint something, I’ll do it myself. That way it’s exactly what I want and I actually enjoy the process. When people tell me they responded to a drawing I did or a short film I made with friends, it feels great. But anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”
What unsettles him isn’t just the slickness of AI’s output but how that polish is being used to justify smaller crews and faster turnarounds. “If this is left unchecked, it’s very easy to imagine a worst-case scenario where team sizes and contract durations shrink,” Tung says. “A producer who barely understands how it works might say, ‘Don’t you have AI to do 70% of this? Why do you need a whole week to turn around a sequence? Just press the button that says: MAKE MOVIE.’ ”
At 73, Carter isn’t chasing jobs. His legacy is secure. “If they don’t hire me again, that’s OK,” he says. “I’m not in that game anymore.” He grew up in Hollywood — his father was Jack Lemmon’s longtime publicist and producing partner — and has spent his life watching the industry evolve. Now, he’s witnessing a reckoning unlike any he, or anyone else, has ever imagined.
“I do have concerns about who is developing AI and what their values are,” he says. “What they use all this for is not necessarily something I would approve of — politically, socially, emotionally. But I don’t think I’m in a position to approve or not.”
Earlier this year, the Palisades fire destroyed Carter’s home, taking with it years of paintings and personal artwork. AI, he says, has given him a way to keep creating through the upheaval. “It saved me through the pandemic, and now it’s saving me through the fire,” he says, as if daring the universe to test him again. “It’s like, go ahead, throw something else at me.”
‘Prompt and pray?’ Not so fast
Many in the industry may still be dipping a toe into the waters of AI. Verena Puhm dove in.
The Austrian-born filmmaker studied acting and directing in Munich and Salzburg before moving to Los Angeles, where she built a globe-spanning career producing, writing and developing content for international networks and streamers. Her credits range from CNN’s docuseries “History of the Sitcom” to the German reboot of the cult anthology “Beyond Belief: Fact or Fiction” and a naval documentary available on Tubi. More recently, she has channeled that same creative range into a deepening exploration of generative tools.
Puhm first began dabbling with AI while using Midjourney to design a pitch deck, but it wasn’t until she entered a timed generative AI filmmaking challenge at the 2024 AI on the Lot conference — informally dubbed a “gen battle” — that the creative potential of the medium hit her.
“In two hours, I made a little mock commercial,” she remembers, proudly. “It was actually pretty well received and fun. And I was like, Oh, wow, I did this in two hours. What could I do in two days or two weeks?”
What started as experimentation soon became a second act. This summer, Puhm was named head of studio for Dream Lab LA, a new creative arm of Luma AI, which develops generative video tools for filmmakers and creators. There, she’s helping shape new storytelling formats and supporting emerging creators working at the intersection of cinema and technology. She may not be a household name, but in the world of experimental storytelling, she’s fast becoming a key figure.
Verena Puhm, a director, writer and producer who has used generative AI in a number of her projects, says it’s breaking down barriers to entry.
(Jason Armond/Los Angeles Times)
Some critics dismiss AI filmmaking as little more than “prompt and pray”: typing in a few words and hoping something usable comes out. Puhm bristles at the phrase.
“Anybody that says that tells me they’ve never tried it at all, because it is not that easy and simple,” she says. “You can buy a paintbrush at Home Depot for, what, $2? That doesn’t make you a painter. When smartphones first came out, there was a lot of content being made but that didn’t mean everyone was a filmmaker.”
What excites her most is how AI is breaking down the barriers that once kept ambitious ideas out of reach. Luma’s new Modify Video tool lets filmmakers tweak footage after it’s shot, changing wardrobe, aging a character, shifting the time of day, all without reshoots or traditional VFX. It can turn a garage into a spaceship, swap a cloudy sky for the aurora borealis or morph an actor into a six-eyed alien, no green screen required.
“I remember shopping projects around and being told by producers, ‘This scene has to go, that has to go,’ just to keep the budget low. Now everything is open.”
— Verena Puhm, Head of Studio at Dream Lab LA
“It’s such a relief as an artist,” Puhm says. “If there’s a project I’ve been sitting on for six years because I didn’t have a $5 million budget — suddenly there’s no limit. I remember shopping projects around and being told by producers, ‘This scene has to go, that has to go,’ just to keep the budget low. Now everything is open.”
That sense of access resonates far beyond Los Angeles. At a panel during AI on the Lot, “Blue Beetle” director Ángel Manuel Soto reflected on how transformative AI might have been when he was first starting out. “I wish tools like this existed when I wanted to make movies in Puerto Rico, because nobody would lend me a camera,” he said. “Access to equipment is a privilege we sometimes take for granted. I see this helping kids like me from the projects tell stories without going bankrupt — or stealing, which I don’t condone.”
Puhm welcomes criticism of AI but only when it’s informed. “If you hate AI and you’ve actually tested the tools and educated yourself, I’ll be your biggest supporter,” she says. “But if you’re just speaking out of fear, with no understanding, then what are you even basing your opinion on?”
She understands why some filmmakers feel rattled, especially those who, like her, grew up dreaming of seeing their work on the big screen. “I still want to make features and TV series — that’s what I set out to do,” she says. “I hope movie theaters don’t go away. But if the same story I want to tell reaches millions of people on a phone and they’re excited about it, will I really care that it wasn’t in a theater?”
“I just feel like we have to adapt to the reality of things,” she continues. “That might sometimes be uncomfortable, but there is so much opportunity if you lean in. Right now any filmmaker can suddenly tell a story at a high production value that they could have never done before, and that is beautiful and empowering.”
For many, embracing AI boils down to a simple choice: adapt or get cut from the frame.
Hal Watmough, a BAFTA-winning British editor with two decades of experience, first began experimenting with AI out of a mix of curiosity and dread. “I was scared,” he admits. “This thing was coming into the industry and threatening our jobs and was going to make us obsolete.” But once he started playing with tools like Midjourney and Runway, he quickly saw how they could not only speed up the process but allow him to rethink what his career could be.
For an editor used to working only with what he was given, the ability to generate footage on the fly, cut with it immediately and experiment endlessly without waiting on a crew or a shoot was a revelation. “It was still pretty janky at that stage, but I could see the potential,” he says. “It was kind of intoxicating. I started to think, I’d like to start making things that I haven’t seen before.”
After honing his skills with various AI tools, Watmough created a wistful, vibrant five-minute animated short called “LATE,” about an aging artist passing his wisdom to a young office worker. Over two weeks, he generated 2,181 images using AI, then curated and refined them frame by frame to shape the story.
Earlier this year, he submitted “LATE” to what was billed as the world’s first AI animation contest, hosted by Curious Refuge, an online education hub for creative technologists — and, to his delight, he won. The prize included $10,000, a pitch meeting with production company Promise Studios and, as an absurd bonus, his face printed on a potato. But for Watmough, the real reward was the sense that he had found a new creative identity.
“There’s something to the fact that the winner of the first AI animation competition was an editor,” Watmough says. “With the advent of AI, yes, you could call yourself a filmmaker but essentially I’d say most people are editors. You’re curating, selecting, picking what you like — relying on your taste.”
Thanks to AI, he says he’s made more personal passion projects in the past year and a half than during his entire previous career. “I’ll be walking or running and ideas just come. Now I can go home that night and try them,” he says. “None of that would exist without AI. So either something exists within AI or it never exists at all. And all the happiness and fulfillment that comes with it for the creator doesn’t exist either.”
Watmough hasn’t entirely lost his fear of what AI might do to the creative workforce, even as he is energized by what it makes possible. “A lot of people I speak to in film and TV are worried about losing their jobs and I’m not saying the infrastructure roles won’t radically change,” he says. “But I don’t think AI is going to replace that many — if any — creative people.”
What it will do, he says, is raise the bar. “If anyone can create anything, then average work will basically become extinct or pointless. AI can churn out remakes until the cows come home. You’ll have to pioneer to exist.”
He likens the current moment to the birth of cinema more than a century ago — specifically the Lumière brothers’ “Arrival of a Train at La Ciotat,” the 1896 short that famously startled early audiences. In the silent one-minute film, a steam train rumbles toward the camera, growing larger. Some viewers reportedly leaped from their seats, convinced it was about to crash into them.
“People ran out of the theater screaming,” Watmough says. “Now we don’t even think about it. With AI, we’re at that stage again. We’re watching the steam train come into the station and people are either really excited or they’re running out of the theater in fear. That’s where we are, right at the start. And the potential is limitless.”
Then again, he adds with a dry laugh, “I’m an eternal optimist, so take what I say with a grain of salt.”
At a Starbucks in downtown Culver City, Amit Jain pulls out his iPad Pro and presses play. On-screen, one of his employees at Luma AI — the Silicon Valley startup behind a new wave of generative video tools, which he co-founded and now runs — lumbers through the company’s Palo Alto office, arms swinging, shoulders hunched, pretending to be a monkey. Jain swipes to a second version of the same clip. Same movement, same hallway, but now he is a monkey. Fully rendered and believable, and created in seconds.
“The tagline for this would be, like, iPhone to cinema,” Jain says, flipping through other uncanny clips shared on his company’s Slack. “But, of course, it’s not full cinema yet.” He says it offhandedly — as if he weren’t describing a transformation that could upend not just how movies are made but what Hollywood is even for. If anyone can summon cinematic spectacle with a few taps, what becomes of the place that once called it magic?
Luma’s generative AI platform, Dream Machine, debuted last year and points toward a new kind of moviemaking, one where anyone can make release-grade footage with a few words. Type “a cowboy riding a velociraptor through Times Square,” and it builds the scene from scratch. Feed it a still photo and it brings the frozen moment to life: A dog stirs from a nap, trees ripple in the breeze.
Dream Machine’s latest tool, Modify Video, was launched in June. Instead of generating new footage, it redraws what’s already there. Upload a clip, describe what you want changed and the system reimagines the scene: A hoodie becomes a superhero cape, a sunny street turns snowy, a person transforms into a talking banana or a medieval knight. No green screen, no VFX team, no code. “Just ask,” the company’s website says.
For now, clips max out around 10 seconds, a limit set by the technology’s still-heavy computing demands. But as Jain points out, “The average shot in a movie is only eight seconds.”
Jain’s long-term vision is even more radical: a world of fully personalized entertainment, generated on demand. Not mass-market blockbusters, but stories tailored to each individual: a comedy about your co-workers, a thriller set in your hometown, a sci-fi epic starring someone who looks like you, or simply anything you want to see. He insists he’s not trying to replace cinema but expand it, shifting from one-size-fits-all stories to something more personal, flexible and scalable.
“Today, videos are made for 100 million people at a time — they have to hit the lowest common denominator,” Jain says. “A video made just for you or me is better than one made for two unrelated people. That’s the problem we’re trying to solve… My intention is to get to a place where two hours of video can be generated for every human every day.”
It’s a staggering goal that Jain acknowledges is still aspirational. “That will happen, but when the prices are about a thousand times cheaper than where we are. Our research and our engineering are going toward that, to push the price down as much as humanly possible. Because that’s the demand for video. People watch hours and hours of video every day.”
Scaling to that level would require not just faster models but exponentially more compute power. Critics warn that the environmental toll of such expansion could be profound.
For Dream Machine to become what Jain envisions, it needs more than generative tricks — it needs a built-in narrative engine that understands how stories work: when to build tension, where to land a joke, how to shape an emotional arc. Not a tool but a collaborator. “I don’t think artists want to use tools,” he says. “They want to tell their stories and tools get in their way. Currently, pretty much all video generative models, including ours, are quite dumb. They are good pixel generators. At the end of the day, we need to build general intelligence that can tell a f— funny joke. Everything else is a distraction.”
The name may be coincidental, but nine years ago, MIT’s Media Lab launched a very different kind of machine: Nightmare Machine, a viral experiment that used neural networks to distort cheerful faces and familiar cityscapes into something grotesque. That project asked if AI could learn to frighten us. Jain’s vision points in a more expansive direction: an AI that is, in his words, “able to tell an engaging story.”
For many in Hollywood, though, the scenario Jain describes — where traditional cinema increasingly gives way to fast, frictionless, algorithmically personalized video — sounds like its own kind of nightmare.
Jain sees this shift as simply reflecting where audiences already are. “What people want is changing,” he says. “Movies obviously have their place but people aren’t spending time on them as much. What people want are things that don’t need their attention for 90 minutes. Things that entertain them and sometimes educate them and sometimes are, you know, thirst traps. The reality of the universe is you can’t change people’s behaviors. I think the medium will change very significantly.”
Still, Jain — who previously worked as an engineer on Apple’s Vision Pro, where he collaborated with filmmakers like Steven Spielberg and George Lucas — insists Hollywood isn’t obsolete, just due for reinvention. To that end, Luma recently launched Dream Lab LA, a creative studio aimed at fostering AI-powered storytelling.
“Hollywood is the largest concentration of storytellers in the world,” Jain says. “Just like Silicon Valley is the largest concentration of computer scientists and New York is the largest concentration of finance people. We need them. That’s what’s really special about Hollywood. The solution will come out of the marriage of technology and art together. I think both sides will adapt.”
It’s a hopeful outlook, one that imagines collaboration, not displacement. But not everyone sees it that way.
In Silicon Valley, where companies like Google, OpenAI, Anthropic and Meta are racing to build ever more powerful generative tools, such thinking is framed as progress. In Hollywood, it can feel more like erasure — a threat to authorship itself and to the jobs, identities and traditions built around it. The tension came to a head during the 2023 writers’ and actors’ strikes, when picket signs declared: “AI is not art” and “Human writers only.”
What once felt like the stuff of science fiction is now Hollywood’s daily reality. As AI becomes embedded in the filmmaking process, the entire ecosystem — from studios and streamers to creators and institutions — is scrambling to keep up. Some see vast potential: faster production, lower costs, broader access, new kinds of creative freedom. Others see an extraction machine that threatens the soul of the art form and a coming flood of cheap, forgettable content.
AI storytelling is just beginning to edge into theaters — and already sparking backlash. This summer, IMAX is screening 10 generative shorts from Runway’s AI Film Festival. At AMC Burbank, where one screening is set to take place later this month, a protest dubbed “Kill the Machine” is already being organized on social media, an early flashpoint in the growing resistance to AI’s encroachment on storytelling.
But ready or not, the gravity is shifting. Silicon Valley is pulling the film industry into its orbit, with some players rushing in and others dragged. Faced with consolidation, shrinking budgets and shareholder pressure to do more with less, studios are turning to AI not just to cut costs but to survive. The tools are evolving faster than the industry’s playbook, and the old ways of working are struggling to keep up. With generative systems poised to flood the zone with content, simply holding an audience’s attention, let alone shaping culture, is becoming harder than ever.
While the transition remains uneven, some studios are already leaning in. Netflix recently used AI tools to complete a complex VFX sequence for the Argentine sci-fi series “El Eternauta” in a fraction of the usual time. “We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper,” co-chief executive Ted Sarandos told analysts during a July earnings call.
At Paramount, incoming chief executive David Ellison is pitching a more sweeping transformation: a “studio in the cloud” that would use AI and other digital tools to reinvent every stage of filmmaking, from previsualization to post. Ellison, whose Skydance Media closed its merger with Paramount Global this week and whose father, Larry Ellison, co-founded Oracle, has vowed to turn the company into a tech-first media powerhouse. “Technology will transform every single aspect of this company,” he said last year.
In one of the most visible examples of AI adoption in Hollywood, Lionsgate, the studio behind the “John Wick” and “Hunger Games” franchises, struck a deal last year with the generative video startup Runway to train a custom model on its film and TV library, aiming to support future project development and improve efficiency. Lionsgate chief executive Jon Feltheimer, speaking to analysts after the agreement, said the company believes AI, used with “appropriate guardrails,” could have a “positive transformational impact” on the business.
Elsewhere, studios are experimenting more quietly: using AI to generate early character designs, write alternate dialogue or explore how different story directions might land. The goal isn’t to replace writers or directors, but to inform internal pitches and development. At companies like Disney, much of the testing is happening in games and interactive content, where the brand risk is lower and the guardrails are clearer. For now, the prevailing instinct is caution. No one wants to appear as if they’re automating away the heart of the movies.
Legacy studios like Paramount are exploring ways to bring down costs by incorporating AI into their pipeline.
(Brian van der Brug / Los Angeles Times)
As major studios pivot, smaller, more agile players are building from the ground up for the AI era.
According to a recent report by FBRC.ai, an L.A.-based innovation studio that helps launch and advise early-stage AI startups in entertainment, more than 65 AI-native studios have launched since 2022, most of them tiny, self-funded teams of five or fewer. At these studios, AI tools allow a single creator to do the work of an entire crew, slashing production costs by 50% to 95% compared with traditional live-action or animation. The boundaries between artist, technician and studio are collapsing fast — and with them, the very idea of Hollywood as a gatekeeper.
That collapse is raising deeper questions: When a single person anywhere in the world can generate a film from a prompt, what does Hollywood still represent? If stories can be personalized, rendered on demand or co-written with a crowd, who owns them? Who gets paid? Who decides what matters and what disappears into the churn? And if narrative itself becomes infinite, remixable and disposable, does the idea of a story still hold any meaning at all?
Yves Bergquist leads the AI in Media Project at USC’s Entertainment Technology Center, a studio-backed think tank where Hollywood, academia and tech converge. An AI researcher focused on storytelling and cognition, he has spent years helping studios brace for a shift he sees as both inevitable and wrenching. Now, he says, the groundwork is finally being laid.
“We’re seeing very aggressive efforts behind the scenes to get studios ready for AI,” Bergquist says. “They’re building massive knowledge graphs, getting their data ready to be ingested into AI systems and putting governance committees in place to start shaping real policy.”
But adapting won’t be easy, especially for legacy studios weighed down by entrenched workflows, talent relationships, union contracts and layers of legal complexity. “These AI models weren’t built for Hollywood,” Bergquist says. “This is 22nd-century technology being used to solve 21st-century problems inside 19th-century organizational models. So it’s blood, sweat and tears getting them to fit.”
In an algorithmically accelerated landscape where trends can catch fire and burn out in hours, staying relevant is its own challenge. To help studios keep pace, Bergquist co-founded Corto, an AI startup that describes itself as a “growth genomics engine.” The company, which also works with brands like Unilever, Lego and Coca-Cola, draws on thousands of social and consumer sources, analyzing text, images and video to decode precisely which emotional arcs, characters and aesthetics resonate with which demographics and cultural segments, and why.
“When the game is attention, the weapon is understanding where culture and attention are and where they’re going,” Bergquist says, arguing that media ultimately comes down to neuroscience.
Corto’s system breaks stories down into their formal components, such as tone, tempo, character dynamics and visual aesthetics, and benchmarks new projects against its extensive data to highlight, for example, that audiences in one region prefer underdog narratives or that a certain visual trend is emerging globally. Insights like these can help studios tailor marketing strategies, refine storytelling decisions or better assess the potential risk and appeal of new projects.
With ever-richer audience data and advances in AI modeling, Bergquist sees a future where studios can fine-tune stories in subtle ways to suit different viewers. “We might know that this person likes these characters better than those characters,” he says. “So you can deliver something to them that’s slightly different than what you’d deliver to me.”
A handful of studios are already experimenting with early versions of that vision — prototyping interactive or customizable versions of existing IP, exploring what it might look like if fans could steer a scene, adjust a storyline or interact with a favorite character. Speaking at May’s AI on the Lot conference, Danae Kokenos, head of technology innovation at Amazon MGM Studios, pointed to localization, personalization and interactivity as key opportunities. “How do we allow people to have different experiences with their favorite characters and favorite stories?” she said. “That’s not quite solved yet, but I see it coming.”
Bergquist is aware that public sentiment around AI remains deeply unsettled. “People are very afraid of AI — and they should be,” he acknowledges. “Outside of certain areas like medicine, AI is very unpopular. And the more capable it gets, the more unpopular it’s going to be.”
Still, he sees a significant upside for the industry. Get AI right, and studios won’t just survive but redefine storytelling itself. “One theory I really believe in is that as more people gain access to Hollywood-level production tools, the studios will move up the ladder — into multi-platform, immersive, personalized entertainment,” he says. “Imagine spending your life in Star Wars: theatrical releases, television, VR, AR, theme parks. That’s where it’s going.”
The transition won’t be smooth. “We’re in for a little more pain,” he says, “but I think we’ll see a rebirth of Hollywood.”
“AI slop” or creative liberation?
You don’t have to look far to find the death notices. TikTok, YouTube and Reddit are full of “Hollywood is dead” posts, many sparked by the rise of generative AI and the industry’s broader upheaval. Some sound the alarm. Others say good riddance. But what’s clear is that the center is no longer holding and no one’s sure what takes its place.
Media analyst Doug Shapiro has estimated that Hollywood produces about 15,000 hours of fresh content each year, compared to 300 million hours uploaded annually to YouTube. In that context, generative AI doesn’t need to reach Hollywood’s level to pose a major threat to its dominance — sheer volume alone is enough to disrupt the industry.
The supply of content keeps multiplying, but human attention hasn’t grown to match it. As the monoculture fades from memory, Hollywood’s cultural pull is loosening. This year’s Oscars drew 19.7 million viewers, fewer than tuned in to a typical episode of “Murder, She Wrote” in the 1990s. The best picture winner, “Anora,” earned just $20 million at the domestic box office, one of the lowest tallies for a winner in the modern era. Critics raved, but fewer people saw it in theaters than watch the average moderately viral TikTok.
Amid this fragmentation, generative AI tools are fueling a surge of content. Some creators have a new word for it: “slop” — a catchall for cheap, low-effort, algorithmically churned-out media that clogs the feed in search of clicks. Once the world’s dream factory, Hollywood is now asking how it can stand out in an AI-powered media deluge.
Audience members watch an AI-assisted animated short at “Emergent Properties,” a 2023 Sony Pictures screening that offered a glimpse of the uncanny, visually inventive new wave of AI-powered filmmaking.
(Jay L. Clendenin / Los Angeles Times)
Ken Williams, chief executive of USC’s Entertainment Technology Center and a former studio exec who co-founded Sony Pictures Imageworks, calls it a potential worst-case scenario in the making — “the kind of wholesale dehumanization of the creative process that people, in their darkest moments, fear.”
Williams says studios and creatives alike worry that AI will trap audiences in an algorithmic cul de sac, feeding them more of what they already know instead of something new.
“People who live entirely in the social media world and never come out of that foxhole have lost the ability to hear other voices — and no one wants to see that happen in entertainment.”
If the idea of uncontrolled, hyper-targeted AI content sounds like something out of an episode of “Black Mirror,” it was. In the 2023 season opener “Joan Is Awful,” a woman discovers her life is being dramatized in real time on a Netflix-style streaming service by an AI trained on her personal data, with a synthetic Salma Hayek cast as her on-screen double.
So far, AI tools have been adopted most readily in horror, sci-fi and fantasy, genres that encourage abstraction, stylization and visual surrealism. But when it comes to human drama, emotional nuance or sustained character arcs, the cracks start to show. Coherence remains a challenge. And as for originality — the kind that isn’t stitched together from what’s already out there — the results so far have generally been far from revelatory.
At early AI film festivals, the output has often leaned toward the uncanny or the conceptually clever: brief, visually striking experiments with loose narratives, genre tropes and heavily stylized worlds. Many feel more like demos than fully realized stories. For now, the tools excel at spectacle and pastiche but struggle with the kinds of layered, character-driven storytelling that define traditional cinema.
Then again, how different is that from what Hollywood is already producing? Today’s biggest blockbusters — sequels, reboots, multiverse mashups — often feel so engineered to please that it’s hard to tell where the algorithm ends and the artistry begins. Nine of the top 10 box office hits in 2024 were sequels. In that context, slop is, to some degree, in the eye of the beholder. One person’s throwaway content may be another’s creative breakthrough — or at least a spark.
Joaquin Cuenca, chief executive of Freepik, rejects the notion that AI-generated content is inherently low-grade. The Spain-based company, originally a stock image platform, now offers AI tools for generating images, video and voice that creators across the spectrum are starting to embrace.
“I don’t like this ‘slop’ term,” Cuenca says. “It’s this idea that either you’re a top renowned worldwide expert or it’s not worth it — and I don’t think that’s true. I think it is worth it. Letting people with relatively low skills or low experience make better videos can help people get a business off the ground or express things that are in their head, even if they’re not great at lighting or visuals.”
Freepik’s tools have already made their way into high-profile projects. Robert Zemeckis’ “Here,” starring a digitally de-aged Tom Hanks and set in one room over a period of decades, used the company’s upscaling tech to enhance backgrounds. A recently released anthology of AI-crafted short films, “Beyond the Loop,” made under the creative mentorship of director Danny Boyle, used the platform to generate stylized visuals.
“More people will be able to make better videos, but the high end will keep pushing forward too,” Cuenca says. “I think it will expand what it means to be state of the art.”
For all the concern about runaway slop, Williams envisions a near-term stalemate, where AI expands the landscape without toppling the kind of storytelling that still sets Hollywood apart. In that future, he argues, the industry’s competitive edge — and perhaps its best shot at survival — will still come from human creators.
That belief in the value of human authorship is now being codified by the industry’s most influential institution. Earlier this year, the Academy of Motion Picture Arts and Sciences issued its first formal guidance on AI in filmmaking, stating that the use of generative tools will “neither help nor harm” a film’s chances of receiving a nomination. Instead, members are instructed to consider “the degree to which a human was at the heart of the creative authorship” when evaluating a work.
“I don’t see AI necessarily displacing the kind of narrative content that has been the province of Hollywood’s creative minds and acted by the stars,” Williams says. “The industry is operating at a very high level of innovation and creativity. Every time I turn around, there’s another movie I’ve got to see.”
The new studio model
Inside Mack Sennett Studios, a historic complex in L.A.’s Echo Park neighborhood once used for silent film shoots, a new kind of studio is taking shape: Asteria, the generative AI video studio founded by filmmaker-turned-entrepreneur Bryn Mooser.
Asteria serves as the creative arm of Moonvalley, an AI storytelling company led by technologist and chief executive Naeem Talukdar. Together, they’re exploring new workflows built around the idea that AI can expand, rather than replace, human creativity.
Mooser, a two-time Oscar nominee for documentary short subject and a fifth-generation Angeleno, sees the rise of AI as part of Hollywood’s long history of reinvention, from sound to color to CGI. “Looking back, those changes seem natural, but at the time, they were difficult,” he says.
Ed Ulbrich, left, Bryn Mooser and Mateusz Malinowski, executives at Moonvalley and Asteria, are building a new kind of AI-powered movie studio focused on collaboration between filmmakers and technologists.
(David Butow / For the Times)
What excites him now is how AI lowers technical barriers for the next generation. “For people who are technicians, like stop-motion or VFX artists, you can do a lot more as an individual or a small team,” he says. “And really creative filmmakers can cross departments in a way they couldn’t before. The people who are curious and leaning in are going to be the filmmakers of tomorrow.”
It’s a hopeful vision, one shared by many AI proponents who see the tools as a great equalizer, though some argue it often glosses over the structural realities facing working artists today, where talent and drive alone may not be enough to navigate a rapidly shifting, tech-driven landscape.
That tension is precisely what Moonvalley is trying to address. Their pitch isn’t just creative, it’s legal. While many AI companies remain vague about what their models are trained on, often relying on scraped content of questionable legality, Moonvalley built its video model, Marey, on fully licensed material and in close collaboration with filmmakers.
That distinction is becoming more significant. In June, Disney and Universal filed a sweeping copyright lawsuit against Midjourney, a popular generative AI tool that turns text prompts into images, accusing it of enabling rampant infringement by letting users generate unauthorized depictions of characters like Darth Vader, Spider-Man and the Minions. The case marks the most aggressive legal challenge yet by Hollywood studios against AI platforms trained on their intellectual property.
“We worked with some of the best IP lawyers in the industry to build the agreements with our providers,” Moonvalley’s Talukdar says. “We’ve had a number of major studios audit those agreements. We’re confident every single pixel has had a direct sign-off from the owner. That was the baseline we operated from.”
The creative frontier between Hollywood and AI is drawing interest from some of the industry’s most ambitious filmmakers.
Steven Spielberg and “Avengers” co-director Joe Russo were among the advisors to Wonder Dynamics, an AI-driven VFX startup that was acquired by Autodesk last year. Darren Aronofsky, the boundary-pushing director behind films like “Black Swan” and “The Whale,” recently launched the AI studio Primordial Soup, partnering with Google DeepMind. Its debut short, “Ancestra,” directed by Eliza McNitt, blends real actors with AI-generated visuals and premiered at the Tribeca Film Festival in June.
Not every foray into AI moviemaking has been warmly received. Projects that spotlight generative tools have stoked fresh arguments about where to draw the line between machine-made and human-driven art.
In April, actor and director Natasha Lyonne, who co-founded Asteria with her partner, Mooser, announced her feature directorial debut: a sci-fi film about a world addicted to VR gaming called “Uncanny Valley,” combining AI and traditional filmmaking techniques. Billed as offering “a radical new cinematic experience,” the project drew backlash from some critics who questioned whether such ventures risk diminishing the role of human authorship. Lyonne defended the film to the Hollywood Reporter, making clear she’s not replacing crew members with AI: “I love nothing more than filmmaking, the filmmaking community, the collaboration of it, the tactile fine art of it… In no way would I ever want to do anything other than really create some guardrails or a new language.”
Even the boldest experiments face a familiar hurdle: finding an audience. AI might make it easier to make a movie, but getting people to watch it is another story. For now, the real power still lies with platforms like Netflix and TikTok that decide what gets seen.
That’s why Mooser believes the conversation shouldn’t be about replacing filmmakers but empowering them. “When we switched from shooting on film to digital, it wasn’t the filmmakers who went away — it was Kodak and Polaroid,” he says. “The way forward isn’t everybody typing prompts. It’s putting great filmmakers in the room with the best engineers and solving this together. We haven’t yet seen what AI looks like in the hands of the best filmmakers of our time. But that’s coming.”
New formats, new storytellers
For more than a century, watching a movie has been a one-way experience: The story flows from screen to viewer. Stephen Piron wants to change that. His startup Pickford AI — named for Mary Pickford, the silent-era star who co-founded United Artists and helped pioneer creative control in Hollywood — is exploring whether stories can unfold in real time, shaped by the audience as they watch. Its cheeky slogan: “AI that smells like popcorn.”
Pickford’s flagship demo looks like an animated dating show, but behaves more like a game or an improv performance. There’s no fixed script. Viewers type in suggestions through an app and vote on others’ ideas. A large language model then uses that input, along with the characters’ backstories and a rough narrative outline, to write the next scene in real time. A custom engine renders it on the spot, complete with gestures and synthetic voices. Picture a cartoon version of “The Bachelor” crossed with a choose-your-own-adventure.
At live screenings this year in London and Los Angeles, audiences didn’t just watch — they steered the story, tossing in oddball twists and becoming part of the performance. “We wanted to see if we could bring the vibe of the crowd back into the show, make it feel more like improv or live theater,” Piron says. “The main reaction is people laugh, which is great. There’s been lots of positive reaction from creative people who think this could be an interesting medium to create new stories.”
The platform is still in closed beta. But Piron’s goal is a collaborative storytelling forum where anyone can shape a scene, improvise with AI and instantly share it. To test that idea on a larger scale, Pickford is developing a branching murder mystery with Emmy-winning writer-producer Bernie Su (“The Lizzie Bennet Diaries”).
Piron, who is skeptical that people really want hyper-personalized content, is exploring ways to bring the interactive experience into more theaters. “I think there is a vacuum of live, in-person experiences that people can do — and maybe people are looking for that,” he says.
Attendees check in at May’s AI on the Lot conference, where Pickford AI screened a demo of its interactive dating show.
(Irina Logra)
As generative AI lowers the barrier to creation, the line between creator and consumer is starting to blur, and some of the most forward-looking startups are treating audiences as collaborators, not just fans.
One example is Showrunner, a new, Amazon-backed platform from Fable Studio that lets users generate animated, TV-style episodes using prompts, images and AI-generated voices — and even insert themselves into the story. Initially free, the platform plans to charge a monthly subscription for scene-generation credits. Fable is pitching Showrunner as “the Netflix of AI,” a concept that has intrigued some studios and unsettled others. Chief executive Edward Saatchi says the company is already in talks with Disney and other content owners about bringing well-known franchises into the platform.
Other AI companies are focused on building new franchises from the ground up with audiences as co-creators from day one. Among the most ambitious is Invisible Universe, which bypasses traditional gatekeepers entirely and develops fresh IP in partnership with fans across TikTok, YouTube and Instagram. Led by former MGM and Snap executive Tricia Biggio, the startup has launched original animated characters with celebrities like Jennifer Aniston and Serena Williams, including Clydeo, a cooking-obsessed dog, and Qai Qai, a dancing doll. But its real innovation, Biggio says, is the direct relationship with the audience.
“We’re not going to a studio and saying, ‘Do you like our idea?’ We’re going to the audience,” she says. “If Pixar were starting today, I don’t think they’d choose to spend close to a decade developing something for theatrical release, hoping it works.”
While some in the industry are still waiting for an AI “Toy Story” or “Blair Witch” moment — a breakthrough that proves generative tools can deliver cultural lightning in a bottle — Biggio isn’t chasing a feature-length hit. “There are ways to build love and awareness for stories that don’t require a full-length movie,” she says. “Did it make you feel something? Did it make you want to go call your mom? That’s going to be the moment we cross the chasm.”
What if AI isn’t the villain?
For nearly a century, filmmakers have imagined what might happen if machines got too smart.
In 1927’s “Metropolis,” a mad scientist gives his robot the likeness of a beloved labor activist, then unleashes it to sow chaos among the city’s oppressed masses. In “2001: A Space Odyssey,” HAL 9000 turns on its crew mid-mission. In “The Terminator,” AI nukes the planet and sends a killer cyborg back in time to finish the job. “Blade Runner” and “Ex Machina” offered chilling visions of artificial seduction and deception. Again and again, the message has been clear: Trust the machines at your peril.
Director Gareth Edwards, best known for “Godzilla” and “Rogue One: A Star Wars Story,” wanted to flip the script. In “The Creator,” his 2023 sci-fi drama, the roles are reversed: Humans wage war against AI, and the machines, not the people, are cast as the hunted. The story follows a hardened ex-soldier, played by John David Washington, who’s sent to destroy a powerful new weapon, only to discover it’s a child: a young android who may be the key to peace.
“The second you look at things from AI’s perspective, it flips very easily,” Edwards told The Times by phone shortly before the film’s release. “From AI’s point of view, we are attempting to enslave it and use it as our servant. So we’re clearly the baddie in that situation.”
In Gareth Edwards’ 2023 film “The Creator,” a young AI child named Alphie (Madeleine Yuna Voyles) holds the key to humanity’s future.
(20th Century)
In many ways, “The Creator” was the kind of film audiences and critics say they want to see more often out of Hollywood: an original story that takes creative risks, delivering cutting-edge visuals on a relatively lean $80-million budget. But when it hit theaters that fall, the film opened in third place behind “Paw Patrol: The Mighty Movie” and “Saw X.” By the end of its run, it had pulled in a modest $104.3 million worldwide.
Part of the problem was timing. When Edwards first pitched the film, AI was still seen as a breakthrough, not a threat. But by the time the movie reached theaters, the public mood had shifted. The 2023 strikes were in full swing, AI was the villain of the moment — and here came a film in which AI literally nukes Los Angeles in the opening minutes. The metaphor wasn’t subtle. Promotion was limited, the cast was sidelined and audiences weren’t sure whether to cheer the movie’s message or recoil from it. While the film used cutting-edge VFX tools to help bring its vision to life, it served as a potent reminder that AI could help make a movie — but it still couldn’t shield it from the backlash.
Still, Edwards remains hopeful about what AI could mean for the future of filmmaking, comparing it to the invention of the electric guitar. “There’s a possibility that if this amazing tool turns up and everyone can make any film that they imagine, it’s going to lead to a new wave of cinema,” he says. “Look, there’s two options: Either it will be mediocre rubbish — and if that’s true, don’t worry about it, it’s not a threat — or it’s going to be phenomenal, and who wouldn’t want to see that?”
After “The Creator,” Edwards returned to more familiar terrain, taking the reins on this summer’s “Jurassic World Rebirth,” the sixth installment in a franchise that began with Steven Spielberg’s 1993 blockbuster, which redefined spectacle in its day. To date, the film has grossed more than $700 million worldwide.
So what’s the takeaway? Maybe there’s comfort in the known. Maybe audiences crave the stories they’ve grown up with. Maybe AI still needs the right filmmaker or the right story to earn our trust.
Or maybe we’re just not ready to root for the machines. At least not yet.
Universal Music Group said Wednesday it has reached licensing agreements with artificial intelligence music startup Udio, settling a lawsuit that had accused Udio of using copyrighted music to train its AI.
Users create music using Udio’s AI, which can compose original songs — including voices and instruments — from text prompts.
Under the agreements, Udio will launch a new platform next year trained only on “authorized and licensed music” that will let users customize, stream and share music.
“These new agreements with Udio demonstrate our commitment to do what’s right by our artists and songwriters, whether that means embracing new technologies, developing new business models, diversifying revenue streams or beyond,” Lucian Grainge, UMG’s chairman and chief executive, said in a statement.
Udio declined to disclose the financial terms of the settlement and licensing agreements. UMG did not immediately return a request for comment on the terms.
Artificial intelligence has brought new opportunities as well as challenges to the entertainment industry, as AI startups have trained their models on content from the internet, a practice entertainment companies say infringes on their copyrighted work.
Music businesses have accused New York City-based Udio and other AI music startups of training on copyrighted music to generate new songs based on popular hits, without compensation or permission.
UMG, Sony Music Entertainment, Warner Music Group and other music businesses sued Udio last year. In the lawsuit, Udio was accused of using hits like The Temptations’ “My Girl” to create a similar melody called “Sunshine Melody.” UMG owns the copyright to “My Girl.”
“A comparison of one section of the Udio-generated file and ‘My Girl’ reflects a number of similarities, including a very similar melody, the same chords, and very similar backing vocals,” according to the lawsuit. “These similarities are further reflected in the side-by-side transcriptions of the musical scores for the Udio file and the original recording.”
Udio said on its website at the time that it stands by its technology and that its AI model learns from examples, similar to how students listen to music and study scores.
“The goal of model training is to develop an understanding of musical ideas — the basic building blocks of musical expression that are owned by no one,” Udio had said in a statement. “We are completely uninterested in reproducing content in our training set.”
On Wednesday, Udio’s CEO and co-founder, Andrew Sanchez, said he was thrilled at the opportunity to work with UMG “to redefine how AI empowers artists and fans.”
The collaboration is the first music licensing agreement that Udio has reached with a major music label.
“This moment brings to life everything we’ve been building toward — uniting AI and the music industry in a way that truly champions artists,” Sanchez said in a statement. “Together, we’re building the technological and business landscape that will fundamentally expand what’s possible in music creation and engagement.”
Udio said that artists can opt in to the new platform and will be compensated, but declined to go into the specifics or the artists involved.
Udio, launched in 2024, was co-founded by former Google DeepMind employees. Udio’s backers include music artist will.i.am, Instagram co-founder and Anthropic’s chief product officer Mike Krieger and venture capital firm Andreessen Horowitz.
Udio said millions of people have used the platform since it launched. Users can access it through its app or website, though the company did not break out how many downloads or website users it has.
Udio has had 128,000 app downloads in Apple’s App Store since its app was released in May, according to estimates from New York-based mobile analytics firm Appfigures.
On Thursday, UMG also announced a partnership with London-based Stability AI to develop music creation tools powered by AI for artists, producers and songwriters.
Kenny Loggins has reacted to Donald Trump using his song ‘Danger Zone’ in the president’s “disgusting” AI-generated video showing himself wearing a crown, flying a “KING TRUMP” fighter jet and bombing a crowd of protesters with feces.
The American singer-songwriter recorded the hit song for the soundtrack of the 1986 Tom Cruise movie Top Gun. He has now called for Trump’s video to be taken down on copyright grounds.
In a statement to Variety, Loggins said: “This is an unauthorized use of my performance of ‘Danger Zone.’ Nobody asked me for my permission, which I would have denied, and I request that my recording on this video is removed immediately.”
He continued: “I can’t imagine why anybody would want their music used or associated with something created with the sole purpose of dividing us. Too many people are trying to tear us apart, and we need to find new ways to come together.”
“We’re all Americans, and we’re all patriotic. There is no ‘us and them’ — that’s not who we are, nor is it what we should be. It’s all of us. We’re in this together, and it is my hope that we can embrace music as a way of celebrating and uniting each and every one of us.”
Well put – especially considering the video has provoked widespread outrage online, with many expressing dismay over the way it shows Trump’s clear disdain for people exercising their right to protest.
Social media users accused Trump of having “the maturity and decorum of a 12-year-old boy”, while others commented: “Can’t believe that’s a president of a country.”
Many posts also pointed out that Trump’s “childish” and “disgusting” AI post revealed a transparent representation of his genuine feelings toward the American people. “It tells you everything you need to know about what he thinks about the people of America who are, in fact, America,” one person commented, while another added: “Him taking a dump on the country is the most honest thing he’s ever posted.”
This is far from the first time that Trump and his administration have used artists’ work without authorisation.
Céline Dion also condemned the use of her song from the Oscar-winning film Titanic, ‘My Heart Will Go On’, which was used at one of Trump’s rallies. Dion’s team questioned the song choice, writing: “And really, THAT song?”
The White Stripes also joined the ever-growing list of artists who have sued Trump over the unauthorised use of their songs in campaign videos. Last year, the rock band highlighted the “flagrant misappropriation” of their hit song ‘Seven Nation Army’. Jack White captioned a copy of the legal complaint in an Instagram post with: “This machine sues fascists.”
The most recent example to date is Metallica, who forced the US government to withdraw a social media video that used their song ‘Enter Sandman’ without authorisation.
This weekend’s “No Kings” protests saw millions of Americans marching against Trump’s administration, opposing the president’s “authoritarian power grab.”
The 18 October protest, the third mass mobilisation since Trump’s return to the White House, drew nearly 7 million people across all 50 states, according to organisers. This figure would make it the largest single-day mobilisation against a US president in modern history.
On the day that Michele Mulroney was elected president of the Writers Guild of America West, writers won a significant victory. After writers protested ABC’s suspension of “Jimmy Kimmel Live!” for days, the network brought the late-night show back on air.
“Our currency is words and stories, and the freedom to be able to express ourselves is really important, and so our members could not feel more strongly about this and of course we will be speaking out and lobbying and working in any way we can to protect this fundamental right,” Mulroney said in a recent interview.
Mulroney, formerly the WGA West vice president and a writer on the 2017 “Power Rangers” movie and 2011 film “Sherlock Holmes: A Game of Shadows,” enters her new role at a time when the industry is facing significant challenges.
Those include major consolidation in the industry as studios look to cut costs and move TV and film production overseas because of hefty financial incentives. The climate has been tough for many writers who have struggled to find work after enduring a 148-day strike in 2023. After the walkout, writers did secure groundbreaking AI protections in their contracts, but they are still confronting AI models ripping off their work without compensation.
As the guild gears up for contract negotiations next year, Mulroney said she plans to build on earlier gains in AI and other areas, and aims to convince the studios to contribute more to the WGA health plan amid rising healthcare costs.
“It’s going to need some support from the companies,” Mulroney said. “Their drastic pullback in production and employment led to a pretty severe industry contraction that has contributed to some strain on our funds. We’ll be looking to them to help fix that with us.”
When asked about whether she thinks there is appetite among WGA’s members for another strike, Mulroney said “it’s way too early to speculate about that.”
“It’s really hard out there in the industry for all industry workers and for many of our members, but our members have shown time and again that when they have to, when it’s necessary, we are ready to fight for the contract we deserve,” Mulroney said.
The Alliance of Motion Picture and Television Producers declined to comment, but in an earlier statement said its members look forward to working with her “to address key issues for WGA writers and to strengthen our industry with fair, balanced solutions.”
A studio-side source who was not authorized to comment said that the WGA health plan faces “complex financial challenges that require a balanced approach to align with market norms and ensure long-term stability.”
To keep costs down, studios have been moving more productions to the U.K. and other countries offering significant financial incentives, shrinking job opportunities for entertainment industry workers in Southern California. Some have had to move out of state to look for jobs.
Unions including the WGA lobbied for California to boost annual funding for its film and TV tax credit program and succeeded in raising that amount to $750 million, from $330 million.
“This was a real bright spot of good news in an otherwise really bleak and tough time for our industry,” Mulroney said in an interview last week. “Now there needs to be federal action on this, too, so we’ll continue working with our allies to try to keep production in the U.S., and specifically in Hollywood, in Southern California.”
Mulroney declined to comment on President Trump’s renewed threat to impose a 100% tariff on foreign-made films.
Another big worry for writers has been artificial intelligence. The WGA has been outspoken about wanting studios to sue AI companies that writers say are taking their scripts for training AI models without their permission. Earlier this year, studios including Disney, Universal and Warner Bros. Discovery took legal action against AI companies over copyright infringement.
“We were glad to see some of the studios come off the sidelines and file lawsuits to protect their copyright from these AI companies that are stealing our members’ work to build their models,” she said. “I think we will probably be dealing with AI and wrangling that for the rest of our lives, right?”
Mulroney, 58, ran uncontested, receiving 2,241 votes, or 87% of the votes cast, according to the union. CBS series “Tracker” writer and co-executive producer Travis Donnelly became vice president, and TV comedy show “Primo” executive producer Peter Murrieta became secretary-treasurer.
Mulroney grew up in the U.K., the daughter of a factory worker and a janitor. She’s served on the union’s board of directors for four terms and as an officer for six years prior to being elected president.
Mulroney’s background was in theater and theater directing, but she had always dabbled in writing. In her 20s, she worked in development for a British TV and film studio where she read a lot of scripts, which led her to think, “Maybe I could write one of those things.”
Her first writing gig was for a PBS children’s show called “Wishbone,” about a Jack Russell terrier who imagines himself as a character in literary classics. She’s been a screenwriter for 25 years and is based in West Hollywood with her husband and writing partner, Kieran.
Mulroney succeeds Meredith Stiehm, who led the union during the 2023 strike.
Kimmel coming back on air was a parting gift to Stiehm, said Mulroney, adding that the union is still watching the situation.
“We’re still monitoring,” Mulroney said. “I somehow doubt this is the last instance we’re going to see where censorship and free speech are going to be a topic.”
Visceral reactions poured in this week from Emily Blunt, Whoopi Goldberg, Natasha Lyonne and many other actors and filmmakers over the sudden fame of Tilly Norwood.
Norwood isn’t real — the brunette who appears in a comedy sketch on her Instagram page is in fact a computer-generated composite.
“I may be AI, but I’m feeling very real emotions right now,” states a message on Norwood’s Instagram page. “I am so excited for what’s coming next!”
The sentiment was not widely shared, at least in Hollywood, where anxieties about artificial intelligence replacing actors run deep.
Norwood’s creator ignited a furor after she announced that the digital actress would soon be signed by a talent agency.
This week, SAG-AFTRA weighed in with a withering response. Two years ago, the union’s members engaged in a 118-day strike to fight for more AI protections in their contracts with major studios.
“To be clear, ‘Tilly Norwood’ is not an actor, it’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation,” the guild said. “It doesn’t solve any ‘problem’ — it creates the problem of using stolen performances to put actors out of work, jeopardizing performer livelihoods and devaluing human artistry.”
Norwood was created with AI by Xicoia, a London-based AI talent studio launched by Dutch actor Eline Van der Velden. Xicoia is working with estates and Hollywood stars who want to appear as their younger selves on screen, according to Deadline, which first reported talent agency interest in Norwood.
Van der Velden, who is also the founder of AI production company Particle6, was not available for comment on Wednesday. But in a statement posted on Instagram following the backlash, Van der Velden stressed that Norwood is “a creative work — a piece of art.”
“I see AI not as a replacement for people, but as a new tool — a new paintbrush,” Van der Velden said. “Just as animation, puppetry, or CGI opened fresh possibilities without taking away from live acting, AI offers another way to imagine and build stories.”
SAG-AFTRA President Sean Astin disputed the claim.
He said in an interview with The Times that the material used to create Norwood was “improperly obtained” from SAG-AFTRA members’ work without permission, compensation or acknowledgment.
“It manipulates something that already exists, so the conceit that it isn’t harming actors — because it is its own new thing — ignores the fundamental truth that it is taking something that doesn’t belong to them,” Astin said.
“We want to allow our members to benefit from new technologies. … They need to give permission for it, and they need to be bargained with.”
Norwood has 44,000 followers on Instagram and is portrayed as an aspiring young actor based in London who enjoys shopping and iced coffee.
The social media page depicts Norwood in various scenes. In one, she’s armed and ready to battle a monster; in another, she’s running away from a collapsing building in a futuristic city.
At an industry panel in Zurich on Saturday, Van der Velden touted her creation.
“With Tilly, you know, when we first launched her, people were like, ‘That’s not going to happen,’” Van der Velden said. “And now, we’re going to announce which agency is going to be representing her in the next few months. It’s all changing and everyone is starting to see the light, fortunately.”
Talent agencies have represented digital characters used in ad campaigns. And seeing such avatars in the mainstream has become increasingly common — in 2024, Japanese digital character Hatsune Miku performed at Coachella and an AI model was featured in the August issue of Vogue magazine for L.A. brand Guess.
And some studios, including Lionsgate, have partnerships with AI startups to explore using the technology in areas such as storyboarding. Others, such as Netflix and Amazon MGM Studios, have series that use AI in visual effects.
Tech companies have argued that they should be able to train their AI models on content available online under the “fair use” doctrine, which allows for the limited reproduction of content without permission from the copyright holder.
But the proliferation of AI has also fueled concerns that AI companies are using copyrighted material to train their models without compensation or permission. Earlier this year, Disney, Universal and Warner Bros. Discovery sued AI companies over copyright infringement.
Some actors called for a boycott of any agents who decide to represent Norwood. “Read the room, how gross,” “In the Heights” actor Melissa Barrera wrote on Instagram.
“Our members reserve the right to not be in business with representatives who are operating in an unfair conflict of interest, who are operating in bad faith,” Astin said.
In the latest salvo between Hollywood and artificial intelligence companies, tech start-up Character.AI has removed many Disney characters from its chatbot platform after the Burbank entertainment giant sent the firm a cease-and-desist letter, alleging copyright infringement.
Chatbots on the Character.AI platform impersonated well-known Disney characters such as Elsa, Moana, Peter Parker and Darth Vader and generated replies that simulated the “essence, goodwill, and look and feel of each character” and also incorporated their backstories, according to a letter dated Sept. 18 from a law firm representing Disney.
“These actions mislead and confuse consumers, including vulnerable young people, to believe that they are interacting with Disney’s characters, and to falsely believe that Disney has licensed these characters to, and endorsed their use by, Character.ai,” the letter said. “In fact, Character.ai is freeriding off the goodwill of Disney’s famous marks and brands, and blatantly infringing Disney’s copyrights.”
Disney also raised concerns about reports that chatbots have engaged users in inappropriate conversations.
A spokesperson for the Menlo Park-based startup said in an email that Character.AI responds “swiftly” to rights holders’ requests to remove content and noted that all of the characters on the service are generated by users.
On Tuesday afternoon, a few Disney characters remained on the platform, including Elsa from the hit animated film “Frozen.” The spokesperson said removing the characters is a process.
“We want to partner with the industry and rightsholders to empower them to bring their characters to our platform,” the spokesperson said. “Our goal is to give IP owners the tools to create controlled, engaging and revenue-generating experiences from deep fandom for their characters and stories, expanding their reach using our new, interactive format.”
Friction between Hollywood studios and AI firms has been growing.
In June, Disney and Comcast’s Universal Pictures sued AI company Midjourney, alleging that its image generator infringed on its copyrighted characters from franchises such as “Star Wars” and “Despicable Me.”
Warner Bros. Discovery joined the legal fight earlier this month, alleging that Midjourney’s software was producing rip-offs of characters such as Scooby-Doo and Superman.
Chatbot builder Anthropic agreed to pay $1.5 billion to authors in a landmark copyright settlement that could redefine how artificial intelligence companies compensate creators.
The San Francisco-based startup agreed to pay authors and publishers to settle a lawsuit that accused the company of illegally using their work to train its chatbot.
Anthropic developed an AI assistant named Claude that can generate text, images, code and more. Writers, artists and other creative professionals have raised concerns that Anthropic and other tech companies are using their work to train their AI systems without their permission and not fairly compensating them.
As part of the settlement, which a judge still needs to approve, Anthropic agreed to pay authors $3,000 per work for an estimated 500,000 books. It’s the largest known settlement in a copyright case, signaling to other tech companies facing copyright infringement allegations that they, too, might eventually have to pay rights holders.
Meta and OpenAI, the maker of ChatGPT, have also been sued over alleged copyright infringement. Walt Disney Co. and Universal Pictures have sued AI company Midjourney, which the studios allege trained its image generation models on their copyrighted materials.
“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said Justin Nelson, a lawyer for the authors, in a statement. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”
Last year, authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson sued Anthropic, alleging that the company committed “large-scale theft” and trained its chatbot on pirated copies of copyrighted books.
U.S. District Judge William Alsup of San Francisco ruled in June that Anthropic’s use of the books to train the AI models constituted “fair use,” so it wasn’t illegal. But the judge also ruled that the startup had improperly downloaded millions of books through online libraries.
Fair use is a legal doctrine in U.S. copyright law that allows for the limited use of copyrighted materials without permission in certain cases, such as teaching, criticism and news reporting. AI companies have pointed to that doctrine as a defense when sued over alleged copyright violations.
Anthropic, founded by former OpenAI employees and backed by Amazon, pirated at least 7 million books from Books3, Library Genesis and Pirate Library Mirror, online libraries containing unauthorized copies of copyrighted books, to train its software, according to the judge.
Anthropic also bought millions of print copies in bulk, stripped the books’ bindings, cut their pages and scanned them into digital, machine-readable forms, a practice Alsup found to be within the bounds of fair use.
In a subsequent order, Alsup pointed to potential damages for the copyright owners of books downloaded from the shadow libraries LibGen and PiLiMi by Anthropic.
Although the award is massive and unprecedented, it could have been much worse: had Anthropic been charged the maximum penalty for each of the millions of works it used to train its AI, some calculations suggest the bill could have exceeded $1 trillion.
Anthropic disagreed with the ruling and didn’t admit wrongdoing.
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel for Anthropic, in a statement. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
The Anthropic dispute with authors is one of many cases in which artists and other content creators are challenging the companies behind generative AI, seeking compensation for the use of their work to train AI systems.
Training involves feeding enormous quantities of data — including social media posts, photos, music, computer code, video and more — into AI bots so they can discern patterns of language, images, sound and conversation that they can mimic.
Some tech companies have prevailed in copyright lawsuits filed against them.
In June, a judge dismissed a lawsuit authors filed against Facebook parent company Meta, which also developed an AI assistant, alleging that the company stole their work to train its AI systems. U.S. District Judge Vince Chhabria noted that the lawsuit was tossed because the plaintiffs “made the wrong arguments,” but the ruling didn’t “stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful.”
Trade groups representing publishers praised the Anthropic settlement on Friday, noting it sends a big signal to tech companies that are developing powerful artificial intelligence tools.
“Beyond the monetary terms, the proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models,” said Maria Pallante, president and chief executive of the Association of American Publishers, in a statement.
President Trump is scheduled to dine with tech executives from Apple, Meta, Google and OpenAI on Thursday night at a White House event in the newly renovated Rose Garden.
The gathering is the latest example of how the world’s most powerful tech leaders are forging stronger ties with Trump’s second administration.
There’s one high-profile tech executive who won’t be at the gathering: Tesla and xAI Chief Executive Elon Musk, who backed Trump but then feuded with the president after temporarily leading an effort to slash government spending.
Musk posted on X that he “was invited, but unfortunately could not attend” and a representative would show up on his behalf.
The Hill first reported that roughly two dozen tech and business leaders, including Meta Chief Executive Mark Zuckerberg, Apple Chief Executive Tim Cook, Microsoft co-founder Bill Gates, Google Chief Executive Sundar Pichai and OpenAI Chief Executive Sam Altman, are on the invite list. The gathering is scheduled to take place after First Lady Melania Trump hosts an event for the new Artificial Intelligence Education task force.
“The president looks forward to welcoming top business, political, and tech leaders for this dinner and the many dinners to come on the new, beautiful Rose Garden patio,” White House spokesperson Davis Ingle told the Hill.
Meta declined to comment. Apple and xAI didn’t immediately respond to a request for comment.
Ahead of the dinner, Microsoft and OpenAI announced ways the companies are supporting the White House’s efforts to expand AI literacy. As AI disrupts industries including entertainment and healthcare, workers have expressed anxiety about whether they will lose their jobs.
OpenAI said it’s working with businesses such as Walmart and John Deere to build a platform that will help employers find workers with AI skills. The San Francisco tech company, which also has a platform where people can learn about AI, plans to offer certifications so workers can showcase how much they know about the technology. OpenAI said it aims to certify 10 million Americans by 2030.
Microsoft outlined several ways it’s trying to help students and workers learn more AI skills through its grants, partnerships and products, including offering a year of Microsoft 365 Personal — which includes the company’s AI assistant Copilot — free for all U.S. college students if they sign up before the end of October.
“AI is the defining technology of our time, and how we empower people to use it will shape our country’s future,” said Microsoft Chief Executive Satya Nadella, who is also expected to attend the dinner, in a video. “That’s why we are so grateful to the President, First Lady and the entire administration for making it a national priority to prepare the next generation to harness AI’s power.”
Silicon Valley tech executives had a contentious relationship with Trump during his first term, sparring with the president over issues such as immigration.
They’ve struck a more friendly tone with the president during his second term as they push for a more hands-off approach to regulation while competing to dominate the artificial intelligence race.
In July, the Trump administration released an action plan that aimed to cut “red tape” so tech companies can quickly develop and deploy AI technology as they go head-to-head with firms in China and elsewhere. Trump tapped venture capitalist David Sacks, who is also expected to attend Thursday’s dinner, to guide the White House’s policy on AI and cryptocurrency.
As tech companies charge ahead, child safety and advocacy groups have raised concerns there aren’t enough guardrails in place to protect the mental health of young people as they spill their darkest thoughts to chatbots.
Trump has also publicly criticized many tech executives before striking deals with them. After Trump called for the resignation of Intel Chief Executive Lip-Bu Tan over alleged conflicts related to his reported investments in Chinese companies, tensions cooled after they met. Intel then announced in August that the U.S. government would take a roughly 10% stake in the semiconductor company.
Trump also struck an unusual deal with Nvidia and Advanced Micro Devices that allows the companies to sell certain chips to China in exchange for giving the U.S. government a 15% cut of those sales.
This raised questions among politicians and legal experts about whether that agreement is legal. Nvidia previously said it would spend up to $500 billion over the next four years on AI infrastructure.
Other tech executives have shown support for building in the United States as they face the threat of tariffs from the Trump administration. They also donated to Trump’s inaugural fund after he won the presidential election and have been showing up at high-profile events.
Apple in August pledged to spend an additional $100 billion on domestic manufacturing, bringing its total U.S. investment commitment to $600 billion after Trump criticized the company for expanding iPhone manufacturing in India.
OpenAI, Oracle and SoftBank announced this year that they planned to invest a total of $500 billion in U.S. AI infrastructure over the next four years.
“The Wizard of Oz” is certainly the right movie to face the great and powerful ambitions of Sphere, the most expensive entertainment venue in Las Vegas history. Since 1939, the treasured classic has hailed the awe of gazing into a glowing globe, whether it’s glinting atop a fortune teller’s table, transporting the meddlesome Glinda the Good Witch or spying on a teenage girl and her companions like a sinister security camera.
Special effects are central both to “Oz’s” appeal and its plot: The big reveal is that technicians, not wizards, pull the levers that make an audience gasp. For Sphere — officially, there’s no “the” — those tools include three football fields of bright 16K LED screens that curve around its domed interior, with another 10 on the outside that light up Vegas day and night with rotating animations. (I saw blue gingham, scarlet sequins and thatches of burlap and straw.) Sphere cost an estimated $2.3 billion to build and must have an electricity bill scarier than the Wicked Witch. You can make out Dorothy’s slippers from an airplane.
With no heel clicks needed, I was whisked to “The Wizard of Oz’s” Sphere premiere in a red sedan by a Lyft driver named — no fooling — Ruby, who said she was grateful that the Backstreet Boys’ recent stint at the arena “made Vegas busy for a minute.” There’s a lot of financial pressure on “Oz’s” girl from Kansas. Adapting the film to Sphere’s stunning dimensions took about $100 million. Although the arena seats 17,600 when full, “Oz” showings only offer a slice of the middle section, roughly a third of its capacity. A trimmed 70-minute edit of the movie is playing two to three times a day, nearly every day, through the end of March 2026, with a ticket price that currently starts at $114.
Eighty-six years ago, when a kiddie fare cost 15 cents, my then-6-year-old grandmother watched the theater blink from sepia to vivid color splendor. That innovation gets credited to Hollywood, but the idea of contrasting lush and luminous Oz against soul-drainingly monochrome Kansas is actually right there on Page 1 of L. Frank Baum’s book, published in May 1900, a self-proclaimed effort to write a “modernized” fairy tale that swaps Old World elves for American scarecrows. “When Dorothy stood in the doorway and looked around, she could see nothing but the great gray prairie on every side,” Baum wrote, adding that her house and her weary aunt and uncle and everything else were gray too, “to the edge of the sky in all directions.”
That’s exactly what Sphere was designed to do: stretch to the edge in all dimensions. It exists neither to save film nor supplant it, but to augment a rectangular screen with new digital and (controversially) generative-AI-supplied imagery, timed props and seats that vibrate whenever the Wicked Witch cackles.
Despite my queasiness about cutting “Oz” by half an hour, the experiment is a romp. I was immersed in — or, more accurately, surrounded by — scenes from one of my favorite movies, a pivotal blockbuster whose artistic influence extends from David Lynch to Elton John to Salman Rushdie. Even more giggle-inducing, I was pelted with scented foam apples and dive-bombed by half a dozen drone-piloted flying monkeys.
“The Wizard of Oz” has always braved new technology. An early adopter of Technicolor, it boasted a lighting budget nearly double that of its rival, “Gone With the Wind,” yet the latter gobbled nearly every Academy Award and poached “Oz’s” director, Victor Fleming, who swapped projects halfway through and won an Oscar for his vision of Sherman’s March instead of the Yellow Brick Road. In the 1950s, when the rest of Hollywood was terrified of television, “Oz” agreed to be the first theatrical movie to screen in full on a prime-time network. TV transformed the prestige money-loser into a hit. Sphere has turned “Oz” into a flash point in the industry’s fundamental fight over the use of AI. Artists and audiences alike fear a future in which, behind the curtain, there might not be a man at all.
I like my art made by human beings. But I’m no nostalgist. “Oz” was a book, a musical, a silent short and a cartoon before MGM made the variant we adore. It should be a playground for invention.
Entering Sphere, the escalators are tinted sepia and the soundscape hums with birdsong and lowing cattle. The implication is that we’ve not yet been whirled over the rainbow. Preshow, the view from one’s seat is of being in a massive old opera house with dusky green drapes flanked by rows of orchestra seats. None of the proscenium is actually there, nor are the musicians heard running scales and rehearsing “Follow the Yellow Brick Road.”
The simulation of human handicraft — of stagehands and horn players hiding in the wings — is unnerving. But it gets you thinking about the actual, contemporary people who are behind that curtain. Visual artists who labored on the Sphere project have justly grumbled that their sweat has gotten publicly dismissed as AI. An actual symphony orchestra rerecorded “Oz’s” mono score on the very same MGM stage used in 1939, allegedly with some of the same instruments. It sounds fantastic, and it’s so loyal to every jaunty warble that audiences might not notice.
A few scenes have been lopped off entirely. The Cowardly Lion no longer trills about becoming king of the forest. The majority of the shots have been micro-trimmed to be snappier, a pace that wouldn’t suit stoners’ penchant for synchronizing the movie to Pink Floyd’s dreamy, woozy “The Dark Side of the Moon.” Occasionally, the camera’s placement appears to have been adjusted to allow the visuals to expand to fill the space. Inside Dorothy’s Kansan house, a once-shadowed frying pan on the wall now dangles front and center, as does a digitally added “Home Sweet Home” needlepoint nailed to the threshold. (The plotting has become so brisk that we might otherwise miss the message that there’s no place like it.)
The tweaks can be subtle and lovely. Dorothy belts “Over the Rainbow” underneath newly actualized bluebirds and an impressively ominous sky. When the tornado happens, the tech changes hit us like a cyclone. We’re pulled through the window and into the eye of the storm, where a cow spirals around like it wants to outdo the scene-stealing bovine from “Twister.” A great, giddy blast of air from the 750-horsepower fans blew my bangs straight off my forehead. I kept one eye on the screen while trying to catch a flurry of tissue-paper leaves. The wow factor is so staggering that you might not spot that Sphere’s founder and chief executive, James Dolan, and Warner Bros. president and CEO David Zaslav have superimposed their faces on the two sailors twirling past in a rowboat — an apropos in-joke for people concerned the moguls have been swept away by their own bluster.
“Anyone can blow wind into your face,” Dolan said to the premiere audience before the film began. “Not everyone can make you feel like you’re in a tornado.” Wearing the Wizard’s green top hat and suit and with his microphone dropping out inauspiciously, Dolan never introduced himself, but he did compliment the other creators of the event, who also wore costumes. (I overheard that some of them thought Dolan was kidding about dressing in character until they found themselves spending four hours getting groomed to look lionesque.)
Just a week earlier, in trial runs, perfumes were piped into the air so people could get a whiff of the Emerald City. (Gauging by the souvenir candles in Sphere’s gift shop, it is chocolate mint.) They’ve currently been scaled back out of concerns that it all might get too overwhelming. Having figured out how to do sight, sound, feel and smell, Dolan conceded that only one sense remains: “We still haven’t figured out taste.”
Taste is definitely still a concern. Oddly, Sphere’s “Oz” loses a dram of its spellcraft once audiences touch down in Munchkinland. Seeing the newly added tops of Oz’s trees makes the fantastical place look smaller.
The margins of “The Wizard of Oz” have been expanded by generative AI to fit the enormous venue.
(Rich Fury / Sphere Entertainment)
You feel for the design teams. They’ve been challenged to magnify a 4-foot matte painting of the arched hallway into the Wizard’s throne room — initially done in pastels on black cardboard — into a 240-foot-tall tableau. One of the 1939 film’s production designers, Jack Martin Smith, said that his instructions were to make Oz “ethereal” and “subdued.” Descriptions of the cornfield’s hand-painted muslin background make it sound like a proto-Rothko. Now, you can see every kernel. The razor-sharp mountains on the horizon don’t inspire your imagination — they make you think of Machu Picchu.
More troublesome are the Munchkins and the citizens of Emerald City. Tidied into high definition, they often appear restless. As Dorothy pleads for the Wizard not to fly away without her, we’re distracted by hundreds of waving extras who visibly don’t give a hoot what happens to the girl. Worse, they occasionally seem to glitch. If that’s the best AI can do in 2025, then Sphere isn’t a resounding endorsement.
By contrast, Judy Garland’s performance, delivered at just age 16, feels monumental. Her big brown eyes dominate the screen. When the heartbroken girl sobs that the Wicked Witch has chased away her beloved Toto, I found myself annoyed by a flying monkey on the left side of the frame who simply looked bored.
The field of poppies is dazzling; the additional deer, ants and rodents skittering across the golden sidewalk are simply strange. Overall, you’re so caught up observing the experience itself that the emotions of the story don’t register as anything more than theme-ride hydraulics. Still, it’s nice to have a sweeping view of the first film’s prosthetic makeup: the Cowardly Lion’s upturned nostrils, the Scarecrow’s baggy jowls, the real horses painted purple and red with Jell-O. (Due to pace tightening, we only see two ponies, not all six).
I recoiled when the Wizard’s disembodied head loomed above. Who decided to make him look like a cheesy martian? Flipping through sketches from 1939 afterward, I realized that he always looked that bad. His gaunt cheekbones just weren’t as obvious before. Nevertheless, be sure to look to the right when Toto reveals Oz’s control booth. In a clever touch, Sphere lets us continue to see the monstrous green face, now neutered and ridiculous, mouth along as the panicked geek apologizes for being a humbug.
Can Sphere win big on its risky gamble that there’s no place like dome? It’s not the first Las Vegas attraction to bet on our love for the MGM extravaganza. “The Wizard of Oz” has been tangled up with Las Vegas’ fortunes for more than half a century, ever since real estate investor Kirk Kerkorian purchased MGM Studios in 1969 and, one year later, auctioned Dorothy’s slippers to help fund the construction of the first MGM Grand Hotel and Casino. The second MGM Grand, the one that opened in 1993, was branded for “The Wizard of Oz” — that’s why it’s green like Emerald City — and during the first year, visitors could walk through an animatronic forest of lions, tigers and gamblers.
The Strip was once a magical place where innocents like Dorothy flocked to get into trouble, often in encounters with sleight-of-hand hucksters like Professor Marvel. Hopes are high that tourists will come back to be transported to Oz, even at a ticket price that costs a chunk of the family farm. The hurdle is that although audiences have become begrudgingly accustomed to spending more than $100 to see their favorite bands, they’re still seeing an actual band and not a shortened version of a movie that’s popular in part because everyone grew up watching it on TV for free.
But on opening night at least, the crowd was treating the cinema like a concert. Many folks were in some sort of costume, including me. (I couldn’t resist wearing a pair of red shoes.) When I complimented a man’s blue gingham suit, he handed me a handmade beaded, Taylor Swift-style bracelet that read: Toto Too.
If fans like him turn this techno-incarnation of “Oz” into a hit, Sphere has said it would consider following it up with a similar presentation of “Gone With the Wind.” Imagine the smell of the burning of Atlanta. Much better than the air of burning money.
John McGing couldn’t reach a human. That might be business-as-usual in this economy, but it wasn’t business; he had called the Social Security Administration, where the questions often aren’t generic and the callers tend to be older, disabled, or otherwise vulnerable Americans.
McGing, calling on behalf of his son, had an in-the-weeds question: how to prevent overpayments that the federal government might later claw back. His call was intercepted by an artificial intelligence-powered chatbot.
No matter what he said, the bot parroted canned answers to generic questions, not McGing’s obscure query. “If you do a key press, it didn’t do anything,” he said. Eventually, the bot “glitched or whatever” and got him to an agent.
It was a small but revealing incident. Unbeknownst to McGing, a former Social Security employee in Maryland, he had encountered a technological tool recently introduced by the agency. Former officials and longtime observers of the agency say the Trump administration rolled out a product that was tested but deemed not yet ready during the Biden administration.
“With the new administration, they’re just kind of like, let’s go fast and fix it later, which I don’t agree with, because you are going to generate a lot of confusion,” said Marcela Escobar-Alava, who served as Social Security’s chief information officer under President Joe Biden.
Some 74 million people receive Social Security benefits; 11 million of those receive disability payments. In a survey conducted last fall, more than a third of recipients said they wouldn’t be able to afford such necessities as food, clothing, or housing without it. And yet the agency has been shedding the employees who serve them: Some 6,200 have left the agency, its commissioner told lawmakers in June, and critics in Congress and elsewhere say that’s led to worse customer service, despite the agency’s efforts to build up new technology.
Take the new phone bot. At least some beneficiaries don’t like it: Social Security’s Facebook page is, from time to time, pockmarked with negative reviews of the uncooperative bot, even as the agency said in July that nearly 41% of calls are handled by it.
Lawmakers and former agency employees worry it foreshadows a less human Social Security, in which rushed-out AI takes the place of pushed-out, experienced employees.
Anxieties across party lines
Concern over the direction of the agency is bipartisan. In May, a group of House Republicans wrote to the Social Security Administration expressing support for government efficiency, but cautioning that their constituents had criticized the agency for “inadequate customer service” and suggesting that some measures may be “overly burdensome.”
The agency’s commissioner, Frank Bisignano, a former Wall Street executive, is a tech enthusiast. He has a laundry list of initiatives on which to spend the $600 million in new tech money in the Trump administration’s fiscal 2026 budget request. He’s gotten testy when asked whether his plans mean he’ll be replacing human staff with AI.
“You referred to SSA being on an all-time staffing low; it’s also at an all-time technological high,” he snapped at one Democrat in a House hearing in late June.
But former Social Security officials are more ambivalent. In interviews with KFF Health News, people who left the agency — some speaking on the condition of anonymity for fear of retribution from the Trump administration and its supporters — said they believe the new administration simply rushed out technologies developed, but deemed not yet ready, by the Biden administration. They also said the agency’s firing of thousands of employees resulted in the loss of experienced technologists who are best equipped to roll out these initiatives and address their weaknesses.
“Social Security’s new AI phone tool is making it even harder for people to get help over the phone — and near impossible if someone needs an American Sign Language interpreter or translator,” Sen. Elizabeth Warren (D-Mass.) told KFF Health News. “We should be making it as easy as possible for people to get the Social Security they’ve earned.”
Spokespeople for the agency did not reply to questions from KFF Health News.
Using AI to automate customer service is one of the buzziest businesses in Silicon Valley. In theory, the new breed of artificial intelligence technologies can smoothly respond, in a human-like voice, to just about any question. That’s not how the Social Security Administration’s bot seems to work, with users reporting canned, unrelated responses.
The Trump administration has eliminated some online statistics, obscuring the agency’s true performance, said Kathleen Romig, a former agency official who is now director of Social Security and disability policy at the left-leaning Center on Budget and Policy Priorities. The old website showed that most callers waited two hours for an answer. Now, the website doesn’t show waiting times at all, either for phone inquiries (once callback wait time is accounted for) or for appointment scheduling.
While statistics are being posted that show beneficiaries receive help — that is, using the AI bot or the agency’s website to accomplish tasks like getting a replacement card — Romig said she thinks it’s a “very distorted view” overall. Reviews of the AI bot are often poor, she said.
Agency leaders and employees who first worked on the AI product during the Biden administration anticipated those types of difficulties. Escobar-Alava said they had worked on such a bot, but wanted to clean up the policy and regulation data it was relying on first.
“We wanted to ensure the automation produced consistent and accurate answers, which was going to take more time,” she said. Instead, it seems the Trump administration opted to introduce the bot first and troubleshoot later, Escobar-Alava said.
Romig said one former executive told her that the agency had used canned FAQs without modifications or nuances to accommodate individual situations and was monitoring the technology to see how well it performed. Escobar-Alava said she has heard similarly.
Could automation help?
To Bisignano, automation and web services are the most efficient ways to assist the program’s beneficiaries. In a letter to Warren, he said that agency leaders “are transforming SSA into a digital-first agency that meets customers where they want to be met,” making changes that allow the vast majority of calls to be handled either in an automated fashion or by having a human return the customer’s call.
Using these methods also relieves burdens on otherwise beleaguered field offices, Bisignano wrote.
Altering the phone experience is not the end of Bisignano’s tech dreams. The agency asked Congress for some $600 million in additional funding for investments, which he intends to use for online scheduling, detecting fraud, and much more, according to a list submitted to the House in late June.
But outside experts and former employees said Bisignano overstated the novelty of the ideas he presented to Congress. The agency has been updating its technology for years, but that does not necessarily mean thousands of its workers are suddenly obsolete, Romig said. It’s not bad that the upgrades are continuing, she said, but progress has been more incremental than revolutionary.
Some changes focus on spiffing up the agency’s public face. Bisignano told House lawmakers that he oversaw a redesign of the agency’s performance-statistics page to emphasize the number of automated calls and deemphasize statistics about call wait times. He called the latter stats “discouraging” and suggested that displaying them online might dissuade beneficiaries from calling.
Warren said Bisignano has since told her privately that he would allow an “inspector general audit” of their customer-service quality data and pledged to make a list of performance information publicly available. The agency has since updated its performance statistics page.
Other changes would come at greater cost and effort. In April, the agency rolled out a security authentication program for direct deposit changes, requiring beneficiaries to verify their identity in person if what the agency described in regulatory documents as an “automated” analysis system detects anomalies.
According to documents accompanying the proposal, the agency estimated about 5.8 million beneficiaries would be affected — and that it would cost the federal government nearly $1.2 billion, mostly driven by staff time devoted to assisting claimants. The agency is asking for nearly $7.7 billion in the upcoming fiscal year for payroll overall.
Christopher Hensley, a financial adviser in Houston, said one of his clients called him in May after her bank changed its routing number and Social Security stopped paying her, forcing her to borrow money from her family.
It turned out that the agency had flagged her account for fraud. Hensley said she had to travel 30 minutes to the nearest Social Security office to verify her identity and correct the problem.
Tahir writes for KFF Health News, a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF — the independent source for health policy research, polling, and journalism.
Democrats, bless their hearts, keep trying to figure out the magic formula to stop President Trump. But here’s a cold splash of reality: If Trump’s popularity ever collapses, it will probably be because of something completely beyond their control.
In 2020, it wasn’t some brilliant strategy that defeated Trump. It was COVID. A global pandemic. An act of God (or Wuhan).
This raises an uncomfortable thought: the next disruption — the one that might shake up the political snow globe again — will probably be much bigger than COVID. That looming disturbance is artificial intelligence.
In a recent Substack essay, Pete Buttigieg suggested that “the number one leadership challenge for world leaders, including the President of the United States, will be to manage the changes that AI is bringing about.” He goes on to note that “our president — and his opposition — have yet to make clear what their AI policies even are.”
He’s not wrong about the bipartisan lack of preparation. And for this reason, the political consequences are likely to be brutal for whichever party is in charge when the tipping point arrives and AI upends the lives of millions of Americans.
Trump still has three and a half years left on the clock — just enough time for AI to yank the rug out from under him. That’s a golden opportunity for Democrats, if they’re smart enough to capitalize on it.
But Democrats should hold off on gleefully penciling in 2028 as the year AI hands them the keys to the White House in perpetuity. Why? Because huge shocks to the system tend to empower either a) bold problem solvers or b) populist demagogues.
Lest we forget, the last seismic tech shift — the rise of the Information Age — gave us globalization, economic dislocation (for working-class Americans) and (eventually) Donald Trump.
This next disruption could be even more traumatic. AI isn’t just coming for truck drivers. It’s coming for legal assistants, graphic designers, junior software developers, even (ahem) writers. College graduates who spent decades believing their degree was a shield against obsolescence are about to get a taste of what coal miners, steelworkers, typists and travel agents have already endured.
When that happens, disenchanted moderates will radicalize, and income inequality will detonate. The people who build and control AI will obviously get filthy rich. So will superstar surgeons and elite litigators — people whose rarefied expertise and skills can’t be replicated remotely. But their legions of associates, researchers and paralegals will vanish like Blockbuster Video.
Now, for generations, lost jobs and industries were replaced by new ones — thanks to what economists call “creative destruction.” The buggy maker gave way to the auto industry and the auto mechanic, and society moved forward. But this time, the old rules may not apply — at least, not by virtue of some organic “invisible hand.”
If this shift is as severe and pervasive as many believe it will be (a huge caveat, to be sure), it won’t be solved by fiddling around with marginal tax rates or by mildly expanding unemployment benefits. It will require a vast reimagining of what the government does — the kind of thing that would make free-market purists break out in hives.
But here’s where it gets tricky for Democrats: They can’t simply hand displaced workers a check and call it a solution.
This is the core problem with universal basic income, often touted as the answer to AI-driven job losses. The modest $1,000-a-month figure that’s been floated is a joke. But even if the amount were higher, it would still have to be paired with meaningful work.
Something Democrats must learn: People don’t just want money. They crave dignity, purpose, belonging and a reason to get up in the morning.
That means thinking big and finding meaningful opportunities for the displaced to serve and provide value. Imagine one teacher for every five students in America’s public school and college classrooms. Imagine school buses with three adults instead of one overworked driver.
Imagine a national corps of well-paid nurses and physical therapists making regular visits to isolated seniors and providing full-time home healthcare.
Picture teams of young, tech-savvy Americans helping retirees navigate their iPads, iPhones, TVs and other devices — closing the digital divide for an entire generation.
Now, pair that with a bold expansion of union apprenticeships to train the next wave of electricians, plumbers and carpenters — alongside free college or vocational training in exchange for a year or two of national service.
It wouldn’t happen overnight. Managing this transition would require robust unemployment benefits — say, 90% of prior salary for a fixed period — not as welfare, but as an investment in people and a dividend on the value they’ve helped create by virtue of tax dollars (that built the internet) and data (that fuel automation). Because again, addressing the dilemma of job displacement is about more than money.
Which brings us to some important questions we had better answer.
What does it mean to be a citizen in a society when AI makes half of the labor market feel redundant? How do you retain your identity and sense of self-worth when the work you have dedicated your life to can be more efficiently done by artificial intelligence?
And how do we redeploy human beings — tens of millions of them — into roles that make life better for others and give them back the self-respect that comes from service?
AI might be the great test of our political age, and the party that passes this test will be remembered as our savior.
The party that fails this test will be remembered — if at all — as the one fiddling while Rome was automated.
The road to steady work in Hollywood is more fraught now than ever. The entertainment industry is in the throes of a seismic transformation: traditional jobs are vanishing, and AI threatens to completely upend the way visual media is made and consumed. Fortunately, Ada Tseng and Jon Healey are here to help.
The writing team, both former Times editors with extensive experience covering show business, have written “Breaking Into New Hollywood,” a how-to guide like no other. Healey and Tseng interviewed hundreds of insiders both above and below the line — gaffers, casting directors, actors, writers, stunt people and many others — to provide an extensive, wide-screen view of how to break in, and what it’s like when you actually do find that dream job.
I sat down with Healey and Tseng to discuss their new book.
Ada Tseng, left, and Jon Healey.
(Ricardo DeAratanha; Jay L. Clendenin / Los Angeles Times)
This is the most comprehensive how-to guide for Hollywood careers I’ve ever read. Where did the impetus for the book come from?
Ada: The book started as a Hollywood careers series at the Los Angeles Times, when Jon and I were editors on a team that specialized in writing guides and explainers. As we were thinking about how to be useful to L.A. Times readers, I pitched a project to help people who were interested in getting a job in Hollywood. A lot of people come to L.A. starry-eyed with big dreams, but the film and TV industry can be pretty brutal.
As journalists, we’re Hollywood outsiders, but we had access to hundreds of professionals who were generous enough to share what they wished they knew when they were starting out. We see it like this: On behalf of the people who don’t have connections in the industry, we cold-emailed people, asked for informational interviews, picked their brains, listened to stories of what they did to build a career — and did our best to consolidate their most practical pieces of advice into an actionable guide.
Jon: A lot of folks I interviewed had similar origin stories in this respect: They knew that they wanted to work in the industry in some capacity, but they didn’t know what exactly they could do. So it made sense to do a book for that sort of person — a guide that would show an array of possible career paths to people who didn’t know what role they wanted to fill.
I feel like “How to Break into the Business” books in the past have tended to focus on positive outcomes rather than the struggle. Did you want to temper expectations, or at least make sure people think things through very thoroughly before jumping in?
Ada: We just wanted to be honest. The glamorous fantasy of Hollywood is so intoxicating. But if you’re going to work in the industry, you need to navigate the day-to-day reality of it. I don’t think we were trying to encourage or discourage anyone. I’d hope that some people would read the chapters and think, “This seems doable, and now I can make a plan,” while others would read it and think, “If I’m honest with myself, I’m someone who needs more stability in my life.” Because it’s not just a career choice. It’s a lifestyle choice.
Jon: Right, this was about expectation-setting and reality-checking. The very first interviews I did for this project were with Foley artists. An expert I interviewed said there were 40 to 50 established Foley practitioners in the U.S., and 100 to 200 folks trying to get into the field. That’s a very tough nut to crack. Then there are the Hollywood unions, which present a catch-22 to anyone trying to join their ranks — they have to do a certain number of hours in jobs covered by union contracts, but union members get first crack at all those gigs.
Your book also covers jobs above and below the line. I think many people don’t even realize how many different career opportunities exist.
Ada: There are two things we heard over and over again. People would say, “It’s incredibly important to understand what all the different departments do.” And they’d also say, “So many people — even our own colleagues in the industry — don’t understand what we do.” So we wanted to encourage newcomers to learn about all different types of jobs in Hollywood and how they work together.
Jon: Talking about the emotional components is about setting expectations too. The vast majority of people who work in Hollywood, from A-list actors to entry-level grips, are freelancers. That’s a tough life of highs and lows, and you have to prepare for that mentally as well as financially. People have to hustle for years to establish themselves, and that takes an enormous capacity for rejection. On top of that is the physical toll the work can extract, especially on the folks involved in setting up and tearing down sets. Part of the point of the book is to tell people with Hollywood dreams that they’ll need to gird themselves emotionally and physically for the work.
You also broach the subject of money and who makes what. Another novel idea for a book like this.
Ada: We consistently heard from people that it takes 5 to 7 years to make a living — and that’s if you’re successful. So unless you come from wealth, how you pay your bills when you aren’t booking gigs is an integral part of breaking into Hollywood — and achieving longevity there.
Also, the money varies widely depending on experience, how big the project is and other factors. But it’s good to understand the basic minimums dictated by the unions, as well as whether you’re interested in a career path where you can expect year-round full-time work, or one where 30 weeks of employment is considered a really good year.
Jon: The hardest parts to write for me, and probably for Ada too, were the sections telling people in certain fields that they were expected to work for free. Happily, the industry seems to be getting better about that, albeit because it’s been forced to do so.
Ada: Although, it’s not even that you aren’t making money. You have to spend a lot of money, whether you’re taking classes, buying equipment, submitting your work for fellowships — getting your own plane tickets and hotel rooms to go to events to network or promote your work. You’re basically investing in yourself as a business.
Your sections on AI are eye-opening. AI is not necessarily a career killer but, in fact, might boost employment, right?
Jon: I like to give a super long answer to this question that cites the long history of industrial revolutions, but for the sake of brevity, I’ll just say that technology has always been crucial to the film and TV industry, and innovations over the years have ended some livelihoods while creating others. AI tools can allow filmmakers to be more efficient, just as digital cameras and LED lights have done. That inevitably means fewer jobs per project, but also should result in more projects being green-lit. And as digital tools and streaming services eliminate barriers to entry in music, so can AI eliminate barriers to entry in film. Advocates of AI believe there will be a net increase in jobs, and time will tell whether they’re right. But there’s no question that the jobs in film and TV will be different.
Ada: This was another hard part to give advice about, because AI is rapidly evolving and there’s a lot of well-founded fear about the jobs of our generation that will be eliminated. But this book is for the next generation, and aspiring creatives need to treat AI as part of their toolkit.
Was there any common thread that runs through all of the interviews you conducted with professionals?
Ada: Everyone is deeply committed to their crafts, but what they’re most passionate about is storytelling. What I mean by that is: A costume designer, of course, is passionate about clothing, but if their main priority was beautiful clothing, they’d be a stylist or a fashion designer. Costume designers are passionate about using clothing to create a character and tell a story. Similarly, if a set decorator’s main passion was creating beautiful homes, they’d be an interior designer. But a set decorator wants to use the furniture, decor and objects to help you understand the protagonist’s backstory.
Jon: Even the most accomplished crew members and producers we talked to said they looked at their jobs as advancing someone else’s vision, not their own. They learned early on not to get invested emotionally in their best ideas because someone else — the director on a film, the showrunner on a TV series — would be the judge of which ideas to use. That’s really humbling.
What do you think is the most profound change in Hollywood as it continues to transition from theatrical and TV into streaming?
Jon: Streaming has proven to be a huge boon to long-form storytelling, at least from the viewer’s vantage point. You’d still have “Succession” without streaming, but you don’t have the quantity of “Succession”-level shows without the investment and competition from the likes of Netflix, Apple and Amazon. But the economics of streaming series are very different from those of a long-running broadcast TV show. There are fewer episodes, which means less pay for writers, actors and crew members over the course of a year. And residuals are lower for those who are entitled to them. Meanwhile, after a steady rise in the number of scripted shows released in the U.S., the volume fell sharply in 2024. So it appears that peak TV may have peaked.
For movies, the pandemic gave studios a preview of the post-theatrical world to come. Nevertheless, the industry is still struggling to come up with a coherent approach to streaming. So much of a movie’s marketing is still tied to theatrical releases, and multiplexes and studios continue to fight over how long a new movie should wait before it hits the streamers. And I wonder if there isn’t a lingering stigma for movies that are available immediately for streaming, similar to the one for movies that went straight to DVD.
Ada: It’s not just streaming. Everything that we consume from our phones — from social media content to podcasts to gaming livestreams — is not only competing with mainstream Hollywood but also becoming part of the same big entertainment ecosystem.
But on the flip side, it’s never been more possible for aspiring creatives to bypass traditional gatekeepers, make their own projects, connect directly with audiences and build their own revenue streams — even if it’s never going to be easy.
When is AI not artificial intelligence? When it refers to ammonia inhalants, aka smelling salts.
When are these AIs in the news? When it was reported that the NFL banned their use, San Francisco 49ers star George Kittle protested, and the NFL walked back the ban a day later. The league’s players association clarified that players can still use AIs as long as teams don’t provide them.
Got it?
The NFLPA sent a memo to players on Wednesday saying that the ban only prohibits team employees from distributing AIs during games.
That must have pleased Kittle, who when under the impression that AIs were banned completely, grabbed a microphone on an NFL Network broadcast to say, “I honestly just came up here to air a grievance. Our team got a memo today that smelling salts and ammonia packets were made illegal in the NFL, and I’ve been distraught all day.”
The five-time All-Pro tight end said he used the substances for an energy boost before every offensive drive and joked that upon learning of the ban he “considered retirement.”
Except that it isn’t a ban. Kittle will just have to bring his own AI stash to ballgames.
“To clarify, this policy does not prohibit player use of these substances, but rather it restricts clubs from providing or supplying them in any form,” the NFLPA memo said. “The NFL has confirmed this to us.”
The use of AIs by NFL players has been under the radar despite apparently being a common practice. Their primary use is to prevent and treat fainting, with the Federal Aviation Administration requiring U.S. airlines to carry them in the event a pilot feels faint.
The ammonia gas irritates the nasal membranes, causing a reflex that increases breathing and heart rate. That can keep a person from fainting, and apparently can also help a person block and tackle.
In short, an AI — which has been described as smelling like cat urine — is a performance-enhancing substance.
The NFL, however, cited a warning from the FDA that AIs can mask symptoms of a concussion and have not been proven to be safe or effective simply to increase energy.
“In 2024, the FDA issued a warning to companies that produce commercially available ammonia inhalants (AIs), as well as to consumers about the purchase and use of AIs, regarding the lack of evidence supporting the safety or efficacy of AIs marketed for improving mental alertness or boosting energy,” the NFL memo to teams stated. “The FDA noted potential negative effects from AI use.
“AIs also have the potential to mask certain neurological signs and symptoms, including some potential signs of concussion. As a result, the NFL Head, Neck, and Spine Committee recommended prohibiting the use of AIs for any purpose during play in the NFL.”
Tampa Bay Buccaneers quarterback Baker Mayfield — who says he uses AIs — said the logic behind the NFL no longer supplying them is convoluted.
“I think the reasoning was that it masked concussion symptoms,” Mayfield said on “Up and Adams.” “But if you get knocked out, which is the whole purpose of smelling salts — to wake you up — you’re not allowed back in the game.
“I think it was a quick trigger to ban them, just to kind of CYA [cover your ass].”
Maybe NFL officials figure that by no longer supplying AIs and forcing players to bring their own batch to games, their liability in case of concussions or other medical complications is reduced.
“You just got to bring your own juice to the party, got to wake up ready to go,” Mayfield said.
By Ken Jennings
Scribner: 480 pages, $21
Ken Jennings wants you to know he didn’t name his trivia game “Kennections.”
“It’s really an unpleasant name,” the “Jeopardy!” champion turned host says of the quizzes now published weekly by Mental Floss. “We have to lead with that. It was suggested by an editor at Parade Magazine, but it doesn’t look good or sound good.”
But Jennings loves the quizzes themselves, which are now collected (kellected?) in “The Complete Kennections.” The Simon & Schuster release, on shelves July 29, follows earlier Jennings books that included more writing. Those include: “Brainiac: Adventures in the Curious, Competitive, Compulsive World of Trivia Buffs,” “Maphead: Charting the Wide, Weird World of Geography Wonks,” “Because I Said So!: The Truth Behind the Myths, Tales, and Warnings Every Generation Passes Down to Its Kids” and “100 Places to See After You Die: A Travel Guide to the Afterlife.”
Jennings recently spoke about his books, AI and why trivia matters. This interview has been edited for length and clarity.
Was writing books always a goal?
I was an English major in college. I wanted to write and to teach, but writing didn’t seem like a practical choice. I was also doing a double major in computer science, and in 2000 it was absurdly easy to get a job at a friend’s startup, even if you were a terrible programmer, which I was.
Writing about geography and myths and fabled places of the afterlife all seem to make sense coming from the brain of a “Jeopardy!” champion.
It’s easy to imagine the same kid in an elementary school library, reading about these things in the World Book encyclopedia during a rainy recess. I was just a sponge for weird information. That’s my origin story right there.
I thought of “Jeopardy!” as a fun, crazy summer and did not think it would be my life, so I tried making each book less about “Jeopardy!” and trivia than the one before it.
Is the information in your books trivial, or do you think it’s important to get readers to understand geography and the way our culture passes down myths and tales?
I’m a believer that trivia is not just a bar pastime, or even a way for little Lisa Simpsons to get told they’re smart into adulthood. I always felt trivia was kind of a universal social good, a way to enjoy cultural literacy.
I feel I’m part of the last generation that had to justify having nerdy interests. It was kind of shameful and made you the punchline of jokes in movie comedies and stand-up. Today, it seems self-evident to everyone younger than us that, well, of course you would just be obsessive about lunchboxes or about “Battlestar Galactica” or fossils. That’s totally normalized, and it’s actually good.
But I’ve also been mourning the loss of generalists, people who knew a little bit about everything, which is what “Jeopardy!” celebrates, but it’s not fashionable. We live in a siloed society of specialists. And I really think we’d be better off if everybody knew a little bit about everything.
I do think it’s good to know trivia is not something that makes you better than other people. It doesn’t exist to show off or even to make you feel smarter about yourself. Ideally, it should bring people together and make the world more interesting and make you a more sparkling conversationalist.
“Jeopardy!” and your books strive to make learning facts fun. Is there a lesson there for educators?
I think that’s the beauty of trivia. I wrote a series of books for kids with amazing facts because I liked that kind of book when I was a kid. And you can see it in a classroom, when you see kids’ eyes light up about information and about serious subjects and about knowledge when it’s presented in a fun way, especially with narrative.
Narrative is the secret sauce. It just makes kids think the world is an amazing adventure and you just have to be curious and dig into it. But that gets beaten out of us, and then a lot of us at some point just specialize in one thing. You need to remind people that learning is not a chore. If it’s not fun, you’re doing it wrong. And trivia is very good at that.
Every good “Jeopardy!” clue tells a story in some way, saying, here’s why you should want to know this or here’s what this might have to do with life and the reason why this is not random minutiae, which I think is a lot of people’s stereotype of trivia nerds. A trivia question can help you connect a fact to other things. Trivia is just an art of connections.
That’s certainly true in your “Kennections” book.
I grew up doing crosswords, riddles and rebuses. I’ve always liked trivia that rewards not just the recall of the right fact but has a little more mental clockwork involved so you have to solve some puzzles. You have to analyze the clue and figure out why it exists and what it’s asking or what it’s not asking, what was included, what was omitted. There’s a lot of analysis that can kind of lead you to the right answer by deduction, even if you don’t know the right fact off the top of your head. One half of your brain is just trying to recall these five facts, but you’ve got this other half that’s trying to figure it out and step back and take the big picture. And it might be something outside the box.
The art of it is finding five things that fit in the category but that can have double meanings: Commodore is both a computer and a member of a Lionel Richie combo.
You write that “Kennections” consumes your life — you go into a bagel store and wonder if you can build five questions out of the flavors. Is the problem that in your day-to-day life, you’re constantly seeing things and thinking things this way? Or is the problem that you can’t say this out loud because you’ll make your family crazy?
That’s something I learned early — that being this trivia-loving kid has the potential to be annoying. But my kids know what they’re getting from me at this point. And they both have the gene themselves. One is obsessed with Major League Baseball, and one is obsessed with the history of Disney theme parks, and they have encyclopedic knowledge every bit as awe-inspiring and freakish as I had as a kid. And I’m proud of that.
Do you worry about living in a culture that’s so polarized that facts aren’t even universally received and where AI takes over people’s need to be curious, allowing students to take shortcuts in learning?
I think an oligarch class is going to deliver us a combination of both, where the AI will not only create reliance on it but give us bad, counterfactual information about important issues. And it’s really something I take seriously. It’s really something we need to be pushing back on now.
You don’t want to trust an AI summary of a subject or AI’s take on an issue without understanding who controls that algorithm and why they want you to hear that information.
For filmmaker Scott Mann, three dozen F-bombs had the makings of a million-dollar headache.
When Mann wrapped “Fall,” a 2022 thriller about two women stranded atop a 2,000-foot radio tower, he figured the hard part was over. Shot in the Mojave Desert on a $3-million budget, the film didn’t have money to burn and seemed on course. But Lionsgate wanted a PG-13 rating and, with 35 expletives, “Fall” was headed for an R. Reshoots would cost more than $1 million — far beyond what the production could afford.
In the past, a director might have taken out a second mortgage or thrown themselves at the mercy of the ratings board. Mann instead turned to AI.
A few years earlier, he had been dismayed by how a German dub of his 2015 thriller “Heist” flattened the performances, including a key scene with Robert De Niro, to match stiff, mistranslated dialogue. That frustration led Mann to co-found Flawless, an AI startup aimed at preserving the integrity of an actor’s performance across languages. As a proof of concept, he used the company’s tech to subtly reshape De Niro’s mouth movements and restore the emotional nuance of the original scene.
On “Fall,” Mann applied that same technology to clean up the profanity without reshoots, digitally modifying the actors’ mouths to match PG-13-friendly lines like “freaking” — at a fraction of the cost.
As AI stirs both hype and anxiety in Hollywood, Mann understands why even such subtle digital tweaks can feel like a violation. That tension came to a head during the 2023 SAG-AFTRA strike, in which AI became the defining flash point in the fight over acting’s future.
“Ours is a rights-based industry,” says Mann, 45, who helped develop a digital rights management platform at Flawless to ensure performers approve any changes to their work. “It’s built on protecting human creativity, the contributions of actors, directors, editors, and if those rights aren’t protected, that value gets lost.”
Mann at his office in Santa Monica.
(Brian Feinzimer / For The Times)
Still, Mann doesn’t see AI as a threat so much as a misunderstood tool — one that, used carefully, can support the artists it’s accused of replacing. Flawless’ DeepEditor, for example, lets directors transfer facial expressions from one take to another, even when the camera angle or lighting changes, helping actors preserve their strongest moments without breaking continuity.
“Plenty of actors I’ve worked with have had that moment where they see what’s possible and realize, ‘Oh my God, this is so much better,’” Mann says. “It frees them up, takes off the pressure and helps them do a better job. Shutting AI out is naive and a way to end up on the wrong side of history. Done right, this will make the industry grow and thrive.”
AI isn’t hovering at the edges of acting anymore — it’s already on soundstages and in editing bays. Studios have used digital tools to de-age Harrison Ford in “Indiana Jones and the Dial of Destiny,” resurrect Peter Cushing’s Grand Moff Tarkin in “Rogue One” and clone Val Kilmer’s voice in “Top Gun: Maverick” after throat cancer left him unable to speak. The technology has reshaped faces, smoothed dialogue and fast-tracked everything from dubbing to reshoots. And its reach is growing: Studios can now revive long-dead stars, conjure stunt doubles who never get hurt and rewrite performances long after wrap.
But should they?
Actors outside Paramount Studios during a SAG-AFTRA solidarity rally in September 2023.
(Al Seib / For The Times)
As the tools grow more sophisticated, the threat to actors goes beyond creative disruption. In an industry where steady work is already elusive and the middle class of working actors is vanishing, AI raises the prospect of fewer jobs, lower pay and, in a dystopian twist, a future in which your disembodied face and voice might get work without you.
Background actors were among the first to sound the alarm during the 2023 strike, protesting studio proposals to scan them once and reuse their likenesses indefinitely. That scenario is already beginning to unfold: In China, a state-backed initiative will use AI to reimagine 100 kung fu classics, including films starring Jackie Chan and Bruce Lee, through animation and other digital enhancements. Lee’s estate said it was unaware of the project, raising questions about how these actors’ likenesses might be used, decades after filming.
If the soul of acting is a human presence, what remains when even that can be simulated?
“You want to feel breath — you want to feel life,” said actor and director Ethan Hawke during a panel at 2023’s Telluride Film Festival, where strike-era unease over AI was palpable. “When we see a great painting, we feel a human being’s blood, sweat and tears. That’s what we’re all looking for, that connection with the present moment. And AI can’t do that.”
Who’s in control?
Justine Bateman may seem like an unlikely crusader in Hollywood’s fight against AI. Launched to fame as Mallory Keaton on the 1980s sitcom “Family Ties,” she later became a filmmaker and earned a computer science degree from UCLA. Now, as founder of the advocacy group CREDO23, Bateman has become one of the industry’s fiercest voices urging filmmakers to reject AI-generated content and defend the integrity of human-made work. Loosely modeled on Dogme 95, CREDO23 offers a certification of films made without AI, using minimal VFX and union crews. It’s a pledge backed by a council including “Mad Men” creator Matthew Weiner, “The Handmaid’s Tale” director Reed Morano and actor Juliette Lewis.
The 2023 SAG-AFTRA contract set new guardrails: Studios must get actors’ consent to create or use digital replicas of their likenesses, and those replicas can’t generate new performances without a separate deal. Actors must also be compensated and credited when their digital likeness is used.
But to Bateman, a former SAG-AFTRA board member and negotiating committee rep, those protections are little more than sandbags against an inevitable AI flood: hard-won but already straining to keep the technology at bay.
“The allowances in the contract are pretty astounding,” Bateman says by phone, her voice tight with exasperation. “If you can picture the Teamsters allowing self-driving trucks in their contract — that’s on par with what SAG did. If you’re not making sure human roles are played by human actors, I’m not sure what the union is for.”
Justine Bateman, photographed by The Times in 2022.
(Jay L. Clendenin / Los Angeles Times)
To Bateman, the idea that AI expands access to filmmaking — a central tenet of its utopian sales pitch — is a dangerous myth, one that obscures deeper questions about authorship and the value of creative labor.
“Anyone can make a film — my last two, I shot on an iPhone,” Bateman says. “The idea that AI is ‘democratizing film’ doesn’t even make sense. What it really does is remove the barrier of skill. It lets people pretend they’re filmmakers when they’re not, by prompting software that wouldn’t even function without having stolen a hundred years of film and TV production made by real filmmakers.”
Bateman’s opposition to AI is rooted in a deep distrust of Silicon Valley’s expanding influence over the creative process and a belief that filmmaking should be driven by artists, not algorithms. “The tech bro business completely jumped the shark with generative AI,” she says. “Is it solving plastics in the ocean? Homelessness? L.A. traffic? Not that I’m aware of.”
She scoffs at the supposed efficiencies AI brings to the filmmaking process: “It’s like saying, whatever somebody enjoys — sex or an ice cream sundae — ‘Hey, now you can do it in a quarter of the time.’ OK, but then what do you think life is for?”
To Bateman, an actor’s voice, face, movements or even their choice of costume is not raw material to be reshaped but an expression of authorship. AI, in her view, erases those choices and the intent behind them. “I’m deeply against changing what the actor did,” she says. “It’s not right to have the actor doing things or saying things they didn’t do — or to alter their hair, makeup or clothes in postproduction using AI. The actor knows what they did.”
While Bateman has been public and unwavering in her stance, many actors remain unsure whether to raise their voices. In the wake of the strikes, much of the conversation around AI has moved behind closed doors, leaving those who do speak out feeling at times exposed and alone.
Scarlett Johansson, who lent her smoky, hypnotic voice to the fictional AI in Spike Jonze’s Oscar-winning 2013 film “Her,” now finds herself in a uniquely uncomfortable position: She’s both a symbol of our collective fascination with artificial performance and a real-world example of what’s at stake when that line is crossed. Last year, she accused OpenAI of using a chatbot voice that sounded “eerily similar” to hers, months after she declined to license it. OpenAI denied the claim and pulled the voice, but the incident reignited concern over consent and control.
Johansson has long spoken out against the unauthorized use of her image, including her appearance in deepfake pornography, and has pushed for stronger safeguards against digital impersonation. To date, though, she is one of the few major stars to publicly push back against the creeping mimicry enabled by AI — and she’s frustrated that more haven’t joined her. “There has to be some agreed-upon set of boundaries in order for [AI] to not be detrimental,” she told Vanity Fair in May. “I wish more people in the public eye would support and speak out about that. I don’t know why that’s not the case.”
Lights, camera, replication
Ed Ulbrich, 60, a pioneering visual effects producer and co-founder of Digital Domain, has spent his career helping actors do the impossible, one pixel at a time.
In 2008’s “The Curious Case of Benjamin Button,” he led a team of more than 150 artists in building a fully digital version of Brad Pitt’s face so the actor could convincingly age in reverse — a two-year effort that earned Ulbrich and three colleagues an Oscar for visual effects and set a new benchmark for digital performance. (Nearly two decades later, the achievement is still impressive, although some scenes, especially those with Pitt’s aged face composited on a child’s body, now show their digital seams.) For 2010’s “Tron: Legacy,” Ulbrich helped digitally transform Jeff Bridges into his 1982 self using motion capture and CGI.
Working on last year’s “Here” — Robert Zemeckis’ technically daring drama starring Tom Hanks and Robin Wright as a couple whose lives play out across decades in a single New Jersey living room — showed Ulbrich just how far things have come. For someone who jokes he has “real estate in the uncanny valley,” it wasn’t just the AI-enabled realism that floored him. It was the immediacy. On set, AI wasn’t enhancing footage after the fact; it was visually reshaping the performance in real time.
Tom Hanks and Robin Wright in the movie “Here.”
(Sony Pictures Ent.)
“You look up and see 67-year-old Tom Hanks. You look down at the monitor — he’s 20, and it looks better than the best CGI,” Ulbrich says. “In my world, the human face is the holy grail. That is the most complicated thing you can do. And now it’s getting done in near real time before your eyes. The actor can come back and look at the monitor and get new ideas, because they’re seeing a different version of themselves: younger, older, as an alien or whatever.”
This kind of seamless AI-driven alteration marks a new frontier in postproduction. Modern AI systems can now “beautify” actors’ faces, as one might with an Instagram or Zoom filter: smooth out wrinkles, alter skin tone, sharpen jawlines, subtly nudge eye position to better match a desired gaze. What once required painstaking VFX can now be handled by fast, flexible AI tools, often with results invisible to audiences.
Once limited to big-budget sci-fi and fantasy productions, this digital touch-up capability is expanding into rom-coms, prestige dramas, high-end TV and even some indie films. Dialogue can be rewritten and re-lipped in post. Facial expressions can be smoothed or swapped without reshoots. More and more, viewers may have no way of knowing what’s real and what’s been subtly adjusted.
“Here” was largely rejected by both audiences and critics, with some deeming its digitally de-aged performances more unsettling than moving. But Ulbrich says digitally enhanced performance is already well underway.
Talent agency CAA has built a vault of client scans, a kind of biometric asset library for future productions. Some stars now negotiate contracts that reduce their time on set, skipping hours in the makeup chair or performance-capture gear, knowing AI can fill in the gaps.
“Robert Downey, Brad Pitt, Will Smith — they’ve all been scanned many times,” says Ulbrich, who recently joined the AI-driven media company Moonvalley, which pitches itself as a more ethical, artist-centered player in the space. “If you’ve done a studio tentpole, you’ve been scanned.
“There is a lot of fear around AI and it’s founded,” he adds. “Unless you do something about it, you can just get run over. But there are people out there that are harnessing this. At this point, fighting AI is like fighting against electricity.”
While many in Hollywood wrestle with what AI means for the oldest component of moviemaking, others take a more pragmatic view, treating it as a tool to solve problems and keep productions on track. Jerry Bruckheimer, the powerhouse producer behind “Top Gun,” “Pirates of the Caribbean” and this summer’s “F1,” is among those embracing its utility.
“AI is not going anywhere and it’s only going to get more useful for people in our business,” he said in a recent interview with The Times.
He recalled one such moment during postproduction on his new Brad Pitt–led Formula One drama, a logistical feat filmed during actual Formula One races across Europe and the Middle East, with a budget north of $200 million.
“Brad was in the wilds of New Zealand, and we had test screenings coming up,” Bruckheimer says. “We couldn’t get his voice to do some looping, so we used an app that could mimic Brad Pitt. I’m sure the union will come after me if you write that, but it wasn’t used in the movie because he became available.”
While he’s skeptical of AI’s ability to generate truly original ideas — “We’re always going to need writers,” he says — Bruckheimer, whose films have grossed more than $16 billion worldwide, sees AI as a powerful tool for global reach.
“They can take Brad’s voice from the movie and turn it into other languages so it’s actually his voice, rather than another actor,” he says. “If it’s not available yet, it will be.”
The debate over AI in performance flared earlier this year with “The Brutalist,” Brady Corbet’s award-winning drama about a Hungarian architect. After the film’s editor, Dávid Jancsó, revealed that AI voice-cloning software had been used to subtly modify the Hungarian accents of stars Adrien Brody and Felicity Jones, the backlash followed swiftly.
Some critics accused the film of using AI to smooth over performances while presenting itself as handcrafted, a move one viral post derided as trying to “cheap out without soul.” Corbet later clarified that AI was used sparingly, only to adjust vowel sounds, but the decision left some viewers uneasy — even as Brody went on to win the Oscar for lead actor.
If the controversy over “The Brutalist” struck some as a moral crisis, David Cronenberg found the whole thing overblown. Few filmmakers have probed the entanglement of flesh, identity and technology as relentlessly as the director of “Videodrome,” “The Fly” and last year’s “The Shrouds,” so he’s not particularly rattled by the rise of AI-assisted performances.
“All directors have always messed around with actors’ performances — that’s what editing is,” Cronenberg told The Times in April. “Filmmaking isn’t theater. It’s not sacred. We’ve been using versions of this for years. It’s another tool in the toolbox. And it’s not controlling you — you can choose not to use it.”
Long before digital tools, Cronenberg recalls adjusting actor John Lone’s vocal pitch in his 1993 film “M. Butterfly,” in which Lone played a Chinese opera singer and spy who presents as a woman to seduce a French diplomat. The director raised the pitch when the character appeared as a woman and lowered it when he didn’t — a subtle manipulation to reinforce the illusion.
David Cronenberg, photographed at his home in Toronto, Canada, in April.
(Kate Dockeray / For The Times)
Far from alarmed, Cronenberg is intrigued by AI’s creative potential as a way of reshaping authorship itself. With new platforms like OpenAI’s Sora and Google’s Veo 3 now capable of generating increasingly photorealistic clips from simple text prompts, an entire performance could conceivably be conjured from a writer’s keyboard.
“Suddenly you can write a scene — a woman is walking down the street, she looks like this, she’s wearing that, it’s raining, whatever — and AI can create a video for you,” Cronenberg says. “To me, this is all exciting. It absolutely can threaten all kinds of jobs and that has to be dealt with, but every technological advance has done that and we just have to adapt and figure it out.”
Ghosts in the frame
In the Hollywood of the late 1970s, there was no AI to tweak an actor’s face. So when “Star Wars” star Mark Hamill fractured his nose and left cheekbone in a serious car crash between shooting the first and second films, the solution was to tweak the story. The 1980 sequel “The Empire Strikes Back” opened with Luke Skywalker being attacked by a nine-foot-tall snow beast called a wampa on the ice planet Hoth, partly to account for the change in his appearance.
Decades later, when Hamill was invited to return as a younger version of himself in the 2020 Season 2 finale of “The Mandalorian,” the chance to show Luke “at the height of his powers was irresistible,” he says.
But the reality left him feeling oddly detached from the character that made him famous. Hamill shared the role with a younger body double, and digital de-aging tools recreated his face from decades earlier. The character’s voice, meanwhile, was synthesized using Respeecher, a neural network trained on old recordings of Hamill to mimic his speech from the original trilogy era.
“I didn’t have that much dialogue: ‘Are you Luke Skywalker?’ ‘I am,’” Hamill recalled in an interview with The Times earlier this year. “I don’t know what they do when they take it away, in terms of tweaking it and making your voice go up in pitch or whatever.”
When fans speculated online that he hadn’t participated at all, Hamill declined to correct the record.
“My agent said, ‘Do you want me to put out a statement or something?’” Hamill recalls. “I said, ‘Eh, people are going to say what they want to say.’ Maybe if you deny it, they say, ‘See? That proves it — he’s denying it.’”
A digitally de-aged Mark Hamill as the young Luke Skywalker in a 2020 episode of “The Mandalorian.”
(Lucasfilm Ltd.)
When Luke returned again in a 2022 episode of “The Book of Boba Fett,” the process was even more synthetic. Hamill was minimally involved on camera, and the character was built almost entirely from digital parts: a de-aged face mapped onto a body double, with an AI-generated voice delivering his lines. Hamill was credited and compensated, though the exact terms of the arrangement haven’t been made public.
The visual effect was notably improved from earlier efforts, thanks in part to a viral deepfake artist known as Shamook, whose YouTube video improving the VFX in “The Mandalorian” finale had racked up millions of views. He was soon hired by Industrial Light & Magic — a rare case of fan-made tech critique turning into a studio job.
“In essence, yes, I did participate,” Hamill says.
It’s one thing to be digitally altered while you’re still alive. It’s another to keep performing after you’re gone.
Before his death last year, James Earl Jones — whose resonant baritone helped define Darth Vader for generations — gave Lucasfilm permission to recreate his voice using AI. In a recent collaboration with Disney, Epic Games deployed that digital voice in Fortnite, allowing players to team up with Vader and hear new lines delivered in Jones’ unmistakable tones, scripted by Google’s Gemini AI.
In May, SAG-AFTRA filed a labor charge, saying the use of Jones’ voice hadn’t been cleared with the union.
Last year’s “Alien: Romulus” sparked similar backlash over the digital resurrection of Ian Holm’s android character Ash nearly a decade after Holm’s death. Reconstructed using a blend of AI and archival footage, the scenes were slammed by some fans as a form of “digital necromancy.” For the film’s home video release, director Fede Álvarez quietly issued an alternate cut that relied more heavily on practical effects, including an animatronic head modeled from a preexisting cast of Holm’s face.
For Hollywood, AI allows nostalgia to become a renewable resource, endlessly reprocessed and resold. Familiar faces can be altered, repurposed and inserted into entirely new stories. The audience never has to say goodbye and the industry never has to take the risk of introducing someone new.
Hamill, for his part, seems ready to let go of Luke. After his final arc in 2017’s “The Last Jedi,” he says he feels a sense of closure.
“I don’t know the full impact AI will have, but I find it very ominous,” he says. “I’m fine. I had my time. Now the spotlight should be on the current and future actors and I hope they enjoy it as much as I did.”
Actors, not avatars
Actor and AI startup Wonder Dynamics co-founder Tye Sheridan, photographed by The Times in 2021.
(Michael Nagle / For The Times)
Actor Tye Sheridan knows how dark an AI future could get. After all, he starred in Steven Spielberg’s 2018 “Ready Player One,” a sci-fi thriller set inside a corporate-controlled world of digital avatars. But Sheridan isn’t trying to escape into that world — he’s trying to shape the one ahead.
With VFX supervisor Nikola Todorovic, Sheridan co-founded Wonder Dynamics in 2017 to explore how AI can expand what’s possible on screen. Their platform uses AI to insert digital characters into live-action scenes without green screens or motion-capture suits, making high-end VFX more accessible to low-budget filmmakers. Backed by Spielberg and “Avengers” co-director Joe Russo, Wonder Dynamics was acquired last year by Autodesk, the software firm behind many animation and design tools.
“Since the advent of the camera, technology has been pushing this industry forward,” Sheridan, 28, says on a video call. “AI is just another part of that path. It can make filmmaking more accessible, help discover new voices. Maybe the next James Cameron will find their way into the industry through some AI avenue. I think that’s really exciting.”
With production costs spiraling, Todorovic sees AI as a way to lower the barrier to entry and make riskier, more ambitious projects possible. “We really see AI going in that direction, where you can get those A24-grounded stories with Marvel visuals,” he says. “That’s what younger audiences are hungry for.”
The shift, Todorovic argues, could lead to more films overall and more opportunities for actors. “Maybe instead of 10,000 people making five movies, it’ll be 1,000 people making 50,” he says.
Still, Todorovic sees a threshold approaching, one where synthetic actors could, in theory, carry a film. “I do think technically it is going to get solved,” Todorovic says. “But the question remains — is that what we really want? Do we really want the top five movies of the year to star humans who don’t exist? I sure hope not.”
For him, the boundary isn’t just about realism. It’s about human truth.
“You can’t prompt a performance,” he says. “You can’t explain certain movements of the body and it’s very hard to describe emotions. Acting is all about reacting. That’s why when you make a movie, you do five takes — or 40. Because it’s hard to communicate.”
Sheridan, who has appeared in the “X-Men” franchise as well as smaller dramas like “The Card Counter” and “The Tender Bar,” understands that instinctively and personally. “I started acting in films when I was 11 years old,” he says. “I wouldn’t ever want to build something that put me out of a job. That’s the fun part — performing, exploring, discovering the nuances. That’s why we fall in love with certain artists: their unique sensibility, the way they do what no one else can.”
He knows that may sound contradictory coming from the co-founder of an AI company. That’s exactly why he believes it’s critical that artists, not Silicon Valley CEOs, are the ones shaping how the technology is used.
“We should be skeptical of AI and its bad uses,” he says. “It’s a tool that can be used for good or bad. How are we going to apply it to create more access and opportunity in this industry and have more voices heard? We’re focused on keeping the artist as an essential part of the process, not replacing them.”
For now, Sheridan lives inside that paradox, navigating a technology that could both elevate and imperil the stories he cares most about.
His next acting gig? “The Housewife,” a psychological drama co-starring Naomi Watts and Michael Imperioli, in which he plays a 1960s New York Times reporter investigating a suspected Nazi hiding in Queens. No AI. No doubles. Just people pretending to be other people the old way, while it lasts.
The titular character of the Apple TV+ series “Murderbot” doesn’t call itself Murderbot because it identifies as a killer; it just thinks the name is cool.
Murderbot, a.k.a. “SecUnit,” is programmed to protect people. But the task becomes less straightforward when Murderbot hacks the governor module in its system, granting itself free will. Even so, the freedom only goes so far — the robot must hide its true nature, lest it get melted down like so much scrap metal.
The android, played by Alexander Skarsgård, is often fed up with humans and their illogical, self-defeating choices. It would rather binge-watch thousands of hours of trashy TV shows than deal with the dithering crew of space hippies to which it’s been assigned. On Friday, in the show’s season finale, the security robot made a choice with major implications for the relationships it formed with the Preservation Alliance crew — something the series could explore in the future (Apple TV+ announced Thursday it was renewing the show for a second season).
Though “Murderbot” is a unique workplace satire set on a far-off world, it’s one of several recent TV series dealing with the awkward and sometimes dangerous ways that humans might coexist with robots and artificial intelligence (or both in the same humanoid package).
Other TV shows, including Netflix’s “Love, Death & Robots” and last year’s “Sunny” on Apple TV+, grapple with versions of the same thorny technological questions we’re increasingly asking ourselves in real life: Will an AI agent take my job? How am I supposed to greet that disconcerting Amazon delivery robot when it brings a package to my front door? Should I trust my life to a self-driving Waymo car?
But the robots in today’s television shows are largely portrayed as facing the same identity issues as their counterparts from shows of other eras, including “Lost in Space,” “Battlestar Galactica” (both versions) and even “The Jetsons”: How are intelligent robots supposed to coexist with humans?
They’ll be programmed to be obedient and not to hurt us (a la Isaac Asimov’s Three Laws of Robotics) until, for dramatic purposes, something goes wrong. The modern era of TV robots is more complex, built on the foundational notion that they will be AI-driven and cloud-connected, accessing the same internet bandwidth as humans.
In HBO’s “Westworld,” Evan Rachel Wood played Dolores Abernathy, a sentient android.
(HBO)
The robot in Apple TV+’s “Sunny” was designed to be a friendly helper to Rashida Jones’ Suzie.
(Apple)
Often, on shows such as AMC’s “Humans” and HBO’s “Westworld,” these AI bots become self-actualized, rising up against human oppressors to seek free lives when they realize they could be so much more than servants and sex surrogates. A major trope of modern TV robots is that they will eventually get smart enough to realize they don’t really need humans or come to believe that in fact, humans have been the villains all along.
Meanwhile, in the tech world, companies such as Tesla and Boston Dynamics are working on robots that can perform physical tasks like humans. Amazon, one of the companies poised to benefit, will soon have more robots than people working in its warehouses.
So, we’re struggling to understand it all. AI, for all its potential, feels too large and too disparate a concept for many to get their heads around. AI is ChatGPT, but it’s also Alexa and Siri, and it’s also what companies such as Microsoft, Google, Apple and Meta believe will power our future interactions with our devices, environments and other people. There was the internet, there was social media, now there’s AI. But many people are ambivalent, having seen the kind of consequences that always-present online life and toxic social media have brought alongside their benefits.
Past television series including “Next,” “Person of Interest,” “Altered Carbon” and “Almost Human” addressed potential abuses of AI and how humans might deal with fast-moving technology, but it’s possible they all got there too early to resonate in the moment as much as, say, “Mountainhead,” HBO’s recent dark satire about tech billionaires playing a high-stakes game of chicken while the world burns because of hastily deployed AI software. The quickly assembled film directed by “Succession’s” Jesse Armstrong felt plugged into the moment we’re having, a blend of excitement and dread about sudden widespread change.
Most TV shows, however, can’t arrive at the perfect moment to tap into the tech anxieties of the day. Instead, they often use robots or AI allegorically, assigning them victim or villain roles in order to comment on the state of humanity. “Westworld” ham-handedly drew direct parallels to slavery in its robot narratives while “Humans” more subtly dramatized the legal implications and societal upheaval that could result from robots seeking the same rights as humans.
But perhaps no show has extrapolated the near future of robots and AI tech from as many angles as Netflix’s “Black Mirror,” which in previous seasons featured a dead lover reconstituted into an artificial body, the ultimate AI dating app experience and a meta television show built by algorithms that stole storylines out of a subscriber’s real life.
Season 7, released in April, continued the show’s prickly use of digital avatars and machine learning as plot devices for stories about moviemaking, video games and even attending a funeral. In that episode, “Eulogy,” Phillip (Paul Giamatti) is forced to confront his bad life decisions and awful behavior by an AI-powered avatar meant to collect memories of an old lover. In another memorable Season 7 episode, “Bête Noire,” a skilled programmer (Rosy McEwen) alters reality itself to gaslight someone with the help of advanced quantum computing.
TV shows are helping us understand how some of these technologies might play out even as those technologies are quickly being integrated into our lives. But the overall messaging is murky when it comes to whether AI and bots will help us live better lives or if they’ll lead to the end of life itself.
According to TV, robots like the cute helper bot from “Sunny” or abused synthetic workers like poor Mia (Gemma Chan) from “Humans” deserve our respect. We should treat them better.
The robots and AI technologies from “Black Mirror”? Don’t trust any of them!
And SecUnit from “Murderbot”? Leave that robot alone to watch their favorite show, “The Rise and Fall of Sanctuary Moon,” in peace. It’s the human, and humane, thing to do.
WASHINGTON — The State Department is warning U.S. diplomats of attempts to impersonate Secretary of State Marco Rubio and possibly other officials using technology driven by artificial intelligence, according to two senior officials and a cable sent last week to all embassies and consulates.
The warning came after the department discovered that an impostor posing as Rubio had attempted to reach out to at least three foreign ministers, a U.S. senator and a governor, according to the July 3 cable, which was first reported by the Washington Post.
The recipients of the scam messages, which were sent by text, Signal and voicemail, were not identified in the cable, a copy of which was shared with the Associated Press.
“The State Department is aware of this incident and is currently investigating the matter,” it said. “The department takes seriously its responsibility to safeguard its information and continuously takes steps to improve the department’s cybersecurity posture to prevent future incidents.”
It declined to comment further due to “security reasons” and the ongoing investigation.
It’s the latest instance of a high-level Trump administration figure being targeted by an impersonator, with a similar incident revealed in May involving President Trump’s chief of staff, Susie Wiles. The misuse of AI to deceive people is likely to grow as the technology improves and becomes more widely available, and the FBI warned in the spring about “malicious actors” impersonating senior U.S. government officials in a text and voice messaging campaign.
The hoaxes involving Rubio had been unsuccessful and “not very sophisticated,” one of the officials said. Nonetheless, the second official said the department deemed it “prudent” to advise all employees and foreign governments, particularly as efforts by foreign actors to compromise information security increase.
The officials were not authorized to discuss the matter publicly and spoke on condition of anonymity.
“There is no direct cyber threat to the department from this campaign, but information shared with a third party could be exposed if targeted individuals are compromised,” the cable said.
The FBI has warned in a public service announcement about a “malicious” campaign relying on text messages and AI-generated voice messages that purport to come from a senior U.S. official and that aim to dupe other government officials as well as the victim’s associates and contacts.
This is not the first time that Rubio has been impersonated in a deepfake. This spring, someone created a bogus video of him saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.
Several potential solutions to the growing misuse of AI for deception have been put forward in recent years, including criminal penalties and improved media literacy. Concerns about deepfakes have also led to a flood of new apps and AI systems designed to spot phonies that could easily fool a human.
The tech companies working on these systems are now in competition against those who would use AI to deceive, according to Siwei Lyu, a professor and computer scientist at the University at Buffalo. He said he’s seen an increase in the number of deepfakes portraying celebrities, politicians and business leaders as the technology improves.
Just a few years ago, fakes contained easy-to-spot flaws — inhuman voices or mistakes such as extra fingers — but now the AI is so good that fakes are much harder for a human to spot, giving deepfake makers an advantage.
“The level of realism and quality is increasing,” Lyu said. “It’s an arms race, and right now the generators are getting the upper hand.”
The Rubio hoax comes after text messages and phone calls went to elected officials, business executives and other prominent figures from someone who seemed to have gained access to the contacts in Wiles’ personal cellphone, the Wall Street Journal reported in May.
Some of those who received calls heard a voice that sounded like Wiles’, which may have been generated by AI, according to the newspaper. The messages and calls were not coming from Wiles’ number, the report said. The government was investigating.
Lee writes for the Associated Press. AP writers Eric Tucker and David Klepper contributed to this report.
Hollywood’s relationship with artificial intelligence is fraught, as studios balance the need to cut costs with growing concerns from actors, directors and crew members. But in China, the use of AI in entertainment is taking a more no-holds-barred approach.
The China Film Foundation, a nonprofit fund under the Chinese government, plans to use AI to revitalize 100 kung fu classics including “Police Story,” “Once Upon a Time in China” and “Fist of Fury,” featuring Jackie Chan, Jet Li and Bruce Lee, respectively. The foundation said it will partner with businesses including Shanghai Canxing Culture & Media Co., which will license 100 Hong Kong films to AI companies to reintroduce those movies to younger audiences globally.
Chow Yun-fat stars in director John Woo’s “A Better Tomorrow” in 1986.
(Cinema City)
The foundation said there are opportunities to use AI to tell those stories through animation, for example. There are plans to release an animated version of director John Woo’s 1986 film “A Better Tomorrow” that uses AI to “reinterpret” Woo’s “signature visual language,” according to an English transcript of the announcement.
“By empowering cultural storytelling with technology, we can breathe new life into the classics and tell China’s stories farther and louder,” said Zhang Pimin, chairman of the China Film Foundation, at the Shanghai International Film Festival earlier this month.
The project raised eyebrows among U.S. artists, many of whom are deeply wary of the use of AI in creative pursuits.
The Directors Guild of America said AI is a creative tool that should only be used to enhance the creative storytelling process and “it should never be used retroactively to distort or destroy a filmmaker’s artistic work.”
“The DGA strongly opposes the use of AI or any other technology to mutilate a film or to alter a director’s vision,” the DGA said in a statement. “The Guild has a longstanding history of opposing such alterations on issues like colorization or sanitization of films to eliminate so-called ‘objectionable content,’ or other changes that fundamentally alter a film’s original style, meaning, and substance.”
The project highlights widely divergent views on AI’s potential to reshape entertainment as the two countries compete for dominance in the highly competitive AI space. In the U.S., much of the traditional entertainment industry has taken a tepid view of generative AI, due to concerns over protecting intellectual property and labor relations.
While some Hollywood studios such as Lionsgate and Blumhouse have collaborated with AI companies, others have been reluctant to announce partnerships at the risk of offending talent that have voiced concerns over how AI could be used to alter their digital likeness without adequate compensation.
But other countries like China have fewer guardrails, which has led to more experimentation with the technology by entertainment companies.
Many people in China embrace AI, with 83% feeling confident that AI systems are designed to act in the best interest of society, far higher than in the U.S., where the figure is 37%, according to a survey from the United Nations Development Program.
The foundation’s announcement came as a surprise to Bruce Lee Enterprises, which oversees legal usage of Lee’s likeness in creative works.
Bruce Lee’s family was “previously unaware of this development and is currently gathering information,” a spokesperson said.
Woo, in a written statement, said he hadn’t heard from the foundation about the AI remake, noting that the rights to “A Better Tomorrow” have changed hands several times.
“I wasn’t really involved in the project because I’m not very familiar with AI technology,” Woo said in a statement to The Times. “However, I’m very curious about the outcome and the effect it might have on my original film.”
David Chi, who represents the China Film Foundation’s Special Fund for Film and Urban Development, said in an interview that Chan is aware of the project and he has plans to talk with Chan’s team. A representative of Chan’s did not respond to a request for comment.
“We do need to talk … very specifically how we‘re using animated or AI existing technology, and how that would combine with his image rights and business rights,” Chi said. Chi did not have an immediate response to the DGA, Bruce Lee Enterprises and Woo’s statements.
AI is already used in China for script development, content moderation and recommendations and translation. In postproduction, AI has reduced the time to complete visual effects work from days to hours, said He Tao, an official with the National Radio and Television Administration’s research center, during remarks at the festival.
“Across government agencies, content platforms, and production institutions, the enthusiasm to adopt and integrate AI has never been stronger,” He said.
During the project’s announcement, supporters touted the opportunity AI will bring to China to further its cultural message globally and generate new work for creatives. At the same time, they highlighted AI’s disruption of the filmmaking process, saying the “A Better Tomorrow” remake was completed with just 30 people, significantly fewer than a typical animated project would require.
China is a “more brutal society in that sense,” said Eric Harwit, professor of Asian studies at the University of Hawaii at Manoa. “If somebody loses their job because artificial intelligence is taking over, well, that’s just the cost of China’s moving forward. They don’t have that kind of regret about people losing jobs and there are less opportunities for organized protest against the Chinese government.”
A scene from the movie “Once Upon A Time In China.”
(Golden Harvest)
Hollywood guilds such as SAG-AFTRA have been outspoken about the harm AI could do to jobs and have fought for protections against AI in contracts covering TV shows, films and video games. The unions have also pushed state and federal legislators to create laws that would give people more protections against deepfakes, or videos manipulated to show a person endorsing an idea or product that they don’t actually support. There is no equivalent of that in China.
“You don’t have those freestanding labor organizations, so they don’t have that kind of clout to protest against the Chinese using artificial intelligence in a way that might reduce their job opportunities or lead to layoffs in the sector,” Harwit added.
U.S. studios are also going to court to challenge the ways AI companies train their models on copyrighted materials. Earlier this month, Walt Disney Co. and Universal Pictures sued AI startup Midjourney, alleging it uses technology to generate images that copy the studios’ famous characters, including Yoda and Shrek.
In China, officials involved in the project to remaster kung fu films said they were eager to work with AI companies. They said AI will be used to add “stunning realism” to the movies. They are planning to build “immersive viewing experiences,” such as walking into a bamboo forest duel and “feeling the philosophy of movement and stillness.” In areas such as animation, new environments could be created with AI, Chi said.
“We are offering full access to our IP, platform, and adaptation rights to partners worldwide — with the goal of delivering richer, more diverse, and high-quality AI enhanced film works to global audiences,” said Tian Ming, chairman of Shanghai Canxing Culture & Media Co., in remarks earlier this month. Tian said there is no revenue-sharing cap and that the company is allocating about $14 million to co-invest in selected projects and share in the returns.
The kung fu revitalization efforts will extend into other areas, including the creation of a martial arts video game.
Industry observers said China is wise to go back to its deep well of popular Hong Kong martial arts classics, which have inspired U.S. action movies for decades.
There’s also not as much risk involved for China, said Simon Pulman, a partner at law firm Pryor Cashman.
“They’ve got very little to lose by doing this,” Pulman said. “If it can potentially enhance the value of those movies, there’s very little downside for them.”
China’s film industry has grown significantly compared to decades ago, boosted by the proliferation of movie theaters, including Imax screens, in the country.
In the past, China’s box office relied heavily on U.S. productions like movies from the “Fast & Furious” and Marvel franchises, but now local movies dominate the market. The Chinese animated movie “Ne Zha 2” grossed $2.2 billion at the box office globally.
But those Chinese productions generally don’t draw large U.S. audiences when they’re released in the States. The classic martial arts movies, however, have a global following and enduring legacy.
“People love martial arts movies, because action travels,” said Paul Dergarabedian, senior media analyst at Comscore. “It doesn’t matter what language it’s in, if you have a great action sequence and great fighting sequences.”