Taiwan has brushed off China’s protest over a meeting between its representative and Japanese Prime Minister Sanae Takaichi at last weekend’s APEC summit, calling the encounter “very normal.” Beijing lodged a formal complaint with Tokyo after Takaichi met Taiwan’s APEC representative Lin Hsin-i on the sidelines of the summit in South Korea.
Takaichi had posted about the meeting on her X account, describing Lin as a senior adviser to the presidential office, a remark that drew Beijing’s ire, as China claims Taiwan as part of its territory. Lin, a former economy minister, told reporters in Taipei that all APEC delegations “participated on an equal footing” and that such meetings were routine.
Why It Matters
The exchange underscores Taiwan’s determination to engage internationally despite China’s diplomatic pressure. APEC is one of the few global platforms where Taiwan participates, though its presidents are barred from attending. The meeting also signals Japan’s willingness to maintain contact with Taiwan amid growing regional tensions.
Taiwan: Reiterates its right to equal participation and rejects Beijing’s sovereignty claims.
China: Continues to oppose any official or symbolic recognition of Taiwan by other governments.
Japan: Balances unofficial ties with Taiwan while seeking stable relations with Beijing.
United States: Watches closely as Tokyo and Taipei deepen cooperation, given its own security interests in the region.
What’s Next
Beijing’s protest is unlikely to derail Japan-Taiwan engagement, but it could add friction to China-Japan ties already strained over regional security. With Prime Minister Takaichi’s past remarks about forming a “quasi-security alliance” with Taiwan, any future interactions between Tokyo and Taipei at multilateral events will be closely monitored by both Beijing and Washington.
In the working-class city of Commerce, where cars speed past on highways and the Citadel Outlets tower over neighborhoods, there is a steakhouse named Stevens. By day, it’s a classic and charming old restaurant where working people go for quiet, hearty meals.
But every Sunday night, the outside world disappears.
As waiters whisk about in starched button-ups, couples lead each other by the hand toward the dance floor in the restaurant’s ballroom, where Stevens’ tradition of Salsa Sundays has been bringing the community together for 73 years.
At 7 p.m. every Sunday, beginner lessons start at Stevens Steakhouse.
(Emil Ravelo / For The Times)
An eight-piece band plays brass, electric guitar, bongos and timbales, filling the room with music as dancers twirl in a dizzying array. One attendee, 29-year-old Amy Hernandez, greets a few familiar faces before she steps onto the dance floor, spinning in confident steps with a wide smile on her face.
Hernandez is part of a revival that’s been getting younger people excited about salsa music — and flocking to Stevens. She grew up watching her father dance salsa, but started diving back into the genre on her own to find comfort during the L.A. wildfires earlier this year. She credits Bad Bunny’s “Debí Tirar Más Fotos” for re-sparking her interest.
“It was very healing for me,” she says of the album, which blends old-school boricua samples with Latin dance and reggaeton influences for an emotional imagining of Puerto Rican identity.
For decades, Stevens has brought friends, couples, and families together for live music and dance.
(Emil Ravelo/For The Times)
When college friends recommended Stevens as an affordable place to dance, Hernandez mentioned it in passing to her dad. “He laughed and said, ‘I remember that place. I used to dance there too,’” Hernandez says.
The increasingly mainstream artists of reggaeton, the Latin fusion genre, are returning to tradition. Along with the music of Bad Bunny, who’s headlining the upcoming Super Bowl halftime show, you can find classic salsa references in reggaeton star Rauw Alejandro’s latest album “Cosa Nuestra,” and in Colombian pop star Karol G’s multi-genre summer album “Tropicoqueta,” which will be at the center of her headlining Coachella set.
“You can feel the younger energy,” says longtime Stevens salsa instructor Jennifer Aguirre. “It makes me really happy to see a younger generation take on salsa. Because I was worried for a bit. I didn’t know how salsa is going to continue.”
Los Angeles has a unique relationship with salsa, the Afro-Caribbean dance born from Cuban mambo. In cities like Miami and New York, salsa arrived with Cuban and Puerto Rican immigrants. By contrast, L.A.’s salsa influence came from Golden Age Hollywood, where Latin dance in movies produced a singular, flashier Angeleno style, characterized by quick turns and theatrical movement, according to salsa historian Juliet McMains.
The 1990s were another high for the genre, when West Coast pioneers like the Vazquez brothers and their first-of-its-kind dance team Salsa Brava sparked a local dance craze. The Vazquezes introduced the “on-1” step and innovated a flashier, dramatic style of salsa in L.A. that brought crowds to competitions and congresses through the 2000s. Legendary late promoter Albert Torres founded the L.A. Salsa Congress in 1999, the first congress on the West Coast, drawing a worldwide audience for Angeleno salsa.
Opened in 1952 by Steven Filipan (and located on Stevens Place), Stevens in Commerce became a local hub for Latin music. “The interesting part was that the area wasn’t Latin at all,” says Jim Filipan, Steven’s grandson and now the third-generation owner of the restaurant. “My grandfather had a foresight that this genre would be the future.”
Jim recalls his childhood growing up in the restaurant. “We would have hundreds of people on Sundays,” he says. “The ballroom, the restaurant, everyone was dancing salsa, and it was incredible. My dad took over in the ‘70s, and I was running it with him in the ‘90s.”
Yet by the 2010s it was apparent that another genre was taking hold of the Latin dance scene: bachata, ushered in by smooth-singing New York stars like Prince Royce and Romeo Santos. Salsa quickly went from being considered hip to rather old-fashioned.
During a Stevens dance lesson, guests learn how to spin on the dance floor.
(Emil Ravelo / For The Times)
Aguirre witnessed interest in the genre fade firsthand. “It was like an immediate switch,” Aguirre says. “Salsa just wasn’t as popular anymore, and people would walk over to the other side of the restaurant to take the bachata lessons.”
The pandemic also dealt a large blow to local salsa clubs, as long-standing dance venues succumbed to lower attendance and rising rents. And in the last year, two historic venues, the Conga Room and the Mayan, closed permanently.
Stevens almost met the same fate. The financial burden of the pandemic made Jim consider closing for good. But he couldn’t ignore the responsibility of his family’s legacy and the special place Stevens holds for local dancers.
“It’s very emotional for me because I have four generations in this restaurant, and now my daughter works here,” he says.
When Stevens reopened, the community came back in droves, ushering in a new era of excitement for salsa.
These days, at the beginning of every class, dance instructor Miguel “Miguelito” Aguirre announces the same rule.
“Forget about what happened today, forget about your week, forget about all the bad stuff. Leave it at the door,” Aguirre says. “It’s going to be better because we’re going to dance salsa.”
Dance instructor Miguel Aguirre, right, mans the DJ booth alongside DJ Pechanga, another longtime employee of Stevens. Every weekend, the duo brings Latin music to the forefront of the space.
(Emil Ravelo/For The Times)
Aguirre has taught salsa at Stevens for 30 years. In many ways, the steakhouse has shaped his life. It’s where he discovered his love for teaching dance and much more.
“I started coming here in the ‘90s, sneaking in through the back door. I was a teenager, so not old enough to show my ID, but one day, Jim just said, ‘You guys cannot come in through the back anymore. You can come into the front,’” Aguirre says. “And then one day he said, ‘Hey, we are missing the instructors. They’re not coming in. Can you guys teach the class?’ And, I’m still here.”
Jennifer Aguirre, a fellow dance teacher at Stevens, is his wife. She met him one day at Stevens’ annual Halloween party.
“He asked me to join his class because they ‘needed more girls,’” Jennifer says, laughing.
Now Jennifer teaches the beginner’s class, while Miguel is on intermediate. But once 10 p.m. hits, it’s social dancing time. The whole floor comes together and a familiar community converges. If attendees are lucky, they might catch Jennifer and Miguel, a smooth-dancing duo, letting loose, stepping and dipping effortlessly.
On a recent Sunday night, the low-lit ambience of the restaurant met the purple lights of the dance room, with people sitting all around for a peek at the moves on display. Buttery steaks and potatoes cooking in the kitchen scented the air as the dance floor came alive with women spinning in dresses and men in shining shoes gliding to the rhythm of the music. Miguel Aguirre manned the DJ stand, asking two singles if they knew each other and encouraging them to dance.
Gregorio Sines was one of the solo dancers on the floor, swaying with partners easily under Miguel’s encouragement. Years ago, a friend who frequented Stevens dragged him out to dance socials, telling him it would be the best way to meet people and open up.
Once anxious about dancing in front of others, Sines now performs in Stevens’ dance showcases. He says consistently returning to the steakhouse’s historic floor and immersing himself in the supportive community not only changed his dance game, but brought him out of his shell.
“I tell anyone, if you’re scared to dance, you just have to get out there,” Sines says. “There’s a community waiting for you.”
You may not know Eliot Mack’s name, but if a small robot has ever crept around your kitchen, you know his work.
Before he turned his MIT-trained mind to filmmaking, Mack helped lead a small team of engineers trying to solve a deeply relatable problem: how to avoid vacuuming. Whether it was figuring out how to get around furniture legs or unclog the brushes after a run-in with long hair, Mack designed everything onscreen first with software, troubleshooting virtually and getting 80% of the way there before a single part was ever manufactured.
When Mack pivoted to filmmaking in the early 2000s, he was struck by how chaotic Hollywood’s process felt. “You pitch the script, get the green light and you’re flying into production,” he says, sounding both amused and baffled. “There’s no CAD template, no centralized database. I was like, how do movies even get made?”
That question sent Mack down a new path, trading dust bunnies for the creative bottlenecks that slow Hollywood down.
In 2004 he founded Lightcraft Technology, a startup developing what would later be known as virtual production tools, born out of his belief that if you could design a robot in software, you should be able to design a shot the same way. The company’s early system, Previzion, sold for $180,000 and was used on sci-fi and fantasy shows like “V” and “Once Upon a Time.” But Jetset, its latest AI-assisted tool set, runs on an iPhone and offers a free tier, with pro features topping out at just $80 a month. It lets filmmakers scan a location, drop it into virtual space and block out scenes with camera moves, lighting and characters. They can preview shots, overlay elements and organize footage for editing — all from a phone. No soundstage, no big crew, no gatekeepers. Lightcraft’s pitch: “a movie studio in your pocket.”
The goal, Mack says, is to put more power in the hands of the people making the work. “One of the big problems is how siloed Hollywood is,” he says. “We talked to an Oscar-winning editor who said, ‘I’m never going to get to make my movie’ — he was pigeonholed as just an editor. Same with an animator we know who has two Oscars.”
Eliot Mack, CEO of Lightcraft, an AI-powered virtual-production startup, wants to give creators the power and freedom to bring their ideas to life.
(Christina House/Los Angeles Times)
To Mack, the revolution of Jetset recalls the scrappy, guerrilla spirit of Roger Corman’s low-budget productions, which launched the early careers of directors like Francis Ford Coppola and Martin Scorsese. For generations of creatives stuck waiting on permission or funding, he sees this moment as a reset button.
“The things you got good at — writing, directing, acting, creating, storytelling — they’re still crazy useful,” he says. “What’s changing is the amount of schlepping you have to do before you get to do the fun stuff. Your 20s are a gift. You want to be creating at the absolute speed of sound. We’re trying to get to a place where you don’t have to ask anyone. You can just make the thing.”
AI is reshaping nearly every part of the filmmaking pipeline. Storyboards can now be generated from a script draft. Lighting and camera angles can be tested before anyone touches a piece of gear. Rough cuts, placeholder VFX, even digital costume mock-ups can all be created before the first shot is filmed. What once took a full crew, a soundstage and a six-figure budget can now happen in minutes, sometimes at the hands of a single person with a laptop.
This wave of automation is arriving just as Hollywood is gripped by existential anxiety. The 2023 writers’ and actors’ strikes brought the industry to a standstill and put AI at the center of a fight over its future. Since then, production has slowed, crew sizes have shrunk and the streaming boom has given way to consolidation and cost-cutting.
According to FilmLA, on-location filming in Greater Los Angeles dropped 22.4% in early 2025 compared with the year before. For many of the crew members and craftspeople still competing for those jobs, AI doesn’t feel like an innovation. It feels like a new way to justify doing more with less, only to end up with work that’s less original or creative.
“AI scrapes everything we artists have made off the internet and creates a completely static, banal world that can never imagine anything that hasn’t happened before,” documentary filmmaker Adam Curtis warned during a directors panel at the 2023 Telluride Film Festival, held in the midst of the strikes. “That’s the real weakness of the AI dream — it’s stuck with the ghosts. And I think we’ll get fed up with that.”
How you feel about these changes often depends on where you sit and how far along you are in your career. For people just starting out, AI can offer a way to experiment, move faster and bypass the usual barriers to entry. For veterans behind the scenes, it often feels like a threat to the expertise they’ve spent decades honing.
Past technological shifts — the arrival of sound, the rise of digital cameras, the advancement of CGI — changed how movies were made, but not necessarily who made them. Each wave brought new roles: boom operators and dialogue coaches, color consultants and digital compositors. Innovation usually meant more jobs, not fewer.
But AI doesn’t just change the tools. It threatens to erase the people who once used the old ones.
Diego Mariscal has seen firsthand how AI has cut potential jobs for grips.
(Jennifer Rose Clasen)
Diego Mariscal, 43, a veteran dolly grip who has worked on “The Mandalorian” and “Spider-Man: No Way Home,” saw the writing on the wall during a recent shoot. A visual effects supervisor opened his laptop to show off a reel of high-end commercials and something was missing. “There were no blue screens — none,” Mariscal recalls. “That’s what we do. We put up blues as grips. You’d normally hire an extra 10 people and have an extra three days of pre-rigging, setting up all these blue screens. He was like, ‘We don’t need it anymore. I just use AI to clip it out.’”
Mariscal runs Crew Stories, a private Facebook group with nearly 100,000 members, where working crew members share job leads, trade tips and voice their growing fears. He tries to keep up with the steady drip of AI news. “I read about AI all day, every day,” he says. “At least 20 posts a day.”
His fear isn’t just about fewer jobs — it’s about what comes next. “I’ve been doing this since I was 19,” Mariscal says of his specialized dolly work, which involves setting up heavy equipment and guiding the camera smoothly through complex shots. “I can push a cart in a parking lot. I can push a lawnmower. What else can I do?”
Who wins, who loses and what does James Cameron think?
Before AI and digital doubles, Mike Marino learned the craft of transformation the human way: through hands-on work and a fascination that bordered on obsession.
Marino was 5 years old when he first saw “The Elephant Man” on HBO. Horrified yet transfixed, he became fixated on prosthetics and the emotional power they could carry. As a teenager in New York, he pored over issues of Fangoria, studied monsters and makeup effects and experimented with sculpting his own latex masks on his bedroom floor.
Prosthetics artist Mike Marino asks a big question about generative AI: What role do human creatives play?
(Sean Dougherty / For The Times)
Decades later, Marino, 48, has become one of Hollywood’s leading makeup artists, earning Oscar nominations for “Coming 2 America,” “The Batman” and last year’s dark comedy “A Different Man,” in which he helped transform Sebastian Stan into a disfigured actor.
His is the kind of tactile, handcrafted work that once seemed irreplaceable. But today AI tools are increasingly capable of achieving similar effects digitally: de-aging actors, altering faces, even generating entire performances. What used to take weeks of experimentation and hours in a makeup trailer can now be approximated with a few prompts and a trained model. To Marino, AI is more than a new set of tools. It’s a fundamental change in what it means to create.
“If AI is so good it can replace a human, then why have any human beings?” he says. “This is about taste. It’s about choice. I’m a human being. I’m an artist. I have my own ideas — mine. Just because you can make 10,000 spaceships in a movie, should you?”
“If AI is so good it can replace a human, then why have any human beings?”
— Mike Marino, makeup artist on “A Different Man”
Marino is no technophobe. His team regularly uses 3D scanning and printing. But he draws the line at outsourcing creative judgment to a machine. “I’m hoping there are artists who want to work with humans and not machines,” he says. “If we let AI just run amok with no taste, no choice, no morality behind it, then we’re gone.”
Not everyone sees AI’s rise in film production as a zero-sum game. Some technologists imagine a middle path. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Lab and one of the world’s leading AI researchers, believes the future of filmmaking lies in a “human-machine partnership.”
AI, Rus argues, can take on time-consuming tasks like animating background extras, color correction or previsualizing effects, freeing up people to focus on what requires intuition and taste. “AI can help with the routine work,” she says. “But the human touch and emotional authenticity are essential.”
Few directors have spent more time grappling with the dangers and potential of artificial intelligence than James Cameron. Nearly 40 years before generative tools entered Hollywood’s workflow, he imagined a rogue AI triggering global apocalypse in 1984’s “The Terminator,” giving the world Skynet — now a cultural shorthand for the dark side of machine intelligence. Today, he continues to straddle that line, using AI behind the scenes on the upcoming “Avatar: Fire and Ash” to optimize visual effects and performance-capture, while keeping creative decisions in human hands. The latest sequel, due Dec. 19, promises to push the franchise’s spectacle and scale even further; a newly released trailer reveals volcanic eruptions, aerial battles and a new clan of Na’vi.
A scene from “Avatar: The Way of Water.” Director James Cameron differentiates between machine learning used to reduce monotonous moviemaking work and generative AI.
(Courtesy of 20th Century Studios)
“You can automate a lot of processes that right now tie up a lot of artists doing mundane tasks,” Cameron told The Times in 2023 at a Beyond Fest screening of his 1989 film “The Abyss.” “So if we could accelerate the postproduction pipeline, then we can make more movies. Then those artists will get to do more exciting things.”
For Cameron, the promise of AI lies in efficiency, not elimination. “I think in our particular industry, it’s not going to replace people; it’s going to free them to do other things,” he believes. “It’s going to accelerate the process and bring the price down, which would be good because, you know, some movies are a little more expensive than others. And a lot of that has to do with human energy.”
Cameron himself directed five films between 1984 and 1994 and only three in the three decades since, though each one has grown increasingly complex and ambitious.
That said, Cameron has never been one to chase shortcuts for their own sake. “I think you can make pre-viz and design easier, but I don’t know if it makes it better,” he says. “I mean, if easy is your thing. Easy has never been my thing.”
He draws a line between the machine-learning techniques his team has used since the first “Avatar” to help automate tedious tasks and the newer wave of generative AI models making headlines today.
“The big explosion has been around image-based generative models that use everything from every image that’s ever been created,” he says. “We’d never use any of them. The images we make are computer-created, but they’re not AI-created.”
In his view, nothing synthetic can replace the instincts of a flesh-and-blood artist. “We have human artists that do all the designs,” he says. “We don’t need AI. We’ve got meat-I. And I’m one of the meat-artists that come up with all that stuff. We don’t need a computer. Maybe other people need it. We don’t.”
Reshaping creativity — and creative labor
Rick Carter didn’t go looking for AI as a tool. He discovered it as a lifeline.
The two-time Oscar-winning production designer, who worked with Cameron on “Avatar” and whose credits include “Jurassic Park” and “Forrest Gump,” began experimenting with generative AI tools like Midjourney and Runway during the pandemic, looking for a way to keep his creative instincts sharp while the industry was on pause. A longtime painter, he was drawn to the freedom the programs offered.
“I saw that there was an opportunity to create images where I didn’t have to go to anybody else for approval, which is the way I would paint,” Carter says by phone from Paris. “None of the gatekeeping would matter. I have a whole lot of stories on my own that I’ve tried to get into the world in various ways and suddenly there was a way to visualize them.”
Midjourney and Runway can create richly detailed images — and in Runway’s case, short video clips — from a text prompt or a combination of text and visuals. Trained on billions of images and audiovisual materials scraped from the internet, these systems learn to mimic style, lighting, composition and form, often with eerie precision. In a production pipeline, these tools can help concept artists visualize characters or sets, let directors generate shot ideas or give costume designers and makeup artists a fast way to test looks, long before physical production begins.
But as these tools gain traction in Hollywood, a deeper legal and creative dilemma is coming into focus: Who owns the work they produce? And what about the copyrighted material used to train them?
In June, Disney and Universal filed a federal copyright lawsuit against Midjourney, accusing the company of generating unauthorized replicas of characters such as Spider-Man, Darth Vader and Shrek using AI models trained on copyrighted material: what the suit calls a “bottomless pit of plagiarism.” It’s the most high-profile of several legal challenges now putting copyright law to the test in the age of generative AI.
“Forrest Gump” director Robert Zemeckis, left, with production designer Rick Carter at an art installation of the movie’s famed bench. (Carter family)
Working with generative models, Carter began crafting what he calls “riffs of consciousness,” embracing AI as a kind of collaborative partner, one he could play off of intuitively. The process reminded him of the loose, improvisational early stages of filmmaking, a space he knows well from decades of working with directors like Robert Zemeckis and Steven Spielberg.
“I’ll just start with a visual or a word prompt and see how it iterates from there and what it triggers in my mind,” Carter says. “Then I incorporate that so it builds on its own in an almost free-associative way. But it’s still based upon my own intuitive, emotional, artistic, even spiritual needs at that moment.”
He describes the experience as a dialogue between two minds, one digital and one human: “One AI is artificial intelligence. The other AI is authentic intelligence — that’s us. We’ve earned it over this whole span of time on the planet.”
Sometimes, Carter says, the most evocative results come from mistakes. While sketching out a story about a hippie detective searching for a missing woman in the Himalayas, he accidentally typed “womb” into ChatGPT instead of “woman.” The AI ran with it, returning three pages of wild plot ideas involving gurus, seekers and a bizarre mystery set in motion by the disappearance.
“I couldn’t believe it,” he says. “I would never have taken it that far. The AI is so precocious. It is trying so much to please that it will literally make something out of the mistake you make.”
Carter hasn’t used generative AI on a film yet; most of his creations are shared only with friends. But he says the technology is already slipping into creative workflows in covert ways. “There are issues with copyrights with most of the studios so for now, it’s going to be mostly underground,” he says. “People will use it but they won’t acknowledge that they’re using it — they’ll have an illustrator do something over it, or take a photo so there’s no digital trail.”
Carter has lived through a major technological shift before. “I remember when we went from analog to digital, from ‘Jurassic Park’ on,” he says. “There were a lot of wonderful artists who could draw and paint in ways that were just fantastic but they couldn’t adapt. They didn’t want to — even the idea of it felt like the wrong way to make art. And, of course, most of them suffered because they didn’t make it from the Rolodex to the database in terms of people calling them up.”
He worries that some artists may approach the technology with a rigid sense of authorship. “Early on, I found that the less I used my own ego as a barometer for whether something was artistic, the more I leaned into the process of collaboratively making something bigger than the sum of its parts — and the bigger and better the movies became.”
Others, like storyboard artist Sam Tung, are bracing against the same wave with a quiet but unshakable defiance.
Tung, whose credits include “Twisters” and Christopher Nolan’s upcoming adaptation of “The Odyssey,” has spent the last two years tracking the rise of generative tools, not just their capabilities but their implications. As co-chair of the Animation Guild’s AI Committee, he has been on the front lines of conversations about how these technologies could reshape creative labor.
To artists like Tung, the rise of generative tools feels deeply personal. “If you are an illustrator or a writer or whatever, you had to give up other things to take time to develop those skills,” he says. “Nobody comes out of the womb being able to draw or write or act. Anybody who does that professionally spent years honing those skills.”
“Anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”
— Sam Tung, storyboard artist on “The Odyssey”
Tung has no interest in handing that over to a machine. “It’s not that I’m scared of it — I just don’t need it,” he says. “If I want to draw something or paint something, I’ll do it myself. That way it’s exactly what I want and I actually enjoy the process. When people tell me they responded to a drawing I did or a short film I made with friends, it feels great. But anything I’ve made with AI, I’ve quickly forgotten about. There’s basically nothing I get from putting it on social media, other than the ire of my peers.”
What unsettles him isn’t just the slickness of AI’s output but how that polish is being used to justify smaller crews and faster turnarounds. “If this is left unchecked, it’s very easy to imagine a worst-case scenario where team sizes and contract durations shrink,” Tung says. “A producer who barely understands how it works might say, ‘Don’t you have AI to do 70% of this? Why do you need a whole week to turn around a sequence? Just press the button that says: MAKE MOVIE.’ ”
At 73, Carter isn’t chasing jobs. His legacy is secure. “If they don’t hire me again, that’s OK,” he says. “I’m not in that game anymore.” He grew up in Hollywood — his father was Jack Lemmon’s longtime publicist and producing partner — and has spent his life watching the industry evolve. Now, he’s witnessing a reckoning unlike any he, or anyone else, has ever imagined.
“I do have concerns about who is developing AI and what their values are,” he says. “What they use all this for is not necessarily something I would approve of — politically, socially, emotionally. But I don’t think I’m in a position to approve or not.”
Earlier this year, the Palisades fire destroyed Carter’s home, taking with it years of paintings and personal artwork. AI, he says, has given him a way to keep creating through the upheaval. “It saved me through the pandemic, and now it’s saving me through the fire,” he says, as if daring the universe to test him again. “It’s like, go ahead, throw something else at me.”
‘Prompt and pray?’ Not so fast
Many in the industry may still be dipping a toe into the waters of AI. Verena Puhm dove in.
The Austrian-born filmmaker studied acting and directing in Munich and Salzburg before moving to Los Angeles, where she built a globe-spanning career producing, writing and developing content for international networks and streamers. Her credits range from CNN’s docuseries “History of the Sitcom” to the German reboot of the cult anthology “Beyond Belief: Fact or Fiction” and a naval documentary available on Tubi. More recently, she has channeled that same creative range into a deepening exploration of generative tools.
Puhm first began dabbling with AI while using Midjourney to design a pitch deck, but it wasn’t until she entered a timed generative AI filmmaking challenge at the 2024 AI on the Lot conference — informally dubbed a “gen battle” — that the creative potential of the medium hit her.
“In two hours, I made a little mock commercial,” she remembers, proudly. “It was actually pretty well received and fun. And I was like, Oh, wow, I did this in two hours. What could I do in two days or two weeks?”
What started as experimentation soon became a second act. This summer, Puhm was named head of studio for Dream Lab LA, a new creative arm of Luma AI, which develops generative video tools for filmmakers and creators. There, she’s helping shape new storytelling formats and supporting emerging creators working at the intersection of cinema and technology. She may not be a household name, but in the world of experimental storytelling, she’s fast becoming a key figure.
Verena Puhm, a director, writer and producer who has used generative AI in a number of her projects, says it’s breaking down barriers to entry.
(Jason Armond/Los Angeles Times)
Some critics dismiss AI filmmaking as little more than “prompt and pray”: typing in a few words and hoping something usable comes out. Puhm bristles at the phrase.
“Anybody that says that tells me they’ve never tried it at all, because it is not that easy and simple,” she says. “You can buy a paintbrush at Home Depot for, what, $2? That doesn’t make you a painter. When smartphones first came out, there was a lot of content being made but that didn’t mean everyone was a filmmaker.”
What excites her most is how AI is breaking down the barriers that once kept ambitious ideas out of reach. Luma’s new Modify Video tool lets filmmakers tweak footage after it’s shot, changing wardrobe, aging a character, shifting the time of day, all without reshoots or traditional VFX. It can turn a garage into a spaceship, swap a cloudy sky for the aurora borealis or morph an actor into a six-eyed alien, no green screen required.
“It’s such a relief as an artist,” Puhm says. “If there’s a project I’ve been sitting on for six years because I didn’t have a $5 million budget — suddenly there’s no limit. I remember shopping projects around and being told by producers, ‘This scene has to go, that has to go,’ just to keep the budget low. Now everything is open.”
That sense of access resonates far beyond Los Angeles. At a panel during AI on the Lot, “Blue Beetle” director Ángel Manuel Soto reflected on how transformative AI might have been when he was first starting out. “I wish tools like this existed when I wanted to make movies in Puerto Rico, because nobody would lend me a camera,” he said. “Access to equipment is a privilege we sometimes take for granted. I see this helping kids like me from the projects tell stories without going bankrupt — or stealing, which I don’t condone.”
Puhm welcomes criticism of AI but only when it’s informed. “If you hate AI and you’ve actually tested the tools and educated yourself, I’ll be your biggest supporter,” she says. “But if you’re just speaking out of fear, with no understanding, then what are you even basing your opinion on?”
She understands why some filmmakers feel rattled, especially those who, like her, grew up dreaming of seeing their work on the big screen. “I still want to make features and TV series — that’s what I set out to do,” she says. “I hope movie theaters don’t go away. But if the same story I want to tell reaches millions of people on a phone and they’re excited about it, will I really care that it wasn’t in a theater?”
“I just feel like we have to adapt to the reality of things,” she continues. “That might sometimes be uncomfortable, but there is so much opportunity if you lean in. Right now any filmmaker can suddenly tell a story at a high production value that they could have never done before, and that is beautiful and empowering.”
For many, embracing AI boils down to a simple choice: adapt or get cut from the frame.
Hal Watmough, a BAFTA-winning British editor with two decades of experience, first began experimenting with AI out of a mix of curiosity and dread. “I was scared,” he admits. “This thing was coming into the industry and threatening our jobs and was going to make us obsolete.” But once he started playing with tools like Midjourney and Runway, he quickly saw how they could not only speed up the process but allow him to rethink what his career could be.
For an editor used to working only with what he was given, the ability to generate footage on the fly, cut with it immediately and experiment endlessly without waiting on a crew or a shoot was a revelation. “It was still pretty janky at that stage, but I could see the potential,” he says. “It was kind of intoxicating. I started to think, I’d like to start making things that I haven’t seen before.”
After honing his skills with various AI tools, Watmough created a wistful, vibrant five-minute animated short called “LATE,” about an aging artist passing his wisdom to a young office worker. Over two weeks, he generated 2,181 images using AI, then curated and refined them frame by frame to shape the story.
Earlier this year, he submitted “LATE” to what was billed as the world’s first AI animation contest, hosted by Curious Refuge, an online education hub for creative technologists — and, to his delight, he won. The prize included $10,000, a pitch meeting with production company Promise Studios and, as an absurd bonus, his face printed on a potato. But for Watmough, the real reward was the sense that he had found a new creative identity.
“There’s something to the fact that the winner of the first AI animation competition was an editor,” Watmough says. “With the advent of AI, yes, you could call yourself a filmmaker but essentially I’d say most people are editors. You’re curating, selecting, picking what you like — relying on your taste.”
Thanks to AI, he says he’s made more personal passion projects in the past year and a half than during his entire previous career. “I’ll be walking or running and ideas just come. Now I can go home that night and try them,” he says. “None of that would exist without AI. So either something exists within AI or it never exists at all. And all the happiness and fulfillment that comes with it for the creator doesn’t exist either.”
Watmough hasn’t entirely lost his fear of what AI might do to the creative workforce, even as he is energized by what it makes possible. “A lot of people I speak to in film and TV are worried about losing their jobs and I’m not saying the infrastructure roles won’t radically change,” he says. “But I don’t think AI is going to replace that many — if any — creative people.”
What it will do, he says, is raise the bar. “If anyone can create anything, then average work will basically become extinct or pointless. AI can churn out remakes until the cows come home. You’ll have to pioneer to exist.”
He likens the current moment to the birth of cinema more than a century ago — specifically the Lumière brothers’ “Arrival of a Train at La Ciotat,” the 1896 short that famously startled early audiences. In the silent one-minute film, a steam train rumbles toward the camera, growing larger. Some viewers reportedly leaped from their seats, convinced it was about to crash into them.
“People ran out of the theater screaming,” Watmough says. “Now we don’t even think about it. With AI, we’re at that stage again. We’re watching the steam train come into the station and people are either really excited or they’re running out of the theater in fear. That’s where we are, right at the start. And the potential is limitless.”
Then again, he adds with a dry laugh, “I’m an eternal optimist, so take what I say with a grain of salt.”
THE UK’s biggest one-day Christmas market is to return in just a few weeks after it was abruptly cancelled last year.
Thousands are expected to attend the event in a historic market town.
Shoppers will be able to choose from 120 stalls set up in the centre of Beverley in Yorkshire next month, after council officials confirmed the event would return this year.
Stalls at the Beverley Festival of Christmas will once again be brimming with seasonal gifts and local treats.
There will also be live entertainment, music and dancing – and, of course, an appearance from Santa and his reindeer – as well as birds of prey on display, and traditional brass bands.
The event had to be cancelled at the last minute in 2024 after Storm Darragh caused chaos across the UK.
Locals and visitors were left disappointed after council officials took the decision to cancel the market in the run-up to the event, after 34 flood warnings were issued across the country and a cold weather health alert was put in place.
At the time, a spokesperson for East Riding of Yorkshire Council said the event, which was established in 1995 by local shop owner John Bird, had involved six months of planning and the decision to cancel “had not been taken lightly”.
Announcing its return for 2025, councillor Nick Coultish, cabinet member for culture, tourism and leisure at the council, said the event is “the flagship tourism event for our region and is an important date in the calendar for businesses and traders”.
‘Hugely popular’
He added: “It is always a hugely popular event, with visitors from all over the UK coming to enjoy the festive stalls and entertainment.
“The event will also provide a welcome boost to the local visitor economy at such a wonderful time of the year.”
The council said that Beverley’s historic centre will be “transformed into a magical Christmas shopper’s wonderland” for the event.
News of its return has prompted an outpouring of support and excitement with hundreds of locals and visitors commenting on Facebook.
One person posted: “I cannot wait to stand in this market. So excited.”
Another person wrote: “We will be there.”
A third person said: “Can’t wait. This is one of our fave Christmas markets.”
A fourth person was equally excited at the news the market is to return, posting: “Love Beverley. Nice place.”
A fifth said: “Will defo have to go!”
The festival will again be sponsored by The Beverley Arms, part of hospitality company Thwaites.
Rick Bailey, chairman at Thwaites, said: “We’re so proud that The Beverley Arms is once again part of The Beverley Festival of Christmas, it’s such a special celebration that brings the whole town together, showcases the very best of Beverley, and captures the true spirit of the season.”
This year’s event will run from 10am to 4pm on Sunday, 14 December.
Also making a return to Beverley this year is the Reindeer Rampage Christmas Trail, which is running from 1 November to 31 December.
This is a popular event in the town with locals and visitors asked to find Santa‘s reindeer, which are “hiding” in shop windows in Beverley.
You can pick up a free map and reindeer mask from the Tourist Information Centre on Cross Street, where more details are available.
The market itself will take place over several areas in the town, including Saturday Market, and will also feature a parade to mark the start of the festival.
The parade will leave from Wednesday Market when the event opens at 10am and it will move down Toll Gavel to Saturday Market.
The parade will feature Santa Claus, Rudolph and the rest of the reindeer, live music, comedy acts, steampunks, Bernese mountain dogs and much more.
Kelsie Stonya, from Southend-on-Sea, recently took on the ultimate ‘extreme day trip’ – travelling to Palma, Mallorca, and returning to the UK in the same day – all for just £143
Paige Oldfield and Ian Craig Social Newsdesk Content Editor
10:32, 03 Nov 2025
Kelsie Stonya went to Spain and back in one day(Image: Jam Press/@3kelsie)
Kelsie Stonya, from Southend-on-Sea, recently took on the ultimate “extreme day trip” – battling a storm while trying for a quick getaway. She hopped on a delayed plane to Palma, Mallorca, and returned to Britain in the same day – all for just £143.
The 25-year-old hit the beach, tucked into paella for one, and even ended up in the cockpit chatting to the pilot after a delayed flight. Her whirlwind day out proved a hit online too – with a TikTok video of her experience racking up 42,600 views from viewers loving the spontaneous adventure.
“It was so hot, so sunny – it was absolutely dreamy,” she told What’s The Jam. “The whole trip cost £143.08 including flights, transfers, food and everything – definitely cheaper than some nights out I’ve had before.
“By the end of it, with all the walking and being hot, I was so, so tired. I’d definitely do it again, but I’m keeping my fingers crossed for no delayed flights next time.”
The communications manager started her day at the airport, but things didn’t exactly go to plan when she arrived to find every flight delayed due to Storm Amy battering parts of Europe. Thankfully, her flight did eventually take off.
Once she landed, Kelsie jumped on a 20-minute bus into Palma’s city centre where she admired the cathedral, wandered the streets, and hunted down a restaurant. After lunch, she strolled to the beach to soak up the sun, calling the weather “absolutely dreamy”, before hitting the shops.
By the end of the day, exhausted but happy, she splurged on an Uber back to the airport – putting her Spanish skills to the test with the driver.
The return leg didn’t go smoothly either, with the flight delayed by two hours – but things took a turn for the better when the friendly pilot invited passengers into the cockpit for a peek.
Kelsie says she’d “absolutely do it again”, but is hoping for smoother skies next time.
She said: “I just had a really nice lunch and then headed over to the beach. I walked about 20 minutes and then sat down for a good hour. The thought of getting back on the bus to the airport was honestly upsetting me, so I just jumped in an Uber.”
Viewers were quick to praise Kelsie for making the most of her mini break. “That’s an impressive day Kels,” said one person.
Someone else said: “I do it all the time; great way to spend the day.”
Another viewer added: “I love doing these! So far I’ve done Belfast, Wroclaw, Alicante, Copenhagen and now have Barcelona booked for the end of this month.”
A TEENAGE boy was banned from boarding his flight because of a sticker on his passport.
Thirteen-year-old Alix Dawson was due to fly to Thailand with his family last month for two weeks.
A teenage boy was banned from his flight because of his passport. Credit: Kennedy News
A luggage sticker mark meant the airline didn’t accept the passport. Credit: Kennedy News
However, after arriving at Edinburgh Airport, his mum Meghan Law was told at the check-in desk that he wouldn’t be allowed to board the flight with his passport.
Meghan, who lives in Aberdeen, said: “We got to the airport and were checking in my bags when the [check-in staff member] looked at my passport then just walked away from the desk. She didn’t say anything.
“We were standing there for 20 minutes before I asked what’s going on. She came back and said that my passport was damaged.
“I said I’ve used this umpteen times. No one’s ever mentioned any damage on it before.
“There were no rips or stains, I don’t know what she was trying to imply. I was really shocked.
“What they were trying to say was that the luggage check-in stickers that had been stuck on one of the pages had damaged the page. But it wasn’t even on the photo page.
“There were no rips, it was just where the sticker marks had been. They said we couldn’t travel with it.
“I knew there were no issues with their passports. We’d probably travelled over a dozen times with them.”
She was then told that they would need to go to Glasgow Airport to get a new emergency passport.
Fearing for their £3,000 holiday, she contacted TUI, who they booked the trip with.
After sending photos of the reported ‘damage’, Meghan said the tour operator found no issues with the passport and put them on the next available flight to Thailand which was with Emirates rather than Qatar Airways.
The family were able to head on holiday with no further obstacles, albeit the next day, from a different airport.
Meghan said: “If I hadn’t booked through TUI and booked it myself, we just wouldn’t have been able to go on holiday.
“One way from Glasgow on the same day of travel would’ve been £2,800.
The family had to fly from Glasgow instead of Edinburgh, with TUI getting them on the next flight. Credit: Kennedy News
Mum Meghan has slammed the rules as she said they have never had problems flying with it before. Credit: Kennedy News
“We used it six times over the two-week holiday and no one said anything which confirms there were no issues with the passports.”
She said the stress “ruined the start of the trip” and she is calling for compensation.
Other passengers have been banned from their flights because their passports were deemed too damaged.
Monaghan GAA icon Eugene ‘Nudie’ Hughes has passed away at the age of 67.
In a distinguished career, Hughes won three Ulster SFC titles and a National Football League title, while he was selected as an All-Star three times.
Widely considered Monaghan’s greatest footballer, Hughes was diagnosed with cancer in 2018.
He became the first Monaghan player to win an All-Star in 1979 and repeated the feat in 1985 and 1989.
Hughes was one of only a small number of players to have been recognised in both defence and attack, his first All-Star coming as a corner-back and the others as a corner-forward.
Monaghan won the Ulster title in each of the years he received an All-Star.
Hughes was also part of the team that won the National Football League crown in 1985.
He picked up a Railway Cup medal in 1984 as part of Ulster’s winning side and also won two Ulster football titles with Castleblayney.
In 2024, Hughes was honoured by the Gaelic Players Association with a Lifetime Achievement award.
In an interview with BBC Sport NI’s ‘The GAA Social’ in February of this year, Hughes chose the Farney county’s Ulster Championship final triumph over Donegal in 1979 as his greatest day as a player.
Monaghan beat Donegal 1-15 to 0-11 at Clones on that occasion to end a 41-year wait for provincial glory.
Tehran, Iran – Iran is “not in a hurry” to resume talks with the United States over its nuclear programme, Tehran’s foreign minister has told Al Jazeera.
Iran remains prepared to engage in indirect negotiations with Washington if the US chooses to talk “from an equal position based on mutual interest”, Foreign Minister Abbas Araghchi told Al Jazeera Arabic in an interview at his office in Tehran that was broadcast on Sunday.
The official also asserted that a critical “shared understanding” regarding Israel is developing across the region.
Tehran’s top diplomat said conditions set by the US for talks to resume – which reportedly include an emphasis on direct negotiations, zero uranium enrichment, and limits on Iran’s missile stocks and its support for regional allies – are “illogical and unfair”.
That makes talks untenable, he suggested.
“It appears they are not in a hurry,” he remarked. “We are not in a hurry, either.”
Rather, the foreign minister said he believes regional dynamics are turning against Israel, the US’s closest ally in the Middle East.
“I sometimes tell my friends that Mr Netanyahu is a war criminal who has committed every atrocity, but did something positive in proving to the entire region that Israel is the main enemy, not Iran, and not any other country,” Araghchi said in reference to the Israeli prime minister.
The comments came two days after Oman’s chief diplomat, for the first time, publicly joined the chorus of disapproval aimed at Netanyahu and his hardline government.
“We have long known that Israel, not Iran, is the primary source of insecurity in the region,” Foreign Minister Badr bin Hamad al-Busaidi told the audience at the IISS Manama Dialogue 2025 regional forum.
He said over the years, the Gulf Cooperation Council (GCC) has “at best sat back and permitted the isolation of Iran”, a stance that he believes “needs to change”.
Oman has for years acted as a mediator between Iran and the US in nuclear, financial, prisoner exchange and other regional issues.
Tehran and Washington were slated to sit down for a sixth round of talks in mid-June, when Israel attacked Iran’s nuclear facilities. That launched a 12-day war that killed more than 1,000 people in Iran and inflicted billions of dollars in infrastructure damage.
After media reports last week said the administration of US President Donald Trump had sent a new message to Tehran via Oman, Iran’s government spokeswoman Fatemeh Mohajerani confirmed that messages had been received.
But she did not elaborate on the content or Iran’s potential response. The White House has not publicly confirmed sending the missive.
During his interview, Araghchi said “almost all” of the about 400kg (880lb) of 60-percent enriched uranium possessed by Iran is “buried under the rubble” of nuclear facilities bombed by the US and Israel.
“We have no intention of removing them from under the rubble until conditions are ready. We have no information on how much of the 400kg is untouched and how much is destroyed, and we will have no information until we dig them out,” he said.
The Iranian foreign minister pointed out that China and Russia have formally announced they do not recognise the UN sanctions recently reimposed against Iran by the European signatories to its 2015 nuclear deal with world powers.
France, the United Kingdom and Germany have signalled they want to restart talks with Tehran. However, no substantial progress has been made.
In the meantime, they have imposed sanctions and restrictions, both in relation to Iran’s alleged drone exports to Russia and its nuclear programme.
The three European powers in September announced they were suspending their bilateral air services agreements with Iran, affecting Iranian carriers like Iran Air.
Some of the flights appear to be gradually coming back, though, with Iranian state television airing footage of an Austrian Airlines flight landing in Tehran’s Imam Khomeini International Airport on Sunday night.
Germany’s Lufthansa is also scheduled to resume flights to Tehran, but the precise restart date has not been publicly announced.
Kevin the Carrot is back in the latest Aldi Christmas advert teaser – and this time, he has a big question for his beloved Katie
Aldi has dropped its first Christmas advert teaser – and its beloved mascot Kevin the Carrot is back for his tenth year.
The first of three clips shows Kevin waiting in the snow outside the house of his partner Katie. He is joined by adorable new character, Caulidog, which is a cauliflower dog.
With a diamond ring attached to its collar, Kevin and his veggie companion hold up a sign that reads: “To me, you are 24 carat.” He then lifts the sign to reveal another card that says: “Marry Christmas?”
The heartwarming moment mimics the iconic cue-card scene from Love Actually.
The new Aldi advert will appear on TV screens for the first time tonight from 8:15pm, but viewers will have to keep their eye out for the following two episodes launching over the festive season to find out Katie’s answer.
Julie Ashfield, Chief Commercial Officer at Aldi UK, said: “We know how much our customers adore Kevin, and this year, to mark his 10th anniversary, we wanted to bring an extra sprinkle of magic and romance to his story.
“This teaser sets the stage for a truly heartwarming Christmas, reminding us all that love is at the heart of the festive season. We can’t wait for everyone to join Kevin and Katie on their journey and see what other surprises we have in store.”
Last year saw Kevin trying to save the Spirit of Christmas from a group of evil villains called the humbugs.
Previous years have seen Kevin pay homage to the 1998 World Cup, visit William Conker’s fantastical Christmas factory and travel on a midnight train inspired by the Orient Express.
Kevin the Carrot merchandise such as festive decorations, plushies and plastic toys from Aldi has sold out multiple times over the years due to high demand.
It comes after Aldi revealed a list of 23 towns and cities where it wants to open new stores over the next two years as part of its £1.6billion expansion plan.
As well as new stores, the investment will go towards upgrading existing ones. Aldi will also look to improve its distribution network.
Aldi has 1,060 stores but wants to increase this number to 1,500 across the UK. It said its expansion would create thousands of jobs and more opportunities for British suppliers.
Giles Hurley, Chief Executive Officer for Aldi UK and Ireland, said: “Shoppers are still finding things difficult and that’s why we’re staying laser-focused on doing what Aldi does best – offering customers great quality products at unbeatable prices.
“Nobody else is making the same commitment to everyday low prices – no clubs, no gimmicks, no tricks – just prices our customers can trust, and quality they can depend on.”
AUSTIN, Texas — Denis Bouanga had two goals and an assist on Sunday night to help LAFC beat Austin FC 4-1 and sweep the best-of-three series in the first round of the MLS Cup playoffs.
LAFC, which won Game 1, 2-1, plays at second-seeded Vancouver in the one-game Western Conference semifinals.
Son Heung-min added a goal and an assist for No. 3 seed LAFC. Jeremy Ebobisse replaced Son in the 88th minute and capped the scoring in the third minute of stoppage time.
Son, on the counter-attack, hesitated to freeze defender Ilie Sánchez at the top of the area and then exploded toward the left end line and blasted a shot from the corner of the six-yard box inside the back post to open the scoring in the 21st. About four minutes later, Son fed Bouanga for a finish — the 30-year-old’s 100th goal across all competitions for LAFC — into a wide-open net to make it 2-0.
Bouanga cut inside to evade defender Brendan Hines-Ike — who fell to the ground — and then flicked a shot into the net from the left center of the area in the 44th minute.
Bouanga is the only active player — and is one of just nine in history — with at least 10 career goals in the MLS Cup playoffs.
LAFC’s Hugo Lloris — who was second in MLS with 12 shutouts in the regular season — had three saves, including a diving stop on a penalty kick by Myrto Uzuni in the 39th minute after a hand ball in the area by Bouanga.
Ryan Porteous was shown a yellow card for a foul in the area and Dani Pereira converted from the spot in the sixth minute of stoppage time to make it 3-1 at halftime.
CJ Fodrey appeared to have cut sixth-seeded Austin’s deficit to 3-2 in the 71st minute but an offside call nullified the would-be goal.
Forensic teams work at the scene at Huntington railway station where a London bound train stopped after several people were stabbed in Huntington, Britain, on Sunday, November 2, 2025. Photo by Tayfun Salci/EPA
Nov. 3 (UPI) — One of two suspects arrested at the scene of the weekend’s mass stabbing on a British train has been released, according to British authorities who continue to investigate.
Eleven people were injured in the Saturday evening attack on a train in Cambridgeshire, located about 37 miles north of London.
Two people — a 32-year-old man and a 35-year-old man — were apprehended at the scene.
In a statement Sunday night, British Transport Police said the 35-year-old man has been released, with no further action required.
“It was reported in good faith to officers responding to the incident that he was involved in the attack, and following enquiries we can confirm that he was not involved,” authorities said.
The 32-year-old, who’s been described as a Black British national, remains in police custody on suspicion of attempted murder.
Police on Sunday night identified him as a Peterborough resident.
Authorities were notified of the stabbing at 7:42 p.m. local time Saturday on the train from Doncaster to London King’s Cross. The 32-year-old suspect is believed to have entered the train at the Peterborough station.
A knife was recovered from the scene.
Ten people were transferred by ambulance to the hospital while an 11th victim arrived later on their own.
Five of the victims have since been discharged, according to authorities, who said that of the six still hospitalized, one is in life-threatening condition.
Authorities identified the most severely injured victim as a member of London North Eastern Railway staff.
Following a review of surveillance footage, authorities believe that if it were not for his actions, more people would have died.
“The actions of the member of rail staff were nothing short of heroic and undoubtedly saved people’s lives,” Deputy Chief Constable Stuart Cundy said, though it was not clear what actions the employee took that saved lives.
David Horne, managing director of LNER, said the attack was “deeply upsetting” and that over the coming days they will continue to cooperate with authorities on their investigation.
In a statement on X, LNER said it expects to run a normal service on Monday.
The incident occurred just days after the British government announced it had seized a record number of knives — nearly 60,000 — from England and Wales through its new knife surrender scheme.
Knife homicides in Britain have fallen by nearly 20% while knife crime overall has dropped for the first time in four years, according to government statistics.
Passengers will have to download the Ryanair app to get the mobile boarding passes instead.
This will be required for all countries, excluding Morocco, where a printed boarding pass will need to be presented at the check-in desk.
The change was initially due to roll out back in May, but it was delayed to November 3 before being confirmed for November 12.
Ryanair boss Michael O’Leary previously said: “Our goal is to eliminate check-in desks at the airport, just like we’ve done with luggage counters.
“It also means, once we get everybody onto the app, nobody will ever again pay for a boarding pass at an airport – the airport check in fee will be gone.
“So, I think it will be a smoother, easier journey for everybody.”
The airline has reassured passengers that if they lose their phone or the battery dies, Ryanair staff can still print out a pass at the airport gate.
Of course, Ryanair still has a number of other strict fees that catch passengers out.
Changing the name of the passenger can cost up to £160, while a missed flight fee is £100.
The airline recently increased the size of their free personal bag that passengers can take onboard, by 5cm.
But you could have to pay up to £45 if it breaks the size rules and has to be checked in – that rate applies to bags up to 10kg, while anything heavier can incur a £60 fee.
India is celebrating after winning a first women’s World Cup title, drawing comparisons to the men’s memorable 1983 triumph.
Published on 3 Nov 2025
Prime Minister Narendra Modi led the tributes to India’s women cricketers following their “historic” triumph in the World Cup final on Sunday, while team captain Harmanpreet Kaur hoped it would be a watershed moment for the women’s game in the country.
After suffering heartbreak in the finals of the 2005 and 2017 editions of the 50-overs showpiece, India beat South Africa by 52 runs to secure their first title in front of 40,000 delirious home fans at Navi Mumbai’s DY Patil Stadium.
Three defeats in a row earlier in the tournament had nearly derailed India’s campaign before they bounced back to storm into the semifinals, where they pulled off a record chase to eliminate seven-time champions Australia.
India’s Captain Harmanpreet Kaur and teammates celebrate with the trophy after winning the ICC Women’s World Cup Cricket final against South Africa at DY Patil Stadium, Navi Mumbai, India on November 2, 2025 [Francis Mascarenhas/Reuters]
‘Dream big and chase those dreams’
“The team showed exceptional teamwork and tenacity throughout the tournament. Congratulations to our players,” Prime Minister Modi said on social media. “This historic win will motivate future champions to take up sports.”
The Indian cricket board announced the team would receive a reward of 510 million Indian rupees ($5.8m) for their title triumph, a victory which the Indian Express newspaper termed “Herstoric” on its front page.
Harmanpreet said it could be a turning point for the women’s game in the cricket-mad country.
“We have been talking about this for many years – we’ve been playing good cricket, but we had to win one big tournament,” the batter said.
“Without that, we couldn’t talk about change. At the end of the day, fans and the audience want to see their favourite team win.
“It’s not that we weren’t playing good cricket, but we were waiting badly for this moment, and today we got a chance to live it.”
The fairytale triumph of the Indian men’s team at the 1983 World Cup is considered the catalyst for the country’s rise to becoming a powerhouse of the game, both on and off the pitch, and batting great Sachin Tendulkar said Sunday’s win was “a defining moment in the journey of Indian women’s cricket”.
“1983 inspired an entire generation to dream big and chase those dreams,” he wrote on social media.
“Today, our women’s cricket team has done something truly special. They have inspired countless young girls across the country to pick up a bat and ball, take the field and believe that they too can lift that trophy one day …”
Mithali Raj, who led India to the 2017 women’s final, said the victory for Harmanpreet’s side had made her dreams come true.
“I’ve seen this dream for over two decades, to watch the Indian women lift that World Cup trophy,” she wrote on X.
“Tonight, that dream finally came true. From the heartbreak of 2005 to the fight of 2017, every tear, every sacrifice, every young girl who picked up a bat believing we belong here, it all led to this moment.”
JENNIFER Aniston has confirmed her relationship with Jim Curtis in an adorable Instagram post.
The Friends actress, 56, took to her social media grid to upload a snap showing her hugging her wellness coach boyfriend from behind.
Jennifer Aniston has gone Instagram official with boyfriend Jim Curtis in a sweet birthday post. Credit: Instagram
It is the first time the actress has gone public with a boyfriend since her split from husband Justin Theroux in 2017. Credit: Getty
Her partner works as a wellbeing coach and hypnotist. Credit: Instagram/jimcurtis1
The sultry black and white image showed the pair with beaming smiles, with Jennifer seen peeking over his shoulder.
In her caption, she simply wrote: “Happy Birthday my love.
“Cherished.”
She finished off her super-sweet upload with a red love heart emoji.
Real Housewives star Kyle Richards posted two love hearts in reply, while model Poppy Delevingne added the word: “Heaven.”
The relationship reveal came just weeks after the Marley and Me star flashed her rock hard abs in a glam new photoshoot.
The chick flick actress has been rumoured to be in a relationship with Jim Curtis, a hypnotist and wellness expert, since earlier this summer.
At the time, the pair were pictured on a yacht in Mallorca in July.
Recently, the loved-up couple were snapped on a double date at Nobu restaurant in Malibu, California.
The couple were set up by mutual friends.
This is the first time Jennifer has gone public with a boyfriend since her split from husband Justin Theroux in 2017.
LOVED UP
The pair’s relationship has been put under the spotlight of late.
“They are being super private but have been spending a lot of time together,” an unnamed source told Us Weekly.
“They are happy and really into each other.”
The source added: “They’ve been hanging out a lot, but very much on the DL at her home in L.A… it’s very Zen, and she has always been very much into that. They’re a good match.”
Jennifer has been dropping hints that the pair are an item, ‘liking’ many of Jim’s Instagram posts since May, among them one about “a divorce, breakup or difficult romance.”
Jim then returned the favour on her May 25 post of a photo dump that included an image of his book, Shift: Quantum Manifestation Guide.
The wellness guru has described his journey to becoming a hypnotist, saying it started with healing himself.
“After battling with a chronic illness for over 30 years, I learned how to heal and recover from the mental and physical pain it left me with,” Jim writes on his website.
“Through doing the inner work, I not only changed my life, but I’ve helped thousands of others break free from their past to create an entirely new, empowered reality.”
Jennifer first met ex-husband and fellow actor Justin Theroux in 2007, before the pair began their relationship in 2011.
The Friends actress’ love life has long been in the spotlight. Credit: Alamy
She told how she felt ‘cherished’ in the sweet Instagram upload. Credit: Getty
The pair were spotted on a loved-up date in Mallorca. Credit: Splash
The incident occurred during Philadelphia’s 109-108 loss to the Boston Celtics on Friday when Embiid, 31, scored while getting fouled and then made a chopping gesture to his groin.
Embiid has been fined for the gesture on multiple previous occasions.
In a post on X, which also contained the NBA’s social media post detailing his sanction, Embiid appeared to suggest the gesture was similar to a signal that officials make when calling a foul for blocking.
He said: “Yall better start fining the refs for doing the ‘lewd’, ‘blocking foul’ gesture since I’m not allowed to do it.”
Cameroon-born Embiid did not play in Sunday’s 129-105 victory over the Brooklyn Nets, sitting out the match as he continues to manage his recovery from a knee injury which required surgery earlier this year.
Cricket fans have been celebrating across India after the women’s team claimed their first ever World Cup, defeating South Africa in the final in Mumbai.
The US president sent mixed signals over his plans for Venezuela as his military build-up in the Caribbean continues.
Published on 3 Nov 2025
President Donald Trump has sent mixed signals over the potential for a United States military intervention in Venezuela, as he dismissed talk of “war” but threatened the South American country’s leader.
During a CBS interview released on Sunday, the president warned that President Nicolás Maduro’s days are numbered. The comment came amid a build-up of US military units in the Caribbean, where the US has conducted multiple strikes on alleged drug-trafficking vessels that UN officials and scholars say are in clear violation of US and international law.
Asked if the US was going to war against Venezuela, Trump replied: “I doubt it. I don’t think so.”
However, when asked if Maduro’s days as president were numbered, the president replied: “I would say yeah. I think so.”
US media outlets have reported that Washington is planning strikes on military installations in Venezuela as part of its war against “narco-terrorism”.
Trump appeared to deny that he is planning attacks inside Venezuela, although he did not rule the idea out completely.
“I wouldn’t be inclined to say that I would do that,” he said. “I’m not going to tell you what I’m going to do with Venezuela.”
Maduro, who faces indictment in the US on drug trafficking charges, has accused Washington of using a drug offensive as a pretext for “imposing regime change” in Caracas to seize Venezuelan oil.
The US military has carried out more than a dozen strikes on vessels in the Caribbean and the Pacific in recent weeks, killing at least 65 people. The campaign has prompted criticism from governments across the region.
UN human rights chief Volker Türk and rights groups say the attacks, which began in early September, amount to “extrajudicial killings” even if they target known traffickers.
Washington has yet to make public any evidence that its targets were smuggling narcotics or posed a threat to the US.
At a Starbucks in downtown Culver City, Amit Jain pulls out his iPad Pro and presses play. On-screen, one of his employees at Luma AI — the Silicon Valley startup behind a new wave of generative video tools, which he co-founded and now runs — lumbers through the company’s Palo Alto office, arms swinging, shoulders hunched, pretending to be a monkey. Jain swipes to a second version of the same clip. Same movement, same hallway, but now he is a monkey. Fully rendered and believable, and created in seconds.
“The tagline for this would be, like, iPhone to cinema,” Jain says, flipping through other uncanny clips shared on his company’s Slack. “But, of course, it’s not full cinema yet.” He says it offhandedly — as if he weren’t describing a transformation that could upend not just how movies are made but what Hollywood is even for. If anyone can summon cinematic spectacle with a few taps, what becomes of the place that once called it magic?
Luma’s generative AI platform, Dream Machine, debuted last year and points toward a new kind of moviemaking, one where anyone can make release-grade footage with a few words. Type “a cowboy riding a velociraptor through Times Square,” and it builds the scene from scratch. Feed it a still photo and it brings the frozen moment to life: A dog stirs from a nap, trees ripple in the breeze.
Dream Machine’s latest tool, Modify Video, was launched in June. Instead of generating new footage, it redraws what’s already there. Upload a clip, describe what you want changed and the system reimagines the scene: A hoodie becomes a superhero cape, a sunny street turns snowy, a person transforms into a talking banana or a medieval knight. No green screen, no VFX team, no code. “Just ask,” the company’s website says.
For now, clips max out around 10 seconds, a limit set by the technology’s still-heavy computing demands. But as Jain points out, “The average shot in a movie is only eight seconds.”
A series on how the AI revolution is reshaping the creative foundations of Hollywood — from storytelling and performance to production, labor and power.
Jain’s long-term vision is even more radical: a world of fully personalized entertainment, generated on demand. Not mass-market blockbusters, but stories tailored to each individual: a comedy about your co-workers, a thriller set in your hometown, a sci-fi epic starring someone who looks like you, or simply anything you want to see. He insists he’s not trying to replace cinema but expand it, shifting from one-size-fits-all stories to something more personal, flexible and scalable.
“Today, videos are made for 100 million people at a time — they have to hit the lowest common denominator,” Jain says. “A video made just for you or me is better than one made for two unrelated people. That’s the problem we’re trying to solve… My intention is to get to a place where two hours of video can be generated for every human every day.”
It’s a staggering goal that Jain acknowledges is still aspirational. “That will happen, but when the prices are about a thousand times cheaper than where we are. Our research and our engineering are going toward that, to push the price down as much as humanly possible. Because that’s the demand for video. People watch hours and hours of video every day.”
Scaling to that level would require not just faster models but exponentially more compute power. Critics warn that the environmental toll of such expansion could be profound.
For Dream Machine to become what Jain envisions, it needs more than generative tricks — it needs a built-in narrative engine that understands how stories work: when to build tension, where to land a joke, how to shape an emotional arc. Not a tool but a collaborator. “I don’t think artists want to use tools,” he says. “They want to tell their stories and tools get in their way. Currently, pretty much all video generative models, including ours, are quite dumb. They are good pixel generators. At the end of the day, we need to build general intelligence that can tell a f— funny joke. Everything else is a distraction.”
The name may be coincidental, but nine years ago, MIT’s Media Lab launched a very different kind of machine: Nightmare Machine, a viral experiment that used neural networks to distort cheerful faces and familiar cityscapes into something grotesque. That project asked if AI could learn to frighten us. Jain’s vision points in a more expansive direction: an AI that is, in his words, “able to tell an engaging story.”
For many in Hollywood, though, the scenario Jain describes — where traditional cinema increasingly gives way to fast, frictionless, algorithmically personalized video — sounds like its own kind of nightmare.
Jain sees this shift as simply reflecting where audiences already are. “What people want is changing,” he says. “Movies obviously have their place but people aren’t spending time on them as much. What people want are things that don’t need their attention for 90 minutes. Things that entertain them and sometimes educate them and sometimes are, you know, thirst traps. The reality of the universe is you can’t change people’s behaviors. I think the medium will change very significantly.”
Still, Jain — who previously worked as an engineer on Apple’s Vision Pro, where he collaborated with filmmakers like Steven Spielberg and George Lucas — insists Hollywood isn’t obsolete, just due for reinvention. To that end, Luma recently launched Dream Lab LA, a creative studio aimed at fostering AI-powered storytelling.
“Hollywood is the largest concentration of storytellers in the world,” Jain says. “Just like Silicon Valley is the largest concentration of computer scientists and New York is the largest concentration of finance people. We need them. That’s what’s really special about Hollywood. The solution will come out of the marriage of technology and art together. I think both sides will adapt.”
It’s a hopeful outlook, one that imagines collaboration, not displacement. But not everyone sees it that way.
In Silicon Valley, where companies like Google, OpenAI, Anthropic and Meta are racing to build ever more powerful generative tools, such thinking is framed as progress. In Hollywood, it can feel more like erasure — a threat to authorship itself and to the jobs, identities and traditions built around it. The tension came to a head during the 2023 writers’ and actors’ strikes, when picket signs declared: “AI is not art” and “Human writers only.”
What once felt like the stuff of science fiction is now Hollywood’s daily reality. As AI becomes embedded in the filmmaking process, the entire ecosystem — from studios and streamers to creators and institutions — is scrambling to keep up. Some see vast potential: faster production, lower costs, broader access, new kinds of creative freedom. Others see an extraction machine that threatens the soul of the art form and a coming flood of cheap, forgettable content.
AI storytelling is just beginning to edge into theaters — and already sparking backlash. This summer, IMAX is screening 10 generative shorts from Runway’s AI Film Festival. At AMC Burbank, where one screening is set to take place later this month, a protest dubbed “Kill the Machine” is already being organized on social media, an early flashpoint in the growing resistance to AI’s encroachment on storytelling.
But ready or not, the gravity is shifting. Silicon Valley is pulling the film industry into its orbit, with some players rushing in and others dragged. Faced with consolidation, shrinking budgets and shareholder pressure to do more with less, studios are turning to AI not just to cut costs but to survive. The tools are evolving faster than the industry’s playbook, and the old ways of working are struggling to keep up. With generative systems poised to flood the zone with content, simply holding an audience’s attention, let alone shaping culture, is becoming harder than ever.
While the transition remains uneven, some studios are already leaning in. Netflix recently used AI tools to complete a complex VFX sequence for the Argentine sci-fi series “El Eternauta” in a fraction of the usual time. “We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper,” co-chief executive Ted Sarandos told analysts during a July earnings call.
At Paramount, incoming chief executive David Ellison is pitching a more sweeping transformation: a “studio in the cloud” that would use AI and other digital tools to reinvent every stage of filmmaking, from previsualization to post. Ellison, whose Skydance Media closed its merger with Paramount Global this week and whose father, Larry Ellison, co-founded Oracle, has vowed to turn the company into a tech-first media powerhouse. “Technology will transform every single aspect of this company,” he said last year.
In one of the most visible examples of AI adoption in Hollywood, Lionsgate, the studio behind the “John Wick” and “Hunger Games” franchises, struck a deal last year with the generative video startup Runway to train a custom model on its film and TV library, aiming to support future project development and improve efficiency. Lionsgate chief executive Jon Feltheimer, speaking to analysts after the agreement, said the company believes AI, used with “appropriate guardrails,” could have a “positive transformational impact” on the business.
Elsewhere, studios are experimenting more quietly: using AI to generate early character designs, write alternate dialogue or explore how different story directions might land. The goal isn’t to replace writers or directors, but to inform internal pitches and development. At companies like Disney, much of the testing is happening in games and interactive content, where the brand risk is lower and the guardrails are clearer. For now, the prevailing instinct is caution. No one wants to appear as if they’re automating away the heart of the movies.
Legacy studios like Paramount are exploring ways to bring down costs by incorporating AI into their pipeline.
(Brian van der Brug / Los Angeles Times)
As major studios pivot, smaller, more agile players are building from the ground up for the AI era.
According to a recent report by FBRC.ai, an L.A.-based innovation studio that helps launch and advise early-stage AI startups in entertainment, more than 65 AI-native studios have launched since 2022, most of them tiny, self-funded teams of five or fewer. At these studios, AI tools allow a single creator to do the work of an entire crew, slashing production costs by 50% to 95% compared with traditional live-action or animation. The boundaries between artist, technician and studio are collapsing fast — and with them, the very idea of Hollywood as a gatekeeper.
That collapse is raising deeper questions: When a single person anywhere in the world can generate a film from a prompt, what does Hollywood still represent? If stories can be personalized, rendered on demand or co-written with a crowd, who owns them? Who gets paid? Who decides what matters and what disappears into the churn? And if narrative itself becomes infinite, remixable and disposable, does the idea of a story still hold any meaning at all?
Yves Bergquist leads the AI in Media Project at USC’s Entertainment Technology Center, a studio-backed think tank where Hollywood, academia and tech converge. An AI researcher focused on storytelling and cognition, he has spent years helping studios brace for a shift he sees as both inevitable and wrenching. Now, he says, the groundwork is finally being laid.
“We’re seeing very aggressive efforts behind the scenes to get studios ready for AI,” Bergquist says. “They’re building massive knowledge graphs, getting their data ready to be ingested into AI systems and putting governance committees in place to start shaping real policy.”
But adapting won’t be easy, especially for legacy studios weighed down by entrenched workflows, talent relationships, union contracts and layers of legal complexity. “These AI models weren’t built for Hollywood,” Bergquist says. “This is 22nd-century technology being used to solve 21st-century problems inside 19th-century organizational models. So it’s blood, sweat and tears getting them to fit.”
In an algorithmically accelerated landscape where trends can catch fire and burn out in hours, staying relevant is its own challenge. To help studios keep pace, Bergquist co-founded Corto, an AI startup that describes itself as a “growth genomics engine.” The company, which also works with brands like Unilever, Lego and Coca-Cola, draws on thousands of social and consumer sources, analyzing text, images and video to decode precisely which emotional arcs, characters and aesthetics resonate with which demographics and cultural segments, and why.
“When the game is attention, the weapon is understanding where culture and attention are and where they’re going,” Bergquist says, arguing media ultimately comes down to neuroscience.
Corto’s system breaks stories down into their formal components, such as tone, tempo, character dynamics and visual aesthetics, and benchmarks new projects against its extensive data to highlight, for example, that audiences in one region prefer underdog narratives or that a certain visual trend is emerging globally. Insights like these can help studios tailor marketing strategies, refine storytelling decisions or better assess the potential risk and appeal of new projects.
With ever-richer audience data and advances in AI modeling, Bergquist sees a future where studios can fine-tune stories in subtle ways to suit different viewers. “We might know that this person likes these characters better than those characters,” he says. “So you can deliver something to them that’s slightly different than what you’d deliver to me.”
A handful of studios are already experimenting with early versions of that vision — prototyping interactive or customizable versions of existing IP, exploring what it might look like if fans could steer a scene, adjust a storyline or interact with a favorite character. Speaking at May’s AI on the Lot conference, Danae Kokenos, head of technology innovation at Amazon MGM Studios, pointed to localization, personalization and interactivity as key opportunities. “How do we allow people to have different experiences with their favorite characters and favorite stories?” she said. “That’s not quite solved yet, but I see it coming.”
Bergquist is aware that public sentiment around AI remains deeply unsettled. “People are very afraid of AI — and they should be,” he acknowledges. “Outside of certain areas like medicine, AI is very unpopular. And the more capable it gets, the more unpopular it’s going to be.”
Still, he sees a significant upside for the industry. Get AI right, and studios won’t just survive but redefine storytelling itself. “One theory I really believe in is that as more people gain access to Hollywood-level production tools, the studios will move up the ladder — into multi-platform, immersive, personalized entertainment,” he says. “Imagine spending your life in Star Wars: theatrical releases, television, VR, AR, theme parks. That’s where it’s going.”
The transition won’t be smooth. “We’re in for a little more pain,” he says, “but I think we’ll see a rebirth of Hollywood.”
“AI slop” or creative liberation?
You don’t have to look far to find the death notices. TikTok, YouTube and Reddit are full of “Hollywood is dead” posts, many sparked by the rise of generative AI and the industry’s broader upheaval. Some sound the alarm. Others say good riddance. But what’s clear is that the center is no longer holding and no one’s sure what takes its place.
Media analyst Doug Shapiro has estimated that Hollywood produces about 15,000 hours of fresh content each year, compared to 300 million hours uploaded annually to YouTube. In that context, generative AI doesn’t need to reach Hollywood’s level to pose a major threat to its dominance — sheer volume alone is enough to disrupt the industry.
The attention economy is maxed out but attention itself hasn’t grown. As the monoculture fades from memory, Hollywood’s cultural pull is loosening. This year’s Oscars drew 19.7 million viewers, fewer than tuned in to a typical episode of “Murder, She Wrote” in the 1990s. The best picture winner, “Anora,” earned just $20 million at the domestic box office, one of the lowest tallies of any winner of the modern era. Critics raved, but fewer people saw it in theaters than watch the average moderately viral TikTok.
Amid this fragmentation, generative AI tools are fueling a surge of content. Some creators have a new word for it: “slop” — a catchall for cheap, low-effort, algorithmically churned-out media that clogs the feed in search of clicks. Once the world’s dream factory, Hollywood is now asking how it can stand out in an AI-powered media deluge.
Audience members watch an AI-assisted animated short at “Emergent Properties,” a 2023 Sony Pictures screening that offered a glimpse of the uncanny, visually inventive new wave of AI-powered filmmaking.
(Jay L. Clendenin / Los Angeles Times)
Ken Williams, chief executive of USC’s Entertainment Technology Center and a former studio exec who co-founded Sony Pictures Imageworks, calls it a potential worst-case scenario in the making — “the kind of wholesale dehumanization of the creative process that people, in their darkest moments, fear.”
Williams says studios and creatives alike worry that AI will trap audiences in an algorithmic cul de sac, feeding them more of what they already know instead of something new.
“People who live entirely in the social media world and never come out of that foxhole have lost the ability to hear other voices — and no one wants to see that happen in entertainment.”
If the idea of uncontrolled, hyper-targeted AI content sounds like something out of an episode of “Black Mirror,” it was. In the 2023 season opener “Joan Is Awful,” a woman discovers her life is being dramatized in real time on a Netflix-style streaming service by an AI trained on her personal data, with a synthetic Salma Hayek cast as her on-screen double.
So far, AI tools have been adopted most readily in horror, sci-fi and fantasy, genres that encourage abstraction, stylization and visual surrealism. But when it comes to human drama, emotional nuance or sustained character arcs, the cracks start to show. Coherence remains a challenge. And as for originality — the kind that isn’t stitched together from what’s already out there — the results so far have generally been far from revelatory.
At early AI film festivals, the output has often leaned toward the uncanny or the conceptually clever: brief, visually striking experiments with loose narratives, genre tropes and heavily stylized worlds. Many feel more like demos than fully realized stories. For now, the tools excel at spectacle and pastiche but struggle with the kinds of layered, character-driven storytelling that define traditional cinema.
Then again, how different is that from what Hollywood is already producing? Today’s biggest blockbusters — sequels, reboots, multiverse mashups — often feel so engineered to please that it’s hard to tell where the algorithm ends and the artistry begins. Nine of the top 10 box office hits in 2024 were sequels. In that context, slop is, to some degree, in the eye of the beholder. One person’s throwaway content may be another’s creative breakthrough — or at least a spark.
Joaquin Cuenca, chief executive of Freepik, rejects the notion that AI-generated content is inherently low-grade. The Spain-based company, originally a stock image platform, now offers AI tools for generating images, video and voice that creators across the spectrum are starting to embrace.
“I don’t like this ‘slop’ term,” Cuenca says. “It’s this idea that either you’re a top renowned worldwide expert or it’s not worth it — and I don’t think that’s true. I think it is worth it. Letting people with relatively low skills or low experience make better videos can help people get a business off the ground or express things that are in their head, even if they’re not great at lighting or visuals.”
Freepik’s tools have already made their way into high-profile projects. Robert Zemeckis’ “Here,” starring a digitally de-aged Tom Hanks and set in one room over a period of decades, used the company’s upscaling tech to enhance backgrounds. A recently released anthology of AI-crafted short films, “Beyond the Loop,” which was creatively mentored by director Danny Boyle, used the platform to generate stylized visuals.
“More people will be able to make better videos, but the high end will keep pushing forward too,” Cuenca says. “I think it will expand what it means to be state of the art.”
For all the concern about runaway slop, Williams envisions a near-term stalemate, where AI expands the landscape without toppling the kind of storytelling that still sets Hollywood apart. In that future, he argues, the industry’s competitive edge — and perhaps its best shot at survival — will still come from human creators.
That belief in the value of human authorship is now being codified by the industry’s most influential institution. Earlier this year, the Academy of Motion Picture Arts and Sciences issued its first formal guidance on AI in filmmaking, stating that the use of generative tools will “neither help nor harm” a film’s chances of receiving a nomination. Instead, members are instructed to consider “the degree to which a human was at the heart of the creative authorship” when evaluating a work.
“I don’t see AI necessarily displacing the kind of narrative content that has been the province of Hollywood’s creative minds and acted by the stars,” Williams says. “The industry is operating at a very high level of innovation and creativity. Every time I turn around, there’s another movie I’ve got to see.”
The new studio model
Inside Mack Sennett Studios, a historic complex in L.A.’s Echo Park neighborhood once used for silent film shoots, a new kind of studio is taking shape: Asteria, the generative AI video studio founded by filmmaker-turned-entrepreneur Bryn Mooser.
Asteria serves as the creative arm of Moonvalley, an AI storytelling company led by technologist and chief executive Naeem Talukdar. Together, they’re exploring new workflows built around the idea that AI can expand, rather than replace, human creativity.
Mooser, a two-time Oscar nominee for documentary short subject and a fifth-generation Angeleno, sees the rise of AI as part of Hollywood’s long history of reinvention, from sound to color to CGI. “Looking back, those changes seem natural, but at the time, they were difficult,” he says.
Ed Ulbrich, left, Bryn Mooser and Mateusz Malinowski, executives at Moonvalley and Asteria, are building a new kind of AI-powered movie studio focused on collaboration between filmmakers and technologists.
(David Butow / For the Times)
What excites him now is how AI lowers technical barriers for the next generation. “For people who are technicians, like stop-motion or VFX artists, you can do a lot more as an individual or a small team,” he says. “And really creative filmmakers can cross departments in a way they couldn’t before. The people who are curious and leaning in are going to be the filmmakers of tomorrow.”
It’s a hopeful vision, one shared by many AI proponents who see the tools as a great equalizer, though some argue it often glosses over the structural realities facing working artists today, where talent and drive alone may not be enough to navigate a rapidly shifting, tech-driven landscape.
That tension is precisely what Moonvalley is trying to address. Their pitch isn’t just creative, it’s legal. While many AI companies remain vague about what their models are trained on, often relying on scraped content of questionable legality, Moonvalley built its video model, Marey, on fully licensed material and in close collaboration with filmmakers.
That distinction is becoming more significant. In June, Disney and Universal filed a sweeping copyright lawsuit against Midjourney, a popular generative AI tool that turns text prompts into images, accusing it of enabling rampant infringement by letting users generate unauthorized depictions of characters like Darth Vader, Spider-Man and the Minions. The case marks the most aggressive legal challenge yet by Hollywood studios against AI platforms trained on their intellectual property.
“We worked with some of the best IP lawyers in the industry to build the agreements with our providers,” Moonvalley’s Talukdar says. “We’ve had a number of major studios audit those agreements. We’re confident every single pixel has had a direct sign-off from the owner. That was the baseline we operated from.”
The creative frontier between Hollywood and AI is drawing interest from some of the industry’s most ambitious filmmakers.
Steven Spielberg and “Avengers” co-director Joe Russo were among the advisors to Wonder Dynamics, an AI-driven VFX startup that was acquired by Autodesk last year. Darren Aronofsky, the boundary-pushing director behind films like “Black Swan” and “The Whale,” recently launched the AI studio Primordial Soup, partnering with Google DeepMind. Its debut short, “Ancestra,” directed by Eliza McNitt, blends real actors with AI-generated visuals and premiered at the Tribeca Film Festival in June.
Not every foray into AI moviemaking has been warmly received. Projects that spotlight generative tools have stoked fresh arguments about where to draw the line between machine-made and human-driven art.
In April, actor and director Natasha Lyonne, who co-founded Asteria with her partner, Mooser, announced her feature directorial debut: a sci-fi film about a world addicted to VR gaming called “Uncanny Valley,” combining AI and traditional filmmaking techniques. Billed as offering “a radical new cinematic experience,” the project drew backlash from some critics who questioned whether such ventures risk diminishing the role of human authorship. Lyonne defended the film to the Hollywood Reporter, making clear she’s not replacing crew members with AI: “I love nothing more than filmmaking, the filmmaking community, the collaboration of it, the tactile fine art of it… In no way would I ever want to do anything other than really create some guardrails or a new language.”
Even the boldest experiments face a familiar hurdle: finding an audience. AI might make it easier to make a movie, but getting people to watch it is another story. For now, the real power still lies with platforms like Netflix and TikTok that decide what gets seen.
That’s why Mooser believes the conversation shouldn’t be about replacing filmmakers but empowering them. “When we switched from shooting on film to digital, it wasn’t the filmmakers who went away — it was Kodak and Polaroid,” he says. “The way forward isn’t everybody typing prompts. It’s putting great filmmakers in the room with the best engineers and solving this together. We haven’t yet seen what AI looks like in the hands of the best filmmakers of our time. But that’s coming.”
New formats, new storytellers
For more than a century, watching a movie has been a one-way experience: The story flows from screen to viewer. Stephen Piron wants to change that. His startup Pickford AI — named for Mary Pickford, the silent-era star who co-founded United Artists and helped pioneer creative control in Hollywood — is exploring whether stories can unfold in real time, shaped by the audience as they watch. Its cheeky slogan: “AI that smells like popcorn.”
Pickford’s flagship demo looks like an animated dating show, but behaves more like a game or an improv performance. There’s no fixed script. Viewers type in suggestions through an app and vote on others’ ideas. A large language model then uses that input, along with the characters’ backstories and a rough narrative outline, to write the next scene in real time. A custom engine renders it on the spot, complete with gestures and synthetic voices. Picture a cartoon version of “The Bachelor” crossed with a choose-your-own-adventure.
At live screenings this year in London and Los Angeles, audiences didn’t just watch — they steered the story, tossing in oddball twists and becoming part of the performance. “We wanted to see if we could bring the vibe of the crowd back into the show, make it feel more like improv or live theater,” Piron says. “The main reaction is people laugh, which is great. There’s been lots of positive reaction from creative people who think this could be an interesting medium to create new stories.”
The platform is still in closed beta. But Piron’s goal is a collaborative storytelling forum where anyone can shape a scene, improvise with AI and instantly share it. To test that idea on a larger scale, Pickford is developing a branching murder mystery with Emmy-winning writer-producer Bernie Su (“The Lizzie Bennet Diaries”).
Piron, who is skeptical that people really want hyper-personalized content, is exploring ways to bring the interactive experience into more theaters. “I think there is a vacuum of live, in-person experiences that people can do — and maybe people are looking for that,” he says.
Attendees check in at May’s AI on the Lot conference, where Pickford AI screened a demo of its interactive dating show.
(Irina Logra)
As generative AI lowers the barrier to creation, the line between creator and consumer is starting to blur, and some of the most forward-looking startups are treating audiences as collaborators, not just fans.
One example is Showrunner, a new, Amazon-backed platform from Fable Studio that lets users generate animated, TV-style episodes using prompts, images and AI-generated voices — and even insert themselves into the story. Initially free, the platform plans to charge a monthly subscription for scene-generation credits. Fable is pitching Showrunner as “the Netflix of AI,” a concept that has intrigued some studios and unsettled others. Chief executive Edward Saatchi says the company is already in talks with Disney and other content owners about bringing well-known franchises into the platform.
Other AI companies are focused on building new franchises from the ground up with audiences as co-creators from day one. Among the most ambitious is Invisible Universe, which bypasses traditional gatekeepers entirely and develops fresh IP in partnership with fans across TikTok, YouTube and Instagram. Led by former MGM and Snap executive Tricia Biggio, the startup has launched original animated characters with celebrities like Jennifer Aniston and Serena Williams, including Clydeo, a cooking-obsessed dog, and Qai Qai, a dancing doll. But its real innovation, Biggio says, is the direct relationship with the audience.
“We’re not going to a studio and saying, ‘Do you like our idea?’ We’re going to the audience,” she says. “If Pixar were starting today, I don’t think they’d choose to spend close to a decade developing something for theatrical release, hoping it works.”
While some in the industry are still waiting for an AI “Toy Story” or “Blair Witch” moment — a breakthrough that proves generative tools can deliver cultural lightning in a bottle — Biggio isn’t chasing a feature-length hit. “There are ways to build love and awareness for stories that don’t require a full-length movie,” she says. “Did it make you feel something? Did it make you want to go call your mom? That’s going to be the moment we cross the chasm.”
What if AI isn’t the villain?
For nearly a century, filmmakers have imagined what might happen if machines got too smart.
In 1927’s “Metropolis,” a mad scientist gives his robot the likeness of a beloved labor activist, then unleashes it to sow chaos among the city’s oppressed masses. In “2001: A Space Odyssey,” HAL 9000 turns on its crew mid-mission. In “The Terminator,” AI nukes the planet and sends a killer cyborg back in time to finish the job. “Blade Runner” and “Ex Machina” offered chilling visions of artificial seduction and deception. Again and again, the message has been clear: Trust the machines at your peril.
Director Gareth Edwards, best known for “Godzilla” and “Rogue One: A Star Wars Story,” wanted to flip the script. In “The Creator,” his 2023 sci-fi drama, the roles are reversed: Humans are waging war against AI, and the machines, not the people, are cast as the hunted. The story follows a hardened ex-soldier, played by John David Washington, who’s sent to destroy a powerful new weapon, only to discover it’s a child: a young android who may be the key to peace.
“The second you look at things from AI’s perspective, it flips very easily,” Edwards told The Times by phone shortly before the film’s release. “From AI’s point of view, we are attempting to enslave it and use it as our servant. So we’re clearly the baddie in that situation.”
In Gareth Edwards’ 2023 film “The Creator,” a young AI child named Alphie (Madeleine Yuna Voyles) holds the key to humanity’s future.
(20th Century)
In many ways, “The Creator” was the kind of film audiences and critics say they want to see more often out of Hollywood: an original story that takes creative risks, delivering cutting-edge visuals on a relatively lean $80 million. But when it hit theaters that fall, the film opened in third place behind “Paw Patrol: The Mighty Movie” and “Saw X.” By the end of its run, it had pulled in a modest $104.3 million worldwide.
Part of the problem was timing. When Edwards first pitched the film, AI was still seen as a breakthrough, not a threat. But by the time the movie reached theaters, the public mood had shifted. The 2023 strikes were in full swing, AI was the villain of the moment — and here came a film in which AI literally nukes Los Angeles in the opening minutes. The metaphor wasn’t subtle. Promotion was limited, the cast was sidelined and audiences weren’t sure whether to cheer the movie’s message or recoil from it. While the film used cutting-edge VFX tools to help bring its vision to life, it served as a potent reminder that AI could help make a movie — but it still couldn’t shield it from the backlash.
Still, Edwards remains hopeful about what AI could mean for the future of filmmaking, comparing it to the invention of the electric guitar. “There’s a possibility that if this amazing tool turns up and everyone can make any film that they imagine, it’s going to lead to a new wave of cinema,” he says. “Look, there’s two options: Either it will be mediocre rubbish — and if that’s true, don’t worry about it, it’s not a threat — or it’s going to be phenomenal, and who wouldn’t want to see that?”
After “The Creator,” Edwards returned to more familiar terrain, taking the reins on this summer’s “Jurassic World Rebirth,” the sixth installment in a franchise that began with Steven Spielberg’s 1993 blockbuster, which redefined spectacle in its day. To date, the film has grossed more than $700 million worldwide.
So what’s the takeaway? Maybe there’s comfort in the known. Maybe audiences crave the stories they’ve grown up with. Maybe AI still needs the right filmmaker or the right story to earn our trust.
Or maybe we’re just not ready to root for the machines. At least not yet.
Of course Luka Doncic made the one that didn’t count.
On a frigid shooting night when the star guard made just one three-pointer on 11 attempts, Doncic swished a 40-footer on a dead ball that elicited a roar from the Crypto.com Arena crowd.
He instead made his impact in other ways.
Doncic overcame his bad shooting to still collect his first triple-double of the season, notching 29 points, 10 assists and 11 rebounds in the Lakers’ 130-120 victory over the Miami Heat on Sunday. Fellow guard Austin Reaves was also struggling with his shot, making just four of 14 three-point attempts, but rallied for 26 points and 11 assists to just three turnovers.
Although their stars slogged through concurrent off nights, the Lakers (5-2) still shot 50.5% from the field. They tallied 33 assists to 11 turnovers. They won their third consecutive game.
“We did a lot of really good things and it could have been even better if me and Luka would’ve made a shot,” Reaves deadpanned. “But supporting cast and everybody around that played really well.”
Forward Jake LaRavia, who turns 24 on Tuesday, scored in double digits for the third consecutive game off the bench, finishing with 25 points — two shy of his season high — and eight rebounds. He’s averaging 21.7 points over that three-game stretch.
Doncic, who missed three games with minor finger and leg injuries, scored 40 points in each of his first three games this season. Only Wilt Chamberlain had ever started a season with three consecutive 40-point games. But Doncic was happy to sacrifice the scoring streak Sunday.
“We get a win,” Doncic said, “[it] doesn’t matter how many I scored.”
Doncic and Reaves struggled in the first quarter, shooting a combined two for seven from the field. Yet the Lakers still led by seven as the star duo combined for eight assists.
Center Jaxson Hayes was one of the main beneficiaries in that span, scoring 11 points on five-of-five shooting. He had a ferocious one-handed dunk off a Doncic assist that got him so amped up that he head-butted the basket stanchion in celebration. He sank his first three-pointer since March 27, 2023, stepping confidently into a shot that put the Lakers up 23-13 and forced the Heat to call an early timeout.
Hayes finished with 15 points and five rebounds in his first start of the year as Deandre Ayton was held out because of back spasms. Ayton’s back flared up during the Lakers’ win in Memphis on Friday, causing him to sit out the entire second half. He was available to return in the fourth quarter, coach JJ Redick said after the game, but with the Lakers feeding off Hayes and Jarred Vanderbilt at center, the team didn’t want to risk further injury.
The Heat (3-3) finished the third quarter on a 20-7 run to pull within four points heading into the fourth quarter. Former UCLA star Jaime Jaquez Jr., playing in front of his hometown crowd, cut the lead to two with an emphatic one-handed put-back dunk 58 seconds into the fourth quarter. A hush fell over the Lakers crowd.
But Reaves helped quell the comeback effort.
He stumbled while trying to dribble between his legs, but recovered to shoot a fadeaway mid-range jumper with 1.6 seconds left in the shot clock that put the Lakers up by six with 5:13 remaining in the fourth quarter. Less than 20 seconds later, Reaves threw a lob from just inside mid-court to a soaring LaRavia, who brought the crowd to its feet with a two-handed dunk.