Judge temporarily blocks OpenAI from using ‘Cameo’ in video-making app Sora

A federal judge has temporarily blocked OpenAI’s use of several monikers, including “Cameos” and “CameoVideo,” for elements of its Sora artificial intelligence video generation products and marketing.

U.S. District Judge Eumi K. Lee on Friday issued a temporary restraining order to prevent the San Francisco AI giant from using names that are part of an ongoing trademark dispute.

The Northern California judge also set a Dec. 19 hearing to delve further into the matter.

The lawsuit was brought late last month by Chicago-based tech business Baron App, which also goes by the name of its product, Cameo. The eight-year-old firm sued OpenAI, alleging trademark infringement and unfair competition.

In its Oct. 28 lawsuit, Baron said it has secured several U.S. Trademark Registrations for its Cameo product, which enables fans to engage celebrities to make personalized videos to wish friends a happy birthday or other greetings.

Snoop Dogg, Tony Hawk, Jon Bon Jovi and Donald Trump Jr. are among celebrities who have participated, connecting with fans through Cameo, the company said in its complaint against OpenAI. Cameo said its posts have been popular, attracting more than 100 million views in the past year.

The legal dispute began after OpenAI announced an update to its text-to-video tool Sora in September. The update included the launch of a new Sora feature that it called Cameos.

OpenAI’s fall product update gave consumers on the Sora app the ability to scan their faces and allow others to manipulate their facial images in AI-generated environments. YouTube influencer and boxer Jake Paul, who is an investor in OpenAI, took part in the rollout of the Cameos feature. In less than five days, the Sora app surpassed 1 million downloads.

“OpenAI is now using Cameo’s own mark, CAMEO, to compete directly with Cameo,” Baron wrote in its lawsuit against OpenAI.

Lawyers for the two companies argued their positions in a Tuesday hearing.

Lee’s decision forbids OpenAI and its “officers, directors and employees from using the mark ‘Cameo,’ or any other mark that includes or is confusingly similar to ‘Cameo,’ ” according to her order. “Defendants are ordered to show cause why a preliminary injunction should not [be] issue[d].”

The temporary restraining order expires Dec. 22.

“While the court’s order is temporary, we hope that OpenAI will agree to stop using our mark permanently to avoid any further harm to the public or Cameo,” Cameo CEO Steven Galanis said in a Saturday statement. “We would like nothing more than to put this behind us so that we can focus our full attention on bringing talent and fans together as we head into the holidays.”

An OpenAI spokesperson responded in a statement: “We disagree with the complaint’s assertion that anyone can claim exclusive ownership over the word ‘cameo’, and we look forward to continuing to make our case to the court.”

The ruling comes as OpenAI faces blowback in Hollywood over images of celebrities and dead newsmakers manipulated without consent.

Ex-treasury chief Larry Summers resigns OpenAI board over Epstein emails

Nov. 19 (UPI) — Former U.S. Treasury Secretary Larry Summers will resign from the OpenAI board of directors following intensified scrutiny over emails between him and sex offender Jeffrey Epstein, he announced Wednesday.

“I am grateful for the opportunity to have served, excited about the potential of the company, and look forward to following their progress,” Summers told CNBC and CNN in a statement.

Summers, 70, revealed Monday that he will “step back” from all public duties, but it was initially unclear if that was to include his role with the artificial intelligence startup.

This week, Summers said he was “deeply ashamed” after emails released last week revealed years of correspondence with the late billionaire financier and convicted sexual predator Epstein.

The AI company said it respected his decision.

“We appreciate his many contributions and the perspective he brought to the board,” the OpenAI board of directors said in a statement.

Summers, Treasury secretary under former President Bill Clinton, later served as president of Harvard University from 2001 to 2006 and as director of the National Economic Council under then-President Barack Obama.

On Tuesday, Congress overwhelmingly passed a bill to release the Epstein files.

But it remains to be seen if President Donald Trump will sign the Epstein bill or if the White House will fully comply.

‘South Park’ addresses AI fakes, copyright with Totoro and Bluey

“South Park” is known for pushing the envelope, but the latest episode has been described by fans online as “nightmare fuel.”

In what is arguably one of the most disturbing episodes of the year, “South Park” creators Trey Parker and Matt Stone explore just how easy it is to create artificial intelligence-generated videos — and how easy it is for some people to fall for them, or to be convinced that real videos are fake.

The conversation about Wednesday’s episode has largely revolved around President Trump and Vice President JD Vance being depicted having sex. They tell reporters the leaked video of their affair was a fake, created with Sora 2, the latest version of OpenAI’s video generator.

In the same episode, titled “Sora Not Sorry,” the children of South Park generate revenge-porn videos using Sora as a means of getting back at each other. Butters creates an explicit video of Red with Santa Claus, and then she creates a similarly explicit video featuring Butters and Totoro, of the Studio Ghibli classic “My Neighbor Totoro.”

Chaos then ensues as the children make a frenzy of AI-generated videos featuring well-known (and copyright-protected) animated characters. The South Park police force is dumbfounded by the videos, believing they are real.

Droopy Dog, Rocky, Bullwinkle, Popeye and even the beloved preschool character Bluey are mentioned or make appearances in the episode. Representatives for Studio Ghibli also appear, offering a voice of reason in the madness, saying, “You cannot just do whatever you want with someone else’s IP.”

It echoes the real response Studio Ghibli had when Sora 2 emerged, arguing that OpenAI likely used its content and other Japanese art as machine learning data. The Content Overseas Distribution Assn., an anti-piracy organization representing Japanese IP holders, released a letter in late October saying the organization believes OpenAI’s actions “may constitute copyright infringement.” CODA demanded that OpenAI stop using Japanese content for machine learning and requested that the company respond to claims of copyright infringement “sincerely.”

The latest update to Sora, released at the end of September, is “more physically accurate, realistic, and more controllable than prior systems,” according to OpenAI, and it also features synchronized dialogue and sound effects.

The company sparked swift backlash as it rolled out the new version because the app launched with an opt-out system for intellectual property owners: users could create videos featuring popular actors, characters, voices and likenesses unless rights holders requested their removal.

Unauthorized deepfakes of celebrities, dead figures and copyrighted characters began circulating quickly, including videos of Robin Williams, Michael Jackson and Martin Luther King Jr., which OpenAI acknowledged included “disrespectful depictions of Dr. King’s image.” OpenAI and King’s estate released a joint statement in October saying the app would block generations featuring King as the company “strengthens guardrails for historical figures.”

After a slew of Hollywood studios and agencies expressed their frustration with this policy, OpenAI Chief Executive Sam Altman wrote in a blog post that the AI company will be giving rights holders “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.”

Some studios have expressed interest in the user-generated content space, with Disney Chief Executive Bob Iger saying on a recent earnings call that the company was having “productive conversations” with unnamed AI companies in an effort to reach an agreement that would also “reflect our need to protect the IP.”

Back at South Park Elementary, as the battle of disturbing AI-generated videos heats up, Kyle questions how creating that kind of content, especially with copyrighted characters, is legal. “Nothing is sacred, Kyle,” Butters tells him. “All you can do is fight fire with fire.”

The episode seems to address Parker and Stone’s own frustrations with their work being replicated, as evidenced by a line from billionaire Peter Thiel, who also appears in the episode: “With Sora 2, I can make the South Park kid do just about anything.”

California backs down on AI laws so more tech leaders don’t flee the state

California’s tech industry, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or companies will leave.

The tactic appeared to have worked, activists said, because some politicians weakened or scrapped guardrails to mitigate AI’s biggest risks.

California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

Organizations trying to rein in the world’s largest tech companies say the industry has become more empowered at both the national and state levels as it advances the powerful technology.

Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million, and a lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, the bulk of its total. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

The threat that California companies could move away has caught the attention of some politicians.

California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

Critics — among them Elon Musk, Meta and former OpenAI executives, as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over the public good.

Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, withdrew their support for Senate Bill 243, saying the tech industry had pushed for changes that weakened its protections.

Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which would have required employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

Known as the “No Robo Bosses Act,” the bill was rejected by the governor, who considered it too broad.

“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”

OpenAI, Amazon sign $38bn AI deal

The announcement comes less than a week after Amazon laid off 14,000 people.

OpenAI has signed a new deal valued at $38bn with Amazon that will allow the artificial intelligence giant to run AI workloads across Amazon Web Services (AWS) cloud infrastructure.

The seven-year deal, announced on Monday, is the first big AI push for the e-commerce giant since OpenAI’s restructuring last week.

The new deal will give the ChatGPT maker access to thousands of Nvidia graphics processors to train and run its artificial intelligence models.

Experts say the deal does not give OpenAI any new ability to train its models on websites hosted by AWS – which include the websites of The New York Times, Reddit and United Airlines.

“Running OpenAI training inside AWS doesn’t change their ability to scrape content from AWS-hosted websites [which they could already do for anything publicly readable]. This is strictly speaking about the economics of rent vs buy for GPU [graphics processing unit] capacity,” Joshua McKenty, CEO of the AI detection company PolyguardAI, told Al Jazeera.

The deal is also a major vote of confidence in the e-commerce giant’s cloud unit, AWS, which some investors feared had fallen behind rivals Microsoft and Google in the artificial intelligence (AI) race. Those fears were somewhat eased by the strong growth the business reported in the September quarter.

OpenAI will begin using AWS immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond.

Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses and train OpenAI’s next wave of models, the companies said.

Amazon already makes OpenAI models available on Amazon Bedrock, a service that provides multiple AI models to businesses using AWS.

OpenAI’s sweeping restructuring last week moved it further away from its non-profit roots and also removed Microsoft’s right of first refusal to supply the company with cloud services.

Image hurdles

Amazon’s announcement of the AI investment comes only days after the company laid off 14,000 people, even though CEO Andy Jassy said in an earnings call on Thursday that the layoffs were not driven by AI.

“The announcement that we made a few days ago was not really financially driven, and it’s not even really AI-driven, not right now at least,” Jassy said.

OpenAI CEO Sam Altman has said the startup is committed to spending $1.4 trillion to develop 30 gigawatts of computing resources – roughly enough to power 25 million United States homes.
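
As a rough sanity check on that comparison (a sketch assuming the commonly cited average US household consumption of about 10,500 kWh per year, a figure not stated in the article), dividing 30 GW by the average household’s continuous draw does land near 25 million homes:

```python
# Back-of-the-envelope check of the "30 GW ~ 25 million US homes" comparison.
# Assumption (not from the article): an average US household uses roughly
# 10,500 kWh per year, i.e. about 1.2 kW of continuous draw.

compute_capacity_w = 30e9           # 30 gigawatts, per the article
avg_home_kwh_per_year = 10_500      # assumed average household usage
hours_per_year = 8_760

avg_home_draw_w = avg_home_kwh_per_year * 1_000 / hours_per_year  # ~1,199 W
homes_powered = compute_capacity_w / avg_home_draw_w

print(f"Average household draw: {avg_home_draw_w:,.0f} W")
print(f"Homes covered by 30 GW: {homes_powered / 1e6:,.1f} million")
# Prints ~1,199 W and ~25.0 million homes, consistent with the article
```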

“Scaling frontier AI requires massive, reliable compute,” said Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

This comes amid growing concerns about the sheer energy demand of AI data centres. The Lawrence Berkeley National Laboratory estimates that AI data centres will use up to 12 percent of US electricity by 2028.

An AP/NORC poll from October found that 41 percent of Americans are extremely concerned about AI’s impact on the environment, while another 30 percent say they are somewhat concerned as the industry increases its data centre footprint around the US.

Signs of a bubble

Surging valuations of AI companies and their massive spending commitments, which total more than $1 trillion for OpenAI, have raised fears that the AI boom may be turning into a bubble.

OpenAI has already tapped Alphabet’s Google to supply it with cloud services, as Reuters reported in June. It also reportedly struck a deal to buy $300bn in computing power over about five years.

While OpenAI’s relationship with Microsoft, forged in 2019, has helped push Microsoft to the top spot among its Big Tech peers in the AI race, both companies have recently been making moves to reduce their reliance on each other.

Neither OpenAI nor Amazon was immediately available for comment.

On Wall Street, Amazon’s stock is surging on the news of the new deal. As of 11:15am in New York (16:15 GMT), it is up by 4.7 percent.
