California backs down on AI laws so more tech leaders don’t flee the state

California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

The tactic appeared to have worked, activists said, as some politicians weakened or scrapped guardrails meant to mitigate AI’s biggest risks.

California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of the company’s spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

The threat that California companies could move away has caught the attention of some politicians.

California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

Critics — which included some tech leaders such as Elon Musk, Meta and former OpenAI executives as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.

Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, removed their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which requires employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

Called the “No Robo Bosses Act,” the legislation failed to win over the governor, who thought it was too broad.

“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”

OpenAI, Amazon sign $38bn AI deal | Technology News

The announcement comes less than a week after Amazon laid off 14,000 people.

OpenAI has signed a new deal valued at $38bn with Amazon that will allow the artificial intelligence giant to run AI workloads across Amazon Web Services (AWS) cloud infrastructure.

The seven-year deal announced on Monday is the first big AI push for the e-commerce giant since OpenAI’s restructuring last week.

The new deal will give the ChatGPT maker access to thousands of Nvidia graphics processors to train and run its artificial intelligence models.

Experts say, however, that the deal does not give OpenAI any new ability to train its models on websites hosted by AWS – which include those of The New York Times, Reddit and United Airlines.

“Running OpenAI training inside AWS doesn’t change their ability to scrape content from AWS-hosted websites [which they could already do for anything publicly readable]. This is strictly speaking about the economics of rent vs buy for GPU [graphics processing unit] capacity,” Joshua McKenty, CEO of the AI detection company PolyguardAI, told Al Jazeera.

The deal is also a major vote of confidence for the e-commerce giant’s cloud unit, AWS, which some investors feared had fallen behind rivals Microsoft and Google in the artificial intelligence (AI) race. Those fears were somewhat eased by the strong growth the business reported in the September quarter.

 

OpenAI will begin using AWS immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond.

Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses and train OpenAI’s next wave of models, the companies said.

Amazon already offers OpenAI models on Amazon Bedrock, which offers multiple AI models for businesses using AWS.

OpenAI’s sweeping restructuring last week moved it further away from its non-profit roots and also removed Microsoft’s right of first refusal to supply services under the new arrangement.

Image hurdles

Amazon’s announcement of a major AI investment comes only days after the company laid off 14,000 people. CEO Andy Jassy said in an earnings call on Thursday that the layoffs were not driven by AI.

“The announcement that we made a few days ago was not really financially driven, and it’s not even really AI-driven, not right now at least,” Jassy said.

OpenAI CEO Sam Altman has said the startup is committed to spending $1.4 trillion to develop 30 gigawatts of computing resources – roughly enough to power 25 million United States homes.

“Scaling frontier AI requires massive, reliable compute,” said Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

This comes amid growing concerns about the sheer amount of energy demand that AI data centres need to operate. The Lawrence Berkeley National Laboratory estimates that AI data centres will use up to 12 percent of US electricity by 2028.

An AP/NORC poll from October found that 41 percent of Americans are extremely concerned about AI’s impact on the environment, while another 30 percent say they are somewhat concerned as the industry increases its data centre footprint around the US.

Signs of a bubble

Surging valuations of AI companies and their massive spending commitments, which total more than $1 trillion for OpenAI, have raised fears that the AI boom may be turning into a bubble.

OpenAI has already tapped Alphabet’s Google to supply it with cloud services, as Reuters reported in June. It also reportedly struck a separate deal to buy $300bn in computing power over about five years.

While OpenAI’s relationship with Microsoft, which the two forged in 2019, has helped push Microsoft to the top spot among its Big Tech peers in the AI race, both companies have been making moves recently to reduce reliance on each other.

Neither OpenAI nor Amazon was immediately available for comment.

On Wall Street, Amazon’s stock is surging on the news of the new deal. As of 11:15am in New York (16:15 GMT), it is up by 4.7 percent.

OpenAI restructures into public-benefit firm, Microsoft takes 27% stake | Technology News

The deal removes a major constraint on raising capital for OpenAI, the maker of ChatGPT, and values the firm at $500bn.

Microsoft and OpenAI have reached a deal to allow the ChatGPT maker to restructure itself into a public-benefit corporation, valuing OpenAI at $500bn and giving it more freedom in its business operations.

The deal, unveiled on Tuesday, removes a major constraint on raising capital for OpenAI that has existed since 2019.

At the time, it had signed an agreement with Microsoft that gave the tech giant rights over much of OpenAI’s work in exchange for costly cloud computing services needed to carry it out. As its ChatGPT service exploded in popularity, those limitations had become a notable source of tension between the two companies.

Microsoft will still hold a stake of about $135bn, or 27 percent, in OpenAI Group PBC, which will be controlled by the OpenAI Foundation, a nonprofit, the companies said.

Microsoft, based in Redmond, Washington, in the United States, has invested $13.8bn in OpenAI, with Tuesday’s deal implying that the firm has generated a return of nearly 10 times its investment.

Shares of Microsoft rose 2.5 percent, sending its market value above $4 trillion again.

The deal keeps the two firms intertwined until at least 2032, with a massive cloud computing contract and with Microsoft retaining some rights to OpenAI products and artificial intelligence (AI) models until then – even if OpenAI reaches artificial general intelligence (AGI), the point at which AI systems can match a well-educated human adult.

Simplified corporate structure

With more than 700 million weekly users as of September, ChatGPT has exploded in popularity to become the face of AI for many consumers after OpenAI’s founding as a nonprofit AI safety group.

As the company grew, the Microsoft deal constrained OpenAI’s ability to raise funds from outside investors and secure computing contracts as the crush of ChatGPT users and its research into new models caused its computing needs to skyrocket.

“OpenAI has completed its recapitalization, simplifying its corporate structure,” Bret Taylor, the OpenAI Foundation’s board chair, said in a blog post. “The nonprofit remains in control of the for-profit, and now has a direct path to major resources before AGI arrives.”

Microsoft’s previous 2019 agreement had many provisions that rested on when OpenAI reached that point, and the new deal requires an independent panel to verify any claim by OpenAI that it has reached AGI.

“OpenAI still faces ongoing scrutiny around transparency, data usage, and safety oversight. But overall, this structure should provide a clearer path forward for innovation and accountability,” said Adam Sarhan, CEO of 50 Park Investments.

Gil Luria, head of technology research at DA Davidson, said the deal “resolves the longstanding issue of OpenAI being organized as a not-for-profit [organisation] and settles the ownership rights of the technology vis-a-vis Microsoft. The new structure should provide more clarity on OpenAI’s investment path, thus facilitating further fundraising.”

Microsoft also said that it has secured a deal with OpenAI where the ChatGPT maker will purchase $250bn of Microsoft Azure cloud computing services. In exchange, Microsoft will no longer have a right of first refusal to provide computing services to OpenAI.

Microsoft also said that it will not have any rights to hardware produced by OpenAI. In March, OpenAI bought longtime Apple design chief Jony Ive’s startup io Products in a $6.5bn deal.

OpenAI announces new AI-powered Atlas browser

An introduction page of ChatGPT pictured in 2023. On Tuesday, OpenAI unveiled an early version of its new AI-powered ChatGPT Atlas web browser. File Photo by Wu Hao/EPA

Oct. 21 (UPI) — OpenAI unveiled the early version of its AI-powered ChatGPT Atlas web browser on Tuesday, offering many powerful features that seek to interlace the company’s technology into daily internet use.

The new browser is currently only available on macOS, with future versions coming to Windows and mobile devices, according to a post by OpenAI. While other tech companies, including Microsoft and Google, have incorporated AI into their products, OpenAI called Atlas a step closer “to a true super-assistant” that follows users across the web.

“It’s a new kind of browser for the next era of the web,” OpenAI CEO Sam Altman said in a video, where staff demonstrated how Atlas could be used to complete a grocery order, help project management at work and other tasks.

Atlas will draw on users’ previous interactions with the powerful chatbot, meaning it will offer a back-and-forth deeper than Google’s box of AI-generated results that accompanies web searches.

If Atlas is popular, it could be “a serious threat to Google’s dominance,” according to TechCrunch. It could also provide valuable data for targeted advertising should OpenAI change its business model. But the tech website concluded that “It’s still early days for Atlas and a lot will depend on the product itself — and whether users really want what OpenAI is offering here.”

Users of the paid version of ChatGPT can use an “agent” mode that allows Atlas to perform some tasks independently.

“Despite all of the power and awesome capabilities that you get with sharing your browser with ChatGPT, that also poses an entirely new set of risks,” OpenAI’s Pranav Vishnu said during the video announcing Atlas. He said safeguards keep the agent operating within Atlas tabs and prevent it from accessing users’ computer files.

Marketing experts have warned that AI could soon be used to make purchases for consumers using their data. Users of Atlas can limit what data is saved, according to an OpenAI page explaining user controls.

Airbnb CEO says ChatGPT isn’t ready

Airbnb Inc. Chief Executive Officer Brian Chesky said he didn’t integrate his company’s online travel app with OpenAI’s ChatGPT because the startup’s connective tools aren’t “quite ready” yet.

Airbnb will monitor the development of ChatGPT’s app integrations and may consider a tie-up in the future similar to those of its peers Booking Holdings Inc. and Expedia Group Inc., Chesky said in an interview.

“I didn’t think it was quite ready,” he said of ChatGPT’s integration abilities.

Because Airbnb is a community with verified members, OpenAI will have to build a platform so robust that Airbnb’s app can work within the ChatGPT chatbot in an “almost self-contained” manner, Chesky said.

Chesky, who is close friends with OpenAI CEO Sam Altman, said he advised the AI company on its new capability for third-party developers to make their apps available within the ChatGPT chatbot. The AI company announced those features earlier this month. Airbnb wasn’t among the first apps that are available on the popular chatbot.

An OpenAI spokesperson declined to comment on Chesky’s remarks, but referred to the company’s blog post earlier this month that described the app integration technology as a developer preview, with more features coming soon.

While Airbnb has set aside a possible integration with ChatGPT, the company on Tuesday announced that it had updated its in-app artificial intelligence tools to let customers take more actions without needing a live representative.

The company’s AI customer service agent, which it rolled out to all US users in English in May, now displays action buttons and links that can help people complete, say, a reservation change or cancellation.

That has led to a 15% reduction in users needing a live representative, cutting average resolution time to six seconds from nearly three hours, Airbnb said. The company plans to add Spanish and French language support this fall, and 56 more languages next year.

The agent is built upon 13 different AI models, including those from OpenAI, Alibaba Group Holding Ltd., Alphabet Inc.’s Google and open source providers, Chesky said.

“We’re relying a lot on Alibaba’s Qwen model. It’s very good. It’s also fast and cheap,” he said. “We use OpenAI’s latest models, but we typically don’t use them that much in production because there are faster and cheaper models.”

Airbnb, which expanded its business beyond accommodations into tours and individual services earlier this year, also is adding new social features to encourage user connections and eventually make better travel recommendations within the app.

The company unveiled an option for guests to share their Airbnb profile with other travelers after they book an experience. Users who have gone on the same tours can also now directly message one another, with privacy safeguards in place: the conversation can continue only if the recipient accepts a message request, Airbnb said.

More social features are coming next year, and Chesky said that longer term these features could lend themselves to user-generated content on the app, where people can seek travel inspiration without leaving the Airbnb site.

“I think the social features, the community, that’s probably the most differentiated part of Airbnb,” he said. “People are the reason why I think Airbnb is such a sticky service.”

Lung writes for Bloomberg.

Hollywood-AI battle heats up, as OpenAI and studios clash over copyrights and consent

A year after tech firm OpenAI roiled Hollywood with the release of its Sora AI video tool, Chief Executive Sam Altman was back — with a potentially groundbreaking update.

Unlike the generic images Sora could initially create, the new program allows users to upload videos of real people and put them into AI-generated environments, complete with sound effects and dialogue.

In one video, a synthetic Michael Jackson takes a selfie video with an image of “Breaking Bad” star Bryan Cranston. In another, a likeness of SpongeBob SquarePants speaks out from behind the White House’s Oval Office desk.

“Excited to launch Sora 2!” Altman wrote on social media platform X on Sept. 30. “Video models have come a long way; this is a tremendous research achievement.”

But the enthusiasm wasn’t shared in Hollywood, where the new AI tools have created a swift backlash. At the core of the dispute is who controls the copyrighted images and likenesses of actors and licensed characters — and how much they should be compensated for their use in AI models.

The Motion Picture Assn. trade group didn’t mince words.

“OpenAI needs to take immediate and decisive action to address this issue,” Chairman Charles Rivkin said in a statement Monday. “Well-established copyright law safeguards the rights of creators and applies here.”

By the end of the week, multiple agencies and unions, including SAG-AFTRA, chimed in with similar statements, marking a rare moment of consensus in Hollywood and putting OpenAI on the defensive.

“We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2,” Varun Shetty, OpenAI’s vice president of media partnerships, said in a statement. “Many are creating original videos and excited about interacting with their favorite characters, which we see as an opportunity for rightsholders to connect with fans and share in that creativity.”

For now, the skirmish between well-capitalized OpenAI and the major Hollywood studios and agencies appears to be just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business.

“The question is less about if the studios will try to assert themselves, but when and how,” said Anthony Glukhov, senior associate at law firm Ramo, of the clash between Silicon Valley and Hollywood over AI. “They can posture all they want; but at the end of the day, there’s going to be two titans battling it out.”

Before it became the focus of ire in the creative community, OpenAI quietly tried to make inroads into the film and TV business.

The company’s executives went on a charm offensive last year. They reached out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology.

This year, the San Francisco-based AI startup took a more assertive approach.

Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they needed to explicitly declare which pieces of intellectual property — including licensed characters — they wanted opted out of being depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said.

OpenAI disputes the claim and says that it was always the company’s intent to give actors and other public figures control over how their likeness is used.

The response was immediate.

Beverly Hills talent agency WME, which represents stars such as Michael B. Jordan and Oprah Winfrey, told OpenAI its actions were unacceptable, and that all of its clients would be opting out.

Creative Artists Agency and United Talent Agency also argued that their clients had the right to control and be compensated for their likenesses.

Studios, including Warner Bros., echoed the point.

“Decades of enforceable copyright law establishes that content owners do not need to ‘opt out’ to prevent infringing uses of their protected IP,” Warner Bros. Discovery said in a statement. “As technology progresses and platforms advance, the traditional principles of copyright protection do not change.”

Unions, including SAG-AFTRA — whose members were already alarmed over the recent appearance of a fake, AI-generated composite named Tilly Norwood — also expressed alarm.

“OpenAI’s decision to honor copyright only through an ‘opt-out’ model threatens the economic foundation of our entire industry and underscores the stakes in the litigation currently working through the courts,” newly elected President Sean Astin and National Executive Director Duncan Crabtree-Ireland said in a statement.

The dispute underscores a clash of two very different cultures. On one side is the brash, Silicon Valley “move fast and break things” ethos, where asking for forgiveness is seen as preferable to asking for permission. On the other is Hollywood’s eternal wariness over the effect of new technology, and its desire to retain control over increasingly valuable intellectual property rights.

“The difficulty, as we’ve seen, is balancing the capabilities with the prior rights owned by other people,” said Rob Rosenberg, a partner with law firm Moses and Singer LLP and a former Showtime Networks general counsel. “That’s what was driving the entire entertainment industry bonkers.”

Amid the outcry, Sam Altman posted on his blog days after the Sora 2 launch that the company would be giving more granular controls to rights holders and is working on a way to compensate them for video generation.

OpenAI said it has guardrails to block the generation of well-known characters and a team of reviewers who are taking down material that doesn’t follow its updated policy. Rights holders can also request removal of content.

The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content they need, legal experts said.

Existing law is clear — a copyright holder has full control over their copyrighted material, said Ray Seilie, entertainment litigator at law firm Kinsella Holley Iser Kump Steinsapir.

“It’s not your job to go around and tell other people to stop using it,” he said. “If they use it, they use it at their own risk.”

Disney, Universal and Warner Bros. Discovery have previously sued AI firms MiniMax and Midjourney, accusing them of copyright infringement.

One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don’t believe a flat fee works.

“Bring monetization that is not a one size fits all,” said Dan Neely, chief executive of Chicago-based Vermillio, which works with Hollywood talent and studios and protects how their likenesses and characters are used in AI. “That’s what will move the needle for talent and studios.”

Visiting journalist Nilesh Christopher contributed to this report.
