
California backs down on AI laws so more tech leaders don’t flee the state

California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

The tactic appeared to work, activists said: some politicians weakened or scrapped guardrails meant to mitigate AI’s biggest risks.

California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year warning that over-regulation could stifle innovation and hold back California’s economy.

“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings to the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

The threat that California companies could move away has caught the attention of some politicians.

California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

Critics, including Elon Musk, Meta, former OpenAI executives, nonprofits and foundations, have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.

Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, removed their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which would have required employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

Called the “No Robo Bosses Act,” the legislation failed to win the governor’s approval; he considered it too broad.

“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”


Tech companies under pressure as California governor weighs AI bills

California lawmakers want Gov. Gavin Newsom to approve bills they passed that aim to make artificial intelligence chatbots safer. But as the governor weighs whether to sign the legislation into law, he faces a familiar hurdle: objections from tech companies that say new restrictions would hinder innovation.

California companies are world leaders in AI and have spent hundreds of billions of dollars to stay ahead in the race to create the most powerful chatbots. The rapid pace has alarmed parents and lawmakers worried that chatbots are harming the mental health of children by exposing them to self-harm content and other risks.

Parents who allege chatbots encouraged their teens to harm themselves before they died by suicide have sued tech companies such as OpenAI, Character Technologies and Google. They’ve also pushed for more guardrails.

Calls for more AI regulation have reverberated throughout the nation’s capital and various states. Even as the Trump administration’s “AI Action Plan” proposes to cut red tape to encourage AI development, lawmakers and regulators from both parties are tackling child safety concerns surrounding chatbots that answer questions or act as digital companions.

California lawmakers this month passed two AI chatbot safety bills that the tech industry lobbied against. Newsom has until mid-October to approve or reject them.

The high-stakes decision puts the governor in a tricky spot. Politicians and tech companies alike want to assure the public they’re protecting young people. At the same time, tech companies are trying to expand the use of chatbots in classrooms and have opposed new restrictions they say go too far.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988, the United States’ first nationwide three-digit mental health crisis hotline, which connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

Meanwhile, if Newsom runs for president in 2028, he might need more financial support from wealthy tech entrepreneurs. On Sept. 22, Newsom promoted the state’s partnerships with tech companies on AI efforts and touted how the tech industry has fueled California’s economy, calling the state the “epicenter of American innovation.”

He has vetoed AI safety legislation in the past, including a bill last year that divided Silicon Valley’s tech industry; the governor said it would give the public a “false sense of security.” But he also signaled that he’s trying to strike a balance between addressing safety concerns and ensuring California tech companies continue to dominate in AI.

“We have a sense of responsibility and accountability to lead, so we support risk-taking, but not recklessness,” Newsom said at a discussion with former President Clinton at a Clinton Global Initiative event on Wednesday.

Two bills sent to the governor — Assembly Bill 1064 and Senate Bill 243 — aim to make AI chatbots safer but face stiff opposition from the tech industry. It’s unclear if the governor will sign both bills. His office declined to comment.

AB 1064 bars a person, business or other entity from making companion chatbots available to a California resident under the age of 18 unless the chatbot isn’t “foreseeably capable” of harmful conduct such as encouraging a child to engage in self-harm, violence or disordered eating.

SB 243 requires operators of companion chatbots to notify certain users that the virtual assistants aren’t human.

Under the bill, chatbot operators would have to maintain procedures to prevent the production of suicide or self-harm content and put guardrails in place, such as referring users to a suicide hotline or crisis text line.

They would be required to remind minor users at least every three hours to take a break and to disclose that the chatbot is not human. Operators would also be required to implement “reasonable measures” to prevent companion chatbots from generating sexually explicit content.

Tech lobbying group TechNet, whose members include OpenAI, Meta, Google and others, said in a statement that it “agrees with the intent of the bills” but remains opposed to them.

AB 1064 “imposes vague and unworkable restrictions that create sweeping legal risks, while cutting students off from valuable AI learning tools,” said Robert Boykin, TechNet’s executive director for California and the Southwest, in a statement. “SB 243 establishes clearer rules without blocking access, but we continue to have concerns with its approach.”

A spokesperson for Meta said the company has “concerns about the unintended consequences that measures like AB 1064 would have.” The tech company launched a new Super PAC to combat state AI regulation that the company thinks is too burdensome, and is pushing for more parental control over how kids use AI, Axios reported on Tuesday.

Opponents led by the Computer & Communications Industry Assn. lobbied aggressively against AB 1064, arguing it would threaten innovation and disadvantage California companies, which would face more lawsuits and have to decide whether to continue operating in the state.

Advocacy groups, including Common Sense Media, a nonprofit that sponsored AB 1064 and recommends that minors shouldn’t use AI companions, are urging Newsom to sign the bill into law. California Atty. Gen. Rob Bonta also supports the bill.

The Electronic Frontier Foundation said SB 243 is too broad and would run into free-speech issues.

Several groups, including Common Sense Media and Tech Oversight California, removed their support for SB 243 after changes were made to the bill, which they said weakened protections. Some of the changes limited who receives certain notifications and included exemptions for certain chatbots in video games and virtual assistants used in smart speakers.

Lawmakers who introduced chatbot safety legislation want the governor to sign both bills, arguing that they can both “work in harmony.”

Sen. Steve Padilla (D-Chula Vista), who introduced SB 243, said that even with the changes he still thinks the new rules will make AI safer.

“We’ve got a technology that has great potential for good, is incredibly powerful, but is evolving incredibly rapidly, and we can’t miss a window to provide commonsense guardrails here to protect folks,” he said. “I’m happy with where the bill is at.”

Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said her bill balances the benefits of AI while safeguarding against the dangers.

“We want to make sure that when kids are engaging with any chatbot that it is not creating an unhealthy emotional attachment, guiding them towards suicide, disordered eating, any of the things that we know are harmful for children,” she said.

During the legislative session, lawmakers heard from grieving parents who lost their children. AB 1064 highlights two high-profile lawsuits: one against San Francisco-based ChatGPT maker OpenAI and another against Character Technologies, the developer of chatbot platform Character.AI.

Character.AI is a platform where people can create and interact with digital characters that mimic real and fictional people. Last year, Florida mom Megan Garcia alleged in a federal lawsuit that Character.AI’s chatbots harmed the mental health of her son Sewell Setzer III and accused the company of failing to notify her or offer help when he expressed suicidal thoughts to virtual characters.

More families sued the company this year. A Character.AI spokesperson said they care very deeply about user safety and “encourage lawmakers to appropriately craft laws that promote user safety while also allowing sufficient space for innovation and free expression.”

In August, the California parents of Adam Raine sued OpenAI, alleging that ChatGPT provided the teen information about suicide methods, including the one the teen used to kill himself.

OpenAI said it’s strengthening safeguards and plans to release parental controls. Its chief executive, Sam Altman, wrote in a September blog post that the company believes minors need “significant protections” and the company prioritizes “safety ahead of privacy and freedom for teens.” The company declined to comment on the California AI chatbot bills.

To California lawmakers, the clock is ticking.

“We’re doing our best,” Bauer-Kahan said. “The fact that we’ve already seen kids lose their lives to AI tells me we’re not moving fast enough.”


‘Extremely chaotic.’ Tech industry rattled by Trump’s $100,000 H-1B visa fee

President Trump’s new sky-high visa fees have shaken Silicon Valley’s tech giants as they contemplate a surge in the cost of hiring global talent and a new tactic the White House can use to keep the industry in line.

The tech industry was already navigating an economy with higher and unpredictable tariffs when, last week, the Trump administration threw another curveball aimed directly at its bottom line: a $100,000 fee for the visas used to hire certain skilled foreign workers. The industry relies heavily on the H-1B visa program to bring in a wide range of engineers, coders and other top talent to the United States.

The rollout has sparked confusion among businesses, immigration lawyers and current H-1B visa holders.

Over the weekend, the Trump administration clarified that the new fee applies only to new visas, isn’t annual and doesn’t prevent current H-1B visa holders from traveling in and out of the country. Companies would have to pay the fee with any new H-1B visa petitions submitted after a specific time on Sept. 21, the White House said.

On Monday, the Trump administration also clarified that certain professions, such as doctors, may be exempt from the fee. Some observers are concerned that selective application of the fee could become a way for the White House to reward its friends and punish its detractors.

Meta, Apple, Google, Amazon and Microsoft have been strengthening their ties with the Trump administration by committing to invest hundreds of billions of dollars in the United States.

Still, immigration has long been a contentious issue between the Trump administration and tech executives, some of whom were on an H-1B visa before they co-founded or led some of the world’s largest tech companies.

One of the most vocal supporters of H-1B visas: Elon Musk, who backed Trump but has publicly sparred with him after leading the federal government’s efforts to slash spending. Musk, who runs multiple companies, including Tesla, SpaceX and xAI, is a naturalized U.S. citizen born in South Africa and has held an H-1B visa.

Tech executives have said the H-1B visa program has been crucial for hiring skilled workers. Competition to attract the world’s best talent has been intensifying since the popularity of OpenAI’s ChatGPT sparked a fierce race to rapidly advance artificial intelligence.

The new fee could slow development in California and weaken the United States’ position in the AI race by making it tougher for companies, especially startups with less money, to bring in international employees, experts said.

So far this fiscal year, more than 7,500 companies in California have applied for H-1B visas and 61,841 have been approved, data from the U.S. Citizenship and Immigration Services shows.

Tech companies use the visa program to hire computer scientists and engineers because the U.S. isn’t producing enough workers with the skills needed, said Darrell West, a senior fellow in the Center for Technology Innovation at the Brookings Institution.

Trump “likes to talk tough on immigration, but he fails to recognize how important immigrants are to our economy,” he said. “Companies in technology, agriculture, hotels, restaurants and construction rely heavily on immigrants, and slowing that flow is going to be devastating for companies in those areas.”

In the executive order, the Trump administration alleged that some companies, such as information technology firms, have misused the program, citing mass layoffs in the tech industry and the difficulty young college graduates face in landing jobs.

“President Trump promised to put American workers first, and this commonsense action does just that by discouraging companies from spamming the system and driving down American wages,” Taylor Rogers, a White House spokesperson, said in a statement.

Economists and tech executives, though, have pointed to other factors affecting hiring, including economic uncertainty from tariffs, a shift in investments and the rise of AI tools that can complete tasks typically handled by entry-level workers.

California’s unemployment rate of 5.5% in August was higher than the U.S. unemployment rate of 4.3%, according to the U.S. Bureau of Labor Statistics.

The rollout of the new changes has been “extremely chaotic,” and while the White House has tried to clear up some of the confusion, tech companies still have a lot of questions about how the fee would work, said Adam Kovacevich, chief executive of the Chamber of Progress, a center-left tech industry policy coalition.

“You never know what you’re gonna end up with the final policy in Trump world,” he said. “Somebody within the administration drives an announcement, there’s blowback, and then they end up modifying their plans.”

Tech companies have been trying to navigate a fine line in their relationship with Trump.

During Trump’s first term, high-profile tech executives, including those from Meta, Amazon, Google and Apple, spoke out about his administration’s order to restrict travel from several majority-Muslim countries. But in his second term, those same executives have cozied up to the Trump administration as they seek to influence AI policy and strike lucrative partnerships with the government.

They’ve contributed to his inauguration fund, appeared at high-profile press events, and attended a White House dinner, where Trump asked them how much they’re investing in the United States.

Microsoft declined to comment. Meta, Google and Apple didn’t immediately respond to a request for comment.

Changes to the H-1B program could also worsen relations with other countries, such as India, that send skilled tech workers to the U.S., experts said.

Indian nationals are the largest beneficiaries of the H-1B visa program, accounting for 71% of approved petitions, followed by those from China, at approximately 12%.

Some Indian venture capitalists and research institutes see a silver lining in this murky future. On social media, some have posted that the uncertainty surrounding H-1B visa rules could encourage talented engineers to return home to build startups, thereby fueling India’s tech sector. That would mean more competition for U.S. tech companies.

Kunal Bahl, an Indian tech investor and entrepreneur, posted “Come, build in India!” on social media. His firm, Titan Capital, launched a seed funding and mentorship program aimed at attracting students and professionals rethinking their future in the U.S. after the visa troubles.

Global tech companies might also consider opening more centers abroad where workers can work remotely and not have to move to the U.S., said Phil Fersht, the founder and chief executive of HFS Research.

“The more the U.S. makes itself a less attractive place to bring in talent,” he said, “the more it is going to harm its economy.”
