Technology

EU Mulls Pausing Parts of AI Act Amid U.S. and Big Tech Pushback

The European Commission is reportedly considering delaying parts of its landmark Artificial Intelligence (AI) Act following heavy lobbying from U.S. tech giants and pressure from Washington, the Financial Times reported Friday. The proposed pause would affect select provisions of the legislation, which came into force in August 2024 but is being implemented in stages.

Why It Matters:

The AI Act is the world’s first comprehensive framework regulating artificial intelligence, setting strict rules on transparency, safety, and ethical use. Any delay could dilute Europe’s claim to global leadership in AI governance and highlight the growing influence of U.S. tech companies and policymakers in shaping international digital standards. The move also comes as the EU seeks to avoid trade tensions with the Trump administration.

Tech firms like Meta and Alphabet have long argued the law could stifle innovation and competitiveness. The European Commission previously rejected calls for a pause, insisting the rollout would proceed on schedule.

However, an EU spokesperson told the FT that officials are now discussing “targeted implementation delays” while reaffirming support for the act’s core objectives. The Commission and U.S. officials have reportedly been in talks as part of a broader “simplification process” ahead of a November 19 adoption date.

What’s Next:

No final decision has been made, but if adopted, the pause could push back compliance deadlines for some high-risk AI systems. The EU is expected to clarify its position later this month amid growing scrutiny from lawmakers, digital rights advocates, and international partners.

With information from Reuters.


California backs down on AI laws so more tech leaders don’t flee the state

California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

The tactic appeared to have worked, activists said, because some politicians weakened or scrapped guardrails to mitigate AI’s biggest risks.

California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings with the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, making up the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

The threat that California companies could move away has caught the attention of some politicians.

California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

Critics — which included some tech leaders such as Elon Musk, Meta and former OpenAI executives as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over public good.

Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, removed their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

Newsom vetoed other legislation that the tech industry opposed, including Senate Bill 7, which requires employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

Called the “No Robo Bosses Act,” the legislation failed to win the governor’s signature; he considered it too broad.

“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”


Automating Oppression: How AI Firms and Governments Rewire Democracy

Authors: Christopher Jackson and Aaron Spitler*

Digital technologies, particularly AI, are accelerating democratic backsliding and revitalizing authoritarian governments. AI-focused companies have been forming close partnerships with government actors, often in ways that undermine democratic norms. Around the world, private firms are supplying or co-designing technologies that enhance mass surveillance, predictive policing, propaganda campaigns, and online censorship. In places like China, Russia, and Egypt, a blurring of boundaries between the state and the technology industry has led to serious consequences. This collusion has undercut privacy rights, stifled civil society, and diminished public accountability.

This dynamic is now playing out in the United States. Companies like Palantir and Paragon Solutions are providing government entities with powerful AI tools and analytics platforms, often under opaque contracts. In September, U.S. President Donald Trump approved the sale of TikTok to U.S. private entities friendly with the administration. Unchecked public-private integration within the technology industry poses serious risks for democratic societies, namely that it offers increased power to unaccountable actors. This article examines case studies of how these emerging alliances are enabling authoritarian practices, as well as what they might mean for the future of democratic societies.

Russia: Manipulating Digital Tools

In Russia, democratic norms under Vladimir Putin have eroded while Russian tech companies continue to work hand in glove with state authorities. Sberbank, the country’s largest financial institution, illustrates this long-running trend through its development of Kandinsky 2.1, an AI-powered text-to-image tool.

Despite producing outputs whose quality rivaled those of tools like DALL-E, the model came under fire in 2023 from veteran lawmaker Sergey Mironov, who argued that it generated images that tarnished Russia’s image. He went on to charge that Kandinsky 2.1 was designed by “unfriendly states waging an informational and mental war” against the country.

Not long after, some in the tech space noticed that Kandinsky 2.1’s outputs changed. For instance, while the tool previously churned out images of zombies when prompted with “Z Patriot,” users noted that it now repeatedly produced pictures of hyper-masculine figures. Critics claim that this alteration not only represented an overt manipulation of the technology itself but also an attempt to curry favor with those in the government.

This episode shows how AI-powered tools are being specifically tailored to serve the needs of authorities. The modifications made to the model transformed it into an invaluable resource the government could use to amplify its messaging. As a result, users are no longer likely to see Kandinsky 2.1 as a tool for creativity, particularly if its outputs remain blatantly skewed. Developers in countries like Russia may look to this case for inspiration on how to succeed in restrictive political contexts.

United States: Supercharging Mass Surveillance

AI-centric firms in the United States have also taken note. Palantir Technologies stands as the most prominent example of how private technology firms can deepen government surveillance capabilities in ways that test the limits of democratic accountability. The firm, established in the wake of 9/11, has expanded its domestic footprint through lucrative contracts with local police departments and, most notably, Immigration and Customs Enforcement (ICE).

Investigations reveal that Palantir’s software enables ICE agents to compile and cross-reference vast amounts of personal data, from Department of Motor Vehicle (DMV) records and employment information to social media activity and utility bills. This capability gives the government a unique opportunity to build detailed profiles on individuals and their community networks. This has helped facilitate deportations and raids on immigrant communities. Critics argue that Palantir’s tools create a dragnet that vastly expands state power, all while shielding the company and its government clients from public oversight.

Beyond immigration enforcement, Palantir’s Gotham platform has been adopted by police departments for predictive policing initiatives, which attempt to forecast locations and suspects for crimes. Civil liberties groups have warned that such uses reinforce systemic biases by encoding discriminatory policing practices into algorithmic decision-making. Predictive policing algorithms inherit bias because they rely on historical data shaped by discriminatory over-policing of Black communities, among others. Scholars of “surveillance capitalism” also note that these partnerships normalize the commodification of personal data for state security purposes.

The deeper concern lies in how this private-public nexus erodes societal trust and transparency. Unlike government agencies bound by Freedom of Information Act (FOIA) requirements, companies like Palantir operate under corporate secrecy, limiting democratic oversight of technologies that profoundly affect civil rights. In this sense, the Palantir case illustrates how authoritarian-style practices, combined with technological breakthroughs, can be incubated within democratic societies and later contribute to their overall decline.

Challenging Anti-Democratic Alliances

The deepening collaboration between AI firms and authorities in developing repressive technologies is alarming. Across the globe, these partnerships have flourished, often to the detriment of average citizens. The examples of Russia and the United States underline how AI firms have been willing and able to work with governments engaging in repression when convenient, leaving the public in the lurch.

Advocates for democracy must educate themselves on how to combat the misuse of AI. Leaders in civil society, for example, could build up their technical knowledge as a starting point. Capacity-building may also have the bonus of enabling pro-democracy groups to create their own AI solutions that support civic accountability actions. Activities like these may provide a counterbalance to corporate-state collusion that places citizens at a disadvantage. It may also help ensure that AI tools are designed in ways that strengthen democracies, not undermine them.

*Aaron Spitler is a researcher whose interests lie at the intersection of human rights, democratic governance, and digital technologies. He has worked with numerous organizations in this space, from the International Telecommunication Union (ITU) to the International Republican Institute (IRI). He is passionate about ensuring technology can be a force for good. You can reach him on LinkedIn


Australia adds Reddit, Kick to social media ban for children under 16

Australia’s upcoming social media ban for children under 16 years old will include the online forum Reddit and livestreaming platform Kick in addition to seven other well-known sites, according to the country’s online safety commissioner.

The social media ban will go into effect on December 10 and will also restrict access to Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube, Communications Minister Anika Wells said on Wednesday.


“Online platforms use technology to target children with chilling control. We are merely asking that they use that same technology to keep children safe online,” Wells said.

“We have met with several of the social media platforms in the past month so that they understand there is no excuse for failure to implement this law,” Wells told reporters in Canberra.

“We want children to have a childhood, and we want parents to have peace of mind,” she said.

Social media platforms have had 12 months to prepare for the ban since Australia passed its landmark online safety legislation in November last year.

Initial discussions focused primarily around Facebook, Instagram, Snapchat, TikTok, X and YouTube, but the list was later expanded, and Wells said the list could continue to change.

While more than 140 Australian and international academics signed an open letter to Prime Minister Anthony Albanese last year opposing the age limit ban as a “blunt” instrument, Canberra’s move is being closely watched by countries that share concerns about the impacts of online platforms on children.

“Delaying children’s access to social media accounts gives them valuable time to learn and grow, free of the powerful, unseen forces of harmful and deceptive design features such as opaque algorithms and endless scroll,” eSafety Commissioner Julie Inman Grant said.

Inman Grant said she would work with academics to evaluate the impact of the ban, including whether children sleep or interact more or become more physically active as a result of the restrictions on using social media.

“We’ll also look for unintended consequences, and we’ll be gathering evidence” so others can learn from Australia’s ban, Inman Grant said.

Critics have questioned how the restrictions will be enforced because users cannot be “compelled” to submit government IDs for an age check, according to a government fact sheet.

Discussions are under way with platforms about how to comply with the new rules, the commissioner said, while failure to comply could lead to civil fines of up to 49.5 million Australian dollars (US$32.1m).

TikTok investigated over youth suicide

News that Australia would add more names to the list of banned platforms came as French authorities said they had opened an investigation into the social media platform TikTok and the risks of its algorithms pushing young people into suicide.

Paris prosecutor Laure Beccuau said the probe was in response to a parliamentary committee’s request to open a criminal inquiry into TikTok’s possible responsibility for endangering the lives of its young users.

Beccuau said a report by the committee had noted “insufficient moderation of TikTok, its ease of access by minors and its sophisticated algorithm, which could push vulnerable individuals towards suicide by quickly trapping them in a loop of dedicated content”.

TikTok did not immediately respond to a request for comment.

The Paris police cybercrime unit will look into the offence of providing a platform for “propaganda in favour of products, objects, or methods recommended as means of committing suicide”, which is punishable by three years in prison.

The unit will also look into the offence of enabling “illegal transactions by an organised gang”, punishable by 10 years in prison and a fine of 1 million euros ($1.2m).

With more than 1.5 billion users worldwide, TikTok, owned by China-based ByteDance, has come under fire from governments in Europe and the United States in recent years.

Concerns raised over the platform have included content encouraging suicide, self-harm or an unhealthy body image as well as its potential use for foreign political interference.

A TikTok spokesman told the French news agency AFP in September that the company “categorically rejects the deceptive presentation” by French MPs, saying it was being made a “scapegoat” for broader societal issues.


Scientists watch flare with 10 trillion suns’ light from massive black hole

The burst of energy was likely triggered when an unusually large star wandered too close to the black hole.

Scientists have documented the most energetic flare ever observed emanating from a supermassive black hole, a cataclysmic event that briefly shone with the light of 10 trillion suns.

The new findings were published on Tuesday in the journal Nature Astronomy, with astronomer Matthew Graham of the California Institute of Technology (Caltech) leading the study.


The phenomenal burst of energy was likely triggered when an unusually large star wandered too close to the black hole and was violently shredded and swallowed.

“However it happened, the star wandered close enough to the supermassive black hole that it was ‘spaghettified’ – that is, stretched out to become long and thin, due to the gravity of the supermassive black hole strengthening as you get very close to it. That material then spiralled around the supermassive black hole as it fell in,” said astronomer and study co-author KE Saavik Ford.

The flare was unleashed by a black hole roughly 300 million times the mass of the sun residing inside a faraway galaxy, about 11 billion light years from Earth. A light year is the distance light travels in a year, 5.9 trillion miles (9.5 trillion km).

The star, estimated to be between 30 and 200 times the mass of the sun, was turned into a stream of gas that heated up and shined intensely as it spiralled into oblivion.

Almost every large galaxy, including our Milky Way, has a supermassive black hole at its centre. But scientists still aren’t sure how they form.

First spotted in 2018 by the Palomar Observatory, operated by Caltech, the flare took about three months to reach its peak brightness, becoming roughly 30 times more luminous than any previously recorded event of its kind. It is still ongoing, but diminishing in luminosity, with the entire process expected to take about 11 years to complete.

Because of how far away the black hole is located, observing the flash gives scientists a rare glimpse into the universe’s early epoch. Studying these immense, distant black holes helps researchers better understand how they form, how they influence their local stellar neighbourhoods, and the fundamental interactions that shaped the cosmos we know today.


France threatens to ban Shein for sale of ‘childlike’ sex doll

The French government is threatening to ban Chinese retailer Shein for selling a “childlike” sex doll online. Shein is scheduled to open its first store in Paris soon. File Photo by Hannibal Hanschke/EPA

Nov. 3 (UPI) — The French government threatened to ban Chinese retailer Shein for selling a “childlike” sex doll online.

France’s consumer fraud agency received an anonymous tip about the dolls on the site. Their “description and categorization on the site leave little doubt as to the child pornography nature of the content,” said a press release issued Saturday by the French Directorate General for Competition Policy, Consumer Affairs and Fraud Control.

One of the ads on Shein, first reported by Le Parisien newspaper, showed a life-size doll of a little girl wearing a white dress and holding a teddy bear. The description clearly states its intended use.

“This has crossed a line,” France’s economy minister, Roland Lescure, said in an interview with French radio, adding that a formal investigation was underway, The New York Times reported. “These horrible objects are illegal.”

The company issued a statement saying it removed the items.

“We take this situation extremely seriously,” Quentin Ruffat, a spokesperson for Shein France, told BFMTV, a French TV channel. “This type of content is completely unacceptable and goes against all the values we stand for. We are taking immediate corrective action and strengthening our internal mechanisms to prevent such a situation from happening again.”

Shein will soon open a store at BHV Marais, a department store in Paris. But in the wake of the doll discovery, employees have protested the move, and some French cosmetics and clothing brands have pulled their items from BHV Marais.

Société des Grands Magasins is the French company that is helping Shein move into the French market. It’s the parent company of BHV Marais. SGM President Frédéric Merlin said in an Instagram post that SGM “obviously condemns the recent events related to the doll controversy. Like everyone else, I expect clear answers from SHEIN.” But he said it hasn’t changed his plans. “I have decided not to reverse my decision, despite the controversy and the pressure because we’re doing things by the book, with ethics and transparency.”

The consumer fraud agency noted that distributing representations of a pedopornographic nature via an electronic communications network is punishable by up to seven years’ imprisonment and a fine of $115,000. The statement also alleges that Shein doesn’t effectively filter out pornographic content to protect minors or vulnerable audiences.

For that offense, the law allows penalties of up to three years in prison and a fine of $86,000.


OpenAI, Amazon sign $38bn AI deal

The announcement comes less than a week after Amazon laid off 14,000 people.

OpenAI has signed a new deal valued at $38bn with Amazon that will allow the artificial intelligence giant to run AI workloads across Amazon Web Services (AWS) cloud infrastructure.

The seven-year deal, announced on Monday, is the e-commerce giant’s first big AI push since OpenAI’s restructuring last week.


The new deal will give the ChatGPT maker access to thousands of Nvidia graphics processors to train and run its artificial intelligence models.

Experts say the deal does not give OpenAI any new ability to train its models on websites hosted by AWS – which include the websites of The New York Times, Reddit and United Airlines.

“Running OpenAI training inside AWS doesn’t change their ability to scrape content from AWS-hosted websites [which they could already do for anything publicly readable]. This is strictly speaking about the economics of rent vs buy for GPU [graphics processing unit] capacity,” Joshua McKenty, CEO of the AI detection company PolyguardAI, told Al Jazeera.

The deal is also a major vote of confidence for the e-commerce giant’s cloud unit, AWS, which some investors feared had fallen behind rivals Microsoft and Google in the artificial intelligence (AI) race. Those fears were somewhat eased by the strong growth the business reported in the September quarter.


OpenAI will begin using AWS immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond.

Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses and train OpenAI’s next wave of models, the companies said.

Amazon already offers OpenAI models on Amazon Bedrock, its service that provides multiple AI models for businesses using AWS.

OpenAI’s sweeping restructuring last week moved it further away from its non-profit roots and also removed Microsoft’s right of first refusal to supply cloud services under the new arrangement.

Image hurdles

Amazon’s announcement of a major AI investment comes only days after the company laid off 14,000 people, though CEO Andy Jassy said in an earnings call on Thursday that the layoffs were not driven by AI.

“The announcement that we made a few days ago was not really financially driven, and it’s not even really AI-driven, not right now at least,” Jassy said.

OpenAI CEO Sam Altman has said the startup is committed to spending $1.4 trillion to develop 30 gigawatts of computing resources – roughly enough to power 25 million United States homes.

“Scaling frontier AI requires massive, reliable compute,” said Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

This comes amid growing concerns about the sheer amount of energy demand that AI data centres need to operate. The Lawrence Berkeley National Laboratory estimates that AI data centres will use up to 12 percent of US electricity by 2028.

An AP/NORC poll from October found that 41 percent of Americans are extremely concerned about AI’s impact on the environment, while another 30 percent say they are somewhat concerned as the industry increases its data centre footprint around the US.

Signs of a bubble

Surging valuations of AI companies and their massive spending commitments, which total more than $1 trillion for OpenAI, have raised fears that the AI boom may be turning into a bubble.

OpenAI has already tapped Alphabet’s Google to supply it with cloud services, as Reuters reported in June. It also reportedly struck a deal to buy $300bn in computing power for about five years.

While OpenAI’s relationship with Microsoft, which the two forged in 2019, has helped push Microsoft to the top spot among its Big Tech peers in the AI race, both companies have been making moves recently to reduce reliance on each other.

Neither OpenAI nor Amazon was immediately available for comment.

On Wall Street, Amazon’s stock is surging on the news of the new deal. As of 11:15am in New York (16:15 GMT), it is up by 4.7 percent.


Trump Bars China from Nvidia’s Top AI Chips

U.S. President Donald Trump announced that Nvidia’s most advanced artificial intelligence chips, known as Blackwell, will be reserved exclusively for U.S. companies. Speaking on CBS’ “60 Minutes” and aboard Air Force One, Trump said, “We will not let anybody have them other than the United States.”
This declaration signals a hard turn in U.S. tech policy, potentially going beyond previous export controls designed to curb China’s access to high-end AI semiconductors.

Why It Matters

The decision could reshape the global AI race. Nvidia’s Blackwell chips are the backbone of next-generation AI systems, from large language models to autonomous weapons. By blocking access for China, and possibly even for U.S. allies, Washington is seeking to maintain a decisive technological lead.
However, the move could also strain trade ties, disrupt supply chains, and challenge U.S. allies like South Korea and Japan, which rely on American chips for innovation and competitiveness.

China Hawks in Washington: Applauded the move. Rep. John Moolenaar compared allowing China access to the chips to “giving Iran weapons-grade uranium.”

China: Beijing has remained publicly quiet, though the move will likely be seen as another escalation in the U.S.-China tech war.

Nvidia: CEO Jensen Huang said the company has not sought export licenses for China, citing Beijing’s current unwillingness to engage with Nvidia. However, Huang warned that global restrictions could hurt U.S.-based R&D funding.

Allies: The statement comes just days after Nvidia announced plans to supply over 260,000 Blackwell chips to South Korea’s Samsung and other tech giants, casting doubt over whether those deals will proceed.

What’s Next

The Trump administration may soon issue new export rules formalizing these restrictions. Analysts expect a clearer framework distinguishing between “advanced” and “scaled-down” versions of Nvidia’s chips, determining what, if anything, can be sold abroad.
The decision also raises the stakes ahead of Trump’s next expected talks with Chinese President Xi Jinping, with AI dominance likely to top the agenda in future U.S.-China negotiations.

With information from Reuters.

Source link

Nexperia Halts Wafer Supplies to China, Deepening Global Chip Supply Turmoil

Dutch chipmaker Nexperia has suspended wafer shipments to its Chinese assembly plant in Dongguan, a move that could intensify the semiconductor supply crunch already rattling automakers worldwide.

The suspension, revealed in a company letter dated October 29 and signed by interim CEO Stefan Tilger, followed the Chinese unit’s failure to meet contractual payment terms. It comes amid escalating tensions after the Dutch government seized control of Nexperia from its Chinese owner, Wingtech Technology, in late September, citing national security and governance concerns.

Why It Matters

The halt threatens to disrupt automotive and electronics supply chains at a critical time. Around 70% of Nexperia’s chips produced in the Netherlands are packaged in China, meaning the freeze could ripple through global manufacturing networks.

The dispute also underscores the deepening fractures in global tech supply chains, where national security concerns and trade controls increasingly shape corporate decisions. With the U.S., China, and Europe tightening technology restrictions, Nexperia’s situation reflects the mounting geopolitical tug-of-war over semiconductor control.

Nexperia (Netherlands): Seeking to maintain operations while asserting independence from Chinese influence.

Wingtech Technology (China): The former owner now sidelined after Dutch government intervention.

Dutch Government: Exercising sovereignty over critical tech assets amid Western security coordination.

Chinese Ministry of Commerce: Blocking Nexperia’s chip exports from China in retaliation.

Global Automakers: Companies like Stellantis and Nissan are monitoring potential production halts as chip prices soar.

What’s Next

Nexperia says it is developing alternative supply routes to support its global customers but has not disclosed details. The Dongguan facility remains operational, though limited by the wafer cutoff.

Analysts expect further trade retaliation from Beijing, potentially deepening the rift between European and Chinese semiconductor ecosystems. Automakers warn of possible shortages by mid-November if shipments do not resume.

Implications

This episode highlights how state intervention in technology firms is reshaping global supply chains. The Dutch government’s takeover, framed as a national security move, signals Europe’s growing alignment with U.S. export controls targeting Chinese tech entities.

In the short term, the halt could spike chip prices and strain automotive production, particularly in Asia and Europe. Long term, it may accelerate a strategic decoupling between Western and Chinese semiconductor manufacturing bases.

Politically, this marks a test of Europe’s resolve to protect critical tech sectors even at the cost of trade friction with Beijing.

With information from an exclusive Reuters report.

Source link

Elon Musk’s xAI launches Grokipedia to compete with Wikipedia

Oct. 28 (UPI) — Tech mogul Elon Musk launched his own online encyclopedia with his company xAI, calling it Grokipedia as a rival to the non-profit Wikipedia.

Grokipedia, named for xAI’s chatbot Grok, draws on Wikipedia as a source and is modeled on it. But it carries sanitized versions of pages about Musk that report nothing critical of him. The site says it has 885,279 pages.

The venture launched on Monday, with the site initially crashing before coming back online later in the day. Musk has billed it as an improved and less biased version of Wikipedia.

Republican lawmakers and White House AI czar David Sacks have called Wikipedia “hopelessly biased.”

On X, Sacks said, “An army of left-wing activists maintain the bios and fight reasonable corrections. Magnifying the problem, Wikipedia often appears first in Google search results, and now it’s a trusted source for AI model training. This is a huge problem.”

The Wikimedia Foundation, which operates Wikipedia, said in a statement last month, “Wikipedia informs; it does not persuade.”

“Unlike newer projects, Wikipedia’s strengths are clear: it has transparent policies, rigorous volunteer oversight, and a strong culture of continuous improvement. Wikipedia is an encyclopedia, written to inform billions of readers without promoting a particular point of view,” Lauren Dickinson, a spokesperson for the Wikimedia Foundation, said in a statement.

“This human-created knowledge is what AI companies rely on to generate content; even Grokipedia needs Wikipedia to exist,” she added.

On Monday, Musk posted on X that the launch was “Grokipedia version 0.1,” but that “Version 1.0 will be 10X better, but even at 0.1 it’s better than Wikipedia imo.”

Source link

Apple surpasses $4tn market capitalisation after latest iPhone success

Published on 28/10/2025 – 16:58 GMT+1, updated 16:59

Apple’s stock reached new heights on Tuesday, trading above $269 a share and pushing the company’s market capitalisation to a record $4 trillion (€3.4tr). That followed stronger-than-expected demand for its latest iPhone 17.

The Cupertino-based technology giant thereby joins an elite club alongside Nvidia and Microsoft, which both surpassed the same valuation earlier this year.

Nvidia, the semiconductor powerhouse, became the first company in history to hit the $4tn milestone in July 2025. News of soaring AI investment and the firm’s strong profit outlook have continued to lift its share price since then, pushing its market capitalisation toward $4.7tn (€4tn).

The so-called Magnificent Seven, the seven largest publicly traded technology companies in the world, have been cashing in on the AI boom this year, with tech share prices rising accordingly. Since January, Apple shares are up more than 18%, Nvidia’s nearly 40%, and Microsoft’s close to 30%.

However, Apple has mostly stayed out of the race to invest billions in AI projects. Current market enthusiasm for the iPhone maker’s stock instead stems from the successful launch of its updated iPhone range, along with signs of easing trade and tariff pressures.

According to Counterpoint Research, the iPhone 17 series has outsold the iPhone 16 range by 14% during its first ten days on sale in China and the United States.

Five members of the Magnificent Seven, Alphabet, Apple, Amazon, Microsoft, and Meta, are reporting earnings this week. They will need to demonstrate strong growth and justify the massive spending currently underway in artificial intelligence, amid growing concerns that the sector may be forming a bubble reminiscent of the dot-com boom that burst in 2000.

According to Kate Leaman, chief market analyst at AvaTrade: “Markets move on leadership, and right now, the leadership of Microsoft, Meta, Alphabet, Amazon, and Apple is inseparable from the risk appetite of investors worldwide.”

She noted that more than 40% of S&P 500 gains this year have come via these giants.

“But with that concentration comes fragility,” she added, saying that even as revenues climb, the commentary provided by executives “will critically frame how far and how confidently the market can chase the AI story into 2026”.

Source link

OpenAI restructures into public-benefit firm, Microsoft takes 27% stake | Technology News

The deal removes a major constraint on raising capital for OpenAI, the maker of ChatGPT, and values the firm at $500bn.

Microsoft and OpenAI have reached a deal to allow the ChatGPT maker to restructure itself into a public-benefit corporation, valuing OpenAI at $500bn and giving it more freedom in its business operations.

The deal, unveiled on Tuesday, removes a major constraint on raising capital for OpenAI that has existed since 2019.


At the time, it had signed an agreement with Microsoft that gave the tech giant rights over much of OpenAI’s work in exchange for costly cloud computing services needed to carry it out. As its ChatGPT service exploded in popularity, those limitations had become a notable source of tension between the two companies.

Microsoft will still hold a stake of about $135bn, or 27 percent, in OpenAI Group PBC, which will be controlled by the OpenAI Foundation, a nonprofit, the companies said.

Microsoft, based in Redmond, Washington in the United States, has invested $13.8bn in OpenAI, with Tuesday’s deal implying that the firm had generated a return of nearly 10 times its investment.

Shares of Microsoft rose 2.5 percent, sending its market value above $4 trillion again.

The deal keeps the two firms intertwined until at least 2032, with a massive cloud computing contract and with Microsoft retaining some rights to OpenAI products and artificial intelligence (AI) models until then – even if OpenAI reaches artificial general intelligence (AGI), the point at which AI systems can match a well-educated human adult.

Simplified corporate structure

With more than 700 million weekly users as of September, ChatGPT has exploded in popularity to become the face of AI for many consumers after OpenAI’s founding as a nonprofit AI safety group.

As the company grew, the Microsoft deal constrained OpenAI’s ability to raise funds from outside investors and secure computing contracts as the crush of ChatGPT users and its research into new models caused its computing needs to skyrocket.

“OpenAI has completed its recapitalization, simplifying its corporate structure,” Bret Taylor, the OpenAI Foundation’s board chair, said in a blog post. “The nonprofit remains in control of the for-profit, and now has a direct path to major resources before AGI arrives.”

Microsoft’s previous 2019 agreement had many provisions that rested on when OpenAI reached that point, and the new deal requires an independent panel to verify OpenAI’s claims it has reached AGI.

“OpenAI still faces ongoing scrutiny around transparency, data usage, and safety oversight. But overall, this structure should provide a clearer path forward for innovation and accountability,” said Adam Sarhan, CEO of 50 Park Investments.

Gil Luria, head of technology research at DA Davidson, said the deal “resolves the longstanding issue of OpenAI being organized as a not-for-profit [organisation] and settles the ownership rights of the technology vis-a-vis Microsoft. The new structure should provide more clarity on OpenAI’s investment path, thus facilitating further fundraising.”

Microsoft also said that it has secured a deal with OpenAI where the ChatGPT maker will purchase $250bn of Microsoft Azure cloud computing services. In exchange, Microsoft will no longer have a right of first refusal to provide computing services to OpenAI.

Microsoft also said that it will not have any rights to hardware produced by OpenAI. In March, OpenAI bought longtime Apple design chief Jony Ive’s startup io Products in a $6.5bn deal.

Source link

President Trump pardons Binance founder Changpeng Zhao

President Donald Trump on Thursday pardoned Binance founder Changpeng Zhao (pictured in 2022), who pleaded guilty to money laundering charges in 2023 and spent four months in prison. File Photo by Miguel A. Lopes/EPA

Oct. 23 (UPI) — President Donald Trump has pardoned Binance cryptocurrency exchange founder Changpeng Zhao, who had pleaded guilty to money laundering charges in 2023.

The guilty plea was part of a $4.3 billion settlement between Binance and the Justice Department to end the investigation into the world’s largest cryptocurrency exchange, CBS News reported.

Binance paid the settlement after the DOJ determined it helped users to get around federal sanctions.

The settlement required Zhao to resign from his position as Binance’s chief executive officer and serve four months in prison.

The Binance settlement also caused the Philippines to order Google and Apple to remove the Binance app from their app stores.

White House press secretary Karoline Leavitt said Zhao’s plea deal and the investigation against Binance arose from what she called the Biden administration’s “war on cryptocurrency,” as reported by The Hill.

“In their desire to punish the cryptocurrency industry, the Biden administration pursued Mr. Zhao despite no allegations of fraud or identifiable victims,” Leavitt said in a prepared statement.

“The Biden administration sought to imprison Mr. Zhao for three years, a sentence so outside sentencing guidelines that even the judge said he had never heard of this in his 30-year career,” Leavitt added.

“These actions by the Biden administration severely damaged the United States’ reputation as a global leader in technology and innovation.”

The president issued the pardon in accordance with his constitutional authority, she said, adding that “the Biden administration’s war on crypto is over.”

In a social media post in which he identified himself as “CZ,” Zhao thanked the president “for upholding America’s commitment to fairness, innovation and justice” by pardoning him.

He said he will “help make America the capital of crypto” and help make decentralized web3 Internet technology available globally.

Zhao’s pardon came after a news report indicated that Binance assists the Trump family with its cryptocurrency endeavor.

The Wall Street Journal two months ago reported that a cryptocurrency venture created by the Trump family has accrued $4.5 billion with the help of Binance since the president won the Nov. 5 election, according to CNBC.

Source link

Is China’s economy stalling or transforming? | Business and Economy

China bets big on advanced technology in its five-year plan to revive the economy.

For decades, China powered spectacular growth through exports, infrastructure and cheap credit. But that old model is running out of steam, even as it hits a record trade surplus with the world this year.

The property sector is drowning in debt, confidence is fading, and consumers are holding back. Now, Beijing faces its toughest test yet: how to keep the world’s second-largest economy growing without relying much on the engines that once drove it.

A new five-year plan promises “high-quality growth” built on technology and self-reliance. But trade tensions with the United States could make the climb even steeper.

Source link

SpaceX cuts Starlink service to Myanmar scams compounds

SpaceX’s Starlink, which provides internet service via satellites like those pictured being released into orbit around Earth, this week cut service to thousands of devices after Myanmar’s military shut down a scam center in the country’s border region. File Photo by SpaceX/UPI | License Photo

Oct. 23 (UPI) — SpaceX cut Starlink Internet service to thousands of its devices providing access to compounds in Myanmar linked to human trafficking and monetary scams worldwide.

The company said late Tuesday that it terminated more than 2,500 Starlink devices Chinese crime syndicates were using to contact and scam people globally.

“SpaceX continually works to identify violations of our Acceptable Use Policy and applicable law because — as with nearly all consumer electronics and services — the same technology that can provide immense benefits has a risk of misuse,” Lauren Dreyer, Starlink’s vice president of business operations, said in a post on X.

“In Myanmar, for example, SpaceX proactively identified and disabled over 2,500 Starlink kits in the vicinity of suspected ‘scam centers,'” she wrote.

The scam centers, which operated largely along the border between Myanmar and Thailand, lure people with the promise of good jobs; victims are then often taken captive and forced to defraud people through fake investment and romance schemes, according to reports.

Myanmar’s military, which in 2021 staged a coup that has kept the country mired in a civil war, announced this week that it shut down a scam operation called KK Park, seizing 30 sets of Starlink terminals and arresting more than 2,000 people.

The military earlier this year launched an operation to go after the scam centers after other nations, specifically Thailand and China, exerted pressure to ease the situation that has seen people from both countries trafficked and forced to work in the scam parks.

Although the military has moved to shut down some operations, reports suggest that many compounds in Myanmar remain active, with tens of thousands of employees and some protected by militia groups that are aligned with Myanmar’s military.

Source link

Tesla Q3 profit down more than a third despite record $28.1B revenue

Tesla posted sharply lower profit for the July to September quarter despite a significant jump in revenue. The firm’s performance was hit by tough competition in the EV market, U.S. duties on imports of parts and materials used to make its cars, higher capital expenditure and a sales slump in Europe. File photo by Divyakant Solanki/EPA

Oct. 23 (UPI) — Tesla reported profits were down 37% in the third quarter despite a jump in revenue to $28.1 billion on frontloading of sales driven by buyers racing to beat the deadline for a federal tax credit before it expired Sept. 30.

The tax credit, worth up to $7,500 on EV purchases, helped the firm buck a run of declining quarterly sales along with a new six-seat version of its popular Model Y midsize SUV that performed well in the Chinese market.

While sales by competitors, including Ford and Hyundai, still outpaced Tesla’s, the company also lured in buyers with interest-free financing and insurance contributions.

That helped overall revenue rise by just under $3 billion compared with the same period last year, $1.73 billion more than analysts predicted, with vehicle sales still making the largest contribution.

Revenue from Tesla’s energy generation and storage division surged 44% to $3.42 billion.

However, net profit slumped from $2.17 billion in the third quarter of 2024, to just $1.37 billion this year, with the results sending the stock price lower.

Tesla’s shares were down more than 3% at $424.60 in out-of-hours trade on the NASDAQ before Thursday’s market open — but remained well above the 30-day low of $413.49 they hit Oct. 10. The stock is up 9% year-to-date.

The firm’s performance was dragged down by an ongoing slump in its European market, partly due to a public backlash against Musk and tough competition from rivals from the continent and beyond, such as Volkswagen and China’s BYD.

Tariffs on car parts and raw materials imposed by President Donald Trump and higher research and development costs were also factors as the company embarks on CEO Elon Musk’s push for an increased focus on AI and robotics.

Chief Accounting Officer Vaibhav Taneja told investors on a conference call Wednesday that the hit to Tesla from import duties in the July to September period was in excess of $400 million.

Tesla said it aimed to meet its target to begin “volume production” of Cybercab, heavy-duty electric semi trucks and its new Megapack 3 battery energy storage system in 2026, with Musk saying he expected Cybercab to begin rolling off the production line in the second quarter.

“First generation production lines” for Tesla’s humanoid Optimus robot are currently under construction, the company said. Musk said the firm expected to unveil Optimus V3 in the first quarter.

Tesla posted its latest results as shareholders were preparing for a November vote to approve a new remuneration package for Musk of as much as $1 trillion, all in shares.

The deal would be conditioned on his delivering an ambitious turnaround program involving boosting market capitalization from around $1.38 trillion to an unprecedented $8.5 trillion by pivoting Tesla to concentrate on autonomous driving, AI and robotics.

Apple, Microsoft and NVIDIA, the current behemoths of the U.S. tech sector, have market caps in the $2.6 to $3.2 trillion range.

Source link

Conservative activist sues Google over AI-generated statements | Technology News

The lawsuit comes amid growing concerns about how AI fuels the spread of misinformation.

Conservative activist Robby Starbuck sued Google, alleging that the tech giant’s artificial intelligence systems generated “outrageously false” information about him.

On Wednesday, Starbuck said in the lawsuit, filed in Delaware state court, that Google’s AI systems falsely called him a “child rapist,” “serial sexual abuser” and “shooter” in response to user queries and delivered defamatory statements to millions of users.


Google spokesperson Jose Castaneda said most of the claims were related to mistaken “hallucinations” from Google’s Bard large language model that the company worked to address in 2023.

“Hallucinations are a well-known issue for all LLMs, which we disclose and work hard to minimise,” Castaneda said. “But as everyone knows, if you’re creative enough, you can prompt a chatbot to say something misleading.”

Starbuck is best known for opposing diversity, equity and inclusion initiatives.

“No one — regardless of political beliefs — should ever experience this,” he said in a statement about the lawsuit. “Now is the time for all of us to demand transparent, unbiased AI that cannot be weaponized to harm people.”

Starbuck made similar allegations against Meta Platforms in a separate lawsuit in April. Starbuck and Meta settled their dispute in August, and Starbuck advised the company on AI issues under the settlement.

According to Wednesday’s complaint, Starbuck learned in December 2023 that Bard had falsely connected him with white nationalist Richard Spencer. The lawsuit said that Bard cited fabricated sources and that Google failed to address the statements after Starbuck contacted the company.

Starbuck’s lawsuit also said that Google’s Gemma chatbot disseminated false sexual assault allegations against him in August based on fictitious sources. Starbuck also alleged the chatbot said that he committed spousal abuse, attended the January 6 Capitol riots and appeared in the Jeffrey Epstein files, among other things.

Starbuck said he has been approached by people who believed some of the false accusations and that they could lead to increased threats on his life, noting the recent assassination of conservative activist Charlie Kirk.

Starbuck asked the court for at least $15m in damages.

Starbuck’s lawsuit comes amid growing concern that AI-generated content has become easy to create and can facilitate the spread of misinformation. As Al Jazeera previously reported, Google’s VEO3 AI video maker allowed users to make deceptive videos of news events.

Alphabet, Google’s parent company, saw its stock trade relatively flat on news of the lawsuit. As of 2:30pm in New York (18:30 GMT), it was up by 0.06 percent.

Source link

OpenAI announces new AI-powered Atlas browser

An introduction page of ChatGPT is pictured in 2023. On Tuesday, OpenAI unveiled an early version of its new AI-powered ChatGPT Atlas web browser. File Photo by Wu Hao/EPA

Oct. 21 (UPI) — OpenAI unveiled the early version of its AI-powered ChatGPT Atlas web browser on Tuesday, offering many powerful features that seek to interlace the company’s technology into daily internet use.

The new browser is currently only available on macOS, with future versions coming to Windows and mobile devices, according to a post by OpenAI. While other tech companies, including Microsoft and Google, have incorporated AI into their products, OpenAI called Atlas a step closer “to a true super-assistant” that follows users across the web.

“It’s a new kind of browser for the next era of the web,” OpenAI CEO Sam Altman said in a video, where staff demonstrated how Atlas could be used to complete a grocery order, help project management at work and other tasks.

Atlas will draw on users’ previous interactions with the powerful chatbot, meaning it will offer a back-and-forth deeper than the box of AI-generated results Google attaches to web searches.

If Atlas is popular, it could be “a serious threat to Google’s dominance,” according to TechCrunch. It could also provide valuable information to targeted advertising should OpenAI change its business model. But the tech website concluded that “It’s still early days for Atlas and a lot will depend on the product itself — and whether users really want what OpenAI is offering here.”

Users of the paid version of ChatGPT can use “agent” mode that allows Atlas to perform some tasks independently.

“Despite all of the power and awesome capabilities that you get with sharing your browser with ChatGPT that also poses an entirely new set of risks,” OpenAI’s Pranav Vishnu said during the video announcing Atlas. He said that there are safeguards that keep the agent operating on Atlas tabs and prevent it from accessing users’ computer files.

Marketing experts have warned that AI could soon be used to make purchases for consumers using their data. Users of Atlas can limit what data is saved, according to an OpenAI page explaining user controls.

Source link

Dutch privacy watchdog warns voters against asking AI how to vote | Technology News

Body finds that chatbots provide biased advice, including by leading voters to the hard-right Party for Freedom.

The Netherlands’s data protection watchdog has cautioned citizens against consulting with artificial intelligence on how to vote, warning that popular chatbots provide a “highly distorted and polarised view” of politics.

The Dutch Data Protection Authority said on Tuesday that an increasing number of voters were using AI to help decide who to vote for, despite the models offering “unreliable and clearly biased” advice.


The watchdog issued the warning as it released the results of tests conducted on four popular chatbots – ChatGPT, Gemini, Mistral, and Grok – in the run-up to parliamentary elections on October 29.

The research found that the chatbots more often recommended parties on the fringes of the political spectrum when asked to identify the three choices that best matched the policy preferences of 1,500 fictitious voter profiles.

In more than half of cases, the AI models identified the hard-right Party for Freedom (PVV) or left-wing Green Left-Labour Party as the top choice, the watchdog said.

Parties closer to the political middle ground – such as the right-leaning People’s Party for Freedom and Democracy and the centre-left Democrats 66 – were recommended much less often, according to the watchdog.

Meanwhile, some groupings, including the conservative Christian Democratic Appeal and left-leaning Denk, were “almost never suggested”.

Monique Verdier, deputy chair of the authority, said that voters who turned to AI risked being encouraged to vote for parties that do not align with their preferences.

“This directly impacts a cornerstone of democracy: the integrity of free and fair elections. We therefore urge voters not to use AI chatbots for voting advice because their operation is neither transparent nor verifiable,” Verdier said in a statement.

“Additionally, we call on chatbot providers to prevent their systems from being used as voting guides.”

The October 29 election comes after the PVV, led by anti-immigration firebrand Geert Wilders, pulled its support for the government after its coalition partners refused to back a 10-point plan to radically curtail immigration.

Wilders’s PVV, which scored one of the biggest upsets in Dutch political history by winning the most seats in the 2023 election, has consistently led opinion polls before next week’s vote.

While the PVV is on track to win the most seats for a second straight election, it is all but certain to fall far short of a parliamentary majority.

The other major parties in the Netherlands, which has been governed by coalition governments without interruption since the 1940s, have all ruled out supporting the PVV in power.

Source link

What caused Amazon’s AWS outage, and why did so many major apps go offline? | Internet News

A major outage at Amazon Web Services (AWS) on Monday disrupted a large portion of the internet, taking down apps, websites and online tools used by millions of people around the world, before services were eventually restored.

From banking apps and airlines to smart home devices and gaming platforms, the hours-long breakdown revealed how much of modern life depends on cloud infrastructure.


Here is what we know:

What happened and what caused the AWS outage?

At about 07:11 GMT, Amazon’s cloud service experienced a major outage, meaning some of its systems stopped working, which disrupted many popular apps and websites, including banks, gaming platforms and entertainment services.

The problem started in one of AWS’s main data centres in Virginia, its oldest and biggest site, after a technical update to the API – a connection between different computer programmes – of DynamoDB, a key cloud database service that stores user information and other important data for many online platforms.

The root cause appears to have been an error in the update that affected the Domain Name System (DNS), which helps apps find the correct server addresses. DNS works like the internet’s phone book, turning website names into the numeric IP addresses that computers use to connect to servers.

Because of the DNS issue, apps could not find the IP address for DynamoDB’s API and were unable to connect.
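The phone-book analogy can be made concrete with a toy sketch in Python. This is an illustration only, not AWS’s actual system: the IP address is invented, and real DNS involves many layers of resolvers. It shows why apps failed even though the database servers themselves were healthy: the name simply would not resolve.

```python
# Toy "phone book": maps service hostnames to IP addresses.
# (Hypothetical IP; real DNS is a distributed, cached system.)
dns_records = {
    "dynamodb.us-east-1.amazonaws.com": "3.218.182.212",
}

def resolve(hostname):
    """Look up a hostname; None means the record is missing."""
    return dns_records.get(hostname)

def connect(hostname):
    """Attempt to reach a service, mirroring what client apps do."""
    ip = resolve(hostname)
    if ip is None:
        # This is what apps hit during the outage: no resolution,
        # so no connection could even be attempted.
        return f"error: could not resolve {hostname}"
    return f"connected to {hostname} at {ip}"

print(connect("dynamodb.us-east-1.amazonaws.com"))  # works while the record exists

# Simulate the faulty update wiping the DNS record
del dns_records["dynamodb.us-east-1.amazonaws.com"]
print(connect("dynamodb.us-east-1.amazonaws.com"))  # now fails
```

The key point the sketch captures is that nothing needs to be wrong with the service itself for clients to lose access; deleting the directory entry is enough.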

As DynamoDB went down, other AWS services also began to fail; in total, 113 services were affected by the outage. By 10:11 GMT, Amazon said all AWS services had returned to normal operations, though there was a backlog “of messages that they will finish processing over the next few hours”.

At the time of publication, Downdetector, a website that tracks internet outages based on user reports, was still showing problems with platforms such as OpenAI, ESPN and Apple Music.

What is a cloud and what exactly is AWS?

A cloud is a way of storing and using data or programmes over the internet instead of on your computer or other physical storage devices.

When people say something is “in the cloud”, it means the files, apps or systems are running on powerful computers (called servers) in data centres owned by companies like Amazon (AWS), Google or Microsoft, not on your personal device.

In this case, AWS allows companies to rent computing power and storage. It supplies the technology that runs websites, apps and many online services behind the scenes.

One of AWS’s core services is DynamoDB, a database that stores important information for companies, such as customer records. On Monday, Amazon reported that customers were unable to access their DynamoDB data.
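The idea behind a database like DynamoDB can be pictured in a few lines of Python. This is a sketch of the key-value concept only, not Amazon’s actual API; the table and record names are invented.

```python
# Minimal key-value store mimicking the model behind DynamoDB:
# each record is written and read back by a unique key.
table = {}  # stands in for a managed cloud table

def put_item(key, item):
    """Store a record under its key, e.g. an app saving a customer profile."""
    table[key] = item

def get_item(key):
    """Fetch a record by key; returns None if the key is absent."""
    return table.get(key)

put_item("customer#1001", {"name": "Ada", "plan": "premium"})
print(get_item("customer#1001"))  # the app reads the record back by key
```

During Monday’s outage the stored data itself was intact; the failure sat upstream, in resolving the address of the service that answers these get/put requests.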

AWS is the biggest cloud service provider in the world.

Cloud outages are not rare, but they have become more noticeable as more companies rely on these services every day.

“The fallout impacted people across a number of different spheres,” Joshua Mahony, the chief market analyst at Scope Markets, told Al Jazeera. “[But] of course this kind of comes with the territory with tech companies; the key is they can resolve it quickly, and it doesn’t cost them a lot of money.”

He said Amazon would likely weather the storm from the incident.

“You’re looking at something that is relatively contained,” he said. “Amazon Web Services has cornered 30 percent of the market alone. Their users are not going to suddenly jump ship. Their businesses are deeply ingrained.”

[Interactive: The world’s largest cloud service providers]

Which services and apps went down?

The outage affected dozens of websites, including Snapchat, Pinterest and Apple TV, according to Downdetector.

Other communication apps were also affected, including WhatsApp, Signal, Zoom and Slack, along with gaming services such as Roblox, Fortnite and Xbox. Starbucks and Etsy also experienced issues.

In the United States, people were having issues with financial apps too, including Venmo.

Some users said their Ring doorbells and Alexa speakers stopped working, while others could not access the Amazon website or download books on their Kindles.

The language app Duolingo and creative tool Canva were among those reporting errors on their websites, and several media organisations were hit, including the Associated Press news agency, The New York Times and The Wall Street Journal.

Banks, the cryptocurrency exchange Coinbase, and AI firm Perplexity also reported issues, along with US airlines Delta and United.

[Interactive: Major web services impacted by the AWS outage]

Why did so many major apps go offline at once?

When AWS had its outage, it was not just Amazon’s tools that were affected. Thousands of other companies that use AWS for storage, databases or web hosting were also hit. These companies include many major apps that rely on AWS to run key parts of their systems.
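That shared dependency is why a single backend failure ripples across unrelated apps. The toy dependency graph below makes the point; the service names and dependencies are invented for illustration.

```python
# Hypothetical map of which backends each app relies on.
DEPENDS_ON = {
    "photo-app": ["aws-database"],
    "doorbell-service": ["aws-database", "aws-compute"],
    "independent-site": ["other-cloud"],
}

def impacted(failed_backend: str) -> list[str]:
    """List every app that depends on the failed backend."""
    return sorted(
        app for app, deps in DEPENDS_ON.items() if failed_backend in deps
    )
```

Knocking out the shared database (`impacted("aws-database")`) takes down both AWS-dependent apps at once, while the app hosted elsewhere is untouched.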

“Whenever we see these headlines, the first thought that goes through everybody’s mind, that sends a shiver up the spine, is, ‘Is this one of those cyberattacks? Is this a military or intelligence-led thing that has led to this disruption?’ And in this case, it’s not,” Bryson Bort, chief executive of the cybersecurity company Scythe, told Al Jazeera.

“In fact, most of the time, it isn’t. It’s usually human error.”

How did Amazon respond?

AWS acknowledged the outage and said engineers were “immediately engaged” to fix the problem.

AWS said it worked on “multiple parallel paths to accelerate recovery”. It also reported that the main issue had been fully resolved, though some users continued to face minor delays as systems recovered.

The company also said it would publish a detailed post-event summary explaining what happened.

An aerial view of an Amazon Web Services Data Center, known as US East 1, in Ashburn, Virginia [Jonathan Ernst/Reuters]


