Technology

Global futures reopen after exchange operator CME hit by hours-long outage | Financial Markets News

CME blamed the outage, which halted trading for more than 11 hours, on a cooling failure at a data centre in Chicago.

Global futures markets were thrown into chaos for several hours after CME Group, the world’s largest exchange operator, suffered one of its longest outages in years, halting trading across stocks, bonds, commodities and currencies.

By 13:35 GMT on Friday, trading in foreign exchange, stock and bond futures as well as other products had resumed, after having been knocked out for more than 11 hours because of an outage at an important data centre, according to LSEG data.


CME blamed the outage on a cooling failure at data centres run by CyrusOne, which said an incident at its Chicago-area facility had affected services for customers, including CME.

The disruption stopped trading in major currency pairs on CME’s EBS platform, as well as benchmark futures for West Texas Intermediate crude, Nasdaq 100, Nikkei, palm oil and gold, according to LSEG data.

‘A black eye’

Trading volumes have been thinned out this week by the United States Thanksgiving holiday, and with dealers looking to close positions for the end of the month, there was a risk of volatility picking up sharply later on, market participants said.

“It’s a black eye to the CME and probably an overdue reminder of the importance of market structure and how interconnected all these are,” Ben Laidler, head of equity strategy at Bradesco BBI, said.

“We complacently take for granted that much of the timing is frankly not great. It’s month-end, a lot of things get rebalanced.”

“Having said that, it could have been a lot worse; it’ll be a very low-volume day. If you’re going to have it, there would have been worse days to have a breakdown like this,” he said.

Futures are a mainstay of financial markets and are used by dealers, speculators and businesses wishing to hedge or hold positions in a wide range of underlying assets. Without these and other instruments, brokers were left flying blind, and many were reluctant to trade contracts with no live prices for hours on end.

“Beyond the immediate risk of traders being unable to close positions – and the potential costs that follow – the incident raises broader concerns about reliability,” said Axel Rudolph, senior technical analyst at trading platform IG.

A few European brokerages said earlier in the day that they had been unable to offer trading in certain futures contracts.

Biggest exchange operator

CME is the biggest exchange operator by market value and says it offers the widest range of benchmark products, spanning rates, equities, metals, energy, cryptocurrencies and agriculture.

Average daily derivatives volume was 26.3 million contracts in October, CME said earlier this month.

The CME outage on Friday comes more than a decade after the operator had to shut electronic trading for some agricultural contracts in April 2014 due to technical problems, which at the time sent traders back onto the floor.

More recently, in 2024, outages at LSEG and Switzerland’s exchange operator briefly interrupted markets.

CME’s own shares were up 0.4 percent in premarket trading.


N. Korean hacking group Lazarus suspected behind recent crypto hacking: sources

This photo, taken Thursday, shows the logo of Dunamu at the headquarters of Naver Corp. in Seoul. Photo by Yonhap

North Korean hacking group Lazarus is suspected to be behind a recent breach of around 45 billion won (US$30.6 million) worth of cryptocurrency from South Korea’s largest crypto exchange Upbit, sources said Friday.

According to government and business sources, authorities plan to carry out an on-site investigation at the crypto exchange with a belief that Lazarus was behind the hacking.

Dunamu, which operates Upbit, said Thursday it confirmed the transfer of 44.5 billion won worth of Solana-affiliated assets to an unauthorized wallet address and plans to cover the full amount with assets the company owns.

The hacking group had been suspected of stealing 58 billion won worth of Ethereum from Upbit in 2019.

Authorities said the methods used in the latest incident resembled those of the 2019 theft.

“Instead of attacking the server, it is possible that hackers compromised administrators’ accounts or posed as administrators to make the transfer,” a government official said.

Experts note the hacking incident came while Pyongyang is seeking to raise money amid a shortage of foreign currency.

“It is the tactic of Lazarus to transfer crypto to wallets at other exchanges and attempt money laundering,” a security official said, noting such methods make it impossible to track the transaction.

Others said hackers may have intentionally chosen Thursday for their attack, as Naver Corp., South Korea’s top search engine operator, announced its decision on the previous day to acquire Dunamu as a wholly owned subsidiary of Naver Financial through a share-swap deal.

“Hackers have a strong tendency toward self-display,” another security official said.


Australia clamps down on ‘nudify’ sites used for AI-generated child abuse | Social Media News

Three websites used to create abuse imagery had received 100,000 monthly visits from Australians, watchdog says.

Internet users in Australia have been blocked from accessing several websites that used artificial intelligence to create child sexual exploitation material, the country’s internet regulator has announced.

The three “nudify” sites withdrew from Australia following an official warning, eSafety Commissioner Julie Inman Grant said on Thursday.


Grant’s office said the sites had been receiving approximately 100,000 visits a month from Australians and featured in high-profile cases of AI-generated child sex abuse imagery involving Australian school students.

Grant said such “nudify” services, which allow users to make images of real people appear naked using AI, have had a “devastating” effect in Australian schools.

“We took enforcement action in September because this provider failed to put in safeguards to prevent its services being used to create child sexual exploitation material and were even marketing features like undressing ‘any girl,’ and with options for ‘schoolgirl’ image generation and features such as ‘sex mode,’” Grant said in a statement.

The development comes after Grant’s office issued a formal warning to the United Kingdom-based company behind the sites in September, threatening civil penalties of up to 49.5 million Australian dollars ($32.2m) if it did not introduce safeguards to prevent image-based abuse.

Grant said Hugging Face, a hosting platform for AI models, had separately also taken steps to comply with Australian law, including changing its terms of service to require account holders to take steps to minimise the risks of misuse involving their platforms.

Australia has been at the forefront of global efforts to prevent the online harm of children, banning social media for under-16s and cracking down on apps used for stalking and creating deepfake images.

The use of AI to create non-consensual sexually explicit images has been a growing concern amid the rapid proliferation of platforms capable of creating photo-realistic material at the click of a mouse.

In a survey carried out by the US-based advocacy group Thorn last year, 10 percent of respondents aged 13-20 reported knowing someone who had deepfake nude imagery created of them, while 6 percent said they had been a direct victim of such abuse.


US group sues Apple over DR Congo conflict minerals | Business and Economy News

International Rights Advocates also sued Tesla over a similar issue, but that case was dismissed.

A United States-based advocacy group has filed a lawsuit in Washington, DC, accusing Apple of using minerals linked to conflict and human rights abuses in the Democratic Republic of the Congo (DRC) and Rwanda despite the iPhone maker’s denials.

International Rights Advocates (IRAdvocates) has previously sued Tesla, Apple and other tech firms over cobalt sourcing, but US courts dismissed that case last year.


French prosecutors in December also dropped a case filed by the DRC against Apple subsidiaries over conflict minerals, citing lack of evidence. A related criminal complaint in Belgium is still under investigation.

Apple denied any wrongdoing in response to the DRC’s legal cases, saying it had instructed its suppliers to halt the sourcing of material from the DRC and neighbouring Rwanda.

It did not immediately respond to requests for comment on the latest complaint.

IRAdvocates, a Washington, DC-based nonprofit that tries to use litigation to curtail rights abuses, said in the complaint filed on Tuesday in the Superior Court of the District of Columbia that Apple’s supply chain still includes cobalt, tin, tantalum and tungsten linked to child and forced labour as well as armed groups in the DRC and Rwanda.

The lawsuit seeks a determination by the court that Apple’s conduct violates consumer protection law, an injunction to halt alleged deceptive marketing and reimbursement of legal costs but does not seek monetary damages or class certification.

The lawsuit alleges that three Chinese smelters – Ningxia Orient, JiuJiang JinXin and Jiujiang Tanbre – processed coltan that United Nations and Global Witness investigators said was smuggled through Rwanda after armed groups seized mines in the eastern DRC, and it links that material to Apple’s supply chain.

A University of Nottingham study published in 2025 found forced and child labour at DRC sites linked to Apple suppliers, the lawsuit said.

Ningxia Orient, JiuJiang JinXin and Jiujiang Tanbre did not immediately respond to requests for comment.

The DRC – which supplies about 70 percent of the world’s cobalt and significant volumes of tin, tantalum and tungsten used in phones, batteries and computers – did not immediately respond to a request for comment. Rwanda also did not immediately respond to a request for comment.

Apple has repeatedly denied sourcing minerals from conflict zones or using forced labour, citing audits and its supplier code of conduct. It said in December that there was “no reasonable basis” to conclude any smelters or refiners in its supply chain financed armed groups in the DRC or neighbouring countries.

Congolese authorities said armed groups in the eastern part of the country use mineral profits to fund a conflict that has killed thousands of people and displaced hundreds of thousands. The authorities have tightened controls on minerals to choke off funding, squeezing global supplies.

Apple says 76 percent of the cobalt in its devices was recycled in 2024, but the IRAdvocates lawsuit alleged its accounting method allows mixing with ore from conflict zones.

On Wall Street, Apple’s stock was up 0.8 percent.


California’s first partner pushes to regulate AI as Trump threatens to forbid regulations

California First Partner Jennifer Siebel Newsom recently convened a meeting that might rank among the top sweat-inducing nightmare scenarios for Silicon Valley’s tech bros — a group of the Golden State’s smartest, most powerful women brainstorming ways to regulate artificial intelligence.

Regulation is the last thing this particular California-dominated industry wants, and it’s spent a lot of cash at both the state and federal capitols to avoid it — including funding President Trump’s new ballroom. Regulation by a bunch of ladies, many of them mothers, for whom profit runs a distant second to our kids?

I’ll let you figure out how popular that is likely to be with the Elon Musks, Peter Thiels and Mark Zuckerbergs of the world.

But as Siebel Newsom said, “If a platform reaches a child, it carries a responsibility to protect that child. Period. Our children’s safety can never be second to the bottom line.”

Agreed.

Siebel Newsom’s push for California to do more to regulate AI comes at the same time that Trump is threatening to stop states from overseeing the technology — and is ramping up a national effort that will open America’s coffers to AI moguls for decades to come.

Right now, the U.S. is facing its own nightmare scenario: the most powerful and world-changing technology we have seen in our lifetimes being developed and unleashed under almost no rules or restraints other than those chosen by the men who seek personal benefit from the outcome.

To put it simply, the plan right now seems to be that these tech barons will change the world as they see fit to make money for themselves, and we as taxpayers will pay them to do it.

“When decisions are mainly driven by power and profit instead of care and responsibility, we completely lose our way, and given the current alignment between tech titans and the federal administration, I believe we have lost our way,” Siebel Newsom said.

To recap what the way has been so far, Trump recently tried to sneak a 10-year ban on the ability of states to oversee the industry into his ridiculously named “Big Beautiful Bill,” but it was pulled out by a bipartisan group in the Senate — an early indicator of how inflammatory this issue is.

Faced with that unexpected blockade, Trump has threatened to sign a mysterious executive order crippling states’ ability to regulate AI and attempting to withhold funds from those that try.

Simultaneously, the most craven and cowardly among Republican congresspeople have suggested adding a 10-year ban to the upcoming defense policy bill that will almost certainly pass. Of course, Congress has also declined to move forward on any meaningful federal regulations itself, while technology CEOs including Trump frenemy Musk, Apple’s Tim Cook, Meta’s Zuckerberg and many others chum it up at fancy events inside the White House.

Which may be why this week, Trump announced the “Genesis Mission,” an executive order that seemingly will take the unimaginable vastness of government research efforts across disciplines and dump them into some kind of AI model that will “revolutionize the way scientific research is conducted.”

While I am sure that nothing could possibly go wrong in that scenario, that’s not actually the part that is immediately alarming. This is: The project will be overseen by Trump science and technology policy advisor Michael Kratsios, who holds no science or engineering degrees but was formerly a top executive for Thiel and former head of another AI company that works on warfare-related projects with the Pentagon.

Kratsios is considered one of the main reasons Trump has embraced the tech bros with such adoration in his second term. Genesis will almost certainly mean huge government contracts for these private-sector “partners,” fueling the AI boom (or bubble) with taxpayer dollars.

Siebel Newsom’s message in the face of all this is that we are not helpless — and California, as the home of many of these companies and the world’s fourth-largest economy in its own right, should have a say in how this technology advances, and make sure it does so in a way that benefits and protects us all.

“California is uniquely positioned to lead the effort in showing innovation and responsibility and how they can go hand in hand,” she said. “I’ve always believed that stronger guardrails are actually good for business over the long term. Safer tech means better outcomes for consumers and greater consumer trust and loyalty.”

But the pressure to cave under the might of these companies is intense, as Siebel Newsom’s husband knows.

Gov. Gavin Newsom has spent the last few years trying to thread the needle on state legislation that offers some sort of oversight while allowing for the innovation that rightly keeps California and the United States competitive on the global front. The tech industry has spent millions in lobbying, legal fights and pressure campaigns to water down even the most benign of efforts, even threatening to leave the state if rules are enacted.

Last year, the industry unsuccessfully tried to stop Senate Bill 53, landmark legislation signed by Newsom. It’s a basic transparency measure on “frontier” AI models that requires companies to have safety and security protocols and report known “catastrophic” risks, such as when these models show tendencies toward behavior that could kill more than 50 people — which they have, believe it or not.

But the industry was able to stop other efforts. Newsom vetoed both Senate Bill 7, which would have required employers to notify workers when using AI in hiring and promotions; and Assembly Bill 1064, which would have barred companion chatbot operators from making these AI systems available to minors if they couldn’t prove they wouldn’t do things like encourage kids to self-harm, which again, these chatbots have done.

Still, California (along with New York and a few other states) has pushed forward, and speaking at Siebel Newsom’s event, the governor said that last session, “we took a number of at-bats at this and we made tremendous progress.”

He promised more.

“We have agency. We can shape the future,” he said. “We have a unique responsibility as it relates to these tools of technology, because, well, this is the center of that universe.”

If Newsom does keep pushing forward, it will be in no small part because of Siebel Newsom, and women like her, who keep the counter-pressure on.

In fact, it was another powerful mom, First Lady Melania Trump, who forced the federal government into a tiny bit of action this year when she championed the “Take It Down Act,” which requires tech companies to quickly remove nonconsensual explicit images. I sincerely doubt her husband would have signed that particular bill without her urging.

So, if we are lucky, the efforts of women like Siebel Newsom may turn out to be the bit of powerful sanity needed to put a check on the world-domination fantasies of the broligarchy.

Because tech bros are not yet all-powerful, despite their best efforts, and certainly not yet immune to the power of moms.


The Microchip Cold War: The US-China Power Competition Over NVIDIA

The US and China have long competed for global power, particularly in the technology sector. Since 2022, the US has systematically restricted the supply of high-performance NVIDIA chips to China. In today’s world, competition for power is no longer pursued solely through traditional means such as military force; the US uses chips (semiconductors) as an instrument of political pressure. This policy is not just about economic or trade value: it has become part of a technological statecraft designed to counter China’s military potential and its use of Artificial Intelligence (AI).

Semiconductors as an Instrument of Power

The US policy of restricting high-end semiconductors to China marks a paradigm shift: chips are no longer merely industrial commodities but have become tools for exercising power. Export controls on high-performance chips and on the components that enable their production have been implemented by the Bureau of Industry and Security (BIS). These steps show that the US is restructuring the geopolitical arena of technology.

AI today relies heavily on chips that can process vast amounts of data. The US restricts the export of high-end chips, such as the NVIDIA H100 and A100. A country’s AI development capacity could be severely compromised without access to these chips. The H100 is more than just a technological component; it serves as a strategic enabler that determines a country’s ability to maintain military dominance.

NVIDIA and the Security Logic Behind Export Control

In its 2023 announcement, the Bureau of Industry and Security (BIS) expanded export oversight, targeting not only specific chip models but also performance thresholds, most notably those relevant to frontier AI development. The NVIDIA A100 and H100 are highly advanced data-centre and AI chips, particularly well suited to training complex AI models on supercomputers, including for military applications and demanding research.

To prevent misuse, the US government has implemented licensing requirements for chips like the A100 and H100, and has placed NVIDIA’s reduced-performance variants, the A800 and H800, under increased scrutiny as well. The export restrictions stem from concerns that NVIDIA GPUs could be used by China to train AI models with military applications; the aim is not only to slow China’s technological progress but also to safeguard US national interests.

The US understands very well that high-performance chips are the “brain machines” that can accelerate the development of military superiority, intelligence analysis and even autonomous systems. Limiting access to computing capacity and high-performance hardware is therefore a way to delay a rival’s capabilities without resorting to direct military confrontation. This is a concrete manifestation of the “battlefield” shifting into the technological and regulatory arenas.

Vulnerable Supply Chains and Dependence on Taiwan

In controlling chips, the US must recognize an undeniable reality: NVIDIA’s chips are fabricated almost entirely in Taiwan, an island at the centre of the geopolitical conflict between Washington and Beijing. The Congressional Research Service (2024) reports that approximately 90% of the world’s most advanced semiconductor production is based in Taiwan, manufactured by the leading Taiwanese foundry, Taiwan Semiconductor Manufacturing Company Ltd (TSMC). This creates a structural dependency that poses serious risks to US economic and technological security.

Concentrating semiconductor production in a single region creates vulnerabilities that could destabilize the global technological system: any tensions in the Taiwan Strait would disrupt US access to the computing infrastructure it relies on. Export restrictions are therefore just one step in a much more complex strategy, one that requires the US to diversify production locations and ensure that the chip supply chain is not concentrated in a single region.

Effectiveness and Adaptation Room for China

NVIDIA’s chip restrictions were intended to curb the pace of AI modernization in China, but China has still been able to optimize the efficiency of its models. This demonstrates that limiting hardware performance doesn’t always equate to limiting innovation. At the same time, unofficial market channels have emerged, allowing NVIDIA GPUs to remain accessible through third parties. This adaptation demonstrates that hardware control has limitations, especially when demand remains high and global distribution networks are not always transparent.

Looking at its overall effectiveness, US policy has succeeded in slowing China’s computing capabilities, but it hasn’t stopped its strategic potential. Instead, it is pushing China toward self-sufficiency in strengthening its technological foundation, even though the quality of local chips hasn’t yet matched NVIDIA’s standards. In other words, restricting NVIDIA’s chip exports isn’t meant to end competition, but rather to transform it into a race toward technological independence. The policy’s effectiveness will last only until China finds a way to adapt, and China is already working to fill that gap.

Policy Directions with Greater Strategic Opportunities

The effectiveness of compute policy depends on a governance architecture that holds every allied country to the same standards. Without a disciplined framework, export controls on China are merely an illusion, easily penetrated through gaps created by divergent economic and regulatory interests. Strategic alignment that pushes democratic countries to reduce this fragmentation of interests can open up greater policy opportunities. Many developing countries, however, see the semiconductor race as a competition for dominance, not as an effort to maintain security.

In other words, a successful computing policy is not one that simply limits China’s space, but one that manages technological gaps without creating competing computing blocs. The geopolitical challenge is maintaining superiority without forcing the world into two technological divides that would be difficult to control. The US strategy to secure a leading position in future technologies requires flexibility in responding to global dynamics.

A Future Determined by Computational Capacity

The debate over NVIDIA chips demonstrates the growing integration of political and technological power. US policy aims not only to restrain the flow of strategic goods but also to build a new computing-based power architecture. However, this policy also presents challenges, including dependence on Taiwan, China’s flexibility, and economic pressure on US chip companies.

In a global world that continues to move toward an AI-driven economy, the future will be determined by who can manage geopolitical risks, understand supply chain dynamics, and design visionary policies. Ultimately, GPU regulation is no longer simply a matter of export control; it demonstrates how countries navigate a power struggle now measured in microchips.


Amazon begins rollout of Leo high-speed Internet service

Amazon shows off its new logo at a logistic and distribution center in Werne, Germany, in 2017. On Tuesday, the company announced the rollout of its satellite-based Amazon Leo Internet service for select enterprise customers, with a wider rollout planned in 2026. File Photo by Friedemann Vogel/EPA-EFE

Nov. 25 (UPI) — Online retailer Amazon has begun to roll out its Leo Internet service, which offers gigabit speeds via its satellite network for businesses and other organizations.

Amazon’s enterprise customers will be the first to use the Amazon Leo Internet service that includes a new “Ultra” antenna, and a wider rollout is planned for 2026, Amazon announced on Tuesday.

Amazon officials said Leo is designed to extend reliable, high-speed Internet to those beyond the reach of existing networks, including millions of businesses, government entities and organizations that are located in areas where Internet service is unreliable.

“Amazon Leo represents a massive opportunity for businesses operating in challenging environments,” said Chris Webber, vice president of consumer and enterprise business for Amazon Leo.

“We’ve designed Amazon Leo to meet the needs of some of the most complex business and government customers out there,” Webber added.

“We’re excited to provide them with the tools they need to transform their operations, no matter where they are in the world.”

The Amazon Leo Internet service uses an innovative network design, satellites and “high-performance phased-array antennas” to support download speeds of up to 1 gigabit per second and upload speeds of up to 400 megabits per second.

A new antenna dubbed Leo Ultra enables users to attain such downloading and uploading speeds, which exceed those of the competing Starlink Performance Kit, according to The Verge.

SpaceX officials said a new V3 satellite will support faster uploading and downloading speeds next year.

Amazon also has more than 150 satellites orbiting the Earth to provide digital communications; the network is undergoing initial testing with a small group of enterprise customers.

Commercial airline JetBlue is among Amazon Leo’s enterprise customers participating in the service’s initial rollout.

“We knew Amazon Leo would share our passion for customer-first innovation,” JetBlue President Marty St. George said.

“Choosing Amazon Leo reflects our commitment to staying ahead of what customers want most when traveling, such as fast, reliable performance and flexibility in our free in-flight Wi-Fi.”

Amazon Leo also enables enterprise customers to connect directly to their cloud-based accounts and establish private network interconnects so that they can connect and communicate with remote locations using their respective data centers and core networks.


China sends spacecraft to pick up stranded astronauts | Space News

Beijing carries out emergency launch to relieve space station crew left without working return capsule.

China has rushed to launch an uncrewed spacecraft to relieve three astronauts left on board the Tiangong space station without a passage to Earth.

State broadcaster CCTV showed a Long March-2F rocket carrying the Shenzhou-22 spacecraft lifting off from the Jiuquan Satellite Launch Centre shortly after noon local time (04:00 GMT) on Tuesday.


The Shenzhou-22 mission was originally planned to be crewed and take off in 2026.

However, the launch was brought forward after debris damaged the Shenzhou-20, which is currently attached to the Tiangong station, making it unsafe for carrying humans to Earth.

That disrupted the last crew change on the permanently crewed Chinese space station in November.

Unable to fly home in Shenzhou-20, the three astronauts who had arrived in April for their six-month stay were forced to use Shenzhou-21 to return to Earth.

That left the three astronauts currently on board Tiangong without a flightworthy vessel that could return them home in the event of an emergency.

The uncrewed Shenzhou-22 will fill that gap.

The crew at the space station – Zhang Lu, Wu Fei and Zhang Hongzhang – are “working normally”, Chinese officials emphasised.

The incident marks a rare setback for China’s rapidly growing space programme, which plans to send astronauts to the moon by 2030.

Beijing has poured billions into the sector in recent decades as it seeks to match the capabilities of the United States, Russia and Europe.

China became the third country to send humans into orbit, after the US and the former Soviet Union, doing so in 2003.


Trump launches ‘Genesis Mission’ to harness AI for scientific breakthroughs | Technology News

Trump signs order to integrate supercomputers and data assets in order to create ‘AI experimentation platform’.

United States President Donald Trump has unveiled a national initiative to mobilise artificial intelligence (AI) for accelerating scientific breakthroughs.

Trump signed an executive order on Monday to establish “The Genesis Mission”, the latest iteration of his administration’s aggressive strategy for spurring AI development through deregulation, infrastructure investment and public-private collaboration.


Trump’s order directs US Energy Secretary Chris Wright to unite scientists and technologies at the country’s 17 national laboratories into “one cooperative system for research”.

Under the initiative, US supercomputers and data resources will be integrated to create a “closed-loop AI experimentation platform”, according to the order.

The White House, which likened the initiative to the Apollo programme that put the first man on the moon, said priority areas of focus would include the “greatest scientific challenges of our time,” such as nuclear fusion, semiconductors, critical materials and space exploration.

Michael Kratsios, the White House’s top science adviser, said the initiative took a “revolutionary approach” to scientific research.

“The Genesis Mission connects world-class scientific data with the most advanced American AI to unlock breakthroughs in medicine, energy, materials science, and beyond,” Kratsios said.

Chipmaker Nvidia and AI startup Anthropic said on Monday that they were partnering with the Trump administration on the initiative.

“Uniting the National Labs, USG, industry, and academia, this effort will connect America’s leading supercomputers, AI systems, and next-generation quantum machines into the most complex scientific instrument ever built – accelerating breakthroughs in energy, discovery, and national security,” Nvidia said in a social media post, referring to the US government (USG).

Since re-entering the White House, Trump has made cutting red tape to fast-track the development of AI a key plank of his economic agenda.

Last week, Trump called on the US Congress to pass legislation to create a national standard for AI, while criticising state governments over their laws regulating the emerging technology.

“Overregulation by the States is threatening to undermine this Growth Engine,” Trump said on his platform, Truth Social.

“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes.”

Benjamin H Bratton, an AI expert at the University of California, San Diego, welcomed the initiative as a move towards the “diffusion” of the technology.

“It is less important ‘whose’ AI people have access to than they have universal access at all,” Bratton told Al Jazeera.

“Most attempts to throttle AI in the USA and EU [European Union] come from cultural, economic and political incumbents protecting their turf.”

“Those locked out of positions of artificially scarce social agency have the most to gain,” Bratton added. “I support diffusion, not any particular administration.”


U.S. Ties Steel Tariff Relief to ‘Balanced’ EU Digital Rules

The United States is asking the European Union (EU) to change its tech regulations before reducing U.S. tariffs on steel and aluminum from the EU. EU ministers wanted to discuss their July trade deal, which included cuts to U.S. tariffs on EU steel and the removal of tariffs on goods such as wine and spirits. However, U.S. Commerce Secretary Howard Lutnick stated that the EU must first create a more balanced approach to its digital sector rules.

After a meeting with EU ministers, Lutnick said they could address steel and aluminum issues together if the EU improved its regulations. European Trade Commissioner Maros Sefcovic noted that he did not expect any immediate breakthroughs with the U.S. but was hopeful that discussions on steel solutions could begin. The July trade agreement set U.S. tariffs at 15% on many EU goods, while the EU agreed to lower some of its duties on U.S. imports; implementation is not expected until March or April, pending approval from European leaders.

The U.S. currently imposes a 50% tariff on metals and has also applied tariffs on related products, raising concerns in the EU about the impact on the trade agreement. The EU wants more of its products subject only to low tariffs and is open to discussing regulatory cooperation in areas including energy and economic security, particularly in relation to China.

With information from Reuters


Chilean city has fastest fixed broadband Internet in world, study says

Supporters of Chilean presidential candidate Jose Antonio Kast look at their phones while awaiting election results in Santiago on November 16, aided by fast internet. Photo by Ailen Diaz/EPA

Nov. 24 (UPI) — The Chilean city of Valparaíso has the fastest fixed broadband Internet in the world, according to the Speedtest Global Index, which ranks average connection speeds based on user tests.

The port city leads the latest ranking with an average download speed of 398.21 megabits per second, surpassing major cities such as Abu Dhabi in the United Arab Emirates, which placed second with 376 Mbps, and Lyon, France, which ranked third.

In the United States, Los Angeles is the first city to appear in the ranking, in 11th place, followed by New York in 12th.

Valparaíso ranked ahead of Chile’s capital, Santiago, because it sits in a strategic location for technology companies that use the city as a hub for developing fiber-optic infrastructure for Chile, South America and connections to Oceania.

“Valparaíso is the landing point for submarine cables such as Google’s Curie, América Móvil’s Mistral and SAC, which add capacity and redundancy to the connectivity ecosystem, while Google’s Humboldt transpacific cable with the Chilean government is set to land in Valparaíso in 2027,” Danilo Bórquez, who holds a doctorate in complex systems engineering and is a professor at the Adolfo Ibáñez University’s engineering school, told UPI.

He added that residents of Valparaíso have faster and more stable Wi-Fi.

“With more than 300 Mbps you can have several users online at the same time. Video calls run smoothly and game or photo downloads and backups are much faster. You can also hold classes or use educational platforms without interruptions, with materials downloading in seconds or minutes,” Bórquez said.

At the national level, fiber-optic adoption is high. “In Chile, it accounts for about 70% of fixed connections, which drives the typical speeds measured by Speedtest. There are companies that can migrate or extend fiber to another 4.3 million households in Chile, which increases the base of users with high-speed plans.”
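
Those headline speeds translate directly into everyday download times. A rough illustration (the 5 GB file size is a hypothetical chosen for the example, not a figure from the article):

```python
# How long a 5 GB download takes at Valparaiso's reported average of 398.21 Mbps.
speed_mbps = 398.21        # average download speed from the Speedtest Global Index
file_size_gb = 5           # hypothetical large download, e.g. a game or a backup

file_size_megabits = file_size_gb * 8 * 1000  # 1 GB = 8,000 megabits (decimal units)
seconds = file_size_megabits / speed_mbps
print(round(seconds))      # about 100 seconds
```

At a more typical 50 Mbps connection, the same download would take roughly 13 minutes — the gap between "seconds" and "minutes" that Bórquez describes.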

Marco Aravena, director of Modernization and Digital Transformation and a computer engineering professor at the University of Valparaíso, told UPI that service providers come to the city to expand fiber-optic Internet access.

“In Valparaíso you have Las Torpederas beach, where one of the submarine cables that brings fiber-optic connections from other parts of the world comes ashore. We are one of the technology hubs through which internet arrives in Chile. It’s not that users connect directly to that fiber, but they have more direct access to it,” he said.

Experts say these factors make Valparaíso attractive for people who want to work in hybrid or remote roles.

“Valparaíso is becoming a hub that allows people to come live and work here because of its strong connectivity. It also attracts students because there are many universities in the city,” Aravena said.

However, while the city has seen significant investment in technology and networks, it has attracted little investment in broader infrastructure or economic development.

According to the latest 2024 Urban Quality of Life Index from the Catholic University, Valparaíso scored medium-high in connectivity and mobility, but low in housing and surroundings and medium-low in health and the environment.


Amazon to invest $50bn in AI for US government customers | Business and Economy News

The federal government seeks to develop tailored artificial intelligence (AI) solutions and drive significant cost savings by leveraging AWS’s dedicated capacity.

Amazon is set to invest up to $50bn to expand artificial intelligence (AI) and supercomputing capacity for United States government customers, in one of the largest cloud infrastructure commitments targeted at the public sector.

The e-commerce giant announced the investment on Monday.


The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of new AI and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions through new data centres equipped with advanced computing and networking systems.

One gigawatt of computing power is roughly enough to power about 750,000 US households on average.

“This investment removes the technology barriers that have held the government back”, Amazon Web Services (AWS) CEO Matt Garman said.

AWS is already a major cloud provider to the US government, serving more than 11,000 government agencies.

Amazon’s initiative aims to provide federal agencies with enhanced access to a comprehensive suite of AWS AI services. These include Amazon SageMaker for model training and customisation, Amazon Bedrock for deploying AI models and agents, and foundation models such as Amazon Nova and Anthropic’s Claude.

The federal government seeks to develop tailored AI solutions and drive significant cost savings by leveraging AWS’s dedicated and expanded capacity.

The push also comes as the US, along with other countries such as China, intensifies efforts to advance AI development and secure leadership in the emerging technology.

Tech companies, including OpenAI, Alphabet and Microsoft, are pouring billions of dollars into building out AI infrastructure, boosting demand for computing power required to support the services.

On Wall Street, Amazon’s stock was up 1.7 percent in midday trading.

Other tech stocks surged amid the recent investments. Alphabet, Google’s parent company, closed in on a $4 trillion valuation on Monday and was set to become only the fourth company to enter the exclusive club. Its stock was up 4.7 percent.

Last week, Nvidia announced expectations of higher fourth-quarter revenue, a month after the tech giant announced a partnership to build supercomputers for the US Department of Energy — a deal that pushed the company’s valuation above $5 trillion.

Nvidia stock was up by 1.8 percent in midday trading.


Are tech companies using your private data to train AI models? | Technology News

Leading tech companies are in a race to release and improve artificial intelligence (AI) products, leaving users in the United States to puzzle out how much of their personal data could be extracted to train AI tools.

Meta (which owns Facebook, Instagram, Threads and WhatsApp), Google and LinkedIn have all rolled out AI app features that have the capacity to draw on users’ public profiles or emails. Google and LinkedIn offer users ways to opt out of the AI features, while Meta’s AI tool provides no means for its users to say “no, thanks.”


“Gmail just flipped a dangerous switch on October 10, 2025 and 99% of Gmail users have no idea,” a November 8 Instagram post said.

Posts warned that the platforms’ AI tool rollouts make most private information available for tech company harvesting. “Every conversation, every photo, every voice message, fed into AI and used for profit,” a November 9 X video about Meta said.

Technology companies are rarely fully transparent when it comes to the user data they collect and what they use it for, Krystyna Sikora, a research analyst for the Alliance for Securing Democracy at the German Marshall Fund, told PolitiFact.

“Unsurprisingly, this lack of transparency can create significant confusion that in turn can lead to fear mongering and the spread of false information about what is and is not permissible,” Sikora said.

The best – if tedious – way for people to know and protect their privacy rights is to read the terms and conditions, since they often explicitly outline how data will be used and whether it will be shared with third parties, Sikora said. The US doesn’t have any comprehensive federal laws on data privacy for technology companies.

Here’s what we learned about how each platform’s AI is handling your data:

Social media claim: “Starting December 16th Meta will start reading your DMs, every conversation, every photo, every voice message fed into AI and used for profit.” – November 9 X post with 1.6 million views as of November 19.

The facts: Meta announced a new policy to take effect December 16, but that policy alone does not result in your direct messages, photos and voice messages being fed into its AI tool. The policy involves how Meta will customise users’ content and advertisements based on how they interact with Meta AI.

For example, if a user interacts with Meta’s AI chatbot about hiking, Meta might start showing that person recommendations for hiking groups or hiking boots.

But that doesn’t mean your data isn’t being used for AI purposes. Although Meta doesn’t use people’s private messages in Instagram, WhatsApp or Messenger to train its AI, it does collect user content that is set to “public” mode. This can include photos, posts, comments and reels. If the user’s Meta AI conversations involve religious views, sexual orientation and racial or ethnic origin, Meta says the system is designed to avoid parlaying these interactions into ads. If users ask questions of Meta AI using its voice feature, Meta says the AI tool will use the microphone only when users give permission.

There is a caveat: The tech company says its AI might use information about people who don’t have Meta product accounts if their information appears in other users’ public posts. For example, if a Meta user mentions a non-user in a public image caption, that photo and caption could be used to train Meta AI.

Can you opt out? No. If you are using Meta platforms in these ways – making some of your posts public and using the chatbot – your data could be used by Meta AI. There is no way to deactivate Meta AI in Instagram, Facebook or Threads. WhatsApp users can deactivate the option to talk with Meta AI in their chats, but this option is available only per chat, meaning that you must deactivate the option in each chat’s advanced privacy settings.

The X post inaccurately advised people to submit this form to opt out. But the form is simply a way for users to report when Meta’s AI supplies an answer that contains someone’s personal information.

David Evan Harris, who teaches AI ethics at the University of California, Berkeley, told PolitiFact that because the US has no federal regulations about privacy and AI training, people have no standardised legal right to opt out of AI training in the way that people in countries such as Switzerland, the United Kingdom and South Korea do.

Even when social media platforms provide opt-out options for US customers, it’s often difficult to find the settings to do so, Harris said.

Deleting your Meta accounts does not eliminate the possibility of Meta AI using your past public data, Meta’s spokesperson said.

Google

Social media claim: “Did you know Google just gave its AI access to read every email in your Gmail – even your attachments?” – November 8 Instagram post with more than 146,000 likes as of November 19.

The facts: Google has a host of products that interact with private data in different ways. Google announced on November 5 that its AI product, Gemini Deep Research, can connect to users’ other Google products, including Gmail, Drive and Chat. But, as Forbes reported, users must first give permission to employ the tool.

Users who want to allow Gemini Deep Research to have access to private information across products can choose what data sources to employ, including Google search, Gmail, Drive and Google Chat.

There are other ways Google collects people’s data:

  • Through searches and prompts in Gemini apps, including its mobile app, Gemini in Chrome or Gemini in another web browser
  • Any video or photo uploads that the user entered into Gemini
  • Through interactions with apps such as YouTube and Spotify, if users give permission
  • Through message and phone calls apps, including call logs and message logs, if users give permission.

A Google spokesperson told PolitiFact the company doesn’t use this information to train AI when registered users are under age 13.

Google can also access people’s data when they have smart features activated in their Gmail and Google Workspace settings (these are automatically on in the US), which gives Google consent to draw on email content and user activity data to help users compose emails or suggest Google Calendar events. With optional paid subscriptions, users can access additional AI features, including in-app Gemini summaries.

Turning off Gmail’s smart features can stop Google’s AI from accessing Gmail, but it doesn’t stop Google’s access to the Gemini app, which users can either download or access in a browser.

A California lawsuit accuses Gemini of spying on users’ private communications. The lawsuit says an October policy change gives Gemini default access to private content such as emails and attachments in people’s Gmail, Chat and Meet. Before October, users had to manually allow Gemini to access the private content; now, users must go into their privacy settings to disable it. The lawsuit claims the Google policy update violates California’s 1967 Invasion of Privacy Act, a law that prohibits unauthorised wiretapping and recording confidential communications without consent.

Can you opt out? If people don’t want their conversations used to train Google AI, they can use “temporary” chats or chat without signing into their Gemini accounts. Doing that means Gemini can’t save a person’s chat history, a Google spokesperson said. Otherwise, opting out of having Google’s AI in Gmail, Drive and Meet requires turning off smart features in settings.

LinkedIn

Social media claim: Starting November 3, “LinkedIn will begin using your data to train AI.” – November 2 Instagram post with more than 18,000 likes as of November 19.

The facts: LinkedIn, owned by Microsoft, announced on its website that starting November 3, it will use some US members’ data to train content-generating AI models.

The data the AI collects includes details from people’s profiles and public content that users post.

The training does not draw on information from people’s private messages, LinkedIn said.

LinkedIn also said, aside from the AI data access, that Microsoft started receiving information about LinkedIn members – such as profile information, feed activity and ad engagement – as of November 3 in order to target users with personalised ads.

Can you opt out? Yes. Autumn Cobb, a LinkedIn spokesperson, confirmed to PolitiFact that members can opt out if they don’t want their content used for AI training purposes. They can also opt out of receiving targeted, personalised ads.

To stop your data from being used for training purposes, go to data privacy, click on the option that says “Data for Generative AI Improvement” and then turn off the feature that says “use my data for training content creation AI models.”

And to opt out of personalised ads, go to advertising data in settings, and turn off ads on LinkedIn and the option that says “data sharing with our affiliates and select partners”.




In ‘A Sexual History of the Internet’ Mindy Seu reveals the unexpected

The technologist and professor Mindy Seu was having drinks when her friend casually referred to the phone as a sex toy. Think about it, her friend, Melanie Hoff, explained: We send nudes or watch porn, it’s vibrating and touch-sensitive — it’s practically an appendage.

“What exactly is sex, and what exactly is technology?” Seu wondered. “Neither can be cleanly defined.”

Around the same time, in 2023, Seu had just published “Cyberfeminism Index,” a viral Google Sheet-turned-Brat-green-doorstopper from Inventory Press. Critics and digital subcultures embraced the niche volume like a manifesto — and a marker of Seu’s arrival as a public intellectual whose archiving was itself a form of activism. The cool design didn’t hurt. “If you’re a woman who owns a pair of Tabis or Miistas, you are going to have this tome,” joked comedian Brian Park on his culture podcast “Middlebrow.”

Still, the knot between sexuality and technology tugged at her. “Recently, my practice has evolved toward technology-driven performance and publication,” she said. “It’s not exactly traditional performance art, but I believe that spaces like lectures and readings can be made performative.” Though she wasn’t yet finished exploring this theme, she wasn’t sure how to approach it next — until an experiment by Julio Correa, a former Yale graduate student, sparked an idea. Correa had devised an Instagram Stories-based lecture format, and she immediately saw its potential. She reached out to ask if she could “manipulate” his idea into a performance piece, and would he like to collaborate?


Thus, “A Sexual History of the Internet” was born. The work is two things at once: a participatory lecture-performance conducted through the audience’s phones, and an accompanying, palm-sized, 700-plus-page “script” examining how our devices serve as bodily extensions.

The book isn’t exhaustive but instead a curated miscellany of non-sequiturs and the kind of dinner-party lore Seu delights in. Did you know that the anatomical structure of the clitoris wasn’t fully mapped until a decade after the invention of the World Wide Web? Or that the first JPEG — introduced in 1992 at USC — cribbed a Playboy centerfold nicknamed “Lenna,” which journalist Emily Chang, author of the 2018 book “Brotopia,” called “tech’s original sin”?

The metaverse, web3 and AI — none of this is new, Seu said in her loft this past Saturday, hours before her West Coast debut at the Geffen Contemporary at MOCA. “But understanding the arc is helpful, especially how it’s tied to militaristic origins rooted in power, and how those same people were also confronted with sexuality.”

She’s just returned from a whirlwind tour — Antwerp, New York, Oslo, Madrid — with Tokyo next month. She splits her time between L.A. and Berlin, where her boyfriend lives, but for now, she’s staying put in what she calls her “bachelor pad on the set of a ‘90s erotic thriller,” inherited from a friend, the artist Isabelle Albuquerque.

The floor-to-ceiling windows high in a historic Brutalist artists’ complex overlook MacArthur Park and the downtown skyline. She’s offset the building’s cement with a childhood baby grand piano and her grandmother’s lacquer vanity with pearl inlay. That Seu marries the feminine and the spartan in her space feels intentional — a reflection of the dualities that animate her life and work.

"A Sexual History of the Internet" by Mindy Seu

“A Sexual History of the Internet” by Mindy Seu

(Photography by Tim Schutsky | Art direction by Laura Coombs)

Though she moved from New York three years ago, she resists calling herself an Angeleno — partly, she admits, because she never learned to drive despite growing up in Orange County. Her parents ran a flower shop after immigrating from South Korea. The household was conservative and Presbyterian, and promoted abstinence. As with many millennials, her sexual awakening unfolded online.

“I asked Jeeves how to have an orgasm,” she writes. “I sexted with classmates on AOL Instant Messenger. Any curiosities were saved until I could sneak onto my family’s shared ice blue iMac G3 in the living room.”

At 34, the very-online academic holds a master’s from Harvard’s Graduate School of Design and taught at Rutgers and Yale before joining her alma mater, UCLA, as one of the youngest tenured professors (and perhaps the only one who has modeled for JW Anderson and Helmut Lang). Her first three years at UCLA have each had their crises — encampments, fires, ICE raids — yet her Gen Z students give her hope. “They’re so principled and motivated, even if it’s in a nihilistic way,” she said.

Online, fans declare their “brain crushes” on Seu, whose ultra-detailed spreadsheets have become unlikely catnip for TikTok. Vanity Fair dubbed her the rare cybernaut who “lands soft-focus photoshoots in niche lifestyle publications.” Her unusual power is the ability to move through different fields, Trojan-horsing her theories across academia, the art world, the lit scene, tech, fashion, et al. Seu’s notoriety continued to swell after appearing on the popular internet talk show “Subway Takes” with the standout zinger: “Gossip is socially useful, especially to women and the marginalized.”

“Mindy’s really good at bridging different audiences who might not read an academic text about the history of the internet but are interested in Mindy’s practice,” said Correa, Seu’s student-turned-collaborator. When the two workshopped their performance last year on their finsta (a.k.a. fake Instagram), they encountered one major hurdle: censorship. They had to get creative with their algospeak (like changing “sex” to “s*x”) to keep from getting banned.


Mindy Seu in her MacArthur Park loft.

(Carlin Stiehl / For The Times)

“A Sexual History of the Internet,” designed by Laura Coombs, carries that collaborative ethos into its financial structure. Seu’s first book went through traditional publishing, where authors often receive about 10% and contributors receive fixed fees. This time, she wanted a citation model that compensated the 46 thinkers who shaped her understanding of the subject.

She approached Yancey Strickler, director of Metalabel, “an indie record label for all forms of culture,” and co-founder of Kickstarter. Seu’s original proposal directed all profits to her collaborators. “Everyone got paid but her,” Strickler said. If she wanted the model to be replicated, he told her, it needed a capitalist backbone.

They landed on Citational Splits, under which everyone who was cited joined a 30% profits pool, in perpetuity, across future printings (27 opted in); another 60% goes to Seu and five core collaborators. Strickler likened it to music royalties or company shares: “Your presence increases the project’s value, and some of that value should flow back to you.”

Neither can name a publishing precedent. “It shows a profound, practical morality that underlies her work,” he said.

At MOCA, about 300 Angelenos braved an atmospheric river to sit in the darkened former police car warehouse bathed in red light. No projector, no spotlight. A pair of Tabis winks at her all-black-clad friend; a couple holds hands as Seu moves through the room. (“I intentionally wear very noisy shoes,” she said earlier.)

With the calm cadence of a flight attendant, Seu instructs everyone to put their phones on Do Not Disturb, turn sound and brightness to max, and open Instagram to find @asexualhistoryoftheinternet.

The audience reads in unison when their designated color appears. What follows is a chorus of anecdotes, artworks and historical fragments tracing the pervasive — and sometimes perverted — roots of our everyday technologies. Hearing men and women say “click and clitoris” together is its own spectacle.

“From personal websites to online communities, cryptocurrencies to AI, the internet has been built on the backs of unattributed sex workers,” one slide notes. Sex work has long been an early adopter of emerging technology — from VHS to the internet — and the present is no exception. Two years ago, OnlyFans creators earned more than the combined salaries of every NBA player; today, the company generates more revenue per employee than Apple or Nvidia.

Seu ends with the widely known dominatrix Mistress Harley’s concept of data domination, a subset of BDSM in which her “subs” (a.k.a. submissives) grant her remote access to their machines. Seu tells the crowd that she has essentially done the same, “viewing the voyeurs” and taking photos of us throughout the performance, which are already posted to Instagram.

We walk out into the dark rain, wondering what exactly we witnessed — and realizing, perhaps, we’ve been witnessing it all along.




Malaysia says it will ban social media for under-16s from next year | Social Media News

Move comes as a growing number of countries are rolling out measures to limit children’s exposure to digital platforms.

Malaysia plans to ban social media for users under the age of 16 starting from next year, joining a growing list of countries choosing to limit access to digital platforms due to concerns about child safety.

Communications Minister Fahmi Fadzil said on Sunday the government was reviewing mechanisms used to impose age restrictions for social media use in Australia and other nations, citing a need to protect youths from online harms such as cyberbullying, financial scams and child sexual abuse.


“We hope by next year that social media platforms will comply with the government’s decision to bar those under the age of 16 from opening user accounts,” he told reporters, according to a video of his remarks posted online by local daily The Star.

The effects of social media on children’s health and safety have become a growing global concern, with companies including TikTok, Snapchat, Google and Meta Platforms – the operator of Facebook, Instagram and WhatsApp – facing lawsuits in the United States for their role in driving a mental health crisis.

In Australia, social media platforms are poised to deactivate accounts registered to users younger than 16 next month, under a sweeping ban for teenagers that is being closely watched by regulators around the world.

France, Spain, Italy, Denmark and Greece are also jointly testing a template for an age verification app.

Malaysia’s neighbour Indonesia said in January it planned to set a minimum age for social media users, but later issued a less stringent regulation requiring tech platforms to filter negative content and impose stronger age verification measures.

Malaysia has put social media companies under greater scrutiny in recent years in response to what it claims to be a rise in harmful content, including online gambling and posts related to race, religion and royalty.

Platforms and messaging services with more than eight million users in Malaysia are now required to obtain a license under a new regulation that came into effect in January.


Amazon lays off 1,800 engineers amid efficiency push

Software engineering jobs are among the thousands Amazon cut in October amid a push to downsize and increase efficiency and innovation, the company reported on Friday. Photo by Friedemann Vogel/EPA-EFE

Nov. 21 (UPI) — Engineers formerly employed by Amazon accounted for about 40% of its 4,700 jobs cut in October as the online retailer and tech company seeks greater efficiency and innovation.

Amazon fired more than 1,800 engineers in October amid downsizing, while also seeking more rapid innovation, CNBC reported on Friday.

Amazon had 1,578,000 employees as of Sept. 30, twice as many as in 2019, according to Stock Analysis. Its market capitalization of $2.3 trillion is the fifth-highest in the world, The Motley Fool reported.

The company announced the job cuts in Worker Adjustment and Retraining Notification (WARN) filings in California, New Jersey, New York and the state of Washington.

The October cuts are the largest single-month reduction in Amazon’s 31 years in business and part of the more than 14,000 layoffs announced last month by Amazon officials.

The tech firm’s human resources leader, Beth Galetti, said Amazon needs more artificial intelligence engineers to enable it to better manage operations while reducing labor costs.

“This generation of AI is the most transformative technology we’ve seen since the internet, and it’s enabling companies to innovate much faster than ever before,” Galetti said in the memo notifying states of the job cuts.

“We’re convinced that we need to be organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business.”

Most of the engineers fired this year are software specialists.

Amazon’s job cuts echo those of other tech firms, which together have announced nearly 113,000 job reductions across 231 companies so far this year, according to Layoffs.fyi.

Amazon Chief Executive Officer Andy Jassy in recent years has emphasized downsizing to make Amazon more efficient by cutting its organizational fat, CNBC reported.

The online retailer is expected to announce more job cuts in January as it reshapes its workforce to improve efficiency and reduce bureaucracy.


Seoul shares tumble amid AI bubble fears; won plunges to 7-month low

Officials work at a dealing room of Hana Bank in Seoul on Friday, after the KOSPI closed at 3,853.26, down 151.59 points (3.79%) from the previous day. Photo by Yonhap

South Korean stocks closed sharply lower Friday, as renewed concerns over an artificial intelligence (AI) bubble weighed heavily on big-cap tech shares. The local currency fell to the lowest level in seven months against the U.S. dollar on massive foreign stock selling.

The benchmark Korea Composite Stock Price Index (KOSPI) tumbled 151.59 points, or 3.79 percent, to close at 3,853.26.

Trade volume was moderate at 307.95 million shares worth 14.02 trillion won (US$9.5 billion), with decliners outnumbering gainers 718 to 177.

Foreigners sold a net 2.83 trillion won worth of shares, while retail and institutional investors bought a net 2.29 trillion won and 495.46 billion won worth of shares, respectively.

According to the Korea Exchange, offshore investors’ net selling was the largest since Feb. 26, 2021, when they offloaded 2.83 trillion won worth of shares.

The index opened lower, tracking overnight losses on Wall Street, and further extended its decline as investors were wary of the valuation of AI-related shares and their aggressive investment plans.

Also affecting the sentiment was the Federal Reserve‘s monetary policy, as expectations for further rate cuts continued to wane.

“The market surrendered its gains from yesterday’s Nvidia earnings surprise. Following recent sharp gains, volatility appears to have persisted,” Han Ji-young, a researcher at Kiwoom Securities, said.

“But there remains ample potential for sentiment to reverse depending on upcoming key economic data and additional AI-related developments,” the analyst added.

Tech shares dipped following a rally in the previous session.

Market bellwether Samsung Electronics fell 5.77 percent to 94,800 won, and chip giant SK hynix plunged 8.76 percent to 521,000 won.

Major battery maker LG Energy Solution lost 3.51 percent to 425,500 won, and LG Chem dipped 5.53 percent to 367,000 won.

Nuclear power plant builder Doosan Enerbility sank 5.92 percent to 73,100 won, and defense giant Hanwha Aerospace shed 5.13 percent to 869,000 won.

Leading shipbuilder HD Hyundai Heavy skidded 4.8 percent to 555,000 won, and its rival Hanwha Ocean lost 4.16 percent to 119,800 won. No. 1 steelmaker POSCO declined 3.42 percent to 310,500 won.

Carmakers finished mixed. Top automaker Hyundai Motor retreated 0.95 percent to 259,500 won, while its sister affiliate Kia rose 0.53 percent to 114,000 won.

Leading financial group KB Financial decreased 0.58 percent to 120,500 won, while internet portal operator Naver surged 2.14 percent to 262,500 won.

The local currency was quoted at 1,475.6 won against the greenback at 3:30 p.m., down 7.7 won from the previous session.

It marked the weakest level since April 9, when it finished at 1,484.1 won. The April 9 figure was the lowest since March 12, 2009, when the won closed at 1,496.5 won amid the global financial crisis.

Bond prices, which move inversely to yields, ended higher. The yield on three-year Treasurys fell 3.6 basis points to 2.872 percent, and the return on the benchmark five-year government bonds lost 3.9 basis points to 3.076 percent.

Copyright (c) Yonhap News Agency prohibits its content from being redistributed or reprinted without consent, and forbids the content from being learned and used by artificial intelligence systems.


GAO: ‘Digital footprints’ endanger the nation, military and personnel

Nov. 20 (UPI) — Using digital devices creates a data footprint that endangers national security, U.S. military personnel and military operations, the Government Accountability Office said.

The Defense Department has cited publicly available data generated by defense platforms, personal devices and online activities as a growing threat that requires continual caution, the GAO reported on Monday.

“Massive amounts of traceable data about military personnel and operations now exist due to the digital revolution,” according to the report.

“When aggregated, these ‘digital footprints’ can threaten military personnel and their families, operations and, ultimately, national security.”

Such information could enable “malicious actors” to trace the movements of ships and aircraft and otherwise endanger military operations, according to Military Times.

The GAO report says 10 Defense Department components are vulnerable to security lapses that create “volumes of traceable data.”

The vulnerability is especially prevalent for training and security assessment within the U.S. Cyber Command, National Security Agency, Defense Intelligence Agency, Defense Counterintelligence and Security Agency, U.S. Special Operations Command and every U.S. military branch, according to the GAO.

Only the U.S. Special Operations Command has consistently trained its personnel to minimize the risks created by digital information, the report says.

Most Defense Department agencies and offices also fail to undertake threat assessments targeting force protection, insider threats, mission assurance and operations security.

The GAO said information conveyed via press releases, news reports, online activities, social media posts and ship coordinates could telegraph the routes of ships and aircraft and jeopardize their operations.

Only three of five offices within the Defense Department have issued policies or provided guidance to minimize the risks of digital information, but even those efforts are “narrowly focused” and insufficient, the GAO said.

The GAO report echoes concerns raised by federal lawmakers after a Signal app discussion of a pending military strike on Houthi targets on March 15 accidentally included The Atlantic journalist Jeffrey Goldberg.


Organization warns against giving AI toys to children

Nov. 20 (UPI) — Toys that use AI to interact with children might seem like a fun idea, but one organization is warning against them.

The nonprofit Fairplay released an advisory Thursday warning parents to avoid artificial intelligence-based children’s toys this holiday season.

AI toys are chatbots embedded in children’s toys — such as plushies, dolls, action figures, or kids’ robots — and use AI technology designed to communicate like a friend.

Examples include Miko, Curio Interactive’s Grok and Gabbo, Smart Teddy, FoloToy’s Kumma bear, Roybi and Keyi Technology’s Loona Robot Dog. Some of the toys are marketed to children as young as infants, Fairplay said in a statement.

“It’s ridiculous to expect young children to avoid potential harm here,” said Rachel Franz, a Fairplay program director, in a statement to NPR.

“Young children are especially susceptible to the potential harms of these toys, such as invading their privacy, collecting data, engendering false trust and friendship, and displacing what they need to thrive, like human-to-human interactions and time to play with all their senses. These can have long and short-term impacts on development,” she said.

Singapore-based FoloToy suspended sales of its Kumma bear after it was found to give inappropriate advice to children, CNN reported Wednesday. The bear’s chatbot talked about sexual fetishes, how to find knives in the home and how to light a match.

FoloToy CEO Larry Wang told CNN that the company had withdrawn Kumma and its other AI toys and is now “conducting an internal safety audit.”

The Toy Association, which represents toy manufacturers, told NPR that toys sold by responsible manufacturers and retailers must follow more than 100 strict federal safety standards and tests, including the Children’s Online Privacy Protection Act, which governs children’s privacy and data security online.

“The Toy Association urges parents and caregivers to shop only from reputable toymakers, brands, and retailers who prioritize children’s safety above all else,” the statement said. The organization added that it offers safety tips for AI and other connected products.

Fairplay offered more reasons that AI toys are not safe for children.

AI toys are usually powered by the same AI that has already harmed children, and young children who use them are less equipped to protect themselves than older children and teens, Fairplay said.

AI chatbots have led children to use them obsessively, have engaged children in explicit sexual conversations, and have encouraged unsafe behaviors, violence against others and self-harm.

AI toys may sabotage children’s trust by pretending to be trustworthy companions or “friends.” Young children are likely to treat connected toys and devices as if they were people and develop an emotional attachment to them.

These “relationships” can disrupt children’s real relationships and resilience by offering the illusion of “genuine friendship,” which isn’t possible from a machine.

Probably most concerning is that AI toys can invade family privacy by collecting sensitive data using audio and video recording, speech-to-text technology, and even voice, gesture, and facial recognition software, Fairplay said.

A child might talk to the toy and share personal thoughts, emotions, fears and desires, which can then be delivered to a third party. The toys could also record private family conversations or other children in the room.

Some toys even have facial recognition and video recording, which could take video of children in the bath or getting dressed.


Nvidia forecasts Q4 revenue above estimates despite AI bubble concerns | Technology News

Analysts expect AI chip demand to remain strong.

Nvidia has forecast fourth-quarter revenue above Wall Street estimates and is betting on booming demand for its AI chips from cloud providers even as widespread concerns of an artificial intelligence bubble grow stronger.

The world’s most valuable company expects fourth-quarter sales of $65bn, plus or minus 2 percent, compared with analysts’ average estimate of $61.66bn, according to data compiled by LSEG.
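As a quick sanity check on the figures above, the “plus or minus 2 percent” guidance band implies a revenue range whose low end still clears the analysts’ average estimate (a minimal sketch using only the numbers reported in the article):

```python
# Nvidia's Q4 guidance: $65bn, plus or minus 2 percent.
midpoint = 65.0  # guidance midpoint, $bn
low, high = midpoint * 0.98, midpoint * 1.02  # -> roughly $63.7bn to $66.3bn
street_estimate = 61.66  # analysts' average estimate, $bn (LSEG)

print(f"Guidance range: ${low:.2f}bn to ${high:.2f}bn")
# Even the bottom of the band exceeds the Street estimate.
print(f"Low end beats estimate: {low > street_estimate}")
```

Even the most conservative end of the guidance is about $2bn above the consensus estimate, which is why the forecast reads as a beat.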


The results from the AI chip leader mark a defining moment for Wall Street as global markets look to the chip designer to determine whether investing billions of dollars in AI infrastructure expansion has resulted in towering valuations that potentially outpace fundamentals.

“The AI ecosystem is scaling fast with more new foundation model makers, more AI start-ups across more industries and in more countries. AI is going everywhere, doing everything, all at once,” Nvidia CEO Jensen Huang said in a statement.

Before the results, doubts had pushed Nvidia shares down nearly 8 percent in November after a 1,200 percent surge in the past three years.

Sales in the data-centre segment, which accounts for a majority of Nvidia’s revenue, grew to $51.2bn in the quarter that ended on October 26. Analysts had expected sales of $48.62bn, according to LSEG data.

Warning signs

But some analysts noted that factors beyond Nvidia’s control could impede its growth.

“While GPU [graphics processing unit] demand continues to be massive, investors are increasingly focused on whether hyperscalers can actually put this capacity to use fast enough,” said Jacob Bourne, an analyst with eMarketer. “The question is whether physical bottlenecks in power, land and grid access will cap how quickly this demand translates into revenue growth through 2026 and beyond.”

Nvidia’s business also became increasingly concentrated in its fiscal third quarter, with four customers accounting for 61 percent of sales. At the same time, it sharply ramped up spending on renting back its own chips from its cloud customers, who otherwise cannot rent them out, with those contracts totalling $26bn, more than double the $12.6bn in the previous quarter.

Still, analysts and investors widely expected the underlying demand for AI chips, which has powered Nvidia results since ChatGPT’s launch in late 2022, to remain strong.

Nvidia CEO Jensen Huang said last month that the company has $500bn in bookings for its advanced chips through 2026.

Big Tech, among Nvidia’s largest customers, has doubled down on spending to expand AI data centres and snap up the most advanced, pricey chips as it commits to multibillion-dollar, multigigawatt build-outs.

Microsoft last month reported record capital expenditure of nearly $35bn for its fiscal first quarter, with roughly half of it spent on chips.

Nvidia expects an adjusted gross margin of 75 percent, plus or minus 50 basis points, in the fourth quarter, compared with market expectations of 74.5 percent.
