Artificial Intelligence

Bubble or boom? What to watch as risks grow amid record market rally

An estimated half a trillion dollars was wiped out from the financial markets this week, as some of the biggest tech companies, including Nvidia, Microsoft, and Palantir Technologies, saw a temporary but sizeable drop in their share prices on Tuesday. It may have been just a short-lived correction, but experts warn of mounting signs of a financial market crash that could cost several times this amount.

With dependence on tech and AI growing, critics argue that betting on these profits is a gamble, stressing that the future remains uncertain.

Singapore’s central bank joined a global chorus of warnings from the IMF, Fed Chair Jerome Powell, and Bank of England Governor Andrew Bailey about overvalued stocks.

The Monetary Authority of Singapore said on Wednesday that such a trend is fuelled by “optimism in AI’s ability to generate sufficient future returns”, which could trigger sharp corrections in the broader stock market.

Goldman Sachs and Morgan Stanley predict a 10–20% decline in equities over the next one to two years, their CEOs told the Global Financial Leaders’ Investment Summit in Hong Kong, CNBC reported.

Experts interviewed by Euronews Business also agree that a sizeable correction could be on the way.

In a worst-case scenario, a market crash could wipe out trillions of dollars from the financial markets.

According to Mathieu Savary, chief European strategist at BCA Research, Big Tech companies, including Nvidia and Alphabet, would cause a $4.4 trillion (€3.8tn) market wipeout if they were to lose just 20% of their stock value.

“If they go down 50%, you’re talking about an $11tn (€9.6tn) haircut,” he said.
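
As a back-of-the-envelope check (a hypothetical sketch, not part of BCA’s published analysis), the two figures are consistent with a combined market capitalisation of roughly $22tn for the group:

```python
# Hypothetical sanity check of BCA Research's figures, assuming losses
# scale linearly with a fixed combined Big Tech market capitalisation.
loss_at_20pct = 4.4e12                # $4.4tn wipeout at a 20% drop
implied_cap = loss_at_20pct / 0.20    # -> ~$22tn combined market cap
loss_at_50pct = implied_cap * 0.50    # -> ~$11tn, matching Savary's quote

print(f"Implied combined market cap: ${implied_cap / 1e12:.1f}tn")
print(f"Loss at a 50% drawdown:      ${loss_at_50pct / 1e12:.1f}tn")
```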

AI rally: Bubble or boom?

The US stock market has defied expectations this year. The S&P 500 is up nearly 20% over the past 12 months, despite geopolitical tensions and global trade uncertainty driven by Washington’s tariff policies. Gains have been strongest in tech, buoyed by optimism over future AI profits.

While Big Tech continues to deliver, with multibillion-dollar AI investments and massive infrastructure buildouts now routine, concerns are growing over a slowing US economy, compounded by limited data during the government shutdown. Once fresh figures emerge, they could rattle investors.

AI enthusiasm is most evident in Nvidia’s extraordinary stock gains and soaring valuation. The company is central to the tech revolution as its graphics processing units (GPUs) are essential for AI computing.

Nvidia’s shares have surged over 3,000% since early 2020, recently making it the world’s most valuable public company. Between July and October alone, it gained $1tn (€870bn) in market capitalisation — roughly equal to Switzerland’s annual GDP. Its stock trades at around 45 times projected earnings for the current fiscal year.

Derren Nathan, head of equity research at Hargreaves Lansdown, said: “Much of this growth is backed by real financial progress, and despite the massive nominal increase in value, relative valuations don’t look overstretched.”

Analysts debate whether the current market mirrors the dot-com bubble of 2000. Nathan notes that many tech companies that failed back then never reached profitability, unlike today’s giants, which generate strong revenues and profits, with robust demand for their products.

Ben Barringer, global head of technology research at Quilter Cheviot, added: “With governments investing heavily in AI infrastructure and rate cuts likely on the horizon, the sector has solid foundations. It is an expensive market, but not necessarily a screaming bubble. Momentum is hard to sustain, and not every company will thrive.”

BCA Research sees a bubble forming, though not one set to burst immediately. Savary said such bubbles historically peak when firms begin relying on external financing for large projects.

Capital expenditures, meaning investments in assets intended to drive future growth, have jumped from 35% to 70% of operating cash flow for hyperscalers, according to Savary. Hyperscalers are tech firms such as Microsoft, Google, and Meta that run massive cloud computing networks.

“The share of operating earnings is likely to move above 100% before we hit the peak,” Savary added. In other words, these firms may soon be investing more than they earn from operations.
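
A minimal sketch of that gauge, with hypothetical cash-flow numbers (only the 35%, 70% and above-100% thresholds come from Savary’s comments):

```python
# Capex as a share of operating cash flow: Savary's bubble gauge.
# A ratio above 1.0 means a firm is investing more than its operations
# generate, pushing it toward external financing (debt, partnerships).
def capex_share(capex: float, operating_cash_flow: float) -> float:
    return capex / operating_cash_flow

# Hypothetical hyperscaler with $100bn of annual operating cash flow:
for capex in (35e9, 70e9, 110e9):
    ratio = capex_share(capex, 100e9)
    flag = "external financing likely" if ratio > 1.0 else "self-funded"
    print(f"capex ${capex / 1e9:.0f}bn -> ratio {ratio:.0%} ({flag})")
```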

Recent examples of Big Tech firms turning to external financing for such moves include Meta’s Hyperion project with Blue Owl Capital and Alphabet’s €3 billion bond issue for AI and cloud expansion.

AI investment growth will be hard to sustain, and Quilter’s Barringer told Euronews: “If CapEx starts to moderate later this year, markets may start to get nervous.”

Other factors to watch include return on invested capital, as well as rising yields and inflation pressures, which could signal a higher cost of capital and a bubble approaching its end.

“But we’re not there yet,” said Savary.

Further concerns and how to hedge against market turbulence

Even as tech companies ride the AI wave, inflated expectations for future profits may prove difficult to meet.

“The sceptics’ main problem may not be with AI’s potential itself, but with the valuations investors are paying for that potential and the speed at which they expect it to materialise,” said AJ Bell investment director Russ Mould.

A recent BCA report reflects mounting reasons to question the AI narrative, but the technology “remains a potent force”, the group said.

If investor optimism does slow, “a sharp correction in tech could still have ripple effects across broader markets, given the sector’s dominant weight in global indices,” Barringer said. He added that other regions and asset classes, such as bonds and commodities, would be less directly affected and could provide an important balance during a downturn.

According to Emma Wall, chief investment strategist at Hargreaves Lansdown, “investors should use this opportunity to crystallise impressive gains and diversify their portfolios to include a range of sectors, geographies and asset classes — adding resilience to portfolios. The gold price tipping up is screaming a warning again — a siren that this rally will not last.”


Automating Oppression: How AI Firms and Governments Rewire Democracy

Authors: Christopher Jackson and Aaron Spitler*

Digital technologies, particularly AI, are accelerating democratic backsliding and revitalizing authoritarian governments. AI-focused companies have been forming close partnerships with government actors, often in ways that undermine democratic norms. Around the world, private firms are supplying or co-designing technologies that enhance mass surveillance, predictive policing, propaganda campaigns, and online censorship. In places like China, Russia, and Egypt, a blurring of boundaries between the state and the technology industry has led to serious consequences. This collusion has undercut privacy rights, stifled civil society, and diminished public accountability.

This dynamic is now playing out in the United States. Companies like Palantir and Paragon Solutions are providing government entities with powerful AI tools and analytics platforms, often under opaque contracts. In September, U.S. President Donald Trump approved the sale of TikTok to U.S. private entities friendly with the administration. Unchecked public-private integration within the technology industry poses serious risks for democratic societies, namely that it hands increased power to unaccountable actors. This article examines case studies of how these emerging alliances are enabling authoritarian practices, and what they might mean for the future of democratic societies.

Russia: Manipulating Digital Tools

In Russia, democratic norms under Vladimir Putin have eroded while Russian tech companies work hand in glove with state authorities. Sberbank, the country’s largest financial institution, illustrates this long-running trend with its development of Kandinsky 2.1, an AI-powered text-to-image tool.

Despite the quality of its outputs compared to rivals like DALL-E, the tool came under fire in 2023 from veteran lawmaker Sergey Mironov, who argued that it generated images that tarnished Russia’s image. He went on to charge that Kandinsky 2.1 was designed by “unfriendly states waging an informational and mental war” against the country.

Not long after, some in the tech space noticed that Kandinsky 2.1’s outputs changed. For instance, while the tool previously churned out images of zombies when prompted with “Z Patriot,” users noted that it now repeatedly produced pictures of hyper-masculine figures. Critics claim that this alteration not only represented an overt manipulation of the technology itself but also an attempt to curry favor with those in the government.

This episode shows how AI-powered tools are being specifically tailored to serve the needs of authorities. The modifications made to the model transformed it into an invaluable resource the government could use to amplify its messaging. As a result, users are no longer likely to see Kandinsky 2.1 as a tool for creativity, particularly if its outputs remain blatantly skewed. Developers in countries like Russia may look to this case for inspiration on how to succeed in restrictive political contexts.

United States: Supercharging Mass Surveillance

AI-centric firms in the United States have also taken note. Palantir Technologies stands as the most prominent example of how private technology firms can deepen government surveillance capabilities in ways that test the limits of democratic accountability. The firm, established in the wake of 9/11, has expanded its domestic footprint through lucrative contracts with local police departments and, most notably, Immigration and Customs Enforcement (ICE).

Investigations reveal that Palantir’s software enables ICE agents to compile and cross-reference vast amounts of personal data, from Department of Motor Vehicle (DMV) records and employment information to social media activity and utility bills. This capability gives the government a unique opportunity to build detailed profiles on individuals and their community networks. This has helped facilitate deportations and raids on immigrant communities. Critics argue that Palantir’s tools create a dragnet that vastly expands state power, all while shielding the company and its government clients from public oversight.

Beyond immigration enforcement, Palantir’s Gotham platform has been adopted by police departments for predictive policing initiatives, which attempt to forecast locations and suspects for crimes. Civil liberties groups have warned that such uses reinforce systemic biases by encoding discriminatory policing practices into algorithmic decision-making. Predictive policing algorithms inherit bias because they rely on historical data shaped by discriminatory over-policing of Black communities, among others. Scholars of “surveillance capitalism” also note that these partnerships normalize the commodification of personal data for state security purposes.

The deeper concern lies in how this private-public nexus erodes societal trust and transparency. Unlike government agencies bound by Freedom of Information Act (FOIA) requirements, companies like Palantir operate under corporate secrecy, limiting democratic oversight of technologies that profoundly affect civil rights. In this sense, the Palantir case illustrates how authoritarian-style practices, combined with technological breakthroughs, can be incubated within democratic societies and later contribute to their overall decline.

Challenging Anti-Democratic Alliances

The deepening collaboration between AI firms and authorities in developing repressive technologies is alarming. Across the globe, these partnerships have flourished, often to the detriment of average citizens. The examples of Russia and the United States underline how AI firms have been willing and able to work with governments engaging in repression when convenient, leaving the public in the lurch.

Advocates for democracy must educate themselves on how to combat the misuse of AI. Leaders in civil society, for example, could build up their technical knowledge as a starting point. Capacity-building may also have the bonus of enabling pro-democracy groups to create their own AI solutions that support civic accountability actions. Activities like these may provide a counterbalance to corporate-state collusion that places citizens at a disadvantage. It may also help ensure that AI tools are designed in ways that strengthen democracies, not undermine them.

*Aaron Spitler is a researcher whose interests lie at the intersection of human rights, democratic governance, and digital technologies. He has worked with numerous organizations in this space, from the International Telecommunication Union (ITU) to the International Republican Institute (IRI). He is passionate about ensuring technology can be a force for good. You can reach him on LinkedIn.


The AI That Maps the Floods: How SatGPT is Building Asia-Pacific’s Disaster Resilience

In an era of escalating climate disasters, the ability to translate data into life-saving action has never been more critical. For the Asia-Pacific region, the world’s most disaster-prone, this is not an abstract challenge but a daily reality. At the forefront of this battle is the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP), which is leveraging artificial intelligence to close the gap between risk knowledge and on-the-ground resilience. In this exclusive Q&A, Kareff May Rafisura, Economic Affairs Officer at the ICT and Disaster Risk Reduction Division of ESCAP, provides a clear-eyed look at its innovative tool, SatGPT, and how it is changing the game for communities from the remote village to the ministerial office.

1. It’s one thing to see a flood risk map, and another to break ground on a new levee. Could you walk us through how a local official might use SatGPT to confidently decide where to actually build?

Kareff May Rafisura, Economic Affairs Officer at the ICT and Disaster Risk Reduction Division of ESCAP: First, it’s worth noting that there’s a growing rethink within the science and policy communities about the long-term benefits and trade-offs of constructing artificial levees.

Going back to your question, understanding an area’s flood history is key to making smart infrastructure decisions. You wouldn’t build a levee on natural floodplains, for example. Without risk knowledge, levees might not protect communities effectively and could even cause problems downstream or in ecologically sensitive areas. SatGPT offers a rapid mapping service that helps local officials make risk-informed decisions. It significantly reduces the time and cost traditionally required to assess flood characteristics, such as frequency, spatial extent, and impacts, and converts that data into actionable information. This information is critical for decisionmakers who must weigh it alongside economic, social, and environmental considerations when determining whether, and where, to build a levee.

2. We often hear about getting tech “to the last mile.” Picture a rural community leader with a simple smartphone. How does SatGPT’s insight practically reach and help them make a life-saving decision?

Kareff: SatGPT’s strength lies in enhancing historical risk knowledge. It’s not designed to predict the next disaster, but rather to help communities prepare more effectively for it. For instance, when a rural leader needs to decide whether to evacuate ahead of a flood, she will still rely on early warnings from national meteorological services. What SatGPT can do is support smarter ex-ante planning—so that when early warning information arrives, the community is ready to respond quickly. This includes decisions on where to build shelters, how to lay out evacuation routes, and where to preposition relief supplies. These are all critical elements that must be in place to help avert disasters, as consistently demonstrated in the cyclone response histories of India and Bangladesh.

3. Floods are an urgent threat, but what about slower crises like droughts? Is the vision for SatGPT to eventually help with these less visible, but equally devastating, disasters?

Kareff: ESCAP coordinates the long-standing Regional Drought Mechanism, which has been supporting drought-prone countries in gaining access to satellite data, products, tools, and technical expertise—everything they need to conduct drought monitoring and impact assessments more effectively. Our support goes beyond making data available—we work with countries and partners to strengthen institutions and capacities, converting these data into actionable analytics and insights. We are currently working with three Central Asian countries in establishing their own Earth observation-based agricultural drought monitoring systems.

4. AI is powerful, but it can sometimes reflect our own blind spots. How are you ensuring SatGPT doesn’t accidentally worsen inequality by overlooking the most vulnerable communities in its models?

Kareff: You raised a valid concern. That’s why in our capacity development work, our participants combine SatGPT’s flood mapping with socio-economic data to pinpoint who’s most at risk and where. They work on use cases that unpack the exposure of essential services like hospitals and water treatment facilities. When these critical infrastructures fail, it’s the poorest who pay the highest price. That’s why it’s vital to understand the hazards that threaten them.

5. Governments have tight budgets. If you were making the pitch to a Finance Minister, what’s the most compelling argument for investing in SatGPT now versus spending on recovery later?

Kareff: Investing in reducing disaster risk – which involves measures taken before disasters occur to reduce vulnerability and enhance resilience (e.g., early warning systems, resilient infrastructure, land-use planning) – is far more cost-effective than recovery. Every dollar invested in disaster risk reduction can save multiple dollars in future losses. While the benefits are context-specific, a recent multi-country study found that for every $1 invested, the return can be as high as $10.50.
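
As a purely hypothetical illustration of that benefit-cost ratio (only the $10.50-per-dollar figure comes from the study cited above; the investment amount below is invented):

```python
# Benefit-cost ratio (BCR) for disaster risk reduction: dollars of future
# losses averted per dollar invested. The 10.5 figure is the upper bound
# from the multi-country study cited above and is context-specific.
def averted_losses(investment_usd: float, bcr: float = 10.5) -> float:
    return investment_usd * bcr

# A hypothetical $1m risk-reduction budget could avert up to $10.5m in losses.
print(f"${averted_losses(1_000_000):,.0f}")  # $10,500,000
```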

6. The region is innovating fast, with countries like Indonesia and Thailand building their own systems. How does SatGPT aim to be a good teammate and connect with these national efforts, rather than just adding another tool to the pile?

Kareff: That’s a good point. And beyond technological innovation, we’re also seeing progress in policy and institutional innovations being put in place. Our intention is not to replace national systems, but to show what’s possible when you make risk knowledge accessible and actionable. We work closely with our national counterparts with a focus on integrating SatGPT insights into existing workflows and systems, not reinventing them.

7. Training young professionals is key. Beyond the technical skills, what’s the most important lesson you hope they take away about using this technology responsibly?

Kareff: I’m glad you recognize that today’s most pressing need goes beyond technical expertise. That’s precisely why our technical capacity-building activities are held alongside youth forums to provide a platform for young people to engage in meaningful conversations around values and motivations. As stakeholders, we all share the responsibility of upholding safe, secure, and trustworthy artificial intelligence systems to support sustainable development.

8. Looking ahead a year, what would a “win” for SatGPT look like on the ground? Is it a specific number of communities better protected, or a faster warning time?

Kareff: Forecasting and enhancing forecast lead times remain the responsibility of mandated early warning agencies. SatGPT is well-positioned to support efforts to protect more communities. By enhancing the historical understanding of floods, it can help improve the accuracy of early warning information, help communities proactively plan their response, and reduce disaster risk ex-ante. In that sense, an effective SatGPT roll-out would deliver gains in both space and time: more communities warned, with improved lead times for mitigative response and more reliable historical data for granular risk characterization.

9. The document mentions turning the Jakarta Declaration into action. From your vantage point, what’s the biggest spark of progress you’ve seen so far?

Kareff: One of the most promising sparks of progress has been the strengthened regional cooperation aimed at enhancing the capacity of countries—especially the countries in special situations—to overcome barriers to accessing the benefits of innovative geospatial applications. With the support of ESCAP members, we are implementing field projects, providing capacity-building and technical assistance, facilitating expert exchange, and knowledge sharing across more than a dozen countries. These efforts are helping to develop space-based solutions from the ground up to tackle sustainable development challenges such as urban poverty, air pollution, droughts, floods, and crop biodiversity loss.

10. Finally, behind all the data and code, you mention this is about protecting lives. Has working on SatGPT given you a new perspective on what “resilience” truly means for a family facing a flood?

Kareff: Having lived and worked for the United Nations in some of the world’s most flood-prone countries, I’ve witnessed first-hand how the lack of historical data can lead to underinvestment in risk reduction. Tools like SatGPT and other digital innovations are not silver bullets, but they help close this gap by converting geospatial data into actionable insights – quickly and more accessibly – to guide communities to prepare and protect lives and livelihoods.

The conversation with Kareff May Rafisura underscores a pivotal shift in disaster risk management: from reactive recovery to intelligent, data-driven preparedness. SatGPT represents more than a technological achievement; it is a practical instrument of empowerment, ensuring that from the finance minister to the rural community leader, the best available knowledge informs the decisions that save lives and safeguard futures. In the fragile balance between human vulnerability and environmental force, such tools are not just helpful, they are essential. The future of resilience in the Asia-Pacific is being written today, not in the aftermath of disaster, but in the proactive, thoughtful application of innovation like SatGPT.


David Bowie and The Simpsons named among top icons and shows that predicted the future

DAVID Bowie and Kate Bush have been named among the top cultural icons who most accurately predicted how we live today, according to research.

A poll of 2,000 adults found George Orwell, Roald Dahl, and even Ross from Friends – who in 1999 predicted AI would be smarter than us by 2030 – made the top 10 list.

The Simpsons is known for eerily predicting future events. Credit: Alamy
David Bowie performing at Boston Garden, Massachusetts, in 1978. Credit: Alamy

Other cultural icons included Captain Kirk – who talked to computers and used foldable communicators and tablets as far back as the 1960s – and Ridley Scott.

The director’s seminal 1982 film Blade Runner is still hailed today as a masterclass in technological foresight.

Meanwhile, the sitcom The Simpsons has a history of uncanny predictions, including Donald Trump’s presidency, the pandemic, a FIFA scandal, and the development of smartwatches.

The research was commissioned by Samsung for its ‘Visionary Hall of Fame’. Rounding off the top 10 are musicians Prince and Bjork, with the former predicting online dating and virtual relationships on his album 1999, released over 40 years ago.


Bjork, meanwhile, foresaw the rise of social media in the 1990s and 2000s, predicting that technology wouldn’t just be functional but would become deeply personal.

Fearne Cotton has teamed up with the brand as part of its Can Your Phone Do This campaign, which highlights the capabilities of Galaxy AI, returning to her chart show roots in a brand-new countdown video that reveals the visionaries on the list.

The broadcaster and author said: “These ten icons didn’t just dream about the future; they made it a reality. It’s incredible to see that the future they envisioned is already here, right at our fingertips.”

The research also found self-driving cars (39 per cent) topped the list of real-world innovations people remember seeing in pop culture before they became a reality.

This was followed by artificial intelligence (39 per cent) and video calling (33 per cent), along with voice assistants (28 per cent) and smart watches (22 per cent).

Those polled were also quizzed on their use of AI apps or assistants, with 24 per cent using these on their phones daily.

Many use them to ask factual questions (43 per cent), compose messages or emails (22 per cent), and edit photos and videos (22 per cent).

For 23 per cent, translating speech or text is among their most used AI functions.

In fact, almost seven in ten (68 per cent) also agreed that today’s AI-powered smartphones feel as though you are carrying the future in your pocket.

Annika Bizon, from Samsung, added: “68 per cent of Brits are amazed that these once-futuristic predictions are now part of everyday life, with over half crediting AI for boosting general knowledge and creativity.

“With Galaxy AI, we’re not just keeping pace with the predictions of modern-day visionaries, we’re actively shaping what comes next.

“We’re turning tomorrow’s possibilities into today’s realities, because when you hold the future in your hand, you’re not just ahead of the curve—you’re defining it.”

Fearne Cotton unveils the Visionary Hall of Fame. Credit: Michael Leckie/PinPep

Top ten cultural icons who saw the future

1. George Orwell
2. The Simpsons
3. David Bowie
4. Captain Kirk from Star Trek
5. Ridley Scott
6. Kate Bush
7. Roald Dahl
8. Ross from Friends
9. Prince
10. Bjork


Humain, a Saudi Startup, Debuts an AI-Native Operating System

Saudi-based AI startup Humain, established by the kingdom’s sovereign wealth fund, plans to launch a new computer operating system called Humain One this week.

This system allows users to give voice commands to perform tasks, aiming to replace traditional icon-based systems like Windows and macOS. CEO Tareq Amin stated that the new approach enables users to speak their intent instead of clicking on icons.

Humain, which started in May and is chaired by Crown Prince Mohammed bin Salman, offers various AI services, including data centers and cloud capabilities.

The company has been testing its operating system for internal use, specifically for payroll and human resources. Additionally, Humain plans to develop around 6 gigawatts of data center capacity but has not specified the locations.

With information from Reuters


Strings attached to bills Newsom signed on antisemitism, AI transparency and other major California policies

Though hailed by some for signing new laws to combat antisemitism in California schools, Gov. Gavin Newsom expressed enough reservations about the bills to urge state lawmakers to make some changes.

Supporters of the legislation, Senate Bill 48 and Assembly Bill 715, said it was needed to protect Jewish students on campus, while opponents argued it was broadly written and would stifle free speech and classroom discussions about current events in the Middle East, including the Israel-Hamas war.

Newsom, when he signed the bills, directed legislators to work quickly on a follow-up measure to address “urgent concerns about unintended consequences.”

The governor made similar requests for nearly a dozen other major bills he signed into law this year, including measures providing safeguards on artificial intelligence and protections for children online, and a ban on law enforcement officers wearing masks, a direct response to federal agents hiding their identities during immigration raids across the state.

Newsom’s addendums provide a glimpse into the sometimes flawed or incomplete process of crafting new laws, at times done hastily at the end of a legislative session, leaving defects or unresolved conflicts to be remedied later.

San Jose State University professor emeritus and political analyst Larry Gerston said governors sometimes go this route when, despite having concerns, they feel the legislation is too urgent to veto.

“I think you are looking at a situation where he thought the issue was sufficiently important and needed to go ahead and get it moving,” he said.

Gerston, however, noted those with a cynical view of politics could argue governors use this tactic as a way to undo or water down legislation that — for various political reasons — they wanted to pass in the moment.

“Depending upon your attitude toward the governor, politics and legislation, [that viewpoint] could be right or wrong,” he said.

One of the authors of the antisemitism bills, Assemblymember Rick Chavez Zbur (D-Los Angeles), said he will put forth another measure next year and continue working with educational organizations and the California Legislative Jewish Caucus to ensure the right balance is struck.

“The assertions that the bill is intended to prevent instruction about controversial topics, including topics related to Israel, is just not accurate,” said Zbur, who introduced AB 715. “We will be making sure that it’s clear that instruction on complicated issues, on controversial issues, that critical education can continue to take place.”

Zbur said he will reexamine a provision requiring the “factual accuracy” of instructional materials.

“One of the things that we’ve agreed to do was focus on making sure that the bill continues to meet its goal, but revisit that factually accurate language to make sure that, for example, you can continue to teach [works of] fiction in the classroom,” he said.

Another new law flagged by Newsom bans local and federal agents from wearing masks or facial coverings during operations.

The governor approved Senate Bill 627 — carried by Sens. Scott Wiener (D-San Francisco) and Jesse Arreguín (D-Berkeley) — last month as a response to the Trump administration’s aggressive immigration raids that are often conducted by masked agents in unmarked cars. Newsom said it was unacceptable for “secret police” to grab people off the streets.

“This bill establishes important transparency and public accountability measures to protect public safety, but it requires follow-up legislation,” Newsom wrote in his signing statement. “Given the importance of the issue, the legislature must craft a bill that prevents unnecessary masking without compromising law enforcement operations.”

Newsom said clarifications about safety gear and additional exemptions for legitimate law enforcement activities were needed.

“I read this bill as permitting the use of motorcycle or other safety helmets, sunglasses, or other standard law enforcement gear not designed or used for the purpose of hiding anyone’s identity, but the follow-up legislation must also remove any uncertainty or ambiguities,” he wrote.

Wiener agreed to revisit the measure.

“I’m committed to working with the Governor’s office to further refine SB 627 early next year to ensure it is as workable as possible for many law enforcement officers working in good faith,” he said.

California is the first state to ban masking for federal law enforcement and the law will likely be challenged in court. The move drew ire from U.S. Department of Homeland Security Secretary Kristi Noem, who called the legislation “despicable” and said forcing officers to reveal their faces increases their risk of being targeted by criminals.

Newsom is also urging legislators to adjust two new tech-related laws from Assemblymember Buffy Wicks (D-Oakland).

Assembly Bill 853, dubbed the California AI Transparency Act, is intended to help people identify content created by artificial intelligence. It requires large online platforms, such as social media sites, to provide accessible provenance data on uploaded content starting in 2027. Provenance data is information about the origin and modification history of online content.

In his signing statement, Newsom called the legislation a “critical step” but said it could interfere with privacy.

“Some stakeholders remain concerned that provisions of the bill, while well-intentioned, present implementation challenges that could lead to unintended consequences, including impairment of user privacy,” he wrote. “I encourage the legislature to enact follow up legislation in 2026, before the law takes effect, to address these technical feasibility issues.”

Assembly Bill 1043 aims to help prevent children from viewing inappropriate content online. It directs operating system providers to allow parents to input their children’s ages when setting up equipment such as laptops or smartphones, and then requires users to be grouped in different age brackets. It gained approval from tech companies including Meta and Google while others raised concerns.

“Streaming services and video game developers contend that this bill’s framework, while well-suited to traditional software applications, does not fit their respective products,” Newsom wrote in his signing statement. “Many of these companies have existing age verification systems in place, addressing complexities such as multi-user accounts shared by a family and user profiles utilized across multiple devices.”

The governor urged lawmakers to address those concerns before the law is set to take effect in 2027.

Wicks was unavailable for comment.


Kenny Loggins slams Donald Trump for using his ‘Top Gun’ song ‘Danger Zone’ in AI feces video

Published on 21/10/2025 – 9:22 GMT+2

Kenny Loggins has reacted to Donald Trump using his song ‘Danger Zone’ in the president’s “disgusting” AI-generated video showing himself wearing a crown, flying a “KING TRUMP” fighter jet and bombing a crowd of protesters with feces.

The video was published as a response to the historic “No Kings” protests which took place across the US on Saturday.

The American singer-songwriter recorded the hit song for the soundtrack of the 1986 Tom Cruise movie Top Gun. He has now called for Trump’s video to be taken down on copyright grounds.

In a statement to Variety, Loggins said: “This is an unauthorized use of my performance of ‘Danger Zone.’ Nobody asked me for my permission, which I would have denied, and I request that my recording on this video is removed immediately.”

He continued: “I can’t imagine why anybody would want their music used or associated with something created with the sole purpose of dividing us. Too many people are trying to tear us apart, and we need to find new ways to come together.”

“We’re all Americans, and we’re all patriotic. There is no ‘us and them’ — that’s not who we are, nor is it what we should be. It’s all of us. We’re in this together, and it is my hope that we can embrace music as a way of celebrating and uniting each and every one of us.”

Well put – especially considering the video has provoked widespread outrage online, with many expressing dismay over the way it shows Trump’s clear disdain for people exercising their right to protest.

Social media users accused Trump of having “the maturity and decorum of a 12-year-old boy”, while others commented: “Can’t believe that’s a president of a country.”

Many posts also pointed out that Trump’s “childish” and “disgusting” AI post revealed a transparent representation of his genuine feelings toward the American people. “It tells you everything you need to know about what he thinks about the people of America who are, in fact, America,” one person commented, while another added: “Him taking a dump on the country is the most honest thing he’s ever posted.”

This is far from the first time that Trump and his administration have used artists’ work without authorisation.

There is an extensive list of musicians who have objected to Trump’s unauthorised use of their songs. These include ABBA, The Rolling Stones, Bruce Springsteen, Rihanna, Neil Young, R.E.M., Woodkid, Beyoncé and Semisonic.

Sinead O’Connor’s estate previously issued Trump with cease-and-desist orders, while Isaac Hayes’ estate sued him for 134 counts of copyright infringement.

Céline Dion also condemned the use of her song from the Oscar-winning film Titanic, ‘My Heart Will Go On’, which was used at one of Trump’s rallies. Dion’s team questioned the song choice, writing: “And really, THAT song?”

Another band which added their name to the ever-growing list of artists who have sued Trump over the illegal use of their songs in campaign videos was The White Stripes. Last year, the rock band highlighted the “flagrant misappropriation” of their hit song ‘Seven Nation Army’. Jack White captioned a copy of the legal complaint in an Instagram post with: “This machine sues fascists.”

The most recent example to date is Metallica, who forced the US government to withdraw a social media video that used their song ‘Enter Sandman’ without authorisation.

This weekend’s “No Kings” protests saw millions of Americans marching against Trump’s administration, opposing the president’s “authoritarian power grab.”

The 18 October protest, the third mass mobilisation since Trump’s return to the White House, drew nearly 7 million people across all 50 states, according to organisers. This figure would make it the largest single-day mobilisation against a US president in modern history.


Global Markets Rally on China Growth Surprise and AI Earnings Hopes

Global stock markets kicked off the week on a strong note after data showed China’s economy performing better than expected despite ongoing trade tensions with the United States. Investor optimism was also buoyed by expectations of Japanese stimulus and a strong outlook for artificial intelligence (AI) companies during the U.S. earnings season.

Why It Matters

China’s stronger-than-forecast GDP growth (1.1% in Q3) and industrial output gains (6.5%) helped calm fears about a global slowdown triggered by U.S.-China trade frictions.
Meanwhile, optimism surrounding AI-driven tech earnings, particularly Nvidia’s, continued to lift global equities, reinforcing investor belief in the sector’s long-term profitability.
At the same time, expectations of further U.S. Federal Reserve rate cuts kept global borrowing costs lower and strengthened risk appetite.

Asia: Japan’s Nikkei surged 2.8% to a record high amid hopes of stimulus under likely new Prime Minister Sanae Takaichi.

Europe: The Stoxx 600 rose 0.7% in early trade.

U.S.: Futures pointed to gains of 0.4–0.5% for the S&P 500 and Nasdaq.

Bonds & FX: Treasury yields dipped to 4.02%, while the euro climbed to $1.1662 on a softer dollar.

Commodities: Gold stayed elevated around $4,266/oz, reflecting persistent geopolitical caution, while Brent crude slipped 0.4% to $61.02 on OPEC+ supply signals.

Jason da Silva (Arbuthnot Latham): “There’s still enough scope for healthy returns from big tech; I’m not selling the AI theme yet.”

Kevin Thozet (Carmignac): Warned of “froth” in some AI stocks but said it’s too soon to exit the trade.

Lorenzo Portelli (Amundi): Predicted gold could rise to $5,000 as central banks diversify reserves and the dollar weakens.

What’s Next

Looking ahead, investor attention will pivot to major U.S. corporate earnings that could shape the market’s next moves. Reports from Tesla, Netflix, Procter & Gamble, and Coca-Cola will offer a clearer picture of consumer demand and how well companies are weathering tariffs and inflation pressures. On the policy front, traders expect the Federal Reserve to deliver two more rate cuts by December, a move that could further support equities, weaken the dollar, and sustain global liquidity. However, the upcoming U.S.–China tariff truce deadline on November 10 looms large, and any breakdown in talks could quickly reverse market optimism. Investors will also watch for fresh data on inflation and labor markets to gauge how long central banks can maintain their dovish stance.

With information from Reuters.


I rang up the AI travel agent that’s so good people have asked it out on dates

Tim Hentschel, CEO of HotelPlanner.com and HotelPlanner.ai, says his virtual travel agents are already taking 50,000 real calls a day and will hit the 100,000 mark before the year is out

“I’m sorry. That was a generic answer. Let me come up with something a little more meaningful.”

There’s something quite novel about a robot apologising to you. Cassandra, one of HotelPlanner.ai’s American representatives, was as quick on the contrition as it was on suggesting hotels in Tupelo, Mississippi, that I might want to stay in.

I chucked a few requirements at Cassandra and it quickly found me a place that ticked all of the boxes. Cassandra even obliged when I asked for a description of the rooms, and then apologised when I cut it off, demanding that those descriptions be a little less lifeless.

Try as I might, my efforts to wind Cassandra up failed. It kept delivering helpful answers and even extra titbits of information without being prompted, all with an upbeat tone of voice against a backdrop of fake call centre sounds.


Robo-travel agents such as Cassandra are currently fielding calls on HotelPlanner.ai, where you can choose the language and gender of your call handler before taking it on a test run. It’s working on a beta trial basis in the UK at the moment, but in the US, the world of AI travel agents is very much here already and booming.

Tim Hentschel, CEO of HotelPlanner.com and HotelPlanner.ai, claims that 10% of HotelPlanner.com’s bookings are already made by bots.

That’s no small potatoes, given the company’s expected $1.8 billion gross revenues this year and the 1.5 million properties it can book.

Tim is a huge believer in the power of AI to transform customer service in the travel industry and argues that HotelPlanner.com’s agents are trained to be “more helpful and efficient” than humans and already able to “compete head-to-head” when it comes to customer satisfaction and sales.

While Tim says the company has no desire to stop using humans to answer calls, the AI side of the business is growing quickly and could one day field 100% of all calls. That’s partly because they’ve learned from the best: the large language model bots were trained using eight million human phone calls.

What’s arguably a little unnerving is that customers aren’t told they’re speaking to a robot, and they often don’t realise.

“We only tell customers if they’re speaking to an AI agent if they ask. Sometimes it has come up as customers have asked them out on dates,” Tim told the Mirror.

“We find AI works best with older customers as it’s extremely helpful and patient—sometimes to a fault, as the AI doesn’t understand the money value of time. The question is, if you want a service, who can perform it better? A human or AI with unlimited information?

“The agents have accents. They make an attempt at humour. The robots are programmed always to be helpful, it has nothing but kindness.”

Whether the idea of an AI customer service bot excites or appalls you, their arrival in increasing numbers seems inevitable. Just this week, OpenAI announced it had struck a deal with travel giant Expedia, paving the way for holidaymakers to book trips directly through the platform.

At the Travel and Tourism Summit in Rome at the end of September, Jane Sun, CEO of Trip.com Group, predicted that AI would “double the travel market” – because “people will work three days due to AI, and take much longer holidays.”

Jane went on to describe how she thought AI would improve customer service.

“We must make sure our customers are very well looked after with good customer support. Within 30 seconds, an AI in your native language will be able to speak to you over the phone. Within 30 seconds a call centre employee will be able to answer your call. Within two minutes of a crisis, our team will be able to call them to get them to safety. This will be powered by AI,” she said.

How do you feel about AI travel agents? Let us know in the comments below or by emailing [email protected].
