Pep Guardiola: Man City boss ‘hurt’ by images of child victims of conflict zones

On Monday, BBC News reported figures from the Home Office stating a total of 933 migrants crossed the Channel in small boats in January.

Guardiola said: “The people who have to do that, run away from their countries, go in the sea and then go on a boat to get rescued – don’t ask if he is right or wrong, rescue him. It is about a human being.

“After we can agree, criticise but everyone is right, everyone has an idea and you have to express it. People are dying, you have to help him. Protecting the human being and human life is the only thing we have.”

Last month, two US citizens were shot dead by federal agents carrying out US President Donald Trump’s immigration crackdown in Minneapolis.

Fan group Football Supporters Europe (FSE) says it is “extremely concerned by the ongoing militarisation of police forces in the US” before this summer’s World Cup in North America.

Guardiola added: “When I see the images, I am sorry it hurts. That is why in every position I can help speak up to be a better society, I will try and will be there. All the time. It is for my kids, my families, for you.

“From my point of view, the justice? You have to talk. Otherwise it will just move on. Look what happened in the United States of America, Renee Good and Alex Pretti have been killed. Tell me how you can defend that?

“There is not a perfect society, nowhere is perfect, I am not perfect, we have to work to be better.”

Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say

The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.

But an edited — and realistic — image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.

Department of Homeland Security Secretary Kristi Noem’s account posted the original image from Levy Armstrong’s arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of U.S. citizens Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.

However, the White House’s use of artificial intelligence has troubled misinformation experts who fear the spreading of AI-generated or AI-edited images erodes public perception of the truth and sows distrust.

In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Kaelan Dorr writing on X that the “memes will continue.” White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.

David Rand, a professor of information science at Cornell University, says calling the altered image a meme “certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media.” He said the purpose of sharing the altered arrest image seems “much more ambiguous” than the cartoonish images the administration has shared in the past.

Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or AI-edited imagery is just the latest tool the White House uses to engage the segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.

“People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”

All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House’s social media team.

The creation and dissemination of altered images, especially when they are shared by credible sources, “crystallizes an idea of what’s happening, instead of showing what is actually happening,” said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.

“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” he said. “By sharing this kind of content, and creating this kind of content … it is eroding the trust — even though I’m always kind of skeptical of the term trust — but the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”

Spikes said he already sees the “institutional crises” around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.

Ramesh Srinivasan, a professor at UCLA and the host of the “Utopias” podcast, said many people are now questioning where they can turn to for “trustable information.” “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” he said.

Srinivasan said he feels the White House and other officials sharing AI-generated content not only invites everyday people to continue to post similar content but also grants permission to others who are in positions of credibility and power, such as policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content — which AI generation tools can create with ease — “we’ve got a big, big set of challenges on our hands.”

AI-generated videos related to Immigration and Customs Enforcement (ICE) actions, protests and interactions with citizens have already been proliferating on social media. After Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.

Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are “engagement farming,” or looking to capitalize on clicks by generating content with popular keywords and search terms such as ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as “fan fiction,” or engaging in “wishful thinking,” hoping that they’re seeing real pushback against the organizations and their officers.

Still, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they would know “what’s real or not when it actually matters, like when the stakes are a lot higher.”

Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the “best-case scenario” would a viewer be savvy enough or be paying enough attention to register the use of AI.

This issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.

Carrasco believes that the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata layer could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think that will become extensively adopted for at least another year.

“It’s going to be an issue forever now,” he said. “I don’t think people understand how bad this is.”

Huamani writes for the Associated Press. AP writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.


F1 pre-season testing: McLaren & Red Bull release images of 2026 cars

Seven of the 11 teams were running on Monday – Red Bull, Mercedes, Racing Bulls, Haas, Alpine and the new Audi and Cadillac teams.

McLaren said last week that they would not take their car on to the track until after day one, in order to maximise design time. Ferrari, after an initial test at their factory last week, had also announced they would not run on Monday.

Aston Martin have said their new car will not be ready until Thursday, and Williams are missing the test entirely after delays to their design and production programme.

No times were issued, and all teams ran into at least some technical issues as they learned about their new cars.

Rival teams were impressed by the amount of mileage the two Red Bull teams managed considering it was the first full day of running with their new in-house engine, which has been developed in conjunction with new partner Ford.

Before the test started, McLaren and Red Bull revealed images of their 2026 cars for the first time.

McLaren, who won the drivers’ and constructors’ championship double last year for the first time since 1998, showed the car in the one-off testing livery it will run in this week.

Red Bull showed studio shots of their car, carefully chosen to disguise key design features.

Mercedes driver George Russell said: “We are pleased with our day, but I’ve also been impressed by several other teams.

“The Red Bull power unit has completed a lot of laps which, given that it’s their first engine they’ve built, means they’ve clearly done a good job.

“Haas also managed a similar amount of running to ourselves, so the Ferrari power unit has also put together plenty of mileage. It’s not quite how it was in 2014! The sport has evolved so much since then and the level, in every single aspect, is so high now.”

Audi have taken over the Sauber team for the German car company’s first entry into F1. They suffered a reliability issue that prevented Gabriel Bortoleto from driving in the afternoon.

Team principal Jonathan Wheatley said: “We had a technical issue, spotted it and decided to switch the car off. Plenty of testing this year and wanted to really understand the problems, see what the weather conditions are and decide whether we run tomorrow.”

Rain is forecast for Tuesday, the second day of the test. Teams are permitted to run on a maximum of three of the available days.

Bortoleto said: “The cars are very different. I feel these are going to be slower but it’s very cool to have the power unit being 50% electric – you go out of the corner and you have so much speed being deployed and you can see how strong the engine is.

“You need to adapt but it’s still a racing car.”

Racing Bulls’ Liam Lawson said: “[The car is] very different to drive and I haven’t got my head around it fully yet. We will keep learning. It feels like there is a lot more we can do as drivers to make a difference, potentially, but right now it’s very early days and it’s very hard to know where we are. For now we are just trying to optimise the car.

“[The power unit] feels good. But it’s very hard to tell – the main thing is reliability at the moment.

“We have done a good amount of laps today and the only issues we had today were safety precautions, not actual issues. But hard to know where we’re at compared to everyone else.”