Brazil gives X five days to stop Grok from producing sexual content

Feb. 13 (UPI) — Brazilian authorities have ordered Elon Musk’s platform X to implement measures preventing its artificial intelligence tool, Grok, from generating sexualized content involving minors or involving adults who have not consented.

The case is being reviewed by Brazil’s National Data Protection Authority, the Federal Public Prosecutor’s Office and the National Consumer Secretariat. The agencies contend that, despite prior warnings, the social network failed to show effective action to curb these practices.

“X must immediately implement appropriate measures to prevent the production, through Grok, of sexualized or eroticized content involving children and adolescents, as well as adults who have not expressed their consent,” Brazilian authorities said in a joint statement.

The agencies gave the platform five days to comply with the determination, under penalty of fines and legal action.

On Jan. 20, the agencies recommended that X establish, within up to 30 days, technical and operational procedures to identify, review and remove sexualized content generated by Grok that remained available on the social network. That deadline ran alongside a requirement to take immediate action to prevent new posts.

The company said at the time that it had removed thousands of posts and suspended hundreds of accounts for violating its policies.

However, Brazilian authorities said the information provided “was not accompanied by concrete evidence, technical reports or monitoring mechanisms that would allow its effectiveness to be assessed.”

Tests conducted by technical teams indicated that the platform still allows the generation and sharing of sexualized or eroticized images of minors and adults without authorization.

Since late last year, thousands of complaints in several countries have alleged that Grok responds to requests to alter photographs posted by women on social media, making them appear nude or in bikinis. At least two Brazilian women have reported being victims of these deepfakes.

The Federal Public Prosecutor’s Office ordered X to submit monthly reports on the actions taken to prevent the production of these images and pointed to a lack of transparency in the company’s response.

The National Data Protection Authority also required the platform to detail the measures implemented and provide evidence that can be verified by authorities.

The new five-business-day deadline requires the company to explain what specific actions it will take to prevent Grok from creating this type of content. The official document does not specify when the clock began.

If the order is not met, the company could face fines and other administrative sanctions. Those responsible could also be prosecuted for disobedience.

The Federal Public Prosecutor’s Office said it could open broader investigations that may lead to legal action to seek damages for harm caused by the creation and dissemination of these images.

The case adds to investigations in Europe. On Feb. 3, the Paris Prosecutor’s Office searched Grok’s offices as part of a preliminary investigation into the alleged dissemination of child pornography and deepfakes. Authorities in the United Kingdom and the European Union are also examining Grok’s use to manipulate images.

Grok is developed by xAI, Musk’s artificial intelligence company, which also controls X. Earlier this month, the entrepreneur announced the merger of xAI with SpaceX, his aerospace company. SpaceX is expected to debut on the New York Stock Exchange later this year.

Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say

The Trump administration has not shied away from sharing AI-generated imagery online, embracing cartoonlike visuals and memes and promoting them on official White House channels.

But an edited — and realistic — image of civil rights attorney Nekima Levy Armstrong in tears after being arrested is raising new alarms about how the administration is blurring the lines between what is real and what is fake.

Department of Homeland Security Secretary Kristi Noem’s account posted the original image from Levy Armstrong’s arrest before the official White House account posted an altered image that showed her crying. The doctored picture is part of a deluge of AI-edited imagery that has been shared across the political spectrum since the fatal shootings of U.S. citizens Renee Good and Alex Pretti by U.S. Border Patrol officers in Minneapolis.

However, the White House’s use of artificial intelligence has troubled misinformation experts who fear the spreading of AI-generated or AI-edited images erodes public perception of the truth and sows distrust.

In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Kaelan Dorr writing on X that the “memes will continue.” White House Deputy Press Secretary Abigail Jackson also shared a post mocking the criticism.

David Rand, a professor of information science at Cornell University, says calling the altered image a meme “certainly seems like an attempt to cast it as a joke or humorous post, like their prior cartoons. This presumably aims to shield them from criticism for posting manipulated media.” He said the purpose of sharing the altered arrest image seems “much more ambiguous” than the cartoonish images the administration has shared in the past.

Memes have always carried layered messages that are funny or informative to people who understand them, but indecipherable to outsiders. AI-enhanced or AI-edited imagery is just the latest tool the White House uses to engage the segment of Trump’s base that spends a lot of time online, said Zach Henry, a Republican communications consultant who founded Total Virality, an influencer marketing firm.

“People who are terminally online will see it and instantly recognize it as a meme,” he said. “Your grandparents may see it and not understand the meme, but because it looks real, it leads them to ask their kids or grandkids about it.”

All the better if it prompts a fierce reaction, which helps it go viral, said Henry, who generally praised the work of the White House’s social media team.

The creation and dissemination of altered images, especially when they are shared by credible sources, “crystallizes an idea of what’s happening, instead of showing what is actually happening,” said Michael A. Spikes, a professor at Northwestern University and news media literacy researcher.

“The government should be a place where you can trust the information, where you can say it’s accurate, because they have a responsibility to do so,” he said. “By sharing this kind of content, and creating this kind of content … it is eroding the trust — even though I’m always kind of skeptical of the term trust — but the trust we should have in our federal government to give us accurate, verified information. It’s a real loss, and it really worries me a lot.”

Spikes said he already sees the “institutional crises” around distrust in news organizations and higher education, and feels this behavior from official channels inflames those issues.

Ramesh Srinivasan, a professor at UCLA and the host of the “Utopias” podcast, said many people are now questioning where they can turn to for “trustable information.” “AI systems are only going to exacerbate, amplify and accelerate these problems of an absence of trust, an absence of even understanding what might be considered reality or truth or evidence,” he said.

Srinivasan said he feels the White House and other officials sharing AI-generated content not only invites everyday people to continue to post similar content but also grants permission to others who are in positions of credibility and power, such as policymakers, to share unlabeled synthetic content. He added that given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content — which AI generation tools can create with ease — “we’ve got a big, big set of challenges on our hands.”

An influx of AI-generated videos related to Immigration and Customs Enforcement action, protests and interactions with citizens has already been proliferating on social media. After Good was shot by an ICE officer while she was in her car, several AI-generated videos began circulating of women driving away from ICE officers who told them to stop. There are also many fabricated videos circulating of immigration raids and of people confronting ICE officers, often yelling at them or throwing food in their faces.

Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the bulk of these videos are likely coming from accounts that are “engagement farming,” or looking to capitalize on clicks by generating content with popular keywords and search terms such as ICE. But he also said the videos are getting views from people who oppose ICE and DHS and could be watching them as “fan fiction,” or engaging in “wishful thinking,” hoping that they’re seeing real pushback against the organizations and their officers.

Still, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they would know “what’s real or not when it actually matters, like when the stakes are a lot higher.”

Even when there are blatant signs of AI generation, like street signs with gibberish on them or other obvious errors, only in the “best-case scenario” would a viewer be savvy enough or be paying enough attention to register the use of AI.

This issue is, of course, not limited to news surrounding immigration enforcement and protests. Fabricated and misrepresented images following the capture of deposed Venezuelan leader Nicolás Maduro exploded online earlier this month. Experts, including Carrasco, think the spread of AI-generated political content will only become more commonplace.

Carrasco believes that the widespread implementation of a watermarking system that embeds information about the origin of a piece of media into its metadata layer could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think it will be widely adopted for at least another year.
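To make the idea concrete, here is a minimal sketch of the kind of provenance record such a system attaches to a file. It is not the C2PA specification or its real tooling; the field names and helper functions are hypothetical, standing in only for the underlying idea the article describes: recording who created a file and a fingerprint of its bytes, so that later manipulation can be detected.

```python
# Toy illustration of media provenance, NOT the actual C2PA format or API.
import hashlib
import json
from pathlib import Path


def make_provenance_manifest(media_path: str, creator: str, tool: str) -> dict:
    """Build a toy provenance record: who made the file, with what tool,
    plus a hash of its bytes so later edits can be detected."""
    data = Path(media_path).read_bytes()
    return {
        "asset": Path(media_path).name,
        "creator": creator,            # hypothetical field names
        "generator_tool": tool,
        "sha256": hashlib.sha256(data).hexdigest(),
    }


def verify(media_path: str, manifest: dict) -> bool:
    """Check whether the file still matches the hash recorded at creation."""
    data = Path(media_path).read_bytes()
    return hashlib.sha256(data).hexdigest() == manifest["sha256"]


if __name__ == "__main__":
    Path("photo.jpg").write_bytes(b"original image bytes")   # stand-in file
    manifest = make_provenance_manifest("photo.jpg", "Example Newsroom", "camera")
    Path("photo.jpg.manifest.json").write_text(json.dumps(manifest, indent=2))
    print(verify("photo.jpg", manifest))   # True: bytes unchanged
    Path("photo.jpg").write_bytes(b"edited image bytes")
    print(verify("photo.jpg", manifest))   # False: edit detected
```

A real provenance system such as C2PA goes further: the manifest is cryptographically signed and embedded in the file’s own metadata, which is the “metadata layer” Carrasco refers to, so viewers and platforms can check a picture’s origin without relying on a separate file.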

“It’s going to be an issue forever now,” he said. “I don’t think people understand how bad this is.”

Huamani writes for the Associated Press. AP writers Jonathan J. Cooper in Phoenix and Barbara Ortutay in San Francisco contributed to this report.
