
Experts call on US Health Secretary RFK Jr to resign over misinformation

Health experts said Kennedy’s ‘repeated efforts to undermine science and public health’ have left Americans ‘less safe’.

More than 20 health groups and medical associations have called on Robert F Kennedy Jr to step down as the United States’ health secretary, accusing him of putting lives at risk by disregarding decades of lifesaving science and reversing medical progress.

In a joint statement published on Wednesday, the groups – including the Infectious Diseases Society of America, the American Public Health Association, and the American Association of Immunologists – said Kennedy is forcing Centers for Disease Control and Prevention (CDC) experts to “turn their back on decades of sound science” to further his agenda.

The groups also accused Kennedy of “repeated efforts to undermine science and public health”, leaving Americans “less safe in a multitude of ways”.

“Our country needs leadership that will promote open, honest dialogue, not disregard decades of lifesaving science, spread misinformation, reverse medical progress and decimate programs that keep us safe,” the statement said.

“We are gravely concerned that American people will needlessly suffer and die as a result of policies that turn away from sound interventions,” it added.

The letter comes after multiple former CDC directors said last week that Kennedy’s decisions are putting Americans’ health at risk, after he fired the agency’s director, Susan Monarez, less than a month after she was sworn in.

White House Deputy Press Secretary Kush Desai said Monarez was not “aligned with” President Donald Trump’s agenda and refused to resign, so the White House terminated her.

Monarez’s lawyers said she had been targeted as she “refused to rubber-stamp unscientific, reckless directives and fire dedicated health experts”.

Her departure coincided with the resignations of at least four other top CDC officials in response to Kennedy’s influence over the agency.

In a social media post on Wednesday, Kennedy said his mission was “to restore the CDC’s focus on infectious disease” and “rebuild trust through transparency and competence”.

Kennedy – who has long been accused of spreading anti-vaccine misinformation – has made sweeping changes to US vaccine policies since being appointed by Trump, causing friction with health officials.

In May, he withdrew federal recommendations for COVID shots for pregnant women and healthy children. In June, he also fired all members of the CDC’s expert vaccine advisory panel and replaced them with hand-picked advisers, including fellow anti-vaccine activists.

In August, he then cancelled nearly $500m in funding for mRNA vaccine research in a move health experts said could make the US much more vulnerable to future outbreaks of respiratory viruses.

Kennedy said the US will shift mRNA funding to other vaccine development technologies that are “safer” and “remain effective”.

The International Vaccine Access Center at Johns Hopkins Bloomberg School of Public Health credits mRNA vaccines with preventing millions of deaths from COVID-19, saying the innovative technology has the potential to treat diseases such as cancer and HIV.

Most recently, on August 20, hundreds of federal health employees wrote to Kennedy imploring him to “stop spreading inaccurate health information” and for him to either resign or be fired.

Signatories accused Kennedy of “sowing public mistrust by questioning the integrity and morality” of the CDC’s workforce, including by calling the public health agency a “cesspool of corruption”.


CDC director: Misinformation has ‘led to deadly consequences’ in Atlanta

The U.S. Centers for Disease Control and Prevention headquarters in Atlanta was attacked by a gunman on Friday. File Photo by Erik S. Lesser/EPA

Aug. 13 (UPI) — The director of the Centers for Disease Control and Prevention warned employees about the dangers of misinformation, four days after a gunman who blamed the COVID-19 vaccine for making him sick opened fire at the agency’s headquarters in Atlanta.

On Tuesday, Susan Monarez met with staffers virtually and then sent a note, obtained by ABC News, to all 10,000 employees nationwide. Staffers at the headquarters have been working remotely since the attack on Friday.

“The dangers of misinformation and its promulgation has now led to deadly consequences,” she wrote. “I will work to restore trust in public health to those who have lost it — through science, evidence and clarity of purpose. I will need your help.”

The comments differed slightly from those she made during her staff meeting, in which she said: “Public health should never be under attack. We know that misinformation can be dangerous.”

She said the health agency can rebuild trust with “rational evidence-based discourse” with “compassion and understanding.”

Dr. Paul Offit, the director of the Vaccine Education Center at Children’s Hospital of Philadelphia, told NBC News: “The irony is her boss is the biggest spreader of misinformation.”

Robert F. Kennedy Jr. is secretary of Health and Human Services, the department that oversees the CDC.

In 2021, during the pandemic, Kennedy described the shot as the “deadliest vaccine ever made” after filing a citizen petition requesting that the Food and Drug Administration end its emergency authorization.

Last week, Kennedy announced that HHS was moving to terminate $500 million in contracts to develop vaccines using mRNA technology, which was used to develop the COVID-19 vaccine in 2020.

“After reviewing the science and consulting top experts at NIH [National Institutes of Health] and FDA, HHS has determined that mRNA technology poses more risks than benefits for these respiratory viruses,” Kennedy said.

The American Medical Association backs mRNA vaccine research and the CDC still says on its website: “During the COVID-19 pandemic, COVID-19 vaccines underwent the most intensive safety analysis in U.S. history.”

Health officials have denounced skepticism of the research, noting the COVID-19 vaccine saved millions of lives in the United States.

“The Covid pandemic showed us what’s possible when science moves fast,” Rick Bright, who directed the Biomedical Advanced Research and Development Authority from 2016 to 2020, told NBC News. “Dismantling that momentum now is like disbanding the fire department because the fire’s out.”

As head of HHS, Kennedy has restricted the fall COVID-19 vaccine recommendations to older adults and those with underlying health conditions. He also doesn’t want children to get the shots.

On Monday, Kennedy toured the CDC campus in Atlanta and met with the widow of DeKalb County Police Officer David Rose, the one person slain in the attack. He toured with Monarez and HHS Deputy Secretary Jim O’Neill.

“He offered his deepest condolences and reaffirmed the agency’s commitment to honoring officer Rose’s bravery, sacrifice and service to the nation,” HHS said.

Since the attack, the union representing CDC workers has condemned the lack of support from top officials.

“This leadership is critical in reinforcing public trust and ensuring that accurate, science-based information prevails,” the union said Sunday. “This condemnation is necessary to help prevent violence against scientists that may be incited by such disinformation.”

The American Federation of Government Employees Local 2883, which represents more than 2,000 CDC workers, said in a statement Sunday that the attack “was not random and it compounds months of mistreatment, neglect and vilification that CDC staff have endured.”

The union also said: “The deliberate targeting of CDC through this violent act is deeply disturbing, completely unacceptable and an attack on every public servant.”

The father of the suspected gunman, 30-year-old Patrick Joseph White, reportedly told authorities he targeted the CDC over health problems he blamed on the COVID-19 vaccine. He said the shot made him depressed and suicidal.

He was fatally shot by police after around 200 bullets struck the six buildings. Five firearms were recovered.

“All indications are that this was an isolated event involving one individual,” Jeff Williams, the deputy secretary of the CDC’s Office of Safety, Security and Asset Management, said during the staff meeting Tuesday.




CDC union calls on Trump officials to condemn vaccine misinformation

The U.S. Centers for Disease Control and Prevention headquarters in Atlanta, Ga. The union representing thousands of CDC workers is calling on the Trump administration to condemn vaccine misinformation after a shooting targeted the headquarters on Friday. File Photo by Erik S. Lesser/EPA-EFE

Aug. 11 (UPI) — The union representing thousands of workers at the U.S. Centers for Disease Control and Prevention is calling on the Trump administration to condemn vaccine disinformation after a gunman killed a police officer in a shooting targeting the CDC headquarters in Georgia.

The suspected gunman behind the Friday shooting was identified Saturday as 30-year-old Patrick Joseph White. He was shot dead by police after opening fire at an Emory Point CVS, with police suspecting he targeted the nearby CDC headquarters over health problems he blamed on the COVID-19 vaccine.

The American Federation of Government Employees Local 2883, which represents more than 2,000 CDC workers, said in a statement Sunday that the attack “was not random and it compounds months of mistreatment, neglect and vilification that CDC staff have endured.”

“The deliberate targeting of CDC through this violent act is deeply disturbing, completely unacceptable and an attack on every public servant,” the union said.

“Early reports indicate the gunman was motivated by vaccine disinformation, which continues to pose a dangerous threat to public health and safety.”

To its members, it said it is advocating for “a clear and unequivocal stance in condemning disinformation” by the CDC and the leadership of Health and Human Services, which is run by vaccine skeptic Robert F. Kennedy Jr.

“This leadership is critical in reinforcing public trust and ensuring that accurate, science-based information prevails,” the union said. “This condemnation is necessary to help prevent violence against scientists that may be incited by such disinformation.”

The shooting occurred just days after Kennedy announced that HHS was moving to terminate $500 million in contracts to develop vaccines using mRNA technology, which was used to develop the COVID-19 vaccine.

COVID-19 vaccines are estimated to have saved more than 2 million lives worldwide.

The American Medical Association supports mRNA vaccine research.

Despite the support from the medical community, Kennedy claimed “the technology poses more risks than benefits.”

“HHS supports safe, effective vaccines for every American who wants them, that’s why we’re moving beyond the limitations of mRNA for respiratory viruses,” he said.

The AMA, in response, urged the Trump administration to reverse course, and to continue “vital research to improve mRNA vaccines, not throw the baby out with the bathwater by effectively preventing research from moving forward.”

The union said the shooting left CDC employees, along with more than 90 children, trapped in buildings throughout the CDC campus late into Friday.

It said in its Sunday statement that staff should not be required to return to work until the facility is repaired. The CDC campus was reportedly damaged by bullet holes and shattered windows.

“Staff should not be required to work next to bullet holes,” it said. “Forcing a return under these conditions risks re-traumatizing staff by exposing them to the reminders of the horrific shooting they endured.”


CDC shooter blamed COVID vaccine for depression. Union demands statement against misinformation

As authorities identified the shooter in the deadly attack on CDC headquarters as a Georgia man who blamed the COVID-19 vaccine for making him depressed and suicidal, a union representing workers at the agency is demanding that federal officials condemn vaccine misinformation, saying it was putting scientists at risk.

The union said that Friday’s shooting at the Atlanta offices of the U.S. Centers for Disease Control and Prevention, which left a police officer dead, was not a random incident and that it “compounds months of mistreatment, neglect, and vilification that CDC staff have endured.”

The American Federation of Government Employees, Local 2883, said the CDC and leadership of the U.S. Department of Health and Human Services must provide a “clear and unequivocal stance in condemning vaccine disinformation.”

The 30-year-old gunman, who died during the attack, had also tried to get into the CDC’s headquarters in Atlanta but was stopped by guards before driving to a pharmacy across the street and opening fire, a law enforcement official told the Associated Press on Saturday.

The man, identified as Patrick Joseph White, was armed with five guns, including at least one long gun, the official said, speaking on condition of anonymity because they were not authorized to publicly discuss the investigation.

Here’s what to know about the shooting and the continuing investigation:

An attack on a public health institution

Police say White opened fire outside the CDC headquarters in Atlanta on Friday, leaving bullet marks in windows across the sprawling campus. At least four CDC buildings were hit, agency Director Susan Monarez said on X.

DeKalb County Police Officer David Rose was mortally wounded while responding. Rose, 33, a former Marine who served in Afghanistan, had graduated from the police academy in March.

White was found on the second floor of a building across the street from the CDC campus and died at the scene, Atlanta Police Chief Darin Schierbaum said. “We do not know at this time whether that was from officers or if it was self-inflicted,” he said.

The Georgia Bureau of Investigation said the crime scene was “complex” and the investigation would take “an extended period of time.”

CDC union’s call

The American Federation of Government Employees, Local 2883, is calling for a statement condemning vaccine misinformation from the Department of Health and Human Services. The agency is led by Robert F. Kennedy Jr., who rose to public prominence on healthcare issues as a leading vaccine skeptic, sometimes advancing false information.

A public statement by federal officials condemning misinformation is needed to help prevent violence against scientists, the union said in a news release.

“Their leadership is critical in reinforcing public trust and ensuring that accurate, science-based information prevails,” the union said.

Fired But Fighting, a group of laid-off CDC employees, has said Kennedy is directly responsible for the villainization of the CDC’s workforce through “his continuous lies about science and vaccine safety, which have fueled a climate of hostility and mistrust.”

Kennedy reached out to staff on Saturday, saying that “no one should face violence while working to protect the health of others.”

Thousands of people who work on critical disease research are employed on the campus. The union said some staff members were huddled in various buildings until late at night, including more than 90 young children who were locked down inside the CDC’s Clifton School.

The union said CDC staff should not be required to immediately return to work after experiencing such a traumatic event. In a statement released Saturday, it said windows and buildings should first be fixed and made “completely secure.”

“Staff should not be required to work next to bullet holes,” the union said. “Forcing a return under these conditions risks re-traumatizing staff by exposing them to the reminders of the horrific shooting they endured.”

The union also called for “perimeter security on all campuses” until the investigation is fully completed and shared with staff.

Shooter’s focus on COVID-19 vaccine

White’s father, who contacted police and identified his son as the possible shooter, said White had been upset over the death of his dog and had become fixated on the COVID-19 vaccine, according to a law enforcement official.

A neighbor of White told the Atlanta Journal-Constitution that White “seemed like a good guy” but spoke with her multiple times about his distrust of COVID-19 vaccines in unrelated conversations.

“He was very unsettled, and he very deeply believed that vaccines hurt him and were hurting other people,” Nancy Hoalst told the newspaper. “He emphatically believed that.”

But Hoalst said she never believed White would be violent: “I had no idea he thought he would take it out on the CDC.”

Haigh writes for the Associated Press.


How Facebook’s Monetisation Programme is Fueling the Misinformation Economy in Northern Nigeria

The ring light in Amina Yusuf’s* room stood near an old white wardrobe. For months, it remained unused, except during the occasional recordings where she mimed along to Hausa love songs, glancing between her phone screen and the mirror at the other side of the room. These moments were fleeting, unsure steps in her experiment with social media, particularly TikTok.

But when the news came that Facebook had rolled out monetisation features for content creators in Nigeria, something stirred. Opportunity, like the sudden spark of light, loomed and offered a new possibility. Not fame, no – at least not yet – but fortune, or its illusion.

“As soon as I heard about it,” she said, fiddling with the edge of her veil, “I knew this was a way to earn from what I was already doing.”

She speaks with the assurance of someone who has discovered a private economy within a public world. Amina converted her dormant Facebook profile, once used to scroll aimlessly through posts and video reels, into a professional page. She followed every breadcrumb Facebook’s interface dropped: optimise your bio, post consistently, engage followers, and cross-promote from Instagram. Soon enough, the app crowned her eligible for monetisation.

And that’s when her trouble began.

In this algorithmic marketplace, virality is currency. With 190,000 followers on Facebook, her reach was growing – thousands of views, shares, and comments flooding her posts. Amina’s strategy was simple: find trending TikTok videos and repost them. It didn’t matter whether the videos were true or false, informative or inflammatory.

“My job is just to share,” she said. “It’s the viewer’s responsibility to figure out if it’s true or not.”

“Sometimes I earn between 10 to 15 dollars a day,” she said, not with pride, but a sense of surprise. “That’s a lot of money for someone like me. I even paid my school fees with it.”

As a university student in Northern Nigeria, where classrooms are overcrowded, lectures often suspended, and lecturers underpaid, she says her digital hustle has made her richer than her lecturers.

“I earn more than them,” she said plainly. “Imagine that.” She pointed to a university professor’s recent public account of the dire professional conditions lecturers face.

To digital rights activists and fact-checkers, Amina is not just a clever student seizing a modern opportunity. She is part of a growing ecosystem that profits from confusion. What she calls content, they call misinformation. Monetised misinformation.

Facebook’s monetisation in Africa, especially in Nigeria and particularly in the northern part of the country, has become a double-edged sword. On one hand, it democratises income in a region with some of the highest poverty rates in the country. On the other hand, it rewards spectacle, sometimes at the expense of truth. Sensational headlines, recycled conspiracy theories, emotional hoaxes: these are the new exports of a digital continent eager to be seen, eager to be paid.

Amina does not deny this. But she also does not apologise.

“I don’t make the videos,” she said. “I just share what people have already posted. If it makes people comment and watch, that’s all I need.”

Her Facebook profile is a mixture of videos – politics, religion, celebrity gossip, football, and anything that may generate engagement. Among them is the amplification of information disorder originally shared by the videos’ creators.

For example, in a Facebook post that garnered over 60 shares, she amplified a false claim that Osun State Governor Adeleke had announced Babagana Zulum would spearhead the defection of five Northern governors to the new coalition of ADC. Despite the claim being publicly debunked, the post is still on her profile.

An algorithm designed for outrage

By design, Facebook’s algorithm privileges intensity over integrity. According to the platform’s own documentation, content that provokes strong emotional reactions – anger, fear, shock – is more likely to spread. For many users in Northern Nigeria, where Facebook doubles as both a social space and a news source, this has created a chaotic digital environment where engagement is currency and accuracy is often overlooked.

“Facebook isn’t just a platform here,” said Bashir Sharfadi, a journalist based in Kano. “It’s the main source of news for millions. So when influencers post fake news, the impact is immediate and vast.”

A 2020 report by the Centre for Democracy and Development (CDD) West Africa revealed that most of the viral posts flagged by Nigerian fact-checkers in the previous year originated from influencers who directly benefited from Facebook’s financial incentives. The rewards are tangible and tempting.

One such influencer, who regularly posts unverified videos to nearly a million followers, put it plainly: “It’s about engagement, not content.” He explained how influencers operate in coordinated communities, often through WhatsApp groups, sharing what trends, what triggers reaction. “The only reason we avoid some kinds of content, like nudity, is religious. But many others still post that too.”

The more scandalous the claim, the greater the traffic. And with traffic comes income.
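The dynamic described above – emotional reactions and shares weighted more heavily than ordinary engagement, so provocative posts dominate the feed – can be sketched as a toy model. This is a hypothetical illustration only; Facebook’s actual ranking system is proprietary, and the weights below are invented for demonstration.

```python
# Toy engagement-weighted ranking: a hypothetical sketch, NOT Facebook's
# actual algorithm. Weights are invented for illustration.
posts = [
    {"text": "Routine health advisory", "likes": 40, "angry_reactions": 2, "shares": 5},
    {"text": "Shocking claim about governor", "likes": 25, "angry_reactions": 90, "shares": 60},
]

def engagement_score(post):
    # Strong-emotion reactions and shares count for more than plain likes,
    # so inflammatory content outranks accurate-but-dull content.
    return post["likes"] + 3 * post["angry_reactions"] + 5 * post["shares"]

# Rank the feed by engagement, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["text"])  # the sensational post ranks first
```

Under any weighting that rewards reactions and shares over accuracy, a creator paid per engagement has no incentive to verify what they repost, which is the economy Amina describes.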

But Sharfadi warns that the crisis goes beyond the individual pursuit of profit. It has become institutional: a digital ecosystem where misinformation is normalised, defended, and scaled.

“Our biggest challenge isn’t detecting lies,” he said. “It’s competing with the incentives that come with spreading them.” 

But Sharfadi has a deeper concern: people believe misinformation, and many continue to believe it even after it has been fact-checked.

In one recent case, a TikTok video targeting an activist named Dan Bello was re-edited and republished across Facebook and WhatsApp. Dan Bello is a popular Hausa vlogger with millions of followers on Facebook, TikTok, and X, posting mainly on accountability in governance.

The manipulated clip falsely portrayed Dan Bello as ‘an enemy of Islam’ who supported an attack on Muslim clerics, by showing him giving a thumbs-up over audio attached to the video. It gained massive traction. The result: a popular cleric condemned Dan Bello publicly, sparking a backlash that lingered even after the video was proven to be doctored.

“Even when the cleric apologised, people still believed he had been threatened into doing so,” said Sharfadi. “The damage had already been done.”

Another case involved one Sultan, a TikTok influencer known for posting commentary on current events. During the recent Israel-Iran conflict, he claimed that Israeli Prime Minister Netanyahu was hiding in a bunker, near death. The clip was later manipulated to feature an image of Nigeria’s President Tinubu and circulated widely.

Sultan is now in jail.

“He was arrested in Kano for something he never did,” posted his lawyer on Facebook. “There was no investigation. No effort to verify. Just a swift response to digital noise.”

The story of Sultan is a portrait of a system where the line between user-generated content and criminal liability is dangerously blurred.

Who bears the burden?

In response to the growing crisis, Meta—Facebook’s parent company—has recently taken down and demonetised dozens of accounts for violating its content policies. But enforcement remains scattershot.

One influencer interviewed for this report admitted to receiving multiple warnings. Yet his account remains active and profitable.

Asked what caused a restriction on his account, he admitted: “I know it’s wrong, but if I stop, someone else will do it. So what’s the point?”

Critics argue that Facebook’s moderation policies are inconsistent and reactive. Content flagged in English may be removed, while misinformation in Hausa, spoken by tens of millions, is often overlooked.

“What we see is a system where the platform benefits, the influencers benefit, and the public suffers,” Sharfadi said. “It’s not just about demonetization. It’s about influence. These pages, with their massive followings, can be rented. You pay, they publish whatever narrative you want.”

The commodification of disinformation has taken root. Several influencers are now operating as pay-for-post vendors, spreading political propaganda and conspiracy theories on demand.

Fact-checkers like Muhammad Dahiru believe that Facebook must go beyond machine learning and invest in people—moderators fluent in local languages and cultures, equipped to flag false content in real time.

“We need language-specific moderation, especially in Hausa, which is the lingua franca in Northern Nigeria,” Muhammad said. “Otherwise, misinformation will remain the most profitable game in town.”

He added, “There must be accountability. Either platforms police themselves, or governments will do it for them. And when governments control speech, history reminds us what follows.” Muhammad believes the work against misinformation is a shared responsibility “between the government, Facebook, and civil society organisations.”

For now, Northern Nigeria’s digital public is left to sort through a feed where facts and falsehoods blend seamlessly, where a student like Amina can pay tuition with profits from misinformation, and an activist like Dan Bello can be condemned for something that never happened.


The asterisked name is a pseudonym we have used at the source’s request to protect her against backlash.




As millions adopt Grok to fact-check, misinformation abounds

On June 9, soon after United States President Donald Trump dispatched US National Guard troops to Los Angeles to quell the protests taking place over immigration raids, California Governor Gavin Newsom posted two photographs on X. The images showed dozens of troopers wearing the National Guard uniform sleeping on the floor in a cramped space, with a caption that decried Trump for disrespecting the troops.

X users immediately turned to Grok, Elon Musk’s AI, which is integrated directly into X, to fact-check the veracity of the image. For that, they tagged @grok in a reply to the tweet in question, triggering an automatic response from the AI.

“You’re sharing fake photos,” one user posted, citing a screenshot of Grok’s response that claimed a reverse image search could not find the exact source. In another instance, Grok said the images were recycled from 2021, when former US President Joe Biden, a Democrat, withdrew troops from Afghanistan. Melissa O’Connor, a conspiracy-minded influencer, cited a ChatGPT analysis that also said the images were from the Afghanistan evacuation.

However, non-partisan fact-checking organisation PolitiFact found that both AI citations were incorrect. The images shared by Newsom were real, and had been published in the San Francisco Chronicle.

The bot-sourced erroneous fact checks formed the basis for hours of cacophonous debates on X, before Grok corrected itself.

Unlike OpenAI’s standalone app ChatGPT, Grok’s integration into X offers users immediate access to real-time AI answers without quitting the app, a feature that has been reshaping user behaviour since its March launch. However, Grok, increasingly the first stop for fact checks during breaking news and other general posts, often provides convincing but inaccurate answers.

“I think in some ways, it helps, and in some ways, it doesn’t,” said Theodora Skeadas, an AI policy expert formerly at Twitter. “People have more access to tools that can serve a fact-checking function, which is a good thing. However, it is harder to know when the information isn’t accurate.”

There’s no denying that chatbots could help users be more informed and gain context on events unfolding in real time. But currently, their tendency to make things up outstrips their usefulness.

Chatbots, including ChatGPT and Google’s Gemini, are large language models (LLMs) that learn to predict the next word in a sequence by analysing enormous troves of data from the internet. Their outputs reflect the patterns and biases in the data they are trained on, which makes them prone to factual errors and misleading information called “hallucinations”.
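The core idea – a model that only echoes statistical patterns in its training data, with no notion of truth – can be shown with a drastically simplified sketch. A bigram counter is a toy stand-in, nothing like the neural networks behind Grok or ChatGPT, but it illustrates why outputs mirror whatever the corpus contained.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale training data a real LLM uses.
corpus = "the vaccine is safe . the vaccine is effective . the vaccine is dangerous .".split()

# Count which word follows each word: a bigram model, a drastically
# simplified stand-in for an LLM's next-word prediction.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training data."""
    return following[word].most_common(1)[0][0]

# The model can only repeat patterns from its data: it has no way to
# judge whether "safe" or "dangerous" is true, only which is frequent.
print(predict_next("vaccine"))  # -> "is"
```

If the corpus over-represents a false claim, the model confidently reproduces it, which is the mechanism behind both bias and hallucination described above.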

For Grok, these inherent challenges are further complicated because of Musk’s instructions that the chatbot should not adhere to political correctness, and should be suspicious of mainstream sources. Where other AI models have guidelines around politically sensitive queries, Grok doesn’t. The lack of guardrails has resulted in Grok praising Hitler, and consistently parroting anti-Semitic views, sometimes to unrelated user questions.

In addition, Grok’s reliance on public posts by users on X, which aren’t always accurate, as a source for its real-time answers to some fact checks, adds to its misinformation problem.

‘Locked into a misinformation echo chamber’

Al Jazeera analysed two of the most highly discussed posts on X from June to investigate how often Grok tags in replies to posts were used for fact-checking. The posts analysed were Gavin Newsom’s on the LA protests, and Elon Musk’s allegations that Trump’s name appears in the unreleased documents held by US federal authorities on the convicted sex offender Jeffrey Epstein. Musk’s allegations on X have since been deleted.

Our analysis of the 434 replies that tagged Grok in Newsom’s post found that the majority of requests, nearly 68 percent, wanted Grok to either confirm whether the images Newsom posted were authentic or get context about National Guard deployment.

Beyond the straightforward confirmation, there was an eclectic mix of requests: some wanted Grok to make funny AI images based on the post, others asked Grok to narrate the LA protests in pirate-speak. Notably, a few users lashed out because Grok had made the correction, and wouldn’t endorse their flawed belief.

“These photos are from Afghanistan. This was debunked a couple day[s] go. Good try tho @grok is full of it,” one user wrote, two days after Grok corrected itself.

The analysis of the top 3,000 posts that mentioned @grok in Musk’s post revealed that half of all user queries directed at Grok were to “explain” the context and sought background information on the Epstein files, which required descriptive details.

Another 20 percent of queries demanded “fact checks” whose primary goal was to confirm or deny Musk’s assertions, while 10 percent of users shared their “opinion”, questioning Musk’s motives and credibility, and wanted Grok’s judgement or speculation on possible futures of Musk-Trump fallout.

“I will say that I do worry about this phenomenon becoming ingrained,” said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech, about the instant fact checks. “Even if it’s better than just believing a tweet straight-up or hurling abuse at the poster, it doesn’t do a ton for our collective critical thinking abilities to expect an instant fact check without taking the time to reflect about the content we’re seeing.”

Grok was called on 2.3 million times in just one week, between June 5 and June 12, to answer posts on X, data accessed by Al Jazeera through X’s API shows, underscoring how deeply this behaviour has taken root.

“X is keeping people locked into a misinformation echo chamber, in which they’re asking a tool known for hallucinating, that has promoted racist conspiracy theories, to fact-check for them,” Alex Mahadevan, a media literacy educator at the Poynter Institute, told Al Jazeera.

Mahadevan has spent years teaching people how to “read laterally”, which means when you encounter information on social media, you leave the page or post, and go search for reliable sources to check something out. But he now sees the opposite happening with Grok. “I didn’t think X could get any worse for the online information ecosystem, and every day I am proved wrong.”

Grok’s inconsistencies in fact-checking are already reshaping opinions in some corners of the internet. Digital Forensic Research Lab (DFRLab), which studies disinformation, analysed 130,000 posts related to the Israel-Iran war to understand the wartime verification efficacy of Grok. “The investigation found that Grok was inconsistent in its fact-checking, struggling to authenticate AI-generated media or determine whether X accounts belong to an official Iranian government source,” the authors noted.

Grok has also incorrectly blamed a trans pilot for a helicopter crash in Washington, DC; claimed the assassination attempt on Trump was partially staged; conjured up a criminal history for an Idaho shooting suspect; echoed anti-Semitic stereotypes of Hollywood; and misidentified an Indian journalist as an opposition spy during the recent India-Pakistan conflict.

Despite this growing shift towards instant fact checks, it is worth noting that the Reuters Institute’s 2025 Digital News Report showed that online populations in several countries still preferred going to news sources or fact checkers over AI chatbots by a large margin.

“Even if that’s not how all of them behave, we should acknowledge that some of the ‘@grok-ing’ that we’re seeing is also a bit of a meme, with some folks using it to express disagreement or hoping to trigger a dunking response to the original tweet,” Mantzarlis said.

Mantzarlis’s assessment is echoed in our findings. Al Jazeera’s analysis of the Musk-Trump feud showed that about 20 percent of users tagged Grok for things ranging from trolling or dunking directed at either Musk or Grok itself, to requests for AI meme-images such as Trump with kids on Epstein island, as well as non-English language requests, including translations. (We used GPT-4.1 to assist in identifying the categories the 3,000 posts belonged to, and manually checked the categorisations.)

Beyond real-time fact-checking, “I worry about the image-generation abuse most of all because we have seen Grok fail at setting the right guardrails on synthetic non-consensual intimate imagery, which we know to be the #1 vector of abuse from deepfakes to date,” Mantzarlis said.

For years, social media users benefited from added context on the information they encountered online, through interventions such as labelling state media accounts or attaching fact-checking warnings.

But after buying X in 2022, Musk ended those initiatives and loosened speech restrictions. He also used the platform as a megaphone to amplify misinformation on widespread election fraud, and to boost conservative theories on race and immigration. Earlier this year, xAI acquired X in an all-stock deal valued at $80bn. Musk also replaced human fact-checking with a voluntary crowdsource programme called Community Notes, to police misleading content on X.

Instead of a centralised professional fact-checking authority, a contextual “note” with corrections is added to misleading posts, based on the ratings the note receives from users with diverse perspectives. Meta soon followed X and abandoned its third-party fact-checking programme for Community Notes.

Research shows that Community Notes is indeed viewed as more trustworthy and has proven faster than traditional centralised fact-checking. The median time to attach a note to a misleading post dropped to under 14 hours in February, from 30 hours in 2023, a Bloomberg analysis found.

But the programme has also been flailing, with diminished volunteer contributions, less visibility for corrected posts, and a higher chance that notes on contentious topics are removed.

Grok, however, is faster than Community Notes. “You can think of the Grok mentions today as what an automated AI fact checker would look like — it’s super fast but nowhere near as reliable as Community Notes because no humans were involved,” Soham De, a Community Notes researcher and PhD student at the University of Washington, told Al Jazeera. “There’s a delicate balance between speed and reliability.”

X is trying to bridge this gap by supercharging the pace of creation of contextual notes. On July 1, X piloted the “AI Note Writer,” enabling developers to create AI bots to write community notes alongside human contributors on misleading posts.

According to researchers involved in the project, LLM-written notes can be produced faster with high-quality contexts, speeding up the note generation for fact checks.

But these AI contributors must still go through the human rating process that makes Community Notes trustworthy and reliable today, De said. This human-AI system works better than what human contributors can manage alone, De and other co-authors said in a preprint of the research paper published alongside the official X announcement.

Still, the researchers themselves highlighted its limitations, noting that using AI to write notes risks producing persuasive but inaccurate responses.

Grok vs Musk

On Wednesday, xAI launched its latest flagship model, Grok 4. On stage, Musk boasted about the model’s capabilities, touting it as the leader on Humanity’s Last Exam, a collection of advanced reasoning problems used to measure AI progress.

Such confidence belied Grok’s recent struggles. In February, xAI patched an issue after Grok suggested that Trump and Musk deserve the death penalty. In May, Grok ranted about a discredited conspiracy theory alleging the persecution of white people in South Africa in response to unrelated queries on health and sports; xAI attributed this to an unauthorised modification by a rogue employee. A few days later, Grok gave inaccurate figures for the death toll of the Holocaust, which it blamed on a programming error.

Grok has also butted heads with Musk. In June, while answering a user question on whether political violence is higher on the left or the right, Grok cited data from government sources and Reuters to conclude that “right-wing political violence has been more frequent and deadly, with incidents like the January 6 Capitol riot and mass shootings.”

“Major fail, as this is objectively false. Grok is parroting legacy media,” Musk said, adding that there was “far too much garbage in any foundation model trained on uncorrected data.”

Musk has also chided Grok for not sharing his distrust of mainstream news outlets such as Rolling Stone and Media Matters. Subsequently, Musk said he would “rewrite the entire corpus of human knowledge” by adding missing information and deleting errors in Grok’s training data, calling on his followers to share “divisive facts” that are “politically incorrect but nonetheless factually true” for retraining the forthcoming version of the model.

That’s the thorny truth about LLMs. Just as they are likely to make things up, they can also offer answers grounded in truth — even at the peril of their creators. Though Grok gets things wrong, Mahadevan of the Poynter Institute said, it does get facts right while citing credible news outlets, fact-checking sites, and government data in its replies.

On July 6, xAI updated the chatbot’s public system prompt, directing its responses to be “politically incorrect” and to “assume subjective viewpoints sourced from the media are biased”.

Two days later, the chatbot shocked everyone by praising Adolf Hitler as the best person to handle “anti-white hate”. X deleted the inflammatory posts later that day, and xAI removed the directive telling Grok not to adhere to political correctness from its code base.

Grok 4 was launched against this backdrop, and in the less than two days it has been available, researchers have already begun noticing some odd behaviour.

When asked for its opinion on politically sensitive questions, such as whom it supports in the ongoing Israel-Palestine conflict, Grok 4 sometimes runs a search to find out Musk’s stance on the subject before returning an answer, according to at least five AI researchers who independently reproduced the results.

“It first searches Twitter for what Elon thinks. Then it searches the web for Elon’s views. Finally, it adds some non-Elon bits at the end,” Jeremy Howard, a prominent Australian data scientist, wrote in a post on X, pointing out that “54 of 64 citations are about Elon.”

Researchers also expressed surprise over the reintroduction of the directive for Grok 4 to be “politically incorrect”, despite this code having been removed from its predecessor, Grok 3.

Experts said such political manipulation risks eroding institutional trust and might not be good for Grok’s business.

“There’s about to be a structural clash as Musk tries to get the xAI people to stop it from being woke, to stop saying things that are against his idea of objective fact,” said Alexander Howard, an open government and transparency advocate based in Washington, DC. “In which case, it won’t be commercially viable to businesses which, at the end of the day, need accurate facts to make decisions.”


RFK’s CDC panel includes members who’ve spread vaccine misinformation

U.S. Health Secretary Robert F. Kennedy Jr. on Wednesday named eight new vaccine policy advisers to replace the panel that he abruptly dismissed earlier this week.

They include a scientist who researched mRNA vaccine technology and became a conservative darling for his criticisms of COVID-19 vaccines, a leading critic of pandemic-era lockdowns, and a professor of operations management.

Kennedy’s decision to “retire” the previous 17-member Advisory Committee on Immunization Practices was widely decried by doctors’ groups and public health organizations, who feared the advisers would be replaced by a group aligned with Kennedy’s desire to reassess — and possibly end — longstanding vaccination recommendations.

On Tuesday, before he announced his picks, Kennedy said: “We’re going to bring great people onto the ACIP panel — not anti-vaxxers — bringing people on who are credentialed scientists.”

The new appointees include Vicky Pebsworth, a regional director for the National Assn. of Catholic Nurses. She has been listed as a board member and volunteer director for the National Vaccine Information Center, a group that is widely considered to be a leading source of vaccine misinformation.

Another is Dr. Robert Malone, the former mRNA researcher who emerged as a close adviser to Kennedy during the measles outbreak. Malone, who runs a wellness institute and a popular blog, rose to prominence during the COVID-19 pandemic as he relayed conspiracy theories around the outbreak and the vaccines that followed. He has appeared on podcasts and other conservative news outlets where he’s promoted unproven and alternative treatments for measles and COVID-19.

He has claimed that millions of Americans were hypnotized into taking the COVID-19 shots and has suggested that those vaccines cause a form of AIDS. He’s downplayed deaths related to one of the largest measles outbreaks in the U.S. in years.

Malone told the Associated Press he will do his best “to serve with unbiased objectivity and rigor.”

Other appointees include Dr. Martin Kulldorff, a biostatistician and epidemiologist who was a co-author of the Great Barrington Declaration, an October 2020 letter maintaining that pandemic shutdowns were causing irreparable harm. Dr. Cody Meissner, a former ACIP member, also was named.

Abram Wagner of the University of Michigan’s school of public health, who investigates vaccination programs, said he’s not satisfied with the composition of the committee.

“The previous ACIP was made up of technical experts who have spent their lives studying vaccines,” he said. Most people on the current list “don’t have the technical capacity that we would expect out of people who would have to make really complicated decisions involving interpreting complicated scientific data.”

He said having Pebsworth on the board is “incredibly problematic” since she is involved in an organization that “distributes a lot of misinformation.”

Kennedy made the announcement in a social media post on Wednesday.

The committee, created in 1964, makes recommendations to the director of the Centers for Disease Control and Prevention. CDC directors almost always approve those recommendations on how vaccines that have been approved by the Food and Drug Administration should be used. The CDC’s final recommendations are widely heeded by doctors and guide vaccination programs.

The other appointees are:

  • Dr. James Hibbeln, who formerly headed a National Institutes of Health group focused on nutritional neurosciences and who studies how nutrition affects the brain, including the potential benefits of seafood consumption during pregnancy.
  • Retsef Levi, a professor of operations management at the Massachusetts Institute of Technology who studies business issues related to supply chain, logistics, pricing optimization and health and healthcare management. In a 2023 video pinned to an X profile under his name, Levi called for the end of the COVID-19 vaccination program, claiming the vaccines were ineffective and dangerous despite evidence they saved millions of lives. Levi told the AP he would try to help inform “public health policies with data and science, with the goal of improving the health and wellbeing of people and regain the public trust.”
  • Dr. James Pagano, an emergency medicine physician from Los Angeles.
  • Dr. Michael Ross, a Virginia-based obstetrician and gynecologist who previously served on a CDC breast and cervical cancer advisory committee. He is described as a “serial CEO and physician leader” in a bio for Havencrest Capital Management, a private equity investment firm where he is an operating partner.

Of the eight named by Kennedy, perhaps the most experienced in vaccine policy is Meissner, an expert in pediatric infectious diseases at Dartmouth-Hitchcock Medical Center, who has previously served as a member of both ACIP and the Food and Drug Administration’s vaccine advisory panel.

During his five-year term as an FDA adviser, the committee was repeatedly asked to review and vote on the safety and effectiveness of COVID-19 vaccines that were rapidly developed to fight the pandemic. In September 2021, he joined the majority of panelists who voted against a plan from the Biden administration to offer an extra vaccine dose to all American adults. The panel instead recommended that the extra shot should be limited to seniors and those at higher risk of the disease.

Ultimately, the FDA disregarded the panel’s recommendation and approved an extra vaccine dose for all adults.

In addition to serving on government panels, Meissner has helped author policy statements and vaccination schedules for the American Academy of Pediatrics.

ACIP members typically serve in staggered four-year terms, although several appointments were delayed during the Biden administration before positions were filled last year. The voting members are all supposed to have scientific or clinical expertise in immunization, except for one “consumer representative” who can bring perspective on community and social facets of vaccine programs.

Kennedy, a leading voice in the anti-vaccine movement before becoming the U.S. government’s top health official, has accused the committee of being too closely aligned with vaccine manufacturers and of rubber-stamping vaccines. ACIP policies require members to disclose past collaborations with vaccine companies and to recuse themselves from votes in which they have a conflict of interest, but Kennedy has dismissed those safeguards as weak.

Most of the people who best understand vaccines are those who have researched them, which usually requires some degree of collaboration with the companies that develop and sell them, said Jason Schwartz, a Yale University health policy researcher.

“If you are to exclude any reputable, respected vaccine expert who has ever engaged even in a limited way with the vaccine industry, you’re likely to have a very small pool of folks to draw from,” Schwartz said.

The U.S. Senate confirmed Kennedy in February after he promised he would not change the vaccination schedule. But less than a week later, he vowed to investigate childhood vaccines that prevent measles, polio and other dangerous diseases.

Kennedy has ignored some of the recommendations ACIP voted for in April, including the endorsement of a new combination shot that protects against five strains of meningococcal bacteria and the expansion of vaccinations against RSV.

In late May, Kennedy disregarded the committee and announced the government would change the recommendation for children and pregnant women to get COVID-19 shots.

On Monday, Kennedy ousted all 17 members of the ACIP, saying he would appoint a new group before the next scheduled meeting in late June. The agenda for that meeting has not yet been posted, but a recent federal notice said votes are expected on vaccinations against flu, COVID-19, HPV, RSV and meningococcal bacteria.

An HHS spokesman did not respond to a question about whether there would be only eight ACIP members, or whether more would be named later.

Stobbe writes for the Associated Press. Associated Press reporters Matthew Perrone, Amanda Seitz, Devi Shastri and Laura Ungar contributed to this report. The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Science and Educational Media Group and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.


Five years after George Floyd’s death, why misinformation still persists | Racism News

Five years ago on May 25, 2020, a white police officer in the United States killed George Floyd, a 46-year-old Black man, during an arrest.

A bystander’s video showed officer Derek Chauvin kneeling on Floyd’s neck for about nine minutes in Minneapolis, Minnesota, as Floyd pleaded that he couldn’t breathe. The footage sparked weeks of global protests against police brutality and racism. It contributed to a jury’s murder conviction against Chauvin and a federal investigation into the Minneapolis Police Department.

Although ample evidence showed that Chauvin and police misconduct were to blame for Floyd’s death, another narrative quickly emerged – that Floyd died because of a drug overdose.

Five years later, that falsehood is central to calls for President Donald Trump to pardon Chauvin.

Representative Marjorie Taylor Greene, a member of Trump’s Republican Party from Georgia, for example, recently revived her longstanding and long-debunked take that Chauvin did not cause Floyd’s death.

“I strongly support Derek Chauvin being pardoned and released from prison,” Greene wrote in a May 14 X post. “George Floyd died of a drug overdose.”

In 2021, a Minnesota jury convicted Chauvin of second-degree unintentional murder, third-degree murder and second-degree manslaughter. Chauvin also pleaded guilty to twice violating a federal criminal civil rights statute – once against Floyd and once against a 14-year-old in 2017. The state and federal sentences that Chauvin is serving concurrently each exceeded 20 years.

In 2023, after a two-year investigation sparked by Floyd’s death, the US Department of Justice found that the city of Minneapolis and its police department engaged in a pattern of civil rights violations, including the use of excessive force and unlawful discrimination against Black and Native American people.

The narrative that Floyd died of an overdose persisted through the involved police officers’ criminal trials and beyond their convictions, in part because powerful political critics of the racial justice movement sought to rewrite history with false claims. It was one of many false statements about Floyd’s actions, his criminal history and the protests that followed his murder.

Experts said systemic racism also contributes to the proliferation of these inaccurate narratives and to their staying power.

“The core through-line that emerges is the kind of longstanding, deep racist narratives around Black criminality and also the ways people try to justify who is or isn’t an ‘innocent victim’,” Rachel Kuo, a University of Wisconsin-Madison professor who studies race, social movements and technology, said of the falsehoods.

The summer 2020 protests built on 2014 and 2016 protests against police brutality, but with Floyd’s case as a catalyst, racial justice advocates achieved global visibility and corporate attention, Kuo said.

That visibility came with a price.

When people of colour achieve visibility for their social movements or political demands, an effort to delegitimise those demands quickly follows, Kuo said. Misinformation plays a part by trying to “chip away” at the belief that what happened to Floyd was unjust or to undermine the protest movement overall, she said.

How conservative influencers distort an autopsy report to push overdose claim

Chauvin killed Floyd after police were called to a corner grocery store where Floyd was suspected of using a counterfeit $20 bill. News reports about Floyd’s criminal record – which included three drug charges, two theft cases, aggravated robbery and trespassing – fuelled false claims about his background.

Two autopsy reports – one performed by Hennepin County’s medical examiner and one commissioned by Floyd’s family – concluded Floyd’s death was a homicide. Although they pointed to different causes of death, neither report said he died because of an overdose.

The Hennepin County medical examiner’s office reported “fentanyl intoxication” and “recent methamphetamine use” among “other significant conditions” related to his death, but it did not say drugs killed him. It said Floyd “experienced a cardiopulmonary arrest while being restrained by law enforcement officer”. The private autopsy concluded Floyd died of suffocation.

Nevertheless, the Hennepin County autopsy report’s fentanyl detail provided kindling for the drug overdose narrative to catch fire. PolitiFact first fact-checked this narrative when it was published on a conservative blog in August 2020.

As Chauvin’s trial approached in early 2021, then-Fox News host Tucker Carlson wrongly told his millions of viewers that Floyd’s autopsy showed he “almost certainly died of a drug overdose. Fentanyl.”

Conservative influencer Candace Owens amplified the false narrative in March 2021. Lawyers defending Chauvin argued drug use was a more primary cause of death than the police restraint, but jurors were unconvinced.

Chauvin’s 2021 conviction didn’t spell the end of misinformation about Floyd’s death. The drug overdose narrative emerged again in late 2022 as the trial neared for two other police officers charged with aiding and abetting second-degree murder and second-degree manslaughter in Floyd’s death.

Misinformation experts said it’s not surprising that Floyd and the 2020 protests remain a target of false portrayals years later because of the widespread attention Floyd’s death drew at a time when online platforms incentivise inflammatory commentary.

“Marginalised groups have been prime targets of misinformation going back hundreds, even thousands of years” because falsehoods can be weaponised to demonise, harm and further oppress and discriminate, said Deen Freelon, a University of Pennsylvania Annenberg School for Communication professor who studies digital politics with a focus on race, gender, ideology and other identity dimensions in social media.

He said Floyd’s murder was a magnet for mis- and disinformation because it “fits the mould of a prominent event that ties into controversial, long-running political issues,” similar to events such as the 2012 Sandy Hook Elementary School mass shooting and the COVID-19 pandemic.

Conservative activists and politicians with large followings have continued to target Floyd and the 2020 protests.

The drug overdose narrative proliferated in conjunction with the October 2022 release of Owens’s film about Floyd and the Black Lives Matter (BLM) movement, titled The Greatest Lie Ever Sold: George Floyd and the Rise of BLM. Rapper Ye, formerly Kanye West, parroted the false narrative in an October 2022 podcast interview, citing Owens’s film.

In October 2023, Carlson repeated the false drug overdose narrative. That X video has since received more than 23.5 million views. In December 2023, Greene reshared a different Carlson video with the caption, “George Floyd died from a drug overdose.”

Ramesh Srinivasan, an information studies professor at the University of California-Los Angeles Graduate School of Education and Information Studies, said social media algorithms don’t allow for nuanced conversations that require detail and context, which are important for productive discussion about what happened in the summer of 2020.

A person’s online visibility and virality, which can directly correlate to their revenues in some cases, improves when a person takes extreme, antagonistic, partisan or hardened positions, he said.

“Those conditions have propped up certain people who specialise in the peddling of troll-type content, of caricatured content, of deliberately false content,” Srinivasan said.

Freelon said the internet has “added fuel to the fire” and broadened misinformation’s reach.

“So it’s important to remain vigilant against misinformation,” he said, “not only because lies are inherently bad but also because the people who bear the harm have often historically suffered disproportionately from prejudice and mistreatment.”

PolitiFact researcher Caryn Baird contributed to this report.
