Meta

Meta sets date to remove Australians under 16 from Instagram, Facebook

‘Soon, you’ll no longer be able to use Facebook,’ Meta said in messages it sent to young people ahead of the social media ban.

Meta will prevent Australians younger than 16 from accessing Facebook and Instagram from December 4, as Canberra prepares to enforce a sweeping new social media law that has sparked concerns from young people and advocates.

The US tech giant said it would start removing teenagers and children from its platforms ahead of the new Australian social media ban on users under 16 coming into effect on December 10.

The Australian government is preparing to enforce the law with fines of up to 49.5 million Australian dollars (US$32m) for social media companies, even as critics say the changes have been rushed through without addressing questions about privacy, young people’s mental health, and their access to information.

“From today, Meta will be notifying Australian users it understands to be aged 13-15 that they will lose access to Instagram, Threads and Facebook,” Meta said in a statement.

“Meta will begin blocking new under-16 accounts and revoking existing access from 4 December, expecting to remove all known under-16s by 10 December.”

There are around 350,000 Instagram users aged 13 to 15 in Australia, and around 150,000 Facebook accounts in the same age bracket, according to government figures.

Meta has started warning affected users that they will soon be locked out.

“Soon, you’ll no longer be able to use Facebook and your profile won’t be visible to you or others,” reads a message sent to users that Meta believes to be under 16.

“When you turn 16, we’ll let you know that you can start using Facebook again.”

In addition to Facebook and Instagram, the Australian government has said that the ban will be applied to several other social media platforms, including Reddit, Snapchat, Threads, TikTok, X and YouTube.

Ban ‘doesn’t add up’

A number of young people and advocates have expressed concerns about the implementation of the new ban, among them Leo Puglisi, 18, a journalist and the founder of youth news service 6 News Australia, who told an Australian Senate inquiry that young people “deeply care” about the ban and its potential implications.

Puglisi said that many of the people who engage with 6 News are young people who find its content on social media.

“I think young people do have the right to be informed,” he told the inquiry.

“We’re saying that a 15-year-old can’t access any news or political information on social media. I just don’t think that that adds up.”

Australian Senator David Shoebridge has expressed concern that “an estimated 2.4 million young people will be kicked off social media accounts… just as school holidays start.”

“I’m deeply concerned about the impacts of the ban, including on young people’s mental health and privacy,” Shoebridge wrote in a recent post on X.

John Pane, from Electronic Frontiers Australia, also told a Senate inquiry that the new legislation creates new risks even as it tries to address others.

While Pane acknowledged the ban seeks to address young people potentially seeing “unsuitable content” online, he said it also creates a new and “far greater, systemic risk”: the “potential mass collection of children’s and adults’ identity data”.

This would further swell “the data stores and financial positions of big tech and big data”, Pane said, “increasing cyber risk on a very significant scale”.

Since most Australians aged under 16 don’t yet have official government ID, social media companies are planning to require some users to verify their age by recording videos of themselves.

Other countries mull similar bans

There is keen interest in whether Australia’s sweeping restrictions can work as regulators around the globe wrestle with the mixed dangers and benefits of social media.

In New Zealand, Prime Minister Christopher Luxon is planning to introduce a similar bill to restrict children’s social media use.

Indonesia has also said it is preparing legislation to protect young people from “physical, mental, or moral perils”.

In Europe, the Dutch government has advised parents to forbid children under 15 from using social media apps like TikTok and Snapchat.

Judge rules Meta can keep WhatsApp, Instagram in antitrust trial

Nov. 18 (UPI) — Facebook owner Meta can keep the WhatsApp mobile messaging app and the Instagram social media site, a judge has ruled in a federal antitrust case first brought by the Federal Trade Commission in 2020.

Washington, D.C.-based Judge James Boasberg ruled Tuesday that the FTC did not prove its claim that Meta has maintained a monopoly on social media platforms, CNBC reported.

“Whether or not Meta enjoyed monopoly power in the past, though, the agency must show that it continues to hold such power now,” Boasberg wrote.

“The court’s verdict today determines that the FTC has not done so,” he added.

Meta officials said in a statement to NPR that Boasberg’s ruling affirms that social media remains competitive.

Boasberg in 2021 dismissed the case, citing a lack of evidence that Facebook held “market power” over social media.

The FTC amended and refiled its complaint in August 2021, providing more detail on user data and comparisons to competitors, including Snapchat, the discontinued Google+ social network and Myspace.

The FTC also argued Meta engaged in a “buy or bury” strategy to monopolize social media when it paid more than market value to buy Instagram in 2012 and when it bought WhatsApp in 2014, according to NPR.

The only way to resolve the alleged monopoly was to require Meta to spin off Instagram and WhatsApp as independent companies, the FTC argued.

The social media marketplace has changed greatly over the past five years since the federal agency first accused Meta of monopolizing social media, Boasberg wrote.

“While it once might have made sense to partition apps into separate markets of social networking and social media, that wall has since broken down,” Boasberg wrote.

He cited the rise of TikTok, calling it “Meta’s fiercest rival” and evidence of a competitive social media marketplace.

During the trial, which concluded in May, Meta’s legal team argued that the company faced plenty of competition and had bought WhatsApp and Instagram only because they were quality products that were easier to buy than to replicate.

Meta CEO Mark Zuckerberg testified that buying Instagram was easier than creating a new product to compete with it.

Leaked Docs Reveal Meta Cashing In on a ‘Deluge’ of Fraudulent Ads

Meta anticipated earning about 10% of its total annual revenue, or $16 billion, from advertising for scams and banned items, according to internal documents reviewed by Reuters. The documents reveal that for at least three years, the company failed to stop a significant number of ads exposing its billions of users on Facebook, Instagram, and WhatsApp to fraudulent schemes, illegal casinos, and banned medical products. On average, around 15 billion “higher risk” scam ads, showing clear signs of fraud, were displayed daily on these platforms. Meta reportedly generates about $7 billion annually from these scam ads.

Many of these ads were linked to marketers flagged by Meta’s internal systems. However, the company only bans advertisers if fraud is at least 95% certain according to its systems. If less certain but still suspect, Meta imposes higher ad rates as a penalty instead of outright banning them. This approach aims to deter dubious advertisers without fully eliminating them. The company’s ad-personalization system also ensures that users who click on scam ads see more of them based on their interests.
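
In effect, the reported policy is a confidence threshold with a pricing penalty below it. A minimal sketch of that decision logic in Python; the 95% cutoff is the figure from the documents, while the function name, the lower “suspect” cutoff, and the penalty multiplier are hypothetical values for illustration, not Meta’s actual systems:

```python
# Illustration of the enforcement policy described in the documents:
# ban outright only at >= 95% fraud confidence; below that, charge more.
BAN_THRESHOLD = 0.95       # reported figure
SUSPECT_THRESHOLD = 0.50   # assumed cutoff for "suspect but uncertain"
PENALTY_MULTIPLIER = 1.5   # assumed ad-rate penalty, for illustration only

def enforcement_action(fraud_score: float) -> tuple[str, float]:
    """Map a fraud-confidence score to an (action, ad-rate multiplier) pair."""
    if fraud_score >= BAN_THRESHOLD:
        return ("ban_advertiser", 0.0)                  # account removed outright
    if fraud_score >= SUSPECT_THRESHOLD:
        return ("penalty_pricing", PENALTY_MULTIPLIER)  # deterrence via higher rates
    return ("allow", 1.0)

print(enforcement_action(0.97))  # ('ban_advertiser', 0.0)
print(enforcement_action(0.80))  # ('penalty_pricing', 1.5)
```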

The documents paint a picture of Meta grappling with the extent of abuse on its platforms while hesitating to take stronger actions that could hurt its revenue. Its willingness to accept revenue from suspicious sources highlights a lack of oversight in the advertising industry, fraud expert Sandeep Abraham noted. Meta spokesperson Andy Stone countered that the documents present a selective view and argued that the actual share of revenue from scam ads is lower than estimated; the internal plan, he said, was meant to justify investments in combating fraud.

Stone said that Meta has significantly reduced user reports of scam ads globally and has removed millions of pieces of scam ad content in recent enforcement efforts, with major further reductions targeted for the coming year. Despite this, internal research indicates that Meta’s platforms are central to the global fraud economy, with one presentation estimating they are involved in a third of all successful fraud in the U.S. Competitors were noted to have better systems for combating fraud.

As regulators step up pressure for stronger consumer protections, the documents reveal that the U.S. Securities and Exchange Commission is investigating Meta over financial scam ads. In Britain, regulators identified Meta as the source of over half of payment-related scam losses in 2023. The company has acknowledged that addressing illicit advertising may hurt its revenue.

Meta is investing heavily in technology and has plans for extensive capital expenditures in AI. CEO Mark Zuckerberg reassured investors that their advertising revenue can support these projects. The internal documents suggest a careful consideration of the financial impact of increasing measures against scam ads, indicating that while the company intends to reduce illicit revenue, it is wary of the potential business implications.

Despite planning to shrink scam ads’ share of its revenue, Meta is bracing for regulatory fines, estimating penalties could reach up to $1 billion. Those fines are viewed as comparatively minor next to the income scam ads already generate. The leadership’s strategy shows a tendency to react to regulatory pressure rather than to vet advertisers proactively. Stone disputed claims that Meta’s policy is to act only under regulatory threat.

Meta has set limits on how much revenue it is willing to lose from actions against suspect advertisers. In early 2025, a document revealed that the team reviewing questionable ads was restricted to blocking no more than 0.15% of company revenue, which equated to around $135 million out of Meta’s $90 billion total for the same period. A manager noted that this cap covered both scam ads and harmless ads mistakenly blocked alongside them, underscoring the strict financial boundaries on enforcement.
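
Those figures are internally consistent with the earlier “10% of annual revenue, or $16 billion” estimate. A quick arithmetic check in Python, using only the numbers cited above:

```python
# Sanity-check the figures reported from the internal documents.
period_revenue = 90e9    # Meta revenue in the period covered by the cap
cap_fraction = 0.0015    # 0.15% ceiling on revenue the review team could block
print(period_revenue * cap_fraction)  # 135000000.0 -> ~$135 million, as reported

# The earlier "10% of annual revenue, or $16 billion" figure implies:
print(16e9 / 0.10)                    # 160000000000.0 -> ~$160 billion annual revenue
```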

Under increasing pressure to manage scams more effectively, Meta’s executives proposed a moderate strategy to CEO Mark Zuckerberg in October 2024. Instead of a drastic crackdown, they suggested targeting the countries where they anticipated regulatory action. Their goal was to reduce the share of revenue derived from scams, illegal gambling, and prohibited goods from approximately 10.1% in 2024 to 7.3% by the end of 2025, with further reductions planned for subsequent years.

A surge in online fraud was noted in 2022, when Meta uncovered a network of accounts pretending to be U.S. military members in order to scam Facebook users. Other scams, such as sextortion, were also rising. Yet at the time, Meta invested little in automated systems to detect such scams and categorized them as a low-priority issue. Internal documents show efforts were mainly focused on fraudsters impersonating celebrities, which threatened to alienate advertisers and users alike. Meanwhile, layoffs at Meta thinned the enforcement team, as many of those working on advertiser rights were let go and resources shifted heavily toward virtual reality and AI projects.

Despite the layoffs, Meta claimed to have increased its staff handling scam advertising. However, data from 2023 revealed that Meta was ignoring about 96% of valid scam reports filed by users, a significant gap in its response to customer concerns. Safety staff aimed to improve on this by reducing the share of dismissed reports to no more than 75%.

Instances of user frustration were evident, such as a recruiter for the Royal Canadian Air Force who lost access to her account after being hacked. Despite her multiple reports to Meta, the hijacked account remained active and was used to share false cryptocurrency investment opportunities that defrauded her connections. Many other people also flagged the account, but it took about a month before Meta finally removed it.

Meta refers to scams that do not involve paid ads as “organic,” which include free classified ads, fake dating profiles, and fraudulent medical claims. A report from December 2024 stated that users face approximately 22 billion organic scam attempts each day, alongside 15 billion scam ads, highlighting the company’s ongoing struggle to manage fraud effectively. Internal documents suggest that Meta’s efforts to police fraud are not capturing much of the scam activity occurring across its platforms.

In Singapore, police shared a list of 146 scams targeting local users, but Meta staff found that only 23% of them violated the platform’s policies as written; the remaining 77% went against the spirit of the rules but not their exact wording. Examples of unchecked scams included fake offers on designer clothes, false concert tickets, and job ads pretending to be from major tech firms. In one case, Meta discovered scam ads impersonating the Canadian prime minister, yet the existing rules would not flag the account.

Even when advertisers are found to be scamming, the rules can be lenient. Small advertisers must be flagged for scams eight times before being blocked, while larger ones can rack up more than 500 complaints without being shut down. Some scams generated significant revenue; in one case, four ads that were eventually removed had been linked to $67 million in monthly revenue.

An employee launched a weekly “Scammiest Scammer” report to raise awareness internally, but some of the flagged accounts remained active for months. Meta also tried to deter scammers by charging them more in ad auctions, labeling this practice “penalty bids”: advertisers suspected of fraud would have to bid higher amounts, thus reducing the competition they posed to legitimate advertisers. The approach showed some success, resulting in fewer scam reports and a slight dip in overall ad revenue.
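
Mechanically, a penalty bid can be pictured as a handicap on a suspect advertiser’s standing in the auction, so that it must bid more than a legitimate rival to win the same impression. A minimal sketch of that idea in Python; the auction model, the handicap factor, and all names here are illustrative assumptions, not a description of Meta’s actual ad systems:

```python
# Toy model of "penalty bids": a suspected-fraud advertiser's bid is
# handicapped in the auction ranking, so it must bid more to win the
# same impression. All names and values are illustrative assumptions.
from dataclasses import dataclass

PENALTY_FACTOR = 2.0  # assumed handicap applied to suspect advertisers

@dataclass
class Bid:
    advertiser: str
    amount: float   # dollars offered for the impression
    suspect: bool   # flagged as likely fraudulent by internal models?

def effective_bid(bid: Bid) -> float:
    """Rank a suspect advertiser as though it bid less, pricing it out."""
    return bid.amount / PENALTY_FACTOR if bid.suspect else bid.amount

def run_auction(bids: list[Bid]) -> Bid:
    """Award the impression to the highest effective bid."""
    return max(bids, key=effective_bid)

bids = [Bid("legit_shop", 1.00, False), Bid("flagged_seller", 1.50, True)]
print(run_auction(bids).advertiser)  # legit_shop: the flagged $1.50 ranks as $0.75
```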

With information from Reuters

California backs down on AI laws so more tech leaders don’t flee the state

California’s tech companies, the epicenter of the state’s economy, sent politicians a loud message this year: Back down from restrictive artificial intelligence regulation or they’ll leave.

The tactic appeared to have worked, activists said, because some politicians weakened or scrapped guardrails to mitigate AI’s biggest risks.

California Gov. Gavin Newsom rejected a bill aimed at making companion chatbots safer for children after the tech industry fought it. In his veto message, the governor raised concerns about placing broad limits on AI, which has sparked a massive investment spree and created new billionaires overnight around the San Francisco Bay Area.

Assembly Bill 1064 would have barred companion chatbot operators from making these AI systems available to minors unless the chatbots weren’t “foreseeably capable” of certain conduct, including encouraging a child to engage in self-harm. Newsom said he supported the goal, but feared it would unintentionally bar minors from using AI tools and learning how to use technology safely.

“We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether,” he wrote in his veto message.

The bill’s veto was a blow to child safety advocates who had pushed it through the state Legislature and a win for tech industry groups that fought it. In social media ads, groups such as TechNet had urged the public to tell the governor to veto the bill because it would harm innovation and lead to students falling behind in school.

Organizations trying to rein in the world’s largest tech companies as they advance the powerful technology say the tech industry has become more empowered at the national and state levels.

Meta, Google, OpenAI, Apple and other major tech companies have strengthened their relationships with the Trump administration. Companies are funding new organizations and political action committees to push back against state AI policy while pouring money into lobbying.

In Sacramento, AI companies have lobbied behind the scenes for more freedom. California’s massive pool of engineering talent, tech investors and companies make it an attractive place for the tech industry, but companies are letting policymakers know that other states are also interested in attracting those investments and jobs. Big Tech is particularly sensitive to regulations in the Golden State because so many companies are headquartered there and must abide by its rules.

“We believe California can strike a better balance between protecting consumers and enabling responsible technological growth,” Robert Boykin, TechNet’s executive director for California and the Southwest, said in a statement.

Common Sense Media founder and Chief Executive Jim Steyer said tech lobbyists put tremendous pressure on Newsom to veto AB 1064. Common Sense Media, a nonprofit that rates and reviews technology and entertainment for families, sponsored the bill.

“They threaten to hurt the economy of California,” he said. “That’s the basic message from the tech companies.”

Advertising is among the tactics tech companies with deep pockets use to convince politicians to kill or weaken legislation. Even if the governor signs a bill, companies have at times sued to block new laws from taking effect.

“If you’re really trying to do something bold with tech policy, you have to jump over a lot of hurdles,” said David Evan Harris, senior policy advisor at the California Initiative for Technology and Democracy, which supported AB 1064. The group focuses on finding state-level solutions to threats that AI, disinformation and emerging technologies pose to democracy.

Tech companies have threatened to move their headquarters and jobs to other states or countries, a risk looming over politicians and regulators.

The California Chamber of Commerce, a broad-based business advocacy group that includes tech giants, launched a campaign this year that warned over-regulation could stifle innovation and hinder California.

“Making competition harder could cause California companies to expand elsewhere, costing the state’s economy billions,” the group said on its website.

From January to September, the California Chamber of Commerce spent $11.48 million lobbying California lawmakers and regulators on a variety of bills, filings with the California secretary of state show. During that period, Meta spent $4.13 million. A lobbying disclosure report shows that Meta paid the California Chamber of Commerce $3.1 million, the bulk of its spending. Google, which also paid TechNet and the California Chamber of Commerce, spent $2.39 million.

Amazon, Uber, DoorDash and other tech companies spent more than $1 million each. TechNet spent around $800,000.

The threat that California companies could move away has caught the attention of some politicians.

California Atty. Gen. Rob Bonta, who has investigated tech companies over child safety concerns, indicated that despite initial concern, his office wouldn’t oppose ChatGPT maker OpenAI’s restructuring plans. The new structure gives OpenAI’s nonprofit parent a stake in its for-profit public benefit corporation and clears the way for OpenAI to list its shares.

Bonta blessed the restructuring partly because of OpenAI’s pledge to stay in the state.

“Safety will be prioritized, as well as a commitment that OpenAI will remain right here in California,” he said in a statement last week. The AG’s office, which supervises charitable trusts and ensures these assets are used for public benefit, had been investigating OpenAI’s restructuring plan over the last year and a half.

OpenAI Chief Executive Sam Altman said he’s glad to stay in California.

“California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued,” he posted on X.

Critics — among them tech figures such as Elon Musk, Meta and former OpenAI executives, as well as nonprofits and foundations — have raised concerns about OpenAI’s restructuring plan. Some warned it would allow startups to exploit charitable tax exemptions and let OpenAI prioritize financial gain over the public good.

Lawmakers and advocacy groups say it’s been a mixed year for tech regulation. The governor signed Assembly Bill 56, which requires platforms to display labels for minors that warn about social media’s mental health harms. Another piece of signed legislation, Senate Bill 53, aims to make AI developers more transparent about safety risks and offers more whistleblower protections.

The governor also signed a bill that requires chatbot operators to have procedures to prevent the production of suicide or self-harm content. But advocacy groups, including Common Sense Media, withdrew their support for Senate Bill 243 because they said the tech industry pushed for changes that weakened its protections.

Newsom also vetoed other legislation the tech industry opposed, including Senate Bill 7, which would have required employers to notify workers before deploying an “automated decision system” in hiring, promotions and other employment decisions.

Called the “No Robo Bosses Act,” the bill failed to win over the governor, who considered it too broad.

“A lot of nuance was demonstrated in the lawmaking process about the balance between ensuring meaningful protections while also encouraging innovation,” said Julia Powles, a professor and executive director of the UCLA Institute for Technology, Law & Policy.

The battle over AI safety is far from over. Assemblymember Rebecca Bauer-Kahan (D-Orinda), who co-wrote AB 1064, said she plans to revive the legislation.

Child safety is an issue that both Democrats and Republicans are examining after parents sued AI companies such as OpenAI and Character.AI for allegedly contributing to their children’s suicides.

“The harm that these chatbots are causing feels so fast and furious, public and real that I thought we would have a different outcome,” Bauer-Kahan said. “It’s always fascinating to me when the outcome of policy feels to be disconnected from what I believe the public wants.”

Steyer from Common Sense Media said a new ballot initiative includes the AI safety protections that Newsom vetoed.

“That was a setback, but not an overall defeat,” he said about the veto of AB 1064. “This is a David and Goliath situation, and we are David.”
