WASHINGTON — The Supreme Court agreed Monday to decide whether “habitual drug users” lose their gun rights under the 2nd Amendment.
The Trump administration is defending a federal gun control law dating to 1968 and challenging the rulings of two conservative appeals courts that struck down the ban on gun possession by any “unlawful user” of illegal drugs, including marijuana.
Trump’s lawyers say this limit on gun rights comports with early American history when “common drunkards” were prohibited from having guns.
And they argue this “modest, modern” limit makes sense because well-armed drug addicts “present unique dangers to society — especially because they pose a grave risk of armed, hostile encounters with police officers while impaired.”
The government says the ban applies only to addicts and “habitual users of illegal drugs,” not to all those who have used drugs on occasion or in the past.
Under this interpretation, the law “imposes a limited, inherently temporary restriction — one which the individual can remove at any time simply by ceasing his unlawful drug use,” the administration’s attorneys told the court.
The appeal noted that California and 31 other states have laws restricting gun possession by drug users and drug addicts, all of which could be nullified by a broad reading of the 2nd Amendment.
The court said it will hear the case of a Texas man and a Pakistani native who came under investigation by the FBI for allegedly working with the Iranian Revolutionary Guard Corps, a designated foreign terrorist organization.
When agents with a warrant searched the home of Ali Denali Hemani, they found a Glock pistol, 60 grams of marijuana, and 4.7 grams of cocaine. He told the agents he used marijuana about every other day.
He was charged with violating the federal gun control law, but the 5th Circuit Court in New Orleans ruled that the ban violates the 2nd Amendment unless the defendant was under the influence of drugs when he was arrested.
The 8th Circuit Court, based in St. Louis, adopted a similar view, ruling that the gun ban for drug users is unconstitutional.
The Trump administration asked the justices to hear the case of U.S. vs. Hemani and to reverse the two lower courts. Arguments are likely to be heard in January.
Last year, the justices rejected a gun rights claim in another case from Texas and ruled that a man charged with domestic violence can lose his right to have firearms.
Historically, people who “threaten physical harm to others” have lost their legal rights to guns, Chief Justice John G. Roberts said in an 8-1 decision.
The judge ruled NSO caused ‘irreparable harm’ to Meta, but said an earlier award of $168m in damages was ‘excessive’.
Published On 18 Oct 2025
A United States judge has granted an injunction barring Israeli spyware maker the NSO Group from targeting WhatsApp users, saying the firm’s software causes “direct harm”, while slashing an earlier damages award of $168m to just $4m.
In a ruling on Friday granting WhatsApp owner Meta an injunction to stop NSO’s spyware from being used in the messaging service, district judge Phyllis Hamilton said the Israeli firm’s “conduct causes irreparable harm”, adding that there was “no dispute that the conduct is ongoing”.
Hamilton said NSO’s conduct “serves to defeat” one of the key purposes of the service offered by WhatsApp: privacy.
“Part of what companies such as WhatsApp are ‘selling’ is informational privacy, and any unauthorised access is an interference with that sale,” she said.
In her ruling, Hamilton said that evidence at trial showed that NSO reverse-engineered WhatsApp code to stealthily install its spyware Pegasus on users’ phones, and repeatedly redesigned it to escape detection and bypass security fixes.
NSO was founded in 2010 and is based in the Israeli seaside tech hub of Herzliya, near Tel Aviv.
Pegasus – a highly invasive software marketed as a tool for law enforcement to fight crime and terrorism – allows operators to remotely embed spyware in devices.
NSO says it only sells the spyware to vetted and legitimate government law enforcement and intelligence agencies. But Meta, which owns WhatsApp, filed a lawsuit in California federal court in late 2019, accusing NSO of exploiting its encrypted messaging service to target journalists, lawyers and human rights activists with its spyware.
Judge Hamilton said her broad injunction was appropriate given NSO’s “multiple design-arounds” to infect WhatsApp users – including missed phone calls and “zero-click” attacks – as well as the “covert nature” of the firm’s work more generally.
Will Cathcart, the head of WhatsApp, said in a statement that the “ruling bans spyware maker NSO from ever targeting WhatsApp and our global users again”.
“We applaud this decision that comes after six years of litigation to hold NSO accountable for targeting members of civil society. It sets an important precedent that there are serious consequences to attacking an American company,” he said.
Meta had asked Hamilton to extend the injunction to its other products – including Facebook, Instagram and Threads – but the judge ruled there was no way for her to determine if similar harms were being done on the other platforms without more evidence.
Hamilton also ruled that an initial award of $168m against NSO for damages to Meta in May this year was excessive, determining that the court did not have “sufficient basis” to support the jury’s initial calculation.
“There have simply not yet been enough cases involving unlawful electronic surveillance in the smartphone era for the court to be able to conclude that defendants’ conduct was ‘particularly egregious’,” Hamilton wrote.
The judge ruled that the punitive damages ratio should therefore be “capped at 9/1”, reducing the initial sum by about $164m to just $4m.
Nearly 30 million people in the UK who bought an Apple or Samsung smartphone between 2015 and 2024 may be entitled to about £17 if the consumer campaign group Which? is successful in a case against US tech giant Qualcomm.
The consumer group is taking the tech giant to the Competition Appeal Tribunal in London on Monday.
The trial between Which? and Qualcomm is expected to last five weeks. The consumer group is accusing the chip company of anti-competitive practices.
It claims the firm forced Apple and Samsung to pay inflated prices and licensing fees for essential handset components, which then pushed up the cost of those smartphones for consumers.
The BBC has reached out to Qualcomm for comment.
The trial starting on Monday will focus on whether Qualcomm held market power and, if so, whether it abused a dominant position.
If Which? is successful, there will be a second stage seeking £480m from Qualcomm, to be distributed among an estimated 29 million British phone owners affected.
Which? is seeking damages for all affected Apple and Samsung smartphones purchased between 1 October 2015 and 9 January 2024.
The consumer group says this would probably work out at around £17 each. Qualcomm has previously said the case has “no basis”.
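The article’s per-person figure is simple division of the sum sought over the estimated claimant pool. A minimal sanity check of that arithmetic, using only the two figures reported above (£480m and roughly 29 million owners):

```python
# Rough check of the per-claimant payout figure reported by Which?
total_damages = 480_000_000    # £480m sought in the second stage
affected_owners = 29_000_000   # estimated affected UK phone owners

per_person = total_damages / affected_owners
print(round(per_person, 2))    # about £16.55, i.e. "around £17 each"
```

Any actual distribution would depend on how many claimants come forward, so the reported £17 is an estimate, not a fixed entitlement.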
A similar case against Qualcomm is ongoing in Canada, and the firm has also previously been fined by the EU for antitrust violations.
Anabel Hoult, chief executive of Which?, said: “This trial is a huge moment. It shows how the power of consumers – backed by Which? – can be used to hold the biggest companies to account if they abuse their dominant position.”
Qualcomm is one of the world’s biggest producers of smartphone chips and has faced allegations about anti-competitive behaviour before.
The Federal Trade Commission in the United States sued the firm for unfair practices in the way it licensed its technology back in 2017, but had its case dismissed in 2020.
Second order this year focuses on UK users; earlier attempt included US user data, but was withdrawn under US pressure.
The British government has ordered Apple to hand over personal data uploaded by its customers to the cloud for the second time this year in an ongoing privacy row that has raised concerns among civil liberties campaigners.
The Home Office issued a demand in early September for the tech behemoth to create a so-called back door that would allow the authorities access to private data uploaded by United Kingdom Apple customers, according to a report published on Wednesday by The Financial Times. It follows a previous attempt, which included customers in the United States, that failed.
A previous technical capability notice (TCN) issued early this year led to a major backlash from the US, which frowns upon foreign entities seeking to regulate Silicon Valley. The administration of US President Donald Trump eventually forced the UK to back down.
US intelligence chief Tulsi Gabbard said in August that the administration had wanted to “ensure Americans’ private data remains private and our constitutional rights and civil liberties are protected”.
Civil liberties campaigners in the UK reacted with alarm to the latest order for access to encrypted data. “If this new order isn’t stopped, the UK Government will likely issue similar orders to other companies, too,” said London-based group Privacy International.
It said the UK government, which would be deploying the measure to protect national security, risked “everyone’s security, while claiming to ‘protect’ people”.
The Home Office was cited by the FT as saying: “We do not comment on operational matters, including, for example, confirming or denying the existence of any such notices.”
Privacy through encryption is a major selling point for tech platforms, which have long seen providing access to law enforcement as a red line.
On Wednesday, Apple said it had “never built a backdoor or master key to any of our products or services and we never will”. The company had appealed against the earlier TCN at the UK’s Investigatory Powers Tribunal, the body confirmed in April.
However, it withdrew full end-to-end encryption, known as Advanced Data Protection, for UK users in February. The feature allows iPhone and Mac users to ensure that only they – and not even Apple – can unlock data stored on its cloud.
“Apple is still unable to offer Advanced Data Protection in the United Kingdom to new users, and current UK users will eventually need to disable this security feature,” the California-based company said on Wednesday.
The company said it was committed to offering users the highest level of security, and it was hopeful it would be able to do so in Britain in the future.
The controversy over official attempts to snoop on Apple users comes amid a growing furore over government plans to issue digital identity cards to curb undocumented immigration and ward off threats from the right-wing Reform UK party.
The move raised hackles among civil liberties groups and citizens in the UK, where the concept of national identity cards has traditionally been unpopular.
Mitchum has apologised after customers were left with “burning armpits”
A well-known deodorant brand has apologised after some of its roll-on products reportedly left customers with itchy, burning armpits.
Consumers of Mitchum’s 48-hour roll-on anti-perspirant and deodorant complained on social media of having “agonising weeping spots”, redness and irritation after using the product.
Posting on TikTok, one customer claimed she wanted to “rip my armpits out”, while another said her underarms felt like they were “on fire”.
The company said it was “truly sorry” and explained how a change in the manufacturing process had affected 100ml batches sold in the UK, Ireland and South Africa. It said it was removing the affected products from shelves.
Hundreds of people have taken to sharing videos of their experience on TikTok.
One woman said she was unable to sleep after using the roll-on because it left her with “second degree chemical burns on my armpits”.
Another described her underarm skin as developing a pink rash which had “scabbed over”.
A third said she was left in agony because of “weeping spots” under her arm.
“I won’t be using any Mitchum products again because I’m not risking this happening again,” she said.
A Mitchum spokesperson said the brand was “truly sorry some of our customers have experienced temporary irritation”.
In a statement, the company said: “We want to reassure there has been no change to the formula of our products, but we have identified a change in the manufacturing process affecting one of our raw materials.
“This has impacted how the roll-on interacts with the skin of some users.”
It did not give details of what the change to its manufacturing process was but said the issue had since been resolved and it was working to “remove the small amount of product” left in shops.
“In addition, we have reverted to the original manufacturing process to ensure no other batches are affected,” the spokesperson said.
Mitchum advised all those affected to contact its customer services team so it could “make this right”.
The firm has issued a list of all the affected 100ml roll-on products.
US tech giant says jury decision misunderstands its products and it will appeal.
Published On 4 Sep 2025
Google has been told by a US jury to pay $425m for violating the privacy of tens of millions of users who opted out of a feature tracking app use.
The jury in San Francisco handed down the verdict on Wednesday after a group of Google users accused the tech giant of continuing to collect data from third-party apps even when they changed their account settings to prevent the practice.
Google said the decision misunderstood how its products work and that it planned to appeal.
“Our privacy tools give people control over their data, and when they turn off personalization, we honor that choice,” Google spokesperson Jose Castaneda said in a statement.
In their lawsuit, the plaintiffs alleged that Google collected and sold users’ mobile app activity data in breach of privacy assurances contained in its Web & App Activity settings.
The suit, which was filed in July 2020, covered some 98 million Google users.
During the trial, Google had argued that collected data was “nonpersonal” and “pseudonymous” and stored in “segregated, secured, and encrypted locations”.
Google has faced a number of other recent privacy-related lawsuits.
In May, the tech giant agreed to pay $1.375bn to the state of Texas over claims it had collected residents’ face geometry and voiceprints without proper consent, and tracked users’ locations even when they opted out of the feature.
Lagos-based fintech giant Flutterwave has launched a new stock-trading feature for African users, enabling them to invest in US equities via their local currencies.
The rollout began in late June, following an integration with US-based brokerage API provider Alpaca, and is currently available in Nigeria and Kenya, with plans to expand to more African markets.
The new offering lets users purchase fractional shares of US stocks directly through Flutterwave’s app or third-party platforms integrated with its API. Notably, the system facilitates real-time settlement in local currencies and integrates with existing mobile wallets, providing seamless access for first-time retail investors across the continent.
The move marks Flutterwave’s entry into wealthtech, expanding its suite beyond core payments infrastructure. Founded in 2016 in Lagos, the company has become Africa’s most valuable payments startup, with a valuation exceeding $3 billion. It processes billions of dollars annually across 33 African countries, powering payments for global firms including Uber, Meta, and Microsoft.
Flutterwave’s wealthtech ambitions are reinforced by its acquisition of US-based money transfer platform Orbital in February 2025. The deal—whose value was not disclosed—strengthened the company’s remittance capabilities and allowed it to integrate US financial infrastructure into its services. As a result, Flutterwave is better positioned to facilitate diaspora-led investments and cross-border flows between the United States and Africa.
This expansion into stock trading comes at a time when Africa’s young, mobile-savvy population is showing heightened interest in global investment opportunities. According to Verified Market Reports, the global micro-investing app market is forecast to grow from $1.2 billion in 2024 to $4.5 billion by 2033, with demand in emerging economies leading the curve.
The new product also pits Flutterwave against rivals like Chipper Cash, Bamboo, and Trove. Still, it aims to differentiate itself through localized integration, multi-currency support, and access through already trusted payment channels.
The company has raised more than $475 million from global investors, including Tiger Global, Visa Ventures, and Avenir Growth Capital, positioning it to scale further into the financial services sector.
With this new offering, Flutterwave is redefining itself as a comprehensive financial gateway bridging Africa to global capital markets.
Aug. 29 (UPI) — Anthropic plans to start training its artificial intelligence models with user data, one day after announcing a hacker used Claude to identify 17 companies vulnerable to attack and obtained sensitive information.
The company is asking all users of Claude to decide by Sept. 28 whether they want their conversations used for the process. Anthropic will retain data for up to five years, according to a blog post by the company on Thursday.
Anthropic, a public AI research and development company headquartered in San Francisco, was founded in 2021 by seven OpenAI leaders and researchers who left because of disagreements over safety policies. OpenAI is now a rival company.
In 2023, Amazon invested $4 billion and Google $2 billion in the company.
Claude debuted in March 2023, with the latest version, Claude 4, introduced in May. Claude has approximately 18.9 million monthly active users worldwide. There are free plans as well as paid plans that cost as much as $30 per month per user.
The affected consumer products include the Claude Free, Pro and Max plans. The change does not apply to Claude for Work, Claude Gov, Claude for Education, or application programming interface use, including through third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.
Previously, users were told their prompts and conversations would be deleted automatically from the company’s back end within 30 days “unless legally or policy‑required to keep them longer” or their input was flagged as violating its policies. In the latter case, a user’s inputs and outputs might be retained for up to two years.
“By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations,” the company said. “You’ll also help future Claude models improve at skills like coding, analysis and reasoning, ultimately leading to better models for all users.”
The company noted users are “always in control of this setting and whether we use your data in this way.”
New users can select a preference in the sign-up process. Existing ones will see the choice in a pop-up window. To help users avoid accidentally clicking “accept,” the following message appears in larger letters: “Updates to Consumer Terms and Policies.”
Changes will go into effect immediately.
After Sept. 28, users will need to make their selection on the model training setting to continue using Claude.
The five years of data retention will only apply to new or resumed chats and coding sessions, “and will allow us to better support model development and safety improvements,” the company said.
The company also said users’ privacy would be protected.
“To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” the company said. “We do not sell users’ data to third parties.”
Connie Loizos, a writer for TechCrunch, explained why the policy changed.
“Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand,” Loizos said. “Training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.”
The Federal Trade Commission, when Joe Biden was president, warned on Jan. 9, 2024, that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print — they risk running afoul of the law.”
The current FTC has only three members.
On Wednesday, Anthropic said an unnamed hacker “used AI to what we believe is an unprecedented degree. Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks.” In cyber extortion, hackers steal sensitive user information or trade secrets.
A hacker convinced Claude Code, Anthropic’s chatbot that specializes in “vibe coding,” or creating computer programs from simple requests, to identify companies vulnerable to attack. Claude then created malicious software to steal sensitive information from the companies, organized the hacked files and analyzed them to help determine what was sensitive and could be used to extort the victim companies.
Targeted were healthcare, emergency services, and governmental and religious institutions. The person threatened to publicly expose the data unless a ransom of up to $500,000 was paid, the company said.
The company also said it discovered that North Korean operatives had been using Claude to fraudulently secure and maintain remote employment positions at U.S. Fortune 500 technology companies to generate profit for the North Korean regime.
“Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions,” the company said.
The company said it updated preventive safety measures.
Netflix fans have been searching through the streaming service’s catalogue to find the best hidden gems, and one particular show, described as “absolutely incredible”, is getting a lot of love.
A Netflix miniseries based on a true story has been ranked as the ‘most underrated’.
Netflix boasts an enormous library of films and series covering a wide variety of genres, including content in multiple languages. There’s something to suit every TV enthusiast, whether you’re into true crime, action-packed thrillers or laugh-out-loud comedies.
Some blockbuster programmes and movies dominate many Netflix subscribers’ viewing lists and receive widespread promotion across social media platforms, such as Wednesday, Stranger Things and Bridgerton. However, the streaming giant’s vast collection also contains a number of lesser-known gems that audiences have discovered to be surprisingly captivating viewing. With such an extensive selection of series and films available, there are inevitably some brilliant hidden treasures waiting to be found.
A Reddit user recently asked fellow forum members to share the “most underrated” Netflix show they have found.
The user went on to explain that they occasionally stumble upon programmes they’ve never heard of before, only to find themselves enjoying them far more than anticipated.
The streaming fan continued: “I am curious if anyone else has had this experience. What is the most underrated show you have discovered on Netflix that more people should watch?”
The post attracted more than 400 responses, with the top suggestion being the 2019 drama series Unbelievable, starring Kaitlyn Dever, Merritt Wever and Toni Collette.
Netflix’s description of the show reads: “After a young woman is accused of lying about a rape, two female detectives investigate a spate of eerily similar attacks. Inspired by true events.”
It describes the programme as “bittersweet” and “emotional.” The eight-part series boasts an outstanding 98 per cent rating on Rotten Tomatoes alongside an 8.3 out of 10 score on IMDb.
Several Reddit users have expressed their views on Unbelievable. One wrote: “Absolutely incredible show with incredible acting. I’ve watched it multiple times, even though it’s a tough watch.”
Another remarked: “Depressing and amazing at the same time.” A third added: “Love this show. Great characters.”
Responding to the suggestion about Unbelievable, another Reddit user declared: “I was coming to suggest Unbelievable too!”
A second reply states: “True that, [I] started watching this a few days ago and it’s so underrated.”
The programme was jointly created by Susannah Grant, Ayelet Waldman and Michael Chabon.
It draws from the 2015 news article ‘An Unbelievable Story of Rape’ written by T. Christian Miller and Ken Armstrong concerning the Washington and Colorado serial rape incidents between 2008 and 2011.
Unbelievable received a nomination in the International section of the BAFTA TV Awards in 2020 along with numerous other honours, including the Critics Choice Awards, Golden Globes USA and Primetime Emmy Awards.
Toni Collette secured victory at the Critics Choice Awards in 2020 for her role in Unbelievable, claiming the Best Supporting Actress in a Limited Series or Movie Made for Television award.
WASHINGTON — Alana Voechting, a 27-year-old nursing student, had never heard of Klarna when she noticed its bright pink logo while checking out at Sephora.com with $165 in skin care products.
Mounting medical debts from chronic health conditions left Voechting with money problems, so she was thrilled to learn the app would allow her to break the purchase price into four installments over six weeks — with no interest, fees or credit inquiries to ding her already subpar credit score.
“It’s like your brain thinks, ‘Oh, I’m getting this product for cheap,’ because you really only look at that first payment, and after that you kind of forget about it,” she said. “So psychologically, it feels like you’re spending so much less when you’re not.”
Soon Voechting began regularly using not just Klarna but also similar services, including Quadpay and Affirm, to buy makeup, clothing, airline tickets and expensive lounge wear she acknowledged she “would not have purchased otherwise.”
Voechting is one of millions of young Americans with scant or subprime credit histories who are using so-called buy-now-pay-later apps every month.
The smartphone-based services are an updated version of the old layaway plan, except users can do it all on their phones and — most appealingly — get their purchase immediately rather than having to wait until they’ve paid for it.
The companies act as intermediaries between retailers and consumers, making most of their profit by charging merchants 2% to 8% of the purchase price, similar to the retailer fees levied by credit card companies.
The apps are taking off among millennials and Generation Z consumers attracted by the ability to bypass traditional credit cards and still delay payments with no interest.
Retailers such as Macy’s and H&M have jumped to partner with the services, which soared in popularity during the COVID-19 pandemic. Roughly 42% of Americans report using the apps at least once, according to a Credit Karma survey from February.
U.S. regulators are taking a wait-and-see approach, saying they don’t want to stifle a new financial product that could help consumers who might otherwise fall into predatory lending schemes.
But regulators in Europe and Australia, where many of the companies first launched, are increasingly concerned the apps are extending credit irresponsibly.
Using celebrities such as A$AP Rocky and Keke Palmer to portray the services as a hip alternative to the “gotcha” fine print of credit cards, the apps could promote overborrowing in a generation already struggling with high debt and poor credit, consumer advocates warn.
And despite claims that users’ credit ratings won’t be affected and that there are no hidden fees, experts say consumers can still face late charges, overdraft fees and debt collection. Some apps, such as Quadpay, charge a $1 transaction fee on every payment made, regardless of the amount.
“It sounds too good to be true, and it is, in many ways, because there are perils for people who use this,” said Jamie Court, president of Consumer Watchdog.
The apps offer different repayment options, but the most common links to a user’s debit card and makes automatic withdrawals every two weeks. Problems quickly arise when there is not enough money in the account, potentially resulting in charges by both the user’s bank and the app.
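The schedule described above — four equal installments, withdrawn every two weeks starting at purchase — can be sketched as follows. This is an illustrative model, not any app’s actual billing logic; the $165 figure echoes the Sephora purchase mentioned earlier, and the start date is arbitrary.

```python
from datetime import date, timedelta

def pay_in_four(total, start):
    """Split a purchase into four equal installments, the first charged
    at purchase and the rest withdrawn every two weeks thereafter."""
    installment = round(total / 4, 2)
    return [(start + timedelta(weeks=2 * i), installment) for i in range(4)]

# A $165 purchase becomes four payments of $41.25 spread over six weeks.
schedule = pay_in_four(165.00, date(2021, 6, 1))
for due, amount in schedule:
    print(due, amount)
```

Note that the final withdrawal lands six weeks after purchase, which is why a forgotten schedule or an expired debit card, as in the cases described below, can quietly turn into a missed payment.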
Voechting said that for the most part she has been able to control her spending and keep track of when her payments will be withdrawn, a challenge when dealing with multiple purchases and multiple apps.
But this year, she missed a payment with Quadpay on a $120 order from Beautycounter because she failed to change her payment information in the app after receiving a new debit card.
Sixty days later, she was informed the installment would go to collections unless she paid off the full remaining balance of $54, plus a $10 late fee. Voechting promptly gathered the money, fearing more damage to her credit.
Services boast that users’ activity and debt are not regularly reported to major credit bureaus. That’s appealing to consumers under pressure or already cut off from traditional lenders.
But not reporting on-time payments also means that users don’t see their credit scores increase as they demonstrate a track record of responsible borrowing, a crucial hurdle for younger consumers.
And the apps may report missed or late payments for some payment plans, which can hurt users’ credit scores, according to a clause buried deep in terms and conditions agreements for Quadpay, Affirm and Klarna.
The Credit Karma survey found about 38% of buy-now-pay-later customers had missed at least one payment, and 72% of those users reported seeing their credit score drop afterward, though many factors can cause fluctuations.
Buy-now-pay-later users also don’t benefit from many protections applied to credit cards.
For instance, if a credit card company refuses to offer credit to a potential customer, it must disclose why the application was declined. No such rules apply to the apps, which authorize every purchase on a case-by-case basis. That means users have no assurance a transaction will be approved.
“They don’t know what the issue is,” said Angela Hunt, 31, of Hampton, Va., part of a Facebook group devoted to Klarna, in which members frequently complain they are denied approval for purchases in a seemingly random manner.
App users also don’t enjoy the same billing-dispute protections they would with other payment methods, so returning merchandise, resolving fraudulent charges and requesting refunds can be difficult.
In January, Brittany Conn, 30, was moving into a new apartment in Melbourne, Fla., and used Klarna on Wayfair to buy a bed frame, headboard and bookcase for $450.
The bookcase never arrived, so she reached out to Klarna to get a partial refund. Multiple agents promised a supervisor would contact her, but the call never came. When she tried to publicly request help on Klarna’s Facebook page, she said, her comments were deleted.
If Conn had made her purchase with a credit card, the lender would have been forced to respond immediately, launch an investigation and explain its final determination within two billing cycles. During the process, she would be entitled to withhold payment on the disputed amount.
It took Conn, who works in customer service, nearly two months and many emails and online chats to get her money back. She filed a complaint with the Better Business Bureau.
“It was just an uphill battle, just email after email and chat after chat, and it got to a point where my chats weren’t being answered anymore,” she said.
According to the Better Business Bureau, Klarna — the largest buy-now-pay-later app in the U.S. with 15 million customers in 2020 — received 676 complaints in the last 12 months.
Quadpay received 979. Affirm had 227, and Afterpay and Sezzle saw more than 100 complaints each.
By comparison, Discover, a well-established credit card brand with more than 55 million customers, saw 532 complaints with the Better Business Bureau in the same period.
The rise in users — and complaints — has brought more scrutiny to the apps.
Credit card giant Capital One barred its customers worldwide last year from linking its cards to fund buy-now-pay-later purchases, citing the lack of consumer protections.
Class-action lawsuits in California, Connecticut and New York allege plaintiffs suffered from large bank overdraft fees due to automatic withdrawals, undisclosed late fees and deceptive marketing.
Consumer complaints prompted regulators in other countries to crack down. Sweden enacted a law last year that bans online checkout portals from making the apps the default payment option.
Australian financial experts wrote a report in November that found 20% of app users surveyed “cut back on or went without essentials” to make their payments on time. The United Kingdom released a nearly 70-page report in February concluding that “urgent and timely” regulatory changes were needed.
U.S. regulators say they are aware of the services but are exercising caution.
“We’re really interested in use cases of buy-now-pay-later where perhaps a consumer that would otherwise go to a payday lender and pay a very high cost for a loan might be able to use it,” said John McNamara, principal assistant director of markets at the Consumer Financial Protection Bureau.
In July, the CFPB released a blog post titled “Should you buy now and pay later?” warning consumers that the apps can charge late fees, report to credit bureaus and do not offer the same protections as other credit products.
Laura Udis, who manages installment loan programs at the CFPB, said the apps are subject to the Dodd-Frank Act, passed in 2010 after the subprime mortgage crisis to prevent unfair, deceptive and abusive practices by lenders. She said the law “should be flexible enough to apply to any particular credit situation, including new innovations like buy-now-pay-later.”
But the services have found loopholes in regulation.
For instance, the Truth in Lending Act, which requires lenders to disclose the terms and costs of services, states that payment plans of fewer than five installments are not subject to ad disclosure requirements as long as they avoid certain terms.
Consumer advocates say that explains why many apps are structured as four installments. And the companies help merchants avoid terminology that would trigger greater disclosures.
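The loophole described above can be sketched as a simple rule. This is an illustrative simplification of the reporting, not legal advice; the function name and the trigger-term flag are assumptions for the sake of the example:

```python
def tila_ad_disclosure_required(installments: int, uses_trigger_terms: bool) -> bool:
    """Simplified model of the Truth in Lending Act loophole described
    above: advertised plans with fewer than five installments that also
    avoid certain trigger terms escape the ad-disclosure requirement."""
    return installments >= 5 or uses_trigger_terms

# A typical pay-in-four plan with carefully chosen wording avoids disclosures:
print(tila_ad_disclosure_required(4, False))   # False
# A five-installment plan would trigger them:
print(tila_ad_disclosure_required(5, False))   # True
```

This toy rule makes plain why, as consumer advocates note, so many of the apps settle on exactly four installments.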
Affirm offers its merchant partners a guide. Quadpay has a variety of promotions for merchants to download that won’t trigger disclosures.
An advertisement for Afterpay and United Kingdom-based retailer Boohoo at a company-sponsored party. (Caroline McCredie / Getty Images)
An Affirm spokesperson said the company provides information to users at checkout, including disclosures that would be required by the Truth in Lending Act, to ensure customers are informed. A Quadpay spokesperson said the company makes “every effort to help consumers by providing fair, flexible and transparent payment terms.”
Ira Rheingold, executive director of the National Assn. of Consumer Advocates, said it may take time for regulators to sort out how lending laws apply to the services, and whether new ones are needed.
“I think there are different ways that regulators can deal with them,” he said. “And I think that there’s some places where they’ll be far behind and some places where they won’t be.”
Lawmakers show no signs of getting involved. Spokespeople for multiple congressional committees said they were not considering regulating the apps.
California’s regulators are among the few U.S. watchdogs that have taken substantive actions against the services. In 2019, the state’s Department of Business Oversight, now the Department of Financial Protection and Innovation, sued Sezzle, Afterpay, Quadpay and Klarna for making illegal loans.
Each of the companies ultimately settled and had to get licensed, refund fees collected from Californians and pay fines.
“Today, the buy-now-pay-later companies we license in California are required to take into consideration a borrower’s ability to repay the loan and are subject to strict rate and fee caps,” department spokesperson Maria Luisa Cesar said.
As regulators and lawmakers determine how best to keep up with the growth of the apps, their popularity endures. Voechting, Hunt and Conn all said they will continue to use them.
“It’s kind of nice to be able to say, ‘Oh, you know, I can’t afford to buy this right upfront, but I can split it up into four payments and afford it that way,’” Conn said.
Before the apps, Conn would spend weeks saving money for special purchases. The apps allow her to get products immediately.
OpenAI creator Sam Altman and other leaders in artificial intelligence testify before the Senate Commerce, Science, and Transportation Committee on Capitol Hill in May in Washington, D.C. (Anna Rose Layden/UPI)
Aug. 7 (UPI) — OpenAI announced Thursday the release of GPT-5, the latest and most advanced iteration of its artificial intelligence technology, which will be free to users.
The company said the program, which underwent 5,000 hours of testing, is smarter, faster and more useful than the previous model, GPT-4, and will be especially effective at writing, coding and healthcare-related tasks.
OpenAI said it expects to reach 700 million weekly users on ChatGPT this week, and is talking with investors who are discussing a potential stock valuation of $500 billion, CNBC has reported.
While GPT-4 can refuse to answer user questions if they are “potentially risky,” GPT-5 has been designed to provide high-level responses within certain safety restraints so it cannot be used to cause harm, the company said.
“GPT-5 has been trained to recognize when a task can’t be finished, avoid speculation and can explain limitations more clearly, which reduces unsupported claims compared to prior models,” said Michelle Pokrass, a trainer at OpenAI.
GPT-5 marks the first time that users have access to a reasoning feature, which allows the software to “think” or carry out what’s known as an internal chain of thoughts before responding, the company said. The program comes with a usage cap for free users, who will have access to GPT-5 mini if they reach the free limit.
ChatGPT developer OpenAI is ending an experiment that enabled users to index and share their private conversations with the artificial intelligence program. (Wu Hao/EPA-EFE)
Aug. 2 (UPI) — OpenAI is ending the option to have Google and other search engines index user chats with ChatGPT and make the content of those chats discoverable on searches.
Google accounts for more than 89% of all online searches, which made private chats on ChatGPT potentially widely accessible when indexed on that search engine and others.
“This feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option,” Dan Stuckey, OpenAI chief information security officer, told PC Mag.
Bing, DuckDuckGo and other search engines will continue to index discoverable chats, but only for a while longer.
“We’re also working to remove indexed content from the relevant search engines,” Stuckey said.
OpenAI recently enabled the index option for private ChatGPT discussions as an experiment, Stuckey added, but that experiment is ending.
A message informed users their indexed chats were searchable on Google and other search engines, but many users did not read the message or did not understand the extent to which their conversations might be available to others.
Such conversations were accessible by adding “site:chatgpt.com/share” to search queries while those conversations remained indexed.
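As an illustration of how such a query is formed: the `site:` operator is standard search-engine syntax that restricts results to pages under a given path. The share-link prefix used below is the publicly known one for ChatGPT shared conversations and is an assumption here; the helper function is hypothetical:

```python
def indexed_chat_query(keywords: str, site: str = "chatgpt.com/share") -> str:
    # Restrict search results to pages under the given path.
    # "chatgpt.com/share" is the public share-link prefix (assumed here;
    # adjust if the indexed path differs).
    return f"site:{site} {keywords}"

print(indexed_chat_query("job interview"))  # site:chatgpt.com/share job interview
```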
News of the indexed private conversations with ChatGPT was first reported Wednesday by Fast Company in a story detailing Google’s indexing of ChatGPT conversations.
The indexing does not identify individual users, but the conversations might include personal information mentioned by users while conversing with ChatGPT.
Many users also were unaware that sharing a conversation via social apps such as WhatsApp, or saving its URL for future use, could prompt Google to index it and make it potentially available to millions of people.
OpenAI officials recently announced they were appealing a court order requiring the preservation of all chats that users delete after conversing with ChatGPT, Ars Technica reported.
Grindr has implemented a brand new age verification feature to tackle underage use of its app.
On June 25, the company announced that UK-based users will be required to participate in its new “age assurance” initiative starting this month (July).
The decision stemmed from the UK government’s Online Safety Act, which was passed in 2023.
Under the legislation, social media companies and search engines are required to better regulate online safety for both adult and child users by “implementing new systems and processes to reduce risks that their services are used for illegal activity and take down illegal content when it does appear.”
The government website adds: “The strongest protections in the Act have been designed for children. Platforms will be required to prevent children from accessing harmful and age-inappropriate content and provide parents and children with clear and accessible ways to report problems online when they do arise.”
In light of the Online Safety Act, Grindr now requires UK users to confirm that they are 18 or older by uploading a “quick video selfie” or a video selfie paired with a photo ID.
For new users, the one-time action will be prompted during the registration process. Established users will be asked to complete the task upon opening the app.
Users in the UK will be blocked from accessing the app until they complete the process, which uses biometric verification technology from FaceTec.
Grindr has also confirmed that user privacy will be a priority, revealing that all documents and videos will “only be used for age assurance, are securely encrypted during the process, and are permanently deleted once age assurance is complete.”
The only information that will be retained is the age verification method a user selects and whether they passed or failed.
For those looking to upload a photo ID and video selfie, some of the accepted documents include: provisional driver’s license, driver’s license, international driver’s license, passport, British Army ID card, PASS Proof of age card and residence permit.
Lastly, only UK users will be asked to take part in the new “age assurance” feature. However, if a visitor from another country opens the app in the UK, they will be asked to undergo the process.