
Taylor Swift’s new trademark filings aim to protect voice, likeness

Taylor Swift is entering her trademark era.

The global pop star’s company, TAS Rights Management, filed three new trademark applications last week, per the U.S. Patent & Trademark Office. Two of the applications cover soundbites of her voice saying the phrases “Hey, it’s Taylor Swift” and “Hey, it’s Taylor.” The third covers a well-known image of Swift from her recent Eras tour, showing the 36-year-old onstage, holding her pink guitar and dressed in a shimmering bodysuit.

The push to lock down her public image comes at a time when many high-profile celebrities have called for regulations against unauthorized AI-generated content. Matthew McConaughey was one of the first Hollywood A-listers to leverage trademark law as an extra layer of protection.

In January, the “Interstellar” actor secured eight trademarks for his likeness, including images of him smiling and the iconic recording of him saying, “Alright, alright, alright,” from the 1993 movie “Dazed and Confused.”

“My team and I want to know that when my voice or likeness is ever used, it’s because I approved and signed off on it,” the actor told the Wall Street Journal in January. “We want to create a clear perimeter around ownership with consent and attribution the norm in an AI world.”

Registering a trademark for a celebrity’s speaking voice to defend against the prospect of AI-voice generation is a novel legal approach that has not yet been tested in court. Representatives for Swift did not respond to a request for comment on the intent of the recently filed trademarks. But Josh Gerben, one of the first attorneys to report Swift’s latest legal moves, said this is one of the growing gaps in intellectual property protection that AI can exploit.

Before AI infiltrated the internet, musicians, like Swift, would typically rely on copyright law to help prevent the unauthorized use and distribution of their music, while right to publicity laws would protect them from unlawful commercial use of their likeness. But with AI, users can manipulate people’s voices and images to sing or say practically anything.

So if McConaughey has a trademark on his voice saying a phrase, then theoretically any AI-generated voice that sounds similar to it could be considered a violation of that trademark, according to Gerben.

“If they have this trademark protection in place, then the [AI] platforms can’t use that same voice to create new content,” Gerben said. “Every celebrity would essentially have to go and do the same thing, but it’s trying to cut this off at the source as much as possible.”

Variety first reported news of Swift’s trademark filing.

As one of the most popular musicians in the world, Swift has dealt with her share of unauthorized AI-generated content. She was among several female celebrities whose likenesses were used without permission for Meta AI chatbots, which allegedly produced pornographic images. Before the 2024 presidential election, Donald Trump also shared AI-generated images of Swift falsely suggesting that she had endorsed him, including one of her dressed as Uncle Sam with the words, “Taylor wants you to vote for Donald Trump.”

Because Swift is such a recognizable public figure, trademark filings like these aren’t merely defensive but rather a setup for long-term protective infrastructure, said Luke Arrigoni, chief executive of Loti AI, a tech company focused on likeness protection.

“By locking down these trademarks now, she’s ensuring that if a brand wants to use a ‘Swift-like’ AI voice in 2027, they’ll have to go through her authorized gates or face federal trademark infringement,” Arrigoni said in a statement. “She’s essentially putting a price tag on her digital self, and that’s exactly where the entire talent industry needs to go to survive.”


Contributor: Regulate the ‘Enhanced Games’ as a medical experiment and a marketing stunt

It felt like the Olympics. Crowds cheering. The American flag standing tall above the bleachers. Trainers jumping with anticipation. A swimmer staring in disbelief at the clock after his final stroke. The Jumbotron announced: Kristian Gkolomeev — 20.89 seconds. A new world record in the 50-meter freestyle.

Well, kind of.

I’ve left out some details. There was only one swimmer. The crowd? Just doctors, trainers and filmmakers. This was not in an Olympic city nor an Olympic year, but in Greensboro, N.C., in 2025. And there were no iconic rings on the banners, just “Enhanced Games.”

Yes, Gkolomeev swam faster than César Cielo, the official record holder at the time (20.91 seconds). But he did it “enhanced” — a polite way to say that he used performance-enhancing drugs. At the Enhanced Games, doping isn’t punished. It’s required.

The concept, as described by the organization: “to create the definitive scientific, cultural and sporting movement that safely evolves mankind into a new superhumanity.”

Backed by investors such as Peter Thiel and Donald Trump Jr.’s 1789 Capital, the Enhanced Games embodies a techno-utopian ideal: athletes as canvases for chemical optimization, testing the limits of human health for a lot of money. Gkolomeev earned $1 million for his record.

So far, the competition has happened at one-off pop-up events. But in May, Las Vegas will host the first full-scale Enhanced Games, a four-day meet in swimming, track and field, and weightlifting. The group advertises a “potential prize purse of $7.5 million for just a single day of competition,” plus appearance fees.

Does it need to be said? Apparently yes: The Enhanced Games glorifies the risky use of enhancement drugs.

Steroids can harden arteries, elevate stroke risk, damage the liver and permanently alter hormone systems. They are not electrolyte tablets or a little preworkout creatine. If Lance Armstrong had been rewarded — rather than sanctioned — for doping, what would have happened to competitive cycling?

Fans — and especially kids — mimic their idols. As risky as the drugs are for athletes at the Enhanced Games, whose “medical commission” gives an illusion of safety, the substances are even more dangerous when used by people without medical supervision.

The games also expose the economic neglect that drives athletes toward such competition. As Benjamin Proud, the British silver medalist who recently joined the Enhanced Games, put it: “It would have taken me 13 years of winning a World Championship title in order to win what I could win in one race at these games.”

Indeed, the Enhanced Games might look like an easy way out. Only nine swimmers worldwide received prize money and performance bonuses above $75,000 in 2025, according to World Aquatics.

Investors clearly hope to make money off the games as well. The organization is moving closer to becoming a publicly traded company. The economics are not mysterious.

But the Enhanced Games are not just another sporting event. They are an arena for biomedical experimentation and should be regulated as such. The games should face limits similar to those imposed on other high-risk industries, including age restrictions and strict advertising rules.

We already know how to govern legal, profitable activities that carry serious health risks.

In the United States, that means oversight from the Food and Drug Administration and the Federal Trade Commission — bodies that regulate drug protocols and police misleading commercial claims. A steroid-based competition should not be treated as a sport but as a medical experiment and a marketing stunt.

Regulations on pharmaceutical advertising offer a useful model for the Enhanced Games. Prescription drugs are advertised every night on television, but only under strict rules. They require fair balance (content must present benefits and risks with comparable prominence, readability and duration) and a “major statement” of risks (most serious risks must be spoken aloud and not obscured by visuals or music).

Right now, when you play Gkolomeev’s “world-record” video on YouTube, a medical-risk warning appears for barely five seconds — then vanishes. If a cholesterol drug must audibly warn viewers of stroke risk, why shouldn’t a steroid-based competition do the same?

Enhanced Games content should be accompanied by clear warnings of the risks of performance-enhancing drugs and be clearly labeled, age-gated and distributed as high-risk content more akin to pornography than to a boxing match.

Prohibition is not the answer. Trying to shut down these games only fuels a controversy-driven brand. Just recently, the Enhanced Games sued organizations such as World Aquatics and the World Anti-Doping Agency, alleging antitrust violations and arguing that blocking athletes from participating in the Enhanced Games is illegal. As those organizations fight back, they will be seeking to protect the integrity of mainstream sports, but they will also inadvertently be promoting the Enhanced Games.

If we want kids to admire clean athletes rather than those using banned drugs, the Las Vegas launch must not reach the world as a Super Bowl would. The Enhanced Games should not be televised or allowed to stream online to minors. Otherwise, Las Vegas, in May, risks becoming an unregulated public-health experiment mislabeled as a sporting event.

Fabricio Ramos dos Santos is a lawyer, entrepreneur and sports investor.


How to Detect AI-Generated Content in 2026: Tools & Methods

In a year when large language models write press releases, student papers, and even peer-reviewed articles at the press of a button, guesswork is not an option teachers, editors, and grant reviewers can afford. They need reliable methods for determining whether the page in front of them was written by a human being or generated by an algorithm. The boundary is blurrier than ever: modern text generators not only imitate idiosyncratic diction, they also cite sources and sprinkle their output with the rhetorical flourishes that were traditionally the bane of automation. But fingerprints remain, and a rigorous check-up can reveal them.

Why Detection Matters in 2026

Rapid improvements in transformer efficiency have turned generative writing from a novelty into infrastructure. Bots draft corporate knowledge bases, marketing newsletters, and institutional reports, which humans then lightly edit. In academia, this automation threatens standards of originality; in journalism, it can undermine credibility; for educators, it risks eroding learning outcomes when essays are outsourced to silicon.

European Union legislators and some U.S. states now mandate AI disclosure on government-funded projects, and major journals are requesting provenance statements in the same vein as conflict-of-interest disclosures. But disclosure rules only work if they can be enforced, and enforcement depends on detection. Failing to check authorship can open the door to plagiarism lawsuits, damage reputations, or let algorithmic fake news creep into print. Proper screening therefore safeguards both integrity and liability, keeping human merit and machine assistance honorably separated.

Key Linguistic Signals Still Holding Up

Long before you open a dedicated detector, close reading can raise red flags. AI prose often exhibits low burstiness (sentence lengths fluctuate within narrow bands) and high lexical predictability, especially in mid-length passages. Repeated use of transitional adverbs such as “moreover,” “furthermore,” and “overall” in rhythmic sequences is another giveaway. Similarly, large models smooth out idiosyncratic contractions, turning informal drafts into formally homogenized copy. When a reviewer suspects such fingerprints, a quick trip to Smodin to check whether text is AI-generated offers an immediate probability score without exporting the manuscript. Still, numbers alone are insufficient; the linguistic context of the assignment, the native proficiency of the writer, and genre conventions must frame interpretation.
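These close-reading signals can be approximated in a few lines of code. The sketch below is a toy heuristic, not the method any commercial detector actually uses; the transition-word list is a hypothetical sample chosen for illustration.

```python
import re
from statistics import mean, pstdev

# Hypothetical transition-word list; real detectors learn such cues from data.
TRANSITIONS = {"moreover", "furthermore", "overall", "additionally", "consequently"}

def red_flags(text: str) -> dict:
    """Crude close-reading signals: sentence-length spread (a burstiness
    proxy) and transitional-adverb density."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(w in TRANSITIONS for w in words)
    return {
        "mean_sentence_len": round(mean(lengths), 1),
        "len_stdev": round(pstdev(lengths), 1),  # narrow spread = low burstiness
        "transitions_per_100_words": round(100 * hits / max(len(words), 1), 2),
    }

sample = ("Moreover, the results are clear. Furthermore, the data is robust. "
          "Overall, the method works well.")
print(red_flags(sample))  # uniform sentence lengths and dense transitions: classic flags
```

Scores like these only gesture at a problem; as the paragraph above notes, genre and writer proficiency must frame any interpretation.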

Burstiness versus Perplexity: What the Metrics Really Say

Two metrics dominate current detector dashboards. Perplexity gauges how surprised a language model is by the next token in a sentence; lower perplexity usually signals machine-like predictability. Burstiness, borrowed from information theory, measures variation across consecutive sentences or paragraphs. Human writers inadvertently mix terse observations with longer reflections, creating uneven cadence, whereas AI output remains uncannily even. Detectors from OpenAI, Turnitin, and Sapling combine both numbers in a heat-map interface, but analysts should understand their limits. An expert human editor deliberately smoothing tone for readability will lower burstiness and perplexity, triggering false flags. Conversely, a basic paraphrase of AI text can raise both metrics, slipping past simple thresholds. Treat these scores as starting points, not verdicts.
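To make the two metrics concrete, the sketch below computes burstiness as the coefficient of variation of sentence lengths and stands in for perplexity with an add-one-smoothed unigram model fit on a reference text. Real detectors use full language models, so treat this purely as a conceptual toy.

```python
import math
import re
from collections import Counter
from statistics import mean, pstdev

def tokenize(text: str) -> list:
    return re.findall(r"[a-z']+", text.lower())

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths; human writing tends to
    score higher because its cadence is uneven."""
    lengths = [len(s.split()) for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    return pstdev(lengths) / mean(lengths) if lengths else 0.0

def unigram_perplexity(candidate: str, reference: str) -> float:
    """Toy stand-in for LLM perplexity: an add-one-smoothed unigram model
    fit on `reference`, evaluated on `candidate`. Lower = more predictable."""
    ref = Counter(tokenize(reference))
    total, vocab = sum(ref.values()), len(ref)
    tokens = tokenize(candidate)
    log_prob = 0.0
    for t in tokens:
        log_prob += math.log((ref[t] + 1) / (total + vocab + 1))
    return math.exp(-log_prob / max(len(tokens), 1))
```

In this toy model, text drawn from the reference vocabulary scores lower perplexity than out-of-vocabulary text, mirroring how detectors read predictability; the same caveats about false flags apply many times over to a model this crude.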

The past year brought consolidation to the detection market. In place of dozens of browser extensions of questionable provenance, five professional platforms have become dominant: Smodin, GPTZero-Pro, Turnitin AI Indicator, Copyleaks, and the free DetectGPT-X consortium. Each is trained on its own corpus, so when they agree, the agreement is persuasive. GPTZero-Pro excels at sentence-level labeling and offers a classroom API.

Turnitin integrates with learning-management systems but remains English-centric. Copyleaks can analyze code snippets as well as prose and is popular in computer-science courses. Smodin emphasizes breadth and raw speed, scoring a thousand-word manuscript in under five seconds. Comparative reviews, such as Quillbot vs Grammarly vs Smodin, show that no single tool prevails in every context. Experienced editors therefore run suspect passages through at least two detectors before escalating to human forensic analysis.

Layered Verification Workflow

Professional reviewers in 2026 rarely trust an automated score in isolation. A common three-layer pipeline balances speed and accuracy.

  • First, bulk ingestion: run every incoming document through a fast detector with a liberal threshold – say, flag anything above 35% probability.
  • Second, targeted analysis: export only the flagged segments into a slower, sentence-granular model for localized scoring; Copyleaks or Smodin excel here.
  • Third, manual audit: a subject-matter expert reads the highlighted sentences aloud, listening for tonal monotony and checking citations against primary sources.

The layered approach makes the most of reviewer time by spending human effort only where algorithmic consensus already signals risk. Crucially, every step is logged, satisfying the audit requirements now mandated by several accreditation bodies.
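The three-layer pipeline could be wired together along these lines. The detector functions are stubs standing in for commercial APIs (whose real interfaces are out of scope here), and the 0.35 cutoff mirrors the liberal first-pass threshold described above.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    doc_id: str
    fast_score: float                  # layer 1: bulk probability score
    flagged_sentences: list = field(default_factory=list)  # layer 2 output
    needs_human_audit: bool = False    # layer 3 escalation flag

FLAG_THRESHOLD = 0.35  # liberal first-pass cutoff, per the workflow above

def triage(doc_id: str, text: str, fast_detector, granular_detector) -> Review:
    """Run the three-layer pipeline; detectors are caller-supplied callables
    returning an AI-probability in [0, 1]."""
    review = Review(doc_id, fast_score=fast_detector(text))
    if review.fast_score > FLAG_THRESHOLD:
        # Layer 2: sentence-granular rescoring of flagged documents only.
        review.flagged_sentences = [
            s for s in text.split(". ") if granular_detector(s) > 0.5
        ]
        # Layer 3: escalate to a human only when both layers agree.
        review.needs_human_audit = bool(review.flagged_sentences)
    return review

# Example with stub detectors standing in for real APIs:
report = triage("essay-42", "First claim. Second claim.",
                fast_detector=lambda t: 0.6, granular_detector=lambda s: 0.9)
print(report.needs_human_audit)  # both layers agree, so True: escalate
```

Keeping the `Review` objects around (rather than bare scores) is what makes the logging requirement in the paragraph above easy to satisfy.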

Beyond Algorithms: Human Tactics That Still Work

Even the most advanced detector cannot match the contextual instincts of an experienced reviewer. In classroom essays, a spontaneous oral defense is as effective as ever: ask a student to explain a paragraph he or she allegedly composed, and the discrepancies surface quickly. In journalism, cross-interviewing quoted sources frequently reveals whether the author actually spoke with them or merely lifted publicly available transcripts; AI cannot sustain personal anecdotes with the same level of detail under follow-up questioning.

Grant reviewers rely on revision history: real writers accumulate untidy drafts, comments, and time-stamped edits, whereas AI-written submissions tend to arrive as a single clean file. Another reliable avenue is stylometric comparison against a known, verified sample of the author's work; identity footprints such as infrequent collocations or recurrent metaphors are remarkably stable over time. Notably, human checks produce explanatory accounts, which probability scores do not, helping institutions justify decisions if they are challenged.
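A minimal stylometric comparison can be sketched as a cosine similarity over function-word frequencies. The word list below is a hypothetical sample; production stylometry uses hundreds of features, including the collocations and metaphors mentioned above.

```python
import math
import re
from collections import Counter

# Hypothetical function-word list; real stylometric studies use far larger sets.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def profile(text: str) -> list:
    """Relative frequency of each function word: a crude authorial fingerprint."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    n = max(len(words), 1)
    return [counts[w] / n for w in FUNCTION_WORDS]

def cosine(a, b) -> float:
    """Cosine similarity between two frequency profiles; near 1.0 suggests
    a consistent fingerprint, near 0.0 a mismatch."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

In practice the suspect passage would be compared against a verified corpus of the author's prior writing, and the similarity read alongside, not instead of, the detector scores.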

Today, the only dependable way to distinguish silicon from soul is to combine statistical detectors with active human inquiry.

One last note: the AI detectors themselves change every month. When recording a score, always note the model version and calibration date used, since thresholds shift as generators improve. Keep the raw text you tested, the detector output, and the human commentary. This audit trail future-proofs your process and allows a decision to be replicated, the foundation of transparent scholarship and review in the classroom, newsroom, and laboratory.
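One minimal way to keep such an audit trail is a structured JSON record per decision. The field names below are illustrative, not a mandated schema; the point is that version, calibration date, input, output, and commentary travel together.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(raw_text: str, detector: str, model_version: str,
                 calibration_date: str, score: float, reviewer_note: str) -> str:
    """Serialize one detection decision so it can be replicated later.
    Field names are illustrative, not a mandated schema."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "raw_text": raw_text,
        # A hash lets you verify the stored text was not altered after the fact.
        "text_sha256": hashlib.sha256(raw_text.encode()).hexdigest(),
        "detector": detector,
        "model_version": model_version,
        "calibration_date": calibration_date,
        "score": score,
        "reviewer_note": reviewer_note,
    }
    return json.dumps(record, indent=2)
```

Appending each record to a dated log file is enough for most review workflows; the schema matters less than never emitting a score without its provenance.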
