Science and Technology

The new boss at work may not be human | Technology

A year ago, engineers at Snowflake, the American cloud-based data platform, still spent part of their day on routine tasks – such as scanning dashboards to ensure systems were running smoothly and chasing colleagues for data to complete trend analyses.

Now, says Qaiser Habib, the company’s Toronto-based head of Canada engineering, AI agents handle much of that groundwork, allowing engineers to focus on higher-level decisions.


Habib spends 20 to 30 hours a week interacting with five AI agents. Snowflake has built agents to review product designs and to assist on-call engineers during outages and incidents, among other uses. He estimates the average engineer works with three or four agents daily, using them to carry out coding projects under human supervision.

“You don’t have to bother a human for basic questions any more,” Habib said, noting that he still collaborates with colleagues on more complex work, such as troubleshooting coding problems.

As companies experiment with AI agents – systems designed to plan, reason and carry out multistep tasks – the technology is beginning to reshape office hierarchies across the United States and Canada. Unlike chatbots, which respond to prompts, AI agents can adapt to changing contexts such as business goals and draw on reference tools including calendars, meeting transcripts and internal databases, to complete work with limited human oversight.

In some workplaces, AI systems are not just completing tasks but also assigning them to human workers. As the technology improves, AI agents are also beginning to manage each other. One agent might generate code, for example, while another reviews it for errors and fixes bugs before a human signs off on the final version.

These agent-to-agent workflows can help companies scale faster. But they also intensify concerns that AI is moving beyond assistance into supervision – and potentially, job replacement.

The leaner office

Anthropic recently expanded access to its cowork agents, allowing users without technical expertise to grant Claude – its AI assistant – permission to access specific folders on their computers so it can read, edit, create and organise files autonomously.

The growing use of AI agents is transforming how organisations function around the world, even in companies that aren’t focused on building technology products. For example, some companies are using AI tools to track performance, recommend promotions and role changes, and even identify roles for elimination.

The shift comes as white-collar jobs continue to disappear, particularly in the US. A slew of US employers have announced mass layoffs, mostly affecting entry-level and middle-management workers, and executives have pointed to automation and AI-driven efficiency as part of the rationale. When Amazon said in October that it planned to eliminate about 14,000 jobs, executives cited AI’s potential to help the company operate with fewer layers and greater efficiency. UPS, Target and General Motors also announced deep cuts last year, and this January saw more layoffs than any January in the US since 2009. Several more companies, including Pinterest and HP, continued to cite AI initiatives as part of the reason.

Goldman Sachs has estimated that 6 to 7 percent of US workers could lose their jobs due to AI adoption, with higher risks for computer programmers, accountants, auditors, legal and administrative assistants, and customer service representatives. Overall employment effects, the bank said in August, may be “relatively temporary” as new roles emerge.

Middle management squeezed

Early predictions suggested AI would mainly replace entry-level technical jobs, and some experts tie recent high unemployment rates for new graduates to AI adoption. But the bigger disruption, said Roger Kirkness, founder of AI software firm Convictional in Toronto, is occurring in middle management.

His company’s tools translate executive strategy into operational tasks – a role once handled by supervisors – delivering daily assignments and feedback to employees through a user-friendly inbox interface.

In companies of more than 50 people, “where CEOs can’t speak with each manager, our platform continually surfaces the context that the organisation has that is relevant to leadership decision-making”, Kirkness told Al Jazeera.

This doesn’t mean humans have become irrelevant. But there is growing pressure to reskill, and those who thrive in strategic thinking are better-positioned to adapt to AI-integrated work environments, Kirkness said.

“People are basically becoming managers of their prior jobs,” he said, because AI is now able to perform many of the tasks that previously fell within their roles. Instead of completing tasks such as coding or designing marketing assets, humans are focusing on higher-level strategy while monitoring AI systems, he added.

However, recent research indicates that job cuts reflect companies’ anticipation of AI’s potential, rather than its current ability to replace human workers fully.

A December Harvard Business Review survey of 1,006 global executives found that while AI has played little direct role in replacing workers so far, many companies have already cut jobs or slowed hiring in anticipation of its promised impact.

Most CEOs say they’re still waiting on AI’s payoff: 56 percent report no revenue or cost benefits so far, according to consulting firm PwC’s latest Global CEO Survey of 4,454 executives across 95 countries and territories.

Trust and control

Stefano Puntoni, a behavioural scientist at the University of Pennsylvania’s Wharton School, has found that AI usage is also already affecting workplace communication habits. His research shows employees are often more willing to delegate tasks to AI than to colleagues, which can help to reduce burnout. “There’s no social cost,” he said. “You don’t worry about burdening an AI.”

Still, Puntoni argues the biggest barrier to adoption is psychological, not technical. Even effective systems can fail if workers do not trust them. Generative AI, he said, can threaten employees’ sense of competence, autonomy and connection.

“If workers feel threatened, they may want the system to fail,” Puntoni said. “At scale, that guarantees failure.”

In other words, deploying AI primarily as a cost-cutting tool can backfire. Layoffs framed as efficiency gains may reduce cooperation and limit the productivity benefits companies hope to unlock with technology, Puntoni said.

Trust, Kirkness agreed, is the real constraint. To build staff confidence in the tools it sells – and to avoid layoffs – Convictional adopted a four-day workweek, framing it as a way to share AI-driven productivity gains with employees.

“Mass layoffs in the name of automation destroy trust,” he said.

The human premium

In the US, lawsuits have begun to challenge AI-driven corporate decisions, particularly in areas such as insurance claim denials and alleged AI-enabled hiring discrimination.

Some experts warn that as AI systems become more autonomous, humans risk losing meaningful oversight – and that these agents themselves could become targets for cyberattacks. Yet regulation has struggled to keep pace with innovation. Neither the US nor Canada has clearly defined rules governing AI agents.

Business leaders are testing which functions can be automated and which still require sustained human involvement. For some workers, that uncertainty has become a source of unease.

One employee at a multinational firm, who is based in Vancouver, said she sometimes wonders whether the online “coach” used to support employee development is an AI system or a human relying so heavily on AI tools that the distinction has blurred. She requested anonymity because of concerns about professional repercussions.

Some organisations are setting boundaries. New Ground Wellness, a Canadian clinical counselling and wellness firm, uses AI tools such as chatbots in its daily operations, but recently declined a 20,000 Canadian dollar ($14,600) proposal for an agentic AI intake system that would match therapists with clients.

After receiving feedback from callers, the company concluded that the efficiency gains would not outweigh potential damage to trust. Their decision also reflects multiple surveys showing a strong preference among Western consumers for human customer service workers.

“We are open to revisiting AI systems in the future,” said New Ground Wellness cofounder Lucinda Bibbs, “but at this stage, preserving human connections remains our highest priority.”


Blood tech: UK’s use of Israeli spyware that helps underpin a genocide | Israel-Palestine conflict News

The United Kingdom’s government is investing in spyware developed and tested on Palestinians in Gaza and the occupied West Bank despite its public criticism of Israeli action there.

Corsight facial recognition technology has been used to track, trace and detain thousands of Palestinian civilians passing through checkpoints in Gaza and the West Bank. Beyond that system, the UK government has disregarded its own public concerns over Israel’s war on Gaza and de facto annexation of the West Bank and has purchased spyware from at least two other Israeli-linked manufacturers: Cellebrite and BriefCam.


Cellebrite

Cellebrite is an Israeli company closely linked to that country’s military. It has developed software that can bypass passwords and security protocols on smartphones and computers and access data from them.

That software has been used extensively by the Israeli military on Palestinians across Gaza and the West Bank, including to harvest data from the phones of thousands of detained Palestinians, many of whom have been subjected to systematic torture, a report by the American Friends Service Committee said.

Cellebrite is also reported to have received support from the United States Department of Defense to work on technology designed to map underground tunnels in the Gaza Strip.

Despite its stated public concerns over Israeli action in Gaza and the West Bank, records show the UK has entered into several agreements to take advantage of the technology used by Israel in Palestinian territory.

According to public records, a number of UK police forces have purchased access to Cellebrite software, including the City of London Police, which renewed its one-year contract with the Israeli company for more than 95,000 pounds ($128,600) in June. Leicestershire Police also renewed its contract with the Israeli spyware company in March for 328,688 pounds ($445,300). The British Transport Police, the UK’s Serious Fraud Office, Kent and Essex police, and Northumbria Police have also entered into contracts with Cellebrite.

Inquiries from Al Jazeera to the UK Home Office, Home Secretary Shabana Mahmood and the UK Police’s commercial agent, Blue Light Services, have all gone unanswered.

However, while declining to comment on “specific customer relationships or contracts”, Victor Cooper, Cellebrite’s senior director of corporate communication, rejected the characterisation of the company’s activities as “hacking”, instead saying, “Cellebrite’s solutions are forensic tools used in legally sanctioned investigations and require physical possession of the device. They do not enable remote access.”

Rights groups have raised concerns over Cellebrite exporting its technology to hardline states worldwide, including Myanmar, Serbia and Belarus, where it has been used to extract information from the phones of opposition figures, journalists and activists.

BriefCam

The Israeli-founded company BriefCam, which was acquired by Canon in 2018 and then by the Danish company Milestone Systems last year, has been providing the UK’s Cumbria Police with surveillance software since at least 2022.

A further disclosure by Police Scotland in June confirmed that Scotland’s police service is also considering adopting the software.

BriefCam was founded in 2007 by Shmuel Peleg, Gideon Ben-Zvi and Yaron Caspi based on technology developed at Israel’s Hebrew University.

The company provides video synopsis programmes to law enforcement agencies, governments and companies. Police forces and private firms can use BriefCam’s Protect & Insights platform to sift through and condense hours of CCTV and home-surveillance footage, making it easily searchable.

The system includes facial-recognition and licence-plate search tools and allows police to build “watch lists” of specific faces or vehicle plates.

The technology has been used in East Jerusalem, Palestinian territory illegally occupied by Israel.

According to undated files accessed by the research centre Who Profits, a tender document published by the Israeli Ministry of Housing and Construction inviting companies to bid for maintenance contracts for 98 security systems within East Jerusalem specified that the successful bidder must be able to maintain BriefCam’s software. Israeli public records also show that in 2021, Israeli police committed to a contract valued at $1m for BriefCam’s video analysis systems.

A May 2023 report by the rights group Amnesty International documented how surveillance technology, such as that provided by BriefCam, was instrumental in maintaining Israel’s subjugation of Palestinians.

According to the report, the use of surveillance software is critical in maintaining the “continued domination and oppression of Palestinians … [w]ith a record of discriminatory and inhuman acts that maintain a system of apartheid”.

While not mentioning BriefCam by name, the report continued: “The Israeli authorities are able to use facial recognition software – in particular at checkpoints – to consolidate existing practices of discriminatory policing, segregation, and curbing freedom of movement, violating Palestinians’ basic rights.”

According to the company, the software can also filter footage by a wide range of characteristics, including gender, age group, clothing, movement patterns and time spent in a given location.

And that, despite the technology’s links to the oppression of Palestinians, is what makes it attractive to UK police forces.

Cumbria Police has said it does not currently use the facial recognition capabilities of BriefCam’s technology.

A spokesperson for Cumbria Police also clarified that the force has been using BriefCam for “several years” and, before introducing the technology, it had “consulted Cumbria’s independent Ethics and Integrity Panel and Strategic Independent Advisory Group”.

A request for a copy of those findings went unanswered.

Police officers are deployed in occupied East Jerusalem, where, records show, technology supplied to the UK has been used extensively [File: Atef Safadi/EPA]

Corsight

As previously reported by Al Jazeera, the Israeli company Corsight, through a subcontract with UK company Digital Barriers, has also been selected by the UK Home Office to play a key role in its expansion of facial recognition vans.

In March 2024, long before the UK government chose to include Corsight in its rollout of facial recognition technology, The New York Times revealed that members of the Israeli military had voiced objections to the use of Corsight’s facial-recognition technology in Gaza by Unit 8200, Israel’s cyberintelligence branch.

The expansion of systems such as those marketed by Corsight, Cellebrite and BriefCam is part of a global trade in Israeli spyware, developed and refined through prolonged surveillance of Palestinians, that is now being exported worldwide.

Rights groups warned that techniques pioneered in Israel are being used by governments to target activists, journalists and political opponents as concerns deepen over the spread of unregulated cyberwarfare tools.

“The government and police should not be awarding contracts to Israeli spyware firms under any circumstances,” Palestine Solidarity Campaign Deputy Director Ryvka Barnard told Al Jazeera. “These companies develop and test their products through Israel’s regime of military occupation and apartheid against Palestinians. It is unacceptable for public money to be given to these companies, allowing them to profit from and develop new products used to surveil and harm Palestinians.”


The AI alarm cycle: Lots of talk, little action | Science and Technology

What is the point of AI alarmism if the people warning the world aren’t changing course?

A series of warnings from artificial intelligence (AI) industry insiders shows how the debate around AI drives extreme news cycles, swinging between hype and alarm.

The result is media coverage that overlooks the intricacies of this technology and its impact on everyday life. We examine the real risks, what’s being overstated, and what major tech companies stand to gain from all the fearmongering.

Contributors:
Rumman Chowdhury – Founder, Humane Intelligence
Mutale Nkonde – Founder, AI for the People
Chris Stokel-Walker – Author of the book How AI Ate the World

On our radar

Francesca Albanese, the United Nations’ special rapporteur for the occupied Palestinian territories, has found herself in the middle of a political firestorm, all because of comments she never actually made.

After a manipulated video of Albanese calling Israel the “enemy of humanity” went viral, there were calls for her resignation across Europe. The UN has described the video as part of a wider smear campaign to discredit Albanese and the UN itself.

A curated view of Israel for African journalists

For decades, the Israeli government and pro-Israel organisations have sponsored trips for politicians, celebrities and journalists, inviting them to visit and, as they put it, tell the “real story” about Israel. In the past, those invitations largely targeted figures from Western countries. But since October 7, 2023, there has been a noticeable shift towards African journalists and social media influencers.

The objective is clear: to shape coverage in African media and influence audiences across the continent.

The Listening Post’s Nic Muirhead reports on the African journalists taking part in these fully funded, tightly managed visits to Israel.

Featuring:
Hassen Lorgat – Media critic and activist
Makhudu Sefara – Editor, Sunday Times
Timnit Gebru – Founder, the Distributed AI Research Institute (DAIR)
Njahira Gitahi – Reporter, The Standard


NASA rules out March launch for manned moon mission over technical issues | Space News

Artemis 2 is a precursor to the US space agency’s planned astronaut moon landing with Artemis 3, scheduled for 2028.

NASA chief Jared Isaacman says Artemis 2 – the first crewed flyby mission to the moon in more than 50 years – will not launch next month because of technical problems.

Workers detected an issue with helium flow to the massive Space Launch System (SLS) rocket that will “take the March launch window out of consideration”, Isaacman said in a post on social media on Saturday.


A steady flow of helium is essential for purging the rocket’s engines and pressurising its fuel tanks.

“I understand people are disappointed by this development. That disappointment is felt most by the team at NASA who have been working tirelessly to prepare for this great endeavor,” Isaacman said.

NASA’s next opportunity for the launch would be at the beginning or end of April.

The US space agency hopes to put humans back on the moon as China forges ahead with a rival effort that is targeting 2030 at the latest for its first crewed mission.

China’s uncrewed Chang’e 7 mission is expected to be launched in 2026 for an exploration of the moon’s south pole, and testing of its crewed spacecraft Mengzhou is also set to go ahead this year.

Multiple postponements

NASA surprised many late last year when it said Artemis 2 could happen as soon as February – an acceleration reflecting the wish of US President Donald Trump’s administration to beat China to the punch.

But the programme has been plagued by delays. The uncrewed Artemis 1 mission took place in November 2022 after multiple postponements and two failed launch attempts.

Then, technical problems in early February – which included a liquid hydrogen leak – cut short a so-called wet dress rehearsal for the Artemis 2 launch. That was finally completed earlier this week.

The wet dress rehearsal was conducted under real conditions – with full rocket tanks and technical checks – at Cape Canaveral in Florida, with engineers practising the manoeuvres needed to carry out an actual launch.

The space agency revealed the latest technical problem just one day after targeting March 6 for the launch of the Artemis 2 mission.

The towering SLS rocket and Orion spacecraft will be rolled back into the Vehicle Assembly Building at the Kennedy Space Center in Florida to investigate the technical issues and make any necessary repairs, Isaacman said. He said a bad filter, valve, or connection plate could be to blame for the stalled helium flow.

Isaacman added that a complete briefing will follow in the coming days.

The goal of the Artemis 2 mission, a 10-day flight around the moon and back, is to “explore the Moon for scientific discovery, economic benefits, and to build the foundation for the first crewed missions to Mars”, according to NASA.

The planned Artemis 2 crew includes three US astronauts – Reid Wiseman, Victor Glover, and Christina Koch – and Canadian astronaut Jeremy Hansen. The mission is poised to be the farthest human flight into space ever, and the first crewed moon mission since the US Apollo programme more than half a century ago.

Artemis 2 is a precursor to NASA’s planned astronaut moon landing with Artemis 3, which is scheduled for 2028.


Visualising AI spending: How does it compare with history’s mega projects? | Technology News

Spending on AI is forecast to skyrocket to $2.5 trillion in 2026, dwarfing even the largest scientific and infrastructure projects.

World leaders and tech executives are convening in New Delhi for the India-AI Impact Summit 2026, focusing on the role of artificial intelligence in governance, job disruption and global collaboration.

However, behind these discussions lies the financial reality. Over the past decade, AI has drawn one of the largest waves of private investment in modern history, totalling trillions of dollars.

According to Gartner, a United States-based business and technology insights company, worldwide spending on AI is forecast to total $2.5 trillion in 2026, a 44 percent increase over 2025.

To understand the magnitude of these investments, Al Jazeera visualises the staggering amounts by comparing them with some of the largest projects ever created by humanity. We also highlight which countries are spending the most on AI and provide insights into expenditures on data centres, models, services, and security.

What does $1bn look like?

To help understand a trillion dollars, it is useful to first visualise what millions and billions of dollars look like by using a stack of US dollar bills.

If you break these amounts down using $100 bills, here is how they stack up:

  • $1,000 would form a stack about 1mm (0.04 inches) high.
  • $10,000 would form a stack approximately 1cm (0.4 inches) high.
  • $1m would fit inside a briefcase.
  • $10m would fit inside a very large suitcase.
  • $100m would fit on an industrial pallet stacked waist-high.
  • $1bn would create a building approximately 5.2 metres (17 feet) high, with a width and a length of about 2 metres (6.6 feet) each.


Another way to think of it: if you spent $1 every second, it would take:

  • 11.5 days to spend $1m
  • 31 years to spend $1bn
  • 31,000 years to spend $1 trillion
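The arithmetic behind these figures is easy to verify. A minimal Python sketch, with the $1-per-second rate as the only assumption (the bullets above round the exact values of roughly 11.6 days, 31.7 years and 31,700 years):

```python
# How long would it take to spend each amount at $1 per second?
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

amounts = {
    "$1m": 10**6,
    "$1bn": 10**9,
    "$1 trillion": 10**12,
}

for label, dollars in amounts.items():
    years = dollars / SECONDS_PER_YEAR
    if years < 1:
        print(f"{label}: about {dollars / SECONDS_PER_DAY:.1f} days")
    else:
        print(f"{label}: about {years:,.0f} years")
```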

In more tangible terms, $1bn is roughly equivalent to:

  • The estimated cost of the Grand Egyptian Museum in Giza, one of the largest archaeological museums in the world
  • The cost of constructing two to three modern football stadiums, depending on size and design
  • Buying 10 luxury private jets (at $100m each)
  • Buying 6.3 tonnes of gold (at $5,000 per ounce)
  • Buying 1 million high-end iPhones at retail price

$1.6 trillion already spent on AI

Over the past decade, AI-related investments have surged nearly 13-fold.

According to the 2025 AI Index Report by Stanford University, between 2013 and 2024, total global corporate investment in AI reached $1.6 trillion. This substantial expenditure dwarfs even the largest scientific and infrastructure projects of the 20th and 21st centuries.

To put the scale of AI investment into perspective, consider how it compares with some of the most ambitious and expensive projects in modern history. All figures are adjusted to 2024 US dollars:

  • The Manhattan Project (1942-46): $36bn
  • The International Space Station (1984-2011): $150bn
  • The Apollo Program (1960-73): $250bn
  • The US Interstate Highway System (1956-92): $620bn


In just over a decade, investment in AI has surpassed the cost of developing the first atomic bomb, landing humans on the moon and the decades-long effort to build the 75,440km (46,876-mile) US interstate highway network.

Unlike these landmark projects, AI funding has not been driven by a single government or wartime urgency. It has flowed through private markets, venture capital, corporate research and development, and global investors, making it one of the largest privately financed technological waves in history.

Global corporate investments in AI cover a vast array of operations, including mergers and acquisitions, minority stakes, private investments, and public offerings. These monumental expenditures highlight the extensive financial commitment to advance AI.

Which countries are spending the most on AI?

The AI investment surge is concentrated in just a few countries, where private capital has fuelled thousands of startups and shaped global innovation hubs.

The US has dominated AI spending, accounting for roughly 62 percent of total private AI funding since 2013. Between 2013 and 2024, US companies spent $471bn on AI. Chinese companies are the second-largest spenders at $119bn, followed by the United Kingdom at $28bn.

These figures exclude government spending, such as the US CHIPS Act or European national AI subsidies.

Global private investment in AI by country, 2013-24:

  • US: $471bn, supporting 6,956 newly funded AI companies
  • China: $119bn, 1,605 startups
  • UK: $28bn, 885 startups
  • Canada: $15bn, 481 startups
  • Israel: $15bn, 492 startups
  • Germany: $13bn, 394 startups
  • India: $11bn, 434 startups
  • France: $11bn, 468 startups
  • South Korea: $9bn, 270 startups
  • Singapore: $7bn, 239 startups
  • Others: $58bn
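The country figures in the list above are consistent with the “roughly 62 percent” US share cited earlier; a quick sum confirms it (figures in billions, taken directly from the list):

```python
# Private AI investment by country, 2013-24, in billions of US dollars
# (figures from the list above).
spending_bn = {
    "US": 471, "China": 119, "UK": 28, "Canada": 15, "Israel": 15,
    "Germany": 13, "India": 11, "France": 11, "South Korea": 9,
    "Singapore": 7, "Others": 58,
}

total_bn = sum(spending_bn.values())      # $757bn in total
us_share = spending_bn["US"] / total_bn   # roughly 0.62

print(f"Total: ${total_bn}bn, US share: {us_share:.0%}")
```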


AI spending to total $2.5 trillion in 2026

AI spending is forecast to skyrocket to $2.5 trillion in 2026, driven by a massive global build-out of data centres and services, according to Gartner.

The bulk of the spending is expected to go towards:

  • AI infrastructure: $1.37 trillion
  • AI services: $589bn
  • AI software: $452bn
  • AI cybersecurity: $51bn
  • AI platforms for data science and machine learning: $31bn
  • AI models: $26bn
  • AI application development platforms: $8.4bn
  • AI data: $3bn
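As a sanity check, the category forecasts above do add up to Gartner’s $2.5 trillion headline figure (figures in billions, from the list above):

```python
# Gartner's 2026 AI spending forecast by category, in billions of US dollars.
forecast_bn = {
    "AI infrastructure": 1370,
    "AI services": 589,
    "AI software": 452,
    "AI cybersecurity": 51,
    "Data science and ML platforms": 31,
    "AI models": 26,
    "Application development platforms": 8.4,
    "AI data": 3,
}

total_tn = sum(forecast_bn.values()) / 1000   # convert billions to trillions
print(f"2026 total: ${total_tn:.2f} trillion")  # ≈ $2.53 trillion
```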

By 2027, Gartner is forecasting that AI spending will surpass $3.3 trillion.



ByteDance pledges fixes to Seedance 2.0 after Hollywood copyright claims | Science and Technology News

Hollywood groups say the AI video tool uses the likeness of actors and others without permission.

China’s ByteDance has pledged to address concerns over its new artificial intelligence video generator, after Hollywood groups claimed Seedance 2.0 “blatantly” violates copyright and uses the likenesses of actors and others without permission.

The company, which owns TikTok, told The Associated Press news agency on Sunday that it respects intellectual property rights and pledged action to strengthen safeguards.


Seedance 2.0 is available only in China for now and lets users generate high-quality AI videos from simple text prompts.

The Motion Picture Association (MPA) said last week that Seedance 2.0 “has engaged in unauthorized use of US copyrighted works on a massive scale”.

“By launching a service that operates without meaningful safeguards against infringement, ByteDance is disregarding well-established copyright law that protects the rights of creators and underpins millions of American jobs. ByteDance should immediately cease its infringing activity,” Charles Rivkin, chairman and CEO of the MPA, said in a statement on February 10.

Screenwriter Rhett Reese, who wrote the Deadpool movies, said on X last week, “I hate to say it. It’s likely over for us.”

His post was in response to Irish director Ruairi Robinson’s post of a Seedance 2.0 video that went viral and shows AI versions of Tom Cruise and Brad Pitt fighting in a post-apocalyptic wasteland.

Actors union SAG-AFTRA said on Friday it “stands with the studios in condemning the blatant infringement” enabled by Seedance 2.0.

“The infringement includes the unauthorized use of our members’ voices and likenesses. This is unacceptable and undercuts the ability of human talent to earn a livelihood,” SAG-AFTRA said in a statement.

“Seedance 2.0 disregards law, ethics, industry standards and basic principles of consent. Responsible AI development demands responsibility, and that is nonexistent here.”

ByteDance said in response that it has heard the concerns regarding Seedance 2.0.

“We are taking steps to strengthen current safeguards as we work to prevent the unauthorised use of intellectual property and likeness by users,” it told the AP.

Jonathan Handel, an entertainment journalist and lawyer, told Al Jazeera the developments mark “the beginning of a difficult road” for the film industry.

“Until courts make a significant ruling, AI-generated videos will have major implications on the film industry,” he said.

“Digital technology moves a lot quicker, and we are going to see in several years full-length movies that are AI-generated,” he said.

These tools are trained primarily on unlicensed data, Handel said, and the output could resemble faces and scenes from famous movies, “and so you’ve got copyrights, trademarks, all of those rights are implicated here”.
