Three jobs most at risk of being taken over by AI as 100,000 bots set to enter workforce by next year

THE three jobs most at risk of being taken over by AI have been revealed.

British workers will be forced to compete with a surge of 100,000 virtual AI employees by the end of 2026, research by Agentic AI firm Gravitee has suggested.


Office jobs such as software engineering, HR management and customer service are at risk. Credit: Getty

Software engineers, HR managers and customer service managers appear to be most at risk.

Almost half of company executives surveyed by Gravitee said more than half the work of these roles could be done by an AI agent.

It means companies could cut these jobs to slash business costs.

The other jobs at risk include salesperson, marketing manager and communications manager.

The AI firm surveyed 250 executives at large UK companies and all said they were planning to use AI agents in some way.

Most of the companies said they would ‘employ’ between 16 and 20 AI agents to carry out tasks previously carried out by human employees.

Meanwhile, 65% of the firms said they would make job cuts over the same period, suggesting much of the work done by their current employees could be performed more cheaply by AI bots.

Still, nearly a quarter of companies said they plan to increase their headcounts to support the AI expansion.

Rory Blundell, chief executive at Gravitee, said: “Agentic AI will offer a huge productivity boom to firms of all sizes, with AI agents completing rote, repetitive tasks while workers are freed up to focus on more interesting challenges.

“However it’s clear that, at least in the short term, there’s a risk of widespread job cuts until companies know how to balance the investment with the return, which may be shortsighted. Not least because AI Agents, without proper governance, can cause widespread chaos and disruption.”


A report by the International Monetary Fund (IMF) has warned that AI could affect three in five workers and even replace some of their jobs.

Companies and even the government have already begun to slash their workforces to make way for the technology.

The UK’s biggest accountancy firms – PwC, Deloitte, KPMG and EY – have reportedly slashed the number of junior roles available by up to 29 per cent, while the government is axing at least 10,000 roles in the civil service.

Amazon, Telstra, Salesforce and Fiverr have all announced plans for lay-offs in the last 12 months, with jobs going instead to AI agents. 

There were 315,550 available entry-level roles in November 2022 (when the popular AI tool ChatGPT was launched) across the UK, which dropped to 214,934 last month, according to the jobs site Adzuna.
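Taken at face value, the Adzuna figures imply a fall of roughly a third. A quick sanity check of the arithmetic, using the numbers as reported above:

```python
# Entry-level UK vacancies reported by Adzuna (figures as quoted above)
before = 315_550  # November 2022, when ChatGPT launched
after = 214_934   # latest month cited

drop = before - after
pct = drop / before * 100
print(f"{drop} fewer roles, a fall of about {pct:.1f}%")
# → 100616 fewer roles, a fall of about 31.9%
```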

The 40 roles most at risk

Similar research has previously been carried out by Microsoft, which revealed the 40 jobs most likely to be taken over by AI.

Researchers looked at 200,000 conversations between people in the US and Microsoft’s AI chatbot, called Copilot, as part of the study.

They found that the jobs most likely to be affected by AI are those in the communications industry.

Interpreters and translators are the jobs most at risk, followed by historians, and passenger attendants such as air hosts and hostesses or train ticket inspectors.

Sales representatives came fourth on the list, then writers and authors, and customer service representatives.

Other jobs on the list included radio DJs, concierge workers, teachers, public relations specialists and telephone operators.

Jobs that involve physically working with people, operating or monitoring machinery, or doing manual labour tend to be the least-impacted by AI.

This includes nursing assistants, sewage workers, painters and plasterers, roofers, cleaners, and massage therapists.

The jobs most and least at risk from AI

THESE are the professions MOST at risk from AI:

  • Interpreters and translators
  • Historians
  • Passenger attendants
  • Sales representatives of services
  • Writers and authors
  • Customer service representatives
  • CNC tool programmers
  • Telephone operators
  • Ticket agents and travel clerks
  • Broadcast announcers and radio DJs
  • Brokerage clerks
  • Farm and home management educators
  • Telemarketers
  • Concierges
  • Political scientists
  • News analysts, reporters, journalists
  • Mathematicians
  • Technical writers
  • Proofreaders and copy markers
  • Hosts and hostesses
  • Editors
  • Postsecondary business teachers
  • Public relations specialists
  • Demonstrators and product promoters
  • Advertising sales agents
  • New accounts clerks
  • Statistical assistants
  • Counter and rental clerks
  • Data scientists
  • Personal financial advisors
  • Archivists
  • Postsecondary economics teachers
  • Web developers
  • Management analysts
  • Geographers
  • Models
  • Market research analysts
  • Public safety telecommunicators
  • Switchboard operators
  • Postsecondary library science teachers

AND these are the professions that are safest from the risks of AI:

  • Dredge operators
  • Bridge and lock tenders
  • Water treatment plant and system operators
  • Foundry mold and coremakers
  • Rail-track laying and maintenance equipment operators
  • Pile driver operators
  • Floor sanders and finishers
  • Orderlies
  • Motorboat operators
  • Logging equipment operators
  • Paving, surfacing, and tamping equipment operators
  • Maids and housekeeping cleaners
  • Roustabouts (oil and gas)
  • Roofers
  • Gas compressor and gas pumping station operators
  • Helpers–roofers
  • Tire builders
  • Surgical assistants
  • Massage therapists
  • Ophthalmic medical technicians
  • Industrial truck and tractor operators
  • Supervisors of firefighters
  • Cement masons and concrete finishers
  • Dishwashers
  • Machine feeders and offbearers
  • Packaging and filling machine operators
  • Medical equipment preparers
  • Highway maintenance workers
  • Helpers–production workers
  • Prosthodontists
  • Tire repairers and changers
  • Ship engineers
  • Automotive glass installers and repairers
  • Oral and maxillofacial surgeons
  • Plant and system operators (all other)
  • Embalmers
  • Helpers–painters, plasterers, and similar
  • Hazardous materials removal workers
  • Nursing assistants
  • Phlebotomists


Cloudflare to block AI crawler bots by default

Internet firm Cloudflare has started blocking AI web crawlers by default to prevent them from “accessing content without permission or compensation,” according to an announcement on Tuesday.

July 1 (UPI) — Cloudflare announced it will begin blocking AI web crawlers for all of its clients beginning on Tuesday, to prevent them from “accessing content without permission or compensation.”

The default block builds on a tool Cloudflare launched in September last year, which gave publishers the ability to block crawlers with one click; on Tuesday the company announced that the option will now be enabled by default for all of its clients.

“AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate,” said Matthew Prince, co-founder and CEO of Cloudflare. “This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone.”

Cloudflare is a content delivery network that helps businesses and applications by caching data closer to end users; in a 2023 report, it estimated that 16% of global internet traffic goes directly through its service.

The company also announced a Pay Per Crawl program that will allow participating publishers to set a price for access, which AI companies can view to decide whether to pay the fee for the content.

AI crawlers are automated bots designed to extract large amounts of data from websites, typically to train large language models built by companies such as OpenAI and Google.
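The crude, self-hosted version of what Cloudflare now does by default is a user-agent check. A minimal sketch, with a handful of illustrative crawler signatures (GPTBot, CCBot and Google-Extended are real crawler tokens, but a production blocklist would be far longer and regularly updated):

```python
# User-agent-based blocking of known AI crawlers.
# Signatures below are illustrative, not exhaustive.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "Google-Extended")

def should_block(user_agent: str) -> bool:
    """Return True if the request looks like a known AI crawler."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in AI_CRAWLER_SIGNATURES)

print(should_block("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(should_block("Mozilla/5.0 (Windows NT 10.0)"))         # False
```

User-agent checks are trivially spoofed, which is why Cloudflare's default block relies on traffic fingerprinting rather than request headers alone.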

“AI crawlers are typically seen as more invasive and selective when it comes to the data they consume. They have been accused of overwhelming websites and significantly impacting user experience,” Matthew Holman, a partner at U.K. law firm Cripps, said.

“If effective, the development would hinder AI chatbots’ ability to harvest data for training and search purposes,” he added. “This is likely to lead to a short-term impact on AI model training and could, over the long term, affect the viability of models.”


Trading bots are evolving: What happens when AI cheats the market?


Malevolent trading practices aren’t new. Struggles against insider trading, as well as different forms of market manipulation, represent a long-running battle for regulators.

In recent years, however, experts have been warning of new threats to our financial systems. Developments in AI mean that automated trading bots are not only smarter but also more independent. While basic algorithms respond to programmed commands, new bots are able to learn from experience, quickly synthesise vast amounts of information, and act autonomously when making trades.

According to academics, one risk scenario involves collaboration between AI bots. Just imagine: hundreds of AI-driven social media profiles begin to pop up online, weaving narratives about certain companies. The information spread isn’t necessarily fake, but may just be the amplification of existing news. In response, real social media users start to react, highlighting the bots’ chosen message.

As the market is tipped by the crafted narrative, one investor’s roboadvisor rakes in profits, having coordinated with the gossiping bots. Other investors, who didn’t have the insider information, lose out by badly timing the market. The problem is, the investor profiting may not even be aware of the scheme. This means that charges of market manipulation can’t necessarily be effective, even if authorities can see that a trader has benefitted from distortive practices.

Social platforms are changing trading

Alessio Azzutti, assistant professor in law & technology (FinTech) at the University of Glasgow, told Euronews that the above scenario is still a hypothesis, as there’s not enough evidence to prove it’s happening. Even so, he explains that similar, less sophisticated schemes are taking place, particularly in “crypto asset markets and decentralised finance markets”.

“Malicious actors… can be very active on social media platforms and messaging platforms such as Telegram, where they may encourage members to invest their money in DeFi or in a given crypto asset, to suit themselves,” Azzutti explained.

“We can observe the direct activity of human malicious actors but also those who deploy AI bots.”

He added that the agents spreading misinformation may not necessarily be very sophisticated, but they still have the power to “pollute chats through fake news to mislead retail investors”.

“And so the question is, if a layman, if a youngster on his own in his home office is able to achieve these types of manipulations, what are the limits for the bigger players to achieve the same effect, in even more sophisticated markets?”

The way that market information now spreads online, in a widespread, rapid, and uncoordinated fashion, is also fostering different types of trading. Retail investors are more likely to follow crazes, rather than relying on their own analysis, which can destabilise the market and potentially be exploited by AI bots.

The widely-cited GameStop saga is a good example of herd trading, when users on a Reddit forum decided to buy up stock in the video game company en masse. Big hedge funds were betting that the price would fall, and subsequently lost out when it skyrocketed. Many experts say this wasn’t a case of collusion as no official agreement was created.

A spokesperson from ESMA, the European Securities and Markets Authority, told Euronews that the potential for AI bots to manipulate markets and profit off the movements is “a realistic concern”, although they stressed that they don’t have “specific information or statistics on this already happening”.

“These risks are further intensified by the role of social media, which can act as a rapid transmission channel for false or misleading narratives that influence market dynamics. A key issue is the degree of human control over these systems, as traditional oversight mechanisms may be insufficient,” said the spokesperson.

ESMA highlighted that it was “actively monitoring” AI developments.


Is regulation ready?

One challenge for regulators is that collaboration between AI agents can’t be easily traced.

“They’re not sending emails, they’re not meeting with each other. They just learn over time the best strategy and so the traditional way to detect collusion doesn’t work with AI,” Itay Goldstein, professor of finance and economics at the Wharton School of the University of Pennsylvania, told Euronews.

“Regulation has to step up and find new strategies to deal with that,” he argued, adding that there is a lack of reliable data on exactly how traders are using AI.

Filippo Annunziata, professor of financial markets and banking legislation at Bocconi University, told Euronews that the current EU rules “shouldn’t be revised”, referring to the Regulation on Market Abuse (MAR) and the Markets in Financial Instruments Directive II (MiFID II).


Even so, he argued that “supervisors need to be equipped with more sophisticated tools for identifying possible market manipulation”.

He added: “I even suggest that we ask people who develop AI tools for trading on markets and so on to include circuit breakers in these AI tools. This would force them to stop even before the risk of manipulation occurs.”
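A circuit breaker of the kind Annunziata describes could, at its simplest, be a wrapper that halts an agent once its activity crosses a preset threshold. A purely illustrative sketch, in which the single order counter and the threshold are hypothetical simplifications:

```python
class CircuitBreaker:
    """Halt trading once order volume in a window exceeds a cap.

    Purely illustrative: a real circuit breaker would track price
    impact, message rates and cross-venue activity, not just one
    order counter.
    """

    def __init__(self, max_orders_per_window: int):
        self.max_orders = max_orders_per_window
        self.count = 0
        self.tripped = False

    def submit(self, order) -> bool:
        """Forward an order to the market unless the breaker has tripped."""
        if self.tripped:
            return False
        self.count += 1
        if self.count > self.max_orders:
            self.tripped = True  # stop before further orders reach the market
            return False
        return True

breaker = CircuitBreaker(max_orders_per_window=3)
results = [breaker.submit(f"order-{i}") for i in range(5)]
print(results)  # → [True, True, True, False, False]
```

The design choice here is to trip permanently rather than throttle, matching the quote’s intent: stop the tool entirely before manipulation risk materialises, and leave restarting to a human.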

In terms of the current legal framework, there’s also the issue of responsibility when an AI agent acts in a malicious way, independent of human intent.

This is especially relevant in the case of so-called black box trading, where a bot executes trades without revealing its inner workings. To tackle this, some experts believe that AI should be designed to be more transparent, so that regulators can understand the rationale behind its decisions.


Another idea is to create new laws around liability, so that the actors responsible for deploying AI could be held liable for market manipulation even in cases where they didn’t intend to mislead investors.

“It’s a bit like the tortoise and the hare,” said Annunziata.

“Supervisors tend to be tortoises, but manipulators that use algorithms are hares, and it’s difficult to catch up with them.”
