The union pushback is an effort to rein in the adoption of technology that is proliferating with few regulatory constraints, as attempts to regulate the wide-ranging field in the U.S. and Europe struggle to keep up with the pace of developments. If regulators don’t act, workers’ advocates fear, AI could pose threats to privacy, organizing and even entire jobs.

Much of the recent focus has been on advances in so-called generative AI, which offers tantalizing prospects of efficiency for employers but also the threat of upheaval for certain white-collar and creative industries, where unions have significant sway. The technology is built on machine learning models that can produce something new, including lucid text, from patterns drawn from pools of existing information.

AI, algorithms and other automated tools are being used in everything from screening job applicants and evaluating employee performance to tracking workers’ movements — even their keystrokes. Recent innovations in the technology, most notably ChatGPT, may have marked a tipping point in its spread to the workplace, with implications for a much broader set of workers.

“With this latest iteration of AI, obviously, a whole lot of additional folks are starting to think it could impact them,” said Beth Allen, a spokesperson for the Communications Workers of America. “It’s hard to think of an area of our membership that it might not end up touching down the road.”

Organized labor has long fretted about the threat of technology displacing workers. Automation, for instance, has been one of the major sticking points in the contract negotiations between unionized dockworkers and West Coast port operators, which have been at an impasse for nearly a year.

But newer tools fueled by AI have many on edge because of the threat they pose to industries previously considered secure from such forces — including some where unions had been making inroads in recent years.

Advances in generative AI could eventually expose “the equivalent of 300 million full-time jobs to automation,” economists at Goldman Sachs estimated last month, with two-thirds of all occupations affected in some fashion.

Office and administrative tasks, as well as legal work, would be most vulnerable to automation, with around 45 percent of work potentially exposed; arts and media could have more than a quarter of current work automated — close to the share for all industries overall, the report said.

“There was an assumption in the past that if you were a professional your skills were always going to be needed,” said Patricia Campos-Medina, executive director of Cornell University’s Worker Institute. “Now we’re starting to see the same level of insecurity … other workers have had to deal with since the Industrial Revolution.”

In the WGA’s case, the union is demanding that AI not be used to write or rewrite material covered by the contract and that guild members’ work not be used to train AI. The industry has balked at those terms, as well as at other planks of the WGA’s proposal.

“Writers find themselves in the middle of a forest fire where our compensation has been driven down, our working conditions are being driven down, and writing as a career is under attack,” August said. “AI feels like an asteroid that’s headed towards us. You can’t put one over the other; you have to deal with both.”

The Alliance of Motion Picture and Television Producers did not return a request for comment Thursday afternoon.

In another example of the conflict, the actors union SAG-AFTRA, which is scheduled to start its own negotiations with TV and movie studios next month, has drawn a line against allowing AI developers to use digital voice simulations or likenesses without negotiation.

“We think AI, and generative AI in particular, is probably one of the most consequential new technological developments we’ve ever seen for our members, and we want to be on top of it,” said Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator.

The issue is thorny across the board — it spans ethics, intellectual property rights, computer science and classroom education — leaving the dizzying task of dealing with the fast-developing technology to regulators and lawmakers who move at nowhere near that speed.

There’s a bevy of proposals circulating in Congress, but little consensus so far.

“Surveillance technology and artificial intelligence are becoming more pervasive by the day and right now, there are no guardrails in place to prevent employers from misusing and abusing them. That needs to change,” Sen. Bob Casey (D-Pa.), who has introduced a bill to restrict use of AI and other surveillance tools in the workplace, said in a statement to POLITICO.

The Biden administration has sought to fill that void, announcing plans to look into how companies are using AI and other automated tools on their employees. On Thursday, the White House pledged to formulate draft guidelines regarding federal agencies’ use of AI, building upon its “Blueprint for an AI Bill of Rights” released last year.

However, regulatory efforts to date have been rather scattershot — due in part to the complexity of the issue and the overlapping agencies tasked with policing the workplace, let alone society writ large.

For instance, National Labor Relations Board General Counsel Jennifer Abruzzo in October set her sights on uses of electronic monitoring that may interfere with employees’ rights to organize and discuss work conditions.

Abruzzo told POLITICO that employers’ use of algorithmic tools on workers risks “significantly impairing or negating their ability to engage with one another,” and that the NLRB is investigating several cases involving potential unlawful conduct.

Meanwhile, a cavalcade of federal agencies, including the Justice Department and Equal Employment Opportunity Commission, issued a warning in April against using AI to discriminate or otherwise violate civil rights laws — such as by algorithmically screening out certain racial groups or older job applicants.

However, officials made clear they were relying on existing authority to do so, rather than creating AI-specific policies.

“Even with the rapid developments of technology … our statutes that we enforce still apply. There’s no sort of carve out or anything for use of AI in the workplace,” said Eunice Ikene, an attorney adviser to EEOC Chair Charlotte Burrows.

There’s broad, bipartisan interest in tackling AI at the EEOC, according to Ikene and Victoria Lipnic, a former GOP commissioner. But specific rules on the technology appear years away, particularly as President Joe Biden’s appointees remain unable to act without buy-in from Republicans.

Absent commissioners’ formal authorization, the EEOC, which is tasked with enforcing workplace anti-discrimination laws, can issue interpretive guidance and other memos on the subject — though the effect of such moves is limited and susceptible to legal challenges.

Another option could be to harness the government’s influence by tweaking the federal guidelines on the hiring process, in conjunction with other agencies, Lipnic said.

In the meantime, while Washington tries to get its arms around the issue, fissures between workers and management are likely to multiply as AI technology spreads and evolves.

“Should government officials be looking at it? Absolutely,” August said. “Things are just moving so quickly that we have to be addressing this now in our contract.”
