Unionised journalists across the United States are mobilising to demand clear labour agreements on AI deployment in newsrooms, signalling a pivotal clash over technology, trust, and job security within the industry.

A growing number of unionised journalists in the United States are drawing a line over artificial intelligence, arguing that newsroom contracts should spell out who gets to decide how the technology is used, whether it can affect bylines, and what safeguards should exist before management rolls out new tools.

The latest flashpoint came at ProPublica, where staff staged a daylong strike in early April outside the outlet’s Lower Manhattan offices. According to reporting by Poynter and other labour publications, the walkout was tied not only to AI policy but also to wages, job security and layoff protections. The ProPublica Guild, which represents roughly 140 to 150 editorial and business workers, has been negotiating its first contract for more than two years, and members voted overwhelmingly to authorise the strike.

What has made the dispute especially significant is that it is being watched as a test case for the wider industry. Agnel Philip, a ProPublica data reporter and unit chair, told CJR the newsroom has been cautious so far, but said journalists want a place at the table before AI becomes embedded in editorial workflows. Tyson Evans, the company’s chief product and brand officer, said ProPublica does not believe a contract is the right mechanism for making detailed promises about technology that is still evolving, though he said the organisation has pledged not to use AI to create digital replicas of employees’ work.

Other outlets are confronting the same issue in different ways. At EdSource in California, union members have pushed for contract language that would let reporters strip their bylines from stories involving AI used without consent, while also requiring union approval for generative AI tools. At the New York Times, the guild has been pressing for revenue sharing from licensing, disclosure of AI use and the right to remove a byline if a reporter’s work is altered without their knowledge. A recent bargaining session saw management reject or modify most of those proposals, according to an associate editor on the guild’s bargaining committee.

McClatchy’s newsrooms have become another battleground. Reporters at several of its papers, including the Sacramento Bee and the Miami Herald, have objected to a “content scaling agent” built with Anthropic’s Claude, which repackages articles for different audiences while retaining the original byline. At the Sacramento Bee, which agreed a contract with AI provisions in February, staff have responded by withholding their bylines from stories produced with the tool, signalling that they do not want the resulting text attributed to them. In Pennsylvania, where one McClatchy paper is not unionised, AI-assisted pieces are labelled more directly.

The arguments are as much about public trust as they are about labour terms. Ariane Lange, an investigative reporter at the Bee, told CJR she did not want readers to assume she had signed off on AI-generated material attached to her name. Bryan Clark of the Idaho Statesman said reporters worry that withholding their bylines could damage page-view performance in systems management already monitors closely. Yet even as tensions rise, some newsrooms, including CBS and Vermont’s VTDigger, have recently agreed contracts with AI guardrails, suggesting that bargaining over the technology is settling into a new front in media labour disputes. As Hilke Schellmann, a journalism professor at NYU, told CJR, the danger is that silence now could harden into an industry norm later.


Source: Noah Wire Services