The New York Times has severed ties with a freelancer over AI-assisted plagiarism, a case that highlights growing tension over editorial standards as newsrooms increasingly adopt artificial intelligence tools amid concerns about originality and trust.
The latest sign of the uneasy peace between journalism and artificial intelligence came not from a research lab or a product launch, but from a book review. According to The Guardian, The New York Times cut ties with freelance writer Alex Preston after discovering that AI had been used in the drafting of a review that also bore similarities to a Guardian piece on the same title. The Times, the report said, treated the matter as a breach of editorial standards.
The incident matters because it landed just as more journalists have begun speaking openly about using AI in their day-to-day work. The Wall Street Journal recently profiled Fortune business editor Nick Lichtenberg, who has used AI to accelerate his output, while Wired highlighted several prominent reporters who now rely on the tools for editorial tasks, including some writing assistance. That shift suggests a broader normalisation of AI in newsrooms, even if many editors and reporters still regard it with suspicion.
But the Preston case also showed how brittle that acceptance remains. The Wrap reported that he admitted to using AI to help draft the review, while other accounts said the overlap with the Guardian article triggered an internal review at the Times. However the mechanics are described, the message for publishers is the same: a single lapse can quickly harden into a public trust problem, especially when AI is involved in work that depends on originality and attribution.
The fallout has already reached beyond the freelancer himself. Axios reported that union leaders at The New York Times sent management a letter calling the paper's AI standards too vague and inadequate, using the plagiarism episode to press for clearer rules. That wider debate is likely to intensify as media companies push deeper into AI, even as they insist the technology must remain bounded by strict editorial oversight. In that sense, the scandal is less an isolated mistake than a warning about how far newsrooms can go in embracing AI before trust snaps.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [1]
- Paragraph 3: [2], [3], [4], [5]
- Paragraph 4: [6], [2]
Source: Noah Wire Services
Verification / Sources
- https://mediacopilot.substack.com/p/journalism-ai-plagiarism-nyt - Please view link - unable to access data
- https://www.theguardian.com/books/2026/mar/31/the-new-york-times-drops-freelance-journalist-who-used-ai-to-write-book-review - The Guardian reports that The New York Times severed ties with freelance journalist Alex Preston after discovering he used artificial intelligence to assist in writing a book review. The review contained similarities to a previous piece published in The Guardian, leading to an internal investigation and the termination of Preston's contract for violating journalistic standards.
- https://www.thewrap.com/media-platforms/journalism/new-york-times-cuts-ties-with-writer-ai/ - The Wrap details how The New York Times ended its relationship with freelance writer Alex Preston after he admitted to using AI tools to help draft a book review. The review inadvertently incorporated elements from a Guardian review of the same book, prompting the Times to sever ties with Preston for breaching its journalistic standards.
- https://www.marsmag.com/2026/04/02/new-york-times-critic-caught-using-ai-i-made-a-huge-mistake/ - MARS Magazine reports on the incident where freelance journalist Alex Preston used AI to assist in writing a book review for The New York Times. The review contained similarities to a Guardian review, leading to Preston admitting his mistake and the Times severing ties with him for violating journalistic standards.
- https://www.resultsense.com/news/2026-04-01-new-york-times-drops-freelancer-who-used-ai-for-book-review - Resultsense covers the story of The New York Times terminating its relationship with freelance journalist Alex Preston after he used AI to help draft a book review. The review contained passages from a Guardian review, leading to an internal investigation and the severing of ties with Preston for breaching journalistic ethics.
- https://www.axios.com/2026/04/07/new-york-times-ai-standards - Axios reports that The New York Times' editorial union leaders sent a letter to management arguing that its artificial intelligence standards are 'woefully inadequate' and too vague. This follows an incident involving AI-driven plagiarism by a freelance book reviewer, highlighting the need for clearer AI guidelines in journalism.
- https://www.breitbart.com/the-media/2026/03/31/nolte-new-york-times-book-reviewer-out-after-ai-use/ - Breitbart discusses the termination of freelance book reviewer Alex Preston by The New York Times after he used AI to assist in writing a book review. The review contained similarities to a Guardian review, leading to the Times severing ties with Preston for violating its journalistic standards.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The incident involving Alex Preston's use of AI in drafting a book review for The New York Times was reported by The Guardian on 31 March 2026. The Media Copilot article, dated 21 April 2026, references this event, indicating that the content is relatively fresh. However, the Media Copilot article appears to be a commentary piece rather than a direct news report, which may affect its freshness score.
Quotes check
Score: 7
Notes: The Media Copilot article includes direct quotes from The Guardian's reporting, such as Alex Preston's statement: "I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in." While these quotes are attributed to The Guardian, the Media Copilot article does not provide direct links to the original sources, making independent verification challenging.
Source reliability
Score: 6
Notes: The Media Copilot article is published on a Substack platform, which is a self-publishing service. This raises concerns about the editorial oversight and fact-checking processes in place. The article references The Guardian and The Wrap, both reputable sources, but the lack of direct links to these sources in the Media Copilot article diminishes its overall reliability.
Plausibility check
Score: 9
Notes: The reported incident of Alex Preston using AI to draft a book review for The New York Times, which led to the discovery of similarities with a Guardian review, is plausible and aligns with known events. The involvement of AI in content creation and the subsequent issues of plagiarism are consistent with current discussions in the media industry.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The Media Copilot article presents a commentary on the incident involving Alex Preston's use of AI in drafting a book review for The New York Times. While the event is plausible and aligns with known facts, the article's reliance on secondary sources without direct links, its categorisation as a commentary piece, and concerns about the reliability and independence of the source diminish its overall credibility. Given these factors, the content does not meet the standards for factual reporting and is not covered under our indemnity.