A Canadian PR executive highlights growing challenges in distinguishing human-written content from AI, as detection tools struggle with false positives amid increasing reliance on automated writing and editing processes.

A Canadian public relations executive has argued that the rush to police artificial intelligence is creating its own problem: human writing is increasingly being treated with suspicion. Jennifer Farr, senior account director at Earnscliffe, said she pitched an op-ed to a major Canadian publication, only to learn that the draft had been rejected after being flagged by an AI detection tool, despite having been written collaboratively with her client in a live video meeting. Her account captures a growing unease in communications and publishing, where the appearance of polish can now be mistaken for machine authorship.

The concern is not hard to understand. As generative AI becomes more widely used, editors and publishers are under pressure to avoid running material that was created by software rather than a person. Yet AI detectors have their own limitations. Research and industry explainers note that these systems rely heavily on statistical patterns, which makes them prone to false positives when human prose happens to look too structured or predictable.

That creates a particular headache for agencies and other collaborative writing environments. Drafts are often shaped through discussion, editing and repeated tightening, producing clean copy that can resemble the style associated with AI-generated text. Analysts have also warned that some detectors may penalise non-native English writing and other forms of straightforward, formal prose, while still struggling to identify AI text that has been lightly edited to sound more human.

Farr’s point is that authenticity has become harder to define in practice. In her view, the question is no longer simply whether a piece was written by a person or a model, but whether the process behind it was transparent, credible and defensible. That ambiguity matters because the industry still lacks a reliable rulebook for separating genuine human drafting from machine-assisted writing.

Academic research has added weight to that uncertainty. A recent study published on ScienceDirect found that most AI detector findings were false, reinforcing doubts about how much confidence publishers should place in automated screening. The broader lesson, according to reviewers of the technology, is that detection tools may be useful as a warning system, but they are not yet precise enough to serve as a final arbiter of authorship.

Source Reference Map

Inspired by headline at: [1]

Sources by paragraph: - Paragraph 1: [2], [3] - Paragraph 2: [4], [5] - Paragraph 3: [3], [6] - Paragraph 4: [1], [2] - Paragraph 5: [7], [5]

Source: Noah Wire Services

Verification / Sources

  • https://www.prdaily.com/my-op-ed-was-flagged-as-ai-it-wasnt/ - Please view link - unable to able to access data
  • https://www.contentellect.com/are-ai-detectors-accurate/ - This article discusses the accuracy of AI detection tools, highlighting their tendency to misclassify human-written content as AI-generated. It explains that AI detectors rely on statistical patterns, which can lead to false positives when human writing coincidentally matches these patterns. The piece also addresses issues like bias against non-native English speakers and the challenges in detecting AI-generated content that has been edited to appear more human-like.
  • https://www.techwyse.com/blog/general-category/ai-detection-software-false-positives - This blog post examines the phenomenon of false positives in AI detection software, where human-written content is incorrectly flagged as AI-generated. It explores the causes of these inaccuracies, including the reliance of detection tools on specific patterns and the challenges in distinguishing between human and AI writing. The article also discusses the implications of such misclassifications in various fields, including education and publishing.
  • https://www.tryleap.ai/learn/ai-detection-false-positives - This article delves into the reasons why AI detection tools often flag human-written content as AI-generated. It explains that detectors score statistical patterns, and any human writing that coincidentally matches those patterns gets flagged. The piece also discusses writing styles most likely to get flagged, such as non-native English, and the challenges in accurately distinguishing between human and AI writing.
  • https://www.techopedia.com/ai-content-detection-flaws-best-practices - This article explores the flaws in AI content detection tools, highlighting their tendency to misclassify both human and AI-generated text. It discusses how detection mechanisms rely on predictability and pattern analysis, which can lead to inaccuracies. The piece also covers the challenges in detecting AI-generated text that has been humanized to bypass detection and the need for best practices to improve accuracy.
  • https://www.digitalocean.com/resources/articles/top-ai-content-detectors - This article reviews various AI content detectors, discussing their accuracy and effectiveness. It highlights that AI content detectors vary widely in accuracy, with even the best tools frequently misidentifying human-written content as AI-generated. The piece explains that these false positives occur because detectors rely on statistical patterns and word frequency analysis, flagging text that contains specific phrases or structures common in AI writing.
  • https://www.sciencedirect.com/science/article/pii/S305047592600093X - This study examines the reliability of AI detection tools in academic writing, finding that most AI detector findings are false. It discusses how adversarial generative dynamics in the AI–detector arms race favour AI tools, leaving detectors behind. The article also addresses how human alterations weaken detector sensitivity, making AI detection results unreliable, and the threat of undisclosed AI usage to publishing ethics.

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 10

Notes: The article was published on April 27, 2026, and does not appear to be recycled or republished content. No earlier versions with differing figures, dates, or quotes were found. The narrative is original and timely.

Quotes check

Score: 10

Notes: The article includes direct quotes from Jennifer Farr, which are unique to this piece. No identical quotes were found in earlier material, and the wording is consistent across sources. The quotes can be independently verified through the author's statements.

Source reliability

Score: 8

Notes: The article is published on PR Daily, a reputable source within the public relations industry. However, it is a niche publication, which may limit its reach and influence. The content is authored by Jennifer Farr, a senior account director at Earnscliffe, lending credibility to the insights shared.

Plausibility check

Score: 9

Notes: The claims made in the article align with known issues regarding AI detection tools and their limitations. Similar concerns have been raised in other reputable sources. The narrative is plausible and consistent with industry discussions. However, the lack of specific examples or data points slightly reduces the score.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The article presents a timely and original discussion on the challenges of AI detection tools misidentifying human-written content. While the source is reputable within its niche, the reliance on a single author's perspective and the use of some specialized sources for verification slightly reduce the overall confidence in the content's independence. However, the plausibility of the claims and the lack of significant issues with freshness, quotes, paywall, and content type support a PASS verdict with medium confidence.