Recent experiments with Anthropic’s Claude Opus 4.7 reveal that advanced generative AI systems can identify writers from diverse and unpublished texts, threatening anonymity and raising ethical questions about AI’s pattern recognition capabilities.
As generative AI systems become better at pattern recognition, they are also becoming more adept at something far less celebrated: identifying who wrote a text. That unsettling possibility sits at the centre of a recent essay by Kelsey Piper in The Argument, where she described feeding unpublished writing into Anthropic’s Claude Opus 4.7 and watching it repeatedly name her as the author, even when the material came from different periods of her life and from very different registers.
Anthropic describes Claude Opus 4.7 as its most capable model, built for long-running, complex work and tuned for reliability, with the company saying it has been extensively tested for safety and security. Its public materials also emphasise strong performance on reasoning, coding and document analysis. But Piper’s experiments suggest that those same pattern-matching strengths can be turned towards a more intrusive use: attributing authorship from short stretches of text, even when the writer is using an unpublished draft or a piece written years earlier.
What makes the account especially striking is that it was not limited to one genre or one obvious sample. According to Piper, the model identified her from a short excerpt of a political column, a school report, a fantasy manuscript and even a college application essay she wrote 15 years ago. Other systems were less consistent, but the result was still enough to underscore a broader point: for people with a substantial public writing history, anonymity may now be far more fragile than many assume.
That concern is not new, but AI is giving it fresh force. Stylometry, the long-established practice of analysing writing style to infer authorship, has been used for years in scholarship, journalism and investigations. What is changing is the speed and accessibility of the process. Tools that once required specialist effort can now be run in seconds, and even when they are wrong they may still be persuasive enough to send a researcher or journalist further down the trail.
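Piper’s essay does not describe how the models reach their guesses, but the classical stylometric baseline the paragraph above refers to is simple to sketch. The snippet below — with invented author names and sample texts, purely for illustration — attributes a text to whichever candidate author’s character-trigram frequency profile it most resembles, using cosine similarity, one of the oldest and cheapest attribution signals:

```python
# Minimal stylometry sketch: character-trigram profiles + cosine similarity.
# All authors and texts here are invented for demonstration; real attribution
# uses far richer features and much larger reference corpora.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Relative frequencies of character trigrams, a common stylometric feature."""
    text = " ".join(text.lower().split())  # normalise case and whitespace
    grams = Counter(text[i:i + 3] for i in range(len(text) - 2))
    total = sum(grams.values())
    return Counter({g: n / total for g, n in grams.items()})

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[g] * q[g] for g in p if g in q)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def attribute(sample: str, corpora: dict) -> str:
    """Return the candidate author whose corpus profile is closest to the sample."""
    target = trigram_profile(sample)
    return max(corpora, key=lambda a: cosine(trigram_profile(corpora[a]), target))

if __name__ == "__main__":
    corpora = {
        "author_a": "Honestly, the committee's proposal is, frankly, a muddle of half-measures.",
        "author_b": "The dragon unfurled its wings over the silent valley, scales glinting like coals.",
    }
    sample = "Frankly, the new bill is a muddle of half-measures and, honestly, wishful thinking."
    print(attribute(sample, corpora))  # the sample's register matches author_a
```

A toy version like this needs hand-built reference corpora and careful feature engineering; the shift Piper describes is that a general-purpose model performs an analogous comparison against its entire training corpus, with no setup at all.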
There are also limits. The New York Times recently reported on John Carreyrou’s efforts to identify Bitcoin’s pseudonymous creator, Satoshi Nakamoto, which showed how hard it can be to move from linguistic clues to a defensible conclusion. His work combined stylistic observations with real-world leads, illustrating that text analysis alone is rarely decisive. Yet the fact that AI can now perform similar screening so quickly means the threshold for suspicion has been lowered, even if the final verdict remains uncertain.
That is the deeper warning in Piper’s piece. Anonymous writing has never been perfectly secure, but it once offered a meaningful buffer between a voice and a name. With models such as Claude Opus 4.7, that buffer is shrinking. Even if the results are imperfect, they are likely to be good enough to encourage more probing, more cross-checking and more attempts at unmasking. In practice, that may matter almost as much as perfect accuracy.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [5]
- Paragraph 2: [2], [3], [5]
- Paragraph 3: [5]
- Paragraph 4: [5]
- Paragraph 5: [1]
- Paragraph 6: [1], [5], [6]
Source: Noah Wire Services
Verification / Sources
- https://www.techdirt.com/2026/04/27/the-risks-of-anonymity-in-the-age-of-generative-ai/ - Please view link - unable to access data
- https://www.anthropic.com/claude/opus - Anthropic's Claude Opus 4.7 is their most capable AI model, designed to handle complex tasks with reliability. It plans deliberately, uses memory to learn across sessions, and drives long-running work forward with minimal oversight. The model is tailored for enterprise workflows, managing complex, multi-day projects end-to-end with professional polish and strong performance on spreadsheets, slides, and documents. Extensive testing ensures that Opus 4.7 meets Anthropic’s standards for safety, security, and reliability, with a model card covering safety results in depth.
- https://www.anthropic.com/news/claude-opus-4-7 - Anthropic's Claude Opus 4.7 demonstrates strong substantive accuracy on BigLaw Bench for Harvey, scoring 90.9% at high effort with better reasoning calibration on review tables and noticeably smarter handling of ambiguous document editing tasks. It correctly distinguishes assignment provisions from change-of-control provisions, a task that has historically challenged frontier models. Substance was consistently rated as a strength across evaluations: correct, thorough, and well-cited. The model also shows significant improvements in coding capabilities, with a meaningful jump in performance on CursorBench, clearing 70% versus Opus 4.6 at 58%. For complex multi-step workflows, Opus 4.7 is a clear step up, with a 14% improvement over Opus 4.6 at fewer tokens and a third of the tool errors.
- https://www.anthropic.com/claude/opus - Anthropic's Claude Opus 4.7 is their most capable AI model, designed to handle complex tasks with reliability. It plans deliberately, uses memory to learn across sessions, and drives long-running work forward with minimal oversight. The model is tailored for enterprise workflows, managing complex, multi-day projects end-to-end with professional polish and strong performance on spreadsheets, slides, and documents. Extensive testing ensures that Opus 4.7 meets Anthropic’s standards for safety, security, and reliability, with a model card covering safety results in depth.
- https://boingboing.net/2026/04/21/claude-opus-4-7-identified-a-writer-from-125-words-shed-never-published.html - Kelsey Piper, a writer at Vox's Future Perfect, tested Anthropic's Claude Opus 4.7 by inputting 125 words of an unpublished political column. The AI model identified her as the author, even though she hadn't logged in and the test was run in Incognito mode. The same model identified her from a school progress report she'd written about a student's Pokémon essays, a genre entirely outside her published work, and from a movie review of a 1942 WWII comedy she'd never publicly reviewed. It took 500 words of unpublished fiction to reach the same conclusion, and it also identified her from a 15-year-old college application essay. ChatGPT and Gemini mostly guessed wrong where Opus 4.7 succeeded. Piper writes in The Argument that anyone who has written prolifically under their real name has probably lost meaningful anonymity. She tested friends with minimal online presence, and Claude failed to identify them — but it did guess close mutual friends from the same social circle, picking up stylistic tics that spread through communities. The threshold for deanonymization will probably drop as models improve and training data grows.
- https://www.tomsguide.com/ai/i-tested-anthropics-new-claude-opus-4-7-and-its-the-first-ai-that-actually-reasons-through-tasks - The article reviews Anthropic’s latest AI update, Claude Opus 4.7, highlighting its marked improvement in reasoning and task execution. The author rigorously tested the new model using diverse prompts and found it significantly more precise, autonomous, and discerning than earlier AI tools. Noteworthy capabilities include building a fully functional task-tracking web app without needing clarifying questions, conducting self-verifying research with accurate source validation, and analyzing visual content to suggest practical home improvements. The model also displays human-like taste and judgment, providing realistic cover letters and app designs, and shows impressive document analysis, extracting insights from PDFs and flagging inconsistencies. When faced with complex decisions, such as choosing between job offers, Claude enhances decision-making by asking meaningful follow-up questions and offering tailored recommendations. Overall, Opus 4.7 transitions from being a reactive chatbot to a strategic partner, showcasing reasoning, self-awareness, and creative thinking, albeit with a higher computational cost. The author deems it the most advanced publicly available AI yet.
- https://www.itpro.com/security/anthropic-claude-opus-claude-mythos-cyber-capabilities - Anthropic has released its new AI model, Opus 4.7, touting significant enhancements in coding and knowledge work, particularly excelling at complex and long-duration software engineering tasks. Users report that the model can now autonomously handle high-difficulty coding projects with minimal supervision. However, in light of concerns following the limited release of Claude Mythos—Anthropic's most powerful AI capable of identifying thousands of zero-day vulnerabilities—the company intentionally reduced Opus 4.7's cybersecurity capabilities. Using new training techniques, Anthropic “differentially reduced” the model’s potential for misuse in cybersecurity contexts, and added safeguards to detect and block high-risk or prohibited requests. These changes are part of an effort to responsibly manage the release of powerful AI models like Mythos, whose cyber strengths remain gated through Project Glasswing. While Mythos demonstrated a notable ability to autonomously perform certain cyberattacks in testing, external evaluations like that from the UK’s AI Security Institute stress its limitations compared to human capabilities. Anthropic has invited security professionals to join its Cyber Verification Program to use Opus 4.7 for legitimate purposes. This marks a shift toward specialized AI models, balancing innovation with safety amid increasing concerns over misuse by malicious actors.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article was published on April 27, 2026, which is recent. However, similar discussions about AI's ability to identify authorship from unpublished texts have appeared in other sources, such as Boing Boing on April 21, 2026. (boingboing.net) This suggests that while the topic is current, the specific content may not be entirely original.
Quotes check
Score: 7
Notes: The article references Kelsey Piper's essay in The Argument, but the exact quotes from Piper are not provided. Without direct access to Piper's original essay, it's challenging to verify the accuracy and context of the quotes used in the article.
Source reliability
Score: 6
Notes: Techdirt is a technology-focused news site known for its commentary on digital rights and policy. While it is a reputable source within its niche, it is not as widely recognized as major news organizations like the BBC or Reuters. This may affect the perceived reliability of the information presented.
Plausibility check
Score: 8
Notes: The claims about AI models identifying authorship from short excerpts of text are plausible, given the advancements in AI and machine learning. However, the article does not provide detailed evidence or studies to support these claims, which raises questions about the robustness of the findings.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: While the article addresses a timely and relevant topic, there are concerns regarding the originality of the content, the ability to verify quotes, and the independence of the verification sources. These factors contribute to a medium level of confidence in the article's overall reliability.