The collapse of startups is creating a new revenue stream as messy workplace data, from Slack messages to emails, is sold to AI developers, raising ethical and privacy concerns in the process.
A growing number of collapsed startups are discovering that the by-products of everyday work can be turned into a lucrative asset for the artificial intelligence industry. What once looked like administrative clutter, from Slack conversations and Jira tickets to email chains and cloud archives, is now being sold as training material for AI systems that need to learn how real businesses operate.
Forbes reported that Shanna Johnson, who shut down the transcription company cielo24, was able to monetise more than a decade of internal records through SimpleClosure, a firm that manages company wind-downs. The material included years of workplace messaging, email correspondence and large stores of files documenting the company’s operations. The sale reportedly brought in hundreds of thousands of dollars, suggesting that the digital remnants of a failed business can hold unexpected value.
The interest is being driven by a wider shift in AI development. According to Forbes, model makers have already consumed much of the public web and are now seeking data that better reflects how people actually work inside organisations. Ali Ansari of micro1 told the publication that such data helps AI systems confront the messiness of real environments, while SimpleClosure chief executive Dori Yona described demand from AI labs as resembling a gold rush. The company is now preparing Asset Hub, a marketplace for code repositories, message archives and similar corporate records.
The economics are striking. AfterDawn reported that payments for these data sets can range from $10,000 to $100,000 depending on the scale of the archive and the number of employees involved. SimpleClosure has reportedly completed nearly 100 deals in the past year and recovered more than $1 million for founders. But the trade-off is obvious: even when personal details are stripped out, workplace messages often contain information that can still identify people or reveal sensitive business behaviour.
That is where the backlash is likely to sharpen. Marc Rotenberg of the Center for AI and Digital Policy told Forbes that selling internal communications to third parties raises substantial privacy concerns, particularly because workers almost certainly never expected their messages to be repurposed in this way. His organisation has urged US regulators to scrutinise emerging AI business practices, and the issue is likely to draw further attention as more companies look for ways to profit from the digital traces they leave behind when they shut their doors.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [2]
- Paragraph 3: [2], [3]
- Paragraph 4: [3]
- Paragraph 5: [2]
Source: Noah Wire Services
Verification / Sources
- https://www.breitbart.com/tech/2026/04/19/companies-are-selling-workers-private-messages-and-emails-as-ai-training-data/ - Please view link - unable to access data
- https://www.forbes.com/sites/annatong/2026/04/16/ais-new-training-data-your-old-work-slacks-and-emails/ - Forbes reports on defunct startups liquidating their Slack archives, Jira tickets, and email threads as premium training data for AI labs. Shanna Johnson, CEO of cielo24, partnered with SimpleClosure to sell her company's 13-year digital footprint, generating significant revenue. This practice highlights AI companies' increasing reliance on authentic workplace data to develop competent AI agents, as publicly available internet content becomes insufficient. SimpleClosure's CEO, Dori Yona, describes the demand from AI companies as a 'gold rush,' with payments ranging from $10,000 to $100,000 per company. The practice raises significant privacy concerns, with experts questioning the ethics of selling internal communications to third parties.
- https://www.afterdawn.com/amp/news/2026/04/18/defunct-companies-sell-employees-emails-slack-messages - AfterDawn reports that defunct companies are selling their former employees' email and Slack messages to AI companies. This practice has emerged as a new revenue stream for companies that have ceased operations. AI companies are interested in real, company-internal discussions to train language models, as there is limited public material available from real working life. Payments for company message collections range between $10,000 and $100,000, depending on the size of the message history and the number of employees. Privacy issues are significant, as messages likely contain personally identifiable information, even if names and contact details have been removed. The European Union's General Data Protection Regulation (GDPR) may not permit such trade in Europe.
- https://www.basilai.app/articles/2025-12-03-slack-ai-training-private-messages-employees-discover-corporate-surveillance.html - Basil AI discusses the discovery that Slack's AI features are analyzing millions of private workplace conversations, including those marked as 'private.' When companies enable Slack AI, the system gains access to all messages, files, and conversations within the workspace, potentially using them as training data for machine learning algorithms. This raises concerns about digital surveillance and the extent to which employee communications are being monitored and utilized for AI development without explicit consent.
- https://www.computerworld.com/article/3808822/linkedin-sued-for-training-ai-on-users-private-messages.html - Computerworld reports on a lawsuit filed in California accusing LinkedIn of using private messages on its platform to train AI models. The lawsuit alleges that in August 2024, LinkedIn introduced a new privacy setting that automatically enrolled users in a program allowing their personal data to be used for AI training. LinkedIn denies the allegations, describing them as 'false and unfounded,' and states that user data sharing for AI purposes has not been enabled in the UK, the European Economic Area, or Switzerland.
- https://www.aljazeera.com/news/2025/11/24/are-tech-companies-using-your-private-data-to-train-ai-models - Al Jazeera examines how tech companies are using users' private data to train AI models. Meta, Google, and LinkedIn have rolled out AI features that can access users' public profiles or emails. While Google and LinkedIn offer users ways to opt out of AI features, Meta's AI tool provides no means for users to decline. Meta collects user content set to 'public' mode, including photos, posts, comments, and reels, for AI training purposes. The article highlights the lack of transparency and user control over how personal data is utilized in AI development.
- https://www.foxbusiness.com/fox-news-tech/major-companies-ai-snoop-employees-messages-report-reveals - Fox Business reports that companies like Walmart, Delta, T-Mobile, Chevron, and Starbucks are reportedly monitoring employee conversations on messaging apps using software from an AI startup called 'Aware.' This practice raises concerns about employee privacy and the extent to which companies are using AI to monitor internal communications. The use of AI to 'snoop' through employees' messages highlights the growing intersection of workplace surveillance and artificial intelligence technologies.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article was published on April 19, 2026, and references events from April 16, 2026, indicating recent developments. However, similar reports have appeared in other reputable sources, such as Forbes on April 16, 2026, and AfterDawn on April 18, 2026, suggesting that the narrative is not entirely original. (forbes.com)
Quotes check
Score: 7
Notes: The article includes direct quotes from individuals like Ali Ansari and Dori Yona. While these quotes are attributed, they are also present in the Forbes article from April 16, 2026, raising concerns about the originality of the content. (forbes.com)
Source reliability
Score: 6
Notes: The article originates from Breitbart, a publication known for its conservative stance. While it cites reputable sources like Forbes and AfterDawn, the reliance on a single source with a known bias may affect the overall reliability of the information presented.
Plausibility check
Score: 9
Notes: The claims about defunct companies selling internal communications as AI training data are plausible and align with recent industry trends. Similar practices have been reported by other reputable sources, such as Forbes and AfterDawn. (forbes.com)
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents plausible claims about defunct companies selling internal communications as AI training data, supported by reports from reputable sources like Forbes and AfterDawn. However, the heavy reliance on a single source with a known bias (Breitbart) and the lack of additional independent verification sources raise concerns about the overall reliability and independence of the information presented. Additionally, the presence of similar reports in other reputable sources suggests that the content may not be entirely original. Given these factors, the article does not meet the necessary standards for publication under our editorial guidelines.