Emerging cases of digital employee clones in China spark debate over privacy, consent, and the future of work as AI advancements blur lines between technology and human identity.
China’s latest AI workplace experiment has triggered intense debate online after examples emerged of former employees being digitally “kept on” after leaving their jobs. The most discussed case began in March, when a 24-year-old engineer at the Shanghai Artificial Intelligence Laboratory built a project called "colleague.skill" in just four hours, feeding it internal chats, emails and documents so it could mirror how staff made technical decisions and communicated. By 20 April, the project had attracted 15,500 GitHub stars, a level of attention reserved for only a tiny fraction of the platform’s projects. The idea was framed as a way to preserve corporate knowledge, but many Chinese users saw something far more unsettling: a worker’s digital double continuing the job after the human had gone.
The reaction sharpened when a second case surfaced involving Zhang Xuefeng, a well-known higher-education adviser who died in March. According to the material described online, a developer used his published articles, interviews, speeches and conversations to build an AI chatbot that allowed students and other users to keep talking to a version of him after his death. The project was reportedly created without the consent of Zhang’s family or his company, adding a sharper privacy and ethics dimension to the backlash.
Chinese media reports and tech coverage suggest the controversy reflects how quickly AI tools have moved from specialist use to ordinary office work in China. Popular workplace communication platforms now offer official plug-ins that make it easier to deploy open-source and commercial models, while some companies have gone further and require staff to use AI products, even tying monthly token use to performance assessments. OfficeChai said that by the end of 2025, 60 per cent of Chinese employees were using AI at least once a week, compared with 37 per cent in the United States.
The cases are not isolated. ECNS reported in April that a gaming company in Shandong created a digital employee to carry on handling consultations, interview scheduling and presentation work after an HR specialist resigned, with the former worker’s consent. Separate reports from other outlets have described similar AI clones and “digital humans” being built from employees’ work histories, as well as celebrity-style avatars that can be queried for advice. The commercial appeal is obvious: companies can preserve know-how and keep operations running. But the backlash shows that many people are not yet comfortable with the idea of a bot inheriting a job, a voice or even a reputation.
Europe offers a very different legal backdrop. Under the EU’s GDPR, employee chats, emails and work documents are treated as personal data, and firms generally cannot reuse them without consent. That is why cases involving OpenAI in Italy and X in Ireland drew attention, even though neither ended in a penalty. For now, that framework means European workers are better insulated from the most aggressive versions of this practice, but the pace of AI development is fast enough that new grey areas are likely to emerge.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [3]
- Paragraph 2: [3], [6]
- Paragraph 3: [2], [4]
- Paragraph 4: [2], [3], [6], [7]
- Paragraph 5: [1], [5]
Source: Noah Wire Services
Verification / Sources
- https://index.hu/kulfold/2026/04/25/mesterseges-intelligencia-kina-munkahely-tartalom-atadas-avatar/ - Please view link - unable to access data
- https://www.ecns.cn/m/news/sci-tech/2026-04-07/detail-ihfcmemi3020223.shtml - A Chinese gaming company in Shandong Province has developed an AI-powered 'digital employee' to continue handling tasks after a former HR specialist's resignation. The digital avatar manages consultations, interview scheduling, and prepares presentations and spreadsheets. The AI was trained using the former employee's work data, with their consent, to replicate their communication style and decision-making processes. (ecns.cn)
- https://www.scmp.com/news/people-culture/trending-china/article/3349365/chinese-firm-slammed-using-ex-employees-data-create-ai-human-continue-working - A Chinese company has faced criticism for creating an AI 'human' using a former employee's data to continue work post-resignation. The AI handles tasks like answering inquiries and scheduling meetings, sparking debates over consent and privacy. (scmp.com)
- https://www.moneycontrol.com/news/trends/chinese-company-builds-ai-clone-of-employee-keeps-him-working-after-resignation-13886667.html - A Chinese firm has developed an AI-powered digital avatar of a former HR employee to handle routine tasks after his resignation. The AI was trained on the employee's work data, raising concerns about consent, privacy, and job security. (moneycontrol.com)
- https://legalclarity.org/does-gdpr-apply-to-your-employee-data/ - The General Data Protection Regulation (GDPR) applies to employee data for any organisation, including those in the United States with employees in the European Union. The regulation governs how personal information is collected, used, and protected in the employment context. Companies with even a single employee in an EU country must ensure their human resources data procedures are compliant. (legalclarity.org)
- https://en.sedaily.com/international/2026/04/14/chinese-firm-deploys-ai-clone-of-former-employee-sparking - A Chinese gaming company has deployed an AI clone of a former employee, trained on their work data, to continue business operations. This has sparked controversy over privacy and consent, as the AI operates within the company's internal chat system. (en.sedaily.com)
- https://techxplore.com/news/2026-04-china-rein-ai-digital-humans.html - China is moving to govern the growing 'digital human' industry more tightly, issuing draft rules on how AI avatars are developed and deployed. The regulations aim to prevent harm to children, social instability, and the creation of avatars resembling individuals without their consent. (techxplore.com)
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 7
Notes: The article was published on April 25, 2026. Similar reports emerged in mid-April 2026, with the earliest known publication on April 19, 2026. (techxplore.com) The narrative appears to be original, but the proximity of publication dates suggests potential overlap or recycled content. (caixinglobal.com)
Quotes check
Score: 6
Notes: Direct quotes from the article cannot be independently verified. The absence of verifiable sources raises concerns about the authenticity of the quotes. (techxplore.com)
Source reliability
Score: 5
Notes: The article originates from Index.hu, a Hungarian news outlet. While it is a reputable source within Hungary, its international reach and recognition are limited. This raises questions about the source's reliability and potential biases. (techxplore.com)
Plausibility check
Score: 8
Notes: The claims about AI avatars being used to replicate employees' work after their departure are plausible and align with recent developments in AI applications in the workplace. (caixinglobal.com) However, the lack of independent verification sources diminishes the overall credibility.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents plausible claims about AI avatars replicating employees' work post-departure, aligning with recent AI developments. However, the lack of independently verifiable sources, reliance on unverifiable quotes, and the limited international recognition of the source raise significant concerns about the content's credibility and accuracy. (techxplore.com)