To inform, connect, and empower stakeholders in business, politics and society.
Global Neighbours GmbH/e.V., Johannesgasse 15/3/12, 1010 Vienna, Austria
+43 1 7146848
contact@globalneighbours.com

When Zhou Tianyi, a 24-year-old researcher at the Shanghai Artificial Intelligence Research Institute, built an open-source AI agent in late March, his aim was straightforward: to preserve his team’s institutional memory.
By importing daily collaboration data from workplace apps such as Feishu and DingTalk, the system — known as colleague.skill — could automate weekly reports, execute workflows and review code. Within days, the project had earned nearly 15,000 stars on GitHub.
But Zhou soon found that his invention was being pulled into a darker corporate narrative.
People began using phrases such as “refining colleagues,” he said, emphasizing that his intention had simply been to preserve knowledge, not replace workers.
Zhou’s discomfort underscores a broader transformation sweeping through workplaces worldwide in 2026 as technology firms and traditional enterprises aggressively invest in “skill distillation,” a process that captures employees’ operational habits, business logic, workflow rhythms and decision-making patterns and converts them into standardized AI systems capable of executing tasks independently.
The ultimate goal is to create digital employees that can replicate human roles.
Yet the open-source community’s reaction revealed a deeper societal yearning. Users began creating secondary projects to preserve interpersonal chemistry and emotional memories, realizing that what is lost when an employee leaves is not just technical know-how, but the unique working rhythm established with specific individuals.
The concept of distilling employee skills has ignited social controversy, exposing a stark divide between companies, AI service providers and employees.
Employers are increasingly viewing workers as a source of data and are seeking to compress labor costs, while AI service providers are racing to commercialize these newly digitized capabilities. Employees, meanwhile, are caught in the middle: eager to use AI to improve efficiency yet fearful of becoming little more than instruments training their own replacements, while largely excluded from sharing in the economic gains generated.
The shift is reshaping the modern workplace. By collecting data ranging from mouse movements to keyboard activity, companies are training AI systems on the expertise and work patterns of employees, raising concerns that some roles will eventually be automated. Western technology firms such as Meta Platforms Inc. and Amazon.com Inc. have already invested heavily in such strategies, even as layoffs across the sector fuel worries about AI-driven displacement. In China, the trend is raising new legal and ethical questions around data ownership, labor rights and employee consent.

Turning workers into AI
Automating workflows is not a new idea. Earlier generations of enterprise automation relied heavily on robotic process automation, or RPA, which used software bots and basic AI to simulate human interactions with graphical user interfaces. These systems automated repetitive, rule-based tasks such as predefined keystrokes, system navigation and data extraction.
But the technology has evolved rapidly. Yang Fangxian, founder of the open-source AI platform 53AI, compared traditional RPA systems with fixed mechanical arms on an assembly line that simply repeat preprogrammed actions. Modern AI systems, by contrast, possess what he described as a true “brain,” enabling them to understand context and make more nuanced judgments.
The enterprise playbook for distilling human skills generally follows four steps.

First, companies collect large volumes of data from employees, ranging from mouse trajectories and keyboard rhythms to internal communications and standard operating procedures. Second, the information is cleaned and stripped of sensitive details to create training samples. Third, companies train models to build reusable AI capabilities. Finally, the AI systems are deployed to handle routine work, while human employees are pushed toward supervisory or higher-level decision-making roles.
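The first three steps of that playbook amount to a data pipeline. The sketch below is a deliberately minimal illustration of how such a collect-clean-sample pipeline might look; the event fields, function names and redaction rule are assumptions for illustration, not any company's actual system.

```python
import re

# Step 1: raw activity events collected from an employee's workstation (illustrative).
raw_events = [
    {"type": "keystroke", "text": "email client: reply to alice@example.com re: Q3 budget"},
    {"type": "mouse", "text": "open dashboard -> export weekly report"},
    {"type": "chat", "text": "ping bob@example.com: approve invoice 1142"},
]

# A simple pattern for one kind of sensitive detail (email addresses).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(event):
    """Step 2: strip sensitive details from an event before it becomes training data."""
    return {**event, "text": EMAIL_RE.sub("[REDACTED]", event["text"])}

def to_sample(event):
    """Step 3: convert a cleaned event into a (context, action) training sample."""
    return {"context": event["type"], "action": event["text"]}

def build_training_set(events):
    """Run the collect -> clean -> sample portion of the pipeline."""
    return [to_sample(redact(e)) for e in events]

samples = build_training_set(raw_events)
```

A real deployment would scrub far more than email addresses (names, IDs, document contents) and feed the samples into model training (step 3) and agent deployment (step 4), which are beyond this sketch.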
A mid-level executive at a large software-as-a-service (SaaS) company said rapid advances in AI coding capabilities had caused a chain reaction across the industry. AI systems have evolved from simple predictive tools into multi-agent platforms capable of handling tasks such as data scraping, framework construction and tone optimization with minimal human intervention.
According to the executive, those technological breakthroughs encouraged companies to quietly begin testing employee data collection programs in 2024, expand them in 2025 and move toward large-scale workforce replacement in 2026.
Global technology firms are at the forefront of this trend.
On April 22, Meta, Facebook’s parent company, informed its American employees of a new internal project called the “Model Capability Initiative.” According to an internal notice, the program was designed to capture mouse clicks, keyboard inputs and screen context data to train AI agents. The notice said Meta’s latest large language model, Muse Spark, required faster development and the company needed to “make full use of our daily work.”
Employees said the monitoring systems were deployed before the announcement was issued, and workers were not given an opt-out option. The rollout coincided with another round of layoffs at Meta, intensifying concerns among employees that the expertise of departing workers was being preserved in the form of trainable AI assets.
An executive at a leading AI model company said the strategy mirrors earlier ambitions by Elon Musk’s Grok project to create AI employees capable of scaling reusable human expertise into deployable digital labor systems rather than merely serving as productivity tools.
AI upends the labor market
The economic implications of the shift are substantial.
A global survey of 10,000 executives released by the World Economic Forum in January painted a bleak outlook for workers. More than 54% of respondents said AI would significantly replace existing jobs, while 44.6% expected the technology to improve corporate profit margins. Only 12.1% believed AI would lead to higher wages.
The structural shift is already visible in China’s labor market.
Although overall hiring conditions in 2026 remain weaker than in the previous two years, demand for AI-related positions tied to large language models and algorithms has surged, along with salary premiums for qualified candidates. Recent graduates entering the technology sector said AI fluency and logical reasoning skills have become baseline job requirements.
According to a report by the McKinsey Global Institute, demand for “AI fluency” in job postings increased sevenfold over the past two years.
The World Economic Forum warned that if AI adoption continues to outpace workforce retraining, the resulting displacement could contribute to rising unemployment, weaker consumer confidence and broader social instability. To mitigate those risks, it urged companies to adopt “no regrets” strategies, including cross-generational work models that pair younger and older employees to strengthen AI adaptability.
A November McKinsey report estimated that AI technologies could automate 57% of working hours in the United States. Roles built around highly standardized and quantifiable tasks — such as basic customer service, data entry, financial accounting, preliminary contract review and routine quality inspection — are expected to face the greatest disruption.
Still, the transition may not prove to be entirely zero-sum.

The McKinsey report argued that the current wave of AI development is fundamentally about redesigning workflows rather than automating isolated tasks. In that case, the future workplace is less about machines replacing humans and more about collaboration among workers, AI agents and robots.
Because automation often targets individual tasks rather than entire occupations, McKinsey estimated that roughly 72% of existing human skills would remain relevant, even as the environments in which those skills are applied are being reshaped. By restructuring workflows and organizational systems around human-machine collaboration, the report estimated that the U.S. economy could unlock as much as $2.9 trillion in annual value by 2030.
For now, a more stable division of labor between humans and AI is beginning to emerge.
Li Jingmei, joint chief executive of Beijing Langboat Technology Co. Ltd., a large language model developer, said enterprise AI adoption is already moving workers away from pure execution roles and toward oversight. Tasks such as drafting minutes, for example, are increasingly handled by AI systems, while people focus on reviewing and validating the results.
Workers whose responsibilities consist primarily of standardized and measurable tasks remain the most vulnerable, Li said. By contrast, employees who develop “T-shaped” capabilities — combining deep industry expertise with broad communication skills, empathy and complex decision-making abilities — are likely to be harder to replace.
Software industry shrinks
The rise of autonomous AI agents is sending shockwaves through the technology sector as well, challenging the traditional SaaS business model built around standardized subscriptions and licenses.
In February, Anthropic’s release of an enterprise legal plug-in called Claude Cowork triggered a sharp sell-off in U.S. SaaS stocks, with the sector falling nearly 40% since the start of the year. Investors are increasingly embracing the concept of Result-as-a-Service, or RaaS, in which companies pay directly for measurable outcomes — such as generated sales leads or completed workflows — rather than for the software tools themselves.
“Previously, you bought software. Now, you buy a digital employee, which is pure productivity,” Yang, founder of 53AI, said.
The change has created opportunities for Chinese SaaS companies pivoting toward customized AI agents. But industry executives warn that the longer-term outlook for the sector is far more uncertain.
One technology executive predicted that as major AI firms continue rapidly improving general-purpose models, the market space available for middleware providers and SaaS companies could contract sharply. In that case, the executive said, the overall SaaS market could eventually shrink by roughly two thirds.
The pressure is already reshaping specialized professions such as software engineering and design.
“Hardly anyone writes code entirely by hand anymore,” the executive said, adding that more than 90% of developers now work more slowly than leading AI coding systems on many routine tasks.
The design industry is undergoing a similar transformation. AI tools are becoming deeply integrated into creative workflows, allowing non-specialists to complete tasks that previously required professional expertise while enabling digital employees to handle larger amounts of routine production work.
Even the rapidly expanding large AI model sector itself may not be immune.
Although major AI companies are currently engaged in aggressive hiring, some executives believe advances in self-improving AI systems could eventually reduce the industry’s own dependence on human engineers. Several overseas technology firms have begun limiting direct human involvement in parts of the model training process to accelerate iteration cycles, according to the executive.
“In the future, researchers working on large models may no longer participate directly in development,” the executive said. “Their role will increasingly focus on adjustment and monitoring.”
China draws legal lines
While U.S. markets have generally rewarded technology companies pursuing AI-driven efficiency gains with higher valuations, China’s stricter labor protection laws are creating significant legal friction.
In April, the Hangzhou Intermediate People’s Court concluded a closely watched case involving an employer that reduced a worker’s salary by 40% and later dismissed them after arguing that AI systems could perform the role more efficiently.
The court ruled the company’s actions illegal and ordered it to pay 261,000 yuan ($38,300) in compensation. Legal observers described the decision as China’s first effective judgment establishing that AI-driven technological upgrades do not constitute valid grounds for unilateral employment adjustments.
Shan Qidi, a lawyer at Zhejiang Kinding Law Firm, said a company’s proactive adoption of AI does not qualify as the kind of “objective significant change” required under Chinese labor law to justify terminating or substantially altering employment contracts.
The process of collecting employee data for AI training also raises complex questions concerning privacy rights, trade secrets and consent.
Because the relationship between employer and employee is inherently unequal, Shan said, companies face legal risks if workers are pressured into consenting to the collection and use of personal workplace data.
Legal experts warned that the risks become greater when AI systems are used to make automated decisions involving performance reviews, promotions or layoffs.
Song Xiaoran, a partner at Beijing Chance Bridge Law Firm, said using employee data to train AI systems goes beyond the normal scope of human resources management and therefore requires specific and clearly defined authorization. Employees, Song said, retain the right to request anonymization of personal data, restrict how information is used after resignation and prohibit the commercialization of their personal behavioral traits.
To reduce those risks, some developers are building stricter safeguards into enterprise AI systems.
Zhou said his open-source agent relies entirely on local processing to reduce the risk of data leaks, operates within predefined boundaries using hardware isolation and preserves ultimate human decision-making authority.
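The guardrails Zhou describes — predefined boundaries plus a human holding final decision authority — can be sketched as a simple permission gate. The action names and function signature below are hypothetical illustrations, not colleague.skill's actual API.

```python
# Predefined boundary: the agent may only ever attempt these actions (illustrative names).
ALLOWED_ACTIONS = {"draft_weekly_report", "summarize_meeting", "review_code"}

def run_agent_action(action, approved_by=None):
    """Execute an agent action only if it is inside the predefined boundary
    AND a named human has approved it — humans keep ultimate decision authority."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action '{action}' is outside the agent's boundary")
    if approved_by is None:
        raise PermissionError("human approval required before execution")
    # All processing stays local; nothing is sent to an external service.
    return f"{action} executed locally (approved by {approved_by})"
```

The design choice is that denial is the default: an action outside the whitelist, or one lacking a human sign-off, fails closed rather than open.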
While companies have the right to pursue greater efficiency, Shan said, businesses cannot treat employees as an unlimited source of exploitable training data without eventually facing serious legal consequences.
Liu Peilin contributed to this story.
Contact reporter Han Wei (weihan@caixin.com)
caixinglobal.com is the English-language online news portal of Chinese financial and business news media group Caixin. Global Neighbours is authorized to reprint this article.