Reporter’s Notebook: How Millions of Douyin Users Decide What Stays and What Disappears
By Guan Cong


A week ago, while scrolling through Douyin — China’s version of TikTok — I flagged a video I suspected of spreading false information. Instead of a simple report submission, however, I stumbled upon an option inviting users to participate in community moderation, an experiment in user-driven governance.
Curious, I clicked through. A quick identity verification, a multiple-choice test on platform policies, and I was in. Just like that, I had become a Douyin “community reviewer.”
Every day since, I have been assigned 20 short videos and asked a deceptively simple question: Should this content be recommended to others?
Behind the scenes, I am not alone. My reviewer dashboard revealed that there are 5.96 million people like me — everyday users enlisted to help shape what gets seen and what disappears into the algorithmic void. Douyin, owned by ByteDance Ltd., considers our judgments when deciding whether certain videos should be denied exposure.
What I didn’t expect was how murky this job would be. The videos I reviewed mostly came from a gray zone — not egregiously violating the rules but not entirely benign either. Their quality was generally low: imitations of viral skits, daily life vlogs and spliced movie clips. But mixed in were videos containing statements and scenes that demanded something more than casual scrutiny.
Take, for example, a popular genre known as “follow-alongs,” where users re-enact trending jokes or dance routines. The ones I reviewed contained no clear rule violations. Why had they been flagged? I had no way of knowing.
After I cast my vote, a red-and-blue graph appeared, showing the percentage of reviewers who had voted “yes” or “no.” Over time, I noticed patterns: Reviewers were quick to reject advertisements. They were sharp at spotting crude content. They recoiled at panic-inducing claims.
But often, they weren’t simply deciding whether a video should be recommended — they were answering a different question entirely: Should this video be taken down?
In my first week, I reviewed 140 videos. About half were too ambiguous to call. Many made assertions that sounded factual but were nearly impossible for ordinary users like me to verify.
One particularly jarring video claimed: “In 1420, Emperor Zhu Di ordered the skinning alive of 3,000 palace maids.” Seventy-five percent of reviewers opposed its recommendation, citing it as unverified and grotesque. I tried searching for corroboration across different search engines. Nothing. Finally, I consulted a university professor specializing in Chinese history. “How would an ordinary person verify something like this?” I asked.
“I wouldn’t know either,” he admitted. “Maybe check The History of the Ming Dynasty? But in today’s Chinese internet landscape, good luck finding authoritative information.”
Some misinformation was easier to debunk. A video alleging that “North Korean artillery bombed eight U.S. and South Korean tanks” received 41% approval from reviewers. Another, claiming that SpaceX’s latest Starship test failure was due to the company being “taken over by Indian engineers,” garnered 64% support. Many reviewers commented that these videos helped “understand global affairs.” Both were ultimately deemed fit for recommendation.
Misinformation filters seemed to weaken when videos dealt with serious public issues. Unverified claims like “A fire at Yongbo Supermarket in Jixi County started in an illegally built structure” or “A fraud scheme in Lingbao Jetour Auto scammed 60 villagers” were widely approved, with users reasoning that such content “raised awareness” or “helped with rights protection.”
When 65% of reviewers approved a video claiming “The town government failed to install a promised fence along the river,” one dissenting user suggested referring to official government statements or professional media reports on complex public issues.
The problem is systemic. In the traditional media ecosystem, consumers place a degree of trust in news organizations. On social media, that trust is fractured, mediated through an opaque recommendation algorithm that doesn’t take responsibility for accuracy. Simply appearing in a feed confers credibility, yet platforms are largely shielded from liability when falsehoods spread.
This creates fertile ground for misinformation. Some videos appeal directly to users’ interests. Others are odd trivia. Some are framed as historical facts, even when they aren’t.
How do tales of “a massacre 600 years ago,” “North Korean artillery,” or “an Indian-engineered SpaceX failure” influence how users interpret contemporary events? It is a question for media scholars — but it is also one the public should be asking.
Social platforms rarely intervene unless misinformation causes a public relations crisis or financial loss. Over the past six months, many of the biggest social media controversies in China have originated from the same gray area I was reviewing. In an ecosystem where attention is currency, misinformation thrives because platforms neither effectively filter it nor teach users how to discern truth from fiction. Instead, they provide a middle-ground solution: the illusion of participatory content governance.
Algorithms predict users’ behavior, but they don’t recognize opinions or comprehend meaning. Authenticity remains an afterthought in machine moderation. Reviewing content, I often felt like I was shifting between different identities — sometimes a journalist scrutinizing facts, sometimes a concerned parent, sometimes just another casual user.
Who, in the end, gets to decide what people see? Can AI plus millions of “volunteers” really fill this role? Or, as Douyin once told the press, should professional journalists be brought into the process?
Platforms promote user engagement but shy away from fostering media literacy. Under government mandates, Douyin will remind users to take screen breaks and will sometimes flag videos that use AI-generated clips. But it will never remind them to question the accuracy of what they are watching.
Maybe the entire community review program is a repackaged game — one that neither reflects public opinion nor resolves the underlying conflicts between Douyin and its critics. What it does reveal, however, is the algorithmic chaos shaping our digital reality.
As generative AI models such as DeepSeek become wildly popular, users are forming a habit of “just asking AI” for everything. But AI hallucinates. And its hallucinations aren’t random: they stem from the information it is trained on. The problem? Much of that information is already unverifiable.
Raising media literacy is not just an individual responsibility but a shared burden between users and platforms. If content ecosystems become dominated by unverifiable noise, the consequences will be felt on both sides.
Contact translator Denise Jia (huijuanjia@caixin.com)
caixinglobal.com is the English-language online news portal of Chinese financial and business news media group Caixin. Global Neighbours is authorized to reprint this article.
Image: sorapop – stock.adobe.com