In their macro-narratives about Chinese Internet companies, English-language media outlets routinely accuse them of participating in the Chinese government's censorship.
But in fact, Weibo and WeChat, just like Facebook, face hundreds of millions of rumors and pieces of fake news that could damage their users' lives and property.
The Internet in China, like the Internet in the US or any other country, is full of rumors and fake news, and only a small fraction of them touch the red lines set by the Chinese government.
According to the 2019 Internet Rumor Management Report released by Tencent in December 2019, the company handled a total of 84,000 rumors in 2018, and the articles written to combat them have been read 110 million times.
These rumors fall into three main categories. The first is health and medical rumors or fake news, for example, “malaria can fight cancer,” “cooked fruit has medicinal effects,” “vitamin D and calcium tablets can lead to cancer,” and so on.
The second category is “food safety” rumors, such as “Chinese people's favorite chicken is carcinogenic,” “children who eat lychees on an empty stomach can die,” “a low-salt diet is actually more unhealthy,” and so on.
The third category is rumors about “public events,” such as “please call the police if you see this woman, she is a human trafficker who has been on the run for 30 years,” “the national highway will be toll-free during the following periods,” “the pronunciation of these Chinese characters will change next year,” and so on.
Weibo, Today’s Headline, and Baidu face the same problem as Tencent: how to deal with rumors that do not clearly violate Chinese law.
A clear understanding of how they do this may be helpful to Twitter, Google, and Facebook in handling their own fake news problems.
How do Chinese Internet companies tell whether a piece of content is a rumor or fake news?
WeChat and Weibo mark rumors through a mix of reader reports, artificial intelligence, authoritative media, and voluntary statements by individuals.
To put it simply, any WeChat user can use a report button to flag content he or she believes is fake news.
Upstream, WeChat has set up a rumor-refuting center, which scientific research institutions, authoritative media, government departments, hospitals, prestigious doctors, scholars, media professionals, and others can join.
The reported content accumulates in WeChat's back end, and once it reaches a certain volume, WeChat conducts a simple initial manual review of the reports. If this initial review concludes that the content is likely a rumor or fake news, it is pushed into the rumor-refuting center as a task.
People or organizations in the rumor-refuting center whose expertise may be relevant to the rumor will receive this task, and if they have time, they can verify that it is fake news and write a short article explaining the errors in the original content.
When the task is completed, the original article is removed from WeChat. But it is not simply deleted: it is replaced by the refutation written by the experts.
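The workflow just described can be sketched as a simple pipeline: reports accumulate per article, a threshold triggers manual review, and a completed fact-checking task replaces the original content. This is a hypothetical illustration only; the class, method names, and threshold below are assumptions, not Tencent's actual implementation.

```python
# Hypothetical sketch of WeChat's report -> review -> refutation pipeline.
# All names and the report threshold are assumptions for illustration.

REPORT_THRESHOLD = 100  # assumed number of reports before manual review

class RumorPipeline:
    def __init__(self):
        self.report_counts = {}   # article_id -> number of user reports
        self.review_queue = []    # articles awaiting initial manual review
        self.articles = {}        # article_id -> currently visible content

    def publish(self, article_id, content):
        self.articles[article_id] = content

    def report(self, article_id):
        """A reader presses the report button on an article."""
        self.report_counts[article_id] = self.report_counts.get(article_id, 0) + 1
        if self.report_counts[article_id] == REPORT_THRESHOLD:
            # Threshold reached: queue the article for initial manual review,
            # after which it may become a task in the rumor-refuting center.
            self.review_queue.append(article_id)

    def complete_fact_check(self, article_id, refutation):
        """An expert finishes the fact-checking task: the original article
        is not simply deleted but replaced by the expert's refutation."""
        self.articles[article_id] = refutation
```

The key design point mirrored here is the last step: replacing rather than deleting, so that anyone revisiting the original link sees the correction instead of a dead page.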
For recurrent rumors and fake news, WeChat marks them as rumors directly through artificial intelligence and links to previously published refutation articles.
Of course, the people and organizations in the rumor-refuting center can also proactively mark content as a rumor before it has accumulated enough reports.
People and organizations in the rumor-refuting center receive no cash rewards directly from Tencent; they gain only reputation and traffic.
On Weibo and WeChat, fact-checking content exists in the form of articles and is given extra recommendations by the algorithm. In the process, the organizations and individuals who write fact-checks can earn followers and advertising revenue in the same way as any other content publisher.
According to the above-mentioned report, 774 organizations participated in WeChat's rumor-refuting center in 2019. This number appears to be higher than Facebook's.
“Authority” is a vague concept, and WeChat and Weibo usually do not rank which of the institutions and individuals qualified for fact-checking is more authoritative. In most cases, which fact-check is displayed depends on who wrote one first.
In the long term, however, they use a model similar to Quora's, so that the most plausible explanation is displayed in a more prominent place.
How do they eliminate the influence of rumors and fake news that have already spread?
One of the big problems in refuting rumors is that by the time a rumor is corrected, it has already spread widely.
People don't usually share fact-checking content the way they share rumors, so Weibo and WeChat have each adopted their own approach to invalidating fake news that has already spread.
On Weibo, if the rumor or fake news itself does not violate the law or raise real-world safety issues, it will be retained and can even continue to be retweeted. But a prominent notice appears at the top of the post; the wording varies, but in most cases it informs the reader that “this Weibo's content is wrong” and contains a link.
The link points to a long article explaining what errors the original post contained. If content marked as “fake news” is later partly confirmed to be true, the link is replaced with a new one.
When it comes to public events, this reversal may occur several times. For example, in a typical case of serious domestic violence, the victim first posts an accusation, then the suspect may claim that the victim faked everything, and finally the police may release the results of their investigation.
In such cases, Weibo replaces the link with a special hashtag. Within the hashtag, posts from all parties involved in the event are manually pinned to the top, ensuring that no user sees only one side's account simply because the others received less attention.
Facebook launched a similar feature in 2016, four years after Weibo, and does not use it frequently.
On WeChat, Tencent offers two ways to eliminate the impact of rumors or fake news: “Tencent refuting rumors”(腾讯较真辟谣) and “rumor filter”(谣言过滤器).
“Tencent refuting rumors”: this is a Mini Program running on WeChat that contains a large number of articles from third-party authorities correcting existing rumors. Users can search by keyword for corrections, submit their own questions about a rumor, or talk to a chatbot to verify whether a piece of content is a rumor.
“Rumor filter”: this is an official Tencent account that, once a user follows it and grants it special permissions, silently tracks the user's reading history on WeChat. If an article the user has read is later marked as fake news, the user receives a notification from the “rumor filter” containing the correction to the original fake news.
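The “rumor filter” mechanism, as described, amounts to remembering who read each article and pushing a correction to those readers if the article is later flagged. A minimal sketch, assuming hypothetical names and simple in-memory storage:

```python
# Hypothetical sketch of the "rumor filter": remember the readers of each
# article, and when an article is later marked as fake news, push the
# correction to everyone who read it. All names are assumptions.

class RumorFilter:
    def __init__(self):
        self.readers = {}        # article_id -> set of user_ids who read it
        self.notifications = {}  # user_id -> list of correction messages

    def record_read(self, user_id, article_id):
        """Called when a subscribed user reads an article in WeChat."""
        self.readers.setdefault(article_id, set()).add(user_id)

    def mark_fake(self, article_id, correction):
        """Article later judged to be fake news: notify all past readers."""
        for user_id in self.readers.get(article_id, set()):
            self.notifications.setdefault(user_id, []).append(correction)
```

The point of the design is retroactivity: corrections reach exactly the people who were exposed to the original fake news, rather than a generic audience.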
Baidu and ByteDance's news product Today's Headline use a method similar to Tencent's “rumor filter,” actively pushing fact-checking information to users who have read the fake news.
Facebook didn't have this feature until COVID-19. As a result, fake news could plant wrong information in the minds of many readers whom the fact-checkers could not reach.
The principle behind punishing rumor publishers on the Chinese Internet is to make them lose their profits.
First, a brief introduction to how content is distributed within WeChat.
WeChat is built primarily for ordinary users. Everyone has a feed called WeChat Moments. You can post your content there and your friends can see it, just like on Facebook, but they cannot forward it.
Only posts from WeChat official accounts can be forwarded, which is similar to a Facebook Page. But unlike a Facebook Page, creating a WeChat official account requires strict real-name or corporate authentication. Content posted by an official account can be forwarded to a chat, a group chat, or the user's own Moments.
WeChat also allows you to post links from outside WeChat in chats or Moments; when you click on these links, the web page opens in WeChat's in-app browser.
Basically, if you want to get traffic and revenue from WeChat, you need a WeChat official account. And WeChat’s punishment for rumor publishers is to stop you from getting traffic or revenue.
WeChat has three different ways of dealing with fake news from these three different content sources:
WeChat Moments posts by individual users
Because individual Moments posts cannot be forwarded, WeChat usually does not deal with fake news posted there, unless the post is screenshotted and widely circulated through chats or other apps. In that case, the original publisher of the post containing fake news may be punished by law rather than by Tencent.
Fake news posted by WeChat Official Account
Most of the content widely spread on WeChat comes from official accounts, because running one is the only way to get traffic on WeChat, attract other people's attention, and profit from it.
When an official account publishes content that is judged to be a rumor or fake news, it is punished step by step.
First, it is suspended from advertising revenue sharing and other commercial tools; then its publishing function is temporarily disabled; and finally it is permanently disabled.
Most official accounts stop posting rumors at the first step, because the main reason they post rumors is to attract traffic and earn advertising revenue.
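The escalation described above behaves like a simple strike counter: each confirmed rumor moves the account one penalty step further. A hypothetical sketch follows; the step labels mirror the article, while the one-strike-per-rumor escalation is an assumption for illustration:

```python
# Hypothetical sketch of the step-by-step penalties for an official account
# that repeatedly publishes rumors. Escalation granularity is an assumption.

PENALTY_STEPS = [
    "commercial tools suspended",        # step 1: no more ad revenue sharing
    "publishing temporarily disabled",   # step 2
    "publishing permanently disabled",   # step 3
]

class OfficialAccount:
    def __init__(self, name):
        self.name = name
        self.strikes = 0
        self.status = "normal"

    def confirmed_rumor(self):
        """Each confirmed rumor escalates the penalty by one step."""
        if self.strikes < len(PENALTY_STEPS):
            self.status = PENALTY_STEPS[self.strikes]
            self.strikes += 1
```

Because the first step already cuts off revenue, most accounts rationally stop at one strike, which is the behavior the article reports.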
Links from outside WeChat
WeChat's in-app browser has a filtering function that comes into play in the following situations:
- The web page is not marked as a rumor, but its content involves “cash, money transfers, financial management, investment, WeChat passwords,” and so on. The page is allowed to load, but red text appears at the top of the browser warning the user that the page may contain fraudulent content.
- The web page has been marked as a rumor by a large number of users: the page is not allowed to load.
- Multiple pages on the same domain have been marked as rumors by a large number of users: no page on that domain is allowed to load until the domain's owner applies to have it unfrozen.
In all these cases, users can still open the page in a browser other than WeChat by copying the URL. But for the elderly, or for children with low digital literacy, this is a difficult operation, and the friction prevents considerable losses.
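The three filter rules above can be sketched as a single decision function. The keyword list, report thresholds, and function names here are assumptions for illustration, not WeChat's real rules:

```python
# Hypothetical sketch of the in-app browser's URL filter. Thresholds,
# keywords, and names are assumptions, not WeChat's actual values.
from urllib.parse import urlparse

FRAUD_KEYWORDS = {"cash", "money transfer", "financial management",
                  "investment", "WeChat password"}
RUMOR_THRESHOLD = 1000  # assumed report count that blocks a single page

def check_url(url, page_text, page_reports, domain_flagged_pages):
    """Return one of: 'block_domain', 'block_page', 'warn', 'allow'.

    page_reports: number of users who reported this page as a rumor.
    domain_flagged_pages: dict mapping domain -> number of flagged pages.
    """
    domain = urlparse(url).netloc
    if domain_flagged_pages.get(domain, 0) > 1:
        return "block_domain"   # multiple flagged pages on the same domain
    if page_reports >= RUMOR_THRESHOLD:
        return "block_page"     # this page itself widely reported as a rumor
    if any(kw in page_text for kw in FRAUD_KEYWORDS):
        return "warn"           # load the page, but show a red fraud warning
    return "allow"
```

Note the ordering: domain-level blocks take precedence over page-level blocks, and the fraud warning is the mildest outcome since the page still loads.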
Considering that WeChat is where China's Internet traffic is most concentrated, it has the ability to police external fake news to a certain extent: if you run an independent website in China and do not want to be blocked by WeChat, you need to make sure your site is not marked as a rumor source by WeChat.
Weibo adopted stricter measures in August 2020: it requires all webmasters who want their pages to open in Weibo's in-app browser to register under their real names, to ensure that Weibo users do not see fake news and rumors (and, of course, pornographic content) from third parties in the in-app browser.
Facebook actually has a similar function, but it seems to be used only to block spammers.
Although China's platforms have adopted more active anti-rumor strategies than Facebook and Twitter, there is still a large amount of fake news and rumors on the Chinese Internet, because spreading rumors seems, in fact, to be human nature.
The main group spreading rumors and fake news on the Chinese Internet is the elderly, but they do not necessarily believe the fake news they share.
Another Tencent survey on the digital literacy of the elderly shows that in double-blind tests, the elderly and the young are almost equally able to identify fake news. However, even when they know the content is not credible, some elderly people still share rumors or fake news with old friends or their children as a way of expressing concern for others. This is mainly because they are out of touch with the real world and lack other sources of social currency.
As a result, Chinese Internet companies focus on “reducing harm” rather than “stopping the spread of rumors.” This can be seen in WeChat's punishments: the mechanism makes producing rumors unprofitable rather than trying to block their spread outright.
In addition, Chinese Internet companies have their own payment systems, which makes scams based on fake news financially reversible.
For example, in 2016, an article written by the father of a terminally ill child was read more than a million times on WeChat; in it, the father hinted that his child was dying because he had no money to pay for treatment. Readers initially donated about 150,000 yuan through the channels provided on WeChat.
But it soon emerged that the child had already received adequate treatment, which had not worked; money was not the obstacle. The father's article appears to have been aimed at using the child's illness to obtain donations to improve the family's life.
This angered netizens, and Tencent later froze the author's financial account on WeChat and returned the money to the donors.
This seems to be a revelation for Facebook.