A Blog by Jonathan Low


Mar 9, 2020

Chinese Tech Companies Are Selling Their AI-Based Censorship Services Globally

Effective Chinese suppression of news about the coronavirus is expected to boost global sales. JL

Shan Li reports in the Wall Street Journal:

Because of China’s demands that online platforms remove objectionable content, including anything politically sensitive, Chinese tech giants like Alibaba and Tencent are developing sophisticated content-moderation systems that target political content—and are selling those systems. $60,000 will buy a five-year subscription flagging up to 15 million items monthly. Content moderation in China will grow to a $70 billion industry over the next three to five years, and employ one million people. “The global norm is trending toward censorship over expression.”
For China’s tech companies, content-moderation tools are becoming a big business, and one that could spread Chinese-style censorship around the world.
U.S. tech companies already use content-moderation systems to screen out pornography, hate speech and extreme violence online, but they have largely resisted using them to filter political content.
Because of China’s demands that online platforms remove all objectionable content, including anything politically sensitive, Chinese companies are taking a much different road. Tech giants like Alibaba Group Holding Ltd. and Tencent Holdings Ltd. are developing sophisticated content-moderation systems that intentionally target political content—and are selling those systems to anyone who wants to use them.

Most of the clients are other Chinese companies that partly use them to avoid attracting Beijing’s ire, scrubbing out mentions of President Xi Jinping or other Chinese leaders, and to manage debate on sensitive topics such as recent Hong Kong protests or the 1989 massacre in Tiananmen Square, according to people who use or provide the services. Others can trawl for politically sensitive comments in languages common to restive regions like Xinjiang and Tibet.
The widening availability of such tools makes China’s largest tech companies bigger partners in Beijing’s censorship program and allows companies to more efficiently and cheaply police digital content on their own, based on regulators’ guidelines.
It also increases the odds other countries will use content moderation the same way, as Chinese systems become easier to obtain.
“The global norm is trending toward censorship over expression,” said Matt Perault, a former director of public policy at Facebook Inc. and now director of Duke University’s Center on Science & Technology Policy. “Many countries looking to import the tools and policy to govern their internet will pick China’s off-the-shelf technology.”
The coronavirus epidemic, which has infected more than 100,000 people world-wide, intensified debate around China’s methods of censorship. While President Xi has repeatedly called on officials to “strengthen the guidance of public opinions,” critics of the government said officials suppressed information that could have helped contain the virus. China says it was quick to share information on the epidemic.
A study released this week by University of Toronto’s Citizen Lab showed that Chinese social-media platforms began scrubbing keywords related to the coronavirus as far back as December. That suggests that platforms—including the country’s most popular messaging app, WeChat—were pressured by authorities to censor information even in the early weeks of the outbreak, according to the report. 
Yet the swift scrubbing of any government criticism from Chinese platforms—including an outpouring of anger after the death of Li Wenliang, a Wuhan-based doctor punished by authorities for sounding early alarms about the virus—could effectively serve as advertising for Chinese content-moderation tools—a proof of concept on a mass scale, experts said.
Chinese companies are using the epidemic as an opportunity to “market themselves as a necessary part to help the government fix their trust crisis and increase their propaganda ability in the future,” said Rui Hou, a doctoral candidate studying Chinese censorship at Queen’s University in Kingston, Ontario.
“For any government that’s not running away from the China model, it could be incredibly compelling,” said Duke University’s Mr. Perault.
Some Chinese companies are offering censorship services as an add-on for cloud-computing clients.
Alibaba’s cloud division sells a bare-bones content-moderation package at about $240, according to its website. For that price, the e-commerce giant can help clients screen up to 90,000 bits of text, images or videos a month and filter out pornography, drug use and “sensitive political figures.” Splashing out $60,000 will buy a five-year subscription flagging up to 15 million items monthly.
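A back-of-the-envelope calculation shows how steep the bulk discount in those quoted prices is. This is a rough sketch based only on the figures in the article; it assumes the $240 package covers one month's quota and the $60,000 subscription runs at full monthly volume, which the article doesn't confirm:

```python
# Unit costs implied by the article's quoted Alibaba Cloud prices
# (assumptions: basic tier = one month's quota; premium tier used at
# full monthly volume for all five years).

basic_price_usd = 240            # bare-bones package
basic_items = 90_000             # items screened per month

premium_price_usd = 60_000       # five-year subscription
premium_items = 15_000_000 * 60  # 15M items/month for 60 months

# Cost per thousand items at each tier
basic_per_thousand = basic_price_usd / (basic_items / 1_000)
premium_per_thousand = premium_price_usd / (premium_items / 1_000)

print(f"basic:   ${basic_per_thousand:.2f} per 1,000 items")
print(f"premium: ${premium_per_thousand:.4f} per 1,000 items")
```

Under these assumptions, the premium tier works out to roughly 7 cents per thousand items versus about $2.67 per thousand on the basic package, a discount of about 40x at scale.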
An Alibaba Cloud spokesperson said its services help clients “maintain a user-friendly online environment in accordance to local regulations.” Customers can “determine what content is appropriate to be used on their websites.”
Tencent, which owns WeChat, lets clients prepay about $9,000 for filtering 36 million images for pornography and other content, including what it describes as “politically sensitive” materials such as “political figures, political spoofs, famous political events, etc.”
In response to questions, Tencent said its technology is used by clients to “improve their operations security and efficiency” and “identify unlawful information,” citing pornography and “cyber violence.”
For now, foreign customers for these tools are relatively rare, according to employees at the companies that offer them.
Since 2017, a Singapore-based subsidiary of Chinese social-media giant YY Inc. has been selling a content-moderation system powered by artificial intelligence to Indonesia’s government to help clean up “negative contents” online in areas such as gambling and terrorism, the Singapore company said in a press release.
The subsidiary is also talking with governments in Egypt, India and the Middle East to sell similar services, said a spokesman for the Singapore subsidiary, Bigo Technology. The parent company, YY Inc., didn’t respond to requests for comment.
Western companies such as Amazon.com Inc.’s Amazon Web Services and Microsoft Corp. also sell content-moderation services. On its website, Amazon doesn’t mention filtering political content, but does spell out categories like nudity, violence and “visually disturbing” content, such as corpses.
Amazon didn’t comment. Microsoft declined to comment.
Baidu Inc., best known as China’s largest search engine, touts its services in banner ads online that promise to help companies monitor problematic content across tens of thousands of digital posts, without having to hire their own staff to do it.
The services are based on some of the same underlying technologies that Facebook, Twitter Inc. and other social media use for curbing violence, speech designed to incite crimes, or pornography. Chinese companies target those problems, too.
A price list published by Baidu shows it charges about 17 cents for every thousand images it scans for “politically sensitive” material, more than it charges to flag violence and terrorism-related material or pornography.
Although Baidu doesn’t specify what might count as politically sensitive, a demo on its website shows a picture of former U.S. President Barack Obama speaking in front of an American flag. It says its algorithm concluded the picture depicted a public figure with 98% certainty, and that it contained “politically sensitive content.” Baidu declined a request for comment.
In China, companies have refined censorship tools partly out of necessity: Authorities demand that platforms for entertainment, social media, e-commerce and other purposes ensure that politically objectionable comments are scrubbed. Companies risk fines or permanent shutdown if they fail to comply.
One Chinese media company started paying Alibaba for content-moderation products after authorities closed its website—which broadcasts podcasts and other content about the gaming industry—for nearly a month, according to an employee of the firm. Authorities offered little explanation for the shutdown, but pointed to a documentary posted on the company’s site that explored the downtrodden lives of laborers in southern China, the employee said.
The company, fearing another lengthy shutdown, now uses Alibaba’s service to delete obvious taboos such as pornography and violence, but also errs on the side of caution on politics.
“Now we just delete any mention of Xi Jinping,” even if positive, the employee said. “You never know what will be acceptable today and not OK tomorrow.”
People.cn, the online arm of the Communist Party’s People’s Daily newspaper, also offers services that screen for objectionable content. Revenue from its services—dubbed “content risk control”—jumped 166% in 2018 versus the year before, according to its annual report, though it didn’t provide specific revenue figures. That helped boost annual profit by nearly 140%, the biggest increase since 2011, it said.
People.cn’s chairman recently predicted in a speech that content moderation in China will grow to a $70 billion industry over the next three to five years, and employ one million people.
Since the coronavirus epidemic started, People.cn appears to be seizing the moment to market its services to potential government clients, said Queen’s University’s Mr. Hou, by posting analysis of how local authorities are keeping citizens informed about the spread of the virus. Such posts help advertise media-advising services that include censorship technologies, he said.
The rise of AI-driven technologies has enabled some businesses, especially small to medium-size ones, to operate without armies of human censors.
The AI algorithms used in the U.S. and China often rely on similar technologies, such as natural-language processing and machine learning. Tools that help power recommendation engines for services like YouTube or Spotify can also be used.
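As a rough illustration of the rule-based layer that such moderation systems typically start from (a hypothetical sketch, not any vendor's actual pipeline; the blocklist terms and function names here are invented), a keyword blocklist can be combined with simple text normalization before any machine-learned classifiers are applied:

```python
import re

# Hypothetical blocklist; real deployments use far larger, frequently
# updated term lists plus machine-learned classifiers for text and images.
BLOCKLIST = {"blockedterm", "bannedphrase"}

def normalize(text: str) -> list:
    """Lowercase and split on non-word characters, dropping empty tokens."""
    return [tok for tok in re.split(r"\W+", text.lower()) if tok]

def should_flag(text: str) -> bool:
    """Return True if any normalized token matches the blocklist."""
    return any(tok in BLOCKLIST for tok in normalize(text))

print(should_flag("This post mentions BlockedTerm!"))  # True: flagged
print(should_flag("An innocuous message"))             # False: passes
```

Even this trivial filter shows why the approach generalizes so easily across political and non-political targets: changing what gets censored is just a matter of swapping the term list, while the harder machine-learning components (image classifiers, language models) are content-agnostic.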
It is hard to say definitively whether Chinese or U.S. censorship tools are more effective. One recent study on facial-recognition algorithms—which can also be used for censorship purposes—concluded that Chinese firm Megvii Technology Ltd. beat IBM and Amazon in detecting skin colors and genders.
In October, U.S. lawmakers added Megvii and 27 other Chinese entities to an export blacklist, citing what they described as the entities’ role in Beijing’s oppression of Muslim minorities in northwest China’s Xinjiang region.
China’s most popular dating app, Tantan, uses a Megvii facial-recognition system during the sign-up process to create a verified account, a Megvii spokeswoman said. But it also helps keep users from posting fake profile photos using images of Chinese leaders or ones that touch on sensitive topics like the Hong Kong protests, said people familiar with the matter.
The dating app also monitors posts made by users, the people said. In tests by The Wall Street Journal, a photo of President Xi posted on the app was scrubbed within 30 seconds and one of Chinese Premier Li Keqiang was deleted within a minute.
“The government doesn’t care if you post a thousand pictures of Angelababy,” said a person close to Tantan, referring to a Chinese pop star and actress. “But one photo of Xi Jinping is not acceptable.”
It’s unclear what tools Tantan used. Tantan didn’t respond to requests for comment.
Even with all the technology available, many Chinese companies still rely on humans to make decisions on complex content like satirical videos. Many of these moderators work in smaller Chinese cities with lower labor costs, which are becoming hubs for content moderation and censorship, much as Bangalore and Manila became known for call centers in the 1990s.
In Jinan, the capital of Shandong province, one high floor of the city’s tallest office building is dotted with miniature Chinese flags and posters depicting President Xi with excerpts from his speeches. An employee said content moderators for People.cn occupy the entire floor.
Bytedance Inc., the owner of video app TikTok abroad and various services in China, runs offices in Jinan dedicated to content moderation, according to a person familiar with the matter. In one building, a “Content Quality Center” that operates 24 hours a day occupies three floors, with locked doors, frosted glass and posters on security procedures.
A Bytedance spokeswoman said the offices focus on content moderation of Bytedance’s own products in China, and that it isn’t considering making content-moderation tools available to third parties.

