ICE’s Counterterrorism and Criminal Exploitation Unit, Cutrell said, receives 1.2 million “investigative leads” per year, stemming from visa overstays, tips and other reports of immigration violations, and prioritizes them by potential threat. The agency believed an automated system would offer a more effective way to continuously monitor the 10,000 people judged to pose the greatest risk to national security and public safety, he said.
Among those are foreigners who enter the country on temporary or visitor visas and then apply for permanent residency. ICE said it believed the system could continuously scan their social-media activity for “derogatory” information that could weigh against their applications, including radical or extremist views.
Contract-request documents from June 2017 said the automated system should supplement agents’ work and “generate a minimum of 10,000 investigative leads annually.” The ICE official said the revised labor-contract request will probably drop that quota and instead call for roughly 180 people to monitor the social-media posts of the 10,000 foreign visitors ICE has flagged as high-risk, generating new leads as they go.
According to ICE, the monitoring program would look only at publicly visible social-media posts and would stop once the person under review was granted legal residency in the United States.
But industry critics and Democratic lawmakers said social-media-scanning algorithms would chill free speech and remain unproven in their ability to predict a crime or terrorist attack. Three ranking Democrats on the House Committee on Homeland Security wrote a letter to the Department of Homeland Security last month saying the program would be “ineffective, inaccurate … and ripe for profiling and discrimination.”
ICE’s acting director, Thomas Homan, responded that the program had been intended to bolster the agency’s “analytical tools” that agents use to vet foreign visitors, including through social media. Leads “enhanced using analytical tools,” he added, were reviewed by senior analysts before being used in investigations.
Several major tech and contracting firms, including IBM, Booz Allen Hamilton and Deloitte, attended an “industry day” session in Virginia in July 2017 in which immigration officials discussed the contract. But some companies later voiced unease over the proposal: An IBM spokeswoman said the firm “would not work on any project that runs counter to our company’s values, including our long-standing opposition to discrimination.”
Though AI systems are increasingly deployed to help flag objectionable and dangerous content, tech giants such as Facebook still rely largely on human reviewers for content-moderation decisions, saying the software is not yet nuanced enough to understand speech, assess context and make objective judgments on its own.
Asked whether a social-media-scanning AI could tell the difference between a good person and a terrorist, Cambridge Analytica whistleblower Christopher Wylie told the Senate Judiciary Committee on Wednesday that even “the most advanced neural network made yet” still reflects the systemic prejudices of the data on which it was trained.
“There is no mathematical way to determine whether someone is a bad person,” Wylie said.