A Blog by Jonathan Low


Aug 11, 2016

Software That Can Predict When An Employee Is About To Commit A Cybercrime

Predictive capabilities are useful. But they'd better be correct. JL

Olivia Goldhill reports in Quartz:

A recent study found that internal employees were behind 43% of data breaches. Software aims to predict whether an employee is likely to commit insider trading or intellectual property theft before they actually do so. Rather than actually reading each employee’s messages, the software looks for warning shifts in behavior.
When it comes to cyber attacks, Russian spies aren’t the only ones to worry about. Businesses forced to confront the growing risk of cybercrime are waking up to the fact that it’s often someone on the inside who’s responsible. In other words, as a 2014 Oxford University study found, employees are increasingly attacking their own companies. Information theft is a common goal of such attacks, and a recent study by Intel Security found that internal employees were behind 43% of data breaches.
You’d think it might be easy to catch a culprit who’s inside the office, but employees aren’t exactly announcing their criminal plans over company email. One solution comes in the form of detailed behavioral analytics: software that can comb through hundreds of thousands of emails, chats, financial trades, login times, and other online activity to flag suspicious employee activity.
In recent years, a handful of data-monitoring companies, such as RedOwl, Palantir, and Splunk, have started offering such software.
Brian White, chief operating officer of RedOwl Analytics, explains that his company’s product aims to predict whether an employee is likely to commit insider trading or intellectual property theft before they actually do so.
Rather than actually reading each employee’s messages, the software looks for warning shifts in behavior. Abruptly changing languages is one such sign—“I might be writing to you in English and then all of a sudden change to Spanish. Why’d I do that?” says White. “That might be indicative that they’re trying to obscure their behavior.”
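The article doesn’t say how such a switch would actually be detected, but the pattern White describes can be illustrated with an off-the-shelf language identifier. The sketch below is a hypothetical example, not RedOwl’s method: it assumes the open-source langdetect package, and the messages and the flagging rule are invented for illustration.

```python
# Hypothetical sketch of the "sudden language switch" signal, assuming the
# open-source langdetect package (pip install langdetect). A real system
# would weigh this alongside many other behavioral signals.
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # make detection deterministic across runs

def language_switches(messages):
    """Yield (index, old_lang, new_lang) wherever consecutive messages
    from the same sender change language."""
    previous = None
    for i, text in enumerate(messages):
        lang = detect(text)  # ISO 639-1 code, e.g. "en" or "es"
        if previous is not None and lang != previous:
            yield i, previous, lang
        previous = lang

msgs = [
    "Here is the quarterly report you asked about.",
    "I'll send the remaining figures tomorrow morning.",
    "Te mando el resto de los archivos por otro canal.",
]
for i, old, new in language_switches(msgs):
    print(f"message {i}: switched from {old} to {new}")
```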
A significant increase in the number of external versus internal emails is another potential sign of loyalties moving outside the company. And even getting to work early or staying unusually late could indicate trouble. After all, those extra hours could be put to hard work—or they could be a chance to steal information while the office is quiet.
“Think about yourself, probably pretty much every day looks the same,” says White. “You go to work around the same time, you’re sending a certain amount of emails. There are spikes and changes but often those are driven by deadlines or holiday. We’re looking for something outside the norm.”
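The article doesn’t say how RedOwl models “the norm,” but the idea of scoring each day against an employee’s own baseline can be sketched in a few lines. Everything below (the daily features, the z-score heuristic, and the threshold) is an assumption for illustration, not a description of the actual product.

```python
# A minimal sketch of baseline-and-deviation scoring, in the spirit of the
# behavior White describes. This is NOT RedOwl's algorithm: the features,
# the z-score heuristic, and the threshold are all assumptions.
from statistics import mean, stdev

# One record per employee per day, echoing signals from the article:
# email volume, share of email going outside the company, login hour.
FEATURES = ["emails_sent", "external_ratio", "login_hour"]

def zscores(history, today):
    """Score today's behavior against this employee's own baseline."""
    scores = {}
    for f in FEATURES:
        baseline = [day[f] for day in history]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # A perfectly steady baseline: any change at all stands out.
            scores[f] = 0.0 if today[f] == mu else float("inf")
        else:
            scores[f] = (today[f] - mu) / sigma
    return scores

def flag(history, today, threshold=3.0):
    """Return the features that drift far outside the norm."""
    return {f: z for f, z in zscores(history, today).items()
            if abs(z) >= threshold}

# Thirty unremarkable workdays, then a 5 a.m. login and a burst of
# outbound email: all three features get flagged.
history = [{"emails_sent": 40 + i % 5, "external_ratio": 0.2, "login_hour": 9}
           for i in range(30)]
today = {"emails_sent": 120, "external_ratio": 0.8, "login_hour": 5}
print(flag(history, today))
```

A production system would presumably account for peer groups and seasonality (White’s “deadlines or holiday”) rather than a single per-person average, but the shape of the computation is the same.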
The software, which White says is used by clients including major global banks and defense contractors, has been able to spot employees committing theft and sharing sensitive information with people they shouldn’t.
But is it right for employees to have their online activity so scrutinized? Though it may seem like a privacy invasion, Tom Sorell, politics and philosophy professor at Warwick University and head of the university’s security and ethics group, says that profiling people based on their data is common and often perfectly reasonable.
After all, he says, monitoring whether employees visit pornography sites or Facebook while at work is a simple form of profiling, as bosses are using data about such sites to determine whether an employee is properly focused. As long as employees know that their online activity is being monitored, and for what purpose, Sorell says watching for indicators of criminal behavior is legitimate. “I’m against a position that says all kinds of intrusion are just terrible,” he says, “and all kinds of monitoring just terrible.”
James Connelly, political theory professor at the University of Hull, agrees. “You could argue that it’s perfectly acceptable to violate privacy in such a corporate system when people might be misusing that system to counteract the purposes of the organization,” he adds.
However, RedOwl’s software also has the ability to flag employees looking to leave the company. An increasing number of external emails can be taken as “a sign that you’re looking to leave,” says White.
This form of monitoring is more worrying, says Sorell. After all, people have the right to look for different jobs and determine their own careers.
But more than the privacy issues, Sorell’s bigger concern is that the software could produce false positives. Though data-based predictions might reveal employees with criminal intent, they could also cast suspicion on perfectly good corporate citizens.
White says RedOwl’s software isn’t “trying to be Minority Report,” and the goal is simply to help companies proactively identify potential threats. But aiming to predict someone’s criminal intent raises questions of when it’s ethical to step in, and how to respond to warning signs. RedOwl says it knows of cases where employees stealing data were caught and forced to return the information, but that clients don’t often report how they use their data analysis.
Of course, if the software flags an employee as suspicious and an investigation shows they’ve committed a crime or clearly have criminal intent, then punishment is the appropriate response. But what if an employee is repeatedly flagged by the predictive software but can’t be definitively shown to have committed a crime?
“At what point can you possibly be justified in saying, ‘I know you’re going to do x or y,’ when you haven’t done anything at all other than say certain things?” says Connelly. “It raises the question of pre-emptive punishment for deeds people haven’t done.”
