A Blog by Jonathan Low

 

Apr 12, 2019

Digital Bouncers: Why Companies Won't Reveal 'Trust' Scores That Rate Customers

As even supposedly 'invulnerable' systems - like blockchain - are hacked, organizations are taking more sophisticated steps to protect themselves, their users - and their data.

Enterprises keep these scores secret so their systems won't be gamed, but they also want deniability if - as is almost certain to happen - the algorithms used to rate customers display biases or make mistakes.

Smart companies recognize that transparency about processes is, in the long run, a less expensive and more secure solution. JL


Christopher Mims reports in the Wall Street Journal:

How many of us realize our account behaviors are shared with fraud detection companies we’ve never heard of? And why can’t we access this data to correct or delete it? 16,000 signals inform the “Sift score,” a rating of 1 to 100. Algorithms always have biases, and companies are unaware of those unless they’ve conducted an audit, not yet standard practice. Even the most sophisticated don’t seem fully aware of how their systems are behaving. (They) market themselves as smarter discriminators between “good” and “bad” customers (but) “sometimes your best customers and your worst customers look the same.”
When you’re logging in to a Starbucks account, booking an Airbnb or making a reservation on OpenTable, loads of information about you is crunched instantly into a single score, then evaluated along with other personal data to determine if you’re a malicious bot or potentially risky human.
Often, that’s done by a service called Sift, which is used by startups and established companies alike, including Instacart and LinkedIn, to help guard against credit-card and other forms of fraud. More than 16,000 signals inform the “Sift score,” a rating of 1 to 100, used to flag devices, credit cards and accounts owned by any entities—human or otherwise—that a company might want to block. This score is like a credit score, but for overall trustworthiness, says a company spokeswoman.
One key difference: There’s no way to find out your Sift score.
Companies that use services like this often mention it in their privacy policies—see Airbnb’s here—but how many of us realize our account behaviors are being shared with companies we’ve never heard of, in the name of security? How much of the information one company shares with these fraud-detection services is used by other clients of that service? And why can’t we access any of this data ourselves, to update, correct or delete it?

Some Red Flags that Affect Your Trust Score

• Is the account new?
• Are there a lot of digits at the end of an email address?
• Is the transaction coming from an IP address that’s unusual for your account?
• Is the transaction coming from a region where there are a lot of hackers, such as China, Russia or Eastern Europe?
• Is the transaction coming from an anonymization network?
• Is the transaction happening at an odd time of day?
• Has the credit card being used had chargebacks associated with it?
• Is the browser different from what you typically use?
• Is the device different from what you typically use?
• Is the cadence of the way you typed out your password typical for you? (tracked by some advanced systems)
Sources: Sift, SecureAuth, Patreon
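Purely as an illustration of how flags like these could feed a single number, here is a minimal Python sketch. The feature names, weights, and scaling are assumptions made up for this example; Sift's actual model reportedly draws on more than 16,000 signals and machine-learned weights.

```python
# Hypothetical illustration of combining fraud signals into a 1-100 risk score.
# The features and weights below are invented for this sketch, not Sift's model.

def risk_score(signals: dict) -> int:
    # Assumed weight for each red flag (higher = more suspicious).
    weights = {
        "new_account": 0.15,
        "many_digits_in_email": 0.10,
        "unusual_ip_for_account": 0.20,
        "high_risk_region": 0.15,
        "anonymization_network": 0.20,
        "odd_time_of_day": 0.05,
        "card_has_chargebacks": 0.25,
        "new_browser": 0.05,
        "new_device": 0.10,
        "atypical_typing_cadence": 0.10,
    }
    raw = sum(weights[name] for name, present in signals.items() if present)
    # Scale to the 1-100 range the article describes.
    return max(1, min(100, round(raw / sum(weights.values()) * 100)))


example = {"new_account": True, "unusual_ip_for_account": True,
           "anonymization_network": True, "card_has_chargebacks": False}
print(risk_score(example))  # 41 under these assumed weights
```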
According to Sift and competitors such as SecureAuth, which has a similar scoring system, this practice complies with regulations such as the European Union’s General Data Protection Regulation, which mandates that companies not store data that can be used to identify real human beings unless those people give permission.
Unfortunately, GDPR, which went into effect a year ago, has rules that are often vaguely worded, says Lisa Hawke, vice president of security and compliance at the legal tech startup Everlaw. All of this will have to get sorted out in court, she adds.
Another concern for companies using fraud-detection software is just how stringent to be about flagging suspicious behavior. When the algorithms are not zealous enough, they let fraudsters through. And if they’re overzealous, they lock out legitimate customers. Sift and its competitors market themselves as being better and smarter discriminators between “good” and “bad” customers.
Algorithms always have biases, and companies are often unaware of what those might be unless they’ve conducted an audit, something that’s not yet standard practice.
“Sift regularly evaluates the performance of our models and tries to minimize bias and variance in order to maximize accuracy,” says a Sift spokeswoman.
“While we don’t perform audits of our customers’ systems for bias, we enable the organizations that use our platform to have as much visibility as possible into the decision trees, models or data that were used to reach a decision,” says Stephen Cox, vice president and chief security architect at SecureAuth. “In some cases, we may not be fully aware of the means by which our services and products are being used within a customer’s environment,” he adds.
Digital Bouncers
Companies use these scores to figure out who—people or potential bots—to subject to additional screening, such as a request to upload a form of ID.
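A minimal sketch of what such a tiered policy could look like; the thresholds and action names are assumptions for illustration, not any vendor's actual rules.

```python
# Hypothetical tiered screening policy driven by a 1-100 risk score.
# Thresholds and action names are assumptions, not Sift's or any client's rules.

def screening_action(score: int) -> str:
    if score >= 90:
        return "block"              # near-certain bot or fraudster
    if score >= 60:
        return "request_id_upload"  # step-up screening, e.g. ask for a form of ID
    return "allow"                  # low risk: let the transaction through


for s in (25, 72, 95):
    print(s, screening_action(s))   # 25 allow, 72 request_id_upload, 95 block
```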
Someone on a travel service buying tickets for other people might be a scammer, for instance. Or they might be a wealthy frequent flyer.
“Sometimes your best customers and your worst customers look the same,” says Jacqueline Hart, head of trust and safety at Patreon, a service for supporting artists and creators, which uses Sift to screen transactions on its site. “You can have someone come in and say I want to pledge $10,000 and they’re either a fraudster or an amazing patron of the arts,” she adds.
When an account is rejected on the grounds of its Sift score, Patreon sends an automated email directing the applicant to the company’s trust and safety team. “It’s an important way for us to find out if there are any false positives from the Sift score and reinstate the account if it shouldn’t have been flagged as high risk,” says Ms. Hart.
There are many potential tells that a transaction is fishy. “The amazing thing to me is when someone fails to log in effectively, you know it’s a real person,” says Ms. Hart. The bots log in perfectly every time. Email addresses with a lot of numbers at the end and brand-new accounts are also more likely to be fraudulent, as are logins coming from anonymity networks such as Tor.
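The workflow Ms. Hart describes amounts to a manual-review queue behind the automated rejection. A rough sketch under assumed helper names (none of this is Patreon's actual code):

```python
# Hypothetical sketch of the flow described above: a rejection triggers an
# automated email, the case lands in a review queue, and a human reviewer can
# reinstate accounts that turn out to be false positives.

review_queue = []

def send_email(account_id: str, message: str) -> None:
    print(f"email to {account_id}: {message}")  # stand-in for a real mailer

def reject_account(account_id: str, score: int) -> None:
    send_email(account_id, "Your account was flagged. Contact trust and safety to appeal.")
    review_queue.append((account_id, score))

def review_pass(is_legitimate) -> None:
    # is_legitimate stands in for a human reviewer's judgment.
    for account_id, score in list(review_queue):
        if is_legitimate(account_id):
            print(f"reinstated {account_id} (score was {score})")
            review_queue.remove((account_id, score))


reject_account("user_1887", 91)
review_pass(lambda account_id: True)  # reviewer finds the flag was a false positive
```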
These services also learn from every transaction across their entire system, and compare data from multiple clients. For instance, if an account or mobile device has been associated with fraud at, say, Instacart, that could mark it as risky for another company, say Wayfair—even if the credit card being used seems legitimate, says a Sift spokeswoman.
The risk score for any given customer, bot or hacker is constantly changing based on that user’s behavior, going up and down depending on their actions and any new information Sift gathers about them, she adds.
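One way to picture that cross-client sharing: an identifier reported for fraud by one client raises the score every other client sees, and each new report moves it again. The identifiers and adjustments below are illustrative assumptions, not Sift's implementation.

```python
# Hypothetical shared-reputation store: a device, card or account flagged for
# fraud at one client looks riskier to every other client in the network.
# Baseline scores and per-report adjustments are invented for illustration.
from collections import defaultdict

fraud_reports = defaultdict(int)        # identifier -> fraud reports from any client
base_scores = defaultdict(lambda: 20)   # assumed baseline risk per identifier

def report_fraud(identifier: str) -> None:
    fraud_reports[identifier] += 1

def current_score(identifier: str) -> int:
    # Every report, from any client, pushes the shared score upward.
    return min(100, base_scores[identifier] + 30 * fraud_reports[identifier])


report_fraud("device:abc123")           # flagged by, say, a grocery-delivery client
print(current_score("device:abc123"))   # 50: now looks riskier to a furniture retailer too
```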
For Our Protection?
These trustworthiness scores make us unwitting parties to the central tension between privacy and security at the heart of Big Tech.
Sift judges whether or not you can be trusted, yet there’s no file with your name that it can produce upon request. That’s because it doesn’t need your name to analyze your behavior.
“Our customers will send us events like ‘account created,’ ‘profile photo uploaded,’ ‘someone sent a message,’ ‘review written,’ ‘an item was added to shopping cart,’” says Sift chief executive Jason Tan.
It’s technically possible to make user data difficult or impossible to link to a real person. Apple and others say they take steps to prevent such “de-anonymizing.” Sift doesn’t use those techniques. And an individual’s name can be among the characteristics its customers share with it in order to determine the riskiness of a transaction.
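As a hypothetical sketch, reporting events like those to a scoring service might look as follows; the endpoint, payload fields, and credentials are placeholders, not Sift's actual API.

```python
# Hypothetical example of a client reporting behavioral events to a scoring
# service. The URL, payload shape, and key are placeholders, not a real API.
import requests

SCORING_API_URL = "https://scoring.example.com/v1/events"  # placeholder endpoint
API_KEY = "test_api_key"                                   # placeholder credential

def send_event(event_type: str, user_id: str, **properties) -> float:
    payload = {"type": event_type, "user_id": user_id, **properties}
    resp = requests.post(SCORING_API_URL, json=payload,
                         auth=(API_KEY, ""), timeout=5)
    resp.raise_for_status()
    return resp.json().get("score")  # assumed response shape

# Events like the ones Mr. Tan describes:
# send_event("account_created", "user_42")
# send_event("item_added_to_cart", "user_42", item_id="sku_123")
```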
In the gap between who is taking responsibility for user data—Sift or its clients—there appears to be ample room for the kind of slip-ups that could run afoul of privacy laws. Without an audit of such a system, it’s impossible to know. Companies live under increasing threat of prosecution, but as just-released research on biases in Facebook’s advertising algorithm suggests, even the most sophisticated operators don’t seem to be fully aware of how their systems are behaving.
That said, sharing data about potential bad actors is essential to many security systems. “I would argue that in our desire to protect privacy, we have to be careful, because are we going to make it impossible for the good guys to perform the necessary function of security?” says Anshu Sharma, co-founder of Clearedin, a startup that helps companies combat email phishing attacks.
The solution, he says, should be transparency. When a company rejects us as potential customers, it should explain why, even if it pulls back the curtain a little on how its security systems identified us as risky in the first place.
Mr. Cox says it’s up to SecureAuth’s clients, which include Starbucks and Xerox, to decide how to notify people who were flagged, and a spokeswoman said the same is true for Sift.
