A Blog by Jonathan Low

 

Jul 12, 2017

How I Learned To Stop Worrying and Love My Threat Model

The threats to most technology users contain common elements. And perhaps the most frequent is the belief that individuals or their enterprises have thought of everything. JL

Sean Gallagher reports in Ars Technica:

In the physical world of our daily lives, we assess threats based on real-world data. If we see someone coming down the sidewalk we want to avoid, we cross the street. Threat intelligence in the digital world is a bit more complicated: identify threats based on motive, resources, and capabilities. Focus on the likely threats. Most of us are not enemies of the state. Every person who uses the Internet and mobile applications faces a common set of security and privacy threats. (But) the things you do today may not work tomorrow.
I have a healthy level of paranoia given the territory I inhabit. When you write things about hackers and government agencies and all that, you simply have a higher level of skepticism and caution about what lands in your e-mail inbox or pops up in your Twitter direct messages. But my paranoia is also based on a rational evaluation of what I might encounter in my day-to-day: it's based on my threat model.
In the most basic sense, threat models are a way of looking at risks in order to identify the most likely threats to your security. And the art of threat modeling today is widespread. Whether you're a person, an organization, an application, or a network, you likely go through some kind of analytical process to evaluate risk.
Threat modeling is a key part of the practice people in security often refer to as "Opsec." A portmanteau of military lineage originally meaning "operations security," Opsec referred to the idea of preventing an adversary from piecing together intelligence from bits of sensitive but unclassified information, as wartime posters warned with slogans like "Loose lips might sink ships." In the Internet age, Opsec has become a much more broadly applicable practice—it's a way of thinking about security and privacy that transcends any specific technology, tool, or service. By using threat modeling to identify your own particular pile of risks, you can then move to counter the ones that are most likely and most dangerous.
Threat modeling doesn't have to be rocket science. Most people already (consciously or subconsciously) have a threat model for the physical world around them—whether it's changing the locks on the front door after a roommate moves out or checking window locks after a burglary in the neighborhood. The problem is that very few people pay any sort of regular attention to privacy and security risks online unless something bad has already happened.
That's not from a lack of effort by employers and industry. Collectively, society spends billions on information security each year, and it's commonplace for employees of all sorts to go through some kind of digital security training these days. But neither the security industry nor the media has helped normalize threat modeling. The public gets bombarded with bits of tradecraft (or worse, security "folkways") every day—every time a new malware threat emerges, a television journalist will inevitably tell viewers that their best protection is "a complex password."
And though it's easy to find advice on how to "stay safe" digitally, much of the good advice doesn't seem to really stick. Perhaps it's because that advice doesn't always match with the actual needs of the people looking for it.
"There's a lot of stuff going on, and we as technologists tend to jump to advice like 'use Signal' or 'use Tor' without asking, 'what matters to you?'" said Adam Shostack, who developed tools and methodologies for developers to do threat modeling for their software while at Microsoft. Shostack helped develop the CVE standard for tracking software vulnerabilities and is now an independent author, a consultant, and a member of the Black Hat Review Board.

Demystifying the threat model

Recently, Shostack has been working with the Seattle Privacy Coalition (SPC) on a privacy threat model for the people of Seattle based on Shostack's approach to threat modeling for software developers. Intended to demystify threat modeling for average people, Shostack's generalized approach boils down to a quartet of questions:
  1. What are you doing? (The thing you're trying to do, and what information is involved.)
  2. What can go wrong? (How what you're doing could expose personal information in ways that are bad.)
  3. What are you going to do about it? (Identifying changes that can be made in technology and behavior to prevent things from going wrong.)
  4. Did you do a good job? (Re-assessing to see how much risk was reduced.)
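Shostack's four questions can be read as a loop over a very simple data structure. Here is a minimal Python sketch of that loop; the `ThreatModel` class and all the example risks and mitigations are illustrative, not from the article:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """A minimal sketch of Shostack's four-question loop (names are illustrative)."""
    activity: str                                     # 1. What are you doing?
    risks: list = field(default_factory=list)         # 2. What can go wrong?
    mitigations: dict = field(default_factory=dict)   # 3. What are you going to do about it?

    def residual_risks(self):
        # 4. Did you do a good job? Any risk without a mitigation is still open.
        return [r for r in self.risks if r not in self.mitigations]

model = ThreatModel(activity="online banking from a laptop")
model.risks += ["phishing e-mail", "password reuse", "ransomware"]
model.mitigations["password reuse"] = "password manager"
model.mitigations["ransomware"] = "detached backups"

print(model.residual_risks())  # → ['phishing e-mail']
```

The point of the sketch is the shape of the exercise, not the specifics: you re-run the last step whenever your activities or mitigations change.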
What Shostack's approach doesn't directly address are the specific sources of threats to privacy and security. That's something Shostack doesn't see as being particularly helpful, since that part of threat modeling isn't necessarily something the average person can deal with. "Telling people to be anxious all the time does little good," he said.
But other security experts Ars spoke with felt that understanding what types of threats a person is most likely to encounter is a key part of building a personal threat model—one along the lines of the Electronic Frontier Foundation's five-question structure:
  1. What do you want to protect? (The data, communications, and other things that could cause problems for you if misused.)
  2. Who do you want to protect it from? (The people, organizations, and criminal actors who might seek access to that stuff.)
  3. How likely is it that you will need to protect it? (Your personal level of exposure to those threats.)
  4. How bad are the consequences if you fail?
  5. How much trouble are you willing to go through to prevent those consequences? (The money, time, and convenience you're willing to give up to protect those things.)
I've tried to consolidate the two approaches above into a set of steps for the average mortal—or at least, for someone helping the average mortal. The Ars Threaty Threat Assessment Model (or, as some readers have demanded, the Ars Threaty McThreatface Assessment Model) squeezes it all into three compound questions and a shampoo bottle instruction:
  • Who am I, and what am I doing here?
  • Who or what might try to mess with me, and how?
  • How much can I stand to do about it?
  • Rinse and repeat.
For the TL;DR, you could skim ahead to "how much can I stand to do about it?" But with threats constantly changing and evolving, helping people first understand how to assess their risks leads to better security in the long term compared to merely following a quick set of tips. It's the "teach a person to fish" approach, and it starts with a simple question.
Who you are, what you are doing, and where you are doing it are all major factors in determining what threats you face. For instance, statistics show your risk of dying in a car crash is dramatically lower when you're sitting in your living room. Electronic threats to your privacy, person, and treasure will be different based on who you are and what you do—and what you have done in the past.
Where you work, your social and political activities, your notoriety, social connections, travel, and other factors all play into your threat model, too. Such characteristics introduce different sets of potential risks to your security and privacy, and these traits could attract different sorts of potential adversaries.
Of course, some activities invite risk in and of themselves based on the kind of information being exposed. In the world of threat modeling, these are often referred to as "assets"—the important pieces of information you want to use in an activity but simultaneously want to protect:
  • Credit card data: yours, or (if you sell stuff) a customer's.
  • Banking data: account numbers, routing numbers, e-banking usernames and passwords.
  • Personally identifying information: Social Security number, date of birth, income data, W-2s, passport numbers, drivers' license or national ID numbers.
  • Intellectual property: like that treatment for an Ars action movie I've been working on.
  • Sensitive personal or business information and communications: e-mails and texts that could be used to embarrass, blackmail, or imprison you.
  • Politically sensitive information or activities that could get you in trouble with your employer, the government, law enforcement, or other interested parties.
  • Travel plans that could be used to target you or others for fraud or other forms of attack.
  • Other business or personal data that are financially or emotionally essential (family digital photos, for example).
  • Your identity itself, if you are trying to stay anonymous online for your protection.
Pieces of information that could be used to expose your assets are just as essential to protect as the assets themselves. Personal biographical and background data might be used for social engineering against you, your friends, or a service provider. Keys, passwords, and PIN codes should also be considered as valuable as the things that they provide access to.
Other "operational" information about your activities that could be exploited should also be considered, including the name of your bank or other financial services provider. For instance, a spear-phishing attack on the Pentagon used a fake e-mail from USAA, a bank and insurance company that serves many members of the military and their families.

What could possibly go wrong?

Stuff happens. Sometimes it's accidental, like that time a Department of Veterans Affairs analyst took home my personal health data (and that of 26.5 million fellow veterans) on an external hard drive, and the data was stolen.
In the physical world of our daily lives, we assess threats based (hopefully) on real-world data. If there have been recent burglaries in the neighborhood, we up our vigilance. In certain higher-risk neighborhoods, we take precautions. If we see someone coming down the sidewalk we want to avoid, we cross the street. Threat intelligence in the digital world is a bit more complicated, but it's essentially the same principle. We identify possible threats based on motive, resources, and capabilities.
In this part of modeling, it's important to focus on the most likely threats to your assets and not get caught up in protecting against unicorns. Most of us are not enemies of the state (at least not yet) or of interest to some foreign power. We should likely be more concerned about criminals trying to steal information they can turn into financial gain or use to coerce or deceive us into giving them money directly.
But there are other sorts of threats individuals and organizations need to concern themselves with in threat modeling. At the most basic level, every person who uses the Internet and mobile applications faces a common set of security and privacy threats. Some of these threats are obvious and immediate; others are less intuitive but potentially more damaging in the long term:
  • Criminals using "crimeware" to steal personal information they can use to empty bank accounts, make fraudulent credit card purchases, or sell to others for identity theft purposes.
  • Fraudsters who attempt to fool victims into paying them by disguising their identity, or who extort money from victims by exploiting information they obtained through deception.
  • Ransomware operators who demand payment to recover access to personal digital assets.
  • Identity thieves and other criminals who leverage personal information stolen from data collected by a legitimate third party (a bank, a retailer, a government agency).
  • Companies who collect large quantities of personal data and misuse it or sell it to others who do, or who store it insecurely and inadvertently aid criminals.
  • People who have a personal motivation to violate your privacy because they are stalking you, don't like your tweet or your comment on Ars, or just because they can.
Other attackers may focus on you as a means to gain access to a bigger target. If you work in finance or in the accounting department of nearly any company, you might be personally targeted by criminals ultimately aiming for your company's systems. If you're a systems administrator, someone might go after you to gain access to the systems you manage. If you're a journalist, a non-governmental organization employee, a government employee, or a government contractor, someone may have an intelligence-gathering interest in your work. Your employer or your government may want to keep tabs on your activities and opinions for other reasons, too.
Each of these types of "threat actors" has different levels of skill and available resources. They also have different motives for—and levels of commitment to—getting your stuff. Typically, attackers motivated by money will not expend more resources than the value of what they're going after. It's unusual for a criminal to spend a year or hundreds of thousands of dollars just for the ability to make a few hundred, and they'll likely not target individuals specifically unless they're a gateway to a big haul.
Of course, there's one other major threat everyone faces: ourselves. Accidental disclosure or casual information leakage over e-mail, social media, or other channels can be just as bad as being hacked. Such mistakes can allow others to gain further access to private information by someone who happens across it.

And how?

The next part of answering the "what could go wrong" question is looking at the ways adversaries or accidents could break your security or privacy. There are six primary types of attacks, as defined by STRIDE—a threat model Microsoft developed for software developers:
  • Spoofing identity: using some sort of token or credential to pretend to be an authorized user or another piece of trusted software—or, someone posing as someone else in an e-mail or on social media to gain your trust.
  • Tampering with data: maliciously altering data to cause a software failure or to cause damage to the victim. This could be to lower the user's trust in the information, or it might be an effort to create an error in software that allows the attacker to launch their own commands on the targeted device.
  • Repudiation: the ability to do something (conduct a transaction, change information, access data) without having a record to prove it happened (such as an event log). This is less of an issue for average users and more of a problem for software developers, but it can still be an issue in some fraud attacks.
  • Information disclosure: your data gets exposed, either through a breach or accidental public exposure.
  • Denial of service: making it impossible for someone to use the application or information, whether it's your personal website or someone trying to boot you off a game network.
  • Elevation of privilege: gaining a greater level of access to an application or to data than allowed by altering the restrictions on the user or the application (getting "root," escaping the browser sandbox to install malware, etc.).
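Because STRIDE is a fixed six-category taxonomy, it maps naturally onto an enumeration. A short Python sketch, with everyday examples of my own (not Microsoft's) attached to each category:

```python
from enum import Enum

class Stride(Enum):
    """The six STRIDE threat categories; the example values are illustrative."""
    SPOOFING = "a phishing e-mail posing as your bank"
    TAMPERING = "malware altering a file you downloaded"
    REPUDIATION = "a fraudulent transaction that left no audit log"
    INFORMATION_DISCLOSURE = "a breached retailer leaking your card data"
    DENIAL_OF_SERVICE = "being booted off a game network"
    ELEVATION_OF_PRIVILEGE = "malware escaping the browser sandbox to get root"

# A personal threat model can walk every category so none is forgotten.
for threat in Stride:
    print(f"{threat.name}: {threat.value}")
```

Walking the full enumeration for each activity is a cheap way to make sure you've at least considered every class of attack, even the ones (like repudiation) that matter less to individuals.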
The degree to which you're vulnerable to any of these sorts of attacks will depend on the software, hardware, and operating systems you use, the networks you are connected to, and how accessible any of them are to would-be attackers. All of that encompasses your "attack surface," as it's referred to by security professionals. But you're part of the attack surface as well. If the wrong software, website, or person gets within your boundaries of trust—a phishing attack, a malicious mobile application you downloaded from an "alternative" app store, or a human being on the other end of a phone call or e-mail—it can be just as bad as (or worse than) any other "sophisticated" breach.
That can be extended to how you use e-mail, Web services, and social media—who you share information with, and how much—and who and what you trust to have access to your information, whether it's on your smart phone, your computer, or in the cloud.

How much can I stand to do about it?

Developers can fix bugs in their software. But as an individual end-user of technology, how you go about dealing with these threats is somewhat limited.
"The challenge is that consumers are close to the very end of the consumption chain with technology," Shostack said. "Their choices are essentially between computing products. The question of threat modeling becomes, 'What can I do at a reasonable cost to address those things?'"
For those looking for a quick answer, the TL;DR reply is:
  • Back up your stuff to the cloud or a disk drive you detach from your device;
  • Use a password manager to automate your use of separate passwords for every website;
  • Update your software whenever alerted to do so.
"Do those three things, and you're doing better than most," Shostack said.
Updating software and operating systems as soon as updates are available is critical to reducing your attack surface, because the moment the bugs that are patched become widely known they're more likely to be exploited. Password managers will help eliminate the risk of password reuse across multiple sites. And backups, kept detached from your computer or mobile device, will minimize the amount of data lost if you get hit with ransomware or something else destructive. These three things together will vastly cut down on the attack surface available to the most common threats.
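The core of what a password manager automates is one strong, random, never-reused password per site. That can be sketched in a few lines of Python with the standard library's `secrets` module (the site names and length here are illustrative, and a real password manager also encrypts its vault):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a strong random password, the kind a manager stores per site."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site: never reused, never memorized.
vault = {site: generate_password() for site in ("bank.example", "mail.example")}
```

Because each password is independently random, a credential stolen from one site tells an attacker nothing about your other accounts, which is exactly the reuse risk the article describes.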
Unfortunately, these steps aren't always enough, and they're not always possible or practical for everyone. Not all software vendors automatically alert users to updates. Many consumer Wi-Fi router manufacturers don't update their firmware for older models, and owners of others may miss updates since they seldom use the administrative Web console for their routers. And while some operating system upgrades are free, the new hardware and software required to support them generally isn't.
Patching, data backup solutions, and password managers also aren't going to prevent other threats such as phishing or efforts that exploit human weaknesses. A password manager may refuse to fill in credentials on a look-alike site attempting credential theft, but it won't protect against an attack like the Google account OAuth worm or other attacks that abuse sites you've already given credentials to (such as malicious Twitter or Facebook links).
This is where Opsec best practices come in. These are steps anyone can take personally to reduce opportunities for attack:
  • Use a non-jailbroken smart phone or tablet and put a password on it rather than a PIN if possible.
  • Put a "security freeze" on your credit report with the three major credit reporting agencies to prevent unauthorized credit applications using your personal data. (It costs about $10 to $15 per agency to set these up.)
  • Use specific e-mail accounts for each credit card or banking account—and use "throwaway" e-mail accounts to register on other websites as usernames. Thanks to Web mail, you can have as many e-mail accounts as you want. Having specific accounts for each financial account means that you can safely delete anything coming to other accounts if it appears to be from a bank—such e-mails are probably phishing attacks. This also reduces the threat of someone using an account and password stolen from another site to try to gain access to your more important Web accounts.
  • Aggressively use privacy settings on social media to limit who sees posts that could give up information that someone else could use to convince others that they are you.
  • When you can't update, mitigate. In some cases, you may just have to completely disconnect older hardware that you can't replace from the Internet and find other ways to work. For example, my mother, who is an avid digital photographer, has a lot invested in an older photo printer that is only supported on older versions of Mac OS X via FireWire. The drivers for that printer are no longer supported on the latest macOS, and the printer doesn't function over a Thunderbolt-to-FireWire adapter. So to mitigate the risks of continuing to use an iMac running an older version of Mac OS X to print her photos, she doesn't connect that computer to the household Wi-Fi. Instead, she moves the photos to it by USB thumb drive after editing them on another computer.

Rinse and repeat

Threat models are always changing, and the things you do today may not work tomorrow. And it's often hard to tell if you're covered for all the possible threats. This is what companies hire penetration testers for—to find the gaps in what an organization has done to secure itself. Corporations may have dedicated "red teams" to regularly look for gaps in security and identify what needs to be fixed, but most of us don't personally have the resources for our own personal red team.
Fortunately, organizations such as Google's Project Zero are constantly searching for bugs for us, and many vendors are quick to patch bugs when they're found. But the job of tracking the updates for the software and devices we use is largely on us. Regular checks for updates are key to keeping on top of your personal threat model.
Also key is a regular re-assessment of how your risk exposure has changed. Re-checking privacy settings, Wi-Fi router firmware, and other things that don't necessarily alert you when updates are due on a regular basis is part of keeping risks under control.
None of this is a guarantee. But if you've done a risk model and you've done what you can to minimize your exposure to security and privacy risk, at least the impact of something bad happening will (hopefully) be manageable. Threat models don't offer perfection, but they're pretty good for avoiding a full-blown disaster.

