Thursday, December 20, 2012

Real-world security topics

Lots of security research is theoretical and abstract, but there is also plenty of concrete, immediate work worth keeping abreast of.

To wit:

  • A nice post on the SpiderLabs blog reviews the recent Microsoft BlueHat conference and the Seattle B-Sides conference, as well as some other related work.
    The talks at BlueHat started with Ellen Cram Kowalczyk’s talk on Fraud and Abuse, in which she discussed how prevalent fraud is and how easily the answers to password recovery questions can be recovered from online data. Examples of randomly generated passwords were presented, and one attendee was able to recite one back to her. She ended her talk with an Internet scavenger hunt for answers to what could be password recovery questions for an online account. A simple Internet search can deliver information about your mother’s maiden name, your first car, the last school you attended, and even the food you hate most.
  • Piecing together the bits of your identity from information scattered around the net is one of the ways that computers can figure out who you are, even when you're trying not to let them know. The somewhat awkward term "deanonymization" describes this process, and Professor Arvind Narayanan of Princeton brings us up to date on the current state of the art in this area: New Developments in Deanonymization.
    Let’s move on to the more traditional scenario of deanonymization of a dataset by combining it with an auxiliary, public dataset which has users’ identities. Srivatsa and Hicks have a new paper with demonstrations of deanonymization of mobility traces, i.e., logs of users’ locations over time. They use public social networks as auxiliary information, based on the insight that pairs of people who are friends are more likely to meet with each other physically.
    The Srivatsa/Hicks paper is here: Deanonymizing Mobility Traces: Using Social Networks as a Side-Channel. (A toy sketch of this style of graph matching appears after this list.)
    We examine a two step solution to match the contact graph against the social network. In the first step we bootstrap the matching problem by exploiting inherent heterogeneity in the graphs to identify landmark nodes. In the second step we extend a mapping between landmark nodes to all the nodes in the graph by identifying discriminating features in the original graph.
    As Professor Narayanan observes, this is not just academic research:
    I have been approached multiple times by organizations who wanted me to deanonymize a database they’d acquired, and I’ve had friends in different industries mention casually that what they do on a daily basis to combine different databases together is essentially deanonymization.
  • Since a substantial part of the problem involves malware that, in one way or another, tricks you into revealing something you shouldn't, an interesting approach is to make your computer more alert, so that it does a better job of stopping you from accidentally being fooled. Google's Chrome browser has always been an exemplar of this approach, but two other recent, unrelated ideas caught my eye:
    • A nice paper by Chuan Yue suggests various ways in which browsers can help users by alerting them to situations where a password request may be inappropriate: Preventing the Revealing of Online Passwords to Inappropriate Websites with LoginInspector. (A rough sketch of the core bookkeeping appears after this list.)
      The key idea of LoginInspector is to continuously monitor a user’s login actions and securely store hashed domain-specific successful login information to an in-browser database. Later on, whenever the user attempts to log into a website that does not have the corresponding successful login record, LoginInspector will warn and enable the user to make an informed decision on whether to really send this login information to the website.
    • An MIT team has built a nice Gmail extension that looks at your inbox and helps you figure out when an email contains untrustworthy information: LazyTruth
      The LazyTruth inbox extension surfaces pre-existing verified information to debunk viral rumors when the information is needed most: in our inboxes. The gadget is triggered by the unique phrases used in the most common viral emails tracked by factchecking and urban rumor websites.

      When you receive a viral email full of fallacies, LazyTruth retrieves and displays a verified rebuttal, and provides you with the original sources. It all happens right in your inbox, without requiring you to search anywhere.

  • As Ross Anderson points out so clearly, a major component of computer security involves how the economic incentives are arranged. Economic incentives are a social contract, not a technology, and so you have to think about them differently.

    A particularly vivid illustration of these economic incentives occurs with vulnerability disclosure, as Princeton Professor Ed Felten discusses on his blog: You found a security hole. Now what?

    As a researcher I have always felt that when a company is willing to engage constructively, the ethical course is to cooperate with them for the benefit of the public.

    That approach becomes harder to sustain when the perceived risk of legal action, whether due to an overzealous lawyer or a research error, gets larger.

    At the same time, an alternative outlet for vulnerability information is emerging–selling the information. In principle it could be sold to the maker of the flawed product, but they probably won’t be the high bidder. More likely, the information will be sold to a government, to a company, or to a broker who will presumably re-sell it.
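
Going back to the Srivatsa/Hicks item above, here is a toy sketch (in Python) of what landmark-based graph matching can look like. To be clear, this is my own illustrative simplification, not the paper's algorithm: the graphs are made up, the "landmark" heuristic is just degree rank, and the mapping is extended greedily by counting already-mapped common neighbors.

    # Toy sketch of two-step deanonymization by graph matching.
    # Step 1: pair up "landmark" nodes (here: simply the highest-degree nodes).
    # Step 2: greedily extend the landmark mapping to the remaining nodes by
    #         matching on already-mapped common neighbors.

    def degree_landmarks(graph, k):
        """Pick the k highest-degree nodes as landmarks."""
        return sorted(graph, key=lambda n: len(graph[n]), reverse=True)[:k]

    def match_graphs(contact, social, k=1):
        """Map nodes of an anonymous contact graph onto a public social network.

        contact, social: dicts mapping node -> set of neighboring nodes.
        Returns a dict {contact_node: social_node}.
        """
        # Step 1: bootstrap the mapping by pairing landmarks by degree rank
        # (a crude stand-in for "inherent heterogeneity in the graphs").
        mapping = dict(zip(degree_landmarks(contact, k),
                           degree_landmarks(social, k)))

        # Step 2: extend the mapping, matching each remaining node to the
        # candidate whose friends best overlap its already-mapped contacts.
        candidates = set(social) - set(mapping.values())
        for node in (n for n in contact if n not in mapping):
            if not candidates:
                break
            mapped_contacts = {mapping[v] for v in contact[node] if v in mapping}
            best = max(candidates, key=lambda c: len(social[c] & mapped_contacts))
            mapping[node] = best
            candidates.remove(best)
        return mapping

    # A tiny, made-up example: the one "hub" user in the anonymous trace lines
    # up with the one well-connected person in the public network.
    contact = {"u1": {"u2", "u3", "u4"}, "u2": {"u1", "u3"},
               "u3": {"u1", "u2"}, "u4": {"u1"}}
    social = {"alice": {"bob", "carol", "dave"}, "bob": {"alice", "carol"},
              "carol": {"alice", "bob"}, "dave": {"alice"}}
    print(match_graphs(contact, social))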

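Similarly, for the LoginInspector item above, here is a minimal sketch of the kind of bookkeeping such a tool needs, as I understand it from the summary quoted earlier: hash domain-specific successful logins into a local store, and warn when a password is about to be sent to a site with no matching record. The hashing scheme and warning policy below are my own illustrative assumptions, not the paper's actual design.

    import hashlib

    class LoginInspectorSketch:
        """Illustrative stand-in for an in-browser store of hashed login records."""

        def __init__(self):
            self._records = set()  # hashed (domain, username, password) records

        @staticmethod
        def _fingerprint(domain, username, password):
            # Hash the domain together with the credentials, so records are
            # domain-specific and the store never holds plaintext secrets.
            material = "\x00".join((domain, username, password)).encode("utf-8")
            return hashlib.sha256(material).hexdigest()

        def record_successful_login(self, domain, username, password):
            """Call when a login at `domain` is observed to succeed."""
            self._records.add(self._fingerprint(domain, username, password))

        def should_warn(self, domain, username, password):
            """True if these credentials have never succeeded at `domain`,
            i.e. the user may be about to reveal a password to the wrong site."""
            return self._fingerprint(domain, username, password) not in self._records

    inspector = LoginInspectorSketch()
    inspector.record_successful_login("bank.example", "alice", "s3cret")

    # Same credentials typed at a look-alike domain: no matching record, so warn.
    print(inspector.should_warn("bank-example.phish.test", "alice", "s3cret"))  # True
    print(inspector.should_warn("bank.example", "alice", "s3cret"))             # False
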
Personally, I've never discovered an important security vulnerability in someone else's software on my own. However, I have been involved with the opposite side of the coin: I've had multiple situations in which security researchers have contacted me (more precisely: my organization) with notice of a newly-discovered vulnerability.

Interestingly, I've had about the same number of such incidents in my open source work as in my industry work.

Happily, in all the situations I was involved with, the researchers notified us first (as far as we know), and were willing to give us a reasonable amount of time to repair the problem and distribute a fix prior to announcing the vulnerability to the world.

I believe that this record of success is at least partly due to the fact that the organizations I've been involved with have also responded appropriately: expressing gratitude to the external party who notified us of the problem, and moving immediately to address the issue.

It seems like the world ought to be able to work this way, but as the iPad identity leak incident demonstrates, it's easy for this fragile approach to spin wildly out of control.
