DefCon for Developers

My colleague Adam Kujawa recently wrote a great post about the Malwarebytes experience at the hacker convention DefCon this year.

By popular demand, here’s a round-up of my top four favorite DefCon talks from a development perspective:

1. “Stiltwalker”, by “DC949” (http://www.dc949.org/projects/stiltwalker)

I am sure everyone is familiar with reCAPTCHA. You have likely wasted hours of your life (in the aggregate) on it. The basic idea is that there are tasks (image or audio recognition of words or letters) that a machine cannot (usually!) perform reliably but that are very easy for humans, so performance on these tasks can distinguish a real person from a machine, like a bot on a forum or message board. The Stiltwalker talk was about a machine-learning attack on the audio version of reCAPTCHA: the speakers found that they could train a neural net to “beat” it using not much more than a few basic background-subtraction tricks. Depending on the precise implementation of CAPTCHA they tested, they achieved 60-99% accuracy, which is easily enough to consider the system “broken.” Really cool! Actually, I notice it’s already up on Wikipedia: http://en.wikipedia.org/wiki/ReCAPTCHA#Security
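The background-subtraction idea can be gestured at in a few lines of Python. This is purely illustrative (the real DC949 pipeline is not reproduced here): a typical first step is to estimate a noise floor from a known-quiet clip and keep only the frames whose energy rises above it, isolating the spoken words before they are fed to a classifier.

```python
# Illustrative sketch of audio background subtraction: estimate a noise
# floor from a quiet recording, then keep frames louder than that floor.
# The surviving segments are what an attack would hand to a classifier.

def frame_energies(samples, frame_len=160):
    """Mean squared amplitude of each fixed-size frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def active_frames(samples, noise_samples, margin=3.0, frame_len=160):
    """Indices of frames louder than margin times the noise floor."""
    noise_floor = max(frame_energies(noise_samples, frame_len))
    return [
        i for i, energy in enumerate(frame_energies(samples, frame_len))
        if energy > margin * noise_floor
    ]

# A quiet recording, and a recording with one loud "word" in the middle:
quiet = [0.01] * 1600
speech = [0.01] * 800 + [0.5] * 320 + [0.01] * 480
print(active_frames(speech, quiet))  # only the frames containing the word
```

A real attack would of course work on spectrograms rather than raw energy, but the point is how little machinery the front end needs before the neural net does the rest.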

2. “Hammer: Smashing Binary Formats into Bits” by Meredith Patterson and Dan “TQ” Hirsch

The lead-in to this talk was something to the effect of “have you ever used parser generators like Yacc or Bison? Don’t you hate them? Here’s something better.” Patterson and Hirsch then launched into an argument for “language-theoretic security”: the idea that virtually every parsing bug that turns into a security flaw (think along the lines of SQL injection) could be obviated by rigorous, well-specified parsing. See http://www.cs.dartmouth.edu/~sergey/langsec. Then they showed a parsing library they have written called “Hammer” (https://github.com/UpstandingHackers/hammer), which has quite honestly the prettiest syntax I’ve ever seen in a parsing library. I really want to find some time to play around with it.
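Hammer itself is a C library, so I won’t reproduce its API here, but the combinator style it offers is easy to gesture at with a toy Python sketch of my own: parsers are ordinary values you compose, so the grammar for a binary format reads almost like its specification.

```python
# A toy parser-combinator sketch. This is NOT Hammer's real API (Hammer
# is a C library); it only illustrates the declarative style, where
# small parsers compose into a readable grammar.

def byte(expected):
    """Match one specific byte value."""
    def parse(data, pos):
        if pos < len(data) and data[pos] == expected:
            return data[pos], pos + 1
        return None
    return parse

def uint8():
    """Match any single byte and return its value."""
    def parse(data, pos):
        if pos < len(data):
            return data[pos], pos + 1
        return None
    return parse

def sequence(*parsers):
    """Run parsers in order; if any fails, the whole parse fails."""
    def parse(data, pos):
        values = []
        for p in parsers:
            result = p(data, pos)
            if result is None:
                return None
            value, pos = result
            values.append(value)
        return values, pos
    return parse

# A hypothetical format: two-byte magic number, then a version byte.
header = sequence(byte(0x4D), byte(0x42), uint8())
assert header(bytes([0x4D, 0x42, 0x07]), 0) == ([0x4D, 0x42, 7], 3)
assert header(bytes([0x00, 0x42, 0x07]), 0) is None  # bad magic: rejected
```

The langsec point is that when the parser *is* the grammar, malformed input fails cleanly at the boundary instead of wandering into your program logic.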

3. “No More Hooks: Trustworthy Detection of Code Integrity Attacks” by Xeno Kovah and Corey Kallenberg

The idea here is very familiar to us at Malwarebytes: when malware runs at the same integrity level as security software (for instance, in the Windows kernel), many security and software professionals (you know who you are) throw up their hands and say “sorry, you’re screwed!” Malware could be screwing with your code so that even the results of your security software shouldn’t be trusted, they say. Might as well give up and reformat.

But our business at Malwarebytes is to do better, and so we ask: is there a way to verify the integrity of your in-memory code in a way that is actually robust? This problem is clearly of interest to us; it’s fundamentally what our Malwarebytes Chameleon has to do! One can verify that a file “came from Malwarebytes” by checking the digital signature of the image file on disk, but how do you know that your in-memory code hasn’t been modified, hooked, patched, etc.?

This talk proposed a timing-based system: the speakers defined a hash function over their code’s in-memory image, and then they hand-optimized the assembly until it ran as fast as they believed it possibly could. The system then works as follows:

(a) a server sends the program a “seed” or “salt” to initialize the hash function (this is so the malware can’t simply pre-compute the hash value, and patch the security software to send the appropriate value).

(b) the program hashes its own code in memory starting from this seed/salt, and returns the hash value to the server.

(c) the hash value has to match what the server thinks it should be, and also has to have been returned sufficiently fast that there likely wasn’t an “interception”.
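For concreteness, here is a minimal sketch of steps (a) through (c) in Python, with hashlib’s SHA-256 standing in for the speakers’ hand-optimized assembly hash and a 5 ms budget that is a number I made up for illustration:

```python
# Sketch of the attestation round-trip in steps (a)-(c). SHA-256 stands
# in for the speakers' hand-optimized hash; the time budget is invented.
import hashlib
import os
import time

def client_response(seed: bytes, code_image: bytes) -> bytes:
    """Step (b): hash the in-memory code, seeded by the server's nonce."""
    return hashlib.sha256(seed + code_image).digest()

def server_check(seed: bytes, response: bytes, elapsed_s: float,
                 expected_code: bytes, budget_s: float = 0.005) -> bool:
    """Step (c): the value must match AND it must have arrived fast
    enough that a hook intercepting the computation would be exposed."""
    expected = hashlib.sha256(seed + expected_code).digest()
    return response == expected and elapsed_s <= budget_s

# Step (a): a fresh random seed per challenge, so malware can't simply
# precompute the hash of the clean code and replay it.
seed = os.urandom(16)
code = b"\x8b\xff\x55\x8b\xec" * 50  # stand-in for the real code image
start = time.perf_counter()
resp = client_response(seed, code)
elapsed = time.perf_counter() - start
assert server_check(seed, resp, elapsed, code)
```

The security argument rests entirely on the budget being tight: if the hash runs at the provable speed limit of the hardware, any man-in-the-middle computation pushes the elapsed time over it.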

I was skeptical. Here were my objections, and their responses (some of which they brought up preemptively in the talk itself, and some of which they discussed afterwards with me directly):

Doug: “You’re not measuring the time it takes to compute the hash function, you’re measuring ( the time it takes to compute the hash function + the network round-trip time ). Doesn’t the standard deviation of the latter dwarf the magnitude of the former?”

The speakers showed data from their own network demonstrating that this was not the case: they could compute the hash function reliably in ~2 ms (± a very small amount), while the network round-trip time was ~150 ms (roughly, as I recall) ± 0.3 ms. So the hash computation time was actually larger than the standard deviation of the network round-trip time, which means a maliciously slowed-down computation would still stand out.

Doug: “Well, OK, but that happens to be the case on your internal corporate intranet with identical hardware set up specifically for this test. Would that be the case under anything but the most carefully controlled conditions, or over the internet? Basically, how generic is this result?”

The speakers conceded that it was not: the system is basically only practical under a very carefully controlled network topology with carefully chosen hardware. However, they then proposed an alternative system to eliminate this objection: use a hardware TPM on each client machine as the “trusted stopwatch” to time the hash computation, rather than the server’s clock. The hash and the measured duration would be signed by the TPM, and the signature verified on the server, to validate that this was in fact the authentically-measured time.

Doug: “OK, that’s a good idea. Of course, all this could be bypassed if an attacker patches out the entire set of instructions to validate with the server in the first place.”

The speakers agreed, but said that in this case (the “denial-of-service” case), the client’s failure to check in for a while should itself be an immediate red flag that there was some kind of compromise or interference, and the system administrator could be notified right away.

I was fairly satisfied! Certainly there are other potential weaknesses here: the speakers identified a potential time-of-check-to-time-of-use weakness in the system, and I wondered myself whether a “partial denial-of-service” attack was possible, in which the client checks in often enough that the server notices nothing amiss, while malicious interference still occurs some of the time. And of course, they could detect the difference between patched and unpatched code on their uniform corporate network with a single kind of hardware; who knows how well it would work in a heterogeneous environment. But overall it was an interesting idea, and well thought-out.

4. “We Have You By The Gadget” by Mickey Shkatov and Toby Kohlenberg

You know those Windows Sidebar gadgets that sit on the side of the desktop in Windows Vista and Windows 7? (http://en.wikipedia.org/wiki/Windows_Desktop_Gadgets) They are basically HTML and Javascript running in an IE interpreter on the desktop. Despite that, they can execute shell commands and are therefore fundamentally the same as any other program, which means you can exploit them if you can convince people to run your specially-crafted malicious gadgets. No great surprise there (news flash! there is a danger in running other people’s malicious code on your system!), but the takeaway message is that HTML- and Javascript-based scripts are still just code, and can still be malware. (Also, lots of gadget writers seem to suck: they don’t digitally sign their gadgets, and they pull Javascript in cleartext from a server and execute it on the fly. This is basic bad practice, people.)
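To make that last point concrete, here’s a hedged sketch (in Python, not the actual gadget API) of the bare minimum for “pull a script from a server and run it”: pin the digest of the exact script you audited, and refuse to execute anything that doesn’t match. Real gadgets should use TLS and Authenticode signing on top of this; the names below are my own.

```python
# Sketch of digest pinning for remote code: only run fetched bytes that
# hash to the digest of the script we actually reviewed. Hypothetical
# names; this is not the Windows gadget API.
import hashlib

# Digest of the exact (toy) script we reviewed and intend to run.
PINNED_DIGEST = hashlib.sha256(b"alert('hello');").hexdigest()

def safe_to_run(fetched_script: bytes, pinned_digest: str) -> bool:
    """Allow execution only if the fetched bytes match the pinned digest."""
    return hashlib.sha256(fetched_script).hexdigest() == pinned_digest

assert safe_to_run(b"alert('hello');", PINNED_DIGEST)
assert not safe_to_run(b"stealPasswords();", PINNED_DIGEST)  # tampered
```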

 —

Overall, DefCon was an insightful and rewarding experience for us development folk. We saw interesting talks, we met interesting people, and we learned a lot. Malwarebytes will be back next year!

ABOUT THE AUTHOR

Doug Swanson

I’m the Chief Technology Officer and I like to write code in Notepad.