CS 261 Homework 2 Solutions

Instructions

Questions are in black.

Suggested answers are in red.

Other possible answers are in green. (For a perfect score, it suffices to give only the answers in red. The answers in green are optional alternatives, answers suggested by other students, or comments from me.)

Question 1

  1. It is popular these days to use cookies as a means of authentication for access to web pages. [...]
    What are the weaknesses of this scheme?

    An eavesdropper can capture the cookie with a packet sniffer, then replay it to gain unauthorized access (a short sketch of such a replay appears after this question).

    There are many other weaknesses, of course. For instance, cookies might leak through cross-site scripting vulnerabilities (see here or here or here for more).

    Also, web browsers use only name-based authentication when deciding whether to send cookies (e.g., is the domain name in the current URL the same as the one that set the cookie?), so an attacker who can spoof DNS responses can steal cookies. TCP hijacking would also do the trick.

  2. Another mechanism in the standard uses challenge-response authentication. [...]
    Assume the crypto algorithm is perfect (there are no mathematical shortcut attacks against it). What attacks are there on this scheme?

    Use TCP hijacking to take control of the connection after the legitimate client has correctly responded to the challenge; this gives the hijacker unauthorized access (see the second sketch after this question).

    Another possibility is a man-in-the-middle attack, perhaps using DNS spoofing, TCP hijacking, or some other way to steal the channel before the authentication process even begins.
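
To make the cookie-replay weakness in part 1 concrete, here is a minimal Python sketch. It is only an illustration: the hostname, path, and cookie name/value are made up and do not come from the assignment.

    # Hypothetical replay of a sniffed authentication cookie.
    import requests

    # Cookie value an eavesdropper captured off the wire with a sniffer.
    stolen_cookie = {"session": "d41d8cd98f00b204e9800998ecf8427e"}

    # Presenting the captured cookie is enough to impersonate the victim,
    # because the server treats possession of the cookie as proof of identity.
    resp = requests.get("http://victim.example.org/private/grades.html",
                        cookies=stolen_cookie)
    print(resp.status_code)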
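
For part 2, here is a minimal sketch of a challenge-response exchange, assuming a shared secret key and HMAC-SHA256 standing in for the "perfect" crypto algorithm (the actual standard may use a different construction). The point is that verification happens once, at connection setup, so nothing binds later traffic to the party who answered the challenge.

    import hmac, hashlib, os

    SHARED_KEY = b"client-and-server-shared-secret"   # assumed out-of-band secret

    def make_challenge():
        # The server picks a fresh random nonce so an old response cannot be replayed.
        return os.urandom(16)

    def client_response(key, challenge):
        # The client proves knowledge of the key without ever transmitting it.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def server_verify(key, challenge, response):
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = make_challenge()
    assert server_verify(SHARED_KEY, challenge, client_response(SHARED_KEY, challenge))

    # The check above happens only once, at the start of the session. Nothing ties
    # subsequent packets to the authenticated party, which is why hijacking the TCP
    # connection afterwards yields unauthorized access.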

Question 2

For this question, you will practice doing a security audit of some interesting program. [...]

Your audits can be found on the Sardonix website. Since your responses varied greatly according to what program you chose to audit, I thought I would merely mention some statistics about the audits you performed.

4 (of 20) people specifically praised the programs they audited as well-designed and/or easy to verify for correctness (the size of these programs varied from 5k to 60k lines of code, mostly on the smaller side).

7 (of 20) specifically lamented that their program was too complex or poorly-structured to verify or audit (4k-217k lines of code).

Also, about 10 out of 20 people found potentially exploitable security holes (4k-35k LoC).

It was interesting that one program happened to be independently audited by two people, and they found different sets of bugs. Of course, it seems highly unlikely that any audit could be exhaustive given the time constraints you were under, so this should probably come as no surprise. Still, it makes one wonder how many security holes were missed for every one you found.

Looking over all these statistics, one possible lesson is that you can find poorly-structured programs of all sizes, but all else being equal, the programs that are best-structured and easiest to verify also tend to be small. In addition, it seems that programs of all sizes can have security holes, but in this exercise, most security holes were found in smaller programs, so maybe bugs in larger programs are harder to find. Finally, it is a little depressing that only a few hours of study uncovered vulnerabilities in many security-critical programs.

I hope this problem hinted at the prevalence of insecure applications out there. I hope it also got you thinking about how to build programs that can be successfully audited, not just by the original programmer, but by others as well. And getting you acquainted with one of the tools that can help you avoid certain classes of security holes is not a bad thing, either. Thanks for contributing to the quality of various widely-used software packages through this exercise!