Reading: Protection is a Software Issue

Last semester, I got a fair bit of active paper reading done because classes required it; this semester, not so much. This is alarming. Further, this space, which was intended to chronicle my everyday adventures as a graduate student, has not been put to much use. I think I'll write down summaries of interesting papers I come across, just to get into the practice and as a way of motivating myself to keep notes. (With practice, fluency and clarity should follow, right? I guess a wiki would be ideal, but something's better than nothing, eh?)

I have been thinking about security and protection a bit; security that doesn't compromise efficiency, maintainability, ease of construction and the like – properties that we expect from any reasonable software construction toolkit. Roshan pointed me at a 1995 paper when I had a chance to pick his brain on the matter: Protection is a Software Issue (postscript file) by Brian Bershad et al.

The premise is familiar: software crashes. And it is embarrassing to all of us, real computer scientists, engineers and aspirants alike. We have tried hardware approaches, software approaches, and hardware-supported privilege mechanisms exposed to software (such as x86 "rings"); they do help, but they haven't solved the problem entirely. Separation of privileges in hardware (such as memory protection and privileged instructions) does not solve protection problems at higher levels of abstraction (such as files, sockets, processes, window system objects…); this duty belongs to the operating system.

"It's all a bunch of if statements!"

This paper argues that software protection models have advantages over hardware ones, because software is flexible, precise (ed: this is one questionable claim indeed), amenable to optimization, and offers finer granularity. Protection mechanisms are classified into two kinds: those that name things (where things are resources), and those that deny access to things so they can't be misused. In both hardware and software, they're implemented in terms of (1) conditionals, (2) data scoping, (3) address spaces, and (4) memory protection.

The first two are software mechanisms, and the last two are usually hardware mechanisms. Choosing a suitable mechanism for a resource is an engineering problem of balancing suitability against execution cost. Co-operation (or "synergy", if you will) between hardware and software is assumed.
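To make the two software mechanisms concrete, here's a tiny sketch in Python (all names are mine, not the paper's): a conditional that denies access to a named resource, and data scoping that keeps a resource from being named by unauthorized code in the first place.

```python
# (1) Protection by conditional: every access runs an explicit check.
ACL = {"alice": {"notes.txt"}}  # hypothetical access-control list

def read(principal, filename):
    if filename not in ACL.get(principal, set()):  # the "if statement"
        raise PermissionError(filename)
    return f"contents of {filename}"

# (2) Protection by data scoping: the resource name is captured in a
# closure; code that wasn't handed the reader can't even refer to the
# file, so no run-time check is needed at the point of use.
def make_reader(filename):
    def reader():
        return f"contents of {filename}"
    return reader

print(read("alice", "notes.txt"))  # allowed by the ACL check
notes = make_reader("notes.txt")
print(notes())                     # allowed by possession alone
```

The first style denies access to things; the second controls who can name things, which is exactly the paper's split.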

Protection mechanisms are all about constraints. What if we can establish constraints statically? Checks can be delegated to compile time, thus simplifying run time. Clearly, assumptions about the code – do not violate interfaces! do not touch things you aren't supposed to touch! do not overrun your CPU time! – improve the structure and performance of run-time systems. In summary, we'll use fewer if statements, with confidence.
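As a sketch of what delegating a check to compile time can look like (hypothetical names; the idiom is mine, not the paper's), here's a Python version where a distinct type stands in for the property "this value was checked". The check runs once at the trust boundary, and a static checker such as mypy rejects unchecked values before the program runs, so the point of use needs no if statement.

```python
from typing import NewType

# A SafePath is a str that has passed sanitize(); the type records the fact.
SafePath = NewType("SafePath", str)

def sanitize(raw: str) -> SafePath:
    # The single run-time check, performed once at the trust boundary.
    if ".." in raw or raw.startswith("/"):
        raise ValueError(raw)
    return SafePath(raw)

def open_resource(path: SafePath) -> str:
    # No check here: the type says `path` was already sanitized.
    return f"opened {path}"

print(open_resource(sanitize("data/report.txt")))
# open_resource("../etc/passwd")  # mypy rejects this: str is not SafePath
```

Python's runtime won't enforce the distinction by itself, but a static checker will; in a language with a real static type system the guarantee comes for free, which is the "fewer if statements, with confidence" payoff.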

And… that's it?

Reading a 1995 paper about protection in 2012 is a little underwhelming, or, to use a broad euphemism, mainly of historical interest. What would really be interesting is to look at where we went from there. Much work has been done since; see compcert and crash-safe, for example. For a while I was interested in the work Jonathan Shapiro and his group have been doing at Johns Hopkins, including EROS, Coyotos, and BitC. Amal Ahmed taught an entire graduate seminar course at IU on Language-Based Approaches to Security. This was before I came to IU; they seem to have read a fascinating amount of material, and I'm naturally bummed that I missed it. Amal has also linked to Robert Harper's Languages and Logics for Security course at CMU, which is again rich with content.

And the conclusion is? There's an awful lot to read, so go read!