ESP: MSR’s little helper

The Javascript/Treehydra version of the outparam usage checker is finally nearing completion: all that’s left is packaging it as a patch that can go into mozilla-central (plus the inevitable future debugging). In my last post, I mentioned that the checker is based on ESP, a program analysis technique invented at Microsoft Research. A few people have asked for a post about ESP (the paper is good, but very dense if you don’t have a PL research background), so here it is.

Why ESP?
First I should explain why I bothered implementing a new outparam checker design given that I had a working version based on theorem proving. The problem was that the theorem-proving version worked by analyzing “every” path in each method. Or rather, it would have worked if it could analyze every path. But a method with N if statements can have 2^N paths, and N gets big: Mozilla has a method with 8 million paths. Worse, methods with loops have an infinite number of paths. In practice, path-based analyses have to give up after about 1000 paths, leaving the rest unanalyzed.

In short, path-based analysis is very precise, but lacks coverage of all the code paths. Conversely, the abstract interpretation approach I showed in my previous post does cover all code paths, but it mixes them up so much that it ends up being too imprecise to work at all.

When I saw this problem, I remembered ESP right away, because the whole point of ESP is to get the precision of path-based analysis with the speed and coverage of abstract interpretation. But after reviewing the paper, I couldn’t really see how to make ESP solve the problems I described before, so I went the theorem proving route. But once I got stuck on the path explosion problem, I went back to it, and eventually it hit me. Now it seems kind of obvious. So, it seems like I should be able to explain ESP and its application to outparams in a way that makes it sound simple, but that turned out to be hard. Hopefully it’s at least comprehensible.

Abstract Interpretation Redux.
Previously, I tried out abstract interpretation with pen and paper and found that it didn’t even come close to working for outparams. (Reminder: abstract interpretation means running the code in a special interpreter that (a) tracks finite(-ish) abstract states instead of the standard program state, (b) goes both ways at branches and (c) merges state when control rejoins. This has the effect of running the method on every possible input value and every path in finite time. The price is that the output is abstract states instead of full detail.) Here are the results again (the table on the right shows the abstract state after abstractly interpreting each statement):

 1   nsresult SomeMethod(nsIX **out) {      out       rv   tmp   if.temp
 2     nsresult rv = doSomething();      not-written   ?
 3     tmp = rv;                         not-written   ?    ?
 4     if.temp = NS_SUCCEEDED(tmp)       not-written   ?    ?      ?
 5     if (if.temp) {                    not-written   ?    ?    true
 6       out = mValue;                       written   ?    ?    true
 7       return NS_OK;                       written   ?    ?    true
 8     } else {                          not-written   ?    ?    false
 9       return rv;                      not-written   ?    ?    false
10     }
11   }

These analysis results are too imprecise to check the return on line 9: rv is unknown, so the analysis has to assume that the return value could be success, which is an error because out has not been written at this point. Note that the abstract interpretation never had any information about rv. Clearly, total ignorance about rv just won’t work, and any algorithm that works must track the relationship between out and rv that is created by line 2.
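
To make the loss of information concrete, here is a minimal sketch in Javascript of the single-row abstract interpreter above. The names and abstract values are illustrative only, not the checker’s actual API; the point is that with a single row, a call whose result is unknown can only be recorded as “?”, and everything computed from “?” is also “?”.

    // rv = doSomething(): with only one row, the call result can only be
    // recorded as '?'.
    function interpCall(state, dest) {
      return { ...state, [dest]: '?' };
    }

    // if.temp = NS_SUCCEEDED(src): anything computed from '?' is '?' as well,
    // so the correlation between rv and if.temp has nowhere to live.
    function interpNsSucceeded(state, dest, src) {
      const v = state[src];
      const result = v === 'succ' ? true : v === 'fail' ? false : '?';
      return { ...state, [dest]: result };
    }

    let s = { out: 'not-written' };
    s = interpCall(s, 'rv');                    // { out: 'not-written', rv: '?' }
    s = interpNsSucceeded(s, 'if.temp', 'rv');  // if.temp is '?' too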

A Smarter Abstract State Space.
Abstract interpretation can track that relationship, but it needs to use a more complicated abstract state than the one I implicitly used above. The abstract state in my table above is a mapping of variables to abstract values. (Compare with the real program state, which is a mapping of variables to C++ values.) That’s the simplest and most common abstract state, but there’s really nothing special about it. An abstract state can be any representation of a set of program states: the game is to choose an abstract state space that is “fine” enough to represent the information we need, but no finer, so the abstract states stay small and simple.

We need a state space that can represent facts like “if.temp is true iff tmp is a success code”. I can write that more explicitly as, “if.temp is true and tmp is a success code, or if.temp is false and tmp is a failure code.” And that looks just like the “or” of two mappings of variables to abstract values. So, it looks like we can use an abstract state that’s just like our original state, except allowing multiple “table rows”. If we code the abstract interpreter to use multiple rows when it can, the results of abstract interpretation will come out like this (showing the states between the statements so it’s easier to separate the rows):

 1   nsresult SomeMethod(nsIX **out) {      out         rv    tmp   if.temp
                                         not-written
 2     nsresult rv = doSomething();
                                         not-written   succ
                                         not-written   fail
 3     tmp = rv;
                                         not-written   succ  succ
                                         not-written   fail  fail
 4     if.temp = NS_SUCCEEDED(tmp)
                                         not-written   succ  succ    true
                                         not-written   fail  fail    false
 5     if (if.temp) {
                                         not-written   succ  succ    true
 6       out = mValue;
                                             written   succ  succ    true
 7       return NS_OK;
 8     } else {
                                         not-written   fail  fail    false
 9       return rv;
10     }
11   }

These results are detailed enough to check outparams perfectly!

A few things to note: In abstractly interpreting line 2, we don’t know the results exactly, but instead of generating a lot of “unknown” abstract values, we generate multiple rows, establishing the correlation among results. Now on lines 3 and 4, we have a multiple-row state, so we abstractly interpret the statements on each row independently. Finally, line 5 is a conditional guard, so at that point, we filter out all the rows that don’t match the guard (because the program wouldn’t execute this path in those states). Each of these features is another detail that has to be noticed and coded up in the analysis, but they all fit naturally into the framework of interpreting statements on abstract states.
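
Here is a sketch, again with illustrative names rather than the checker’s actual code, of the three operations that make the multiple-row version work: a call with an unknown result splits each row into one row per possible outcome, ordinary statements are interpreted row by row, and a conditional guard filters out the rows that contradict it.

    // rv = doSomething(): instead of one row with '?', emit one row per
    // possible outcome, so later statements can preserve the correlation.
    function interpCallMulti(rows, dest) {
      return rows.flatMap(row => [
        { ...row, [dest]: 'succ' },
        { ...row, [dest]: 'fail' },
      ]);
    }

    // Interpret an ordinary statement on each row independently.
    function interpEach(rows, transfer) {
      return rows.map(transfer);
    }

    // if (guardVar): keep only the rows consistent with the branch taken.
    function filterByGuard(rows, guardVar, branchTaken) {
      return rows.filter(row => row[guardVar] === branchTaken);
    }

    let rows = [{ out: 'not-written' }];
    rows = interpCallMulti(rows, 'rv');                               // succ row, fail row
    rows = interpEach(rows, r => ({ ...r, tmp: r.rv }));              // tmp = rv
    rows = interpEach(rows, r => ({ ...r, 'if.temp': r.tmp === 'succ' }));
    const thenRows = filterByGuard(rows, 'if.temp', true);            // only the succ row
    const elseRows = filterByGuard(rows, 'if.temp', false);           // only the fail row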

Path Sensitivity.
This version of the analysis is actually path-sensitive, because if different paths generate different states, those states will be kept as separate rows. Here’s an example:

nsresult OtherMethod(nsIX **out1, nsIX **out2) {
                                        out1          out2         rv    if.temp
                                    not-written   not-written
  nsresult rv = doSomething();
                                    not-written   not-written   success
                                    not-written   not-written   failure
  if.temp = NS_SUCCEEDED(rv);
                                    not-written   not-written   success   true
                                    not-written   not-written   failure   false
  if (if.temp) {
                                    not-written   not-written   success   true
    out1 = mFoo;
                                A:      written   not-written   success   true
  } else {
                                B:  not-written   not-written   failure   false
  }
                                C:  // Join point -- state is union of A and B.
                                        written   not-written   success   true
                                    not-written   not-written   failure   false
  doMoreStuff();
                                        written   not-written   success   true
                                    not-written   not-written   failure   false
  if (if.temp) {
                                        written   not-written   success   true
    out2 = mBar;
                                        written       written   success   true
  } else {
                                    not-written   not-written   failure   false
  }
                                     // Join point
                                        written       written   success   true
                                    not-written   not-written   failure   false
  return rv;
}

It’s kind of hard to read, but the key point is that there are two ifs with the same guard, and to analyze the method correctly, we need to know that of the 4 possible paths, only 2 can actually be taken. State C is the important one: after finishing the first if, at the join point we merge the states by simply collecting all the rows. Each path has a different row, and the rows stay separate, so on the second if, the analysis executes the then branch only in the states generated by the first then branch.

This is actually the kind of thing the ESP authors were most concerned with in their paper. It’s pretty neat but the problems I had look very different, which is why it took me so long to see the connection.

A nice thing about this kind of path sensitivity is that if the state is the same along two branches, the rows will “rejoin” at the join point, essentially forgetting that there was a branch (because it didn’t really matter anyway). It also works with loops.
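
The join itself is simple under the same sketch as before: the merged state is just the union of the incoming rows, and collapsing exact duplicates is what makes identical branches rejoin.

    // Join point: union of the incoming rows, with duplicates collapsed.
    // If both branches produced the same row, the branch leaves no trace.
    function joinRows(rowsA, rowsB) {
      const seen = new Set();
      const result = [];
      for (const row of [...rowsA, ...rowsB]) {
        const key = JSON.stringify(row, Object.keys(row).sort());  // canonical form
        if (!seen.has(key)) {
          seen.add(key);
          result.push(row);
        }
      }
      return result;
    }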

The problem is that although we don’t exactly get path explosion anymore, we can get “row explosion”: if there are M variables, and each has 2 possible abstract values, we can get 2^M rows in the state. And M can easily get big enough in Mozilla to run out of memory.

ESP.
This is where ESP comes into play. The insight of ESP is that there are some variables you care about a lot (which the ESP authors call property variables), and others you care about only as far as they relate to the property variables (which the ESP authors call execution variables). (For example, for outparams, the property variables are the outparams and any variables whose values can reach a return statement.) So, if there are only a few property variables, and we have a way to track only the property variables path-sensitively, we can be precise on the things we care about without row explosion.

ESP does this very simply: it just takes our multiple-row states and adds a primary key, namely the set of property variables. Thus, property value combinations and relations are always tracked precisely. Execution variables are tracked as one mapping per property value combination, just as in the basic abstract interpretation. Because of primary key uniqueness, if there are K property variables, there can be no more than 2^K rows in a state, so if K is smaller than 10 or so, the states are small enough to analyze in reasonable time.

An ESP analysis looks a lot like our path-sensitive abstract interpretation, except that after each operation, it “collects” rows together to maintain the primary key uniqueness property. For example, if P is a property variable and E is an execution variable, and we need to merge this state:

    P = true,    E = false
    P = false,   E = false

with this state:

    P = true,    E = true

we take the union of rows as before to get this:

    P = true,    E = false
    P = false,   E = false
    P = true,    E = true

but then we merge together rows with the same primary key, yielding:

    P = true,    E = anything
    P = false,   E = false
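
In code, the merge could look like the sketch below (still with illustrative names, not the checker’s actual code): each row is split into a map of property variables, which acts as the primary key, and a map of execution variables; rows from both states are collected, rows that share a property-value combination are collapsed, and any execution variable they disagree on drops to “anything” (written '?' here).

    // ESP-style merge.  Each row is { prop: {...}, exec: {...} }, where prop
    // is the primary key.  Rows with equal prop maps are collapsed, and
    // execution variables that disagree between them become '?' ("anything").
    function espMerge(stateA, stateB) {
      const byKey = new Map();
      for (const row of [...stateA, ...stateB]) {
        const key = JSON.stringify(row.prop, Object.keys(row.prop).sort());
        const existing = byKey.get(key);
        if (!existing) {
          byKey.set(key, { prop: { ...row.prop }, exec: { ...row.exec } });
        } else {
          for (const v of Object.keys(row.exec)) {
            if (existing.exec[v] !== row.exec[v]) existing.exec[v] = '?';
          }
        }
      }
      return [...byKey.values()];
    }

    const a = [{ prop: { P: true },  exec: { E: false } },
               { prop: { P: false }, exec: { E: false } }];
    const b = [{ prop: { P: true },  exec: { E: true } }];
    espMerge(a, b);
    // -> [ { prop: { P: true },  exec: { E: '?' } },
    //      { prop: { P: false }, exec: { E: false } } ]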

The significance of ESP for outparams is that all Mozilla methods have only a few outparams and return value variables, so the analysis runs fast no matter how many other “unimportant” variables are in the method.

A small tweak.
Actually, that’s not quite true. GCC generates a temporary variable for each return statement, so if there are 30 return statements, there are 30 temporary variables, and the state can grow to 2^30 rows. That does happen, and it does make the analysis run out of memory.
Fortunately, I was able to fix this with just a small tweak to ESP. The temporary variables are only “live” between the point where they are created and where they are copied to another return variable, and their values don’t matter at all outside that live range. At any given point in the method, only a few temporaries are live. So I can keep the number of property variables small by “demoting” return-value temporaries to execution variables once they are dead. And demotion is trivial to implement: just set the abstract value to any one value, because we’ll never read it anyway.
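
As a sketch of the demotion, again with made-up names: once a return temporary’s live range ends, its abstract value is pinned to one placeholder and it stops contributing to the primary key, so rows that now agree on the remaining property variables get collapsed at the next merge.

    // Demote dead return temporaries: move each one out of the primary key
    // and pin it to a single value (it is never read again, so any value works).
    function demoteDeadTemps(rows, deadTemps) {
      return rows.map(row => {
        const prop = { ...row.prop };
        const exec = { ...row.exec };
        for (const t of deadTemps) {
          if (t in prop) {
            delete prop[t];
            exec[t] = 'dead';
          }
        }
        return { prop, exec };
      });
    }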

The whole outparam analysis came out to about 2500 lines of Javascript, but a lot of that was adapter code to simplify the Treehydra API, plus subsidiary analyses to find return value variables and their live ranges. The ESP framework was 450 lines, and the outparam abstract interpreter was another 800 lines. It runs in reasonable time too, without any optimization effort yet. I haven’t measured it exactly, but I think it’s less than 20 minutes on 1970 C++ files of Mozilla on a 4-processor machine. I guess you wouldn’t want to run it on every build, but if you’re only changing a few .cpp files, it shouldn’t be too bad.

Comments

Comment from AndersH
Time: April 19, 2008, 12:16 am

Since you are trying to store a state table compactly, you could look at BDD (binary decision diagrams), or rather, in the case where a variable can have more than two states, NDD (N-ary decision diagrams). It seems the operations you need (state removal on guards and state union) can be done on the compressed representation. Of course, the size of the NDD might also explode, but in that case, you could just revert to the approximated state.

Comment from AJ
Time: April 19, 2008, 1:01 pm

Awesome post. Not sure if this is because I’m familiar with the material already, but I thought your descriptions were clear and elegant. A fun read!

Only complaint is that the code font is small enough that I’m too lazy to read it all. I know you’re dealing with a fixed-width column, but you’ve got 1″ margins to play with, and you could just widen the column too ;).

Comment from dmandelin
Time: April 21, 2008, 10:00 am

AndersH:

Good idea. It’s funny–I was looking at a paper on BDD-based analysis for formatting ideas for another paper during the last couple weeks, but never once thought to connect it to *this* problem. If we find we need more precision for some analysis, that could be a great project for an intern this summer (which is something I’ve been thinking about lately).

AJ:

Thanks. I don’t know how understandable it is to general audiences either, but I think I found a different, easier-to-understand explanation because I ended up coming at the problem from a different direction.

Sorry about the fonts: I know they’re not good, but I’m not exactly teh l33t WordPress user, so it’s actually a pretty big pain for me to get the ‘pre’ blocks in there at all. Also, it seems to eat inline styles if I try to add those to customize things. I know stylesheets can be edited, but I think you might need to have login access to edit PHP files or something. I guess I could ask the hosting people if they can create some better styles for code.
