The SAMATE Project Department of Homeland Security


Judging the Value of Static Analysis

Bill Pugh

Keynote address at SAS II, 8 & 9 November 2007

Abstract:

There is a great deal of work on developing new static analysis techniques that can find errors in software. However, efforts to judge or measure the effectiveness of static analysis at improving software security or quality are much less mature. Developers and managers want to know whether static analysis techniques are worth the investment in time and money, or whether those resources would be better spent elsewhere. Government agencies are interested in developing benchmarks or checklists that would let them determine which static analysis tools are approved, so that they can restrict government procurement to approved tools and to software checked with approved tools. Meanwhile, static analysis tool vendors are exceptionally secretive about the capabilities of their tools.

Unfortunately, there are no easy solutions. Getting from static analysis results to improved security and quality is a complicated process that isn't well understood or documented. In other areas, such as parallel programming, benchmarks have proven corrosive to good science, focusing immense resources on narrow problems that didn't address the real problems of the field. I believe that developing standard benchmarks for static analysis would have much the same effect.

I will talk about the problems associated with evaluating static analysis, and about some ways the field might improve the situation.