
Static Analysis Summit II


November 8 & 9, 2007

at SIGAda 2007
Hyatt Fair Lakes Hotel
Fairfax, Virginia, USA


PURPOSE 

"Black-box" software testing cannot realistically find maliciously implanted Trojan horses or subtle errors which have many preconditions. For maximum reliability and assurance, static analysis must be applied to all levels of software artifacts, from models to source code to byte code to binaries. Static analyzers are quite capable and are developing quickly. Yet, developers, auditors, and examiners could use far more capabilities. As noted in the CFP the goal of this summit is to convene researchers, developers, and government and industrial users to define obstacles to such urgently-needed capabilities and try to identify feasible approaches to overcome them, either engineering ("solved" problems) or research.

This follows the first Static Analysis Summit in June 2006. The next workshop will be co-located with PLDI in 2008.

We solicit contributions of papers or proposals for discussion sessions. Contributions should describe basic research, applications, experience, or proposals relevant to static analysis tools, techniques, and their evaluation. Questions and topics of interest include but are not limited to:

  • How can embedded, SCADA, and other uncommon systems be analyzed?
  • Binaries need to be handled better - how is that possible?
Binary analysis is important for third-party validation, legacy systems, contract work, and getting by without a trusted compiler. How can binary analysis capabilities be developed while minimizing theft of intellectual property?
  • Obfuscation vs. analysis - which will win?
Malware writers use obfuscation to disguise their programs and hide their exploits and behavior. Good guys need powerful analysis to crack the malware quickly. Good guys also use obfuscation to protect intellectual property and, in military applications, to hinder enemies from figuring out weapon systems (remember the Death Star?). They don't want the bad guys to be able to crack their techniques. So will the obfuscators or the analysts win? Why?
  • Good software starts with good design. What is the state of the art in static analysis, especially for security, at the design or requirements level?
  • Higher-level function extraction.
Static analyzers identify specific patterns, but sometimes we want a tool to summarize a program's behavior in higher-level abstractions, such as sorting or encoding a message. What can be done? Where is it most beneficial? What are the approaches?
  • Temporal and inter-tool information sharing.
Static analyzers do a lot of work to produce their results, for instance data and control flow graphs, yet that work is generally not available to other tools or even to the same tool on subsequent runs. Can we define useful information (and representations) to share with other tools to build assurance cases? Could we speed up or improve analysis with certificates or hints from earlier stages? Could we hook the output to the input and get useful incremental analysis? Are concepts like proof-carrying code helpful? (A toy sketch of this idea appears after this list of topics.)
  • What is the minimum performance bar for a source code security analyzer?
  • Static analysis' contribution to software security assurance
  • Flaw catching effectiveness of methods, techniques, or tools
  • Benchmarks or reference datasets
  • Formal pattern languages to describe vulnerabilities.
Most static analyzers have a means for the user to specify what to search for; unfortunately, each analyzer's means is different. Significant resources are going into writing a public library of security weaknesses; see http://cwe.mitre.org/. Formally describing a pattern is very tedious, so it would be helpful if different tools and processes could share patterns. The community should discuss what form of language or specification makes the most sense.
  • Software security assurance metrics
  • User experience drawing useful lessons or comparisons
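
To make the inter-tool information sharing topic above concrete, consider one analyzer exporting an intermediate result so that another tool, or a later run of the same tool, can reuse it instead of recomputing it. The sketch below is a minimal, hypothetical example, not drawn from any summit submission: it extracts a crude call graph from Python source using the standard ast module and writes it out as plain JSON. The callgraph.json file name and the JSON layout are illustrative assumptions, not an existing exchange format.

```python
# Hypothetical sketch: export an analyzer's intermediate result (a crude
# call graph) in a plain, shareable form. Not a real exchange format.
import ast
import json

def extract_call_graph(source: str) -> dict:
    """Return {function name: sorted list of names it calls directly}."""
    tree = ast.parse(source)
    graph = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            calls = [
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            ]
            graph[node.name] = sorted(set(calls))
    return graph

if __name__ == "__main__":
    example = """
def parse(data):
    return decode(data)

def decode(data):
    return data
"""
    # Persist the intermediate result so another tool, or a later run,
    # can start from it instead of recomputing it.
    with open("callgraph.json", "w") as f:
        json.dump(extract_call_graph(example), f, indent=2)
```

A real shared artifact would carry much richer information (data- and control-flow facts, certificates, proof hints), but even a simple serialized result shows how incremental and inter-tool analysis might be bootstrapped.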

SUBMISSIONS 

Papers should be from 1 to 8 pages long. Papers over eight pages will not be reviewed. Papers should clearly identify their novel contributions.

Discussion session proposals should give a session title and name a moderator and at least two other participants. The proposal should clearly identify the topic or question for discussion.

Submit papers and proposals electronically in PDF or ASCII text by 3 September 2007 to Wendy Havens <wendy.havens [at] nist.gov>. (We will need ACM copyright forms.)

We will notify submitters of acceptance by 3 October 2007.

ATTENDANCE and REGISTRATION 

You do not have to have an accepted paper or discussion proposal to attend. We invite those who develop, use, purchase, or review software security evaluation tools. Academics working on semi- or fully automated tools to review or assess the security properties of software are especially welcome. We encourage participation from researchers, students, developers, and users in industry, government, and universities.

You must register for at least one day of SIGAda 2007; there is no additional charge to attend the summit. One-day registration costs only $25 for full-time students.

On-line registration is now closed. You can register on-site.

PROGRAM 

Thursday, 8 November

12:30 PM: Program Presentation and Charge to Attendees - Paul E. Black

12:45 : Static Analysis for Improving Secure Software Development at Motorola - R Krishnan, Margaret Nadworny, and Nishil Bharill

1:10 : Discussion: most urgently-needed capabilities in static analysis

1:40 : Evaluation of Static Source Code Analyzers for Real-Time Embedded Software Development - Redge Bartholomew

2:05 : Discussion: greatest obstacles in static analysis

2:35 : Break

2:50 : Common Weakness Enumeration (CWE) Status Update - Robert Martin and Sean Barnum

3:15 : Discussion: possible approaches to overcome obstacles

Moderator: Redge Bartholomew

3:45 : Panel: Obfuscation vs. Analysis - Who Will Win?

David J. Chaboya AFRL
Stacy Prowell CERT

4:30 : New Technology Demonstration Fair

FindBugs
FX, static analysis of x86 executables

Friday, 9 November

8:30 AM: Discussion: Static Analysis at Other Levels

Source analysis dominates. Where are the requirements or design analyzers? Some say binary analysis is intractable. Others say they have it. Everyone needs it.
Moderator: Michael Kass

9:00 : Keynote: Bill Pugh

Judging the Value of Static Analysis (abstract below)

10:00 : Break

10:15 : A Practical Approach to Formal Software Verification by Static Analysis - Arnaud Venet (to be given by Henny Sipma)

10:40 : Discussion: inter-tool information sharing

What information would help static analysis? What information can an analyzer provide?

11:10 : Logical Foundation for Static Analysis: Application to Binary Static Analysis for Security - Hassen Saidi

11:35 : Wrap up discussion: needs, obstacles, and approaches

What should we recommend?

PUBLICATION 

Accepted papers and discussion notes will be published in Ada Letters.

IMPORTANT DATES 

23 August 2007 - on-line registration opens
  3 September 2007 - Paper submission deadline
  3 October 2007 - Author notification
22 October 2007 - Final publication-ready copy due
  1 November 2007 - Last date to register for Early Conference Rate
8 & 9 November 2007 - Summit

ORGANIZERS 

Chair 

Paul E. Black, NIST, paul.black [at] nist.gov

Program Committee 

Paul Anderson GrammaTech
Thomas Ball Microsoft Research
Redge Bartholomew Rockwell Collins
Ira Baxter Semantic Designs
Rogier Boon ITsec Security Services
Chandrasekhar Boyapati U. Michigan
Rod Chapman Praxis High Integrity Systems
Brian Chess Fortify
Ben Chelf Coverity
Ron Cytron Washington University
Jack Danahy Ounce Labs
Mary Ann Davidson Oracle
Dawson Engler Stanford
Gene Fredriksen Burton Group
John Hatcliff KSU
Chris Hote The MathWorks, Code Verification Products
Susan Horwitz U. Wisconsin
Joe Kiniry University College Dublin
Nikolai Mansourov KDM Analytics
W. Bradley Martin NSA
James W. Moore The MITRE Corporation
David Naumann Stevens Institute
Bill Pugh U. Maryland
Daniel J. Quinlan LLNL
Martin Rinard MIT
Robby KSU
Radu Rugina Cornell
Mark Saaltink Communications Security Establishment
Koushik Sen Berkeley
Sergei Sokolov Parasoft
Arnaud Venet Kestrel Technology
Bob Warne CESG

Abstract of "Judging the Value of Static Analysis"

Bill Pugh

Keynote address at Static Analysis Summit II, 8 & 9 November 2007

There is a lot of work on developing new static analysis techniques that can find errors in software. However, efforts to judge or measure the effectiveness of static analysis at improving software security or quality are much less mature. Developers and managers want to know whether static analysis techniques are worth the investment in time and money, or whether those resources could be better spent elsewhere. Government agencies are interested in developing benchmarks or checklists that will let them determine which static analysis tools are approved, so that they can restrict government procurement to approved tools and to software checked with approved tools. Static analysis tool vendors are exceptionally secretive about the capabilities of their tools.

Unfortunately, there are no easy solutions. Getting from static analysis results to improved security and quality is a complicated process that isn't well understood or documented. In other areas, such as parallel programming, benchmarks have proven corrosive to good science, focusing immense resources on narrow problems that didn't address the real problems of the field. I believe that developing standard benchmarks for static analysis would have much the same impact.

I will talk about the problems associated with evaluating static analysis, and some ways that the field might be able to improve the situation.
