Static Analysis Tool Exposition (SATE 2010) Workshop
Looking for Needles in BIG Haystacks
Software must be developed to have high quality: quality cannot be "tested in". However, auditors, certifiers, and others must assess the quality of software they receive. "Black-box" software testing cannot realistically find maliciously implanted Trojan horses or subtle errors that have many preconditions. For maximum reliability and assurance, static analysis must be used in addition to good development and testing practices. Static analyzers are quite capable and are developing quickly, yet developers, auditors, and examiners could use far more capability.
The goals of the Static Analysis Tool Exposition (SATE) 2010 are to:
- Enable empirical research based on large test sets
- Encourage improvement of tools
- Speed adoption of tools by objectively demonstrating their use on real software
This workshop has two goals. The first is to gather participants and organizers of SATE to share experiences, report interesting observations, and discuss lessons learned. The workshop is also an opportunity for attendees to help shape the next SATE in 2011.
The second goal is to convene researchers, tool developers, and government and industrial users of software assurance tools to define obstacles to urgently-needed software assurance capabilities and identify engineering or research approaches to overcome them. We solicit contributions describing basic research, applications, experience, or proposals relevant to software assurance tools, techniques, and their evaluation. Questions and topics of interest include but are not limited to:
- Contribution of static analysis to software security assurance
- Issues in applying static analysis to binaries
- System assurance at the design or requirements level
- Integration of, or tradeoffs between, static and dynamic analysis
- Issues in scaling static analysis to deal with large systems
- Flaw catching vs. sound analysis
- Benchmarks or reference datasets
- Formal descriptions of weaknesses and vulnerabilities in the CWE
- User experience drawing useful lessons or comparisons
- Synergies of pre- and post-production assurance
- Case studies on real applications
- Temporal and inter-tool information sharing
Who Should Attend?
Those who develop, use, purchase, or review software assurance tools should attend. Academicians who are working in the area of semi- or completely automated tools to review or assess the security properties of software are especially welcome. We encourage participation from researchers, students, developers, and assurance tool users in industry, government, and universities.
- 1 October: Workshop
Registration
Registration is closed.
NIST provides visitor information, including local hotels, airports, and directions.
The program will consist of presentations by participants in and organizers of the 2010 Static Analysis Tool Exposition.
This is the final program.
8:30 AM Welcome to SATE 2010 - Paul E. Black, NIST, SATE organizer
8:40 SATE 2010 background, Vadim Okun, NIST, SATE organizer
9:10 Bringing Static Analysis to the Masses, Tucker Taft, SofCheck, Inc., SATE participant
9:35 Running Goanna for SATE - What we found, how and why, Ansgar Fehnker, Red Lizard, SATE participant
10:00 Coverity Analysis: Improving Quality in the Software Supply Chain, Peter Henriksen, Coverity, SATE participant
11:00 Choosing SATE Test Cases Based on CVEs, Sue Wang, SATE organizer
11:30 Bugs that Matter - Static Analysis True Positives and False Negatives, Paul Anderson, GrammaTech, SATE participant
11:55 Static Analysis Software Assurance Tools and SATE 2010, Nat Hillary, LDRA, SATE participant
12:20 PM Lunch
1:30 Observations From Analysis, Aurelien Delaitre, NIST, SATE organizer
1:50 Improving Static Analysis Results Accuracy, Chris Wysopal, Veracode, SATE participant
2:15 Our Sparrow Experience with Abstract Interpretation and Impure Catalysts, Kwangkeun Yi, Seoul National University, SATE participant
3:00 Discussion session: planning SATE 2011, Paul E. Black, NIST, SATE organizer
4:30 The use of machine learning with signal- and NLP processing of source code to detect and classify vulnerabilities and weaknesses with MARFCAT, Serguei Mokhov, Concordia, SATE participant
4:55 Closing remarks - Paul E. Black, NIST, SATE organizer
Paul E. Black (NIST) firstname.lastname@example.org
Program Planning Committee
Redge Bartholomew (Rockwell-Collins)
Steve Christey (MITRE)
Romain Gaucher (Cigital)
Raoul Jetley (FDA)
Scott Kagan (Lockheed-Martin)
Michael Lowry (NASA)
Jaime Merced (DoD)
Frédéric Painchaud (DRDC Canada)
Ajoy Kumar (DTCC)