What We Want of a Taxonomy
The SAMATE project needs a taxonomy, or classification, of software assurance (SwA) tools and techniques to
- consistently group and discuss SwA tools and techniques,
- identify gaps and research needs, and
- help decide where to put effort.
What Do We Mean by Tool, Function, etc.?
Here we use "tool" to mean a single distinct function or (manual) technique. A "function" is something that produces a (software assurance) result for the user. A Source Code Security Analysis tool looking for weaknesses is a function; a parser is not (unless it, too, reports flaws while parsing).
- Although we may speak of "a tool", a single program may perform many different functions. Thus one program may be classified under several tool classes.
- This taxonomy encompasses processes and manual techniques, too. For instance, quality is best designed in at the start, and cannot be effectively "tested in" later. Correct-by-construction may be far better than debugging later. Manual code reviews have a place, too.
- This taxonomy doesn't classify underlying algorithms: it doesn't matter to tool testing how a checker catches, say, buffer overflows; only if it does. (Of course, different processing techniques may make a huge difference in the accuracy of results or scalability.) But classifying a tool shouldn't depend on possibly proprietary information about how it works.
Classification Scheme Desiderata
As far as possible, a classification scheme should be
- Unambiguous: one unique classification for each tool
- Objective: classify by mechanically comparing attributes
- Orthogonal: facets are independent, so few combinations are nonsensical classes
- Comprehensive: few "other" or "unclassified" entries
- Easy to use: classes correspond to common notions. Also, one can find relevant classes quickly
- Usefully distinctive: doesn't combine intuitively different tools or split tools that intuitively belong together. Also, one specification covers all tools in a class, and tools in the same class work on the same kinds of flaws.
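As an illustration, two of these desiderata can be checked mechanically over a set of trial assignments. This is only a sketch; the function names, tool names, and class labels below are hypothetical, not part of the taxonomy.

```python
# Mechanical checks for two desiderata: "Unambiguous" (each distinct
# function gets exactly one class) and "Comprehensive" (few catch-all
# entries). All names here are invented for illustration.

def scheme_report(assignments):
    """assignments maps a distinct function to the set of classes
    the scheme puts it in."""
    ambiguous = [f for f, classes in assignments.items() if len(classes) != 1]
    catch_all = [f for f, classes in assignments.items()
                 if classes & {"other", "unclassified"}]
    return {"ambiguous": ambiguous, "catch_all": catch_all}

report = scheme_report({
    "source code weakness finder": {"static analysis tool"},
    "style-and-flaw checker": {"static analysis tool", "style checker"},  # ambiguous
    "novel gadget": {"other"},  # lands in a catch-all class
})
```

A scheme that yields many entries in either list fails the corresponding desideratum.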
Questions a Taxonomy Should Address
What questions need to be answered to complete the SA Tool/Function taxonomy?
Regarding Tool Classes
- What classes of tools are currently used to identify potential vulnerabilities in applications?
- What is the order of importance of those tools (which have the highest impact in identifying or precluding application vulnerabilities)?
- What tool classes are most mature?
- What tool classes are most common?
- What are the gaps in capabilities amongst tools of the same class?
- What are the gaps in capabilities for a tool class in general?
- What classes of tools are missing from the taxonomy of SA tools below?
Regarding Tool Capabilities
- What are the capabilities that define a particular tool class?
- What capabilities are required for a particular class of tool?
- What is the domain for each capability?
- How would each capability be described in a functional specification for that SwA tool?
One validation of a taxonomy is to try classifying actual tools; this is done in the Tool Survey.
A Taxonomy of Tools
This is a proposed taxonomy. We welcome your comments and suggestions.
This taxonomy is a faceted classification, possibly with further hierarchical organization within each class. There are four facets: life cycle process, automation level, approach, and viewpoint.
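One way to make the four facets concrete is as a record type, so that each function carries one value per facet. A minimal sketch in Python; the identifier names are hypothetical, and the enum values paraphrase the facet descriptions below.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical names; the taxonomy does not prescribe identifiers.
class LifeCycle(Enum):
    REQUIREMENTS = "requirements"
    DESIGN = "design"
    CONSTRUCTION = "construction"
    TESTING = "testing"
    OPERATION = "operation"
    MAINTENANCE = "maintenance"

class Automation(Enum):
    MANUAL = 0          # e.g., code review
    ANALYSIS_AID = 1    # e.g., call graph extractor
    SEMI_AUTOMATED = 2  # automated results, manual interpretation
    AUTOMATIC = 3       # e.g., firewall

class Approach(Enum):
    PREVENT = "make flaws impossible"
    FIND = "find flaws"
    MITIGATE = "reduce or eliminate flaw impact"
    REACT = "take actions upon an event"
    REPORT = "report information"

class Viewpoint(Enum):
    EXTERNAL = "black box"
    INTERNAL = "white box"

@dataclass(frozen=True)
class Classification:
    life_cycle: LifeCycle
    automation: Automation
    approach: Approach
    viewpoint: Viewpoint

# A single program performing several functions would carry
# several such classifications, one per function.
source_code_scanner = Classification(
    LifeCycle.CONSTRUCTION, Automation.SEMI_AUTOMATED,
    Approach.FIND, Viewpoint.INTERNAL)
```

Each facet could further be refined hierarchically without changing this outer structure.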
Life Cycle Process or Activity
Primary tools and techniques are used at different times in the software life cycle. Support tools and techniques, such as management and configuration tools, apply across the life cycle. This is a unification of IEEE/EIA 12207 [4], Kornecki and Zalewski [7], and SWEBOK [8].
- Requirements [4 5.3.2 & 5.3.4] [8 1.1]
- Design [4 5.3.3, 5.3.5, & 5.3.6] [8 1.2]
- Construction [4 5.3.7] [8 1.3]
- Testing [4 5.3.7 - 5.3.11] [8 1.4 & 1.9]
- Operation [4 5.4]
- Maintenance [4 5.5] [8 1.5]
- Configuration Management [4 6.2] [8 1.6]
- Software Engineering Management [4 7.1] [8 1.7]
- Software Engineering Process [4 6.3 & 7.3] [8 1.8]
Another classification or organization to be unified or considered is in NIST 800-217A.
SWEBOK [8] lists other categories: Miscellaneous Tools and Software Engineering Methods (Heuristic, Formal, and Prototyping).
Automation Level
How much does the tool do by itself, and how much does the human need to do?
0. Manual procedure
- e.g., code review
1. Analysis aid
- e.g., call graph extractor
2. Automated results, manual interpretation
- e.g., static analyzer for potential flaws or intrusion detectors
3. Automatic
- e.g., firewall
Approach
What approach does this tool or technique take to software assurance?
- proactively make flaws impossible, e.g., correct by construction
- find flaws, e.g., checkers, testers
- reduce or eliminate flaw impact, e.g., security kernel, MLS
- take actions upon an event
- report information, e.g., complexity metrics or call graphs
Viewpoint
Can we see or "poke at" the internals? External tools do not have access to application software code or configuration and audit data. Internal tools do.
- External (black box)
- e.g., acceptance of COTS packages or Web site penetration tester
- Internal (white box)
- e.g., code scanners
- e.g., wrappers, execution monitoring
Other Useful Data
Tools would not be classified by these (one wouldn't separate commercial from academic tools functionally), but such information would be useful.
Assessment vs. Development
"DO-178B differentiates between verification tools that cannot introduce errors but may fail to detect them and development tools whose output is part of airborne software and thus can introduce errors." [7, page 19] (Emphasis in the original.)
Who fixes it? Can I get it?
- used within a company, either as a service or on its own products
- $ (nominal, e.g., up to about $17)
- $$ (up to a few hundred dollars)
- $$$ (significant, thousands of dollars)
Cost of use is another, related item.
What does it run on? Linux, Windows, Solaris, ...
What is the target language or format? C++, Java, bytecode, UML, ...
How well does it work? Number of bugs found. Number of false alarms. Tool pedigree. Maturity of tool. Performance on benchmarks.
How long does it take per unit (LOC, module, requirement)? Is it quick enough to run after every edit? every night? every month? For manual methods, how often are they performed, say, how often are reviews held? Is it scalable?
Computational complexity might be a separate item, or a way of quantifying run time.
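For instance, run time on a full codebase can be extrapolated from two smaller measurements by fitting a power law T(n) ≈ c·nᵏ, which also estimates the tool's effective complexity exponent k. A rough sketch; the measurement figures below are invented.

```python
import math

def power_law_fit(n1, t1, n2, t2):
    """Estimate T(n) = c * n**k from two (size, seconds) measurements."""
    k = math.log(t2 / t1) / math.log(n2 / n1)  # effective complexity exponent
    c = t1 / n1**k
    return c, k

def predicted_seconds(c, k, n):
    return c * n**k

# Hypothetical measurements: 4 s on 10 kLOC, 32 s on 40 kLOC.
c, k = power_law_fit(10_000, 4.0, 40_000, 32.0)   # k = 1.5 here
full = predicted_seconds(c, k, 1_000_000)          # extrapolate to 1 MLOC
overnight = full <= 8 * 3600                       # fits a nightly window?
```

Two points give only a crude fit, but they are often enough to decide whether a tool belongs in the per-edit, nightly, or monthly bucket above.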
NASA Automated Requirement Measurement Tools.
DISA Application Security Assessment Tool Survey, V3.0, July 29, 2004 (to be published as a Security Technical Implementation Guide, or STIG).
CLASP Reference Guide, Volume 1.1 Training Manual, John Viega, Secure Software Inc., 2005.
[4] IEEE/EIA Std 12207.0-1996, Software life cycle processes.
[7] Andrew J. Kornecki and Janusz Zalewski, The Qualification of Software Development Tools From the DO-178B Certification Perspective, CrossTalk, pages 19-23, April 2006.
[8] Guide to the SWEBOK, Chapter 10. Accessed January 2015.