This page collects documents from test suites and miscellaneous resources related to SARD. These documents were produced by other entities; NIST assumes no responsibility whatsoever for their use by other parties, and makes no guarantees, expressed or implied, about their quality, reliability, or any other characteristic.
Test case version numbers are inspired by semantic versioning. Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make a change in the source code that has an impact on the abstract syntax tree,
- MINOR version when you make a change in the SARIF file, and
- PATCH version when you make any other kind of change: documentation, compilation instructions, Dockerfile, etc.
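The bump rules above can be sketched as a small helper. This is an illustrative function, not part of SARD tooling; the `bump` name and the `"source"`/`"sarif"` change labels are assumptions, and resetting the lower components on a bump follows the usual semantic-versioning convention, which the rules above do not state explicitly.

```python
def bump(version: str, change: str) -> str:
    """Apply the SARD-style versioning rules to MAJOR.MINOR.PATCH.

    change: "source" for a change visible in the abstract syntax tree,
            "sarif" for a change in the SARIF file,
            anything else for docs, compilation instructions, Dockerfile, etc.
    Lower components are reset on a bump, per semver convention (an assumption).
    """
    major, minor, patch = map(int, version.split("."))
    if change == "source":   # impacts the abstract syntax tree -> MAJOR
        return f"{major + 1}.0.0"
    if change == "sarif":    # change in the SARIF file -> MINOR
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # everything else -> PATCH

print(bump("1.2.3", "source"))  # → 2.0.0
print(bump("1.2.3", "sarif"))   # → 1.3.0
print(bump("1.2.3", "docs"))    # → 1.2.4
```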
The search allows you to find test cases with specific features. To narrow your search, you can define filters. A filter is composed of a key and a value:
| Filter | Example |
|---|---|
| `q: ANY WORD` | `q: sql injection` matches test cases mentioning "sql injection". |
| `author: AUTHOR` | `author: MIT` matches test cases created by MIT. |
| `application: CPE` | `application: cpe:2.3:a:ffmpeg:ffmpeg:1.2.2:*:*:*:*:*:*:*` matches test cases using FFmpeg 1.2.2. The value must be a Common Platform Enumeration. |
| `operating system: CPE` | `operating system: cpe:2.3:o:microsoft:windows:*:*:*:*:*:*:*:*` matches test cases created for the Windows platform. The value must be a Common Platform Enumeration. |
| `cve: CVE` | `cve: CVE-2014-0160` matches test cases related to the given CVE. |
| `language: LANGUAGE` | `language: java` matches test cases written in Java. |
| `state: STATE` | `state: good` matches good test cases; more details on this page. |
| `status: STATUS` | `status: deprecated` matches deprecated test cases; more details on this page. |
| `flaw: CWE` | `flaw: CWE-121` matches test cases containing CWE-121. |
| `file: BASENAME` | `file: CWE121_Stack_Based_Buffer_Overflow__CWE129_large_14.c` matches test cases containing the given file. The file name must be the basename (without any leading directory components). |
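Since filters narrow a search, several can be useful in a single query; whether they are combined exactly like this is an assumption, so treat the following as a hypothetical example built from the filters above:

```
language: c flaw: CWE-121 q: buffer overflow
```

Such a query would match C test cases containing CWE-121 that mention "buffer overflow".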
IARPA STONESOUP Documents
About IARPA STONESOUP
These documents describe the Securely Taking On New Executable Software of Uncertain Provenance (STONESOUP) C and Java test cases that were created by the Intelligence Advanced Research Projects Activity (IARPA) specifically for use in testing static analysis tools. The documents are intended for anyone who wishes to use the test cases for their own testing purposes, or who would like a greater understanding of the test cases' design. STONESOUP Phase 3 test cases were released to NIST as a virtual machine. Phase 1 is also available for download.
Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA or the U.S. Government.
- Overview.pdf Gives the big picture of the IARPA STONESOUP Program. 1 page
- Test Case Creation Guide.pdf Describes how the test cases are organized, including the naming convention, file structure, and metadata documentation. 39 pages
- Weaknesses Documentation.pdf A number of software snippets were developed to provide discrete tests of specific weaknesses while performing no further meaningful processing. These weakness variants form the basis from which the STONESOUP test cases are generated. 673 pages
- TEXAS User Guide.pdf The Test and Evaluation eXecution and Analysis System (TEXAS) is designed and developed to test a Performer technology's ability to detect and mitigate software vulnerabilities and exploits through static analysis and run-time countermeasures. 23 pages
- Communication API Guide.pdf The Test and Evaluation eXecution and Analysis System (TEXAS) is designed and developed to test a Performer technology's ability to detect and mitigate software vulnerabilities and exploits through static analysis and run-time countermeasures. 80 pages
- System Design Document.pdf The scope of this document is to cover the system design of the "Test and Evaluation, eXecution, Analysis System" (TEXAS) for the STONESOUP Phase 3 Test and Evaluation activity. 39 pages
- Test and Evaluation Phase 3 Final Report.pdf This document presents the final main report of the STONESOUP project in detail. Test and evaluation were performed by Columbia University, GrammaTech, and Kestrel Institute. 493 pages
- Kestrel Institute Report.pdf (Ref: AFRL-RY-WP-TR-2015-0019): The research team from Kestrel Institute, Kestrel Technology, the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and Dynamic Object Language Labs Inc (DOLL) produced Vulnerabilities In Bytecode Removed by Analysis, Nuanced Confinement and Diversification (VIBRANCE). The VIBRANCE tool starts with a vulnerable Java application and automatically hardens it against SQL injection, OS command injection, file path traversal, numeric errors, denial of service, and other attacks. For a large class of attacks, the protection added by VIBRANCE blocks the attack and safely continues execution. 162 pages
- Grammatech Report.pdf (Ref: AFRL-RY-WP-TR-2015-0017): Describes the results of the research and development of the Preventing Exploits Against Software of Uncertain Provenance (PEASOUP), a technology that enables the safe execution of software executables. 257 pages
- MINESTRONE Report.pdf (Ref: AFRL-RY-WP-TR-2015-0002): MINESTRONE is a novel architecture that integrates static analysis, dynamic confinement, and code diversification techniques to enable the identification, mitigation and containment of a large class of software vulnerabilities. 58 pages
Juliet Test Suite Documents
These documents describe the Juliet Test Suite C/C++ and Java test cases that were created by the NSA's Center for Assured Software (CAS) specifically for use in testing static analysis tools. They are intended for anyone who wishes to use the test cases for their own testing purposes, or who would like a greater understanding of how the test cases were created.
The Juliet Test Suite can be downloaded as standalone files for the v1.3 test cases and for the v1.2 and v1.1 test cases. Juliet test cases are also individually available in the SARD, as described in the documents below:
- Release Log for Juliet 1.3 1 page
- Juliet 1.3 Test Suite: Changes From 1.2 36 pages
- Juliet v1.2 for C and C++ cases.pdf For C/C++ 41 pages
- Juliet v1.2 for Java cases.pdf For Java 43 pages
- Juliet v1.0 for C and C++ cases.pdf For C/C++ 38 pages
- Juliet v1.0 for Java cases.pdf For Java 39 pages