ITC'99 Benchmark Requirements for Third Generation Benchmark Circuits for ATPG and DFT

Scott Davidson

Rev 1.1 - 3/6/98


Purpose

The goal of a third generation of ATPG benchmarks is to provide a set of realistic example circuits that stress current ATPG algorithms and provide the impetus for the development of new ones. More motivation can be found in the Last Byte column by Professor Fujiwara, IEEE Design & Test, Jan.-March 1998.

Briefly, a successful set of benchmarks should break current ATPG and DFT tools, or require excessive effort to run. Only by breaking existing tools can the benchmarks motivate the development of better ones. We wish to encourage the development of fundamental techniques for handling real circuit structures, even if tricks and heuristics for dealing with them exist today.

Characteristics of the Benchmarks

There should be two levels of benchmarks. The first level consists of circuits designed to test the performance of ATPGs on a single difficult circuit feature; the set of features is enumerated below. The second level is designed to test the performance of ATPGs on combinations of features, which more closely resemble real circuits. There would also be a simple circuit with one instance of each feature hooked up in parallel, to test translation and synthesis.

The ISCAS'85 and ISCAS'89 benchmarks were originally presented in a simple flat netlist format. A third generation of benchmarks must be available as hierarchical RTL; either Verilog or VHDL is acceptable. The circuits will also be distributed as hierarchical EDIF netlists, but it is acceptable for the circuits to be resynthesized into any desired netlist format.
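
As an illustration, the hypothetical Verilog fragment below sketches the kind of hierarchical RTL submission intended; the module and signal names are invented for this sketch and do not correspond to any planned benchmark.

    module counter_cell (clk, rst, en, count);
      input        clk, rst, en;
      output [7:0] count;
      reg    [7:0] count;

      // Simple synthesizable behavior: synchronous reset, conditional increment.
      always @(posedge clk)
        if (rst)     count <= 8'h00;
        else if (en) count <= count + 1;
    endmodule

    module bench_top (clk, rst, en, c0, c1);
      input        clk, rst;
      input  [1:0] en;
      output [7:0] c0, c1;

      // Hierarchy is preserved: two named instances of the same submodule,
      // rather than a single flattened gate-level netlist.
      counter_cell u0 (.clk(clk), .rst(rst), .en(en[0]), .count(c0));
      counter_cell u1 (.clk(clk), .rst(rst), .en(en[1]), .count(c1));
    endmodule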

The circuits should be presented without any DFT inserted, except boundary scan in some cases.

Circuit features that should be included are (preliminary list from D&T column):

There should also be some circuits not easily representable at the RTL level, representing the requirements of the microprocessor and DSP design communities. These features might include:

It is not clear to me how to represent such structures without going to the transistor level.

Some benchmarks should represent circuits with tight timing constraints, to measure the ability of a DFT solution to work around critical paths.

The designs should range in size from a few hundred or a few thousand gates (to test basic features) up to one million gates. The largest such circuits could have replicated modules, to reduce the task of understanding the design while still testing how well new DFT and ATPG techniques scale.
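
To make the replication idea concrete, here is a hypothetical Verilog sketch (module names and widths are invented for this example) of how one small core could be instantiated repeatedly to grow a benchmark toward the largest sizes without adding new design content to understand.

    module core_block (clk, rst, d, q);
      input         clk, rst;
      input  [15:0] d;
      output [15:0] q;
      reg    [15:0] q;

      // Placeholder logic; a real benchmark would use an actual core here.
      always @(posedge clk)
        if (rst) q <= 16'h0000;
        else     q <= d ^ {d[14:0], d[15]};
    endmodule

    module big_bench (clk, rst, in_bus, out_bus);
      input         clk, rst;
      input  [15:0] in_bus;
      output [15:0] out_bus;
      wire   [15:0] stage0, stage1, stage2;

      // Identical copies chained in series; repeating this pattern (by hand
      // or by a generator script) scales the gate count while keeping the
      // design easy to understand.
      core_block u0 (.clk(clk), .rst(rst), .d(in_bus), .q(stage0));
      core_block u1 (.clk(clk), .rst(rst), .d(stage0), .q(stage1));
      core_block u2 (.clk(clk), .rst(rst), .d(stage1), .q(stage2));
      core_block u3 (.clk(clk), .rst(rst), .d(stage2), .q(out_bus));
    endmodule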

Acquiring the Designs

Available benchmarks with the requisite features should be used first. However, most of the realistic circuits should come from industry. Many companies maintain a library of old designs that are used to test products from DFT vendors. In some cases these could be used as is; in others they would have to be modified, perhaps with pieces extracted, to serve as benchmarks. It would be helpful to have a tool that could be delivered to benchmark suppliers to change signal and module names to something innocuous.

If designs directly available from industry are not adequate, benchmark development would be required. The first choice would be simple modification of existing circuits, for instance changing a RAM to a CAM, or merging several circuits into one larger circuit, perhaps using the smaller circuits as cores. Real benchmark development from scratch would probably be beyond the scope of this effort, unless it could be assigned as a project for a design class.

Documenting and Characterizing the Designs

The ISCAS'89 benchmarks were flawed in that several of the circuits had large numbers of untestable faults, and one was uninitializable. Some of this came from the process used to create the circuits, which in some cases seems to have involved ripping out memories.

Third generation benchmark circuits should be more realistic. They should have passed RTL design rule checks and should have no hanging nodes. Design verification vectors, if available, would be very helpful in verifying the correctness of DFT insertion and synthesis.

We cannot expect to get full circuit documentation; however, protocols for embedded memories and cores are essential for modeling them and for generating valid tests. These protocols should be kept fairly simple.
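
As a hypothetical illustration of the level of detail that would suffice, the Verilog sketch below describes a single-port synchronous RAM protocol; the port names, data width, and one-cycle read latency are assumptions for this example, not a required interface.

    // Protocol summary: all signals are sampled on the rising edge of clk.
    // When ce=1 and we=1, din is written to mem[addr]; when ce=1, the data
    // at mem[addr] appears on dout one cycle later (old data on a
    // simultaneous write). When ce=0, the memory holds its state and dout
    // is unchanged.
    module embedded_ram (clk, ce, we, addr, din, dout);
      input        clk, ce, we;
      input  [7:0] addr;
      input  [7:0] din;
      output [7:0] dout;
      reg    [7:0] dout;
      reg    [7:0] mem [0:255];

      always @(posedge clk)
        if (ce) begin
          if (we) mem[addr] <= din;   // synchronous write
          dout <= mem[addr];          // registered read
        end
    endmodule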

Publishing the Benchmarks and Presenting Results

Since the Benchmarking center run by Dr. Franc Brglez has all the facilities required for distributing benchmarks, I hope that it can handle the distribution. We would have to find resources to support the verification of the submitted benchmarks.

Ideally, the benchmarks should be ready for publication in early 1999. I propose that the first results be presented in a special session of the 1999 International Test Conference. Both DFT and ATPG algorithms can be presented in this session. Presenters should be signed up by the second program committee meeting in April/May.

Please send comments on the benchmarks and suggestions for other features to me at the email address above. If you have circuits that might be benchmark candidates, please let me know as well.

