Requirements, Goals and Procedures for ITC'99 Benchmark Circuits
Scott Davidson
10/29/98

From the benchmark meeting held at ITC on 10/20, it became clear to me that I have not done a good enough job describing the goals of the benchmark circuits. Briefly, each circuit should demonstrate one or more problems for DFT and ATPG tools. This is different from the previous benchmarks, and means that results will be presented differently. DFT tools from universities will handle some but not all of these problems at first, eventually evolving into full solutions.

PROBLEMS TO SOLVE
=================

The ISCAS'85 and '89 benchmarks were designed to demonstrate the feasibility of combinational and sequential ATPG, and to serve as a means for comparing the effectiveness of test generation algorithms. Today the feasibility of DFT and ATPG is not in doubt. Commercial tools can successfully handle the very largest designs with real features. However, in some cases they require significant amounts of DFT and manual work to do so, and they work at the gate level. Techniques and algorithms are proprietary and are rarely published. There appears to be little theory behind the way some things are done in commercial ATPGs, since there is little commercial incentive to fund research that handles these things only slightly better.

The goal of the ITC'99 circuits should be to drive research in areas of ATPG and DFT where there are no good solutions, or where the solutions are ad hoc. This means that the circuits should be selected to demonstrate specific problems, or to demonstrate many interacting problems.

Here is a list of the problems that I see, based on my experience and discussions with many others. They are not in order of importance.

o ATPG and DFT for circuits at the RTL level.
o ATPG and DFT for circuits at the Behavioral level.
o ATPG for circuits with embedded memories.
  - Effective memory modeling techniques.
  - Sequential ATPG for circuits with memories without wrappers.
  - Coverage of shadow logic between memory and scan chain.
  (This item does not consider faults within the memories, but only improving ATPG for the logic around the memories.)
o ATPG and DFT for tristate busses.
  - Proving busses non-conflicting.
  - Constraint generation.
  - DFT to prevent contention during scan shift.
o ATPG and DFT for circuits with I/Oputs.
o ATPG for circuits with boundary scan.
  - How to test the circuitry that controls the test?
  - Can an ATPG automatically generate shift control sequences?
  - ATPG for "messy" TAP logic.
o DFT for non-traditional fault models.
  - Bridging and open faults.
  - High-level fault models.
o ATPG and DFT for circuits with complex clocking.
  - Multiple clock domains.
  - Small clock domains (one or two FFs).
  - Internally generated clocks.
  - PLLs.
  - Clock gating for low power.
o ATPG and DFT for dynamic logic and custom designs.
o ATPG and DFT for circuits with critical path timing constraints.
  - How to work around don't-touch areas.
o ATPG and DFT for embedded cores.
o Efficient DFT for microprocessor designs.
o Efficient DFT for DSP designs.

REQUIREMENTS FOR NEW BENCHMARKS
===============================

Each benchmark circuit should contain at least one of the following features:

o Embedded memory.
o Embedded tristate bus.
o I/O buffers, including I/Oputs.
o Inserted boundary scan.
o RTL level description.
o Hierarchical circuit description.
o Multiple clock domains.
o Gated clocks.
o Behavioral level description.
o Custom logic.
o List of path delay faults.
o List of bridging faults.
o List of open faults.
o Embedded cores.

Most of the designs should contain several of these features. These designs should be large, in the hundreds of thousands of gates at least, though a few smaller designs should be included to test translators and to compare results with previously published results.

ROADMAP
=======

Not all of the above features are likely to be available in the first set of benchmarks.
To begin, we need at least the following:

o Benchmarks with both RTL and netlists, of reasonable size (over 100K gates).
o Benchmarks with embedded memories.
o Benchmarks with embedded tristate busses.
o Benchmarks with realistic I/O buffers.
o Benchmarks with boundary scan added.
o Benchmarks with multiple clock domains and gated clocks.
o Benchmarks with an embedded core, possibly created by combining the benchmarks above.

After this, we can extend the benchmarks both by adding new ones and by laying out some of the existing ones in order to collect a realistic set of defects. We might lay out both designs without DFT and designs with DFT, so that ATPGs that require DFT to generate tests at this level can work. At some point analog blocks can be added.

USING THE BENCHMARKS
====================

We are not expecting the preliminary users of the benchmarks to be able to run every one of them. Therefore, the first results will not be complete, but they will be a baseline for future work in the various problem areas addressed by the benchmarks.

The advances in DFT and ATPG that could be demonstrated using the benchmarks are:

o DFT insertion at the RTL or Behavioral level.
o ATPG at the RTL or Behavioral level, with or without DFT.
o ATPG and DFT solutions for complex circuit structures. These solutions must either be more effective than those of commercial ATPG tools, or have a theoretical underpinning that can eventually lead to more successful solutions.
o Non-DFT ATPG solutions for large, complex designs. It has been claimed that working at the RTL level is the way to do this; now is the time to prove this claim.
o Fault simulation at the RTL or Behavioral levels, and validation of the fault coverage at the gate level. To aid in this, some of the designs should have functional vectors.
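To make the last item concrete, below is a minimal sketch of gate-level serial stuck-at fault simulation, the step against which RTL- or Behavioral-level fault coverage would be validated. The three-gate netlist, net names, and vectors are invented for illustration only; the benchmark circuits will of course be orders of magnitude larger, and real fault simulators use far more efficient (parallel, event-driven) techniques.

```python
# Illustrative serial stuck-at fault simulator on an invented 3-gate netlist.
# Each gate is (output_net, function, input_nets); list is in topological order.
NETLIST = [
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("b", "c")),
    ("z",  "AND", ("n1", "n2")),
]
INPUTS = ["a", "b", "c"]
OUTPUTS = ["z"]

GATES = {
    "AND": lambda ins: all(ins),
    "OR":  lambda ins: any(ins),
    "NOT": lambda ins: not ins[0],
}

def simulate(vector, fault=None):
    """Evaluate the netlist for one input vector.
    fault is (net, stuck_value) or None for the fault-free circuit."""
    values = dict(zip(INPUTS, vector))
    if fault and fault[0] in values:        # fault on a primary input
        values[fault[0]] = fault[1]
    for out, func, ins in NETLIST:
        values[out] = GATES[func]([values[i] for i in ins])
        if fault and fault[0] == out:       # fault on a gate output
            values[out] = fault[1]
    return tuple(values[o] for o in OUTPUTS)

def fault_coverage(vectors):
    """Fraction of single stuck-at faults detected by the vector set."""
    nets = INPUTS + [g[0] for g in NETLIST]
    faults = [(n, v) for n in nets for v in (False, True)]
    detected = set()
    for vec in vectors:
        good = simulate(vec)                # fault-free response
        for f in faults:
            if f not in detected and simulate(vec, fault=f) != good:
                detected.add(f)             # output differs: fault detected
    return len(detected) / len(faults)

vectors = [(1, 1, 0), (0, 1, 1), (1, 0, 1), (0, 0, 0)]
print(f"fault coverage: {fault_coverage(vectors):.0%}")   # 9 of 12 faults: 75%
```

The undetected faults here (c stuck-at-0, c stuck-at-1, n2 stuck-at-1) show the usual reason coverage falls short: no vector both excites the fault and sensitizes a path to an output.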
SELECTING BENCHMARK PRESENTERS
==============================

Since there is not enough time for benchmark results papers to be reviewed, we need an alternate method of selecting who will present. Assuming that there are more candidates than slots, I suggest the following.

1. A committee should be set up of experts who will not be presenting. People from DFT companies might be good candidates.

2. The session will have 5-6 slots for presenters. One slot will describe the circuits, one will give full scan results, corresponding to the current state of the art, and 3-4 will be result presentations.

3. Criteria could be:

   o Quality of results.
   o Quantity of results. Those able to run more circuits will have an advantage.
   o Advancement of the state of the art. Results not achievable by commercial tools will be favored.

4. At the time of selection, the proposed presenters will have had 2 1/2 months to work on the circuits. Selection will be based on a proposal and preliminary results. At least one backup should be chosen who will present if one of the selectees withdraws.

5. In addition to the ITC presentation, anyone may use the circuits as data for papers submitted to any conference or journal.

It may be the case that some of the techniques presented here will be well known to practitioners in industry. Unless already published, this should not count as a negative, since preliminary work must be done before industry methods can be surpassed.

SUMMARY
=======

The benchmark circuits should be chosen to provide realistic examples of DFT problems, and to stress DFT and ATPG methods. Users of the benchmarks can focus on one or several of these problems. For the initial presentation, preference will be given to those solving the most problems most effectively.
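As a small illustration of one problem area listed earlier, "proving busses non-conflicting," the sketch below exhaustively enumerates control-input assignments and reports any assignment that enables more than one driver on a shared tristate bus. The driver names and enable expressions are invented for this example; a real tool would extract them from the netlist and would need symbolic or constraint-based methods rather than enumeration, since exhaustive enumeration is infeasible for designs of benchmark size.

```python
# Illustrative tristate bus contention check (invented decoder-style example).
from itertools import product

# Enable condition of each driver on a shared bus, as a function of the
# control inputs (sel1, sel0). These expressions are made up for this sketch.
drivers = {
    "drv_a": lambda sel1, sel0: (not sel1) and (not sel0),
    "drv_b": lambda sel1, sel0: (not sel1) and sel0,
    "drv_c": lambda sel1, sel0: sel1,
}

def find_conflicts():
    """Return every control assignment that enables two or more drivers."""
    conflicts = []
    for sel1, sel0 in product((False, True), repeat=2):
        enabled = [name for name, en in drivers.items() if en(sel1, sel0)]
        if len(enabled) > 1:
            conflicts.append(((sel1, sel0), enabled))
    return conflicts

# For this one-hot enable structure, no assignment enables two drivers.
print("bus is conflict-free" if not find_conflicts() else find_conflicts())
```

The same enumeration, run over scan-shift states instead of functional states, illustrates why "DFT to prevent contention during scan shift" appears as a separate item: enables driven from scanned flip-flops can take arbitrary values while shifting, so mutual exclusion proved for functional operation does not carry over.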