
Re: Exchange of mail on Test benches and benchmark requirements



< snipping preceding mails, since this mail stands on its own>
> Hi guys 
> 
> Here are some comments.

Thanks for the good inputs.  I give my comments below.
> 
> Before starting with sample circuits, let us settle some things, like defining
> terms. What I mean by that is: let us define what coverage means.
> 
> The way I understand it, the terms are the following:
> 
> 				# detected faults
> 	raw coverage = ------------------------------- 
> 				# total faults
> 
> 
> 				       # detected faults
> 	testable coverage = ----------------------------------------
> 				#total faults - # untestable faults
> 
> 
> 			     #detected faults + #untestable faults
> 	fault efficiency = -----------------------------------------------
> 					#total faults
> 

Most commercial ATPG tools give definitions of these terms in their 
documentation.  Your definitions are the ones we used for Gentest, but there are 
a lot of valid choices.  The big difference is how you classify potential 
(possible) detects.  Some tools give partial credit, some give full credit if 
the fault is detected a certain number of times, and some give no credit.
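To make the three definitions above concrete, here is a small sketch that computes them, with the treatment of possible detects left as a policy knob since tools differ on it. The function name, parameters, and the example numbers are all illustrative, not taken from any particular ATPG tool:

```python
# Sketch of the three coverage definitions, with possible-detect credit
# as a parameter: 0.0 = no credit, 0.5 = partial credit, 1.0 = full credit.
def coverage_metrics(total, detected, untestable, possible=0,
                     possible_credit=0.0):
    """Return (raw coverage, testable coverage, fault efficiency)."""
    effective_detected = detected + possible_credit * possible
    raw = effective_detected / total
    testable = effective_detected / (total - untestable)
    efficiency = (effective_detected + untestable) / total
    return raw, testable, efficiency

# Example: 1000 total faults, 900 hard detects, 40 possible detects,
# 30 untestable, giving half credit for each possible detect.
raw, testable, eff = coverage_metrics(1000, 900, 30, possible=40,
                                      possible_credit=0.5)
print(f"raw = {raw:.3f}, testable = {testable:.3f}, efficiency = {eff:.3f}")
# raw = 0.920, testable = 0.948, efficiency = 0.950
```

Running the same fault counts through different credit policies shows immediately how much the choice matters once coverage is in the high nineties.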

> I have seen other definition terms like possible detects, detectable faults,
> oscillating faults, etc.  I think we should have the fault list for every
> circuit published after some analysis and discussion.  Also we should try to
> map the coverage of any ATPG on this fault list (this could be a university
> project) so that there is an independent coverage measure.
> 
> 

The ISCAS '85 and '89 circuits came with lists of collapsed faults.  When you 
generated a test, you made sure your fault count matched the official count.

This only works for designs without DFT.  Any DFT you add changes the number of 
faults, and each technique makes the number different.  Just generating tests 
for the original faults is not good enough, since you want to cover faults in 
the DFT logic also.  

We can request a list of faults as the standard set, but I don't know how to 
solve the DFT problem.  For circuits described at the RTL level the problem is 
even worse, since each synthesis run will produce a different netlist with a 
different fault count.

My only suggestion is to make the fault lists and counts available with each 
publication, and not to get too hung up over small (<1%) differences in fault 
counts and fault coverages.  We are looking for major advances here, and the 
uncertainty of the stuck-at fault model makes such small variations irrelevant.

> 1. First of all, do all ATPGs or fault list generators generate the same fault
>    list for a given circuit?  Any interesting fudge in the fault list can make
>    a difference when coverage numbers reach the high nineties.
> 
There are lots of differences in faults between fault simulators and ATPGs.
See Steve Millman's column in the Last Byte in D&T for some ways to fudge.  
Perhaps this work will lead to a standard way of counting faults, but I don't 
want to go there at the moment!



> 2. One issue is how we define untestable faults for every given circuit.
>    Is there a universal measure for qualifying a fault as untestable which
>    every tool has to agree upon?  Otherwise we have to highlight tools which
>    define untestable faults differently.
> 

A very good point!  Combinational untestability is well understood, but 
sequential untestability (not to mention redundancy) is a point of some 
contention.  Identifying more faults as untestable is important to efficiency.

We could publish a list of combinationally untestable faults under a full scan 
assumption, but everyone is on his/her own when it comes to sequential 
untestability.  I don't think we can impose a definition, and finding all the 
untestable faults is equivalent to getting 100% efficiency!

> 3. Most ATPGs and fault list generators have a way of collapsing faults, and
>    the actual coverage is on the collapsed fault list.  I have seen different
>    fault list generators generating different collapsed fault lists.  Does it
>    matter whether we have different collapsed fault lists?
> 
It might be easier if results were reported uncollapsed.  Many commercial ATPGs 
report results on both the collapsed and uncollapsed lists, while generating 
tests for and simulating only the collapsed list.  Opinions?
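One reason collapsed lists differ between tools is the choice of representative fault within each equivalence class. Here is a toy sketch of structural equivalence collapsing; the netlist representation and gate rules are my own illustration, not any tool's format:

```python
# Toy stuck-at fault collapsing via local gate equivalences:
#   AND: every input s-a-0 is equivalent to the output s-a-0
#   OR:  every input s-a-1 is equivalent to the output s-a-1
#   NOT: input s-a-0 ~ output s-a-1, input s-a-1 ~ output s-a-0
def collapse(lines, gates):
    """lines: net names; gates: (type, output, [inputs]) tuples.
    Returns the collapsed fault classes as frozensets of (line, value)."""
    parent = {(l, v): (l, v) for l in lines for v in (0, 1)}

    def find(f):                      # union-find with path halving
        while parent[f] != f:
            parent[f] = parent[parent[f]]
            f = parent[f]
        return f

    def union(f, g):
        parent[find(f)] = find(g)

    for kind, out, ins in gates:
        if kind == "AND":
            for i in ins:
                union((i, 0), (out, 0))
        elif kind == "OR":
            for i in ins:
                union((i, 1), (out, 1))
        elif kind == "NOT":
            union((ins[0], 0), (out, 1))
            union((ins[0], 1), (out, 0))

    classes = {}
    for f in parent:
        classes.setdefault(find(f), set()).add(f)
    return [frozenset(c) for c in classes.values()]

# c = AND(a, b): 6 uncollapsed faults, but a/0, b/0, c/0 are equivalent,
# leaving 4 collapsed fault classes.
classes = collapse(["a", "b", "c"], [("AND", "c", ["a", "b"])])
print(len(classes))  # 4
```

Two tools can agree on the 4 classes here yet publish different collapsed lists, since either a/0, b/0, or c/0 may be kept as the class representative.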

> 4. One other interesting thing to consider is tristate bus conflicts in the
>    ATPG tools.  We have to understand what to do when there is a bus conflict
>    for a given vector: some tools drop the vector, some tools accept the
>    vector, and for some tools the choice is programmable.  How do we handle
>    this?
> 
Which companies ignore bus conflicts?  I want to stay away from their ASICs!

Seriously, most ATPGs can guarantee tests without bus conflicts.  How that is 
done is up to them - DFT is helpful, as well as constraints and other 
algorithmic tricks.


> 5. Do we have redundancies in the sample circuits?  A lot of the time real
>    synthesis leaves some redundancies, which can cause a slowdown in the ATPG.
>

I'd be surprised if there weren't any!  This depends on the circuits, which we 
don't have yet.   
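For anyone who wants to see why redundancy and untestability are the same problem, here is the classic textbook example checked by exhaustive simulation (the circuit is my own illustration, not one of the benchmark circuits):

```python
from itertools import product

# y = a AND (a OR b) simplifies to y = a by absorption, so the OR gate's
# b input is redundant, and "b stuck-at-1 at the OR input" is
# combinationally untestable: no input pattern changes the output.
def good(a, b):
    return a & (a | b)

def faulty(a, b):          # b stuck-at-1 at the OR input
    return a & (a | 1)

detected = [(a, b) for a, b in product((0, 1), repeat=2)
            if good(a, b) != faulty(a, b)]
print(detected)  # [] -- no test exists, so the fault is untestable
```

Removing the redundant connection leaves the function unchanged and makes the fault disappear, which is exactly why proving a fault untestable is as hard as reaching 100% efficiency.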
> 
> There are a lot of these issues we should discuss, along with the debate of
> what circuits to use as samples. You may have more of these issues to add to
> the list.
> 
> -- 
> **************************************************************
> Prasad Mantri				Phone - 650-933-4331
> Scalable Network Servers Division         Fax - 650-932-4331
> SILICON GRAPHICS INC.			      mantri@sgi.com
> 2011 N Shoreline Blvd.
> Mountain View CA 94043-1389
> **************************************************************

Thanks for the comments.  How about having a standard fault coverage definition 
that everyone should report, and then letting users of the benchmarks report 
other types of coverage as they see fit?  Anyone want to volunteer to draft
a definition?

Thanks,

Scott

Scott Davidson
Sun Microsystems
scott.davidson@eng.sun.com