Common Guidelines for Gauge R&R Metrics

There’s an old saying that history repeats itself. Someone long ago modified that to, “History repeats itself; historians repeat each other.” When it comes to guidelines for gauge r&r studies, there’s a lot of repetition going on. Search for information about gauge r&r studies online and you’ll find a lot of material merely referencing other material without any objective commentary. It’s kind of like a mutual admiration society for measurement systems analysis, but we can do better.

Several guidelines have been suggested and documented over the years to help people decide whether or not a measurement system is capable. Three common guidelines read like this:

1) The precision-to-tolerance ratio (PTR) should be less than 10%, and if greater than 30% the system is unacceptable.

2) The percentage gauge r&r (%GRR) should be less than 10%, and if greater than 30% the system is unacceptable.

3) The number of distinct categories (ndc) should be 5 or greater, and a value of 0 or 1 implies the system is unacceptable.

Let’s take a closer look at these guidelines and see how they might fit into your measurement systems analysis.

PTR = 6 * stdev(gauge)/tolerance — The PTR guidelines are based on the notion that a measurement device should be calibrated in units 1/10 as large as the final required measurement accuracy. This notion may hold for your measurement system, but it may not. You may have guidelines established by internal or external customers, but we recommend determining your own guidelines based on your data and experience. Note that if you blow open your limits for whatever reason, PTR can look quite impressive without being meaningful. Or, if you have great repeatability and reproducibility, you might consider tightening your limits and fighting back against pesky defects.
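As a quick sketch of the arithmetic, here is the PTR formula as a small Python function. The gauge standard deviation and spec limits below are made-up numbers for illustration:

```python
def ptr(gauge_stdev: float, lower_spec: float, upper_spec: float,
        multiplier: float = 6.0) -> float:
    """Precision-to-tolerance ratio: multiplier * gauge stdev / tolerance."""
    tolerance = upper_spec - lower_spec
    return multiplier * gauge_stdev / tolerance

# Hypothetical gauge with stdev 0.5 against spec limits of 90 and 110:
print(round(ptr(0.5, 90.0, 110.0), 3))  # 6 * 0.5 / 20 = 0.15, i.e. 15%
```

Widening the spec limits shrinks the denominator's reciprocal effect, which is exactly how a "blown open" tolerance can make PTR look better without the gauge improving at all.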

Remember that 6*sigma is obviously going to give you a bigger value than 5.15*sigma: 6*sigma covers 99.73% of a normal distribution, while 5.15*sigma covers 99%. You might ask, "What's 0.73% among friends?" But 6/5.15 is about 1.165, and most mentions of the PTR guidelines ignore this fact. To put it another way, reducing measurement error is harder than merely changing a multiplier from 6 to 5.15.
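You can verify those coverage figures yourself with the standard normal CDF, which is available via the error function in Python's standard library:

```python
import math

def normal_coverage(multiplier: float) -> float:
    """Fraction of a normal distribution within +/- (multiplier/2) stdevs."""
    return math.erf(multiplier / 2 / math.sqrt(2))

print(round(normal_coverage(6.0), 4))   # ~ 0.9973
print(round(normal_coverage(5.15), 4))  # ~ 0.99
print(round(6 / 5.15, 3))               # ~ 1.165
```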

%GRR = stdev(gauge)/stdev(total), expressed as a percentage — In terms of round numbers, the %GRR guidelines are generally the same as the PTR guidelines. BTW, a %GRR of 30% is the same as saying that the measurement system variance is 9% of the total variance (in other words, less than 10%).

Note that if the part-to-part variation increases, %GRR goes down. This does not mean you should ask your friends in the fab to increase part-to-part variability. Ratios are just that – ratios. If your part-to-part variability is extremely low, then your %GRR doesn't compare directly with someone else's %GRR where there is considerable part-to-part variability. If you're going to do a gauge r&r study, don't just pick two or three parts. You're either going to underestimate part variability or overestimate it, neither of which is helpful.
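The ratio effect is easy to see in a couple of lines. The variance components below are hypothetical; note that the same gauge produces very different %GRR values depending on the process it sits in:

```python
import math

def pct_grr(gauge_var: float, part_var: float) -> float:
    """%GRR: gauge stdev as a percentage of total stdev (part var + gauge var)."""
    return 100 * math.sqrt(gauge_var / (gauge_var + part_var))

# Same gauge (variance 1.0) measured against two hypothetical processes:
print(round(pct_grr(1.0, 120.0), 1))  # 9.1  -- lots of part-to-part variation
print(round(pct_grr(1.0, 10.0), 1))   # 30.2 -- much less part-to-part variation
# And a 30% GRR means the gauge variance is 0.30**2 = 9% of total variance.
```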

Also note that if you use 6 as your sigma multiplier for PTR, then %GRR divided by PTR (approximately) equals Cp.
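A quick numeric check of that relationship, using made-up variance components and spec limits (the approximation is good when gauge variance is small relative to part variance, since total stdev then approaches process stdev):

```python
import math

# Hypothetical variance components and tolerance:
gauge_var, part_var = 0.04, 1.0
tol = 8.0  # USL - LSL

total_sd = math.sqrt(gauge_var + part_var)
ptr = 6 * math.sqrt(gauge_var) / tol    # precision-to-tolerance ratio
grr = math.sqrt(gauge_var) / total_sd   # GRR as a fraction, not a percent

print(round(grr / ptr, 3))                         # tol / (6 * total_sd)
print(round(tol / (6 * math.sqrt(part_var)), 3))   # true process Cp, nearby
```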

Again, use your data and experience to determine how the %GRR metric can help you decide whether your measurement system is capable.

NDC = square-root[2*variance(process)/variance(gauge)] — The number of distinct categories derives from another gauge metric, the discrimination ratio. Technically, the ndc can be interpreted as the number of non-overlapping confidence intervals that cover the range of the product variation. (Less technically, ndc can be interpreted as "never don't concentrate" if you're a Simpsons fan.)
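In code, the formula above looks like this; the variance components are hypothetical, and the result is truncated to an integer as is customary:

```python
import math

def ndc(part_var: float, gauge_var: float) -> int:
    """Number of distinct categories, truncated to an integer."""
    return int(math.sqrt(2 * part_var / gauge_var))

print(ndc(1.0, 0.04))  # sqrt(2 * 25) = 7.07... -> 7
```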

More practically, you can view the ndc as the number of distinct categories that the measurement system “sees” within a given parameter. Relatively large amounts of measurement error mean that two parts that are truly quite different from each other may look very similar to each other when measured. Relatively small amounts of measurement error mean that the measurement system can differentiate between two parts that are similar but not identical to each other.

The usual ndc guidelines state that ndc should be 5 or more, and that values less than 2 suggest a non-capable measurement system. An ndc of 5 is actually equivalent to a %GRR of around 27.1%, so the ndc and %GRR guidelines are not consistent with each other. See Some Relationships Between Gage R&R Criteria by William H. Woodall and Connie M. Borror in Quality and Reliability Engineering International (2008; 24:99-106) for more information.
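You can reproduce that equivalence directly. The sketch below assumes total variance is part variance plus gauge variance; the second call uses 1.41 (the rounded multiplier in the common AIAG formulation) instead of an exact square root of 2, which appears to be where the 27.1% figure comes from:

```python
import math

def grr_from_ndc(ndc: float, factor: float = math.sqrt(2)) -> float:
    """%GRR implied by an ndc value, assuming total var = part var + gauge var."""
    part_to_gauge_var = (ndc / factor) ** 2
    return 100 / math.sqrt(1 + part_to_gauge_var)

print(round(grr_from_ndc(5), 1))        # ~ 27.2 with an exact sqrt(2)
print(round(grr_from_ndc(5, 1.41), 1))  # ~ 27.1 with the rounded 1.41
```

Either way, an ndc of 5 lands near 27% GRR, well inside the "unacceptable" zone of the %GRR guidelines, which is the inconsistency Woodall and Borror point out.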

Use your data and experience to determine if the ndc metric can help you measure and improve your measurement system.

Remember that dataConductor’s gauge r&r results can be easily filtered and sorted, and in combination with other statistics you can quickly spot unusual results. It’s easy to drop in a line plot or build a scatterplot to compare appraisers. Sorting the min/mean/max plot from low to high in the default gauge r&r output is a great way to spot whether variability changes as the absolute measurement changes.

Remember too that gauge metrics are there to help you improve your measurement system, but the focus should be on the substance of the metrics and not just the repetition of their use.

Published November 2nd, 2010 in The Data Blog.

About the Author:

Syntricity Corporation is the number one provider of test management and optimization software for the semiconductor industry. Built for the future of analytics, enterprises worldwide rely on Syntricity for data integration, data quality, and big data solutions that accelerate product ramps and deliver premium yields. A privately-held company, Syntricity is headquartered in San Diego, California. For more information, send e-mail to brian.graff@syntricity.com, visit www.syntricity.com, or call 858.552.4485.