Uncertainty Estimation

-----Original Message-----

    From: Greg Gogates [SMTP:iso25@fasor.com]

    Sent: Monday, January 31, 2000 6:54 AM

    To: iso25@quality.org

    Subject: Estimation of Uncertainty of Measurement for a testing Lab

    Date: Sat, 29 Jan 2000 13:14:41 -0500

    From: "Todd Marrow" <TMarrow@lsd.uoguelph.ca>

    To: <iso25@quality.org>

    Subject: Estimation of Uncertainty of Measurement for a testing laboratory

    In the draft of ISO 17025 that I've seen, estimation of uncertainty of measurement seems to be an expectation for a calibration laboratory, but the expectation appears a little looser for a testing laboratory.

    For example, section 5.4.7.2 states "Testing laboratories shall have and apply procedures for estimating uncertainties of measurement, except when the test methods preclude such rigorous calculations".

    Can someone help me out with how to estimate uncertainty of measurement for a testing laboratory? How far do you actually take this? Examples would be helpful. Opinions from calibration labs are welcome too since you probably have a better handle on this.

Thank you

-----------------snippo-----------------

>>> Greg Gogates <iso25@fasor.com> 01/02/2000 15:17:01 >>>

    Date: Mon, 31 Jan 2000 11:14:31 -0800

    From: "Dr. Howard Castrup" <hcastrup@isgmax.com>

    To: 'Greg Gogates' <iso25@fasor.com>

    Subject: RE: Estimation of Uncertainty of Measurement for a testing Lab

    Actually, having a handle on measurement uncertainty in testing is more critical than in calibration. The spreads of distributions of parameter values of tested products (i.e., uncertainties) are directly influenced by uncertainties in the test process. Moreover, these uncertainties lead to such undesirable effects as false accept and false reject risk.
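
    As a rough illustration of that false accept / false reject point, the Python sketch below simulates a product parameter with some process spread, measures it with added test-process error, and counts how often an out-of-tolerance unit passes the test and an in-tolerance unit fails it. The tolerance limit, standard deviations and sample size are assumed values chosen purely for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative (assumed) values: spread of true product values,
        # test-process standard uncertainty, and a +/-1.0 tolerance.
        process_sd = 0.5
        measurement_sd = 0.2
        tol = 1.0
        n = 1_000_000

        true_vals = rng.normal(0.0, process_sd, n)
        measured = true_vals + rng.normal(0.0, measurement_sd, n)

        in_tol = np.abs(true_vals) <= tol     # unit really within tolerance
        accepted = np.abs(measured) <= tol    # unit passes the test

        false_accept = np.mean(~in_tol & accepted)   # bad unit accepted
        false_reject = np.mean(in_tol & ~accepted)   # good unit rejected
        print(f"false accept risk: {false_accept:.3%}")
        print(f"false reject risk: {false_reject:.3%}")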

    As to how to estimate uncertainties in test processes, you apply the same methods as with calibration process uncertainties. The GUM is, of course, a good resource. There are also commercially available software packages, as well as freeware. If you want to dig a little deeper, you might get a copy of "Experimentation and Uncertainty Analysis for Engineers" by Coleman and Steele or "An Introduction to Error Analysis" by Taylor.

Howard Castrup

    President, Integrated Science Group

    1-661-872-1683

    1-661-872-3669 (fax)

    http://www.isgmax.com

-------------------snippo--------------------

    To: iso25@quality.org

    Subject: Estimation of Uncertainty of Measurement for a testing Lab RE3

    Date: Thu, 03 Feb 2000 10:42:32 +0000

    From: Steve Ellison <SLRE@lgc.co.uk>

    To: iso25@fasor.com, iso25@quality.org

    Subject: Re: Estimation of Uncertainty of Measurement for a testing Lab RE2

    Have to disagree somewhat with Howard here. The new 17025 explicitly permits the use of validation data for MU estimates. That uses the same maths, but quite different uncertainty contributions from those applicable in fully traceable calibration systems. The reason is that validation data is based almost entirely on output performance: recovery (a chemist's indicator of overall bias, but never the whole story), repeatability, reproducibility, linearity, ruggedness and robustness tests, between-analyst variation, etc. The aim is to demonstrate that all controllable sources of bias are essentially negligible compared to those output measures (e.g. a weighing uncertainty of 1 in 10^4 is not important if the reproducibility is 10%!). Once you have that evidence, the reproducibility standard deviation is a pretty good guide (you may well need to throw in extra allowances for sample prep and other things that ARE significant). The result is a much more rough-and-ready measurement system model, of the old "x(obs) = x(true) + bias + random error" form, with some additions.
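
    A rough sketch of that 'top-down' estimate from validation data: the reproducibility standard deviation is combined with extra allowances for the effects that are significant (sample preparation and uncorrected recovery are used here only as examples), and every number below is an assumed, illustrative value.

        import math

        # Illustrative relative standard uncertainties (fractions of the result).
        s_reproducibility = 0.10   # reproducibility SD from validation data (10 %)
        u_sample_prep = 0.04       # extra allowance for sample preparation
        u_recovery = 0.03          # allowance for uncorrected recovery / bias

        u_rel = math.sqrt(s_reproducibility**2 + u_sample_prep**2 + u_recovery**2)
        U_rel = 2 * u_rel          # expanded relative uncertainty, k = 2

        result = 25.0              # reported result, arbitrary units
        print(f"result: {result} +/- {U_rel * result:.1f} (k=2, about {U_rel:.0%} relative)")

    Adding a 1 in 10^4 weighing uncertainty to that sum would change u_rel by well under one part in a million, which is the point about controllable bias sources being negligible against the output measures.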

    That contrasts with the input-value estimates that nearly always apply well in calibration (and, of course, much testing). You're essentially predicting the 'potential variability' from assumed distributions of all the input effects. This is the assumption the GUM is based on. It is very effective when the dominant effects are well predicted and tied to traceable measurements with known uncertainties, which is of course the norm for metrology and calibration labs. Even there, there's often a 'random error' term hiding somewhere: the variability in the small number of calibration observations made. And that's added in; it's just that for calibrations, the other effects tend to be better known.
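
    By contrast, a bottom-up estimate predicts the potential variability by propagating assumed distributions for the input effects. The Monte Carlo sketch below does that for a deliberately simple additive model; the correction, resolution and repeatability figures are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000

        # Assumed input distributions (illustrative): a traceable correction with
        # a known standard uncertainty, a rectangular resolution term, and a small
        # Type A repeatability term from the few readings taken.
        correction = rng.normal(0.00, 0.05, n)    # traceable correction, u = 0.05
        resolution = rng.uniform(-0.01, 0.01, n)  # +/-0.01 resolution, rectangular
        repeat = rng.normal(0.00, 0.03, n)        # repeatability of the readings

        indicated = 10.00                         # instrument indication
        y = indicated + correction + resolution + repeat

        print(f"estimate: {y.mean():.3f}")
        print(f"standard uncertainty (Monte Carlo): {y.std(ddof=1):.4f}")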
