
Statistical artefacts in proficiency testing program (PTP) report

Discussion in 'ISO 17025 - Calibration and Test Laboratories' started by andic, Sep 23, 2021.

    My lab just received feedback from a third-party-organised chemical analysis PTP. There were some failures (results outside 2 standard deviations from the mean), but for at least some of these elements I think the problem lies in the way the organiser set up the testing and processed the data. See below:

    Element A was included in the testing; the instructions were to report to two decimal places (x.xx %). We measured 0.0252 % and reported 0.03 %.

    86 % of the labs reported 0.02 %, 10 % reported 0.03 %, and the remaining 4 % reported 0.01 %, 0.04 %, or a large outlier. The assigned value and standard deviation (sd) were calculated as 0.020 % and 0.001 % respectively.
    I have four comments on this:

    1. The rounding has created a bimodal distribution (for something which should be roughly normal), with a lower mode at 0.02 % and an upper mode at 0.03 %. If most labs actually measured the concentration in the range 0.023 %–0.025 %, the normal distribution would be split unevenly: the mean and lower tail reported as 0.02 %, and some portion of the upper tail reported as 0.03 %. Since 10 % of the labs reported 0.03 %, standard normal (or Student's t) tables put the 0.025 % cut-off around 1.2–1.3 sd above the mean.

    2. I recalculated the mean and sd from the reported results, excluding only one very large outlier (0.67 %), and got 0.02082 % and 0.00397 % respectively. This suggests to me that the organiser has excluded most or all of the non-0.02 % results, compounding the rounding bias and artificially reducing the standard deviation.
    3. The required precision of two decimal places wipes out the resolution: the rounding interval of ±0.005 % is about 25 % of the assigned value and wider than the ±2 sd pass/fail band (±0.002 %).

    4. Is it valid to calculate an average of rounded numbers and then report that average to greater precision than the inputs?
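    The cut-off estimate in comment 1 can be checked with the Python standard library. This is only a sketch: the 10 % figure and the organiser's sd of 0.001 % come from the report above, and the assumption that the underlying lab results are normally distributed is mine.

    ```python
    from statistics import NormalDist

    # If 10% of labs rounded up to 0.03%, the rounding boundary at 0.025%
    # sits at roughly the 90th percentile of the (assumed normal) lab results.
    z = NormalDist().inv_cdf(0.90)
    print(f"boundary at about {z:.2f} sd above the mean")  # ~1.28 sd

    # Working backwards with the organiser's sd of 0.001%, the implied
    # mean would be:
    mean_est = 0.025 - z * 0.001
    print(f"implied mean: {mean_est:.4f} %")  # ~0.0237 %, not 0.020 %
    ```

    The implied mean of about 0.0237 % is inconsistent with the assigned value of 0.020 %, which supports the suspicion that the rounding has distorted the statistics.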

    Is it possible to estimate/quantify the effect of rounding on the distribution?
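    One way to quantify the rounding effect is a quick Monte Carlo simulation: draw results from an assumed "true" distribution, round them to two decimal places, and compare the statistics before and after. The true mean and sd below (0.0224 % and 0.002 %) are purely illustrative values I have picked so that roughly 10 % of results land above the 0.025 % boundary, matching the report above.

    ```python
    import random
    from statistics import mean, stdev

    random.seed(1)

    # Assumed "true" lab distribution (illustrative values only)
    true_mu, true_sd = 0.0224, 0.002
    h = 0.01  # rounding step implied by reporting to 2 decimal places

    raw = [random.gauss(true_mu, true_sd) for _ in range(100_000)]
    rounded = [round(x / h) * h for x in raw]

    print(f"raw:     mean={mean(raw):.4f}  sd={stdev(raw):.4f}")
    print(f"rounded: mean={mean(rounded):.4f}  sd={stdev(rounded):.4f}")

    # Fraction of results that round up to 0.03% (cf. the 10% observed):
    frac_high = sum(r > 0.025 for r in rounded) / len(rounded)
    print(f"reported as 0.03%: {frac_high:.1%}")
    ```

    With a rounding step this coarse relative to the spread, the rounded data collapse onto essentially two values, the rounded mean is biased low relative to the true mean, and the rounded sd is inflated. So averaging the rounded numbers neither recovers the true mean nor the true sd, which bears on comment 4 as well.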

    I think there is something wrong with the data processing, but I am not an expert, and I would welcome feedback on the above observations before I take my concerns to the organiser.

    Thanks

    andic