# LTPD and interpretations.

Discussion in 'Sampling, Standards and Inspection' started by JeremyQ2022, Jan 7, 2022.

1. ### JeremyQ2022 (New Member)

Hi, I am an undergraduate student currently enrolled in an introduction to statistics course.

I am confused about LTPD and the relationship between defect rate and sample size.

For example, the LTPD table shows that, at a 90% confidence level with c=0 (accept on zero defects), a sample size of n=15 corresponds to a maximum defect rate of 15%, and a sample size of n=24 corresponds to a maximum defect rate of 10%.

Question 1: is it correct to interpret this defect rate with respect to sample size as follows:

a. With a sample size of 15, I am 90% confident that I can detect a failure if the failure rate is 15% or higher.

b. With a sample size of 24, I am 90% confident that I can detect a failure if the failure rate is 10% or higher.

Therefore, the larger my sample, the rarer the defects I can detect.
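The c=0 table values can be sanity-checked with the standard binomial model: with accept-on-zero, a lot at defect rate p is accepted only if all n samples pass, which happens with probability (1-p)^n, and the plan sizes n so that this is at most 10%. A minimal sketch (the function name is my own, not from any table):

```python
# Check the c=0 LTPD table values quoted above: a lot at defect rate p
# is ACCEPTED by an accept-on-zero plan with probability (1-p)**n,
# which a 90%-confidence LTPD plan caps at 10%.
def accept_prob_c0(p, n):
    """Probability that a random sample of n contains zero defects
    when the true lot defect rate is p (binomial model)."""
    return (1 - p) ** n

print(accept_prob_c0(0.15, 15))  # ~0.087 <= 0.10, so n=15 covers LTPD=15%
print(accept_prob_c0(0.10, 24))  # ~0.080 <= 0.10, so n=24 covers LTPD=10%
```

Both values come in just under the 10% ceiling, which is why those n values appear in the table.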

Question 2: assume I run a test with a sample size of 30 and detect a problem with one unit. The defect rate for this particular problem is then 1/30 ≈ 0.033, or 3.3%.
Am I really lucky to be able to detect a 3.3% defect rate with only 30 samples?

Thank you

2. ### Bev D (Moderator, Staff Member)

LTPD is the defect rate that will be detected (and the lot REJECTED) 90% of the time that defect rate is present.
Stated another way there is a 10% probability that a lot with the LTPD defect rate will be ACCEPTED.
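This definition extends beyond c=0: the acceptance probability is the binomial probability of seeing at most c defects in the sample, and at the LTPD rate the plan is built so that this is only about 10%. A short sketch under the same binomial assumption (function name is my own):

```python
from math import comb

def accept_prob(p, n, c):
    """P(accepting the lot) = P(at most c defects in a random sample
    of n) when the true lot defect rate is p (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# For the n=11, c=0, LTPD=20% plan discussed below, a lot that is
# actually 20% defective is accepted only ~9% of the time:
print(accept_prob(0.20, 11, 0))  # ~0.086
```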

3. ### JeremyQ2022 (New Member)

Thank you for the reply. Just so I understand the concept clearly: assume I tested a sample of 11 units from a population and found one defect. According to the LTPD table shown here, is it correct to conclude, with 90% CL, that this population has a defect rate of 20% or higher?

4. ### Bev D (Moderator, Staff Member)

Unfortunately, that is not a correct interpretation. Hang on, it’s going to be a bumpy ride…

Three critical things to understand about sampling:

1. There is a difference between ‘a priori’ probability and ‘a posteriori’ probability. (In essence, a priori is ‘before you take data’ and a posteriori is ‘after you have taken the data’.)

2. Statistical confidence is NOT the same thing as layman’s confidence. Statistical confidence is a narrow mathematical construct related to the precision of the estimate, and it means something very different in the a priori situation than in the a posteriori situation.

3. This type of sampling only works when the lot is reasonably homogeneous (the defects are uniformly or randomly sprinkled about the lot) AND your sample is randomly taken. Too often in lot acceptance testing the samples are not randomly selected but are the easiest to get to, so if your lot isn’t homogeneous your sample will be biased.

Sampling plans have a priori probabilities. While the language uses the term ‘confidence’, these plans actually involve probability, not confidence. The 20% LTPD plan would have you sample 11 units and reject the lot if you get 1 (or more) defects in your sample, because IF the lot were 20% defective, the probability that you would find 1 or more defects would be 90%. In other words, 90% of the time you would get 1 or more defects in a RANDOM sample of 11 units. However, there is also a large probability that if the lot were only 10% defective you would still get 1 or more defects in a sample of 11 units. In fact, that probability is 67%. This comes from something called an Operating Characteristic curve (OC curve).
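The OC-curve points mentioned above can be sketched directly: for the n=11, c=0 plan, the rejection probability at a true defect rate p is 1-(1-p)^11 (assuming the binomial model; the function name is my own):

```python
# OC-curve points for the n=11, c=0 plan: the probability of REJECTING
# the lot (finding 1 or more defects) as a function of the true lot
# defect rate p.
def reject_prob_c0(p, n=11):
    return 1 - (1 - p) ** n

for p in (0.05, 0.10, 0.20, 0.30):
    print(f"p = {p:.0%}: P(reject) = {reject_prob_c0(p):.2f}")
# p = 20% gives ~0.91 (the 90% LTPD design point), while p = 10% still
# gives ~0.69, close to the two-thirds figure quoted above.
```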

You must apply a posteriori probability once you take the sample and know how many defects it contains. That requires a different formula to understand the likely defect rate (and I use that phrase very loosely, as it isn’t quite true, although we wish it were). The only thing you can really do with a single sampling event is to estimate the precision of that estimate. This precision estimate is called a confidence interval.

I leave it to you to look up the various formulas for this (Agresti-Coull, Clopper-Pearson, exact binomial, binomial approximation to the Normal…). One of the formulas (exact binomial) would give us a 90% confidence interval for 1 defect in 11 units of plus 0.27 and minus 0.06. Using the convention of placing the confidence interval about the point estimate, we get a low of 3% and a high of 36% defective.

Now common usage would say that we have a 90% confidence level that the true defect rate is somewhere between 3% and 36%. BUT this isn’t really true, although it has some utility: it indicates that 11 is a pretty small sample size to provide any real confidence (human certainty) that the defect rate is 9% or 20% or whatever. It isn’t really true for two reasons. First, what we sloppily refer to as the confidence level is actually only 1 minus the alpha risk. Second, confidence intervals actually belong about the true mean, not the point estimate. Of course, since we don’t know the true mean, we have no place else to ‘put’ them, so we attach them to the point estimate.
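A minimal sketch of the exact (Clopper-Pearson) binomial interval mentioned above, implemented by bisection so that it needs only the standard library; the function names and iteration count are my own choices, not from the thread. Note that with counts this small the different formulas listed above disagree noticeably, especially on the lower limit:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def bisect(f, target, increasing, iters=60):
    """Find p in [0, 1] with f(p) == target, for monotonic f."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(x, n, conf=0.90):
    """Exact (Clopper-Pearson) binomial confidence interval for
    x defects observed in n units."""
    a2 = (1 - conf) / 2
    lower = 0.0 if x == 0 else bisect(
        lambda p: 1 - binom_cdf(x - 1, n, p), a2, increasing=True)
    upper = 1.0 if x == n else bisect(
        lambda p: binom_cdf(x, n, p), a2, increasing=False)
    return lower, upper

lo, hi = clopper_pearson(1, 11, conf=0.90)
print(f"90% exact CI for 1 defect in 11 units: {lo:.1%} to {hi:.1%}")
```

The upper limit lands near the 36% quoted above; the exact lower limit comes out well under the point estimate, which is one reason the choice of formula matters at small sample sizes.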

Statistics is complicated. While the sampling tables help us to make routine choices simply, they hide the true complexity…
