I am confused about NDC (number of distinct categories) and hope someone on here can answer my question. I have a part with a specific width measurement, and the tolerance is ±0.10 millimeter. Because this is a plastic part with a definite maximum material condition (MMC), I elected to use a caliper, which easily captured the MMC for all inspectors and returned a gage R&R of 13%.

My customer did not like the NDC of 2 and asked for a different measurement method. So we used a digital micrometer that reads to 0.001 millimeter and performed a new gage R&R study. This time the R&R was 16%, which I could live with, but the NDC is still 2. I am wondering how this can be. We are using the standard AIAG worksheet and its formulas. I also typed the data into Minitab, and the NDC was confirmed. But when I look at the micrometer data, it sure looks like there are a sufficient number of different readings. What am I missing?

Now the customer wants a special fixture built so we can measure with a drop indicator. My gut tells me this is unnecessary, and based on what I have experienced so far, I also cannot guarantee any improvement in the NDC. I don't understand how the NDC could fail to improve with the increased resolution of the micrometer, and I would sure appreciate any insight from your experiences. Thanks.
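Edit: in case it helps anyone answer, here is my understanding of the formula behind the AIAG worksheet. NDC is computed from the part-variation and gage R&R standard deviations, not from the instrument's resolution. A minimal sketch (the PV and GRR numbers below are made up for illustration, not from my actual study):

```python
def ndc(pv, grr):
    """AIAG number of distinct categories: 1.41 * (PV / GRR),
    truncated to an integer. pv and grr are standard deviations
    for part-to-part variation and gage R&R, in the same units."""
    return int(1.41 * pv / grr)

# Hypothetical numbers: when part-to-part variation is not much
# larger than the measurement variation, NDC stays pinned at 2
# no matter how finely the instrument reads.
print(ndc(0.020, 0.012))  # 1.41 * 0.020 / 0.012 ≈ 2.35 -> 2
print(ndc(0.020, 0.004))  # a much better gage: ≈ 7.05 -> 7
```

If I'm reading this right, the ratio is what matters, so a finer-reading micrometer by itself wouldn't move the NDC.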