I am conducting a capability study on a process for cutting cables to specified lengths. We need 13 different lengths, each with the same tolerance (±1 mm). Per our QMS, we run 3 lots with a minimum Ppk requirement for each lot (Ppk ≥ 0.74 at a sample size of 30 for variable data).

After running each lot through a normality test, some lots came back with p-values above 0.05, but many did not. Other members of the team then suggested using a Johnson transformation to normalize the data and running the capability analysis on the transformed values. This seemed odd to me: the fitted transformation was different for every data set, and when a single transformation equation was applied across lots, the transformed data still failed the normality test (p-value below 0.05).

My understanding is that a transformation of this type is justified when the process itself has some characteristic that makes it inherently non-normal, such as a physical bound that skews the distribution left or right. The suggested approach would instead imply that any data set can be normalized and then compared against other data sets to judge whether a process is capable. This does not seem correct to me. Can anyone explain this further?
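For concreteness, here is a minimal sketch of the suggested procedure as I understand it, in Python with scipy. The nominal length (100 mm) and the simulated skewed data are stand-ins for our real measurements, and I use only the Johnson SU family for simplicity, whereas Minitab's Johnson transformation actually selects among the SB, SL, and SU families. Note that the spec limits must be mapped through the same fitted equation as the data:

```python
import numpy as np
from scipy import stats

def ppk(data, lsl, usl):
    """Overall capability: distance from the mean to the nearer spec
    limit, in units of 3 overall standard deviations."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

def johnson_su_ppk(data, lsl, usl):
    """Fit a Johnson SU curve to one lot, map the data AND the spec
    limits through the same fitted equation, then compute Ppk on the
    transformed (approximately normal) scale."""
    a, b, loc, scale = stats.johnsonsu.fit(data)
    z = lambda x: a + b * np.arcsinh((x - loc) / scale)
    return ppk(z(data), z(lsl), z(usl))

# Simulated lot of 30 skewed cut lengths (mm), nominal 100 mm +/- 1 mm
rng = np.random.default_rng(0)
lot = rng.gamma(shape=3.0, scale=0.2, size=30) + 99.5
target = 100.0
print("Shapiro-Wilk p =", stats.shapiro(lot).pvalue)
print("Raw Ppk        =", ppk(lot, target - 1, target + 1))
print("Johnson Ppk    =", johnson_su_ppk(lot, target - 1, target + 1))
```

Each lot would get its own fitted parameters (a, b, loc, scale), which is exactly what bothers me: the transformed Ppk values from different lots are then computed on different scales.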