We have a laboratory process that runs batches of 96 samples at a time. The mean of each batch is plotted on an Xbar-s chart. The within-batch variability is very small, while the between-batch variability is much larger; this batch-to-batch variation is expected and does not indicate a process problem.

Because the within-batch variability is so small, the control limits are very narrow and nearly every batch appears out of control, even though we know the process is in control (at least with respect to this metric). As a result, we do not use the LCL and UCL to judge process health, and we do not apply the standard SPC run rules because almost every batch fails them.

I would like to apply meaningful statistics to this metric so we can detect an actual process issue. Is there an alternative way to compute control limits that uses the variance among batch means (between-batch variance) rather than the variance among the 96 observations within a batch? Or is there a different statistical test that would be more appropriate?
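To make the problem concrete, here is a minimal simulation sketch (the batch count, means, and sigmas are made-up numbers, not our real process). It builds Xbar-s limits from the within-batch spread, which flags nearly every batch, and then, purely as an illustration of the "variance among batches" idea in the question, computes individuals-chart-style limits from the moving range between consecutive batch means:

```python
# Illustration of the described issue with simulated data.
# All numeric settings below are assumptions for the sketch.
import numpy as np
from math import gamma, sqrt

rng = np.random.default_rng(0)
n, k = 96, 50                      # 96 samples per batch, 50 batches
sigma_within, sigma_between = 0.1, 1.0

# Each batch has its own mean drawn from a wide between-batch distribution;
# samples within a batch scatter only tightly around that mean.
true_means = rng.normal(100.0, sigma_between, size=k)
data = rng.normal(true_means[:, None], sigma_within, size=(k, n))

xbar = data.mean(axis=1)           # points plotted on the Xbar chart
s = data.std(axis=1, ddof=1)       # per-batch sample standard deviation

# Standard Xbar-s limits: sigma estimated from the WITHIN-batch spread,
# using the unbiasing constant c4(n).
c4 = sqrt(2.0 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
center = xbar.mean()
half_width_xbar_s = 3.0 * s.mean() / (c4 * sqrt(n))
out_xbar_s = np.mean(np.abs(xbar - center) > half_width_xbar_s)

# Alternative sketch: treat batch means as individual observations and set
# limits from the moving range BETWEEN consecutive means (I-MR style).
mr = np.abs(np.diff(xbar))
half_width_imr = 2.66 * mr.mean()  # 2.66 = 3 / d2, with d2(2) = 1.128
out_imr = np.mean(np.abs(xbar - center) > half_width_imr)

print(f"outside Xbar-s limits: {out_xbar_s:.0%}")
print(f"outside between-batch (I-MR) limits: {out_imr:.0%}")
```

With these assumed sigmas, almost every batch mean falls outside the within-batch Xbar-s limits, while very few fall outside the limits derived from batch-to-batch variation, which is exactly the mismatch described above.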