
Capability and Parallel Processes

Discussion in 'Capability - Process, Machine, Gage …' started by ncwalker, Jun 8, 2018.

  1. ncwalker

    ncwalker Well-Known Member

    The age-old question. You have 6 lines, and each line has 2 fixtures. That's 12 value streams making parts. The customer wants capability data, and with the current mindset of a 100-piece study at a minimum, that's 12 × 100 = 1,200 measurements to do the study - and that assumes you're only doing it on one characteristic. That's a lot, and there's motivation to shortcut the study in some way.

    Can't we just do the study on the collective output of the 12 value streams?

    It's a hard problem to wrap your head around. I have come up with an analogy that seems to explain it well to people.

    The goal of the capability study is to determine if there is problematic noise - is one of the value streams so noisy that it (potentially) makes bad parts? And is this detectable by looking only at the combined output of the group?

    Think of the value streams like music speakers. We wish to know whether we can detect one speaker with high noise (high standard deviation) when all the others are not noisy. The analogy is ... we have 6 speakers playing music and one of them is staticky - a bad speaker. Can you hear the bad apple? As the number of speakers increases, it becomes less likely you will detect the bad apple by listening to the total output. As the number of speakers decreases, it becomes more likely.
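    As an illustration of that masking effect, here is a minimal simulation sketch in Python. The stream counts, the "good" and "bad" standard deviations, and the sample sizes are made-up values, not from any real process - the point is only how the noisy stream's contribution to the combined spread shrinks as more quiet streams are added.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def combined_sd(n_streams, good_sd=1.0, bad_sd=3.0, n_per_stream=100):
        """Sample std dev of the pooled output when exactly one stream is noisy."""
        data = [rng.normal(0.0, good_sd, n_per_stream) for _ in range(n_streams - 1)]
        data.append(rng.normal(0.0, bad_sd, n_per_stream))   # the "staticky speaker"
        return np.concatenate(data).std(ddof=1)

    # With few streams the noisy one inflates the combined spread noticeably;
    # with many streams its contribution is diluted and harder to "hear".
    for k in (2, 6, 12, 50):
        print(f"{k:>3} streams: combined sd = {combined_sd(k):.2f}  (a good stream alone ~ 1.0)")
    ```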

    The other question we answer is centeredness. The analogy here is whether all the speakers are in phase. A group of parallel value streams could all be individually capable, but with some of them toward the low end of the spec and some toward the high end. In other words, the individual studies would all have good Cps and OK Cpks. In the speaker analogy, this would be, say, half of our speakers in phase and the other half playing the music on a half-second delay. Each individual speaker may be high end and sound great, but the conglomerate arrangement would sound like crap. So detecting this problem from the group depends on two things. First, how many are out of phase? You'd struggle to hear 1 speaker out of phase against 99 in phase. Second, how far out of phase are they? A very slight delay would be harder to hear than a massive one - which corresponds to how different the means of the processes are.

    Can it be done?

    I have successfully done it - certified a batch of parallel value streams by checking the total output. I did this by blocking the study; it wasn't totally random. I ensured I had 2 subgroups of 5 from EACH value stream, and I kept these subgroups together. I was looking for a good capability metric, but you cannot just look at the metric alone - you must also look at the run chart. What are you looking for? Two things:
    1) The spread of the subgroups - are they all the same? Because the study is blocked so that each subgroup is aligned with a value stream, and not mixed, a noisy value stream will show up in the run chart. But do not shortcut the taking of 5 samples; with fewer than that, the risk of random effects is too great.
    2) Are all the clusters on the same line - that is, are the processes tuned to the same nominal? This is also visible on the run chart.

    In this way, you may avoid doing the thousands of measurements for each value stream. But if you DO see fliers, you need to do a full study on THAT value stream to determine why it is not like the others.
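    For what it's worth, here is a minimal sketch of that blocked layout in Python. The nominal, the standard deviations, and the two "problem" streams (stream 8 off nominal, stream 12 noisy) are invented purely for illustration; the point is that the blocked subgroup means and ranges expose exactly the two things described above, with 120 parts instead of 1,200.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    N_STREAMS, SUBGROUPS_PER_STREAM, SUBGROUP_SIZE = 12, 2, 5   # 120 parts total

    # Invented stream behaviour: stream 8 is off nominal, stream 12 is noisy.
    means = np.full(N_STREAMS, 10.000)
    sds = np.full(N_STREAMS, 0.010)
    means[7] = 10.060
    sds[11] = 0.040

    print("stream  subgroup    mean     range")
    for s in range(N_STREAMS):
        for g in range(SUBGROUPS_PER_STREAM):
            x = rng.normal(means[s], sds[s], SUBGROUP_SIZE)    # one blocked subgroup of 5
            # On the run chart: a large range flags a noisy stream,
            # a shifted mean flags a stream tuned to a different nominal.
            print(f"{s + 1:>5}   {g + 1:>5}   {x.mean():8.4f}  {x.max() - x.min():7.4f}")
    ```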
     
  2. Miner

    Miner Moderator Staff Member

    nc,
    The biggest concern that I would have with how you collected this data is that 2 subgroups of 5 is insufficient to verify that each process stream is stable. However, if you have independent data, say control charts on these separate streams, that show the processes are stable, that would remove that concern. You could then use the control chart data for your study.

    Assuming the streams ARE stable, and given the large number of process streams involved, I have the following recommendations. Take the data you have already collected and perform an ANOM (Analysis of Means) for the 12 value streams. The null hypothesis of the ANOM is that there are no differences between the individual process stream means and the grand mean of ALL the process streams. You can safely group all streams that lie within the upper and lower decision limits. So, in the example attached, you can group Operators 1 through 5 together into one capability study. Operators 6 & 7 would have to be analyzed separately. Once you know which value streams are different, you can work on bringing those means more in line with the rest.
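    Minitab does the ANOM directly; for readers without it, here is a rough sketch of the decision-limit calculation in Python. It uses a Bonferroni-adjusted t quantile in place of the exact ANOM critical value h(alpha, k, df), so the limits are approximate and slightly conservative, and the data below are simulated stand-ins for the 12 value streams rather than anything from this thread.

    ```python
    import numpy as np
    from scipy import stats

    def anom_limits(groups, alpha=0.05):
        """Approximate ANOM decision limits for k equal-size groups."""
        k = len(groups)
        n = len(groups[0])                          # assumes equal group sizes
        grand = np.mean(np.concatenate(groups))
        s_p = np.sqrt(np.mean([np.var(g, ddof=1) for g in groups]))   # pooled within-group sd
        df = k * (n - 1)
        h = stats.t.ppf(1 - alpha / (2 * k), df)    # Bonferroni stand-in for the exact h
        half_width = h * s_p * np.sqrt((k - 1) / (k * n))
        return grand - half_width, grand + half_width, grand

    # Simulated example: 12 value streams, 10 parts each, one stream deliberately off-centre.
    rng = np.random.default_rng(3)
    streams = [rng.normal(10.0, 0.01, 10) for _ in range(12)]
    streams[6] += 0.02

    ldl, udl, grand = anom_limits(streams)
    for i, g in enumerate(streams, 1):
        flag = "" if ldl <= g.mean() <= udl else "  <-- differs from the grand mean"
        print(f"stream {i:>2}: mean = {g.mean():.4f}{flag}")
    print(f"grand mean = {grand:.4f}, decision limits = [{ldl:.4f}, {udl:.4f}]")
    ```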

    In your example, and in the picture shown, you should not just study the capabilities individually, because the overall variation is much greater than the individual streams' variation. You also should not just combine all of them, because a mixture of dissimilar processes will probably not be normally distributed and may not be repeatable. You could perform a nonparametric capability study. Minitab has a macro (ECAPA.mac) that can do this.
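    To illustrate the general idea behind such a nonparametric study - this is not the ECAPA.mac macro itself, and the spec limits and two-stream mixture below are made up - a percentile-based sketch in the spirit of ISO 21747 replaces the 6-sigma span with the empirical 0.135th-99.865th percentile span:

    ```python
    import numpy as np

    def percentile_ppk(data, lsl, usl):
        """Percentile-based capability estimate (no normality assumption).

        Large samples are needed for the tail percentiles to be trustworthy.
        """
        data = np.asarray(data)
        p_lo, p_med, p_hi = np.percentile(data, [0.135, 50.0, 99.865])
        pp = (usl - lsl) / (p_hi - p_lo)
        ppk = min((usl - p_med) / (p_hi - p_med), (p_med - lsl) / (p_med - p_lo))
        return pp, ppk

    # Made-up example: a mixture of two streams that is clearly not normal.
    rng = np.random.default_rng(5)
    mixture = np.concatenate([rng.normal(9.97, 0.01, 500), rng.normal(10.03, 0.01, 500)])
    print("Pp = %.2f, Ppk = %.2f" % percentile_ppk(mixture, lsl=9.90, usl=10.10))
    ```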

    [Attachment: ANOM.jpg]
     
  3. ncwalker

    ncwalker Well-Known Member

    I don't disagree. I've run a lot of models, and this aligned-subgroup method works well, but it is not a guarantee. And it is by no means a replacement for historical monitoring.

    At least in automotive, we have a big issue of deciding when we can safely turn a process on.

    The SAFEST approach is to do a large study, independently, on every possible combination. Let's say we want to do a 300-piece run with 100 samples in the study - typically the bare minimum that OEMs will entertain. There are lines out there with 20 processes in parallel. So a study of this type would be 300 pieces × 20 lines = 6,000 parts for a PPAP run and 100 measurements × 20 lines = 2,000 measurements of ONE feature.

    This certification event down in the tiers often occurs before PV testing is done. There's a medium risk that these 6,000 parts could be orphaned due to a design change and not saleable. That can be a big bill to just eat.

    My abbreviated study, while not a guarantee, will give one a sense that things are reasonable. And the following caveats MUST apply:
    1) It DOES hinge on similar processes. Say you have a Tier X supplier that makes a widget, and this is a widget with a slightly different design, but the processes to make it are ones the Tier X supplier is familiar with. So he already has a good sense of what sort of variation he would see, based on prior art.
    2) It ALSO should be verified that when he actually DOES turn the process on, there is a heightened level of checking done (called GP12, Early Containment, etc., depending on the OEM) where more data is gathered. Because even a good, standard capability study can be wrong.

    The goal is not to replace good process control; rather, based on some risk assessment, it is to provide at least an examined strategy for gaining confidence in a process without generating a lot of waste at the onset.

    And as always - buyer beware. ;-) The less the errant process differs from its brothers, the harder it becomes to detect. My argument to that is - there's a confidence interval on capability that NOBODY talks about. If you're hovering near a limit, you can't guarantee a "pass" because of this.
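    To put a rough number on that interval, here is a minimal sketch using the common normal-approximation (Bissell-style) confidence interval for Cpk. The Cpk value and sample size are purely illustrative.

    ```python
    import math

    def cpk_confidence_interval(cpk_hat, n, z=1.96):
        """Approximate 95% confidence interval for an estimated Cpk."""
        se = math.sqrt(1.0 / (9.0 * n) + cpk_hat**2 / (2.0 * (n - 1)))
        return cpk_hat - z * se, cpk_hat + z * se

    # A 100-part study that "passes" with Cpk = 1.40 still has a wide band:
    print(cpk_confidence_interval(1.40, 100))   # roughly (1.19, 1.61)
    ```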

    To me, one of the biggest irksome holes (at least in automotive) is that everyone wants a crisp hurdle: one side is good, the other is bad. The reality is that there's a band. There's the clear "no way," the clear "this is fine," and then the middle ground of "too early to tell, we need to watch this" - which isn't a fail OR a pass. The problem is, everyone is afraid to start running, so lots of test parts are made and wasted. And your car costs 3X what it should.