
Dr Wheeler's Article on Probability and Control Charts

Discussion in 'SPC - Statistical Process Control' started by Bev D, Nov 10, 2015.

  1. Bev D

    Bev D Moderator Staff Member

    Joined:
    Jul 30, 2015
    Messages:
    350
    Likes Received:
    349
    Trophy Points:
    62
    Location:
    Maine
  2. Bob Doering

    Bob Doering Member

    Joined:
    Jul 30, 2015
    Messages:
    36
    Likes Received:
    24
    Trophy Points:
    7
    I can see why there might be discussion of the article's points. Wheeler claims the control charts are primarily for getting the process to a satisfactory state. However, that should already be the ongoing state of a process after its initial development - in which case the charts are most likely monitoring, awaiting special causes. If you buy into the concept that control comes after capability, then at that point the chart's day-to-day use is monitoring with the Western Electric rules, et al. I am not saying that is a bad thing, but there seems to be a little too much generalization - probably due to the brevity of the article - overemphasizing the tool as process improvement versus monitoring. Of course, monitoring is process improvement versus not monitoring, if one wants to get credit for that...
     
  3. Bev D

    Bev D Moderator Staff Member

    Well, Wheeler is just stating Shewhart's actual intent. Shewhart did in fact intend control charts to be a method of detecting non-homogeneity and generating clues about that non-homogeneity so the engineer could improve the process. His premise was that processes start out neither capable nor in a state of statistical control, and that is the engineer's starting point. Shewhart also intended the limits to stop the tampering that results in excessive variation - this simple use of the control chart automatically reduces the variation in a process. Shewhart's lesser intention was then to use the control chart as a detector of special causes after capability and stability had been established.
    Wheeler's (and Neave's) article isn't opinion; it's based on what Shewhart actually stated and wrote. Unfortunately too many hacks have written a revisionist history of SPC (starting with Pearson in 1935), and both Shewhart and Deming are difficult to read, but still...I certainly didn't interpret the article as stating that "control comes after capability", although some processes are so bad (incapable and unstable) that once you improve them they become both capable and stable. Shewhart did intend the control chart to be used in these cases first to make the process capable, and then to maintain control and capability.

    I suspect the reason the article is closed for commenting is to block the back and forth with our old friend artichoke and Dr. Levinson that has been occurring the last several months. (at least for this article)
     
  4. Bob Doering

    Bob Doering Member

    I got "control comes after capability" from this passage: "From his perspective, a major purpose for creating the chart was to provide help to get the process into a 'satisfactory state,' which you might then be content to monitor (if not persuaded by arguments for the need of continual improvement)." Shewhart did some great work - especially for the technology of the time. But it is not biblical: further knowledge has clarified the extent of its usefulness and stretched the concept with more tools beyond the original ones, to increase the benefits to the practitioner. The worst thing I ever heard about SPC in my presentations was that only Shewhart's charts are SPC. Nope...no way...they are a subset of the concept.
     
  5. ncwalker

    ncwalker Well-Known Member

    Joined:
    Sep 21, 2015
    Messages:
    216
    Likes Received:
    127
    Trophy Points:
    42
    Location:
    North Carolina
    I always looked at what Shewhart did as a necessary evil due to the calculation limits of the time. Consider standard deviation: it's not a terribly difficult thing to calculate, but let me give you a piece of paper and a pencil and let's find out how long it takes you. It's pretty laborious. I always viewed Shewhart's approach as a good compromise - yes, we are estimating sigma with a range, but in running a shop with a "math guy" on staff with a pad and pencil, using Shewhart's simplifying assumption means my "math guy" can check 10 times as many processes easily. A trade I would be willing to take in light of the cost. Fast forward to today, with computers and Excel and all the goodies that take the grunt work out of the math.... I never understood and still don't understand why we don't just plot the mean of the last n readings along with +/- 3 standard deviations, with n being a rolling number of samples. You move along, constantly recalculating mean and sigma, and plot this against the specification limits. It's easy to do (now) and easy to understand. A process going out of control is obvious when you do this.
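    The rolling scheme ncwalker describes could be sketched like this (the data and window size here are invented for illustration; this is his proposal, not Shewhart's method):

    ```python
    # Rolling-limits sketch: recompute mean and sigma over the last n
    # readings and report mean +/- 3*sigma for each point.
    import statistics

    def rolling_limits(readings, n=25):
        """Return (mean, lcl, ucl) for each point once n readings are available."""
        out = []
        for i in range(n, len(readings) + 1):
            window = readings[i - n:i]
            m = statistics.mean(window)
            s = statistics.stdev(window)
            out.append((m, m - 3 * s, m + 3 * s))
        return out

    data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
    for m, lcl, ucl in rolling_limits(data, n=5):
        print(f"mean={m:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
    ```

    Plotting these limits against the spec limits would give the picture ncwalker describes; Bev D's reply below explains why this differs from a Shewhart chart.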
     
  6. Bev D

    Bev D Moderator Staff Member

    ncwalker - I suggest you read Donald Wheeler. He explains all of your questions quite well. Wheeler isn't publishing opinions; he is publishing mathematical and scientific facts...Shewhart didn't take the subgroup approach because of 'math limitations' (although in the day a subgroup range was easier to calculate than the subgroup standard deviation). He took his approach because his control charts are intended to detect non-homogeneity. Your approach of just using the individual values will not detect non-homogeneity. Additionally, it will self-adjust as the process increases in variation, creating limits that sit outside the data stream. I will try to post a few select articles that help explain all of this, along with some of my own material that also demonstrates it.
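    Bev D's point can be demonstrated numerically (the data below is made up: a stable stream with a 2-unit shift in the middle). A global standard deviation absorbs the shift and inflates the limits, while a Shewhart-style within-process estimate from the average moving range does not:

    ```python
    # Non-homogeneous data: a 2-unit shift mid-stream.
    import statistics

    stable = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0]
    shifted = [12.0, 12.1, 11.9, 12.2, 12.0, 11.8, 12.1, 12.0]
    data = stable + shifted

    # Naive estimate: stdev of all the data (mixes within and between variation,
    # so the shift itself widens the limits and hides itself)
    global_sigma = statistics.stdev(data)

    # Shewhart-style estimate for an individuals chart:
    # average moving range divided by d2 (d2 = 1.128 for n = 2)
    mr = [abs(b - a) for a, b in zip(data, data[1:])]
    local_sigma = statistics.mean(mr) / 1.128

    print(f"global sigma:       {global_sigma:.3f}")  # inflated by the shift
    print(f"moving-range sigma: {local_sigma:.3f}")   # reflects only local variation
    ```

    With limits built from the moving-range sigma, the shifted points fall well outside 3 sigma; with limits built from the global sigma, they do not.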
     
  7. ncwalker

    ncwalker Well-Known Member

    It's OK. I can do the grunt work. Look at this... your opinion, valuable chart or not?
     

    Attached File(s):

  8. Bev D

    Bev D Moderator Staff Member

    ncwalker - I looked at the chart and honestly I can't say that it's very useful for several reasons (unless you don't want to detect any but the absolutely worst out-of-control conditions)

    First, let's start with this: when we calculate the "3 sigma" limits from either all of the data points (the standard deviation of all the data, so that we have both within- and between-subgroup variation) or the standard deviation of the subgroup averages, we 'inflate' the limits, making them less sensitive to detecting OOC conditions. When we use rolling averages this inflation gets even worse. On top of that, you've added a different control limit for each subgroup based on its variation and average. If you look at the chart you will see that as the subgroup variation gets bigger or smaller, the limits get even wider around that subgroup. Only a catastrophic event could signal an OOC condition. Your data is hugging the center line and the limits are incredibly far away. Also, the calculations you have used are very complex and technically incorrect in any statistical sense. Sorry to be so blunt. :( But you are putting a lot of effort into a control scheme that can't control anything. If you were to simply plot your raw data against the spec limits you would see - very clearly - that this process spans the entire spec range and there are many individual out-of-control points. If you plot a simple Xbar-R chart you will also see many out-of-control points.

    The intent of a process control chart is to reveal the patterns, shifts and drifts of the process. The chart you have presented does not do this. I wonder why you feel the need to avoid the standard control chart methods? They have been proven to be so powerful (when used properly) for so many years...sometimes the 'old ways' are the best ways.
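    For reference, the standard Xbar-R limits Bev D is recommending look like this (a minimal sketch using the usual control-chart constants for subgroups of n = 5; the subgroup data is invented):

    ```python
    # Xbar-R chart limits from subgroup averages and ranges.
    A2, D3, D4 = 0.577, 0.0, 2.114  # standard constants for subgroup size n = 5

    subgroups = [
        [10.1, 9.9, 10.0, 10.2, 9.8],
        [10.0, 10.1, 9.9, 10.0, 10.2],
        [9.8, 10.0, 10.1, 9.9, 10.0],
    ]

    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]

    xbarbar = sum(xbars) / len(xbars)   # grand average (center line of Xbar chart)
    rbar = sum(ranges) / len(ranges)    # average range (center line of R chart)

    ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
    ucl_r, lcl_r = D4 * rbar, D3 * rbar

    print(f"Xbar chart: LCL={lcl_x:.3f}  CL={xbarbar:.3f}  UCL={ucl_x:.3f}")
    print(f"R chart:    LCL={lcl_r:.3f}  CL={rbar:.3f}  UCL={ucl_r:.3f}")
    ```

    Note the limits come from the within-subgroup ranges via A2, not from the standard deviation of all the data, which is exactly the distinction at issue in this thread.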
     
  9. ncwalker

    ncwalker Well-Known Member

    You don't have to apologize for being blunt. I'm not motivated by "look at this neat thing", where my pride will be hurt if there is a flaw; I am motivated by getting a good tool out there. I do it old school as well. Here is the same tool, but doing things Xbar-R. I also have one that is Xbar-s.

    I didn't give you "instructions", so you may not have sorted out all the things you can do. Look at the tab called CtrlCht. Up at the top (on both versions), near cell H2, is a spin button. Clicking this will change the output screen to look at the different test data. If you look at Dim06, this is a simulated sawtooth (tool wear). Then, to get a larger sampling, set cell B10 (Last n rows) to 150, either by typing or using the spin button, so you see some cycles.

    On the Xbar-R version, the UCL/LCL are pretty wide. I have the "Rolling Limits" option set to yes, which recalculates them as you go. You can set this to "No" and it will use static limits, which are entered at the top of the Dim06 tab - like when you do your initial capability study and set your control limits (and forget them) when you start up your process. Which I don't like, because in a tool wear situation the number of parts run during the capability study is often not enough cycles to start showing tool wear. Which means you get fictitiously tight control limits. So to me, that deployment of UCL/LCL means you're going to be changing tools much more frequently than you have to. Which is why I added (and it may not be right) the rolling limit option. But with Xbar-R, the way the math seems to work, it will NOT detect a sudden jog - which should raise alarms.

    The running sigma WILL. Let me address the averaging a bit. On both charts, the solid blue line in center is the average of the readings taken at the time of subgroup generation. It is NOT a rolling average, it is a discrete average at that time. And the Range chart is the discrete range at that time. The raw data table lets you have between 2 and 6 measurements for the subgroup. (If you only have one, the math poops out).

    The rolling average comes in ONLY in the calculation of Sigma. Both charts, in the vicinity of L10 have a "Rolling" box. What you put in here determines how many rows back you "look" in determining Sigma. So if it is set to 30, it looks back on the last 30 rows up until the current. Which will be anywhere between n=60 and n=180 because you can have between 2 and 6 measurements for each row. If you want sigma to be just the current row, you can set this to 1.
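    The rolling-sigma lookback described above could be sketched as follows (the row data and window size are hypothetical, not taken from the attached spreadsheet): pool the raw readings from the last k rows, each row holding 2 to 6 measurements, and take their standard deviation.

    ```python
    # Rolling sigma over the last k rows of raw subgroup readings.
    import statistics

    def rolling_sigma(rows, k):
        """Sigma of the readings pooled from the last k rows (or fewer at the start)."""
        pooled = [x for row in rows[-k:] for x in row]
        return statistics.stdev(pooled)

    rows = [
        [10.1, 9.9],              # rows may hold 2 to 6 measurements each
        [10.0, 10.2, 9.8],
        [10.1, 10.0, 9.9, 10.2],
    ]
    print(f"{rolling_sigma(rows, k=2):.3f}")
    ```

    With k = 30 rows of 2 to 6 readings each, this pools anywhere from n = 60 to n = 180 values, matching the figures in the post; setting k = 1 reduces it to the current row only.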

    I will take some more time to digest what you said.
     

    Attached File(s):
