
How many samples needed for attribute gauge study?

Discussion in 'Gage R&R and MSA - Measurement Systems Analysis' started by Tsietsi Zondo, Aug 19, 2015.

  1. Tsietsi Zondo

    Tsietsi Zondo New Member

    Joined:
    Aug 19, 2015
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    1
    How many samples are needed for conducting attribute gauge study?
     
  2. Nikki

    Nikki Well-Known Member

    Joined:
    Jul 31, 2015
    Messages:
    268
    Likes Received:
    141
    Trophy Points:
    42
    Location:
    Maine
    I am currently setting up a Gage R&R study for the testing of plastics.

    We are using three different technicians, and 15 samples per test.

    15 may be excessive, but in this case, the final product is a medical device.

    I think the number of your samples would be based on a risk level / importance level of the end resulting product.

    Just my two cents.
     
    Gensidhay likes this.
  3. Emmyd

    Emmyd Member

    Joined:
    Jul 31, 2015
    Messages:
    38
    Likes Received:
    38
    Trophy Points:
    17
    Location:
    Tennessee
For AIAG, it is 50 pieces (3 operators, 3 trials each) for an attribute gage. The kappa score should be over 0.70 to be acceptable.
     
    Gensidhay likes this.
  4. Ravi Khare

    Ravi Khare Member

    Joined:
    Jul 30, 2015
    Messages:
    24
    Likes Received:
    27
    Trophy Points:
    12
    Location:
    Pune, India
    The example in the AIAG MSA reference manual (4th Edition) does show 50 samples. This is described as being based on a Ppk estimate of 0.5 for the underlying process. However, this is not necessarily the sample size recommended under all circumstances.

    On the matter of sample size, I quote the AIAG reference manual (page 140)

    Further, an example (page 132) gives guidance on the regions from which samples should be selected. Since the maximum risk of incorrect assessment lies in the "gray" areas around the specification limits for a limit attribute gage, it recommends selecting 25% of the samples at or close to the lower specification limit and 25% of the samples at or close to the upper specification limit.

    If the inherent capability of the process producing the samples is good, a small random sample may not yield enough samples in the gray zone. Thus, as the capability of the process improves, the required random sample for the attribute study becomes larger.
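    To illustrate the 25% / 25% / 50% split described above, here is a minimal sketch in Python. The helper name, the part list format, and in particular the gray-zone width (a quarter of the tolerance on either side of each limit) are my own illustrative assumptions, not definitions from the AIAG manual:

    ```python
    def select_attribute_study_samples(parts, lsl, usl, n, gray_frac=0.25):
        """Pick n parts for an attribute study: roughly gray_frac of the
        sample near each spec limit, the remainder from the middle.
        `parts` is a list of (part_id, measured_value) tuples.
        The gray-zone width (0.25 * tolerance) is an assumption for
        illustration, not an AIAG definition."""
        band = 0.25 * (usl - lsl)  # width of each gray zone (assumption)
        near_lsl = [p for p in parts if lsl - band <= p[1] <= lsl + band]
        near_usl = [p for p in parts if usl - band <= p[1] <= usl + band]
        middle = [p for p in parts if p not in near_lsl and p not in near_usl]
        k = int(n * gray_frac)
        return near_lsl[:k] + near_usl[:k] + middle[:n - 2 * k]

    # Hypothetical parts with LSL = 10, USL = 20:
    parts = list(enumerate([8, 9, 11, 14, 15, 16, 17, 18, 19, 21]))
    sample = select_attribute_study_samples(parts, 10, 20, 8)
    print(len(sample))  # 8 parts: 2 near LSL, 2 near USL, 4 from the middle
    ```

    As the thread notes, with a highly capable process very few randomly drawn parts will land in the gray zones, so the candidate pool has to grow (or parts must be made deliberately near the limits) to fill the gray-zone quotas.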

    The standard states that, as a general rule of thumb, a Cohen's kappa value greater than 0.75 indicates good to excellent agreement, and values less than 0.4 indicate poor agreement (page 137).
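    For reference, Cohen's kappa for two appraisers can be computed from the observed agreement and the chance-expected agreement. A minimal self-contained sketch (the appraiser data below is made up for illustration):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two appraisers rating the same parts:
        kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
        agreement and p_e the agreement expected by chance from each
        appraiser's marginal rates."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
                  for c in set(rater_a) | set(rater_b))
        return (p_o - p_e) / (1 - p_e)

    # Two appraisers judging 10 parts: 1 = accept, 0 = reject (made-up data)
    a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
    print(round(cohens_kappa(a, b), 3))  # 0.524 - below the 0.75 threshold
    ```

    Note the appraisers agree on 80% of parts here, yet kappa is only 0.52 once chance agreement is discounted, which is why a raw percent-agreement figure can flatter a marginal gage.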
     
    Emmyd likes this.
  5. Bev D

    Bev D Moderator Staff Member

    Joined:
    Jul 30, 2015
    Messages:
    605
    Likes Received:
    663
    Trophy Points:
    92
    Location:
    Maine
    AIAG is correct about taking more samples from the 'gray zone' - this is where the study is most informative. The only real reason to include parts that are not in the gray zone is to help prevent 'memory' bias in the study. The study should consist of two phases: an exploratory or 'development' phase and a validation phase. First we should analyze and improve (if necessary) the gray zone, and then perform the validation study of the system's performance using a distribution of rejectable and acceptable parts that is representative of the actual (or expected) distribution, at a sample size that allows for a decent number of parts in the gray zone...

    Where AIAG misses the mark (IMO and in my experience) is that Cohen's kappa is not the best statistic. And certainly a score of 0.75 is horrible. I don't know why they think this would be acceptable - maybe they are grading on a curve? McNemar's test is far better as it focuses on the area of disagreement. A second point: 3 repeated tests on a part add little to no value over the simpler 2 tests per part (and with visual inspections, going to 3 tests, even with a somewhat larger part set, has a high potential for bias where the inspector remembers the 'odd' parts and their responses from the first 2 tests). On the other hand, statistical tests of significance are rarely that informative anyway. We are far better off calculating the false reject and false accept rates, comparing them to the requirements, and then fixing or accepting the attribute system based on its ability to meet those requirements. The requirements should be set based on the consequences of a false reject and a false accept, and they will differ for different customers, parts and characteristics...

    I have attached a document I wrote for my organization to use for MSA - it includes the study designs, mathematics and references for 'attribute' studies as well as continuous-data studies. I hope you find it informative...
     

    Attached File(s):

    Robert S., Stijloor and Ravi Khare like this.