
DOE to improve polishing quality of ophthalmic lenses: No variable response

Discussion in 'DOE - Design of Experiments' started by essegn, Sep 27, 2016.

  1. essegn

    essegn Member

    Hi there,

    I would like to improve the polishing quality of ophthalmic lenses and use a DOE for it.

    The ophthalmic lenses are polished, and at the end of the process the following needs to be checked:
    1. by sight, whether the surface is polished well (no markings from the previous smoothing operation)
    2. by sight, the surface must not contain scratches, pits or dots
    3. the surface cannot be polished too much, because of the power of the lens (the more a lens is polished, the more deviation can occur)

    The problem that I am not able to overcome is how to evaluate a response.
    There is no measurement device for it, no exact value.
    The third point could be measured, but the response is a map of surface deviations. And even with perfect results there, the first two checks still need to be done.

    I was wondering about setting up a ranking of the results:
    run 16 experiments and rank the responses from 1 to 16, with 1 for the best result and 16 for the worst one.

    Or should I split the responses into two categories? Point 3 gives a certain measured number, and the second response would be 0 or 1 (0 for a lens that is not polished well and contains scratches, spots etc., and 1 for a lens that is polished well and does not contain any scratches, spots etc.).

    Does somebody have experience with a non-measurable response?
     
  2. Miner

    Miner Moderator Staff Member

    Attribute responses are always problematic because they drive very large sample sizes in experiments. Try to move in the direction of variable data. Binary (good/bad) data, including proportions, contains the least information and consequently requires the largest sample size. Ordinal data (i.e., a rating scale) contains more information provided that the scale is repeatable and reproducible, and you can see differences on that scale. Count data may fall above or below ordinal data depending on the number of counts (e.g., scratches, pits, etc.). If the numbers are very small, it is worse than ordinal; if they are high, it could be better. The best is definitely variable data.

    Regarding your specific question, #1 as stated is binary (pass/fail). However, is there an opportunity to change this into a rating scale? #2 lends itself to count data, unless you can quantify the size of the scratches or pits. #3 sounds like it might be quantified as a deviation from the ideal surface (e.g., GD&T profile).
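
    As a rough illustration of this sample-size gap (a sketch, not part of the original reply): a statsmodels power calculation in Python, using made-up effect sizes of a 10% vs. 5% defect rate for the binary case and a one-standard-deviation shift for the variable case.

    ```python
    # Rough comparison of required sample sizes: binary response vs. variable response.
    # The effect sizes are illustrative assumptions, not numbers from the thread.
    from statsmodels.stats.power import NormalIndPower, TTestIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    alpha, power = 0.05, 0.80

    # Binary response: detect an improvement from 10% defective to 5% defective.
    h = proportion_effectsize(0.10, 0.05)  # Cohen's h for two proportions
    n_binary = NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

    # Variable response: detect a shift of one standard deviation (Cohen's d = 1).
    n_variable = TTestIndPower().solve_power(effect_size=1.0, alpha=alpha, power=power)

    print(f"per-group n, binary response:   {n_binary:.0f}")    # on the order of 200
    print(f"per-group n, variable response: {n_variable:.0f}")  # on the order of 17
    ```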
     
  3. ncwalker

    ncwalker Well-Known Member

    I have a bit of experience with this - we used to machine visually aesthetic surfaces.

    Your ranking idea is a good one. Here are my warning shots: You may NOT be able to rank them adequately. What I mean is, if 1 is the best and 16 is the worst... you begin evaluating and it's pretty clear which part is #1, #2, and #3. Then you get to #4 and the next three parts look identical. Do you call them all 4 (three in a row) and then the next part is 7? Do you just call them 4, 5, 6 even though they are equal? This may mess up your discrimination. That's why we come up with a Likert scale. Make the categories 1 through 5. I am assuming you are comparing two different methods in your experiment and you want to know if NEW is better than CURRENT. With a Likert scale, you can count the members in each category. If there is an appreciable difference, you should be able to see it. If you can't detect a difference, there are only three reasons:
    1) You can't detect the differences adequately (like a failing Gage R&R); your evaluation just can't discriminate well enough. Your reaction plan is to see what else you can do to get creative.
    2) The difference is too slight to detect. More samples are the only way out of this.
    3) There is no difference.

    You could also analyze this as a percentage of goods and bads. This typically takes a lot more samples. Your results would be something like "Process Current resulted in 3% scrap and Process New resulted in 2.6% scrap." There are discretized analogs to things like the t-test to tell you if there's a significant difference. If there isn't, you don't spend the money on the new thing.
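
    A hedged sketch of the "count the members in each category" comparison (not part of the original reply): made-up counts for CURRENT and NEW, compared with SciPy's chi-square test of independence. The thread itself does not prescribe a specific test.

    ```python
    # Compare how CURRENT and NEW distribute lenses across a 1-5 Likert scale.
    # The counts are hypothetical; a small p-value suggests the distributions differ.
    from scipy.stats import chi2_contingency

    current = [24, 36, 20, 12, 8]   # lenses per category, 1 = best ... 5 = worst
    new     = [40, 32, 16,  8, 4]

    chi2, p, dof, expected = chi2_contingency([current, new])
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    ```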
     
  4. Bev D

    Bev D Moderator Staff Member

    Just a caution here - even if you use proportion data you should not use tests for continuous data like t-tests. Categorical data - even when represented as a proportion or rate - does NOT behave the same as continuous data. I typically use just the exact binomial or exact Poisson confidence intervals for my 'statistical tests'.

    Large sample sizes may not be an issue for you, especially if the savings and improvement in Customer Experience are worth it. I have worked on many problems where large sample sizes were necessary, but it has always been worth it, and there is nothing particularly difficult about categorical data other than the sample size and picking the right analysis approach.
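
    A minimal sketch of those exact intervals (not part of the original reply), assuming SciPy and hypothetical counts; the Clopper-Pearson and Garwood formulas below are the standard exact binomial and Poisson intervals.

    ```python
    # Exact confidence intervals for categorical responses, per the suggestion
    # to avoid t-tests on proportion/rate data. Counts below are hypothetical.
    from scipy.stats import beta, chi2

    def clopper_pearson(k, n, alpha=0.05):
        """Exact (Clopper-Pearson) binomial CI for k bad lenses out of n."""
        lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lo, hi

    def poisson_exact(k, alpha=0.05):
        """Exact (Garwood) CI for a Poisson count, e.g. scratches on a batch of lenses."""
        lo = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
        hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
        return lo, hi

    # Hypothetical results: 9 bad lenses out of 300 (current) vs. 3 out of 300 (new).
    print("current:", clopper_pearson(9, 300))
    print("new:    ", clopper_pearson(3, 300))
    # Compare the two intervals; heavy overlap means more samples are needed
    # before declaring a winner.
    ```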
     
  5. ncwalker

    ncwalker Well-Known Member

    What she said. Sorry if I was unclear. The t-test is for continuous data. There is a test out there that compares proportions of discrete data. I just can't remember what it is called. :) And I'm too lazy to Google it.
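
    The test is not named in the thread; one common choice for comparing two proportions of discrete data is Fisher's exact test (a two-proportion z-test is another). A small sketch with hypothetical counts:

    ```python
    # Fisher's exact test on a 2x2 table of bad/good lenses from two processes.
    # The counts are hypothetical.
    from scipy.stats import fisher_exact

    #           bad  good
    table = [[   9,  291],    # current process
             [   3,  297]]    # new process

    oddsratio, p = fisher_exact(table)
    print(f"odds ratio = {oddsratio:.2f}, p = {p:.3f}")
    ```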
     
  6. essegn

    essegn Member

    Thank you for all the replies. I was wondering about a ranking from 1 to 5 as well, but I still have my worries about how precisely the samples could be divided into categories. I will need to try it out.
    Anyway, thank you once more for sharing your experience.
     
  7. ncwalker

    ncwalker Well-Known Member

    Something else that works for ranking based on interpretation is the phrasing of the ranking. Let's say your hobby is polishing coins and you are going to rank several silver polishes for shine. So you go through your coin collection and come up with a spread of coins of various degrees of shine. Instead of wording it as:

    Level 1 is like this coin, level 2 is like this coin ....

    You word it like this:

    Level 1 is BETWEEN these two coins, level 2 is BETWEEN these two ...

    Humans get hung up on the matching thing... If you go with the first method where each coin represents a BAND and I give you a coin halfway between levels 3 and 4, you're going to get a lot of arguing among appraisers as to whether the coin is actually a 3 or a 4. But if I tell you BETWEEN these two coins, people will have much better agreement.

    In other words, the best approach is for the samples to be the boundaries of the categories, with the scale being the regions between the samples.
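
    If the boundary samples could ever be given a numeric score (purely an assumption; in the thread the comparison is visual), the "samples are boundaries, categories are the regions between them" idea is plain binning:

    ```python
    # Map a hypothetical numeric "shine score" to a 1-5 category using four
    # boundary samples as bin edges. Everything below the first boundary is a 1,
    # everything above the last boundary is a 5.
    from bisect import bisect_left

    boundary_scores = [2.0, 4.0, 6.0, 8.0]   # hypothetical scores of the boundary samples

    def category(score):
        return bisect_left(boundary_scores, score) + 1

    print(category(1.2), category(4.7), category(9.0))   # -> 1 3 5
    ```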
     
    Bev D and Miner like this.
  8. essegn

    essegn Member

    Thank you ncwalker. That "BETWEEN" idea makes decisions much easier. :)
     
    Bev D likes this.
  9. Miner

    Miner Moderator Staff Member

    This is one of the best suggestions that I have seen in a long time.
     
  10. ncwalker

    ncwalker Well-Known Member

    Wish I could take credit for it. Someone older and wiser taught it to me years ago. But it DOES work much better.
     
  11. Bev D

    Bev D Moderator Staff Member

    I remember this as part of Shainin training in the early 90s. Pretty certain he didn't think of it. Ranking scales are pretty common in psychology and they would certainly understand the 'between' thing instead of the match thing...I would hope! And you are correct, it works a heck of a lot better!
     
  12. ncwalker

    ncwalker Well-Known Member

    I didn't get it from Shainin; I got it from a site Master Black Belt ages ago at a previous job. One of the smartest people I have met. But he WAS a Shaininite. Card carrying.