
Which tool to use?

Discussion in '5S, 5Why, 8D, TRIZ, SIPOC, RCA, Shainin Methods...' started by Colin Pitman, Sep 11, 2020.

  1. Colin Pitman

    Colin Pitman Member

    Joined:
    Sep 3, 2017
    Messages:
    35
    Likes Received:
    14
    Trophy Points:
    7
    Location:
    West Sussex, UK
    We have a complicated machine that we designed, developed, and built in-house to use in production of glass vials. There are all sorts of moving parts, lasers, sensors etc. that turn short lengths of glass tubes (the input), into closed cylindrical envelopes - vials (the output).

    For the most part it works very well; however, we frequently run into process variability issues. We have basic SPC in place, which catches this, and we adjust one or more of the machine's many parameters, re-calibrate sensors, etc. until we get conforming product and off it goes again.

    We would like to reduce downtime and make these parameter changes less often. To do so we need to understand which variables contribute the most variance, and reduce them as much as possible. The question is, how is this usually done?

    The idea I have is to list every conceivable variable in the system, work out its min and max error values, and assess how much each combination of those minima and maxima affects the end product. We could then focus on fixing the variables that produce the largest end-product variability. However, if I identify, say, 50 variables, you can quickly see that evaluating all possible combinations of minima and maxima becomes an impossible task. If instead I evaluate all variables at their respective minimum and maximum values simultaneously, this kind of addition of error is unrealistic and wouldn't provide usable data - or would it?
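    A quick sanity check on the combinatorics (assuming just two levels, min and max, per variable):

    ```python
    # Each of 50 variables takes 2 levels (min/max), so a full
    # factorial evaluation needs 2**50 combinations - over 10**15 runs.
    combinations = 2 ** 50
    print(combinations)
    ```

    Even at one evaluation per second, that would take tens of millions of years, which is why screening designs exist.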

    In any case is there an established tool/methodology for doing something like this?

    I want to call it "Process Variability Mapping", but nothing really comes out of it if I google that :rolleyes:. Maybe we can coin it in the future after we've worked this out :D
     
  2. Miner

    Miner Moderator Staff Member

    Joined:
    Jul 30, 2015
    Messages:
    392
    Likes Received:
    295
    Trophy Points:
    62
    Location:
    Greater Milwaukee USA
    Do you understand the impact mathematically, or will you need to develop this knowledge empirically through experimentation?

    If you understand it mathematically, I would use a Monte Carlo simulation followed by a sensitivity analysis.
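    To illustrate the idea, here is a minimal Monte Carlo sensitivity sketch in Python. The input distributions and the transfer function are entirely hypothetical placeholders - you would substitute your own model of the machine:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical input distributions (means/SDs invented for illustration)
    flame_temp = rng.normal(1500, 20, n)    # degrees C
    feed_rate  = rng.normal(5.0, 0.10, n)   # mm/s
    wall_thk   = rng.normal(1.2, 0.02, n)   # mm

    # Hypothetical linear transfer function for the output characteristic
    vial_length = (0.01 * flame_temp - 2.0 * feed_rate + 5.0 * wall_thk
                   + rng.normal(0, 0.05, n))  # residual noise

    # Sensitivity: squared correlation approximates each input's share of
    # output variance when inputs are independent and the model is linear
    for name, x in [("flame_temp", flame_temp),
                    ("feed_rate", feed_rate),
                    ("wall_thk", wall_thk)]:
        r = np.corrcoef(x, vial_length)[0, 1]
        print(f"{name}: {r**2:.1%} of output variance")
    ```

    The factors with the largest variance share are the ones worth controlling first; commercial tools (Crystal Ball, @RISK, Minitab Workspace) do the same thing with more bookkeeping.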

    If you must experiment, you have a few options. The first is to collect observational data over time then analyze it using regression analysis. This is less intrusive, but could take a long time if there is little variation in the inputs until something goes wrong.
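    For the observational route, a minimal regression sketch with NumPy's least squares (the logged parameters and coefficients here are synthetic stand-ins for real machine logs):

    ```python
    import numpy as np

    # Suppose each row of X is one logged snapshot of machine parameters
    # and y is the measured vial characteristic at that time (synthetic here)
    rng = np.random.default_rng(0)
    n = 500
    X = rng.normal(size=(n, 3))              # 3 logged parameters, standardized
    true_coefs = np.array([0.8, 0.0, -0.3])  # only two actually matter
    y = X @ true_coefs + rng.normal(0, 0.2, n)

    # Ordinary least squares with an intercept column
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept:", beta[0])
    print("coefficients:", beta[1:])
    ```

    The fitted coefficients recover which parameters drive the output and which are noise - but only if the logs actually capture enough variation in the inputs.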

    The other option is to design a screening experiment to identify the key input factors. A definitive screening design for 48 factors will require 97 runs, while a 47-factor Plackett-Burman design will require 48 runs. As you can see, this will be more involved, but it will identify the key input factors more quickly.
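    As a small-scale illustration of what a Plackett-Burman layout looks like, here is an 8-run design for up to 7 two-level factors, built from a Sylvester Hadamard matrix rather than any particular DOE package (real software would generate the 48-run version the same way):

    ```python
    import numpy as np

    # Build an 8x8 Hadamard matrix by the Sylvester construction
    h2 = np.array([[1, 1], [1, -1]])
    h8 = np.kron(h2, np.kron(h2, h2))

    # Dropping the all-ones first column leaves an 8-run, 7-factor
    # two-level screening design (the N=8 Plackett-Burman design)
    design = h8[:, 1:]
    print(design)

    # Orthogonality check: factor columns are mutually uncorrelated,
    # so main effects can be estimated independently
    assert np.allclose(design.T @ design, 8 * np.eye(7))
    ```

    Each row is one experimental run; +1/-1 are the high/low settings for each factor. Orthogonality is what lets 7 main effects be screened in only 8 runs.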
     
  3. Colin Pitman

    Colin Pitman Member

    We don't understand anything mathematically yet, so this knowledge will have to be developed.

    A lot of the smaller parameter settings/changes are not recorded - adjustments might be made but not logged - so we don't really have any history. Capturing that data needs to be the first step before any useful analysis. Your suggestions are appreciated; I'll have to research the methods as I've never heard of them :confused:
     
  4. Gejmet

    Gejmet Member

    Joined:
    Jul 23, 2019
    Messages:
    12
    Likes Received:
    6
    Trophy Points:
    2
    Hi Colin,

    I attach a link to a paper which describes how natural variation really works. There is no magic bullet here, but essentially in any process you have controlling factors and otherwise. If the controlling factors are indeed doing their job, they determine the location of the process - its accuracy. A classic example might be tooling: we may not be happy about the accuracy, but we know that tooling plays a major role in controlling it.

    Then we have the uncontrolled factors, which affect the spread of variation around that location. Many of these interact to produce routine variation that hardly ever changes, while some dominate and affect variation in an unpredictable and undesirable way.

    The trick to any data analysis is firstly to understand that variation comes in two flavours, routine and exceptional, and then to know how to tell the difference.

    At the moment it seems as though you are applying process behaviour charts to the output, but it's not clear, for example, which flavour of variation you are seeing.
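    The routine/exceptional distinction can be sketched with an XmR (individuals and moving range) chart calculation, using the standard constants; the measurement values below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical sequence of individual vial measurements (e.g. length, mm)
    x = np.array([50.1, 50.3, 49.9, 50.0, 50.2, 50.4, 49.8, 50.1,
                  50.0, 50.3, 49.9, 50.2])

    mr = np.abs(np.diff(x))          # moving ranges between successive points
    xbar, mr_bar = x.mean(), mr.mean()

    # Standard XmR chart constants: 2.66 for the individuals limits,
    # 3.268 for the moving-range upper limit
    unpl = xbar + 2.66 * mr_bar      # upper natural process limit
    lnpl = xbar - 2.66 * mr_bar      # lower natural process limit
    url  = 3.268 * mr_bar            # upper range limit

    # Points outside the natural process limits signal exceptional
    # (assignable-cause) variation; points inside are routine variation
    exceptional = (x > unpl) | (x < lnpl)
    print(f"limits: [{lnpl:.2f}, {unpl:.2f}], "
          f"exceptional points: {exceptional.sum()}")
    ```

    With this invented data every point falls inside the limits, i.e. only routine variation - the signal to chase assignable causes is a point outside them.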

    In the end you will assemble your controlled and uncontrolled variation sources and decide among yourselves, as a team, which you feel are the dominant ones. After this you will need to collect data on these sources, see for yourself what the process thinks of your choices, and then work to eliminate the excessive variation.

    Hope this helps somewhat.

    https://www.spcpress.com/pdf/DJW294.pdf
     
