
Reduce Risk As Far As Possible (AFAP)

Discussion in 'ISO 14971 - Medical Devices Risk Management' started by Sam Lazzara, Mar 3, 2016.

  1. Peter Selvey

    Peter Selvey Member

    Joined:
    Nov 9, 2015
    Messages:
    13
    Likes Received:
    8
    Trophy Points:
    2
    Yes and no. The underlying nature of probabilities is that the individual risk from most situations literally falls off a cliff beyond a certain point. This means that the underlying cost (or risk) associated with the use of resources doesn't really influence the minimum point (graphically this is much easier to see). Instead, in most cases it is the characteristics of the individual situation that drive the minimum point.

    It turns out that in most cases the absolute value of risk is not critical to know, only the turning point (minimum), which, being based on the technical characteristics of the situation, can be derived in a relatively impartial (objective) sense. Two organizations on different sides of the world should reach much the same conclusion. This is rather fortunate, since our attempts to estimate risk and compare it to acceptable risk criteria are rather woeful - itself another interesting mathematical exploration, but for some other time.

    The cliff model is a generalization and not all situations fit it. The theory produces 4 different buckets into which the analysis might fall, 3 of which are cliff based and one which is not. This last one might be influenced more by the individual organization's cost of resources. But this last bucket also requires a lot more discussion in the risk management file (analysis), and in some cases public disclosure would be reasonable (e.g. residual risk, not state of the art, declared in the operation manual). But only if the regulators have some guts :)
     
    Last edited: Mar 8, 2016
  2. MarkMeer

    MarkMeer Well-Known Member

    Joined:
    Dec 3, 2015
    Messages:
    138
    Likes Received:
    62
    Trophy Points:
    27
    Forgive my obstinacy, but I'm obviously not completely understanding the position...
    How is there an "ABSOLUTE, single minimum risk point", and how, in practice, could anyone "fully and formally consider device availability"?
    Graphically, I understand the approach, but I think that the "availability risk curve" is very difficult to map, and will vary from company to company depending on their resources and market factors.

    It's not like it's "literally" falling off a cliff in the sense of a step-function. Graphically, it would look exponential or asymptotic, right? Which means there is always room to improve.
    (I've attached a quick sketch to illustrate my concern).

    So we're ultimately left with the same dilemma: who determines the minimum? What authority do regulators have to decide when a determined minimum is appropriate?
    How, in an "as low as possible" framework, can manufacturers reliably provide acceptable evidence to the satisfaction of regulators?
     

    Attached File(s): (sketch referenced above)

    Last edited: Mar 8, 2016
  3. lama01

    lama01 New Member

    Joined:
    Mar 8, 2016
    Messages:
    1
    Likes Received:
    0
    Trophy Points:
    1
    Last edited by a moderator: Mar 9, 2016
  4. Sam Lazzara

    Sam Lazzara Member

    Joined:
    Jan 16, 2016
    Messages:
    9
    Likes Received:
    3
    Trophy Points:
    2
    Location:
    San Francisco CA USA
    Thank you lama01. That link brought me to a draft of the European MDR dated 2012-09-26.
    Through other resources I found a draft from 2015-09.
    Please see this link to access them.

    I am not patient enough to try to find them via the EUR-LEX website.
     
  5. Peter Selvey

    Peter Selvey Member

    Joined:
    Nov 9, 2015
    Messages:
    13
    Likes Received:
    8
    Trophy Points:
    2
    Apologies to Sam, we might have hijacked this conversation; I suspect you were looking for a simple answer to the auditor's question. In that context: I assume the auditor has been prompted (probably through training) to highlight Annex ZA of ISO 14971, which itself highlights the AFAP text in the directive. By the way, AFAP is not the only search phrase; there are several other places referring to "minimizing" risk which should be included in the discussion.

    Legally these texts are toothless for a few reasons. Principally, the directives only require you to document your solutions to the essential requirements if you don't apply a harmonized standard. The NB-enforced ER checklists are bogus and never required by the directive itself. Also, "harmonized standards" are defined, in another directive, as product standards, which must contain objective test methods and criteria, i.e. management system standards like ISO 14971 and ISO 13485 are not relevant. As long as you comply with all applicable product standards, you are afforded a legal "presumption of conformity" which is absolute, even if those standards don't address a particular issue or risk. Before the howls of objection arise, keep in mind the MDD was originally designed as a "clearance for sale" function, with other regulations designed to protect the patient/operator etc. in the event of negligent design, irrespective of CE marking (much the same as the FDA and Health Canada - they don't "approve" your product, just allow you to sell it).

    So, in effect, the legal situation is that AFAP (or minimized risk) criteria only applies if you deliberately overrule a specific, well defined requirement in a harmonized (product) standard. For example, if there was a product standard that specifically said the item should be sterile, and you decided to overrule that, you would not be able to use your own criteria for acceptable risk. Rather you would need to demonstrate that the risk is reduced "as far as possible". And to be honest, that makes a lot of sense.

    This conclusion is based on the text of the actual MDD, which needs careful study, and on the history back to the late 1980s, to understand. The area is muddied by the NBs, who have capitalized on the gap between what someone would reasonably expect and what is actually written in the directive, and then added their own well-intentioned but ultimately misguided desires. Fortunately some support for the conclusion arrived in the form of the proposed new regulation (Procedure 2012/0266/COD, COM (2012) 542), which has attempted to fix some of these "loopholes", e.g. an ER checklist - search for the word "checklist" to find the relevant text - and management system standards have been incorporated into the presumption of conformity (see Article 6.1, second paragraph).

    How to address the auditor? It's messy because most NBs don't train their auditors on the law. For example, Annex II, section 3.3 says the NB audits the manufacturer for implementation of Section 3.2. But most NBs don't stick to 3.2; rather, they go for the full directive. Yet in doing so they usually take bits out of context. In the current directive, the references to "AFAP" and "minimized" risk need to be read in the context of the "presumption of conformity" as outlined in Article 5. A careful reading of that article shows that if harmonized standards fail to meet the "AFAP" criteria, action is required from standards committees, not from an individual manufacturer. An entirely reasonable conclusion, drafted back in 1989, more than 25 years ago, and totally ignored by our modern NBs.
     
  6. Sam Lazzara

    Sam Lazzara Member

    Joined:
    Jan 16, 2016
    Messages:
    9
    Likes Received:
    3
    Trophy Points:
    2
    Location:
    San Francisco CA USA
    Wow! Well-reasoned response, Peter! Thank you.

    Somewhat unfortunately for my client the directly applicable standard (which does not mandate sterility) is not on the harmonised list currently.

    I sure do hope they clarify all this in the new European MDR. We need practical definitions for AFAP and Minimized.

    I have heard from some European colleagues who think there is light at the end of the AFAP tunnel - that this will be resolved before the MDR is published. I won't believe it until I see it.

    In the meantime, the current draft somewhat tempers AFAP by saying this under Essential Requirement 1:
    The requirements in this annex to reduce risks as far as possible mean reduce risks as far as possible without adversely affecting the risk benefit ratio.

     
  7. Peter Selvey

    Peter Selvey Member

    Joined:
    Nov 9, 2015
    Messages:
    13
    Likes Received:
    8
    Trophy Points:
    2
    OK, the legal situation is settled; now let's get to the interesting stuff - the real situation. Many thanks to MarkMeer for challenging things; it helps to test the theory, and to be honest some weak points are showing up - I need to think some more.

    In principle I am comparing minimum risk and acceptable risk under the following graphical concepts:
    [Attached image: upload_2016-3-10_0-51-33.png]

    Acceptable risk assumes risk will fall as resources are applied. To avoid unlimited resources, we need to define a limit (acceptable risk). The problem with acceptable risk is that we can't measure risk - it's a provable fact, something akin to Heisenberg's uncertainty principle. Specifically, it requires an enormous amount of effort to get a plausible estimate of risk. While our instinct to measure and control risk seems logical, the process is swamped by uncertainty - it's a bit like trying to control the temperature of a room with a thermometer that has an accuracy of +/-100°C.

    Minimum risk assumes that there is always a "risk cost" associated with resources. The adage "better safe than sorry" is not true - improving safety always comes at some increase in risk. While typically the incremental risk is small, the principle of diminishing returns means that a risk minimum will always exist - eventually a point is reached where the tiny improvement in risk no longer outweighs the risk cost.
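    To make that concrete, here is a minimal numeric sketch of such a minimum. The particular curves (an exponential decay of risk with resources and a linear risk cost) are my own assumptions, chosen purely for illustration:

```python
import math

def total_risk(x, r0=1.0, tau=1.0, c=0.05):
    """Residual risk decays exponentially with resources x, while the
    'risk cost' of spending those resources grows linearly."""
    return r0 * math.exp(-x / tau) + c * x

def minimum_point(r0=1.0, tau=1.0, c=0.05):
    """Closed form: d/dx [r0*exp(-x/tau) + c*x] = 0  =>  x* = tau*ln(r0/(c*tau))."""
    return tau * math.log(r0 / (c * tau))

for c in (0.02, 0.05, 0.10):                 # three assumed cost slopes
    x = minimum_point(c=c)
    print(f"cost slope {c:.2f}: minimum at x* = {x:.2f}, total risk {total_risk(x, c=c):.3f}")
# A minimum always exists, but note how far it moves as the assumed cost slope changes.
```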

    Unfortunately, the attainment of a risk minimum does not help - it is just academic. Ultimately, our target is to influence decisions and outcomes. At this point, the risk minimum theory, while possibly correct, is just as useless as acceptable risk. Both should be put in the bin.

    The turning point comes from a closer look at probabilities. In a 2012 episode of the UK quiz show "QI", Stephen Fry shuffled a pack of 52 cards, placed the pack in front of him and declared that the particular order of cards in that pack had never been seen before in the history of man(woman)kind. The underlying numbers are amazing - the number of possible permutations is 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000.
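    (For anyone who wants to check that figure, it is simply 52 factorial:)

```python
import math

# 52! - the number of distinct orderings of a standard 52-card deck
print(math.factorial(52))
# 80658175170943878571660636856403766975289505440883277824000000000000 (~8.07e67)
```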

    To the risk management engineer brought up on the ISO 14971 concept that the "use of a medical device entails some degree of risk", the idea that risk can be eliminated is something that does not compute. But the statistics reveal that it is possible to minimize and even eliminate risk.

    My Toyota Wish recently passed 100,000 km. Despite the possibility that any one of the tens of thousands of components that work together to make the engine start could fail, they don't. The car starts. Every morning. You don't get that by applying ISO 14971 acceptable risk criteria. You get that by understanding the cliff, or more specifically knowing where the cliff is, how to keep your design away from the cliff, and only accepting a few cases that operate near the cliff.

    This tendency of probabilities to quickly fall to negligible levels (not just exponentially, but literally smaller than the resolution of a calculator) is the underlying reason why things work most of the time, and it provides us with a clear natural risk minimum which is relatively independent of severity and of the risk costs associated with taking action. Rather, it is the characteristics of the situation (e.g. the number of cards in the pack to be shuffled) that influence the decision.
     
    Sam Lazzara likes this.
  8. Peter Selvey

    Peter Selvey Member

    Joined:
    Nov 9, 2015
    Messages:
    13
    Likes Received:
    8
    Trophy Points:
    2
    Finally, before I switch to watching the cricket ...

    [Attached image: upload_2016-3-10_2-35-19.png]

    The above diagram combines the concept of a fast-falling risk with a slowly rising risk cost, creating a clear risk minimum. Unlike the slow bending curve minimum above, in this model the minimum point is clear and the slope of the risk cost does not really affect the point which is considered safe. The key point is knowing where the cliff happens.
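    A minimal numeric check of that claim, under my own simplifying assumptions (the resource axis is read as design margin in standard deviations, the residual risk is the single-sided normal tail at that margin, and the risk cost rises linearly):

```python
import math

def use_risk(x):
    """'Cliff' model: residual risk taken as the single-sided normal tail at x sd."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def total_risk(x, c):
    """Residual risk plus a linearly rising risk cost of the resources spent."""
    return use_risk(x) + c * x

def risk_minimum(c, lo=0.0, hi=10.0, steps=100_000):
    """Brute-force search for the resource level that minimises total risk."""
    xs = (lo + (hi - lo) * i / steps for i in range(steps + 1))
    return min(xs, key=lambda x: total_risk(x, c))

for c in (1e-4, 1e-5, 1e-6):     # cost slope varied over two orders of magnitude
    print(f"cost slope {c:.0e}: minimum near {risk_minimum(c):.2f} sd")
# ~4.1, ~4.6 and ~5.1 sd: a 100x change in the cost slope barely moves the minimum.
```

    Compare this with the gradually bending curves earlier in the thread, where the same change in slope shifts the minimum much further in relative terms - the cliff shape is what makes the answer largely independent of who is paying for the resources.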

    The final adjustment to the model is to consider only the resources used in risk management in deciding what to document. A risk might be minimized by an incoming inspection of a raw material; but if such an inspection is standard practice and covered by the QMS, re-documenting it in the RMF adds no value (and hence increases risk). The documentation in the RMF must plausibly add value, i.e. have some reasonable potential to influence decisions.

    The result is the 4 buckets a hazardous situation can fall into, each with its own risk/resource profile (a rough decision sketch follows the list):

    1) [no doc]: the cliff is part of well-established standard practice, usually well documented in other areas of the QMS (e.g. standard design practice, expected production tests, already in published standards) - in this case there is no need to document it in the risk management file; this helps to keep the file lean and focused on critical aspects;

    2) [reminder]: the cliff is easily understood by qualified people, but potentially forgotten in implementation or during design changes - in this case a simple list is sufficient. Such a list should be deliberately limited in length to preserve the benefit of keeping a list at all;

    3) [analysis]: some effort is needed to know where the cliff is, or where the device sits in relation to the cliff, or both; in such a case the analysis should be in the risk management file for regulatory review and future reference. Example: an endoscope requires solvent rinsing to remove residual toxic materials from the production process. After rinsing the risk is negligible (cliff). But the flow rate, rinsing time, temperature range, replacement interval etc. are not obvious by inspection and need to be determined by experiment. Also, the limits for acceptable residual levels might come from clinical literature. Such data should be kept in the risk management file;

    4) [justify]: there is no cliff, so the situation needs some broader discussion of why the final solution was acceptable (typically a balance of various considerations: state of the art, high cost of mitigation, no technical solutions, etc.)
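    The decision sketch mentioned above - a rough sketch only; the flag names are mine, invented for illustration, not terminology from ISO 14971:

```python
from enum import Enum

class Bucket(Enum):
    NO_DOC   = 1   # cliff covered by established practice / existing QMS records
    REMINDER = 2   # cliff obvious to qualified people, but easy to forget
    ANALYSIS = 3   # locating the cliff (or the device relative to it) takes real work
    JUSTIFY  = 4   # no cliff: broader justification of the final solution is needed

def classify(has_cliff: bool,
             covered_by_standard_practice: bool,
             obvious_to_qualified_people: bool) -> Bucket:
    """Illustrative decision logic for what goes into the risk management file."""
    if not has_cliff:
        return Bucket.JUSTIFY
    if covered_by_standard_practice:
        return Bucket.NO_DOC
    if obvious_to_qualified_people:
        return Bucket.REMINDER
    return Bucket.ANALYSIS

# The endoscope rinsing example: there is a cliff, but finding it takes experiments
print(classify(has_cliff=True,
               covered_by_standard_practice=False,
               obvious_to_qualified_people=False))   # Bucket.ANALYSIS
```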

    Is the theory plausible? Feedback and weak-point analysis are welcome.
     
  9. Ronen E

    Ronen E Well-Known Member

    Joined:
    Jul 31, 2015
    Messages:
    133
    Likes Received:
    70
    Trophy Points:
    27
    Peter,

    Nice theory overall, though I disagree on some minor points.

    The problem is the practical benefit - your buckets 3 & 4 come back to the current situation, with all the current problematic terminology and practices such as "negligible" (who decides what is negligible?), "acceptable levels", reference to literature / state of the art, "final solution is OK because...", "free style" economic considerations etc.

    What industry and regulators need is a sea change. The idea of an absolute minimum (in the mathematical sense) created by two intersecting curves is great; we only need to find a "computational" way to locate it. Most device designers / engineers know (intuitively or through study/experience) how to design their device away from the cliff base; the problem is how to objectively "prove" to regulators that the result is actually that.

    Cheers,
    Ronen.
     
  10. MarkMeer

    MarkMeer Well-Known Member

    Joined:
    Dec 3, 2015
    Messages:
    138
    Likes Received:
    62
    Trophy Points:
    27
    I don't know if I totally agree. The RMF, in my opinion, is not only a tool to influence decisions, but also a design record that demonstrates consideration of all potential risks. As such, I think it still is appropriate to document all hazards and any mitigations, even if they are "standard".

    I'm unfortunately not understanding: how is it that "the minimum point is clear and the slope of the risk cost does not really affect the point which is considered safe"? Unless we are saying that the origin of the risk-cost curve is at the intersection point (in which case I don't see any value in the risk-cost plot), the slope of the risk-cost is CRITICAL to determining the intersection point, no?

    My other concern, as I've mentioned, is that such a model would vary from company-to-company depending on their resources - the risk-cost plot will be steeper for companies with less resources. Wouldn't this result in different "risk minimum" intersection points for two companies manufacturing identical devices?

    ----
    Once again, good discussion!
    Much appreciated! :)
     
  11. Ronen E

    Ronen E Well-Known Member

    Joined:
    Jul 31, 2015
    Messages:
    133
    Likes Received:
    70
    Trophy Points:
    27
    Mark,

    The origin of the availability risk curve is at the axes origin - if zero resources are invested in reducing use risk, the resulting availability risk (which is a result of the attempt to reduce use risk) is also zero. So, the intersection of the 2 curves is a result of the availability risk curve's slope (assuming a straight line - needs substantiating).

    Since we are interested in the "X" coordinate (the amount of resources to invest in use risk reduction), a varying slope for the availability risk curve matters little. The reason is that we're looking for an intersection with the "cliff". Since the cliff's slope is so steep, the intersection's X coordinate changes little when the intersection point moves up or down. It's true, though, that the effect won't be negligible on the resulting use risk (at the intersection), which might be an issue.

    The result is that all companies (strong or weak) would be allowed to stop reducing use risk at about the same level of resource investment, which sounds fair. The difference is that economically weaker companies would stop at a higher risk level, and perhaps the model needs some tweaking to address that.

    Cheers,
    Ronen.
     
  12. MarkMeer

    MarkMeer Well-Known Member

    Joined:
    Dec 3, 2015
    Messages:
    138
    Likes Received:
    62
    Trophy Points:
    27
    This is assuming that the slopes of the availability risks are not significantly different between companies, no?
    Given the vast differences in companies' resources and abilities, I wouldn't think this would be the case, and hence I can't see how the intersection points would ever be "at about the same level of resource investment".

    I've attached another graphical representation.
    Notice that the equilibrium point for the "high resource" company is set where they'd be investing almost twice the resources of the "limited resources" company. Unless I'm missing something, I'd say the slope of the curves absolutely matters...
     

    Attached File(s): (graphical representation referenced above)

  13. Ronen E

    Ronen E Well-Known Member

    Joined:
    Jul 31, 2015
    Messages:
    133
    Likes Received:
    70
    Trophy Points:
    27
    You are correct. In terms of availability risk, the slopes differ between companies, with the "weaker" companies having steeper slopes. The thing is, I was referring to Peter's graph, which depicts the use risk as a straight, or almost straight, line dropping quickly (a cliff). In general I agree with your exponential representation, but assuming that the intersection occurs in the high-slope region (the lower resource allocation area), that exponential curve can be fairly well approximated by that "cliff". True, this assumption needs substantiation; however:
    A) The exponential model needs substantiation too; we intuitively "feel" that it's "right", but do we actually have a formal backing for that?
    B) Assuming that the use risk curve is actually exponential and the curves intersect in the higher resource allocation region, we're in a problematic situation to begin with, because it means that risk-reduction resource allocation is doomed to be ineffective - lots of resources are required for marginal risk reduction.

    If the use risk curve is a sharp cliff (at least in the intersection area), the varying slopes of availability risk curves shouldn't make a big difference to risk-reduction resource allocation.
     
  14. Peter Selvey

    Peter Selvey Member

    Joined:
    Nov 9, 2015
    Messages:
    13
    Likes Received:
    8
    Trophy Points:
    2
    The cliff is derived from a normal probability distribution, which beyond 3-4 standard deviations has a faster fall-off than an exponential decay. By six standard deviations (the so-called 6 sigma) the probabilities are so small that no matter what severity or resource cost is assumed, the situation can be deemed safe. A normal design would work in the 4+ sigma region (below this, cumulative failure rates would be too high even for economic viability). In the >4 sd region, the risk/resource curve looks pretty much like the diagram shown above - virtually a straight line down. Some numbers (single-sided situation with an upper limit; probability of an adverse event; a quick numeric check follows the list):

    2sd: 2.2%
    3sd: 0.1%
    4sd: 0.003%
    6sd: 0.0000001%
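    Those numbers are just the single-sided normal tail, P(Z > k) = erfc(k/sqrt(2))/2. A quick check, plus a comparison with a plain exponential decay fitted through the 2 sd and 3 sd points (my choice of fit, purely to illustrate the faster-than-exponential fall-off):

```python
import math

def normal_tail(k):
    """Single-sided normal tail: P(Z > k)."""
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (2, 3, 4, 6):
    print(f"{k} sd: {100 * normal_tail(k):.7f} %")
# 2 sd: 2.2750132 %   3 sd: 0.1349898 %   4 sd: 0.0031671 %   6 sd: 0.0000001 %

# An exponential decay fitted through the 2 sd and 3 sd points, extrapolated to 6 sd:
ratio = normal_tail(3) / normal_tail(2)          # decay factor per extra sd
print(f"exponential fit at 6 sd: {100 * normal_tail(2) * ratio ** 4:.7f} %")
# ~0.0000282 % - a few hundred times higher than the true normal tail at 6 sd
```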

    The surprising thing is how little effort (resources) it can take to move risk from "small but significant" (3-4 sigma) to completely negligible (5-6 sigma), and that understanding is at the heart of what separates a quality company from an average one. It is also at the heart of why things work so often despite the incredible underlying complexity.

    Unfortunately, many aspects of a design are not independent. Fix one problem here and another one pops up over there. It's whack-a-mole. Want to improve sensor accuracy (beyond the 6 sigma point)? You are going to drain more battery power, which means shorter life. Increasing the battery size means increased weight and cost. On it goes. So sometimes it is easy to give up and settle for a 3-4 sigma solution.

    Under the current ISO 14971, these 3-4 sigma solutions don't get tested. It is easy to pick some numbers out in the traceability matrix and say that everything is OK, when in fact it is an engineering compromise and there is significant residual risk. The problem is that there is often a low-cost solution there; it just needs a little push to find. ISO 14971, and its acceptable risk model, doesn't provide this push.

    Consider the following true case:

    An insulin pump manufacturer designs a protection system that watches the infused rate, halts flow and alarms if the rate deviates from the set rate by more than +/-5%.

    In the market, it is found that under cold conditions the motor is sometimes a little slow to reach the target speed. This causes a rate alarm to occur and interrupts the treatment. The manufacturer considers this a false alarm, not important for safety. To eliminate the false alarm, the protection system operating point is raised to +/-30%.


    The original 5% limit is plausibly in the 6 sigma region. But the 30% limit is likely to be in the 3 sigma region, with small but significant risk. Is it acceptable? Each observer would have a different opinion. No one can say for sure.

    Yet it would be fairly simple to modify the software to handle the cold-start issue while retaining the 5% limit. That would eliminate the false alarm while staying in the 6 sigma region. The resources are tiny compared to the reduction in risk.

    Under ISO 14971, the 30% limit could be deemed "acceptable" under the manufacturer's risk management scheme. A single line in the risk management summary table (traceability matrix) would cover the point. And the manufacturer can argue that the device provides so much benefit that a small amount of risk is acceptable.

    But ... this fails the common sense test: if the simple software solution exists which moves from some risk to zero risk, why not use it? How can this be "acceptable"?

    Under the risk minimum theory, the case would fall into bucket 4, requiring some evidence to show the risk is minimized. This includes compiling data on the clinical impact of a 30% error, the basis for the overall probabilities, and a discussion of possible options and why they were not feasible. The manufacturer would quickly realize that a simple software solution is preferable to working on the justification. Having reverted to the 5% limit, the situation would fall back into bucket 1. Because of the small cost of resources compared to the reduction in risk, this option is likely to be taken regardless of the underlying cost of resources (as the theory predicts).

    In Sam's original case, it might be that the risk of infection falls into bucket 4 (not 6 sigma), but in this case the resources for sterilization are quite large. So it is a different case, and one where the manufacturer could reasonably argue that the risk is minimized, using, for example, the background risk of infection as a point of reference. It is interesting to note that in this case it is the resource curve that has the high slope - the solution (a sterile process) is essentially a step change. Again this high slope provides a risk minimum that is likely to be broadly agreed upon, and relatively independent of the organization. This might suggest a 5th bucket - significant residual risk, but a solution requires a large (step-change) jump in resources.

    My expectation is that the risk minimum theory is sound, but it can take some work to understand how the individual situation fits into the theory. Over time, patterns will emerge such as the 4 or 5 buckets. But these patterns and solutions are just tools. From a regulatory perspective, it is sufficient to simply redefine safety as ensuring the risk is minimized, but with the proviso that risk refers to the whole device, and recognizing that the use of resources also influences risk.