
Traceability in Medical Software Development Process

Discussion in 'IEC 62304 - Medical Device Software' started by MrTetris, Oct 29, 2018.

  1. MrTetris

    MrTetris New Member

    Dear All,


    At the moment I am working on a feasibility study about moving from TFS to PTC Integrity for a Class B medical software company, and I am trying to define the basic requirements to guarantee traceability in the software development process. We have identified these items so far (a rough data-model sketch follows the list):


    Project (the highest level element)

    Backlogs (a wishlist from the PdM)

    Design Input (defined by PdM and SA)

    User Interface specifications (defined by PdM and SA)

    Risks (defined by PdM and PjM)

    Requirements (defined by SA)

    Tasks (defined and implemented by the SE, when he thinks that the Requirement item is not detailed enough or if he prefers to split it up into multiple parts)

    Test cases (defined and executed by QA)

    Bugs (opened by QA, resolved by SE)

    Change Requests (opened and approved by PdM and PjM)
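
    To make this list concrete, below is a minimal data-model sketch in Python of the item types and the roles that define them (purely my own illustration for discussion, not the actual TFS or PTC Integrity schema; the role codes PdM, PjM, SA, SE and QA are the ones used above):

    from dataclasses import dataclass, field
    from enum import Enum

    class ItemType(Enum):
        """Traceability item types identified so far."""
        PROJECT = "Project"
        BACKLOG = "Backlog"
        DESIGN_INPUT = "Design Input"
        UI_SPEC = "User Interface Specification"
        RISK = "Risk"
        REQUIREMENT = "Requirement"
        TASK = "Task"
        TEST_CASE = "Test Case"
        BUG = "Bug"
        CHANGE_REQUEST = "Change Request"

    # Who defines each item type (role codes as used in the list above).
    DEFINED_BY = {
        ItemType.BACKLOG: {"PdM"},
        ItemType.DESIGN_INPUT: {"PdM", "SA"},
        ItemType.UI_SPEC: {"PdM", "SA"},
        ItemType.RISK: {"PdM", "PjM"},
        ItemType.REQUIREMENT: {"SA"},
        ItemType.TASK: {"SE"},
        ItemType.TEST_CASE: {"QA"},
        ItemType.BUG: {"QA"},
        ItemType.CHANGE_REQUEST: {"PdM", "PjM"},
    }

    @dataclass
    class Item:
        """A single traceability item; every item is also linked to the Project."""
        item_id: str
        item_type: ItemType
        title: str
        links: set = field(default_factory=set)  # IDs of the items it is traced to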


    The answer does not appear to be trivial: the related standards leave some freedom, and there is probably no “one size fits all” solution; it depends instead on our QMS and Risk Analysis.


    At the moment, my proposal is the following (please consider that all elements are traced to the “Project” item):

    http://depositfiles.com/files/t2w09mgar


    In words:

    Backlogs do not need to be traced, since the real starting point is the Design Input (where Backlogs are analyzed, rephrased and possibly discarded).

    UI specifications should be traced to the higher-level DI; Change Requests should be traced to the original DI/UI spec and to the Requirements derived from them.

    Each DI, UI spec and CR shall be linked to at least one Requirement, and vice versa (this is actually a critical point, because other colleagues propose that a Requirement does not necessarily imply a higher-level item, but I cannot imagine any case of that kind).

    Tasks are created by the SE when they want to decompose or further detail a Requirement, but only the latter is tested through Test Cases.

    Each Requirement is traced to at least one Test Case, but Test Cases can also be created when a Bug is found during Exploratory Testing (in this scenario, we create the test case only if the Bug is related to a Risk). A rough sketch of these linking rules as an automated consistency check follows.
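
    As an illustration, the rules above could be checked automatically once the links are exported from the tool. The sketch below is my own (the ID prefixes DI-, UI-, CR-, REQ-, TC- and the export format are invented, not anything PTC Integrity or TFS actually produces):

    # "links" maps a source item ID to the set of item IDs it is traced to.
    links = {
        "DI-1": {"REQ-1"},
        "UI-1": {"REQ-1", "REQ-2"},
        "CR-1": {"REQ-2"},
        "REQ-1": {"TC-1"},
        "REQ-2": set(),          # no Test Case yet -> should be reported
    }

    def kind(item_id):
        return item_id.split("-")[0]

    def check_traceability(links):
        problems = []
        requirements = {i for i in links if kind(i) == "REQ"}
        covered_reqs = set()

        for source, targets in links.items():
            if kind(source) in {"DI", "UI", "CR"}:
                # Each DI, UI spec and CR must link to at least one Requirement.
                reqs = {t for t in targets if kind(t) == "REQ"}
                if not reqs:
                    problems.append(source + " is not linked to any Requirement")
                covered_reqs |= reqs
            elif kind(source) == "REQ":
                # Each Requirement must be traced to at least one Test Case.
                if not any(kind(t) == "TC" for t in targets):
                    problems.append(source + " has no Test Case")

        # Vice versa: each Requirement must come from at least one DI/UI/CR.
        for req in requirements - covered_reqs:
            problems.append(req + " has no higher-level DI, UI spec or CR")

        return problems

    print("\n".join(check_traceability(links)) or "Traceability OK")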


    I would be curious to know what you think about my proposal and how you would manage Risks, considering that all the links between items needed to create the Traceability Matrix have to be defined manually (generally by the SA), so the fewer links the better, to avoid forgetting any.
     
  2. yodon

    yodon Well-Known Member

    I wasn't able to access your file so I can't comment directly. And I'm not familiar with all your acronyms so bear with me.

    First off, risk management should be driven by ISO 14971. 62304 does define some software-specific risks to consider. Controls are treated as requirements and should be mapped (traced) to your software requirements. Oh, and do get someone on the risk management team with clinical expertise. Don't just have a bunch of software engineers making best guesses as to probability and severity.
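
    A minimal illustration of that mapping (invented IDs and wording, just to show the direction of the trace links):

    # A risk control carried as a software requirement, traced back to the
    # risk it mitigates and forward to the test that shows it is effective.
    risk = {"id": "RISK-3", "hazard": "Wrong dose displayed to the user"}
    control_requirement = {
        "id": "REQ-17",
        "text": "The displayed dose shall be cross-checked against the computed dose before output",
        "mitigates": ["RISK-3"],      # trace to the risk (ISO 14971 risk control)
        "verified_by": ["TC-9"],      # trace to the test demonstrating effectiveness
    }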

    Software is a bit of an odd duck in risk management in that the standard says you should treat each risk as having the highest probability score (100% likely to occur). We generally temper that by using two probability factors: the probability of occurrence (100%) and the probability that, if it occurs, it will cause harm.
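
    A small worked example of that two-factor approach (the scale and the numbers are invented for illustration, not taken from ISO 14971 or IEC 62304):

    P_OCCURRENCE = 1.0          # software fault assumed 100% likely to occur

    def risk_score(p_harm_given_occurrence, severity):
        # Combine the fixed occurrence probability, the probability that the
        # fault causes harm if it does occur, and a severity rating (e.g. 1-5).
        return P_OCCURRENCE * p_harm_given_occurrence * severity

    # A fault that leads to harm in roughly 1 of 10 occurrences, severity 4:
    print(risk_score(0.1, 4))   # -> 0.4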

    IEC 62366 is another standard you should consider (usability engineering). That will also help with your UI spec and use-specific risks (and drive software requirements). It can also help with confirming that risk controls are effective (you need to demonstrate BOTH that they are implemented and effective).

    The machinations getting to software system test are, by and large, not that "interesting" from a compliance perspective, so what you're doing there should be whatever is best for your team. There are specific requirements for unit verification and acceptance (I didn't see those addressed above) and for integration (which could well be the ever-evolving test cases you mention).

    Exploratory testing, IMO, is good for supporting your claims of validation. The device world makes that a bit more challenging. To comply with validation requirements, you would need to have pre-defined acceptance criteria. We generally work around this by doing what we call "structured exploratory testing" where we give the tester some latitude to explore but keep focus (risk-based) on specific tasks. We generally frame the expected results around the software not doing something that could lead to harm.
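
    For what it's worth, a session charter for that kind of "structured exploratory testing" might look roughly like this (the field names and content are my own invention, not a template from any standard):

    charter = {
        "session_id": "SET-042",                    # made-up identifier
        "focus": "Dose-entry screen (risk-based)",  # area the tester may explore
        "time_box_minutes": 60,
        "linked_risks": ["RISK-7"],                 # risks motivating the session
        # Pre-defined acceptance criteria, framed around what the software
        # must NOT do, so the session still has objective pass/fail criteria:
        "acceptance_criteria": [
            "No dose value outside the configured limits is ever accepted",
            "No crash or data loss occurs for any invalid input sequence",
        ],
        "notes": "",                                # filled in during the session
    }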

    This statement (creating a test case only if the Bug is related to a Risk) is a bit concerning. Bugs can be found at any time - not just during exploratory testing - and ALL bugs should be documented, not only those related to risk (and all changes need to be verified, frequently through testing).

    Hope that helps a little. It's a rather huge topic so if you can break it down into smaller chunks (questions), it may stir better discussions.