Part 10 – It’s the Thought That Counts

A common observation made by hiring managers, especially in the medical device industry but also in the pharmaceutical industry, is a lack of experienced people to fill their open positions.  Our belief is that the typical hiring process frequently asks the wrong questions.  This, at times, actively hinders identification of, and access to, qualified individuals who would otherwise greatly benefit the hiring organization.  In this article, we discuss three hiring practices that we believe artificially reduce the apparent talent pool size, and which ultimately inhibit high product quality.

To illustrate this, let us return to a portion of the job description presented in our third article, Everything I Know I Learned After College:

———————————————————

Senior / Principal Quality Lead

“ …

Skills needed:

  • Competency in Statistical techniques: Gauge R&R, SPC, process capability, sampling plans and sample size determination, DOE, ANOVA, Regression, Lean
  • Design FMEA, process FMEA

Experience

  • Practical knowledge of FDA Quality Systems Regulations, ISO 13485 (medical device quality management systems), ISO 14971 (Risk Management), IEC 62366 (usability engineering)
  • Previous experience working in an FDA regulated environment

…”

———————————————————

On the surface these requirements seem reasonable, consisting of knowledge of a series of techniques and tools, and experience working in quality systems associated with the FDA and medical devices.  But this type of job description, or hiring ad, masks two underlying problems:

  • Listing a set of skills often conflates “skills” with “tools”
  • The insistence on experience with FDA-associated quality systems ignores the value and content of other quality systems.

 

Tools Versus Skills:

What do we mean by “skills” versus “tools,” and why is it misleading to focus on tools as part of the hiring process?  These days, “tools” are most often computer programs: Microsoft Project, Excel, Minitab, JMP, SAS, DOORS and so on.  Frequently, candidates are critically assessed on whether they have familiarity with a given piece of software.  The candidate may answer “yes” – they know the software.  This may mean they know how to access the menus and know what functional items are present in which menu.  But they may not know how to use the software effectively, which is what a “skill” is.

The hiring process is often reduced to keyword searches or quick questions identifying whether candidates have used a specific “tool.”  Yet, internally to an organization, people can easily, and derogatorily, become labeled as a “tool jockey.”  Such labeling occurs because of the perception, one that is often correct, that the tool being used does not add value to the overall effort because it is not being used effectively.  The reasons for this are related to the arguments we have made previously regarding too much data (see especially: Is This Really a Requirement, and It’s the Destination – Not the Journey).  The fundamental issue, then, is to focus on what is essential to be done.  Unfortunately, many “tools,” and using too many tools in total, make it extremely easy to lose that focus.

For example, take requirements management tools like DOORS or Cognition Cockpit.  With these software packages, it becomes easy to enter a requirement and then create a link to a lower-level requirement or to a design output (often involving several “layers” of requirements).  It therefore becomes tempting to just “throw it all in there” and let the software keep track of everything.  But when we take that approach, we are not exercising the critical thought we argue for in the article Is This Really a Requirement.  “Just throwing it all in there” usually creates a complexity of interactions that results in a tangle of information that is either logically wrong, or so complex that it cannot be understood and communicated!
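
To make this concrete, consider the most basic question an auditor (or the team itself) will ask of a requirements database: does every design output trace back to at least one requirement?  The sketch below is plain Python with invented requirement and output identifiers (it is not the API of DOORS, Cockpit, or any other package); it illustrates the kind of deliberate check that “just throwing it all in there” never performs:

    # Hypothetical trace links: design output ID -> linked requirement IDs.
    trace_links = {
        "DO-001": ["REQ-010"],
        "DO-002": ["REQ-010", "REQ-020"],
        "DO-003": [],  # an orphan: no linked requirement
    }

    # Flag design outputs that trace to no requirement at all.
    orphans = [out for out, reqs in trace_links.items() if not reqs]
    if orphans:
        print("Design outputs with no linked requirement:", orphans)

The code itself is trivial; the discipline it represents is not: links must be reviewed as information, not merely accumulated.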

Or, in another example, take tools that easily enable Statistical Process Control, such as Minitab, JMP, or SAS.  Keep in mind that when Statistical Process Control was created by Walter Shewhart in the 1920s, electronic calculators and computers did not exist.  Because of this, he created a process that was easy to execute using only a pencil, graph paper, and look-up tables (check out Deming’s book Out of the Crisis: many of the control chart examples given therein are photocopies of hand-drawn control charts).

With the computer-based software available today, creating control charts is a near-mindless activity.  The result is that too often control charts are created without understanding the underlying principles.  This leads to incorrect or inapplicable results, one family of which is described in: https://www.mddionline.com/stop-doing-improve-your-device-manufacturing-process-part-1.  Equally damaging is the temptation to apply control charts to everything.  The result is so much data that there are not enough resources to review it, and thus none of the data are actionable.  As we’ve argued in our article Is This Really a Requirement, effective control charts should provide immediately actionable data associated with critical design outputs.  Moreover, they should be used thoughtfully and with a specific “question” being asked of the analysis.  To do otherwise is not an effective use of the tool.
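
To make the underlying principles visible, here is a minimal sketch of Shewhart’s individuals chart computed exactly the way it was once done with pencil, graph paper, and look-up tables: the centerline is the mean, and the control limits sit at plus or minus 2.66 times the average moving range (2.66 being 3/d2, with the standard constant d2 = 1.128 for a moving range of two points).  The data values are invented for illustration:

    # Shewhart individuals (X) chart limits, computed "by hand."
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7]

    mean = sum(data) / len(data)

    # Average moving range between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)

    # 2.66 = 3 / d2, where d2 = 1.128 is the constant for subgroups of two.
    ucl = mean + 2.66 * mr_bar
    lcl = mean - 2.66 * mr_bar

    print(f"CL = {mean:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    print("Out-of-control points:", [x for x in data if not lcl <= x <= ucl])

A person who understands why those limits sit where they do will use Minitab or JMP well; a person who only knows which menu produces the chart will not.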

Finally, let’s take a look at the very popular Microsoft Project.  Project is a wonderful tool: it allows the project manager to understand and communicate the time frame of a project, determine interactions between tasks, assign resources, determine costs, determine budgets, and track adherence, or non-adherence, to the project plan.  Yet the ease with which those data can be entered into the program, down to the minutest detail in sub-tasks, frequently leads to “bloat.”  How many of us have seen Project output printed on a poster printer and hung to cover entire walls?  More than a few of us!  But if the object of software such as this is to aid understanding and communication of a project plan and its execution status, such detail is, at best, obfuscatory.  At worst, it creates errors arising from the very complexity that is entered.

To repeat: knowing where the “buttons” on a program are is not the same as knowing how to effectively use the tool – and volume is definitely not the same as quality when it comes to the data entered into such a program.

When we spoke earlier of the so-called “tool jockey” we were referring to the perception that arises when someone uses a tool ineffectively, thus yielding no benefit to the organization.  So, when it comes to evaluating a potential candidate, we should not evaluate them on whether they know how to use a specific tool.  Rather, we should evaluate them on whether they know how to effectively implement the underlying principles the tool is facilitating.  Any tool can be learned, usually rather quickly.  This is especially true if you find the right person, i.e. one who truly understands the underlying principles.

 

Do I Really Need Someone Who Has Done This Before?

Frequently a job posting will list a role, job title, or an activity or series of activities that an applicant must have performed in the past.  With regard to roles, or titles, this can be extremely misleading.  First, different titles can be equivalent across different companies.  Second, individuals’ interpretations of the meaning of a given title can differ widely.  Third, just because an individual has held a given title does not mean that they executed that position well and suitably absorbed the lessons any job will impart to a person.

Looking for and hiring someone who has “done this before” can lead to the so-called “Peter Principle.”  Past success does not predict future success and past failures do not necessarily condemn a viable candidate.  Someone who has held a position of “validation engineer” may not be good at it.  Someone who has held a position of “Systems Engineer” may not be good at it.  And, someone who has held a “supervisory” position (Manager, Director, etc.) may not be good at it either.   Conversely, someone who has never held any of these positions might well excel at one or all of them.

Therefore, we recommend identifying someone who has shown adaptability, along with an understanding of, and a history of executing against, the underlying needs of the new position, regardless of past titles.  A person possessing these skills is most likely to succeed in the new position.  We have all seen someone excel in a position in which they had not performed the activity or role before.  This point has been made by Claudio Fernandez-Araoz (21st-Century Talent Spotting, Harvard Business Review, June 2014).

There is a more general conundrum here: somewhere, sometime, someone does something for a first time.  In the extreme case, if everyone were to hire only people who have done a given activity before, then hiring managers would never find a person to perform their activity.  This is an unrealistically extreme example, but the point is this: the shortage of experienced people to fill open positions is likely not as severe as we think.  Rather, we are not effectively evaluating candidates on the appropriate criteria.  We don’t need people who have “done this before.”  Rather, we need people who have demonstrated the flexibility and capability to do the job the opening requires!

 

Calling All Quality Systems:

The job description that we re-visited at the beginning of this article stipulates that the applicant have:

  • Practical knowledge of FDA Quality Systems Regulations, ISO 13485 (medical device quality management systems), ISO 14971 (Risk Management), IEC 62366 (usability engineering)
  • Previous experience working in an FDA regulated environment

Working in the medical device and pharmaceutical fields, we place great emphasis on adhering to the regulations that govern us.  Indeed, we put an entire function, “Quality,” in place to oversee this (though as we have discussed in earlier articles, “Quality” usually acts as a “Compliance” function – a practice that leads to great difficulties).  Because of this, it would appear to make great sense that we seek people with experience in our specific highly regulated environment.  In reality, this practice makes no sense at all.

The reasons for this last statement are threefold: first, there are a great many industries that are just as regulated as the medical device industry, many of which have requirements for quality systems; second, the quality systems those other industries follow (like ISO 9001 and its kin for general manufacturing, pharmaceuticals, aerospace, automotive, much of the food industry, etc.) have far more in common with 21 CFR Part 820 than differences; third, many medical device companies “teach” their employees the regulations by having them follow the quality system, not by having them study the regulations directly.  Therefore, we find it surprising, perhaps even alarming, that many medical device company employees have never read 21 CFR Parts 820 or 210/211 and the related FDA regulations.

There are two implications flowing from the above points.  First, an employee coming to you from another medical device company may have a very different, and quite possibly incomplete or incorrect, “understanding” of the regulations compared to you and your company.  Second, an employee coming to you from another industry might well have an extremely good understanding of how to operate in a regulated environment: they will likely only need to become familiar with the slight differences between their old quality system and yours.

When we automatically exclude from our pool of applicants those who come from other industries, we artificially and inappropriately eliminate excellent candidates.  Also, we set too high an expectation on the regulatory knowledge of those coming from “within” the industry.  Both results do us and our organizations a great disservice.

Author Hamlen was quite heartened to see a recent job posting from a well-known medical device maker that simply asked that candidates come from a regulated industry (i.e. “Planes, Trains, and Automobiles”).  An executive at that company, when contacted to congratulate them on their openness to this approach, replied that they had found great success in candidates from other regulated industries.  If more organizations were to take that perspective, we would likely find that good people are not as hard to find as we claim!

It’s the Thought That Counts:

The common thread in this article is, in broad strokes, one of looking for “understanding” versus having used a certain “tool.”  Software, in its various forms, is certainly a “tool.”  But so, arguably, are quality systems: they are tools that we use to guide execution of the product development and deployment processes and the creation of associated documentation.  Likewise, job titles are a tool we use to guide us, unfortunately via a lot of hidden assumptions, to a belief about the core capabilities of a job candidate.

As we argued in the article A Quality System is Not Enough, when we do not truly understand the underlying principles of a system we are using, we are very open to misuse of that system.  The same argument holds with any “tool” – especially those we have discussed here.  Software is extremely prone to overuse.  This overuse leads to complexity and obfuscation of the business and project insight the software is supposed to deliver to the user and to their audience.  Job titles are misused because they are often given great emphasis, yet they mean different things to different people and definitely do not reflect the fundamental capabilities and potential of an applicant.  Finally, believing that someone must have come from a 21 CFR 820 environment to understand and effectively execute the underlying principles of the associated quality system is not a valid belief.

As hiring managers, and supervisors of active employees, we need to stop relying on a few keywords to filter out and evaluate people.  Rather, we need to focus on their underlying understanding of the task at hand, whether it be running a piece of software, using a quality system, or supervising other employees.  As we have also argued several times in earlier articles, true fundamental understanding of a task or role usually leads to a reduction in complexity, reduction in confusion, and ultimately an increase in product quality.

© 2018 DPMInsight, LLC all rights reserved.

About the Authors:

Cushing Hamlen

Over 27 years of experience in industry, including 20 years with Medtronic, where he worked and consulted with many organizational functions, including research, systems engineering, product design, process design, manufacturing, and vendor management.  He has also worked with development, regulatory submission, and clinical trials of combination products using the pharma (IND) regulatory pathway.  He has been extensively involved with quality system (FDA and ISO) use and design, and is particularly concerned about effective understanding and use of product requirements and design controls.  He has formally taught elements of systems engineering, design for six sigma, Lean, and six sigma.  Cushing has degrees in chemistry and chemical engineering, is certified as a Project Management Professional, is certified as a Master Black Belt in Lean Sigma, and is the owner/member of DPMInsight, LLC (www.dpmillc.com).

 

Bob Parsons

Over 26 years of experience in leading Quality Assurance, Validation, and remediation efforts in the FDA-regulated Medical Device and Pharmaceutical industries.  Experience includes product development life cycle management from initial VOC through New Product Introductions (NPI), sustainable manufacturing, and end-of-life product management.  Technical expertise in quality system gap assessment, system enhancement, alignment and implementation of all quality elements including design controls, risk management, purchasing controls, change control, and post-market surveillance.  Regulatory experience includes: ISO 13485, 9001, and 14971 certifications; providing guidance for FDA PMA/510(k) and CE clearance; designated Management Representative; company representative and lead during FDA and ISO audits; and 483 and warning letter resolution, with experience working within consent-decree environments.  Bob is currently Vice President at ProMed Molded Products, Inc.

Part 9 – It’s All in The Story

Imagine the following scenario.  You arrive at work one day, perhaps to the corporate headquarters of My Quality Device, Inc. (see our first article “Setting the Stage and Initial Thoughts”).  Waiting in the lobby are two people.  They show credentials as auditors for the FDA, and declare they are there to perform an audit.  You scramble to find a room, and offer them a cup of coffee (which they decline).  They indicate they want to see the Quality Manual, the Product Design SOP, and the Design History File and Device Master Record for your new product, the “IOMDD.”

In the vignette in our first article, the design of the IOMDD was rushed by management and the project management group.  The project team was not clear on the requirements they were designing to, and at one point the project champion stepped in and said, “do it this way.”  Because of this history, several things are present, and several are absent, in the Design History File.  Absent is a clearly defined and consistent list of product requirements (design input requirements).  Instead, some are vague (which makes them difficult or impossible to verify), some conflict with other requirements, and some are not present at all.  The latter are identified by the auditors because production is monitoring and controlling “critical” design output that has no linkage to any design requirement.  Because of the large number of requirements that are defined, the auditors are not able to understand how the requirements relate to each other.  The auditors are also unable to clearly understand how the design outputs relate to and conform to the design input requirements.  Finally, it is clear in the documentation that a number of the requirements were defined after the corresponding design output was defined, and there is little or no justification noted for the reasoning behind most of the design decisions.

Worst of all, the identities and locations of many of the documents are not clearly indexed, and the auditors are frequently left waiting for extended periods of time while documents are located or retrieved from old emails.  The auditors eventually leave after issuing a Form 483 with many observation points.  A few weeks later a Warning Letter is delivered to My Quality Device, Inc.

There are a number of points to take away from this scenario, but fundamentally they all come down to the following: It’s all in the story.

In the end, one of the fundamental objectives of quality systems in general, and design controls in particular, is to clearly document how a design team translated input requirements into both design outputs and manufacturing processes.  This may seem like unwanted extraneous work – a “hoop” we just need to jump through.  In reality, this objective of quality systems is actually of critical importance to high-quality product for the following reason: if you have done the design input work, the design analysis, and the organization of that analysis so as to enable clearly documenting it, then it is much more likely that the quality of the design process is high.  The converse of this statement is: haphazard work will yield haphazard documentation of that work.

In the end, the objective of this aspect of quality systems comes down to building confidence, in the eyes of a reader days or years after the fact, in your design process and the resulting design.  The rigor and requirements imposed by quality systems are intended to force a minimum level of documentation to demonstrate that appropriate engineering practices have been followed.  As we said in our article A Quality System is Not Enough: “The regulatory expectations only add one thing (to the engineering practices): provide reasonable evidence…”

The importance of that point of “reasonable evidence” cannot be overstated.  First, regardless of the “correctness” of a design, if the process is not documented clearly it will leave auditors and potential buyers confused and lacking confidence in your business and your products.  Second, the last thing we as a design company want is for auditors to be confused – either about how one document relates to another, or about how a design decision relates to a design input.  Crisp answers to questions, rapid and confident production of the documents asked for, and clear explanation of design rationale breed confidence in your organization and design processes.  Confusion in answers, inability to produce documents, and, especially, demonstrated lack of awareness of regulatory requirements and expected engineering practices leave the auditors lacking confidence in your systems and practices.  This latter situation is what gives rise to the description of an auditor “smelling blood in the water.”

The second facet of understanding the implication of “…provide reasonable evidence…” is even more important and powerful: by telling a story clearly, you force yourself to critically understand what you have done in your design.  Forcing clear documentation of a design process (and related activities) thus increases the probability of yielding high-quality product.  There may be, and frequently are, complaints by designers that “… I know what to do: all this documentation is slowing things down.”  Well – yes.  That, within reason, is the very point.  Anyone who has been a teacher will note that you do not really understand material until and unless you can explain and teach it clearly.  The same point applies here: we may think we “know what to do” … but unless we can explain, document, and defend it clearly, the opportunities for mistakes or overlooked problems with a design are magnified.

The last point to note here is that, while exactly following the prescriptions and processes of a quality system is important, it is not absolutely necessary.  Even the best quality systems cannot foresee all that can arise in all future projects (and in the authors’ view, they should not be designed with such an objective).  It is entirely possible, and actually common, for a project closely following a quality system to reach a conclusion that … just does not make sense.  We saw this described in our vignette in the first article, where the design team was forced to closely adhere to the quality system but ended up reaching conclusions that defied common sense.  The thing to do in this situation is to step back, carefully define an alternate conclusion (or approach), and document: 1) that this is a deviation from the quality system, 2) what the alternate conclusion or approach is, and 3) what the (defendable!) rationale for that differing conclusion is.  It is far, far better to pull out that document and show it to an auditor (which portrays confidence and awareness of the situation) than to take either of the alternate approaches: not documenting the deviation from the quality system (which portrays to the auditor a lack of awareness of the quality system), or accepting and moving ahead with the illogical approach – which will most likely result in a quality issue (or added expense) in the resulting product.

In conclusion then, our very ability to instill high-quality into our products and to demonstrate that high-quality is driven by the clarity and quality of the story we can tell about that design process.

The key takeaway points here are:

  • Enforce clear and understandable documentation, both between and within documents. This will naturally enforce quality and robustness of the associated engineering and design work
  • This means:
    • Clearly document what was done (and at times what was not done). This applies to selection of design inputs, relationships between design inputs and design decisions, and relationships between design decisions and manufacturing controls.
    • Clearly document how design decisions were reached. This makes those decisions more defendable and more likely to be robust.
    • Be willing to deviate from the quality system if it clearly leads in a nonsensical direction. Clearly document when the quality system is deviated from.  This includes documentation for the alternate approach, and clear rationale for taking that approach.

© 2017 DPMInsight, LLC all rights reserved.

About the Authors:

Cushing Hamlen

Over 27 years of experience in industry, including 20 years with Medtronic, where he worked and consulted with many organizational functions, including research, systems engineering, product design, process design, manufacturing, and vendor management.  He has also worked with development, regulatory submission, and clinical trials of combination products using the pharma (IND) regulatory pathway.  He has been extensively involved with quality system (FDA and ISO) use and design, and is particularly concerned about effective understanding and use of product requirements and design controls.  He has formally taught elements of systems engineering, design for six sigma, Lean, and six sigma.  Cushing has degrees in chemistry and chemical engineering, is certified as a Project Management Professional, is certified as a Master Black Belt in Lean Sigma, and is the owner/member of DPMInsight, LLC (www.dpmillc.com).

Michael B. Falkow

Michael Falkow is a Quality Specialist with Raland Compliance Partners.  He has served as a regulatory compliance and quality assurance executive with multi-facility/international companies and was an FDA Compliance Officer and Senior Investigator/Drug Specialist.  Michael has subject matter expertise in quality and regulatory compliance, quality auditing, quality assurance, quality control, supplier evaluation and certification, and compliance remediation.  He has been approved by FDA as a GMP certifying authority and is qualified to provide Expert Witness testimony for GMPs.  He is currently an Adjunct Professor at Mercer County Community College, teaching courses on Clinical Development for the Certificate Program in Clinical Research as part of Drexel University’s Master’s Degree in Clinical Development.

Part 8 – It’s the Destination – Not the Journey

An earlier article (“Let’s Gather a Few Good Friends”) pointed out that early and continued focus on a specific design created by a “founder” of a design effort can lead to significant problems and lack of quality in a final manufactured product.  We made the argument for using a cross-functional team early in the requirements definition / design process because doing so is an excellent way to reduce the probability of “latching on” to a low-quality design.  This does not mean that the original “founder’s” concept may not have been truly excellent – the team simply cannot know until they work it through.

On the other hand, arguably the most important objective of a design project is its own completion, including delivery of a high-quality product.  For this reason, the arguments of “Let’s Gather a Few Good Friends” are not meant to imply that the design team should ponder the project’s requirements, scope, or design decisions for prolonged periods.  Once the design objectives, scope, and requirements are defined, there is great power in focusing on and adhering to those definitions.  Doing so sharpens the clarity of roles, responsibilities, and objectives for the entire team.  Conversely, continuously second-guessing decisions, changing objectives, or adding scope causes lack of focus, delay, and additional cost.  The team should at all times be highly focused on completion of a clearly defined project.

This is what we mean by the title of this section: “It’s the destination, not the journey.”

Why are the risks of lack of project focus so great?  From the technical and quality side, re-visiting old decisions opens wide the door to low-quality designs.  Often, such “revisions” arise very late in the game, are made in a rushed manner, and involve people who were not part of the original decision.  As such, the information and perspective regarding quality used to support the original decision may not be taken into account, and the rushed manner of the later decision may not allow for recovery of those perspectives.  In addition, doing so opens the door to the team’s losing track of what is truly important (and thus what leads to quality) in the design.  Finally, project time is lost: often a lot of time.  We saw this situation play out in the vignette posed in our first article “Setting the Stage and Initial Thoughts.”  In that vignette, the team had ill-defined design objectives, and started arguing about them and redefining them as they went.

Lack of project focus has many ramifications.  The loss of time alone has financial implications (salaries are still paid, and additional delay in getting to market results in lost revenue).  Trying out new design options is not free: there is additional cost in obtaining, prototyping, and testing new components and parts and attempting to integrate them into the “old” design.  The additional requirements and gold plating[1] on the final design incur cost that may not be offset by increased sales price, etc.  With the fast pace of product lifecycles in the other industries whose components and assemblies we may rely on, there is a very real risk that any delay will see parts and components discontinued or obsoleted by suppliers.  This then requires more resources to identify, incorporate, and test replacements.  From a quality perspective, the increase in requirements and gold plating increases the “target area” within which low quality in the design can rear its head, especially if the decisions and resulting implementations are rushed.  Replacement of discontinued or obsoleted parts also presents a quality risk, as their integration into the “old” design may not be seamless from a systems engineering perspective.

From a Regulatory perspective, even though a product may not yet be approved, there is a (compliance) obligation to track design changes.  A constantly-changing design is very hard to document and defend: this creates audit risk of findings in the design process if the “story” of how the design unfolded is not clear to the reviewer[2].

Those of you who have been fortunate to work with a focused, responsible team can attest to the differences and advantages when unwavering focus is involved.  There is excitement, camaraderie, cohesion, and individual responsibility within the team.  This produces thoughtful, productive discussion and debate within the team, along with rapid progress towards the team’s objectives.  It is, unfortunately, somewhat rare for people to experience working on such a “high performing team.”

The point here is one that everyone – senior management, middle management, established senior technical people, and newly graduated technical people – must understand at a gut level: our biggest challenges to achieving high quality in our project and design objectives are not technical.

The biggest risks are the “soft” things – social, interpersonal, and team dynamics issues.  Specifically, the issues that frequently cause projects to fail to achieve their goals are lack of true group cohesion and buy-in on the direction, objective, and method of execution of the program, along with lack of true group agreement on domains of responsibility (and accountability for deliverables).  This is why investors and venture capital firms should, and often do, ask and look for execution by the team, not whether the objective is technically feasible.

Because of this, a sub-title of this section might well be “The Softer Side of Engineering.”  Why and how do you take a design team and drive them to focus on adhering to requirements, scope, and design decisions that have already been made?  Doing so is neither engineering, nor is it in the domain of the quality system.  Maintaining this focus is effective project management at its core – and requires those “soft skills,” not technical skills.

But what does that mean?

We have already written about bringing to bear multiple perspectives and scopes of experience and drawing on these to yield the best solutions – especially during the creation of requirements and initial design stages.  But this must not be “management by committee.”  Adopting that approach is a recipe for stalemate, and for team members failing to take individual responsibility for their roles and deliverables.

There must always be someone with the responsibility to make decisions – but those decisions must draw on the wisdom of the team.  We have never recommended that the team should be driven to the “best” decision.  Someone can always argue “If we delay this decision we will know more and make a better decision.”  But that will always be the case at any point of time.  Trying to hit the bull’s eye 100% of the time is neither cost nor time effective:  if you perpetually leave the door open to ongoing changes, you will either never reach any objective, or you will end up being rushed and make panicked decisions (either in schedule or design) that produce low-quality product.

The person responsible for making decisions should expeditiously draw out the wisdom of the team.  That same leader should strive to have the team say “this is our conclusion – this is our stake in the ground and we will stick with it.”  This is very different from telling the team what to do.  Rather, it is facilitating the decision making of the team and enforcing their decision.

So make those decisions with the team (i.e. define the project’s destination), trust those decisions as sufficient to the needs at hand … and stick to them (i.e. don’t change the destination and don’t sight-see along the journey).  Don’t add to them, and don’t change them (unless later evidence is overwhelming that they were wrong).  Doing this requires soft skills (i.e. interpersonal skills and facilitation skills), not technical skills.  We discuss these skills further in the sections below.

“Make the decisions with the team:” 

What goes wrong in team dynamics?  Frequently, different members just want different objectives and are not open to listening to alternate objectives.  Equally often team members will argue about how an objective should be reached, not what the objective is.  Surprisingly often the team will actually agree, but argue vehemently about the words used to describe either methods or objectives (author Hamlen likes the description of this as “being in violent agreement”).

There are a number of methods to bring a team to consensus.  We have already mentioned that many of the “tools” taught in Six Sigma and Design for Six Sigma are really disguised methodologies for arriving at consensus within a team.  If these are used well, they are powerful.  Instill these methodologies within your organization and use them.  If you find it challenging to achieve that level of consensus within your organization, bring in a trained facilitator to assist you.

One powerful practice that works at any time to build understanding and acceptance of multiple perspectives is “active listening.”  Active listening helps overcome the unconscious cognitive “filters” with which we all operate[3].  What a person says is not necessarily what they are thinking; what a person hears is not necessarily what another person said; what another person responds with is not necessarily directly related to what they heard, and so on.  This is the core source of the garbling of messages or stories always found in the “telephone” game[4].  Often it is these unconscious filters that drive the occurrence of “violent agreement” mentioned above.

Executed correctly, active listening is a mechanical practice, but it is striking in its effect.  To practice it, do this: as a listener, listen to the other person with the intent to say back to them what you thought you heard.  Don’t respond, don’t argue, don’t judge.  Then say back to them what you thought you heard.  Say something like “what I heard you say is ….”  If you get it wrong, they will let you know!  Keep trying.  Have the first person re-phrase what they said – and concentrate on trying to re-phrase your interpretation of what they said.  Eventually you will “connect” with true understanding.  You will know when you have hit that point because you both will experience a physical reaction: a sigh of relief, a sense of a “punch in the gut,” or something similar.  Combined with this is an emotional feeling of “oh my … I get it” (or “they get it”).

This is powerful for two reasons.  First, it drives a real sense of “feeling understood” on the part of the person who is trying to get their point across.  Paradoxically, their “feeling truly understood” opens the door to their acceptance of an alternate viewpoint.  Second, it drives a real sense of “understanding” on the part of the second person (the listener).  Again, this sense of “truly understanding” drives more acceptance of the first person’s alternate viewpoint.  All this when the two might have started out vehemently disagreeing with each other.  Better than yelling and pounding on the table … no??

By using this and other decision-making methods, the team is involved.  That involvement creates their buy-in and commitment to the decisions and objectives.  That buy-in breeds team energy, involvement, and ownership of the objectives by the team.

“… trust those decisions as sufficient to the needs at hand:”

There is no magic formula here, simply the faith that your people are good – and the clear majority are.  Their experience and perspectives are real and have worth.  Lean has a practice … “the wisdom of the team” … that explicitly recognizes this.  Also, the entire team needs to accept and recognize that they are not after a “best” or “optimum” or “fastest” solution (or set of requirements).  They are after a solution that is sufficient to meet the objectives of the program, and which they can execute.

“… and stick to them” (the decisions):

All team members, especially the leader, need to reinforce to each other (and to all comers from the “outside” including incoming new members of the team) that previous decisions have already been made.  They cannot be altered.  To revisit and to debate old decisions takes time and dilutes the focus of the team (whose members need to be meeting objectives).  To remain so focused takes integrity, commitment, and a certain amount of courage on the part of the team members and the leader – but the alternative is unacceptable:  increased cost, project delays, project cancellation, and loss of a high-quality final product. (Again, an exception to this occurs if overwhelming evidence of a need to change direction arises).

In conclusion, because it is so important to truly understand and act upon, we reiterate the following point: our biggest challenges to achieving high quality in our project and design objectives are not technical.  Truly recognizing this, using the methods discussed above (and others) to achieve team cohesion and focus, then consistently and consciously resisting changes to that focus is one of the most powerful things a team can do to complete a design project resulting in high-quality product.

© 2017 DPMInsight, LLC all rights reserved.

About the Authors:

Cushing Hamlen

Over 27 years of experience in industry, including 20 years with Medtronic, where he worked and consulted with many organizational functions, including research, systems engineering, product design, process design, manufacturing, and vendor management.  He has also worked with development, regulatory submission, and clinical trials of combination products using the pharma (IND) regulatory pathway.  He has been extensively involved with quality system (FDA and ISO) use and design, and is particularly concerned about effective understanding and use of product requirements and design controls.  He has formally taught elements of systems engineering, design for six sigma, Lean, and six sigma.  Cushing has degrees in chemistry and chemical engineering, is certified as a Project Management Professional, is certified as a Master Black Belt in Lean Sigma, and is the owner/member of DPMInsight, LLC (www.dpmillc.com).

Bob Parsons

Over 26 years of experience in leading Quality Assurance, Validation, and remediation efforts in the FDA-regulated Medical Device and Pharmaceutical industries.  Experience includes product development life cycle management from initial VOC through New Product Introductions (NPI), sustainable manufacturing, and end-of-life product management.  Technical expertise in quality system gap assessment, system enhancement, alignment and implementation of all quality elements including design controls, risk management, purchasing controls, change control, and post-market surveillance.  Regulatory experience includes: ISO 13485, 9001, and 14971 certification; providing guidance for FDA PMA/510(k) and CE clearance; designated Management Representative; company representative and lead during FDA and ISO audits; and 483 and warning letter resolution, with experience working within consent-decree environments.  Bob currently supports various organizations in remediation and compliance projects through Raland Compliance Partners (RCP), www.raland.com.

Michael B. Falkow

Michael Falkow is a Quality Specialist with Raland Compliance Partners.  He has served as a regulatory compliance and quality assurance executive with multi-facility/international companies and was an FDA Compliance Officer and Senior Investigator/Drug Specialist.  Michael has subject matter expertise in quality and regulatory compliance, quality auditing, quality assurance, quality control, supplier evaluation and certification, and compliance remediation.  He has been approved by FDA as a GMP certifying authority and is qualified to provide Expert Witness testimony for GMPs.  He is currently an Adjunct Professor at Mercer County Community College, teaching courses on Clinical Development for the Certificate Program in Clinical Research as part of Drexel University’s Master’s Degree in Clinical Development.

 

[1] “Gold plating” is a wonderfully descriptive term used in project management to describe functions or features added to a design that are not linked to a product requirement.  They are extraneous to achieving the objective of the design – adding cost and complexity.

[2] A later planned article in this series, “It’s All in the Story,” will cover the issues around documentation of the design process.

[3] As engineers and scientists, we seem to prefer to think of ourselves as rational in our listening and decision-making processes.  Much research shows clearly that we are not.  We have previously mentioned the book “Thinking, Fast and Slow,” Daniel Kahneman, Farrar, Straus and Giroux, 2011.  We highly recommend it as both an enlightening and a humbling read.

[4] The “telephone game,” apparently known internationally as “Chinese whispers.”  See: https://en.wikipedia.org/wiki/Chinese_whispers

Part 7 – Pay Me Now, Or Pay Me Later

This article is unabashedly directed at two audiences: the product design/discovery team, and senior management.  It may not be obvious, or not often discussed, but these two groups have considerable “skin in the game” with regard to the downstream implications of product quality.

Two different viewpoints of this topic are presented in this article.  The first is a “macroscopic” viewpoint that the Project Management Institute (PMI) calls “The Cost of Quality.”  The second is a more “microscopic” viewpoint that we call “The Cost of Requirements.”  The two are related, but they give different views of the sources of the costs that an organization incurs to achieve high-quality product.  One focuses in particular on the downstream financial repercussions of low-quality product.

The PMI, as part of their internationally recognized best-practices in the PMBOK Guide[1], divides the “Cost of Quality” into two broad categories: the costs of “conformance” and the costs of “nonconformance.”  The costs of “conformance” arise from activities intended to “prevent” bad quality from occurring in a product.  Many of these activities occur during design.  Some also are activities to evaluate the quality of a product, such as inspection during manufacture.

The costs of “nonconformance” arise from failure of the product to achieve quality.  These include costs that arise while the product is “internal” to the organization (scrap, rework, etc.) and costs that arise when the product is “external” to the organization (warranty costs, lost business, liabilities, etc.).  The relationships between these cost categories, as described by the PMBOK Guide, are illustrated in the figure below, along with somewhat more detail of what the costs “are.”

This is an informative way to talk about the costs related to quality in the design and manufacture of a product.  Even so, it is given at a very high level, and many more specific costs are present in reality.  The PMI is not the only organization that has attempted to describe the cost of quality: the American Society for Quality (ASQ) has semi-quantitatively described the Total Cost of Quality as: TCOQ = Cost of Conformance (COC) + Cost of Nonconformance (CONC) = [Prevention + Appraisal] + [Internal Failure + External Failure] = P + A + IF + EF.
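
Expressed as a quick sketch (the dollar figures are invented, purely to show the arithmetic of the ASQ decomposition):

    # ASQ decomposition of Total Cost of Quality; all figures are invented.
    prevention = 120_000        # P: training, design controls, process design
    appraisal = 80_000          # A: inspection, test, audits
    internal_failure = 150_000  # IF: scrap, rework
    external_failure = 400_000  # EF: warranty, recalls, lost business

    coc = prevention + appraisal                 # Cost of Conformance = P + A
    conc = internal_failure + external_failure   # Cost of Nonconformance = IF + EF
    tcoq = coc + conc

    print(f"TCOQ = {coc} + {conc} = {tcoq}")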

For our current purposes, we wish to focus on the broad categories of costs of conformance versus nonconformance, and how these categories relate to costs incurred by an organization in satisfying Design Controls (as defined by 21 CFR Part 820, ISO 13485, etc.).

Let’s look first at the “Costs of Conformance,” which are incurred during design and manufacture.  They are about training people, documenting what you are doing, qualifying and validating equipment and processes, and taking time to make sure the team is “on the same page” and that all truly understand what the project objectives and deliverables are.  They also include appraisal costs (costs associated with measuring, evaluating, or auditing products/services to assure conformance to requirements) and prevention costs (the costs of all activities designed to prevent internal or external failure in products or services).  Think about it: these are the very activities expected by design controls.  The PMI describes these activities as “costs” … and they are just that.  But they are necessary and expected costs, and typically are a one-time expenditure.  That they are “costs” does not alter the conclusion that these activities are accepted best practices.  There should be no mystery that the FDA and ISO give them a specific title: “Design Controls.”  Here we have another example, as discussed in our first article in this series (“A Quality System is Not Enough”), of alignment between widely accepted best practices in engineering and project management and regulatory “requirements” for quality systems.

Above, we say that these costs / activities are “necessary and expected.”  This is because – and this is a point that should be firmly held in mind by both product design teams and senior management – if those “costs of conformance” are not spent effectively, the organization will inevitably incur the “costs of nonconformance.”  These costs of nonconformance are incurred after the design process due to failures of quality, are typically not planned for, and are frequently dealt with in a panicked, inefficient, error-prone way.

One perspective on this is that costs of conformance are essentially an “investment” that when effectively spent, lead to profits downstream.  Conversely, costs of nonconformance are simply a loss with no downstream profit resulting from them.  Making the situation worse is that the “costs of nonconformance” are recurring and typically far exceed those for conformance. They can, and often do, impact the health of the organization – both in terms of organizational profit margin and morale of the employees.  They can threaten the very existence of the organization.

In other words, as the old FRAM[2] marketing slogan goes: pay me now or pay me later.

The product design team’s immediate reward system is often built around delivering the design to manufacturing within a given budget and within a defined schedule.  BUT – when they hand the design off to manufacturing, the design team is not divorced from the downstream impacts if the “costs of conformance” (i.e. well-executed design controls) are not well spent.  The downstream costs of a poor-quality design are felt by the entire organization.  These downstream costs include low manufacturing yield, scrap, customer support costs, recalls, lawsuits, etc.

Members of the product design team need to understand the following: the overall organization will answer first to its shareholders.  Profit margin is paramount.  When that profit margin is threatened, the Finance function will look first to cut expenses in organizational functions that are not “value added[3].”  R&D, not being involved in daily generation of revenue, is not “value added.”  Thus, creation and launch of a low-quality design will threaten your job two, five, or seven years down the road.

Senior management needs to understand the same things, with the added responsibility of managing long-term risk to the organization.  Short-term gains from being quick to market are easily offset by long-term losses and expenses, and it is these long-term costs that can threaten the existence of the company.  Poor employee morale resulting from the threat of reductions in force creates a vicious cycle of poor execution and resulting loss of product quality.  The personal threat to management then comes from the shareholders if profit margin and company growth are not maintained.

All this results from decisions during a design process that may have occurred years ago.

In the vignette in the first article of this series, we saw this play out as a series of “reductions in force,” followed by a removal of senior management by the Board of Directors.

Pay me now or pay me later.

Note also that the above issues arise in part from differing incentives for different functional groups.  It is also an example of how lack of coordination at a higher level, above the leaders of the various functional groups, can have a profound impact on the health of the organization (and thus the employment security of the employees).  R&D can be incentivized to get the design into production within a given budget and schedule, but it does not directly see (or answer for) the later “costs of nonconformance.”  Manufacturing might be incentivized by manufacturing rates and cost reductions, but the fundamental die on quality is cast during the design process.  Seldom are the two coordinated regarding what really matters to the organization: maintenance of long-term profit margin.

Now to the necessary question: how can these issues be addressed?  Some hint of this comes from taking a slightly different perspective on those “costs of conformance.”  Recognizing that the “costs of conformance” are really just another view of design controls, and design controls are intimately related to product Requirements, let’s ask the following question: how much does a Requirement cost?  This is that second, more “microscopic” viewpoint that we call “The Cost of Requirements.”

When we define a product Requirement (formally, in the systems engineering and design controls sense, with associated needs for verification and validation), we explicitly incur the costs of: definition and documentation of the requirement; associated design activities to meet the requirement; component and assembly specification; test design and test method verification; test execution and documentation of results; manufacturing process design, validation, and monitoring; inspection; post-market surveillance; customer support and training; and more.  Each formally defined product requirement obligates us to significant up-front and ongoing recurring expenses.
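
A rough way to see the obligation each requirement creates is to separate its one-time costs from its recurring ones.  The sketch below uses invented, order-of-magnitude figures; the point is not the numbers but that both cost terms scale linearly with the count of formal requirements:

    # Invented, order-of-magnitude figures: the scaling is the point.
    UPFRONT_PER_REQ = 25_000    # definition, design, V&V, process validation
    RECURRING_PER_REQ = 5_000   # per year: inspection, monitoring, surveillance

    def cost_of_requirements(n_requirements: int, years: int) -> int:
        """Total cost of carrying n formal requirements over a product life."""
        upfront = n_requirements * UPFRONT_PER_REQ
        recurring = n_requirements * RECURRING_PER_REQ * years
        return upfront + recurring

    # Over a 5-year product life, trimming 40 requirements down to 25 saves:
    print(cost_of_requirements(40, 5) - cost_of_requirements(25, 5))  # 750000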

In an earlier article (“Is This Really a Requirement?”) we argued for the benefits of carefully defining, and rigorously keeping to an absolute minimum, the number of formal requirement statements for a product.  Further, we argued the need to design only to those requirement statements.  In that earlier article, the argument was about project focus, minimization of “gold plating,” and forceful rejection of distractions from meeting the design objectives.  Here, the benefit of that careful selection of product requirements is seen to be more concrete: money.  A focused set of requirements requires less money to design and execute to.  It also reduces the long-term risk to the organization by reducing the risk of regulatory action due to findings of nonconformance (whether due to lack of documentation of conformance or real failure of product quality), and by reducing the customer support burden through a simpler, less confusing product design.

In addition to the above, all of the points discussed in the preceding article regarding Design for Manufacturability hold, and should be followed.  With focused attention on a limited number of requirements, and application of design for manufacturability to those limited requirements, the likelihood of effectively investing the “costs of conformance” is increased – and the “costs of nonconformance” will be decreased.

© 2017 DPMInsight, LLC all rights reserved.

About the Authors:

Cushing Hamlen

Over 27 years of experience in industry, including 20 years with Medtronic, where he worked and consulted with many organizational functions, including research, systems engineering, product design, process design, manufacturing, and vendor management.  He has also worked with development, regulatory submission, and clinical trials of combination products using the pharma (IND) regulatory pathway.  He has been extensively involved with quality system (FDA and ISO) use and design, and is particularly concerned about effective understanding and use of product requirements and design controls.  He has formally taught elements of systems engineering, design for six sigma, Lean, and six sigma.  Cushing has degrees in chemistry and chemical engineering, is certified as a Project Management Professional, is certified as a Master Black Belt in Lean Sigma, and is the owner/member of DPMInsight, LLC (www.dpmillc.com).

Bob Parsons

Over 26 years of experience in leading Quality Assurance, Validation, and remediation efforts in the FDA-regulated Medical Device and Pharmaceutical industries.  Experience includes product development life cycle management from initial VOC through New Product Introductions (NPI), sustainable manufacturing, and end-of-life product management.  Technical expertise in quality system gap assessment, system enhancement, alignment and implementation of all quality elements including design controls, risk management, purchasing controls, change control, and post-market surveillance.  Regulatory experience includes: ISO 13485, 9001, and 14971 certification; providing guidance for FDA PMA/510(k) and CE clearance; designated Management Representative; company representative and lead during FDA and ISO audits; and 483 and warning letter resolution, with experience working within consent-decree environments.

Michael B. Falkow

Michael Falkow is a Quality Specialist with Raland Compliance Partners.  He has served as a regulatory compliance and quality assurance executive with multi-facility/international companies and was an FDA Compliance Officer and Senior Investigator/Drug Specialist.  Michael has subject matter expertise in quality and regulatory compliance, quality auditing, quality assurance, quality control, supplier evaluation and certification, and compliance remediation.  He has been approved by FDA as a GMP certifying authority and is qualified to provide Expert Witness testimony for GMPs.  He is currently an Adjunct Professor at Mercer County Community College, teaching courses on Clinical Development for the Certificate Program in Clinical Research as part of Drexel University’s Master’s Degree in Clinical Development.

[1] A Guide to the Project Management Body of Knowledge (PMBOK Guide), 5th ed. the Project Management Institute. 2013

[2] FRAM Group IP LLC.  http://www.fram.com/

[3] In Lean, a non-value-added step is one whose output the customer does not see as evidenced in the final product.  Thus inspections, rework, etc. are non-value-added steps.  It is likely difficult for many to agree with or accept the following: the customer sees the output of Manufacturing, not R&D, so R&D is non-value-added.  From Finance’s perspective, R&D does not contribute to the cash flow of the organization (neither do Quality, Regulatory, Clinical, etc.).

Part 6 – Can We Build This?

With all due respect to Bob the Builder[1], too often our answer to the question posed in the title of this article is “no, we can’t.”  This response requires a bit of explanation.  Consider the following dialog – not quite verbatim to a single conversation, but a synthesis of several very similar conversations:

Joe Design Engineer (to Author Hamlen): “…and so that is the design and the process I want manufacturing to use to perform this manufacturing task.”

Author Hamlen: “Umm… I am not sure that is a viable process to take into manufacturing.”

Joe Design Engineer: “I don’t see any problem with it.”

Author Hamlen: “Well, the process is really rather ‘fidgety’ – you need to get the parts oriented just right, and it could be easy to get the orientation wrong.”

Joe Design Engineer: “So?  Just back off and re-orient them until they are right.”

Author Hamlen: “Yes, but that takes time. On top of it, this design and assembly process has a lot of parts and many sequential steps.  If you make a mistake on any one of those steps, the entire assembly is scrap.”

Joe Design Engineer: “Again, I don’t see the problem.  Just go slow and be careful and the product can be assembled.”

Author Hamlen: “In the manufacturing environment, with a lot of pressure to get product built quickly and get it out the door, taking that time presents a lot of problems.”

Joe Design Engineer: “I don’t see any problems – I have no trouble performing this assembly on my laboratory bench top.”

Author Hamlen: “But in manufacturing, you are not in a lab environment, and time is king.  With this design, throughput will be slow, and if the workers are forced to hurry, there will be more mistakes and the scrap rate will go up.”

Joe Design Engineer: “Again, I don’t have any problem assembling this design.  We just need to train the workers more.  Besides, our profit margin will be so good on this product that a little scrap is not a problem.”

Our guess is that many of you have had this conversation, or ones very similar to it.

The first point to make here is that the manufacturing environment is not the same as the development environment.  Almost anyone who has worked in both will echo that statement.  In manufacturing there is considerable time pressure to complete product assembly; there is turnover in the workforce; and workers move between differing assembly tasks, which reduces their focus on any single process.  Also, the parts used to assemble product are more variable in their characteristics (dimensions, mechanical modulus, surface roughness, etc.) than the parts used during development.

Beyond the direct cost of scrap itself is the relationship between scrap and quality.  One of the fundamental teachings of Six Sigma, borne out by experience, is that the higher the scrap rate in manufacturing, the lower the quality of the released product.  This holds even when the assembled product receives a final inspection.  If you want to produce high-quality product, reduce your scrap rate.

How low a scrap rate is acceptable?  The answer is complicated, and businesses often will not explicitly define it – instead offering that “if I am still making a profit, I am ok.”  This appears to work in the short term, but fails when another organization steals your market share and profits by producing a product that it can sell less expensively, or that is recognized as having higher quality and reliability.  Examples abound: the US auto industry versus the Japanese in the ’70s and ’80s, the race toward smaller and cheaper computer disk drives, the first drug-eluting stent approved for use in the US (it could not keep up with the competition in cost and quality, and its manufacturer is now out of the market)[2], and more.

Again, how low a scrap rate is acceptable?  We might take a clue from those industries that are operating effectively – most notably the electronics industry.  There, overall yield rates are often in the high 80-percent range and can be well into the 90s.  This is remarkable given that final yield is the product of the yields of the sequential assembly steps.  For example, for five assembly steps, each individual step having a yield of 90%, the yield of the overall process is 0.9 × 0.9 × 0.9 × 0.9 × 0.9 = 59%.  That is with only five assembly steps.  Hopefully, with that example, the drive to achieve “Six Sigma” yield levels on individual manufacturing steps becomes clearer.
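For readers who wish to play with this arithmetic, here is a minimal sketch in Python (the step count and yields are the illustrative values from the paragraph above):

    # Rolled throughput yield: the product of the individual step yields.
    def rolled_yield(step_yields):
        total = 1.0
        for y in step_yields:
            total *= y
        return total

    print(f"{rolled_yield([0.90] * 5):.0%}")    # five 90% steps -> 59%

    # Conversely: the per-step yield needed to hit a target overall yield.
    n_steps, target = 5, 0.90
    print(f"{target ** (1 / n_steps):.2%}")     # ~97.92% per step for 90% overall

Note how quickly the second number climbs toward Six Sigma territory as the number of sequential steps grows.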

The main point of all this is that the ability to achieve high throughput with low scrap rates in manufacturing is driven by the design.  If this point is not attended to during the design process, then no amount of “continuation engineering” will really fix the problem.  We simply cannot afford to keep having dialogs like the one at the start of this article.

On the regulatory front, organizations often struggle with satisfying, for example, 21 CFR 820.30(h), Design Transfer: “Each manufacturer shall establish and maintain procedures to ensure that the device design is correctly translated into production specifications.”  We will repeat here our much earlier statement: a quality system is not enough.  Focusing on the regulations, which appear to require procedures for “handing a design off to manufacturing,” entirely misses the point: effective design transfer starts with a design that is manufacturable in the first place.

So, by saying in the first paragraph that “no, we can’t (build it),” we mean that we often do not build our products in a way that is high-yield and high-quality, and that can continue to beat out the competition.  We accept what we believe is “good enough” without real regard to the downstream problems that acceptance causes.

In the vignette in the first article of this series, we saw this play out in a rushed development effort, followed by a handoff to manufacturing that resulted in obsolete components and overly tight component tolerances, driving up both cost and scrap rates.

Which brings us to … design for manufacturability.

This topic is today often taught in association with Lean practices.  However, to the authors’ knowledge, the subject really goes back to W. Edwards Deming[3].  No one who had the privilege to hear Dr. Deming speak can doubt his passion and sincerity for his subject.  Among the many points he made was the need to include the people involved in manufacture in the design process – and he specifically meant both the engineers and the manufacturing line employees.  It is the people who do assembly up close, every day, who really understand what the pitfalls are, and what many of the potential solutions are.  These are resources and sources of wisdom that we would be foolish to ignore.

In Deming’s era, this translated into the use of “quality circles.”  In the current lexicon of Lean, we speak of “the wisdom of the team,” or “the wisdom of the organization.”  Closely aligned with this is the Lean directive to “go to the Gemba.”  Because our intent here is not to teach Lean or Six Sigma, we will leave it to the readers to research these terms if not already familiar with them.  But we will make the point that the terms all represent methodologies that have at least one thing in common: actively drawing on the experiences, activities, and perspectives of those individuals who are closest to a specific activity or procedure.

In the context of product design, this translates into what we believe is the first and most important step in design for manufacturability: include on the design team significant representation by people who have had appropriate experience in the manufacturing environment.  Do not limit this to engineers: include representation by the manufacturing team itself.

With that perspective in place, we can then start to effectively use the tools and methodologies that are currently being taught as “design for manufacturing.”

These methodologies are widely known and readily available by searching the internet.  They include: “design for assembly” (DFA) guidelines, such as “reduce the number of parts” and “minimize assembly directions”[4]; scoring methods that enable semi-quantitative estimation of the time to perform an assembly process (typically known as “DFMA” – Design for Manufacturing and Assembly)[5]; Lean Design[6]; teachings from Lean to “poka-yoke” a part or process (i.e., make it literally mistake-proof so it cannot be assembled or used incorrectly); and teachings from Lean to mock up and practice a manufacturing line or process as part of the design cycle[7].  Many other such methodologies and teachings are out there.
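To give a flavor of the DFMA-style scoring mentioned above, here is a toy sketch.  It follows the general spirit of the Boothroyd-Dewhurst approach (ideal assembly time of roughly three seconds per theoretically necessary part), but every operation, time, and classification below is made up purely for illustration:

    # Toy DFA-style scoring (all numbers illustrative only).
    # Each operation: (name, handling time s, insertion time s, theoretically necessary?)
    operations = [
        ("base plate", 1.5,  1.5, True),
        ("gasket",     2.0,  3.5, False),   # candidate to integrate into the base
        ("housing",    1.9,  6.0, True),
        ("screws x4",  8.0, 24.0, False),   # fasteners are candidates for elimination
    ]

    total_time = sum(handle + insert for _, handle, insert, _ in operations)
    n_min = sum(1 for *_, necessary in operations if necessary)

    # Design efficiency: ideal time (~3 s per necessary part) over estimated time.
    efficiency = 3.0 * n_min / total_time
    print(f"estimated assembly time: {total_time:.1f} s")   # 48.4 s
    print(f"DFA design efficiency: {efficiency:.0%}")       # ~12% - lots of room

A low score like this is exactly the kind of signal that prompts the design team to eliminate parts and simplify assembly before the design is frozen.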

The purpose here is not to teach these methods, but rather to reinforce awareness of their existence.  Most of them can be learned on the Web via straightforward searching.  It is likely more effective to first practice them under the tutelage of someone experienced in them – but lacking that opportunity, it is still feasible to start out on your own from what can be learned online.

We conclude this article with a list of recommendations that are either stated above, or logically flow from that discussion.  None of these points are really new, and all of them are executable should an organization decide to do so:

  • Demand that individuals have experience in manufacturing before they are allowed on a design team. Actually doing this demands breaking down “silos” and reducing biases between organizational functions and their leadership – but it can be done.
  • Lacking the above, include significant manufacturing representation on the design team. Listen to them (their perspectives are golden).
  • Apply, as part of the design process, design for assembly, design for manufacturing and assembly, assembly process mock-ups, and other appropriate teachings from Lean. Be willing to change the design based on what you learn from these practices.
  • Do not accept low yield (i.e. low capability) processes. Low capability processes lead to high scrap costs, and reduced quality even in the released product.  If you accept these low capability processes, you may get your product out sooner, but your organization will pay a significant price downstream.

© DPMInsight, LLC 2017 All Rights Reserved

 


[1]  “Bob the Builder” is an animated children’s television show produced and copyrighted by HIT Entertainment Limited and Keith Chapman.  The iconic question and response from the show that many kids (now young adults) and parents will remember is: “Can we fix it? … Yes we can!”

[2] “J&J to quit struggling heart stent business.”  http://www.reuters.com/article/us-johnsonandjohnson-idUSTRE75E2PK20110615

[3] Out of the Crisis. W. Edwards Deming.  The MIT Press, Cambridge, Massachusetts. 2000.

[4] Different sources will have somewhat different versions of these guidelines.  One of them is available at: http://www.unm.edu/~bgreen/ME101/dfm.pdf.

[5] Software and information available from Boothroyd Dewhurst, Inc.  http://www.dfma.com/software/dfma.htm

[6] Software and information available from Munro and Associates, Inc.  http://leandesign.com/

[7] http://www.assemblymag.com/articles/89972-you-can-t-beat-a-good-old-mockup

Part 5 – Let’s Gather a Few Good Friends

In previous articles we focused on quality systems and design controls: how they relate to each other in practice; how they embody established best practices that are not taught in engineering schools; and how they should be burned into, taught, and propagated within the culture of any engineering organization.  In this article we start to focus more on organizational structure and organizational issues that are critical to effectively producing high-quality product.  As we have noted earlier, just having a quality system is not enough!

Frequently, a project for a medical device, combination device, or pharmaceutical starts with the vision and drive of an individual.  That individual is able to “sell”[1] their concept to a like-minded group of others.  In doing so, they form the nucleus of a team, which then grows to carry the concept forward[2].  Often this process of project growth is the same regardless of whether that initial individual resides within a tiny start-up, a small organization, or a large one.  There is great power in the ability of an individual to sell their concepts and to crystalize a team around that person’s vision.  Hargadon’s insight in this regard is remarkable.  But this process can be risky if it is not managed carefully.

Consider the fundamental teachings of Design Controls: they hold that the development/discovery team strives first to understand the needs of the stakeholders (users, patients, payers, the business, regulatory agencies, etc.).  Those needs are then translated into requirement statements.  Nowhere, at that initial stage, is there any mention of designs.  The reality is that frequently a project starts with a design concept pitched to the business and development team.  This reality is often driven by the experience that funding or support from investors or corporate leaders seldom comes from words alone: some kind of semi-functional prototype that people can touch and feel and have faith in is needed.  The authors recognize and support the role prototypes play in building understanding and support.  The critical difference is in the needed level of complexity and completeness of that prototype.  As engineers, we often have a predilection to drive prototypes to a “final” design.  However, a quickly and cheaply made 3-D print, or a paper mock-up of a display screen (thus avoiding time spent writing software), can be very effective in telling a story.  We also need to set appropriate expectations with investors and supporters.  Change the story from “this is what I will build” to “this is an example of what I can build.”  This can be a fine line to walk – but we must learn to do it.

Learning to walk this line is critical: the risks of starting with an embedded design concept are multiple, and they specifically undermine both adherence to design controls and the design of high-quality product.  If the individuals involved, and the organization itself, become overly invested in the design concept, it can be, and frequently is, defended and pushed forward even in the face of significantly increasing hurdles in technical problems, cost, and time.  Often this continues to the death of the project and the detriment of the organization.  All of this is seen in the vignette posed in Part 1 of this series.  In the vignette, the design team is not allowed to settle on the true requirements of their product; they are overruled by the champion and pushed forward with the design originally pitched to the organization.  The result is a design that is difficult to manufacture (high scrap rate), with parts that are becoming obsolete.  The outcome is a higher-than-projected product cost, a product that does not meet the needs of the customers and stakeholders, a product that fails at an unacceptable rate, and unacceptably high customer support costs.  The costs created by a rocky design process do not stop when the design process is finished.

As humans we look at the sunk cost of the project, and want desperately for that investment not to have been wasted: we are all susceptible to this “loss aversion”[3].  Make no mistake: this is a powerful and unconscious drive that can subvert data-driven decision making.  We sacrifice quality for the sake of “getting the project done.”  We justify design requirements after the fact, and defend the project despite its hurdles and costs – until everything falls apart.

Does this sound like a familiar experience to you?

Clearly, a dynamic leader around whom a team can crystalize is needed, or at least is greatly beneficial.  But how do we resolve the conflict between the leader’s vision and design flexibility?  First, we need to recognize that none of us sees the world as it truly is.  Rather, we see the world nearly completely through the lens of our own expectations.  This important concept is well described, from Kahneman’s descriptions of cognitive bias to Benjamin Zander’s Rule #1: “It’s all made up.”[4]  Truly internalizing this concept is important: it is a critical first step that empowers the leader and team to recognize that an initial design concept likely sprang from such biases.  Internalizing this concept will also aid the team’s understanding that the initial concept may well not be able to achieve acceptable quality.

How can a team avoid falling into the trap of devotion to an initial design concept?  After all, it is natural that we might become enamored with a unique, personal design.  Fortunately, there is a completely mechanistic method to help us avoid our personal biases … and from that method derives the title of this article.

The leader, the originator of the product idea, will typically reach out to people known to be of like mind to herself (or himself).  These people are more likely to get “on board” with the project, work on it, and help “sell” it to others.  In short, we naturally “gather a few good friends.”  Right?

Not so fast.  This is exactly what we do not want to do.  What we need to do instead is … uncomfortable (but powerful).  To describe why, and what this more effective approach is, let’s present a teaching tool that author Hamlen has used in teaching design methodologies.

Consider that we have a design goal, and are seeking to understand the greatest breadth of design options that will achieve that goal.  The square below is meant to represent the entire breadth of human knowledge pertinent to our design problem:

Consider that author Hamlen, trained as a Chemical Engineer with experience in computational modeling of designs, brings to bear his background and knowledge to solve this design problem.  Of the total available knowledge, his knowledge might be represented as the blue circle in the diagram below:

To build the team, Hamlen reaches out to a good friend from the same Chemical Engineering program.  Call that person “Joe.”  On the diagram below, Joe’s likely scope of knowledge is indicated by the circle in red.  Hamlen also reaches out to “Sue,” a mechanical engineer with whom he has worked closely, specifically on computational modeling of designs.  Sue’s available knowledge on this subject is indicated by the circle in green on the diagram below:

The problem becomes clear: by reaching out only to those who are “like me” – those with whom I work well and who I expect (or want!) to “confirm my perspective” – I do not avail myself of different perspectives that will challenge the inviolability of my initial personal design idea.

There is an important concept to understand here, known as the Strength of Weak Ties[5].  A fun, more current-day example demonstrating this concept arises from a global, WWW-enabled study of the “small-world” phenomenon, otherwise known as the “six degrees of separation” theory popularized through the “six degrees of Kevin Bacon” game.  The study[6] defined 18 “target” persons in 13 countries and challenged more than 60,000 e-mail users to get a message to one of the targets.  Its findings showed that successful attempts to reach a target primarily used intermediate-to-weak “social” ties.  Further, only 5–7 “steps” were needed to reach the target – a small world indeed!

The strength of weak ties holds that the collection of those whom I know “just a little” has access to much more information than do I and my close associates.

So, in trying to determine potential design options for his product, ChemE author Hamlen should not reach out to ChemE “Joe.”  Perhaps he should instead reach out to author Parsons (with training in microbiology and extensive experience in quality system deployment), and also to an acquaintance of Parsons, “Barb,” who is an EE, and also to a friend of Barb’s, “Peter,” who is in marketing, etc.  By doing so, the breadth of human knowledge pertinent to the design problem that is actually available to the team is significantly increased, as illustrated in the diagram below:

By taking this approach to building the team – accessing people you “know a little,” and who have knowledge and perspectives different from your own – you make available to the team a greater breadth of knowledge and experience.  The result is an increase in the likelihood of 1) detecting deficiencies and pitfalls in that first “pet” design concept, and 2) identifying alternate design concepts that both avoid the pitfalls of the first and better meet the stakeholders’ needs.  As Kahneman (Thinking, Fast and Slow) would state: “other people have a superior ability to detect the flaws in our own logical arguments than we do ourselves.”
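A toy way to see the arithmetic of this effect is to model each person’s pertinent knowledge as a set of topic areas.  All of the names and topics below are made up for illustration; the point is only how the union grows:

    # Toy model: each person's pertinent knowledge as a set of topic areas.
    hamlen  = {"chem eng", "computational modeling", "polymers"}
    joe     = {"chem eng", "computational modeling", "reactors"}      # same training
    sue     = {"mech eng", "computational modeling", "tolerancing"}   # close collaborator
    parsons = {"microbiology", "quality systems", "regulatory"}       # weak tie
    barb    = {"electrical eng", "embedded systems", "sensors"}       # weaker tie
    peter   = {"marketing", "voice of customer", "reimbursement"}     # weaker still

    like_me_team  = hamlen | joe | sue            # heavy overlap with Hamlen
    weak_tie_team = hamlen | parsons | barb | peter

    print(len(like_me_team))   # 6 distinct topic areas
    print(len(weak_tie_team))  # 12 distinct topic areas - twice the coverage

The “like me” team adds little that the leader does not already have; the weak-tie team covers far more of the square.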

Note that taking this approach is a purely mechanical and conscious activity – no new “learning” or additional “skills” are needed.  Simply enforce the choice not to bring your “friends” onto the team, and be open to challenges to the initial design concept.  There is an apparent price to pay when taking this approach: friction in the team.  Imagine what is bound to happen when author Hamlen assembles the team and presents the bright, shiny, initial design concept that he is convinced is the best idea ever.  Peter, who arrives at this project with a very different set of experiences and preconceptions from Hamlen’s, will say something like “that is a stupid idea … it will never work,” or “I think this other idea that I have is better.”

Likely, most of you have seen this happen before.

Friction is uncomfortable – so we avoid it.  That avoidance is why we tend to build teams with people we know we have worked “well” with in the past.  However, a certain degree of friction is exactly what a development team should expect, create, and embrace.  That friction, and the discussion that goes with it, uncovers the hidden flaws that reduce quality in a design, while identifying alternative designs that arise from the greater body of available knowledge.  This process leads to a high-quality product.  Creating, capturing, and nurturing disagreement is the secret the team and its leaders need to master.  Patrick Lencioni[7] addresses this as the need to erase “fear of conflict,” embracing healthy conflict through candid debate.

Actually accomplishing this is not easy, but it can be done.  It starts with the team leader setting the stage for conflict by assembling a diverse team.  It is further fostered by enforcing the team norm that candid debate is expected, and that alternative views will be listened to and incorporated into the team decision.  The initiator of the project idea must be willing to “let go” of his/her initial design concept.  Doing so does not make the time spent on the initial concept a “waste” – rather it becomes the critical “seed” that sparks meaningful discussion by the team.

Another point to strongly consider in nurturing and managing constructive conflict is one made in an earlier article: many of the decision-making “tools” taught in Six Sigma and Design for Six Sigma are actually methodologies for building consensus in a team or organization.  Organizations should learn to use these methodologies, at the right time and place, to build consensus around design decisions.  When this is not enough, and a team is not able to work constructively through differing perspectives, consider using a trained and experienced facilitator.  Using a facilitator is nothing to fear; doing so is really just a continued embrace of the value of constructive conflict.

Further Thoughts: On Stakeholder Needs and Requirements, not Designs:

Closely associated with the need for a team to look past, and not get stuck in, “initial” design ideas is the best-practices teaching that a project should start with a definition of “requirements” for the product, not a design.  As stated in earlier articles, this teaching comes equally from the engineering disciplines of project management, systems engineering, Six Sigma, and Design for Six Sigma.  By reducing the product concept to a high-level, generalized statement involving neither design nor testing elements, the team is freed from fixation on an “initial” concept.  This opens their attention to alternate concepts that might be of higher quality.  This is also exactly the effect of taking the up-front time in the TRIZ[8] methodology to define the “Ideal Final Result” … which is really nothing more than a requirement statement.  When done correctly, the team members should experience an “Aha!” moment, with a thought of something like “Oh – that is what we are really trying to accomplish!”

Taking the time to work from design-free requirements, and using those requirements to guide the design selection, is key to achieving a high-quality design.  The power of that key step is lost when the team focuses on trying to make an initial design concept “work,” despite the obstacles encountered.

Further Thoughts: On the Power of Usability Engineering:

It is also important that team members, sitting in their office or lab, understand the risk of conflating their own perception of a user’s need with the user’s actual need.  Surprisingly often, the team members’ belief does not reflect the real user’s need or method of accomplishing a task.  This approach (call it “armchair engineering”) often leads to poor-quality designs (at least one of the authors will admit to having learned this lesson the hard way).

A memorable image to drive this point home is the “Coffeepot for Masochists” image used by Don Norman, and introduced in his classic text “The Design of Everyday Things[9]”:

(Photo courtesy of Don Norman)

Don Norman is one of the pioneers of “human factors” engineering (today more commonly called “usability engineering”).  Norman’s original thesis held two basic points: 1) how to use a design should be intuitively obvious (i.e., no manual should be needed), and 2) the design should meet the real needs of the user.  The coffeepot is a vivid counter-example: it is not clear how it should be used … and it certainly does not meet the needs of a user (unless the user is indeed a masochist)!

A better approach than sitting in the office or laboratory is to connect with potential users through Voice of the Customer (VOC) exercises, in which the purpose of a new device is described to them.  Let the users voice opinions regarding how they use a current device or would envision using the new device.  Sift through comments looking for what may eventually be adopted as requirements for the product.

For the purpose of the current series of articles, one cannot escape the reality that proper application of usability engineering is intimately tied to achieving high quality product: a user who views a product as “difficult to use” or “difficult to understand” will view the product as being of poor quality … regardless of the verification results of the design team.

© DPMInsight, LLC 2017 All Rights Reserved

 


 

[1] The value of technical people being able to sell an idea within an organization, or to a broader audience, is worthy of in-depth discussion, but must be relegated to a later article.

[2] In his book “How Breakthroughs Happen,” Andrew Hargadon describes this near-spontaneous growth of successful, energetic teams as akin to a “phase change” in a material.  A phase change results from the gathering of like material around a nucleation site, which segregates “like” material from “unlike” material.  In the same way, like-minded individuals gather around one another, sparked by the vision and concepts of an individual.

[3] See “Thinking, Fast and Slow”, Daniel Kahneman, Farrar, Straus and Giroux, 2011.

[4] “The Art of Possibility: Transforming Professional and Personal Life.” Rosamund Zander and Benjamin Zander. Penguin Books. 2002.  (Those who have not seen Benjamin Zander present this material might want to search out videos of him on the Web.  Watching and listening to him is a special opportunity.)

[5] “The Strength of Weak Ties”. Granovetter, M. S. The American Journal of Sociology 78(6):1360-1380. 1973.

[6] “An Experimental Study of Search in Global Social Networks”. Dodds, P. S., Muhamad, R., and Watts, D. J. Science 301: 827-829. 2003.

[7] “The Five Dysfunctions of a Team: A Leadership Fable”. Patrick Lencioni. Jossey-Bass. 2002.

[8] TRIZ (“trees”) is a design and problem solving approach first developed by the Soviet inventor Genrich Altshuller.  See: https://en.wikipedia.org/wiki/TRIZ.

[9] “The Design of Everyday Things”. Donald A. Norman. Basic Books (reprint edition). 2002.

Part 4 – Is This Really a Requirement?

Consider the following statements derived from our laws and guidances for product design:

  • “…Each manufacturer shall establish and maintain procedures to ensure that the design requirements relating to a device are appropriate and address the intended use of the product …” (21 CFR 820.30(c))
  • “… The organization shall determine … requirements specified by the customer … requirements not stated by the customer but necessary for specified or intended use … regulatory requirements … requirements determined by the organization.” (ISO 13485:2003)
  • “… the stakeholder requirements govern the system’s development and are an essential factor in further defining or clarifying the scope of the development project … Outputs of the Stakeholder Requirements Process establish the initial set of stakeholder requirements for project scope …” (INCOSE Systems Engineering Handbook, October 2011)
  • “…In a modern quality systems manufacturing environment, the significant characteristics of the product being manufactured should be defined from design to delivery…” (Guidance for Industry, Quality Systems Approach to Pharmaceutical CGMP Regulations, September 2006)

In a previous article we wrote about the activities of analyzing and defining stakeholder needs and translating those needs into product requirements.  We also described how these activities are included in the practices of project management, Six Sigma, and Design for Six Sigma.  Because these steps during product development are so widely accepted, there is clearly something important about their influence on high-quality product.  In this article we focus on requirements: how they benefit the design process, and how – when properly used – they lead to high-quality product.  We also examine the pitfalls that many practitioners and organizations fall into – pitfalls that actively lead to poor-quality product.

The attributes of high-quality requirement statements[1] are widely published.  21 CFR 820.30 maintains that requirements should be complete, unambiguous, and non-conflicting.  The INCOSE systems engineering handbook[2] maintains that requirements are necessary, implementation-independent, clear and concise, complete, consistent, achievable, traceable, and verifiable – and maintains that the use of the word “shall” implies a requirement, and that use of the word “should” shall be avoided.
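As a trivial illustration of the “shall versus should” guidance, a requirement set can even be screened mechanically.  The word list and the sample statements below are our own illustrative choices, not INCOSE’s:

    import re

    # Heuristic screen for weak wording in requirement statements.
    WEAK_WORDS = re.compile(r"\b(should|may|might|could)\b", re.IGNORECASE)

    requirements = [
        "The device shall power on within 2 seconds.",
        "The enclosure should be green where possible.",
    ]

    for statement in requirements:
        weak = WEAK_WORDS.findall(statement)
        flag = "OK" if not weak else f"WEAK ({', '.join(weak)})"
        print(f"{flag}: {statement}")

Such a screen catches only wording, of course; as the rest of this article argues, the harder question is whether the statement should be a requirement at all.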

The preceding paragraph describes the quality of individual requirement statements – the level of analysis that most organizations apply to requirements within their quality systems.  However, few organizations give thought or discussion to what a requirement really is – and not asking this question creates significant costs, risks, and confusion for the organization.  The authors hold that understanding, and rigorously enforcing analysis of, what a requirement truly is is extremely important to high-quality product – perhaps more so than the syntax of individual requirement statements[3].

So … what is a requirement?  How can we distinguish between something that is, versus is not, a requirement?  Thinking critically, the answer is in INCOSE’s recommended word “shall.”  Words can be loaded, and the authors understand that some organizations dislike the use of the word “shall.”  However, for current purposes we follow INCOSE’s lead.  “Shall” marks a characteristic of a product that absolutely, positively, demonstrably, and without exception has to be met.  If that product characteristic is not met, you do not ship the product.  Period.  There is a key point here that many organizations do not take fully to heart: a requirement is something that has to be met in order to meet the expectations of the stakeholders.  Another way to think about this is to ask: “If the product is shipped without the fulfillment of this ‘requirement,’ will the stakeholders care or even notice?”

That last sentence might seem trite or obvious, but the authors believe both that there are deeper implications to it, and that truly critical application of this question is a key starting point to producing high-quality product.

Why this belief? Because, as stated in our first article, if we do not rigorously reduce the set of requirements to those that are absolutely critical to meet the needs of the stakeholders, we open the door to “too much.”  We produce designs that are too complicated, too expensive, and take too much time to develop and manufacture.

Also, we need to understand that requirements are not “free.”   Each requirement has implementation costs that must be considered during the development process.  These include costs to collect/define the requirements, costs to design to those requirements, and costs to verify/validate the design.  During manufacture, each requirement results in more critical design outputs that must be monitored and controlled.  This, in turn, results in more criteria being placed on components from vendors.  When we have more monitoring and control activities than we can reasonably achieve, we will, and we do, “drop the ball”.

In the vignette posed in the first article of this series, the team is forced to proceed on a fast timeline and without a clear understanding of the requirements for their product.  In that circumstance, the tempting thing to do is to rapidly adopt requirements without critical thought about their implications.  Moreover, in the vignette the champion ultimately forced upon the team his/her concept of what the product should “be,” based on his/her preconceptions.  Unfortunately, this action bypassed the critical step of definitively understanding what the stakeholders really need[4].  In the vignette, manufacturing discovers it is difficult to manufacture the product and its components without high scrap.  This likely results from inappropriate requirement setting, and possibly from conflicting requirements.  Finally, manufacturing discovers they are not controlling all the critical manufacturing tolerances – a sign that there are so many requirements that it is unclear what the critical design outputs are, and/or that resources are not available to effectively execute all the monitoring required.

We take too much time and spend too much to complete a design, and then fail to demonstrate that we are monitoring and controlling the critical design outputs that we should.  Not demonstrating that control can result in a regulatory finding during an audit.  What follows is product declared as non-conforming, 483 notices, warning letters, recalls, and worse!  We then experience significant organizational thrashing and non-revenue-producing activities to try to correct the situation.  This is especially regrettable when we go through all this for a “requirement” that the stakeholders would neither notice nor care about.

The worst result of defining requirements that should not be requirements is the following: in the confusion and overload that arises from attempting to monitor and control design outputs that do not need that attention … we fail to monitor and control design outputs we really do need to monitor.  These design outputs are the ones that stakeholders will care about and do notice when they are not present in the product!  Six sigma calls these design outputs “critical to quality.”  21 CFR 820 calls them “design outputs that are essential for the proper functioning of the device.”

Unfortunately, it is far too easy to stamp the label “requirement” on any statement we want to.  If we do so too easily or too frequently, we create the cascade of work, distraction, and errors described above.

So, how can we drive critical thought to differentiate between a true requirement and one that is not?  As a back-door way to answer this, we ask readers the following: “How many times have you been part of a material review board (or similar board) tasked with reviewing non-conforming product … and ended up justifying a deviation to ship that material or product?”  If so, you were also faced with the following question: what is your response to the FDA investigator when questioned about your decision to effectively ignore a requirement?

Come on, admit it – this happens often.  Here is the point: if you justified a deviation and shipped the product, then that non-conforming characteristic should never have been a requirement.  It is far better to distinguish requirement from non-requirement early during the design process.

There is no single, simple answer for differentiating between “real” and “not real” requirements.  Fortunately, there is a suite of approaches that can be used.  A good starting point is to ask: will we advertise that function or characteristic?  Will we make that claim in a brochure or manual?  Will it become the basis for treating or diagnosing a patient condition that we will build a regulatory claim around?  These types of questions are usually fairly high-level and non-technical, but they help distinguish what we can call a requirement from what we must.

For example, if we are designing a ball for use in a sports game, our users will care (and notice) whether the ball is “round and fits comfortably in the hand of the typical player.”  The players do not care, and likely will not notice, whether the ball diameter is 3.5 ± 0.1” or 3.8 ± 0.1” with eccentricity less than 0.1 and roughness less than a certain value.  This is an example of “can do versus must do.”  We can require a diameter, a tolerance for that diameter, an eccentricity, and a roughness – but must we?  Certainly things like diameter, eccentricity, and roughness can be design outputs – but we need to ask how tightly we must control them before a stakeholder notices.  Often we do not need to control them nearly as tightly as we believe.  In this example, any ball of diameter, say, 3.3” – 3.9” might well do.  As designers, we all too easily and frequently jump to declaring “shall be 3.5 ± 0.1”” whereas all we really need is the statement “fits comfortably in the hand of the user.”  If we take the former approach, a manufactured ball with diameter 3.7” is non-conforming product.  If we take the latter approach, that same ball is just fine[5].

Another trap we fall into is substituting “what we can buy or source” for what we really need.  Take that same ball.  We discover that we can source from an outside vendor a ball with diameter 3.6 ± 0.1”.  With that knowledge, we write into our requirements that the ball must be 3.6 ± 0.1” in diameter.  In the vignette in the first article of this series, under time pressure to get the design finished, the design team likely would have latched on to values they could quickly define, rather than taking the time to understand what was really needed.  What do we do when, later, the vendor’s manufacturing process changes, or we shift to another vendor?  In our example, the balls might start coming in with diameter 3.7 ± 0.2”.  What we classically do is: panic.  We blame the vendor (which is destructive to cooperative relationships), exhort them to change the process back to where it was (which often they cannot), and then finally justify shipping the product anyway.

If instead the requirement is “fits comfortably in the hand,” a minor shift in the diameter of sourced balls is a non-issue.  To be sure, a range in the design output of 3.3” to 3.9” might be defined as acceptable, but because this dimension is not defined as critical to quality, only occasional monitoring is needed.  As designers we make this misstep of mistaking “can source” for “need” too easily and too often.  Examples include: battery size or capacity, container volume, color (few are going to notice the difference between pale green as rgb(102, 255, 102) versus rgb(110, 240, 110)), part dimensions when stack-up is not an issue, roundness of an edge, etc.
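A toy simulation makes the consequence of the two requirement styles concrete.  The process shift and spec limits are the illustrative values from the ball example above, and treating “± 0.2” as three standard deviations is our own assumption:

    import random

    def scrap_rate(mean, sd, low, high, n=100_000):
        """Fraction of balls falling outside [low, high] (toy Monte Carlo)."""
        rejects = sum(not (low <= random.gauss(mean, sd) <= high) for _ in range(n))
        return rejects / n

    # Vendor process has shifted to 3.7" +/- 0.2" (treated here as 3 sigma).
    mean, sd = 3.7, 0.2 / 3

    print(f"tight spec 3.5-3.7\": {scrap_rate(mean, sd, 3.5, 3.7):.1%} scrap")  # ~50%
    print(f"loose spec 3.3-3.9\": {scrap_rate(mean, sd, 3.3, 3.9):.1%} scrap")  # ~0.3%

The identical vendor shift produces roughly half the lot non-conforming under the tight “requirement,” and essentially no scrap under the requirement the stakeholder actually cares about.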

The key point here is to recognize the difference between design output that must be tightly monitored and controlled because it is directly linked to satisfying a stakeholder requirement, versus design output that is not tightly linked to satisfying a stakeholder need.  The latter needs much less rigorous and less frequent monitoring and control (which is not to say no monitoring).  Imagine the difference in manufacturing execution between building to a dimensional drawing with 50 dimensions that must all be tightly controlled, versus a drawing with 49 dimensions that need to be defined (to manufacture the thing) but not tightly controlled, and only one dimension that must be rigorously monitored and controlled…

If you can accept what was said above, here is the next challenging step in thinking: better yet, simply do not make something a requirement at all!  This goes against the grain of many design teams.  But if we can master this perspective, it yields incredible freedom to execute and to focus on those requirements we truly need to focus on.

Let’s take our ball example.  Ok, we have the requirement that it is “round and fits comfortably in the hand of the typical player.”  But who or what says that we need to say anything at all about color, or weight, or texture, or internal pressure, etc.?  Certainly these are design choices we need to make to actually source or manufacture the product.  But if they are not initially stamped as “requirements,” and if the design decision is not linked to satisfying any other requirement, then our regulatory and quality burden to monitor and control those aspects of the design is much, much lower.  This then allows us to truly focus on those design outputs that must be closely monitored and controlled.

Experience has shown that truly critical assessment of a requirement set, i.e. identifying what statements are truly “must have” statements, allows reduction of the number of requirement statements by about an order of magnitude.  Imagine shifting from designing, manufacturing, and controlling a product with 200 “requirements” to one that only has 20 requirements.  Imagine, in the vignette in our first article, the impact on MQD, Inc. if they had driven to accomplish this.

In an earlier article (“A Quality System is Not Enough”) we made the point that what really matters in producing high-quality product is not the quality system itself, but how that quality system is used.  The discussion in the present article is a case in point: the same quality system can give rise to a product with 200 requirements … or 20 requirements.  What is regarded as a requirement has changed – not the quality system.

There is another thought to consider here that will likely cause some people to disagree, but which the authors sincerely hope will give design teams pause.  All the best practices describe requirements as design-free.  They are statements of “what” a design needs to accomplish, not “how” that will be accomplished (the latter is design output).  Thus design controls, especially requirement statements, are fundamentally based on “soft,” more “intuitive,” more “conceptual” facets of the design activity.  Conversely, many engineering disciplines (and thus the people who are attracted to them) are based on a more “physical,” “structural” understanding of our designs.  The difference between these learning and thinking styles is illuminated in an excellent slide shown to incoming freshmen at the University of Minnesota College of Science and Engineering (permission to use this slide has been graciously given by Dr. Paul Strykowski, U of M Associate Dean for Undergraduate Programs):

Dr. Strykowski’s point is that, in his experience, an individual is likely to learn and execute effectively on only one side of the dichotomy of “physics intensive” versus “chemistry intensive” disciplines, depending on the student’s inherent learning and thinking style.  This slide resonated with the authors because we have considered the same dichotomy as distinguished by “concrete,” “physical” thinking (the left side of the slide) versus “conceptual” thinking (the right side of the slide).  You can touch, feel, see, and physically manipulate most of the pieces associated with the disciplines on the left side of the slide.  You cannot touch, feel, or see a chemical, a molecule, a chemical reaction rate, or a property of a material – you need to think about those things conceptually.

In exercising design controls, much of the initial work is conceptual.  Yet many product design teams are made up predominantly of individuals drawn from the left side of Dr. Strykowski’s slide.  The result is that such teams move very quickly and naturally to substituting detailed design output for what should be design-free, conceptual requirements.  This leads to the problems we all too often experience in designs.

Are engineers ill-suited to manage design controls?  (Don’t yell too loudly!)  We are not sure, but we believe the question is worth serious consideration.  The ability to think conceptually can be taught and needs to be reinforced: this effort needs to be made on all design teams using design controls.  Perhaps also we should strongly consider bringing non-technical people onto the design team to at least manage and oversee the definition of the initial set of product and stakeholder requirements.  Such people should be well placed to quickly distinguish between “what the stakeholder wants” versus design output that does not need to be tightly controlled and that the stakeholders never really think about.

© 2017 DPMInsight, LLC all rights reserved.


footnotes:

[1] Note that we are explicitly distinguishing three concepts: whether a single requirement statement is of high quality; whether that statement should, in fact, be a requirement; and whether the complete set of requirements makes sense or is overly burdensome to the organization.  These are different concepts, and typically only one of them is paid attention to by development organizations.

[2] INCOSE Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th ed. International Council on Systems Engineering. Wiley. 2015.

[3] This is not to diminish the importance of clarity of those individual statements: an unclear or quantitatively ill-defined requirement cannot be suitably verified or validated.

[4] It is often claimed by designers that through experience they “know” what the customers need or want.  However, experience with Usability Engineering, which forces on-the-spot observation of how something is used or how an action is executed (what Lean calls “Going to the Gemba”), frequently reveals large and critical differences between reality and the designer’s assumptions.  Formally going through the process of defining, and defending, stakeholder’s needs is critical to avoid falling prey to incorrect biases or assumptions.

[5] Some might argue that you cannot measure “fits comfortably in the hand of the user” – but yes you can.  That is precisely what Human Factors and Usability Engineering is all about – and why regulatory agencies are presently placing high emphasis on this practice.

Part 3 – Everything I Know I Learned After College

Consider the following posted requisition for hire:

Senior / Principal Quality Lead

“ … This position is responsible for supplying quality leadership throughout the organization.  The Quality Lead will interact with and influence other cross-functional team members, including R&D, manufacturing, regulatory, clinical, etc.

Responsibilities include:

  • representing Quality on new product introduction projects
  • developing, assessing, and validating test methods
  • assuring compliance with internal SOPs and external regulations
  • leading Root-Cause and CAPA efforts, Risk Management, Risk Reviews, Material Review Boards, and Change Control needs
  • representing the Customer, identifying their needs, and reducing customer complaints
  • driving and fostering a quality environment and mindset through the business: providing training, guidance, leadership, and expertise to the R&D, quality, and manufacturing organizations.

Skills needed:

  • Competency in statistical techniques: Gauge R&R, SPC, process capability, sampling plans and sample size determination, DOE, ANOVA, regression, Lean

  • Design FMEA, process FMEA
  • Test method and protocol development for verification and validation studies, and evaluation of test results
  • Demonstrated ability to drive change
  • Strong analytical and problem solving skills
  • Ability to multi-task

Experience

  • Practical knowledge of FDA Quality Systems Regulations, ISO 13485 (medical device quality management systems), ISO 14971 (Risk Management), IEC 62366 (usability engineering)
  • Previous experience working in an FDA regulated environment
  • Bachelor’s degree in an engineering discipline or related STEM field, and 1-5 years experience.

… “

OK – this is not an actual requisition duplicated verbatim.  But it is a synthesis of real job postings from this past year, all of them seeking hires in the medical device and combination device arena.  The “posting” is representative of a great many advertised job descriptions and the criteria against which candidates are evaluated.

Let’s first look at the needs that appear to drive this job description.  In a previous article (“A Quality System is Not Enough”) we indicated that the best practices of systems engineering, Six Sigma, and project management all supply the tools, and practices using those tools, to support development of high-quality products.  The job description above contains the responsibility to “represent the customer and to identify their needs.”  These activities of identifying stakeholders and evaluating their needs come directly out of project management, Six Sigma, and systems engineering.  They are also the critical first step in the FDA and ISO design controls, which call for identifying what it is that the new product needs to do.  These needs statements are the “Design Inputs” (DI).  It makes sense that this skill is needed for new product development and design.

Next in the requisition are a series of needs around verification and validation, along with associated test method development and evaluation of results.  First, we note that the activities of verification and validation are integral to systems engineering and project management – though they are often confused.  A useful graphic to illustrate the difference between verification and validation is the “V diagram” from systems engineering:

Follow the arrows from the top left: first define user needs, use those to define system requirements, then subsystem design requirements, and so on.  After you do the design work down to the lowest levels, work your way up the right-hand side: verify that what you actually made meets the component, subsystem, and system design requirements.

As we will discuss in a later article, the design requirements that are “verified” are typically posed in technical language: a dimension is 5 mm, a weight is 12.3 g, a volume is 7.8 ml, and so on.  Some non-technical language may be present at the “higher” system levels.  At “lower” levels the language becomes more specifically technical … but at all levels the requirements are always measurable.  The needed verifications can be done in the lab (as defined by the test protocol) with micrometers, balances, MTS machines, etc.  They can also be accomplished by: inspection of drawings; inspection of the physical product; demonstration of function; or analysis via mathematical models or other calculations.  Verification can, and should, be an ongoing activity: it takes place throughout the design process – not just as a single final test.
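Because these requirements are quantitative, the pass/fail decision at the heart of a verification test can literally be computed.  The following minimal sketch (in Python, with entirely hypothetical dimensions and measurements) illustrates that logic; a real protocol would of course also define the sample size, the measurement system, and formal acceptance criteria.

———————————————————

# Hypothetical verification check: does a measured dimension meet its
# design requirement of 5.00 mm +/- 0.05 mm?
from statistics import mean, stdev

REQ_NOMINAL_MM = 5.00   # nominal dimension from the design requirement
REQ_TOL_MM = 0.05       # allowed tolerance (+/-)

measurements_mm = [4.98, 5.01, 5.00, 4.99, 5.02, 5.01, 4.97, 5.00]

low = REQ_NOMINAL_MM - REQ_TOL_MM
high = REQ_NOMINAL_MM + REQ_TOL_MM
out_of_spec = [m for m in measurements_mm if not (low <= m <= high)]

print(f"mean = {mean(measurements_mm):.3f} mm, "
      f"stdev = {stdev(measurements_mm):.3f} mm")
print("VERIFICATION PASS" if not out_of_spec
      else f"VERIFICATION FAIL: out of spec: {out_of_spec}")

———————————————————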

The language used to define User Needs, however, is a whole different thing.  A user will not come out and say “I want this widget to be 5.6 cm in diameter with a maximum weight of 0.23 kg.”  They say something more like, “I want to be able to comfortably hold this in my hand,” or “I want to be able to turn the knob in an operating room while wearing sterile gloves.”  A design is validated by giving the user the final product (or a suitable substitute) and asking, “Does this do what you need it to do?”  (The actual execution of usability engineering is more rigorous than this suggests, but hopefully the gist of the point comes through.)

So – validation is about confirming that the design output (DO), or the device itself as actually built, meets the user’s needs and expectations.  Verification is different: it does not involve human factors; rather, it demonstrates that each product specification, at each “level” of the design, is met.  Verification requires that laboratory-based tests, inspections, or analyses be developed, executed, and interpreted.  It is crucial for the business needs of an organization to take note that a design could pass verification against its specifications, yet fail validation because it does not meet the user’s needs.
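By contrast, here is an equally hypothetical sketch of the validation side.  Note what changes: we no longer measure the device against a specification; we observe whether representative users can accomplish their tasks.  The tasks, counts, and acceptance criterion below are invented purely for illustration.

———————————————————

# Hypothetical summative usability (validation) summary: success is
# determined by observing users, not by measuring the device.
task_results = {
    "turn knob while wearing sterile gloves": [True] * 14 + [False],
    "hold device comfortably during use": [True] * 15,
}

ACCEPTANCE_RATE = 0.93  # hypothetical minimum task-success rate

for task, outcomes in task_results.items():
    rate = sum(outcomes) / len(outcomes)
    verdict = "PASS" if rate >= ACCEPTANCE_RATE else "FAIL"
    print(f"{task}: {rate:.0%} success -> {verdict}")

———————————————————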

Returning to our requisition, this position clearly needs skills in test method development.  The methodologies used to perform these activities are mostly statistical methods that are integral to the body of knowledge contained within Six Sigma (gauge R&R, sampling plans and sample size determination, capability analysis, regression, etc.).  Less obvious, but still present in the job description, are tools that assist in effectively determining the ability of a design to meet the stakeholders’ needs (DOE, ANOVA, regression, root-cause analysis, and FMEA).  Again, the use of all of these methodologies makes sense in satisfying the regulatory “requirements.” All of these methodologies are contained in the bodies of knowledge of Six Sigma and systems engineering.
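To show how directly these statistical tools connect to the requirements being verified, here is one small sketch, again in Python with made-up numbers: a process capability (Cpk) calculation against a requirement’s specification limits.

———————————————————

# Hypothetical process capability (Cpk) against specification limits.
from statistics import mean, stdev

LSL, USL = 4.95, 5.05   # hypothetical lower/upper spec limits, mm
data = [5.01, 4.99, 5.02, 5.00, 4.98, 5.01, 5.03, 4.99, 5.00, 5.02]

mu, sigma = mean(data), stdev(data)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

print(f"mean = {mu:.3f}  stdev = {sigma:.4f}  Cpk = {cpk:.2f}")
# A common rule of thumb treats Cpk >= 1.33 as a minimum capability target.

———————————————————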

Continuing the evaluation of the requisition, we see a series of responsibilities and skills often considered “soft” skills: “representing quality throughout the organization” (i.e. influence management), “fostering a quality mindset throughout the organization” (i.e. change management), “providing training, guidance, leadership” (i.e. instruction and mentorship), “participating in Material Review Boards” and “change control needs” (i.e. critical thinking).  These skills and methodologies are not taught by systems engineering, Six Sigma, or project management (although project management does discuss and foster them under “develop” and “manage” the project team).  Neither are these skills taught in an engineering education: they are learned over time through experience, and occasionally through hard-learned lessons.  For some, these skills are never learned.

Not explicitly stated in the job description, but definitely present, is a need for something we seldom talk about: confidence, strength, maturity of personality, and a willingness to engage in productive conflict.  In sum: an ability to challenge an organization, its constituent functions, and its leadership, to think differently and to consider options (whether design or process options) other than those with which one is familiar or would otherwise prefer.  This is the ability to see through the fog and identify the core element to execute – and to teach others how to do the same.  In short – to lead.

These skills are definitely not taught in the engineering disciplines.  The authors would argue that the ability to play a leadership role comes from hard-learned experience and time on the job.  Six Sigma and Design for Six Sigma do, however, teach methodologies that support an individual executing a leadership role in a team.  Although many of the decision-making tools taught in Six Sigma and Design for Six Sigma are taught as “engineering” tools, when effectively applied they are really methodologies for leading people to consensus in a team or organization.  Examples of such tools include: prioritization/selection matrices, RACI matrices, affinity diagrams, “house of quality” incidence matrices, process flow charts, fishbone/Ishikawa diagrams, 5-why techniques, FMEA techniques, Pugh concept selection matrices, the analytic hierarchy process, and many more.  Think about it: even graphical displays of data (Pareto charts, histograms, scatter plots, regression analysis, etc.) and statistical methodologies, with their agreed-upon significance levels, p-values, etc., are methodologies for building consensus in the face of potentially confusing information.
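To illustrate how mechanical, and therefore how transparent, one of these consensus tools is, here is a minimal Pugh concept selection sketch.  The criteria, weights, and scores are hypothetical; in real use the team agrees on them together, and that shared scoring exercise is precisely where the consensus gets built.

———————————————————

# Hypothetical Pugh concept selection matrix: each concept is scored
# -1 / 0 / +1 per criterion against a datum (reference) concept.
criteria = {                 # criterion -> weight
    "meets user need": 5,
    "manufacturability": 3,
    "unit cost": 2,
}

concepts = {                 # concept -> {criterion: score vs. datum}
    "Concept A": {"meets user need": +1, "manufacturability": 0, "unit cost": -1},
    "Concept B": {"meets user need": 0, "manufacturability": +1, "unit cost": +1},
}

for name, scores in concepts.items():
    total = sum(criteria[c] * s for c, s in scores.items())
    print(f"{name}: weighted score vs. datum = {total:+d}")

———————————————————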

Finally, in the job description there are requirements of practical knowledge of regulatory and ISO requirements, combined with previous experience working in an FDA regulated environment.

All of this is asked for with 1-5 years of experience after leaving school.  We are not distorting the experience called for – check the job requisitions that are now being posted.

This example posting, and many like it, is for a “Quality Lead.”  What it is really describing, however, are many of the activities and skill sets involved in systems engineering, project management, and Six Sigma.

Here is the critical point: the skills, methodologies, and experiences central to designing and manufacturing high-quality product, are NOT taught, to any significant level, in either the undergraduate or graduate engineering curricula.  Also not taught in those curricula are the skills needed to do so in a manner compliant to regulatory expectations.  Practical knowledge and effective execution of this set of skills is only developed through practical work experience.

Everything I know to effectively navigate and execute engineering product development and produce high-quality product I learned after college.

One of the authors has had the opportunity to seek input on the following question: “How much experience is needed to effectively execute the skill set and responsibilities required by this job description, or ones similar to it?”  The question was posed to people in the medical device industry with a wide range of experience and job levels – varying from Senior/Principal engineer up through the Director and VP level.  The responses were quite uniform:

At least 15 years.

Yet, almost uniformly, postings look for people to fill this role who have less than five years of experience.  Anyone with more experience is automatically considered “over qualified” and not even considered.

Think about it.  In our hiring practices we are specifically excluding from these critical roles the very people who have the practical experience and skill sets needed to produce high-quality product.  Instead, we expect those skills from people right out of college, where the skills are not taught, and with far too little real-world experience to have acquired and therefore to use them.

To make things worse, organizations frequently look askance at any or all of the disciplines of project management, Six Sigma, and systems engineering.  Why?  The reasons are complicated.  Sometimes these disciplines try to create an “empire” within an organization, try to impose too much overhead or too many constraints, and truly do slow down organizational execution.  Sometimes the organization rejects the rigor called for because it “knows what its design is and just wants to go ahead and build it” (a perspective which is, by the way, antithetical to the Design Controls requirements – so such an organization should step very carefully).  Sometimes the organization is simply resistant to change and does not embrace the “new” disciplines (despite the fact that these have been worldwide accepted best practices for decades!).

Whatever the reason, organizationally we often do not acknowledge the critical learning coming from project management, Six Sigma, and systems engineering.  Because of this we also do not seek to sustain and propagate them within an organization.

What can we do to change this mismatch between the true quality goals of our organizations and our expectations of who we will hire to achieve those goals?

As we said earlier, the skills to achieve those quality goals reside in the disciplines of systems engineering, project management, and Six Sigma.  The authors do not believe there is a need to create each of these functions in its totality within an organization.  Many organizations, especially small ones, do not need or cannot afford that.  Also, in an organization of any size there is real danger if any of these disciplines acts as though its sole purpose were to promote and sustain itself, rather than to promote the overall health of the organization (this is a topic of a later article).

Rather, we are talking about recognizing, embracing, and fostering the individual teachings and practices embodied by these disciplines.  There is great overlap between the disciplines of systems engineering, project management, and Six Sigma.  But, with regard to producing high-quality product, each also brings unique benefits to the overall objective.  Systems engineering and project management supply focus and structure around identifying the stakeholders, clearly documenting their needs, and identifying how the system can be defined and broken down into subsystems in order to meet those needs.  Design for Six Sigma focuses on identifying a best design to meet stakeholder needs and requirements (requirements are not the same as design – also a topic of a later article).  Six Sigma and Design for Six Sigma together supply the statistical/analytical horsepower to quantify the capabilities of proposed design and manufacturing processes to satisfy the requirements; to distinguish between competing design concepts (and in so doing supply the very mechanisms needed to perform design verification and validation); and to monitor and control designs and manufacturing processes over time through control charting methodologies.  Lean, although we have not focused on this practice above, supplies the process flow charting and related graphical and visual methodologies to build team consensus on the understanding of exactly what our processes are (and thus how they can be improved) and how they are performing.
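Since control charting is the one methodology in this list we have not yet illustrated, here is a deliberately simplified Python sketch with hypothetical data: it establishes limits from baseline samples and flags later samples that fall outside them.  A production individuals chart would estimate sigma from the average moving range rather than the sample standard deviation.

———————————————————

# Simplified control chart: establish limits from baseline samples,
# then flag later samples that fall outside those limits.
from statistics import mean, stdev

baseline = [5.00, 5.01, 4.99, 5.02, 5.00, 4.98, 5.01, 5.00]  # in control
new_samples = [5.01, 4.99, 5.12, 5.00]                       # to monitor

center = mean(baseline)
sigma = stdev(baseline)   # simplification; see note above
ucl, lcl = center + 3 * sigma, center - 3 * sigma

print(f"CL = {center:.3f}  UCL = {ucl:.3f}  LCL = {lcl:.3f}")
for i, x in enumerate(new_samples, start=1):
    flag = "  <-- out of control" if not (lcl <= x <= ucl) else ""
    print(f"sample {i}: {x:.2f}{flag}")

———————————————————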

We state again, the clear majority of these methodologies, of which there are literally hundreds, are not taught during college-level training.  We are not going to get them from candidates with just a few years of experience.

So, how to proceed?  At a minimum, seek and hire candidates who are certified by a reputable body at a black-belt level (or equivalent) in these disciplines.  There is a growing trend among universities and community colleges to offer certification training through their continuing education programs, and many companies are taking advantage of this by sending selected employees for such certification.  The certification gives some level of confidence that these candidates carry with them an appropriate understanding of, knowledge of, and practice in these methodologies.  Do not hire them to create a “systems engineering group” … or a “project management group” … or a “Six Sigma process improvement group.”  Hire them with a mandate to use, and to demand the use of, the appropriate methodologies at the appropriate time and place.  Put them in a position of organizational influence and oversight so that they can influence, model, and coach others in the use of these methodologies.

For somewhat larger organizations, we recommend identifying individuals who show interest in and passion for these methodologies.  Sponsor them (in terms of both funding and time) to train to a black belt level (or equivalent) through a reputable outside organization.  Again, do not use them to create additional functions, but use them to identify and use the appropriate methodologies at the right time and place.  Above all, support their activities within the organization.

The largest organizations should seriously consider institutionalizing the learning and internal teaching of these skill sets.  Where this has been done, the use of these methodologies has borne great fruit.  The practice of “internalizing” these engineering methodologies is something that we have lost over the years.  The authors believe this loss is a significant cause of the discordance between implementing “quality systems” as a compliance-focused mechanism and using fundamental, powerful engineering methodologies to produce high-quality product.

In the first article these authors posted, we made the point that product quality is an outcome of effective interaction between organizational functions, and cannot be “owned” by any one function.  For an organization to effectively wield the methodologies of systems engineering, project management, design for Six Sigma, and Six Sigma, the appropriate “pieces” (i.e. specific methodologies) must be used at the correct time.  Some of them need to be used up front during evaluation of the needs of the stakeholders, others during translation of those needs into product requirements, others during the development of a design that satisfies product requirements, others when the design (and associated manufacturing processes) are verified and/or validated, and still others during manufacturing and post-market surveillance.

We need to empower the individual functions in the organization to use these engineering methodologies appropriately.  A quality system by itself cannot foresee and define reactions and procedures for all eventualities, and therefore cannot define what is “appropriate” to a sufficiently fine level of detail.

Rather, the practices an organization puts into place on a daily basis – used within the framework of the quality system, but not defined by it – define how the quality system is “used.”  It is the discrete engineering practices that ultimately lead to high-quality product.

© 2017 DPMInsight, LLC.  All rights reserved.


Part 2 – A Quality System is Not Enough

About twenty years ago the FDA re-wrote 21 CFR Part 820, the “Quality System Regulation” (QSR), to require that manufacturers institute and maintain quality systems.  This re-write included the critical element of design controls.  Even before that re-write of Part 820, the practices around design controls were embodied in ISO standards.  Nevertheless, it is still common for design and manufacturing organizations to re-tool their quality systems because of product quality issues – and those organizations pay a significant amount for help to create or completely re-write their quality systems, often on an urgent or emergency basis.  These revisions frequently focus on design control practices.  An extensive market exists around this need – and several of the authors have been part of this market for quite some time.

Yet, issues with product quality persist: companies continue to pay the prices associated with poor quality, or they succumb to market and regulatory pressures and do not survive.  Clearly there is room for improvement in practices that actually lead to high-quality product.

The authors believe that, to understand how this situation can be changed, it is important that we first understand what quality systems are and where they came from.

It is common and tempting to view a quality system on a point-by-point basis.  We pull out one of the little books that list the regulations (21 CFR 210/211, 21 CFR 820, 21 CFR 11, etc.), and shuffle the pages to Section XXX.YYY.  We read one sentence and ask “are we satisfying that requirement?”  Worse, we go to Section XXX.YYY, read it, and say “this does not say I have to do ‘activity Z’ (i.e. some other activity) … so I won’t do it.”  Come on, admit it … many of you have been part of those or similar discussions.  This is a “can’t see the forest for the trees” approach to the creation, management, and use of quality systems.  When we focus on the minutiae of specific phrases, and debate fine points of the statements’ meaning, we lose sight of the objective of those statements.  Worse, we don’t clearly envision from the beginning what the quality system is trying to accomplish.  In addition, we lose sight of the fact that the QSR regulations are minimum requirements – and in so doing we fail to achieve the desired goal of the quality systems.

So, what are “quality systems?”  Where did they come from?  Information about this exists in many practices that surround us: Project Management, Six Sigma, Design for Six Sigma, and Systems Engineering.  All of these disciplines embody world-wide, accepted best practices. Yet, they are often regarded as disparate systems and, unfortunately, are often thought to conflict with each other in execution.  In reality, these practices share many common elements.

At the beginning of any product design effort, Project Management, Six Sigma, Systems Engineering, and 21 CFR Part 820 all call on you to do the same things: figure out what the customers need; define the disease the product is intended to treat or cure; identify what the customers will buy; and then carefully and clearly define the specific set of needs you choose to address in your product.  And – importantly – do so before you actually design the product so you are not justifying your design after the fact.

Specifically, Project Management calls for: identification of stakeholders, collection of project requirements, and definition of project scope (PMBOK[1], 5th ed.).  Six Sigma calls for: identification of the customer, seeking input from the customer, documentation of Customer Needs, and setting of Specifications.  Systems Engineering calls for: identification of Stakeholders and definition of Stakeholder Requirements.  Here is what the CFR says about this: “Each manufacturer shall establish and maintain procedures to ensure that the design requirements relating to a device are appropriate and address the intended use of the device, including the needs of the user and patient” (21 CFR 820.30(c)).

All of these disciplines are saying the same thing.

The next step in the product development process is to determine how the pieces of the project (or device) fit together and to identify which “pieces” are most important.  In this development step, all the disciplines identified above are once again calling for the same actions – namely, to define the product with traceability of the functions of its subsystems to the higher-level needs of the stakeholders (to make sure the design covers the intended functions of the product).  We also are called to assure all requirements have been met.

Specifically, Project Management says: create a Work Breakdown Structure (i.e. the work that represents the pieces of the project or product) that achieves the requirements, and sequence the activities.  Design for Six Sigma says: develop a high-level design, identify the Critical to Quality outputs, and drive to a detailed design (i.e. components and subsystems).  Six Sigma also calls for: identification of Critical to Quality outputs.  Systems Engineering says: architect the system, its subsystems, and the interfaces between them, and trace the design outputs to the inputs.  The CFR says: “Each manufacturer shall establish and maintain procedures for defining and documenting design output in terms that allow an adequate evaluation of conformance to design input requirements” (21 CFR 820.30(d)).
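Stripped of tooling, the traceability these disciplines call for is bookkeeping, and the essential checks are easy to sketch.  The Python fragment below uses hypothetical requirement IDs and specifications to show the two questions any trace, however it is stored, must answer: is every design input covered by at least one design output, and does every design output trace back to a real input?

———————————————————

# Hypothetical design input / design output trace, with the two checks
# any traceability scheme must support.
design_inputs = {
    "DI-1": "Device fits comfortably in the user's hand",
    "DI-2": "Knob is operable while wearing sterile gloves",
}

design_outputs = {            # output -> (specification, traced input)
    "DO-1": ("Handle diameter 30 mm +/- 1 mm", "DI-1"),
    "DO-2": ("Knob actuation torque <= 0.2 N*m", "DI-2"),
}

traced = {di for (_spec, di) in design_outputs.values()}
orphan_inputs = set(design_inputs) - traced
dangling_outputs = [do for do, (_spec, di) in design_outputs.items()
                    if di not in design_inputs]

print("design inputs with no design output:", orphan_inputs or "none")
print("design outputs tracing to unknown inputs:", dangling_outputs or "none")

———————————————————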

Our next example is the design verification/validation step of the product development process.  Here, we once again see that the disciplines we have identified above are calling for the same activity.  Specifically, Project Management calls for: documentation and traceability of requirements, followed by validation of deliverables against those requirements.  Design for Six Sigma explicitly calls to: Analyze and Verify the design (in the DMADV[2] process).  Systems Engineering calls for: Verification and Validation of a system based on established and traced requirements.  The CFR says: “Each manufacturer shall establish and maintain procedures for verifying the device design.  Design verification shall confirm that the design output meets the design input requirements.” (21 CFR 820.30(f)).  Similar language is used in Part 820 around the need for Validation.  (Project Management does not draw quite as clear a distinction between verification and validation … but it does try: it puts the onus for validation on the customer, and for verification on the project team.)

Without going into detail, the parallels continue in: the practice of identifying, supporting (i.e. funding), and training a team (21 CFR 820.25 Personnel); the practice of identifying, selecting, and managing vendors (21 CFR 820.50 Purchasing Controls); the practice of monitoring and controlling designs and processes (21 CFR 820.70); and the practice of controlling changes to product or processes (21 CFR 820.30(i)).  All of these regulatory “requirements”, and more, have parallels in worldwide accepted best practices.  In most cases the regulatory agencies did not create these “requirements” out of thin air: they were most likely drawn directly from these accepted best practices.

There are four points to take away from this.  The first point is that these seemingly disparate practices … are not so disparate after all.  They are trying to accomplish the same objective.  The second point is that the conflict or suspicion that often exists between people performing these practices in the workplace is senseless, because these practices are trying to accomplish the same goals.  The third point is that we cannot “cherry pick” the facets of each of these best practices to suit our preferences – doing so ignores the balances inherent in each of the practices, and gives rise to the conflicts that people practicing them frequently experience.  The fourth point is that to succeed in producing high-quality product we need a holistic understanding of what these practices are trying to accomplish.  We need to see the “forest” and not focus on the individual “trees.”

So what is the objective of these practices … what is the “forest?”  The answer is quite simple, actually.  The goal of these practices is to:

  • concisely define and design products that clearly meet the needs of customers/stakeholders at an acceptable level
  • manufacture and distribute those products in a state that continues to meet the needs of the stakeholders at an acceptable level
  • do both of the above in a predictable and reproducible manner

The regulatory expectations only add one thing:

  • provide reasonable evidence that you are doing the first three things.

In other words, demonstrate that you built the product you said you were going to.

The Good Manufacturing Practices (cGMPs), which are often considered separately, are focused on achieving the “predictable and reproducible” goal.

We get into trouble executing to a “quality system” because we tend to focus so much on meeting the letter of each statement in the regulations that we lose track of the regulations’ context and intent.  In the vignette in the first article in this series, we relate in lines 9–12 that the quality system has been developed to meet the contents of the regulations, and that the development team is trained to adhere strictly to its procedures.  The problem with this approach, which is commonly taken, is that our “quality system” in practice becomes a “compliance system.”  We do what the system “says,” and often nothing more – despite the fact that the regulatory agencies clearly state that the regulations represent a minimum amount of rigor.  We ignore (or, as in lines 11–12 of the vignette, are not allowed to act on) what makes scientific sense or is otherwise defensible based on the specific needs of the current activities; but this is exactly counter to the intent of the regulations.  In the end we lose track of, and do not execute, the very activities and evaluations that produce high-quality product.  Instead, we create waste and frustration.

We need to recognize and embrace the understanding that a few regulatory statements cannot capture all the meaning and fine points of the rich collection of practices contained in the engineering and project management bodies of knowledge.  It is therefore not wise to assign responsibility for high product quality to the relatively few statements in regulations, or to a single function within an organization.  If instead, as much as possible throughout the organization, we really understand and execute what these best practices recommend, improved product quality will naturally follow, as will regulatory compliance.

It is likewise critical that the individuals practicing in the areas mentioned above do not “cherry pick” their focus.  One of the most useful concepts coming out of Project Management is the “Triple Constraint,” or “Project Management Triangle.”  It states that any project is constrained by time, cost, and scope – and that quality results from an appropriate balance among these constraints.  This is a truism: wishful thinking or demands on the development team cannot change this interdependence.  Therefore, to achieve high product quality, Project Management cannot ignore the need to clearly define and manage scope in favor of cost and schedule.  Likewise, Systems Engineering and Design for Six Sigma cannot ignore the demands of budget and schedule in favor of their work on scope (i.e. requirements definition and system design).  To be selective in executing within each discipline is to neglect the very internal teachings of each of those disciplines.

So, the solution to our problem lies at least partially in the following: the question of how to achieve high product quality needs to be changed from “how can we change our quality system?” to “how can we effectively and flexibly execute within and around the framework of our quality system?”  These are two very, very different statements.  The former requires that we open a small book and start discussing fine points of meaning of individual phrases.  The latter requires that we learn and institutionalize, within our organizations, the rich bodies of knowledge contained in the practices of Project Management, Systems Engineering, and Design for Six Sigma.

© 2016 DPMInsight, LLC.  All rights reserved.


 

[1] A Guide to the Project Management Body of Knowledge (PMBOK Guide), fifth edition, Project Management Institute, 2013.

[2] DMADV: “Define, Measure, Analyze, Design, Verify”

Part 1 – Setting the Stage and Initial Thoughts

 

Well-publicized media stories about recalls and regulatory agency actions (483s, warning letters, and consent decrees) have raised the general public’s and medical device manufacturers’ awareness of the critical need for “product quality” in designing and producing medical devices.  Efforts have been made to educate companies and employees on regulatory expectations and the need for stringent quality in design, development, and manufacturing.  Also taught is the need for effective execution of Corrective and Preventive Action (CAPA) processes to correct lapses in product quality and prevent their recurrence.  Despite this, failure to demonstrate both the efficacy of corrective actions and the improvement of product quality remains an ongoing issue.

Beyond the damaging stories visible to the outside world, gaps in product quality create waste.  They cause undue work for employees (which also affects morale and retention), unacceptable scrap rates, and repeated failure of verification & validation.  They also cause high customer support costs and lead to expensive “tiger teams” created to fix the problem.   These and other costs all affect the company’s bottom line.

Dealing with regulatory actions creates significant distraction from business goals and activities (while incurring significant costs).  Other costs resulting from poor quality (e.g. increased inspection of incoming components, increased supplier oversight or friction with those suppliers, increases in field support personnel, increased warranty expenses, etc.) are less often discussed, but can nevertheless easily become existential issues for the organization.

A number of organizational functions typically try to address quality issues: Development, Operations, Supplier Management, Quality, Regulatory, PMO (and others).  These functions and the people within them are well intentioned and hardworking – yet they often work in isolation from each other, and thus have differing perspectives and incentives.  This can, and does, interfere with the design and production of a high-quality product.

Let’s present a vignette[1] to illustrate how this occurs (note: line numbers have been added for use in reference in later articles in this series).

The well-respected company, My Quality Device, Inc. (MQD), has an internally developed idea for a new medical therapy.  An internal champion has proposed a general-purpose medical device called the Internal Organ Monitoring and Diagnostic Device, “IOMDD.”  After spending several years developing and presenting a working prototype of a nearly final design for the device, the champion has garnered intense interest and financial support from upper management.  A development team has been assembled and funded, and a required launch date for this new, potentially lucrative product has been specified.

MQD has an existing quality system suitable for development of the IOMDD.  The quality system has been well thought out and put in place by personnel highly experienced in the content of the regulations.  The development team has been trained in the quality system, and instructed to adhere strictly to its procedures.

As development of the product proceeds, the development team feels uncomfortable.  As they refine the design at its lowest levels, they end up in disagreement over what the specific output levels and detection sensitivities of some of the subassemblies should be.  They even disagree over which physiological signals should be detected.  They spend hours and days in meetings arguing about these points.  Ultimately, the internal champion simply says “do it this way.”  Moreover, as they carefully follow the quality system, the development team is sometimes forced to reach conclusions that seem illogical and against common sense.  They even encounter circumstances in which different aspects of the quality system seem contradictory – and they are not sure what to do.  Upper management is extremely clear regarding the schedule and launch date, and is putting a lot of pressure on the development team to meet their contracted deliverable dates.  So, the team marches on, defines criteria for the design as well as specifications and supplier sources for the components, and hands the design off to manufacturing.

Manufacturing quickly discovers that many of the defined components are going to be obsolete within the next year.  They rush to define replacements, or do a large (and expensive) “last buy” to lay in enough stock to last for several years of manufacturing.  They also discover that the prices for components and subassemblies from the vendors are much higher than expected.  The suppliers indicate that the design tolerances, in combination with the materials specified, are very tight and difficult to manufacture.  There is therefore a very high scrap rate, which increases the cost of the in-specification parts that can be shipped to MQD.  MQD puts intense pressure on the suppliers to reduce the cost of the components.  They do so … but the working relationship sours; communication between MQD and the suppliers becomes curt and infrequent.  Occasionally, bad parts or bad lots of components are received anyway, and are not detected in the receiving process.  In reaction to this, the receiving processes are re-written and more resources are hired to support the additional receiving steps and inspections.  On top of all that, internal manufacturing finds that their scrap rates are very high: “the IOMDD is very difficult to assemble reliably.”  Finally, during a regulatory audit, the auditor finds several critical manufacturing tolerances that are not monitored or controlled.  A 483 with several findings is issued.

In the end, the IOMDD launches later than contracted for, and at a significantly higher cost of goods sold than originally forecast.  At first the IOMDD does not meet its sales quotas.  The customers complain that it is too expensive – so the average selling price is lowered.  Sales pick up somewhat, but MQD starts hearing that the IOMDD performs functions the customers do not need or want, and that it is difficult to make it perform desired functions.  Additionally, MQD starts receiving complaints that IOMDDs are breaking, and units are being returned.  A “help center” is opened to coach customers on how to use the product, and the costs of running this center increase alarmingly.  Failed products are returned and replaced at no cost to the customer.  Frequently, the customer simply wants a refund.

After a few years, it is clear that revenues are down and internal indirect expenses are high.  As a result, profit is far below the shareholders’ expectations.  Senior management and Finance conclude that they need to maintain the revenue stream of MQD, so 25% of R&D, Regulatory, and Quality personnel are let go.  The next year, the situation is still bad – so another 25% of the personnel in each of these non-revenue-producing functions are laid off.  The year after that, the Board of Directors meets, concludes that the senior management of the company is not meeting its expectations, and asks them to resign.

The year after that MQD files for bankruptcy, and shortly afterward goes out of business.

Is this an unrealistic “perfect storm” of things gone wrong?  Perhaps.  But the story illustrates how many typical company behaviors can lower the quality of products as they are designed.  It also illustrates what can happen when high quality is not present in marketed product.  In their collective experience, the authors have seen many of these typical, but ultimately damaging, behaviors occur in organizations: you have likely seen some of them in your own organization.  Sometimes all of these behaviors do indeed occur during a single product’s development.  In those cases, a scenario like that of MQD may occur.

It is our belief that organizations as a whole are missing four key points when it comes to designing and producing high-quality product:

  1. Product quality is an outcome of effective interaction between organizational functions. It is not “owned” by any one function, and cannot be “put into” a product by any one function.
  2. Few organizations have a position that coordinates organizational functions and to which the functions are responsible (other than a VP or SVP, who is typically preoccupied with business-level issues). Because organizational functions have different perspectives and incentives, they are, at best, not coordinated in objectives.  At worst, they operate with conflicting objectives.  Some companies attempt to remedy the situation by implementing broad PLM systems.  But these implementations are often not properly managed, and the effect of a poorly-managed implementation is worse than none at all.  In short:  no one is minding the store.
  3. Many functions operate without clear and intentional consideration of how their decisions affect the company’s bottom line. For example: development can produce designs that are not manufacturable or maintainable; manufacturing can miss opportunities to feed information back to development; and quality or regulatory can create procedures that are overly burdensome and perhaps not even executable.
  4. Quality is, to paraphrase, the ability of a product to safely perform its desired function. Meeting this goal is at the heart of Design Controls as defined by the FDA and ISO.  Failure to clearly define the desired function leads to project scope creep, project delays, failed V&V, and more.  Failure to concisely define and limit the collection of desired product functions leads to “too much”: too much of or too complicated a design; too many design outputs to verify and control; too stringent a set of requirements to demand of a supplier, etc.  “Too much” can – and will – “handcuff” the organization.  This is organizational and operational distraction at its source: with that distraction, something essential will always get “dropped,” creating the quality and cost issues at the heart of this discussion.

Solutions

There appear to be a number of potential actions to correct the situation described above.  However, these actions are not the ones that organizations have previously taken.  After all, if we keep trying the same thing and it does not work, clearly we need to try something different!  The authors are interested in starting this discussion with the intent of helping organizations understand and find solutions to the issues that prevent them from consistently producing high-quality products.

Toward that goal, we will be publishing a series of articles to expand on and illuminate specific topics associated with the organizational issues described above.  One hint of things to come: although the authors are all deeply ensconced in the design and use of “quality systems,” we do not feel that the problematic issues and organizational behaviors can be solved by creating or modifying those quality systems.  It is our belief that the solutions to these problems are already known, and lie in the nexus of recognized best practices in systems engineering, Design for Six Sigma, Six Sigma, and project management.  Further, the issues run deeper, into organizational structure, hiring practices, and training practices.  We hope that as we move forward with this exploration, you will all join in the discussion and contribute your insights.

© 2016 DPMInsight, LLC.  All rights reserved.

(This article, and others in the series, are available and archived at http://www.dpmillc.com/reflections.html.)


[1] Disclaimer: the company and product portrayed in this vignette are totally fictitious: they are a compilation of observations made by the authors, and do not represent any company currently or previously in business.