Part 4 – Is This Really a Requirement?

Consider the following statements derived from the laws and guidances that govern product design:

  • “…Each manufacturer shall establish and maintain procedures to ensure that the design requirements relating to a device are appropriate and address the intended use of the product …” (21 CFR 820.30(c))
  • “… The organization shall determine … requirements specified by the customer … requirements not stated by the customer but necessary for specified or intended use … regulatory requirements … requirements determined by the organization.” (ISO 13485:2003)
  • “… the stakeholder requirements govern the system’s development and are an essential factor in further defining or clarifying the scope of the development project … Outputs of the Stakeholder Requirements Process establish the initial set of stakeholder requirements for project scope …” (INCOSE Systems Engineering Handbook, October 2011)
  • “…In a modern quality systems manufacturing environment, the significant characteristics of the product being manufactured should be defined from design to delivery…” (Guidance for Industry, Quality Systems Approach to Pharmaceutical CGMP Regulations, September 2006)

In a previous article, we wrote about the activities of analyzing and defining stakeholder needs and translating those needs into product requirements. We also described how these activities are embedded in the practices of project management, Six Sigma, and Design for Six Sigma. Because these steps are so widely accepted across product development, they clearly have an important influence on product quality. In this article we focus on requirements: how they benefit the design process, and how they lead – when properly used – to high-quality product. We also examine the pitfalls that many practitioners and organizations fall into – pitfalls that actively lead to poor-quality product.

The attributes of high-quality requirement statements[1] are widely published. 21 CFR 820.30 requires that requirements be complete, unambiguous, and non-conflicting. The INCOSE Systems Engineering Handbook[2] states that requirements should be necessary, implementation-independent, clear and concise, complete, consistent, achievable, traceable, and verifiable; it also notes that use of the word “shall” implies a requirement, and that use of the word “should” is to be avoided.
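As a small illustration of that last point, the sketch below shows a hypothetical wording check that flags requirement statements using weak terms such as “should” or “may” instead of the binding word “shall.” The word list and sample statements are our own illustrative assumptions; nothing here is prescribed by 21 CFR 820 or the INCOSE handbook, and such a check screens only wording, not whether a statement deserves to be a requirement at all.

```python
import re

# Weak or ambiguous wording that, per the INCOSE convention discussed above,
# should not appear in a binding requirement statement. The term list and the
# sample statements are illustrative assumptions, not quoted from any standard.
WEAK_TERMS = ["should", "may", "might", "could", "as appropriate", "if possible"]

def flag_weak_wording(requirements):
    """Return (statement, matched weak terms) pairs for statements worth reviewing."""
    flagged = []
    for text in requirements:
        hits = [t for t in WEAK_TERMS
                if re.search(rf"\b{re.escape(t)}\b", text, re.IGNORECASE)]
        if hits:
            flagged.append((text, hits))
    return flagged

if __name__ == "__main__":
    sample = [
        "The ball shall fit comfortably in the hand of the typical player.",
        "The ball should be pale green, if possible.",
    ]
    for statement, hits in flag_weak_wording(sample):
        print(f"Review wording: {statement!r} (weak terms: {', '.join(hits)})")
```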

The attributes described in the preceding paragraph, which address the quality of individual requirement statements, represent the level of requirements analysis most organizations build into their quality systems. Few organizations, however, give any thought or discussion to what a requirement really is, and failing to ask this question creates significant cost, risk, and confusion for the organization. The authors hold that understanding what a requirement truly is, and rigorously enforcing that analysis, is extremely important to high-quality product, perhaps more so than the syntax of individual requirement statements[3].

So … what is a requirement? How can we distinguish between something that is, versus is not, a requirement? Thinking critically, the answer lies in INCOSE’s recommended word “shall.” Words can be loaded, and the authors understand that some organizations dislike the use of the word “shall.” For current purposes, however, we follow INCOSE’s lead. “Shall” marks a statement about a characteristic of a product that absolutely, positively, demonstrably, and without exception has to be met. If that product characteristic is not met, you do not ship the product. Period. There is a key point here that many organizations do not take fully to heart: a requirement is something that has to be met in order to meet the expectations of the stakeholders. Another way to think about this is to ask: “If the product is shipped without fulfilling this ‘requirement,’ will the stakeholders care or even notice?”

That last question might seem trite or obvious, but the authors believe there are deeper implications to it, and that truly critical application of this question is a key starting point for producing high-quality product.

Why this belief? Because, as stated in our first article, if we do not rigorously reduce the set of requirements to those that are absolutely critical to meet the needs of the stakeholders, we open the door to “too much.”  We produce designs that are too complicated, too expensive, and take too much time to develop and manufacture.

We also need to understand that requirements are not “free.” Each requirement carries implementation costs that must be considered during the development process: the cost to collect and define the requirement, the cost to design to it, and the cost to verify and validate the design against it. During manufacture, each requirement results in more critical design outputs that must be monitored and controlled, which in turn results in more criteria being placed on components from vendors. When we have more monitoring and control activities than we can reasonably sustain, we drop the ball.

In the vignette posed in the first article of this series, the team is forced to proceed on a fast timeline and without a clear understanding of what the requirements for their product are. In that circumstance, the tempting thing to do is to rapidly adopt requirements without critical thought about their implications. Moreover, in the vignette the champion ultimately forced upon the team his or her preconceived concept of what the product should “be.” Unfortunately, this action bypassed the critical step of definitively understanding what the stakeholders really need[4]. In the vignette, manufacturing discovers it is difficult to manufacture the product and its components without high scrap. This likely results from inappropriate requirement setting, and possibly from conflicting requirements. Finally, manufacturing discovers it is not controlling all the critical manufacturing tolerances – a sign that there are so many requirements it is unclear which design outputs are critical, or that resources are not available to execute all of the required monitoring.

We take too much time and spend too much to complete a design, and then fail to demonstrate that we are monitoring and controlling the critical design outputs that we should. Failing to demonstrate that control can result in a regulatory finding during an audit. What follows is product declared non-conforming, Form 483 observations, warning letters, recalls, and worse. We then experience significant organizational thrashing and non-revenue-producing activity trying to correct the situation. This is especially regrettable when we go through all of this for a “requirement” that the stakeholders would neither notice nor care about.

The worst result of defining requirements that should not be requirements is this: in the confusion and overload that arise from attempting to monitor and control design outputs that do not need that attention, we fail to monitor and control the design outputs we really do need to. These are the design outputs that stakeholders care about and do notice when they are not present in the product. Six Sigma calls these design outputs “critical to quality.” 21 CFR 820 calls them “design outputs that are essential for the proper functioning of the device.”

Unfortunately, it is far too easy to stamp the label “requirement” on any statement we please. If we do so too easily or too frequently, we create the cascade of work, distraction, and errors described above.

So, how can we drive the critical thought needed to differentiate between a true requirement and one that is not? As a back-door way to answer this, we ask readers the following question: “How many times have you been part of a material review board (or similar board) tasked with reviewing non-conforming product … and ended up justifying a deviation to ship that material or product?” If you have, you were also faced with another question: what is your response to the FDA investigator who questions your decision to effectively ignore a requirement?

Come on, admit it – this happens often.  Here is the point: if you justified a deviation and shipped the product, then that non-conforming characteristic should never have been a requirement.  It is far better to distinguish requirement from non-requirement early during the design process.

There is no single, simple answer for how to differentiate between “real” and “not real” requirements. Fortunately, there is a suite of approaches that can be used. A good starting point is to ask: Will we advertise that function or characteristic? Will we make that claim in a brochure or manual? Will it become the basis for treating or diagnosing a patient condition around which we will build a regulatory claim? These questions are usually fairly high-level and non-technical, but they help distinguish what we could make a requirement from what we must make a requirement.

For example, if we are designing a ball to use in a sports game, our users will care (and notice) whether the ball is “round and fits comfortably in the hand of the typical player.” The players do not care, and likely will not notice, whether the ball diameter is 3.5 ± 0.1” or 3.8 ± 0.1” with eccentricity less than 0.1 and roughness less than a certain value. This is an example of “can do versus must do.” We can require a diameter, a tolerance on that diameter, an eccentricity, and a roughness – but must we? Certainly things like diameter, eccentricity, and roughness can be design outputs, but we need to ask how tightly we must control them before a stakeholder notices. Often we do not need to control them nearly as tightly as we believe. In this example, any ball of diameter, say, 3.3” to 3.9” might well do. As designers we all too easily and frequently jump to declaring ‘shall be 3.5 ± 0.1”’ when all we really need is the statement “fits comfortably in the hand of the user.” If we take the former approach, a manufactured ball with diameter 3.7” is non-conforming product. If we take the latter approach, that same manufactured ball is just fine[5].
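To make the “can do versus must do” arithmetic concrete, here is a minimal sketch using the article’s illustrative ball numbers. The 3.3”–3.9” window standing in for “fits comfortably in the hand” is an assumption for illustration, not a real specification.

```python
# Minimal sketch using the article's illustrative ball dimensions. The same
# manufactured ball is rejected or accepted depending on how the requirement
# was written; the 3.3-3.9 inch window is an assumed stand-in for the
# stakeholder-level statement "fits comfortably in the hand".

def conforms_to_spec(diameter, nominal=3.5, tol=0.1):
    """Tight, design-output-style 'requirement': shall be nominal +/- tol inches."""
    return (nominal - tol) <= diameter <= (nominal + tol)

def fits_comfortably(diameter, low=3.3, high=3.9):
    """Stakeholder-level check, expressed here as an assumed acceptable window."""
    return low <= diameter <= high

ball_diameter = 3.7  # inches, as measured on a manufactured ball

print(conforms_to_spec(ball_diameter))   # False: non-conforming against 3.5 +/- 0.1"
print(fits_comfortably(ball_diameter))   # True: the stakeholder neither notices nor cares
```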

Another trap we fall into is substituting “what we can buy or source” for what we really need. Take that same ball. We discover that we can source from an outside vendor a ball with diameter 3.6 ± 0.1”. With that knowledge, we write into our requirements that the ball must be 3.6 ± 0.1” in diameter. In the vignette from the first article of this series, under time pressure to get the design finished, the design team likely would have latched on to whatever values they could quickly define rather than taking the time to understand what was really needed. What do we do when, later, the vendor’s manufacturing process changes, or we shift to another vendor? In the example at the start of this paragraph, the balls might start coming in with diameter 3.7 ± 0.2”. What we classically do is panic. We blame the vendor (which is destructive to cooperative relationships), exhort them to change the process back to what it was (which often they cannot), and finally justify shipping the product anyway.

If instead, in our ball example, the requirement is “fits comfortably in the hand,” a minor shift in the diameter of sourced balls is a non-issue. To be sure, a range in the design output of 3.3” to 3.9” might be defined as acceptable, but because this dimension is not defined as critical to quality, only occasional monitoring is needed. As designers we make this misstep of mistaking “can source” for “need” too easily and too often. Examples include battery size or capacity, container volume, color (few will notice the difference between pale green as RGB (102, 255, 102) versus RGB (110, 240, 110)), part dimensions when stack-up is not an issue, roundness of an edge, and so on.
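Continuing the same illustrative numbers, the sketch below checks whether the vendor’s shifted diameter band still satisfies each version of the “requirement.” All limits are the article’s example values, not real specifications.

```python
# Minimal sketch, continuing the article's illustrative numbers: check whether a
# vendor's diameter band still lies inside each version of the "requirement".

def band_within(band, limits):
    """True if the vendor band (lo, hi) lies entirely inside the limit window."""
    (band_lo, band_hi), (lim_lo, lim_hi) = band, limits
    return lim_lo <= band_lo and band_hi <= lim_hi

new_vendor = (3.5, 3.9)      # 3.7 +/- 0.2 inch, after the vendor's process shift

tight_spec = (3.5, 3.7)      # "shall be 3.6 +/- 0.1 inch", copied from the old vendor
stakeholder = (3.3, 3.9)     # "fits comfortably in the hand", expressed as a window

print(band_within(new_vendor, tight_spec))    # False: panic, deviations, MRB meetings
print(band_within(new_vendor, stakeholder))   # True: a non-issue for the stakeholder
```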

The key point here is to recognize the difference between design output that needs to be tightly monitored and controlled because it is directly linked to satisfying a stakeholder requirement, and design output that is not tightly linked to satisfying a stakeholder need. The latter needs much less rigorous and less frequent monitoring and control (which is not to say no monitoring). Imagine the difference in manufacturing execution between building to a dimensional drawing with 50 dimensions that must all be tightly controlled, and building to a drawing with 49 dimensions that need to be defined (to manufacture the thing) but not tightly controlled, and only one dimension that must be rigorously monitored and controlled.

If you can accept what was said above, here is the next challenging step in thinking: better yet, simply do not make something a requirement at all! This goes against the grain of many design teams. But if we can master this perspective, it yields incredible freedom to execute and to focus on the requirements we truly need to focus on.

Let’s take our ball example. OK, we have the requirement that the ball is “round and fits comfortably in the hand of the typical player.” But who or what says that we need to say anything at all about color, or weight, or texture, or internal pressure? Certainly these are design choices we need to make to actually source or manufacture the product. But if they are not initially stamped as “requirements,” and if the design decision is not linked to satisfying any other requirement, then our regulatory and quality burden to monitor and control those aspects of the design is much, much lower. This allows us to truly focus on the design outputs that must be closely monitored and controlled.

Experience has shown that truly critical assessment of a requirement set, i.e., identifying which statements are truly “must have” statements, allows the number of requirement statements to be reduced by roughly an order of magnitude. Imagine shifting from designing, manufacturing, and controlling a product with 200 “requirements” to one with only 20. Imagine, in the vignette in our first article, the impact on MQD, Inc. if they had driven to accomplish this.

In an earlier article (“A Quality System is Not Enough”) we made the point that what really matters in producing high-quality product is not the quality system itself; what matters is how that quality system is used. The discussion in the present article is a case in point: the same quality system can give rise to a product with 200 requirements … or 20. What has changed is what is regarded as a requirement – not the quality system.

There is another thought to consider here that will likely cause some people to disagree, but which the authors sincerely hope will give design teams pause. All the best practices describe requirements as being design-free: they are statements of “what” a design needs to accomplish, not “how” that will be accomplished (the latter is design output). Thus design controls, and especially requirement statements, are fundamentally based on “soft,” more “intuitive,” more “conceptual” facets of the design activity. Conversely, many engineering disciplines (and thus the people who are attracted to them) are based on a more “physical,” “structural” understanding of our designs. The difference between these learning and thinking styles is illuminated in an excellent slide shown to incoming freshmen at the University of Minnesota College of Science and Engineering (permission to use this slide has been graciously given by Dr. Paul Strykowski, U of M Associate Dean for Undergraduate Programs):

Dr. Strykowski’s point is that, in his experience, an individual tends to learn and execute effectively on only one side of the dichotomy between “physics-intensive” and “chemistry-intensive” disciplines, depending on the student’s inherent learning and thinking style. The slide resonated with the authors because we have thought of the same dichotomy as “concrete,” “physical” thinking (the left side of the slide) versus “conceptual” thinking (the right side of the slide). You can touch, feel, see, and physically manipulate most of the pieces associated with the disciplines on the left side of the slide. You cannot touch, feel, or see a chemical, a molecule, a chemical reaction rate, or a property of a material – you need to think about those things conceptually.

In exercising design controls, much of the initial work is conceptual. Yet many product design teams are made up predominantly of individuals drawn from the left side of Dr. Strykowski’s slide. The result is that such teams move very quickly, and naturally, to substituting detailed design output for what should be design-free, conceptual requirements. This leads to the problems we all too often experience in designs.

Are engineers ill-suited to manage design controls? (Don’t yell too loudly!) We are not sure, but we believe the question is worth serious consideration. The ability to think conceptually can be taught, and it needs to be reinforced on every design team that uses design controls. Perhaps we should also strongly consider bringing non-technical people onto the design team, at least to manage and oversee the definition of the initial set of stakeholder and product requirements. Such people should be well placed to quickly distinguish between what the stakeholder wants and design output that does not need to be tightly controlled and that the stakeholders never really think about.

© 2017 DPMInsight, LLC. All rights reserved.

About the Authors:

Cushing Hamlen

Over 27 years of experience in industry, including 20 years with Medtronic, where he worked and consulted with many organizational functions, including research, systems engineering, product design, process design, manufacturing, and vendor management. He has also worked with development, regulatory submission, and clinical trials of combination products using the pharma (IND) regulatory pathway. He has been extensively involved with quality system (FDA and ISO) use and design, and is particularly concerned about effective understanding and use of product requirements and design controls. He has formally taught elements of systems engineering, Design for Six Sigma, Lean, and Six Sigma. Cushing has degrees in chemistry and chemical engineering, is certified as a Project Management Professional, is certified as a Master Black Belt in Lean Sigma, and is the owner/member of DPMInsight, LLC.


Bob Parsons

Over 26 years of experience leading Quality Assurance, Validation, and remediation efforts in the FDA-regulated Medical Device and Pharmaceutical industries. Experience includes product development life cycle management from initial VOC through New Product Introduction (NPI), sustainable manufacturing, and end-of-life product management. Technical expertise in quality system gap assessment, system enhancement, alignment, and implementation of all quality elements, including design controls, risk management, purchasing controls, change control, and post-market surveillance. Regulatory experience includes ISO 13485, 9001, and 14971 certification; guidance for FDA PMA/510(k) and CE clearance; service as designated Management Representative; acting as company representative and lead during FDA and ISO audits; 483 and warning letter resolution; and work within consent-decree environments. Bob currently supports various organizations in remediation and compliance projects through Raland Compliance Partners (RCP).


Michael B. Falkow

Michael Falkow is a Quality Specialist with Raland Compliance Partners. He has served as a regulatory compliance and quality assurance executive with multi-facility/international companies and was an FDA Compliance Officer and Senior Investigator/Drug Specialist.  Michael has subject matter expertise for quality and regulatory compliance, quality auditing, quality assurance, quality control, supplier evaluation and certification, and compliance remediation.  He has been approved by FDA as a GMP certifying authority and is qualified to provide Expert Witness testimony for GMPs.

Currently – Adjunct Professor at Mercer County Community College – teaching courses on Clinical Development for the Certificate Program in Clinical Research as part of Drexel University’s Master’s Degree in Clinical Development.


[1] Note that here we are explicitly distinguishing three concepts: whether a single requirement statement is of high quality; whether that statement should, in fact, be a requirement; and whether the complete set of requirements makes sense or is overly burdensome to the organization. These are different concepts, and development organizations typically pay attention to only one of them.

[2] INCOSE Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th ed. International Council on Systems Engineering. Wiley, 2015.

[3] This is not to diminish the importance of clarity of those individual statements: an unclear or quantitatively ill-defined requirement cannot be suitably verified or validated.

[4] Designers often claim that, through experience, they “know” what the customers need or want. However, experience with Usability Engineering, which forces on-the-spot observation of how something is used or how an action is executed (what Lean calls “going to the Gemba”), frequently reveals large and critical differences between reality and the designer’s assumptions. Formally going through the process of defining, and defending, stakeholders’ needs is critical to avoid falling prey to incorrect biases or assumptions.

[5] Some might argue that you cannot measure “fits comfortably in the hand of the user” – but yes you can.  That is precisely what Human Factors and Usability Engineering is all about – and why regulatory agencies are presently placing high emphasis on this practice.
