Design validation


Developed by César Delafargue

In 1957, Ford learned that making a perfectly functional car was not enough to achieve commercial success: hardly anyone wanted to buy its new Edsel, and the company suffered a loss of 250 million dollars [1]. The goal of validation is to ensure that a design meets the users' needs, and it is just as important as producing a functional design. Inadequate validation results in an undesired or unsuitable product, wasting significant amounts of resources, money, and time. As design projects grow in complexity and duration, the initial objectives are often forgotten. Validation closes the development loop and ensures that the functionality intended for the user is actually delivered.

Validation is closely linked to the management of any project that aims to design a product, a system, or a service. It is a powerful tool for reducing the risks associated with the project and for increasing the overall quality of the final result. This article explains what design validation is, when it should be applied, and why it is so important, illustrated by famous and very costly failures. It then details how to apply it through different methods commonly used by designers. The article ends by warning the reader about some limitations of validation, such as the difficulty of establishing user needs or the long duration of its implementation.



What is design validation?

Definition

According to the U.S. Food and Drug Administration, “Design validation means establishing by objective evidence that device specifications conform with user needs and intended use(s)” [2]. Validation confirms that a product meets the requirements of the users and the market. The question that designers must ask themselves here is: "Are we building the right product?" It can be accompanied by more specific questions such as "Is the product solving a real problem?" and "Is there a big enough market for the product?".

The key to validating a product is to first define exactly which needs the product is trying to meet. Validation will ensure that each of these needs is met and will evaluate how effectively each is achieved. The purpose of validation is to ensure that the product will find a customer base and be a commercial success when released [3]. It is therefore also about ensuring that there are potential users for the designed product.

Opinions on the place of validation in a design process differ. In theory, it should be applied at the end of the design process, just before marketing, because this is when the product can be tested as it will be marketed [4]. In practice, validation is much more effective when done iteratively at each phase of the project, testing the product as it evolves and modifying it accordingly. It allows designers to ask fundamental questions about the product early in the design process, when modifications are easy and inexpensive. Carried out throughout the project, it also gives the confidence needed to innovate, because each choice is validated. The validation described in this article is a process that takes place during the entire life cycle of a product's development: it starts as soon as the first design ideas are born in the minds of the designers, and can continue even after the final solution is deployed [5].

Figure: Iterative process of validation (own drawing, inspired by [4]).

Difference with verification

Design verification and validation, although often combined, are two different things. They are not necessarily applied in a specific order or within a predefined structure; most of the time they form a continuous process carried out in parallel throughout the whole project.

In contrast to validation, the main question designers must answer when performing verification is: "Are we building the product right?" According to the FDA, "Verification means confirmation by examination and provision of objective evidence that specified requirements have been fulfilled" [2]. In other words, the purpose of verification is to ensure that the various components of the system work as expected. It compares the design inputs (the specifications the designers chose for the product) with the design outputs (how the product actually works) and checks that they correspond [4]. It is mostly about ensuring that the product matches the designers' idea. This is usually done through testing, inspection, and analysis of a prototype, or through calculations and simulations in the case of a building, for example. Verification proves that the product that has been built is the product the designers wanted to build, while validation makes sure that this product is really useful for its users.


Figure: Integration of validation and verification in one iteration of the design process (own drawing, inspired by [4] and [6]).

Requirements definition

Design validation evaluates a product against the exact requirements of the end-users and stakeholders. A major challenge in validating a product is establishing an exhaustive list of all the requirements it must fulfill in order to satisfy those needs. The quality of the validation depends strongly on this list: if important aspects are forgotten, parts of the product will never be validated and may contain flaws (hence the interest in using pre-established heuristics).

A functional requirement should define how, and to what extent, the system should satisfy a specific need of users or stakeholders. Requirements are established and used as follows [1][7] (a minimal traceability sketch follows the list):

  1. Determining the needs of the stakeholders
  2. Transforming these needs into a list of requirements
  3. Determining which part of the product should fulfill each requirement
  4. Verifying and validating the requirements by testing that specific part

Examples through famous failures

Verification without validation: the Ford Edsel

In 1957, Ford launched the first vehicle of its new Edsel brand, named after the deceased son of Henry Ford. Promised to thrill the middle classes, the Edsel project aimed to go beyond modernity. It was presented as the first incarnation of the car of the future, full of gadgets and innovations, and determined to break with convention, including in its styling [8].

However, the long-developed model was immediately condemned by critics and the public. Ford had promised an avant-garde vehicle, but the Edsel was technically not very original, and some of its details were mocked. The designers' choices were also criticized, in particular the unusual gaping vertical grille around which the whole front end was structured. The Edsel was an accumulation of all the desires expressed by potential buyers of the time, which made it an almost unsellable monster [8].

This failure is a typical example of poor validation. The verification was good: the vehicle was fully functional and met the expectations of the designers, who were proud of their product. However, good verification does not necessarily mean good validation. Management produced what management wanted, not what the customers wanted, and they produced the wrong car. They failed to design the right product for their clients, and no one wanted to buy it [1]. In the end, the failed program cost Ford $250 million.

Validation without verification: Ariane 5's first launch

On June 4, 1996, the first flight of the Ariane 5 rocket ended after 37 seconds with the loss of control of the launcher, followed by its explosion. The four satellites that the rocket was carrying, worth a total of 370 million dollars, were destroyed in the process.

After analysis of the recordings of the flight parameters and the control software, the accident was attributed to a faulty design of the error-recovery software in the navigation module. This module was duplicated to cope with a possible failure. After 37 seconds, an unrecoverable failure occurred in the backup module, which was taken out of service. A fraction of a second later, the same failure occurred in the active module. The simultaneous failure of the two modules was an unforeseen situation, and it resulted in an inappropriate response: error diagnostic data was sent to the rocket's flight controls, which interpreted it as flight data. This data, incoherent in that context, triggered a full deflection of the thrusters, which caused the disintegration of the craft and its self-destruction [9].

Let us now try to understand what, in the design process of the rocket, allowed this error to appear. The development of Ariane 5 was, in reality, an enlargement of the proven Ariane 4 rocket, in order to be able to transport more satellites and thus generate more profit. Since the requirements of Ariane 5 were the same as those of Ariane 4, they were correctly selected, and the rocket that was built corresponded exactly to the initial need (i.e. to transport more satellites). It was simply a matter of scaling up a design that had already been tested many times, so the validation was fulfilled. However, the designers decided to reuse the Ariane 4 software in their new design with an inadequate testing process, since the error that occurred had not been detected beforehand. The software was no longer suitable because the larger rocket produced values too large to be stored in memory. It was therefore a verification error. It could nevertheless be argued that the idea of scaling up an existing design was also a validation error, as there is no evidence in itself that a larger model is as good a design as its predecessor [1].

Application

Methods

Validation methods can be divided into two main categories: qualitative and quantitative. They can of course be combined, and not all of them analyze the product in its entirety; some only cover certain aspects, such as ergonomics. The choice of methods depends heavily on the kind of product being developed, so it is difficult to give a general guideline on which one is best for a given project. Qualitative methods are often easier and cheaper to operate because they do not require the development of specific tools and metrics. However, they can be less comprehensive and give less accurate data on the product, making it harder to compare with other products or standards. A good practice is to use at least one quantitative and one qualitative method to validate a design [5].

A sample of these methods is presented below. They are very general and applicable to many products, but validation methods can also be highly dependent on the type of product being analyzed, and designers are often forced to create custom tests specific to their product.

Qualitative methods: usability test, user feedback, heuristic analysis, pilot user test, stakeholder interview.

Quantitative methods: data analysis, metrics monitoring, A/B testing, pilot test, post-market surveillance.

Qualitative methods

Heuristic evaluation

The purpose of this type of validation is to ensure that the product meets a set of pre-established requirements. It is a powerful way to ensure that the product is relevant enough before submitting it to users' experimentation. The heuristics used can be customized for the company and the type of product to be analyzed, or be pre-made evaluation heuristics, commonly used in the field.

Heuristics are a good way to establish a pre-validation of a product without directly asking the users. They can therefore be preferred in the case of a confidential project. It is a simple, inexpensive, and quick method to set up. It can be done at the end of the design process for the final validation as well as throughout the process to ensure that a project is on the right development path. Its effectiveness depends largely on the quality and completeness of the criteria selected.

A good example is Nielsen's heuristics, which are used to validate the user interfaces of computer software. It is a ten-point checklist that covers as many requirements as possible that a piece of software needs to fulfill. If even one of these points is not satisfied, Jakob Nielsen argues that there will be inevitable flaws in the use of the product [10]. Many of these criteria can be adapted to other types of products, and a similar heuristic can be created in any domain to ensure that a design meets the user's needs. It is also possible to assign weights to the different criteria according to their importance, to focus the analysis on the most critical points (a small scoring sketch follows the list below).

Nielsen's 10 heuristics [10] are:

  1. Visibility of System Status

    The user of the website, digital application, or other digital product must be kept informed of what is going on through appropriate feedback (notifications, progress bars, etc.).

  2. Correspondence between the system and the real world

    Words, sentences, information, and graphic elements must feel natural and resemble experiences the user has already had.

  3. Control and freedom

    Users can easily exit the system in case of error, redo or cancel an action.

  4. Consistency and standards

    Respect for standards, in terms of functionality, navigation, graphics, language used, etc. Similar elements should also look similar.

  5. Error prevention

    The design is conceived so that users cannot make serious mistakes and can always correct the ones they do make.

  6. Recognition rather than recall

    Reduce the mental load with clear instructions, showing the path taken, to minimize the effort of memorization during use.

  7. Flexibility and efficiency

    The interface is convenient for the user regardless of their digital maturity. For example, keyboard shortcuts are not very useful for a novice, but for an expert who uses the product regularly, they are a productivity gain.

  8. Aesthetic and minimalist design

    The content, graphic design, etc. must have a reason to exist.

  9. Error recognition, diagnosis, and repair

    Error messages are clear, simple, and constructive.

  10. Help and documentation

    If necessary, always provide easily accessible instructions to answer users' questions and needs.
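
As a rough illustration of the weighting idea mentioned above, the sketch below computes a weighted score against a small heuristic checklist. The selected criteria, weights, and reviewer scores are invented for the example and are not prescribed by Nielsen.

# Weighted heuristic scoring sketch; the criteria subset, weights, and
# reviewer scores below are invented for illustration.
heuristic_weights = {
    "Visibility of system status": 3,
    "Error prevention": 5,
    "Aesthetic and minimalist design": 2,
}

# Reviewer scores between 0.0 (not fulfilled) and 1.0 (fully fulfilled).
reviewer_scores = {
    "Visibility of system status": 0.9,
    "Error prevention": 0.6,
    "Aesthetic and minimalist design": 1.0,
}

weighted_total = sum(w * reviewer_scores[name] for name, w in heuristic_weights.items())
maximum_total = sum(heuristic_weights.values())
print(f"Weighted heuristic score: {weighted_total / maximum_total:.0%}")  # 77%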

Pilot User Test

The pilot user test makes it possible to test the product continuously, in a typical environment of use, and in real time. The goal is to analyze whether the product can exist in the real world as it was designed, and to highlight defects in its use (time, ergonomics, etc.) or potential risks.

The implementation of this test requires a group of test users. They must be experienced or trained beforehand: they must know how to share their opinion efficiently and exhaustively, how to exercise as many functionalities as possible, and where to look for problems or ways of using the product that the designers did not anticipate. These test users then receive a scenario to follow that represents part of the typical use of the product, and they give their opinion on the overall experience and rate different aspects of the product. This feedback can be collected both during and after use [11].

This test should be combined with measurement and analysis tools to get the most out of it. Verification of the product can be done in parallel, by trying to find bugs and by measuring the execution time and overall functioning of the product, in addition to validating its design. At this stage of product development, the object under test is often a prototype rather than the final product, so the test can be repeated on the successive prototypes.

Quantitative methods

Analytics and benchmarking

A good quantitative method to validate a product is to define measurable values that evaluate different aspects of its design. This data can be obtained through digital tools whose aim is to gather quantified data on the use of the product: number of simultaneous testers, execution time, mean task completion time, etc. It is then easy to compare two versions of the product by applying these measures to both versions, which is a good way to estimate whether a design decision is sound. The disadvantage of this method is that it is retrospective: a new design must be compared against the previous one to be validated, and the systematic collection of data therefore lengthens the process.

One way to improve this method is to combine it with a benchmark of the sector to which the product belongs. A benchmark makes it possible to compare the product directly with industry standards, by comparing the collected measurements with those of existing products. This can be useful for companies with experience in a field, which can compare their new products with older ones that have performed particularly well. Although not encouraged, it is also a way to avoid the iterative process of successive analyses for each design change, by analyzing only the final version of the product.
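
A minimal sketch of such a comparison is shown below, assuming mean task completion time has been chosen as the metric; the measurements and the benchmark value are invented for illustration.

# Sketch of comparing quantified measurements for two design versions against
# a benchmark; every number below is made up for the example.
from statistics import mean

task_times_v1 = [42.1, 39.8, 45.0, 41.2]   # task completion times (s), version 1
task_times_v2 = [35.4, 33.9, 36.8, 34.5]   # task completion times (s), version 2
benchmark_seconds = 38.0                    # assumed industry reference value

for version, times in [("v1", task_times_v1), ("v2", task_times_v2)]:
    avg = mean(times)
    verdict = "meets" if avg <= benchmark_seconds else "misses"
    print(f"{version}: mean task time {avg:.1f} s -> {verdict} the {benchmark_seconds} s benchmark")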

A/B testing

A/B testing is a technique that consists of proposing several variants of the same object, differing in a single criterion, in order to determine which version gives the best results with consumers. In practice, it is also possible to modify several criteria at once to speed up the process, or to propose more than two different versions of a single criterion. This technique is particularly used in online communication, where it is possible to test several versions of the same web page, mobile application, email, or advertising banner on a sample of people in order to choose the most effective one and use it on a large scale.

For example, if we apply A/B testing to the design of a car, it will consist of varying the color of the car, its maximum speed, its shape, or perhaps an element of its packaging and observing which version will be the most convincing [12]. The process of A/B testing is therefore:

  1. Defining your objective (what you want to improve about your design)
  2. Identifying your target precisely (which part of your design will be modified)
  3. Varying an element of your product
  4. Letting your audience encounter the variations and interact and react to your new proposals
  5. Collecting the results of your test
  6. Keeping or changing the design according to which version performed best

This method is a very good way to validate and justify any design choice with numbers. However, it is again a lengthy process, and it is almost impossible to apply it to every single design choice; it should be reserved for decisions judged to be critical.
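
The sketch below illustrates one common way to evaluate such a test, a two-proportion z-test on conversion rates for two variants; the visitor and conversion counts are invented, and this particular statistical treatment is an assumption rather than part of any specific A/B testing tool.

# Sketch of judging an A/B test with a two-proportion z-test.
# Visitor and conversion counts are invented for the example.
from math import sqrt
from statistics import NormalDist

visitors_a, conversions_a = 1000, 120   # variant A of the design
visitors_b, conversions_b = 1000, 150   # variant B of the design

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(p_pooled * (1 - p_pooled) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p_value:.3f}")
# A small p-value (commonly below 0.05) suggests the observed difference
# between the two variants is unlikely to be due to chance alone.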

Example of a validation process

Let us apply the whole validation process to a simple product. A field where validation is particularly important is the medical field, where standards are strict and require particularly well-documented validation. The example chosen is a ventilator that allows a patient to keep breathing [6].

  1. Define the users of the system

    Being a medical device, this ventilator will most likely be used by nurses or doctors. In preparation for the next step, it should be noted that these are experienced users, so some ease-of-use needs may be given lower priority.

  2. Define the needs of these users

    There are many needs here, so we will select one and analyze it in more detail. Users may need to move a patient within the hospital, so one of the user needs could simply be defined as: "the ventilator can be used during transportation of patients in the hospital". Note that the need specifies that these movements take place in the hospital; ambulance trips, for example, would be a separate need.

  3. Define the associated requirements

    Possible requirements associated with this need would be:

    -The ventilator can be carried by 1–5 hospital staff members

    -The ventilator continues to operate according to its specifications while being moved

    -The ventilator can pass through doors, corridors, or elevators in a typical hospital

  4. Validate these requirements

    Assuming we have a working prototype, we can test all these requirements with a simple usability test, by trying the prototype under the specified conditions and collecting user feedback.

    If we want to add quantified measurements, we can complement these tests with time measurements, for example, in order to evaluate the ease of this transportation. It will then be possible to compare it with other ventilators, or with standards imposing a maximum duration (a small sketch of such a check is given after this walkthrough).
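
A minimal sketch of how these transport tests could be recorded and checked automatically is shown below; the acceptance thresholds and measured values are purely hypothetical.

# Sketch of checking the ventilator transport requirements against measured
# test results; thresholds and measurements are purely hypothetical.
requirements = {
    "can be carried by the allowed number of staff": lambda r: r["staff_needed"] <= 5,
    "keeps operating within specification while moved": lambda r: r["spec_deviations"] == 0,
    "fits through typical doors, corridors and elevators": lambda r: r["blocked_passages"] == 0,
    "transport duration within an assumed standard": lambda r: r["transport_minutes"] <= 10,
}

measured = {"staff_needed": 2, "spec_deviations": 0,
            "blocked_passages": 0, "transport_minutes": 7}

for name, check in requirements.items():
    print(f"{name}: {'validated' if check(measured) else 'failed'}")
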

Limitations

  • Validation can give a false impression that a product is good. Validation needs to be performed well to be effective; otherwise it creates additional risk for the project and wastes time and money. Validation is not only about ticking boxes to check whether your product is good, but also about asking the right questions. A very important part of the quality of a validation rests on whether the definition of user needs and requirements is adequate. If it is not, the product is validated against the wrong standards and will not satisfy user needs, while the designers believe that it does [3]. Consequently, the validation itself must also be validated.
  • Validation takes time. It is a long process that must be applied several times during the project, so it can slow the project down considerably, even more so if the validation fails multiple times. Most projects have restricted timelines, and managers cannot afford to test every single idea before implementing it. It is then up to the project managers to weigh the risk of taking a decision without validating it; sometimes accepting that risk is worth it.
  • Validation deals with user needs, but do users know exactly what they really need? Validation is an evolutionary process: it evaluates designs by comparing them with existing ones, and it sticks closely to stated user needs. The best one can hope to get from this is an improved version of an existing product. It leaves little room for revolutionary designs, which can sometimes create new needs for users. The biggest innovations are often revolutionary, such as the internet, which became a need over the years even though it was not one before it existed [13].
  • Designs keep increasing in complexity. This is a key challenge in many domains: the variables needed to create the right product are ever-changing, and validation methods that are not kept up to date increasingly fail to ensure the right product.

Annotated bibliography

Bahill, A. T., Henderson, S. J. (2005). Requirements Development, Verification, and Validation Exhibited in Famous Failures

This article details the basics of validation, verification, and how to define requirements for a system. It explains clearly all the vocabulary of system testing in general. Moreover, the authors analyze several cases of project failures to underline the importance of validation and verification.

Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics

This article explains how to use heuristics to validate a user interface. Nielsen has developed a lot of methods to assess user experience in the domain of software development that are particularly relevant to design validation in general. A lot of references from his website https://www.nngroup.com/articles/ were used in this article.

Krüger Nico. (2020). Design Verification vs Design Validation, 6 Tips for Medical Device Makers

This web article focuses on applying validation and verification in the medical field. It gives a good example of how a validation process should be performed, which was used in the example above.

Olivier de Weck. 16.842 Fundamentals of Systems Engineering. Lecture 9: Verification and Validation. Fall 2015. Massachusetts Institute of Technology

This is a course from MIT that defines validation and verification for an engineering system and explains some of the stakes related to it. It gives a good overall view of the subject.

References

  1. Bahill, A. T., & Henderson, S. J. (2005). Requirements Development, Verification, and Validation Exhibited in Famous Failures. Systems Engineering, 8(1), 1–14. https://doi.org/10.1002/sys.20017
  2. eCFR: 21 CFR 820.3 – Definitions. Retrieved February 18, 2022, from https://www.ecfr.gov/current/title-21/chapter-I/subchapter-H/part-820/subpart-A/section-820.3
  3. Olivier de Weck. 16.842 Fundamentals of Systems Engineering. Lecture 9: Verification and Validation. Fall 2015. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu
  4. Hamilton, Thomas. Design Verification & Validation Process. Retrieved March 22, 2022, from https://www.guru99.com/design-verification-process.html
  5. Yllobre, Carlos. (2017). Understanding Verification and Validation in Product Design. https://blog.prototypr.io/understanding-verification-and-validation-in-product-design-ef8c993fd496
  6. Krüger, Nico. (2020). Design Verification vs Design Validation: 6 Tips for Medical Device Makers. Perforce. https://www.perforce.com/blog/alm/design-verification-validation-medical-device
  7. Bandurek, G. R. (2005). Making design validation effective. BioPharm International, 18(3 Suppl.), 18–24.
  8. (in French) Normand, Jean-Michel. (2017). Edsel, histoire d'un fiasco de l'automobile américaine. https://www.lemonde.fr/m-voiture/article/2017/02/24/edsel-histoire-d-un-fiasco-de-l-automobile-americaine_5084871_4497789.html
  9. Ariane flight V88 – Wikipedia. Retrieved March 23, 2022, from https://en.wikipedia.org/wiki/Ariane_flight_V88
  10. Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics. Proc. ACM CHI'94 Conf. (Boston, MA, April 24–28), 152–158.
  11. Schade, Amy. (2015). Pilot Testing: Getting It Right (Before) the First Time. https://www.nngroup.com/articles/pilot-testing/
  12. A/B testing – Optimizely. Retrieved March 23, 2022, from https://www.optimizely.com/optimization-glossary/ab-testing/
  13. Drues, Michael. (2020). Why Design Validation is More Than Testing: How to Validate Your Validation. YouTube. https://www.youtube.com/watch?v=H3hieepeLJ0