Quality, Validation, and SDLCs

Discussions of software development methodologies often come down to one question: which is really the GMP, quality-minded approach, Waterfall or Agile? Many QA people will tell you that you need to follow some form of Waterfall method. Write up requirements, then a functional specification, then a design specification, and only then can you begin coding. Only this method, they say, will ensure that you create a quality piece of software.

Now, the FDA has shifted its view on the subject, to some degree. This is most notable in some of its guidance documents on software validation, where it requires that a company select an SDLC without requiring a specific one. However, if you take the text as a whole, you can still feel a call for the more traditional Waterfall approach. Perhaps this, along with the general unwillingness of people in quality to change, is why so many still insist on Waterfall.

What I would like to do, though, is get back to fundamentals: back to exploring what quality is and what validation is meant to accomplish, in order to answer the question posed at the beginning of this entry. This will all be very simple; it's just forgotten most of the time.

Quality is simply the fitness for use of a product. If it meets your needs, in terms of function, price, longevity, and so on, then it is a quality product. Validation is simply a way of establishing that a product is a quality product. We validate processes to establish that they produce products of acceptable quality. We validate software to establish that it meets the needs of the end user. It is interesting to note that there is an understanding that fitness for use does not necessarily mean conformance with requirements. Just because something meets documented requirements does not mean that it is actually fit for use. I imagine the infamous Tacoma Narrows bridge met its requirements; the problem was simply that someone forgot a key requirement.

If, then, we need to ensure that software is fit for use, yet we can't rely on software requirements as the arbiter of what fit means, where does that leave us? Traditional Waterfall methodology says that we must develop requirements, then build. While many accept that you can revisit requirements during the build phase, in practice this does not happen very often. What if we missed something? What if there was a subtle need that we didn't detect during a typical requirements gathering exercise? Usually, this problem is not detected until the product has been released, and then it is too late. The end user can request changes, even complain, but they are then often accused of changing requirements, and the changes have to wait for some subsequent project.

This brings me to the subject of Agile programming. I suspect most people view it as a means to shortcut documentation and get the program out faster. While it is true that Agile may result in less meaningless documentation, and will likely produce software faster, these are results, not drivers, of the process. The key driver of Agile is good software. This is enabled, in part, by much closer integration with the business throughout the process. It depends on extensive prototyping as a means of confirming, for both the customer and the developer, the actual needs the system is to fulfill. In other words, it is focused on developing a solid understanding of what will make the system fit for use. It does this by placing the focus on business understanding, first and foremost.

So, if Agile produces software that better fits the needs of the business (as opposed to merely the stated requirements), then it produces quality software, and that makes both the process used to develop it and the software itself eminently validatable.

IT Catches On

During a discussion about the future of LIMS over at LinkedIn, one poster suggested that the term “WebLIMS” be stricken from our vocabulary. I tend to agree, for the reasons the poster gave. Perhaps we should start a list like the one that comes out every January, naming terms that should be stricken from the language for misuse and overuse. The first one on my list would be the term LIMS itself.

LIMS would qualify on the misuse grounds. The reality of every LIMS that I’ve seen (and I’ve seen all of the major ones) is that they are not at all focused on managing laboratory information. Rather, they are generally sample logging and test result recording systems. To argue that this represents “laboratory information” is like arguing that an engine is the entire car. The engine may be central and critical, but it is not what most users of the car interact with directly.
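To make that concrete, here is a minimal sketch of the data model most LIMS effectively implement: log a sample, run tests, record results. The class and field names are hypothetical, not any vendor's actual schema; the point is how narrow a slice of laboratory information this workflow covers.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of the core model most LIMS actually implement:
# a sample is logged, tests are run, results are recorded. Anything
# beyond this narrow workflow (metrics, submitter context, metadata)
# simply is not represented.

@dataclass
class TestResult:
    test_name: str       # e.g., "pH" or "Assay"
    value: float
    units: str
    analyst: str
    recorded_at: datetime

@dataclass
class Sample:
    sample_id: str                  # assigned at login
    submitted_by: str
    logged_at: datetime
    status: str = "logged"          # logged -> in_test -> complete
    results: list[TestResult] = field(default_factory=list)

    def record_result(self, result: TestResult) -> None:
        """Record a test result and move the sample into testing."""
        self.results.append(result)
        self.status = "in_test"

    def complete(self) -> None:
        """Mark all testing on this sample as finished."""
        self.status = "complete"
```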

When a laboratory embarks on implementing a LIMS, the stakeholders have very much competing interests. Unfortunately, this is often overlooked. It may even be that some stakeholders misunderstand their own interests. For instance, people who submit samples to an analytical lab think that by supporting the analysts' request for increased efficiency they are supporting their own desire for improvement in their business processes. In other words, they substitute the analytical group's requirements for their own. However, simply improving turnaround won't generally fulfill all the needs of the submitting groups. Does faster turnaround result in easier submission? Easier status updates? Easier data mining? Getting results faster is certainly a requirement, but it is not the only one, nor necessarily the most important, for key sets of stakeholders.

If we look at management, a similar problem occurs. For a service organization like an analytical lab, happy customers are the requirement, as is the collection of metrics that facilitate a true quality management program (you just knew I would throw that in there).
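As one hedged illustration of what such metric collection might look like, the sketch below reuses the hypothetical Sample and TestResult classes from the earlier sketch to compute a turnaround time metric. The telling part is that the timestamps it needs are already captured at login and result entry; the gap is that LIMS rarely treat them as first-class information for management.

```python
from statistics import mean

def turnaround_days(sample: Sample) -> float:
    """Days from sample login to the last recorded result."""
    if not sample.results:
        raise ValueError(f"no results recorded for {sample.sample_id}")
    last_result = max(r.recorded_at for r in sample.results)
    return (last_result - sample.logged_at).total_seconds() / 86400.0

def mean_turnaround(samples: list[Sample]) -> float:
    """A basic management metric: average turnaround across completed samples."""
    completed = [s for s in samples if s.status == "complete"]
    if not completed:
        raise ValueError("no completed samples to report on")
    return mean(turnaround_days(s) for s in completed)
```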

If we look at all of these additional requirements, we should realize a couple of things. One is that laboratory information is much broader than most LIMS really take into account; the second is that we should rethink the process by which we implement LIMS. In other words, in order for LIMS to truly be LIMS, we need to do things, and think about things, very differently. No longer will simply slapping on the latest interface technology, or installing yet another new LIMS, fix the inefficiencies and challenges our analytical laboratories face. Many years ago, W.E. Deming made the point that the latest new gadget will not fix your quality challenges. That notion is as true today as it was then.

If LIMS is to be truly LIMS, it needs to be as concerned with metrics as with test results, and with metadata as much as with logging procedures. These cannot be secondary concerns, relegated to mere afterthoughts in the implementation process. Only once LIMS really and effectively addresses all laboratory information will LIMS implementations begin to succeed.