That Blooming appFlower

Sorry, that’s the best humorous line I could think of on short notice.

I’m working on developing an application for managing requirements and RFPs/RFIs for LIMS acquisition projects, and as part of that, I was asked to take a look at appFlower and use that to build the app. In the interest of documenting my findings with regard to appFlower, I figured it would be most useful to blog about things as I go along.

However, to begin, let me provide a little background as to what appFlower is. appFlower is an unusual beast in that it is an application development framework built on… another application development framework. That other framework is Symfony, a PHP framework somewhat akin to Rails for the Ruby development crowd. In the interest of full disclosure, I should reveal that I’m well acquainted with Rails, so forgive me if that colors any of this commentary.

As far as I can tell, there were a couple of goals behind the creation of appFlower. One was to shift from Symfony’s extensive use of YAML for configuration to XML, which one can argue is the more broadly used standard. The second goal, I believe, was to provide a rich UI for creating the various forms, etc., that you might want in a basic web application. As a result, you can build basic lists and data entry forms with a fairly nice UI, and add and edit records, with effectively no programming. That in itself is worthy of a couple of gold stars.

One thing that is important to realize when discussing rapid application development (RAD) frameworks is that they fail immediately on one key element of their basic sales pitch: “Build web apps in minutes,” or my favorite from the appFlower site, “Why write code that can be created automatically? Skip learning SQL, CSS, HTML, XML, XSLT, JavaScript, and other common programming languages (Java, Ruby and .Net).” This latter statement, besides being incorrect, is also an example of the sort of verbal sleight of hand most commonly seen in political advertising. One might note that the statement doesn’t mention PHP, which is at the heart of appFlower. You’d better know PHP if you want to do anything really useful here. You will also need to know CSS if you want the app to look any different than what you get out of the box, and you will certainly want to be able to deal with XML. You won’t need to know Java, Ruby, or .Net, as those aren’t the languages of this framework (which makes the claim somewhat disingenuous). I expect I’ll be writing JavaScript along the way as well. Oh, and you may not need to know SQL (we’ll see about that), but you will need to understand basic RDBMS concepts.

All application frameworks, and appFlower is no different, are intended to help seasoned developers move along more quickly by building the basic pieces and parts of a web application automatically. All that is left is adding the additional functionality you need for your specific requirements. By using the framework, the learning curve is reduced for everyone. At the same time, there is a feature of these frameworks that can be frustrating. Frequently you will find that if you simply put in a URL with a keyword like edit, the app just knows what you intended and you get the desired result. For instance, in appFlower, if you create an action and associate it with an edit widget, the app figures out from context what to do: if the action is associated with a row of data it knows you want to edit, and if it isn’t it knows you want to create a new row. However, you’ll never know when the app will or will not be able to figure out what you want until you try and get an error message. Which brings me to important safety tip #1: turn on debugging. Otherwise you will never have any idea of what is happening. Even then, the messages may not be that helpful, like my favorite, “Invalid input parameter 1, string expected, string given!” How silly of me to give the app what it expected.
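
For the record, turning on debugging doesn’t require anything exotic: appFlower sits on top of symfony 1.x, and symfony ships a development front controller that enables the debug toolbar and verbose error output. Here is a minimal sketch of that controller, assuming the stock symfony project layout and an application named “frontend” – the appFlower VM may use different names or paths, so treat this as illustrative rather than gospel.

    <?php
    // web/frontend_dev.php -- the standard symfony 1.x development front controller.
    // Hitting the app through this file instead of index.php turns on debug mode,
    // which surfaces the (sometimes cryptic) error messages mentioned above.
    // The application name 'frontend' and the file layout are assumptions from a
    // default symfony project; adjust them to match the actual appFlower install.

    require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php');

    // Third argument = true enables debug mode for the 'dev' environment.
    $configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'dev', true);
    sfContext::createInstance($configuration)->dispatch();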

As I was building a simple screen to add categories to assign to requirements, I wanted to add a delete button. So I looked for an example, and the only example exists in a canned plugin. In other words, I couldn’t find, even in the walkthrough for creating an app, the simple process of deleting a piece of data. Perhaps the appFlower team doesn’t think you should ever delete data. Typically, when you assign a URL to an action, they tell you to just specify the filename of the widget you just created (these files are typically XML files). In the case of deleting, however, there is no widget, even in the plugin. So I thought maybe the app simply knows what to do if I say delete (like with the plugin), but that didn’t work. deleteCategory? Nope. Finally I dug through the code to find the right action class PHP file for the plugin, and was able to determine which file I needed to edit to create the action. Then I used the model code in the plugin to create the code I needed. In PHP. In other words, this is not some magical system like Access, or FileMaker, or 4D that allows you to quickly bring up a fully functioning app with no coding. You will need to code.
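
To give a flavor of what that hand-written code looks like, here is a minimal sketch of a delete action in plain symfony 1.x / Propel style, the layer appFlower builds on. The module name, the Category model, and the “id” request parameter are placeholders I’ve chosen for illustration; appFlower’s own plugin action wraps more machinery around this, but the core idea is the same.

    <?php
    // apps/frontend/modules/category/actions/actions.class.php
    // Hypothetical module and model names -- adjust to the actual project.

    class categoryActions extends sfActions
    {
      public function executeDelete(sfWebRequest $request)
      {
        // Look the row up by primary key; 404 if it doesn't exist.
        $category = CategoryPeer::retrieveByPK($request->getParameter('id'));
        $this->forward404Unless($category);

        // Remove the row and send the user back to the list view.
        $category->delete();
        $this->redirect('category/index');
      }
    }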

This takes me to the next issue, which is support. Support appears to consist primarily of the couple of guys who run appFlower occasionally answering questions. Perhaps if I weren’t using the free VM that you can download, it would be different; paid support usually gets a better response. I’ve got a number of problems and questions outstanding, including the one about how to delete a row of data. Three days in and no response. Other questions and problems have been sitting in the queue for much longer. Don’t plan on a lot of help here. Unfortunately, unlike Rails (and probably Symfony itself), there is not yet a robust community of users and experts out there available to help answer your questions. The only way to figure things out is through the documentation.

There is an extensive documentation set and some nice examples. Unfortunately, the documentation lags development, sometimes in a significant way. I attempted to set up the afGuard plugin, which handles authentication and security. Unfortunately, if you follow the docs (there are a couple of different processes described) you will not be successful. For instance, widgets and layouts the docs claim exist do not. The documentation also tells you to copy certain files elsewhere; it turns out you don’t need to copy anything yourself, you can reference the files where they live by default, and that is the recommended behavior. Recommended by the developer, at any rate, who happened to answer this particular question. I’ve tried a couple of times to get security working, but in both instances the app just stopped working. Since I couldn’t get any response to the problems I posted on the site, I deleted the VM and started over again.

I’m on my third time through now, and am holding off on trying afGuard again until later. I will also make sure to back up the app and the data before I start, so I can restore the work I’ve done without having to wipe the slate and start over. The latest rendition of the app is moving along nicely. Since I’ve been so negative, let me tell you what has been easy. Dealing with my own tables and widgets: once you read a bit of the docs and look at a couple of examples to understand the approach, you can generate, very quickly, a nice tabbed interface with lists and forms for editing. The framework gets one-to-many relations, and if you put in a related field to edit, it will by default assume you want a drop-down list populated with the description field from the other table (which most of the time is precisely what you want). Adding and updating records, and creating those buttons, is trivial. I now know how to delete records, so that code will be transportable, though I don’t have it working perfectly yet. Apparently there is a template I need to create that will pop up and let you know how the delete attempt went. I don’t think that is documented anywhere, so I will have to see what I can figure out. Here is one page of the app as it stands right this second.

[Screenshot: applicationSnap1-2012-05-28-17-12.png]

So, I certainly didn’t get this up in minutes. If I had left security out, perhaps that would have been closer to the truth. The question is, did I save time versus using straight Symfony or Rails? With Rails, I can say categorically, no, because I know Rails and can get a basic app, including authentication, up very quickly. If I had used straight Symfony, I would have had to install the framework, get used to the way it generates files, and get up to speed on PHP, which I hadn’t looked at in quite a while. So appFlower may have saved some time there. Once I know the framework well, future apps will likely move along much faster, and maybe even faster than Rails.

The Autodidact

The other day I had an individual request some pointers on where to go to learn about a topic that was ancillary to a project she would be working on. The question arose as to whether there was real value in that. Certainly, since she would merely be executing scripts, there is not a significant short-term benefit to time spent learning about this topic.

On the other hand, given the organization and her general responsibilities, there will likely be long-term benefit to a little time spent now. There is, however, a more significant issue in view. In my experience, an individual who likes to spend time educating themselves makes a very powerful team member. This willingness to self-teach tends to result in stronger problem-solving skills and greater overall creativity. Failure to support such behavior means a failure to develop your team (and in this case it’s not taking much of a manager’s energy), as well as potentially driving a strong contributor to find an environment that values this behavior more. Something to think about.

A Core Issue

The Wall Street Journal this week published a piece about the impact of “bad apples,” or toxic, lazy, or incompetent employees, on the workplace. It makes sense that a bad employee affects an organization more than a good employee. It’s basically the second law of thermodynamics for corporations: there is a tendency toward maximum entropy (disorder), so the bad employee is simply going with the flow, while a good employee is fighting against that tendency.

I’ve seen this in action in the past, in a number of environments, both big and small. In smaller organizations, the problem is generally dealt with more rapidly, although there was one individual years ago who was hired on reputation but whose only skill appeared to be criticizing the work of others. This went on for a while until someone in management realized that he hadn’t actually produced anything, and then he was gone. In larger corporations, toxic or incompetent employees tend to stick around, in larger numbers, for a much longer time. There are a number of reasons for this, not the least of which is that HR and legal departments are extremely nervous about the slightest threat of litigation if every i is not dotted and every t crossed. Therefore, elaborate processes are established, which allow the employee in question to do just well enough not to be terminated, but never actually become one of the strong employees. There are other reasons as well, such as decisions being made fairly high up the management chain, where the decision maker is too far removed from the actual operations, and a generally useless performance review process. I’ve even heard of situations where senior management makes ranking and merit increase decisions before evaluations have been written, thus ensuring that the direct supervisors have little input in the process and thereby eliminating one very effective tool for managing difficult employees.

What is interesting is that this situation of a large number of bad apples, especially in larger organizations, came up at the end of last year, as I discussed here. The one cause of bad apples I haven’t discussed is training. There are plenty of bad apples who have absolutely no interest in improvement, but there are also employees who either start out wanting to improve, or wake up, smell the coffee, and come to realize that they need to improve their skills. Unfortunately, especially in larger corporations, where actual process improvement to drive down costs is out of the question, the solutions are often the following: restrict travel, restrict cell phones, restrict training, and outsource. All of these force the short-term improvement in the bottom line that drives most CEOs. All of these, at the same time, can cripple the organization in the long term.

So what does all of this have to do with the mission of InfoSynergetics? Well, the first point is simply a reality check. If you are going to engage in a project in any organization, especially a larger one, you have to be prepared to cope with the bad apples. I find it most useful to attempt to engage them head on. If they are toxic, don’t let them succeed with you. I had one situation where a somewhat toxic person involved with a project I was running would come into my office almost looking to pick a fight over some imagined slight. The more I refused to engage in combat or become defensive, and merely sought to solve their problems, the quicker the relationship improved and the toxicity faded. Employees who might not be very skilled present a worst-case scenario from a systems standpoint: how can I design a system that will allow them to be functional in spite of their lack of skills? In some cases, the lack of skills is not actually in the area they get paid for, but more in the IT arena. While these people hardly fit the category of bad apples (they are usually very effective in their disciplines), they represent a similar challenge from a design perspective – and thus can help in the creation of a truly useful system.

So, I guess the moral of the story is that while bad apples can be a drain on an organization, from a project and systems perspective we can find them to be, if not beneficial, then at least not inhibitory to successful project execution. We need to anticipate their presence and be prepared to adapt appropriately.

The Legacy of Steve Jobs

Steve Jobs passed away yesterday. There are probably few people, in the Western world at least, who don’t know that. Like many thousands of other bloggers, I’m left pondering the Jobs legacy.

Like many of those thousands of bloggers, much of what has happened in my career (at least the good parts) might not have happened if it weren’t for Apple and the philosophy of the Macintosh and subsequent products. Note that I said the philosophy. It wasn’t really about the machines per se. Although they are a fitting tribute to Jobs’ creativity and approach to life, that isn’t where his real legacy lies.

In the mid-’90s, the San Diego biotech world underwent a dramatic shift. What had been a predominantly Macintosh world was lured into the Novell/Lotus/Microsoft realm. While short-term hardware costs were reduced, to this day I’m not convinced that the loss in productivity that accompanied this change was ever offset by the short-term cost savings. Such is life in the modern business era: shortsightedness is the mantra, even if it’s not intentionally so. I, unfortunately, had to go along for the ride. At least part of the ride. I kept a Mac for a number of years, as the core systems of the department I had created were running on an Apple server. The tool I used, 4D, provided what was, I believe, the first truly, and easily, cross-platform development environment. You could host the solution on a PC or a Mac, and serve it to either platform. 4D, of course, was significantly influenced by Apple back in the day – even hoping to become the database standard for the platform. It was that influence, I think, that allowed 4D to be such a powerful tool in the hands of entrepreneurs. With little budget and no full-time developers, you could put in place extremely powerful solutions.

So that brings me back to the philosophy thing. Apple has long been about aesthetics. Steve, in his commencement address at Stanford, told the story of how his “dropping in” on a calligraphy class ultimately led to the beautiful typography we have on personal computers and other devices today. As anyone who lived in the command-line world of DOS can attest, the switch to a pleasing interface, especially for the non-techie folks, had a massive impact on how people worked. For the first time it felt as if the computer was a tool meant to empower the user – not a taskmaster under whose control we slaved. That is where the real legacy lies: the entire mindset that computers must truly benefit us in order to be valuable. That means they must be intuitive; in the words of Steve Jobs, they should “just work.” In the words of a book on interface design I’m reading, “don’t make me think.” My job is primarily not about computers per se. In the early years of my career, my job technically had nothing to do with computers. My brain power needed to be focused on solving business challenges. Every second spent trying to figure out how to make the &@!*& computer work was a waste of time.

The same is true of most business people. To the degree that developers of software and hardware follow this philosophy, their customers benefit, and they will succeed. To the degree that they don’t… This has been true of Apple, as well as of others. In the field I’ve predominantly been involved with for the past 10 years or so, that of LIMS, I’ve seen many shifts in the top product. The previous front runner didn’t go away; they just failed to follow this basic philosophy and fell to the back of the pack. Ultimately they end up acquired by someone else. It will happen again, and again, until somebody finally gets it.

The legacy that Steve leaves us, and the challenge as well, is to continue to make computer technology a thing that benefits mankind. It serves no purpose on its own, and that is something Steve, I think, would want us to remember.

Stuck in a Rut

Forbes has a series of interesting articles (part 1 here, part 2 here, and more to come) on the potential value of Enterprise Architecture (EA), written by Dan Woods of their CIO Central group.

He begins the series by citing Len Fehskens of The Open Group, who describes three levels of maturity that IT organizations can progress through:

Level 1: Getting technology to work at all
Level 2: Getting technology to do the right thing for the business
Level 3: Adapting technology to do the right thing efficiently

Fehskens argues that most companies are trapped at level 1, basically spending their time putting out fires. Woods argues that the “Raw Technology Persona” dominates at this level – where technology is implemented for its own sake, as opposed to for the benefit of the business. This topic, by itself, warrants extensive exploration, but I’ll leave that for another time.

More interesting is the idea that developing an architecture can provide the means for an IT organization to escape from level 1. A properly defined architecture provides a roadmap that describes both how IT is creating value for the organization and how new technologies and opportunities can, or cannot, continue on that path. If a technology (or, presumably, any system, not just a specific technology) doesn’t fit the architecture, the decision can be made not to pursue it. Of course, it is also imperative that everyone understand that EA is a process, and an expensive, difficult one at that.

This last bit in the first article really caught my eye. The development of a proper strategy should be difficult and time consuming, and nobody should get locked into an unchanging strategy. The focus, then, should primarily be on the strategy itself. Make the development of a long-term, progressively elaborated strategy the goal, and everything else is more likely to fall in line. I was recently speaking to a friend who indicated their company has agreed to a five-year plan to harmonize 18 of their sites. I was amazed, because most companies don’t really work in five-year horizons; frequently they work in next-quarter horizons. If they stick to their plan, I’ll be even more surprised.

The second article goes on to describe how the architecture leads you up the ladder, because you begin to understand what the business needs, and how technology supports that. Over time, as the architecture and organization mature, the vision becomes even more cohesive, where the interplay between technology and the business is viewed across portfolios and the entire company.

Woods also points out that there is a risk that, owing to the complexity of the methodologies (TOGAF occupies 750 pages), organizations become absorbed in the process and make the entire exercise about the methodology instead of the desired result. Once again, the technologists can find themselves implementing a technology (or in this case a methodology) for its own sake.

Perhaps, then, the first step down this road is not convincing senior management to fund the process fully, nor to implement an architecture program. Perhaps the proper first step is for all of us technologists to agree that the success of the business is what is most important.

What Corporate IT Can Learn from Apple

There have been a couple of articles in the past week about Apple that hold some interesting lessons for those of us doing IT in a corporate setting. I’ll take the last first: a story in the Wall Street Journal about Apple’s successful retail stores.

I’m hardly an expert in retail sales, but one point in this story provides an insight we can all use: “According to several employees and training manuals, sales associates are taught an unusual sales philosophy: not to sell, but rather to help customers solve problems. ‘Your job is to understand all of your customers’ needs—some of which they may not even realize they have,’ one training manual says.” They go on to quote one employee who said, “You were never trying to close a sale. It was about finding solutions for a customer and finding their pain points.”

Frequently, even in corporate IT, there is a tendency to want to close a sale: I’ve got a tool we’re deploying, or a skill you can use. More frequently the sale is for a project – usually several. For anyone in the business analysis space, and I use this term somewhat broadly, a solid project portfolio is demonstrative of your value, so the point is always to have a good project list in your pocket that can be presented to senior management. This mindset runs all the way to the top. Frequently, the CIO demonstrates value to the CEO and board by having a robust portfolio of programs that IT is working on. What would happen if portfolio size was no longer much of a metric?

The challenge here is, of course, how do you measure IT effectiveness? For Apple, it’s straightforward: you look at the sales numbers per square foot. In a corporate IT environment, you would need to come up with robust measurements of customer satisfaction, and then probably some good stories to share with management exemplifying this alternate approach. This is hard work, and a different way to approach IT management.

The other lessons come from an article at TechCrunch on the announcement of Apple’s iCloud offering. The focus of this article is the now famous Steve Jobs line, “it just works.” The author proposes that users are generally not interested in how something works, nor are they particularly interested in having to learn a complex series of button clicks and selections, along with deciphering technical jargon, in order to make an application or system work. Some are, that is for sure, but most aren’t.

One of the fundamental challenges with many systems deployed in the life sciences industry, especially the niche systems, is that they don’t “just work.” Interface design tends to be an afterthought, at best. Usually, the core users of the system get used to how to make it work, but many more casual users never do. I’ve seen it with EDMS, with LIMS, with financial apps, with… just about everything.

If those of us responsible for implementing solutions made it a key point to ensure that the solution “just worked” at the end of the day, I wonder how much easier all of our lives would be.

The User/Programmer Barrier

I do some web site management on the side, mostly for my church, whose site I recently inherited. I was asking one of the ministry leaders for content for an extremely outdated portion of the site. Her response was one I’ve heard countless times before, both as an analyst and as a programmer, but one I seldom hear discussed. She was hesitant to tell me what she was after because she doesn’t do a lot with computers.

Of course, when we are working on building a solution for an organization, whether it be a large enterprise-class system or a small website, many of our customers will not be especially computer literate. Some day that will change to some degree, but the reality is that it will never change completely. To bridge the gap between end user and programmer, the analyst role grew up, either as an IT function or within the business. Frequently, on many projects, you end up with an analyst in IT and someone who is functionally an analyst in the business (often known as the super user). Their job is to take what they understand to be the requirements of the business and translate them into something that programmers and developers can deal with. Along the way, however, end users tend to disappear from the process. The assumption is that their super user will be able to do the job.

I’ve stated before that super users (and for that matter IT business analysts) usually lack the necessary skills to perform all of the tasks assigned to them in the process of implementing an IT solution. That is still very much the case, but even a highly skilled super user or business analyst needs to keep in mind that a key part of their job must be engaging the key stakeholders in the process. The key stakeholders have a vision, or at the very least a notion of what’s not working. However, frequently their lack of computer savvy leads them to believe that they have little of value to add. I suspect they have been told too many times in the past that what they’d like is simply not possible, so they’ve grown to think there is no point in asking. The analysts and super users, however, must come to the realization that they can’t take no for an answer.

At the same time, you can’t simply force stakeholders to tell you what they want. Often it helps to prime the pump a little. In that earlier blog, I mentioned this book, which is focused on capability envisioning, not merely requirements. The authors prefer a process where the IT folks (analysts or super users) come to the table with a knowledge of what sorts of technical possibilities exist and engage the user in a discussion about what they would like to see from a business perspective (either opportunities to do more or problems that need to be fixed). In the example that triggered this post, I started down that road by suggesting some possibilities for the page and asking questions like, “What do you want to communicate?” and “What are the most important elements of the program?” Once that process started, the feedback started. Interestingly, that may be one of the most important skills an analyst can acquire.

Are CIOs Really IT Smart?

I have to start by expressing my continuing pleasure at the reporting of the Wall Street Journal. We’ve only been subscribing since the beginning of the school year, but since that time there have been a number of great articles on a variety of subjects. Recently I blogged on one report covering advances in healthcare delivery. This week, they published a detailed review of IT, its impact on businesses, and the challenges that corporate America faces in understanding and leveraging IT.

The motivation for creating the section was apparently a survey in which only 26% of business leaders reported being smart about IT. The editors felt that this was too low. While I agree that this number is much lower than it should be, I am not entirely certain it is accurate; I think the number is likely higher than reality, because many business leaders have too high an opinion of their IT knowledge. Frankly, I think that 26 percent would only be low for CIOs, and then only by about 10-20 percentage points. Of the many CIOs I’ve met over the years, most fall into two categories. Either they are old network or programming guys who have little appreciation for increasing business value, or they are graduates of some MBA program or another who remember a couple of lectures on IT and figure that this is the best route to the coveted CEO job. They usually climb the ladder by skipping any time in the “trenches” learning about how the world really works.

As evidence supporting my rather insulting assertion, we need look no farther than SONY and their Playstation Network debacle (6 days along and going strong). I can’t identify a CIO of SONY Playstation, but since Drew Martin is CIO of SONY Electronics, which appears to be a parent of SONY Playstation, I imagine he is the one to shoulder the blame. Now, we are talking about an organization entirely focused on IT products. Not only did they clearly fail to secure their network, they also did not have any form of backup in place. To make matters worse, and this goes to the crux of my argument, they have utterly failed to leverage a key element of 21st century IT to help mitigate damage among their customers: they have almost completely failed to use social media to communicate. Any CIO who claims to be IT smart would realize that social media can be your savior in these sorts of situations.

Of course, if one looks at Martin’s resume, you can’t be surprised that he isn’t that “smart.” He doesn’t appear to have any practical experience whatsoever. He took his Cornell engineering degree to the Irish basketball league, and then immediately to Accenture as a consultant. I’ve never ceased to be amazed at how consulting companies will hire people with no experience to consult. After four years as a JIT consultant, he was then deemed qualified to be a senior-level manager at Pepsi, in the business process improvement area. Perhaps he learned much while there, but it’s interesting that he doesn’t appear to have any real experience implementing IT solutions. Directors tend not to be in the details (presumably because they have spent sufficient time gaining experience in the details), and I don’t think he is an exception. That he would then create a culture that has little understanding of how to leverage IT to enhance the business is less than surprising. This is all quite sad, because I would hope that someone with real experience in business process reengineering and systems implementation would be the ideal CIO. I am just not sure that he has real experience.

Of course, I didn’t intend for this to be an anti-SONY diatribe, and I didn’t expect to focus this much energy on one CIO. On the other hand, I didn’t expect Martin to fit my model of the poorly prepared CIO quite so well. In the future, I do plan on returning to the Wall Street Journal section and exploring the details of some of the articles, which should inspire a great deal of soul searching among IT professionals in corporate America.

Update: You should watch this video in light of the Playstation disaster.

Integration Leads to Improvement

The Wall Street Journal recently ran a report on medical care and the sorts of innovations that are being proposed, and in many cases, tried, in the interest of improving care and reducing costs. I recommend most of the articles, as they are very interesting for a variety of reasons. In particular, though, I would like to focus on an article about electronic medical records.

Although it is beyond the scope of this entry, my initial reaction to the article was surprise that the government was the innovator. The more I thought about it, however, the more I realized the surprise was unwarranted. If I think about programs like DARPA (which most famously gave us the Internet) or NASA (a host of technologies, including popularizing such basic but useful items as Velcro), then I realize that the government is the source of some of the best technology improvements we’ve experienced over time. Those innovations may have been very expensive (and probably all were), but they were also revolutionary.

In this case, we know nothing of the cost, but we clearly see the outcome. A system that used to require thousands of folders and countless pieces of paper became one that is entirely electronic. I would be curious to know the cost savings in paper processing alone (labor, paper, ink, pens, etc.). However, the focus of the article was on the improvements in care that came from this system. These improvements did not result from mere paperwork reduction, but rather from integration and accessibility.

We see that any doctor anywhere within the VA system can access a patient’s records to review that patient’s medical history. Not only can the doctor see what diagnoses and tests have been performed, he can review the actual output of things like x-rays and MRIs. Access to this information not only prevents the ordering of duplicate tests, it allows the practitioner to more thoroughly evaluate the patient – even before the patient arrives for care. This accessibility also supports metrics generation, which can, in turn, be used to monitor doctor performance as well as treatment-versus-outcome sorts of intelligence gathering.

The other key aspect of the system is its integration with other systems, such as drug interaction databases and home health monitoring. These allow alerts to be provided to the practitioner, not only to prevent mistakes, but also to allow for intervention where it might otherwise not occur. In the example cited in the article, we learn that a patient has an inappropriate glucose level; this problem remains flagged in the system until it is corrected.

What I would really like to know is what the original program charter for this record system was. I suspect that the goal (especially back in the ’70s) was largely the reduction of paper, and that the integration and global accessibility were probably not primary goals of the system. However, I would guess that the majority of the benefit of the system, even on a cost-only basis, derives from these two items. This might well be an example of what I discussed earlier, where the real requirements for the system – the ones that would result in the most value – were not the primary drivers of the system. Unfortunately, there is no way to tell from the article, but I do wonder.

Quality, Validation, and SDLCs

This is a discussion of software development methodologies: which is really the GMP, quality approach, Waterfall or Agile? Many QA people will tell you that you need to follow some form of Waterfall method: write up requirements, then a functional specification, then a design specification, and only then can you begin coding. Only this method, they say, will ensure that you create a quality piece of software.

Now, the FDA has shifted its view on the subject to some degree. This is most notable in some of its guidance documents on software validation, which require that a company select an SDLC without requiring a specific one. However, if you take the entirety of the text, you can still feel a call for the more traditional Waterfall approach. Perhaps this, along with the general unwillingness of people in quality to change, is why Waterfall remains the default.

What I would like to do, though, is get back to fundamentals: explore what quality is and what validation is meant to accomplish, in order to answer the question posed at the beginning of this entry. This will all be very simple; it’s just forgotten most of the time.

Quality is simply the fitness for use of a product. If it meets your needs, in terms of function, price, longevity, etc., then it is a quality product. Validation is simply a way of establishing that a product is a quality product. We validate processes to establish that they produce products of acceptable quality. We validate software to establish that it meets the needs of the end user. It is interesting to note that fitness for use does not necessarily mean conformance with requirements. Just because something meets documented requirements does not mean that it is actually fit for use. I imagine the infamous Tacoma Narrows bridge met its requirements; the problem was simply that someone forgot a key requirement.

If, then, we need to ensure that software is fit for use, yet we can’t rely on software requirements as the arbiter of what fit means, where does that leave us? Traditional Waterfall methodology says that we must develop requirements, then build. While many accept that you can revisit requirements during the build phase, in practice this does not happen very often. What if we missed something? What if there was a subtle need that we didn’t detect during a typical requirements gathering exercise? Usually, this problem is not detected until the product has been released, and then it is too late. The end user can request changes, even complain, but then they are often accused of changing requirements, and the changes have to wait for some subsequent project.

This brings me to the subject of Agile programming. I suspect most people view Agile as a means to shortcut documentation and get the program out faster. While it is true that Agile may result in less meaningless documentation, and will likely produce software faster, these are results, not drivers, of the process. The key driver of Agile is good software. This is enabled, in part, by much closer integration with the business during the process. It depends on extensive prototyping as a means of confirming understanding, by both the customer and the developer, of the actual needs the system is to fulfill. In other words, it is focused on developing a solid understanding of what will make the system fit for use. It does this by placing a focus on business understanding, first and foremost.

So, if Agile produces software that better fits the needs of the business (versus merely the stated requirements), that makes it quality software, and it makes both the process used to develop it and the software itself eminently validatable.