The Legacy of Steve Jobs

Steve Jobs passed away yesterday. There are probably few people, at least in the western world, who don't know that. Like many thousands of other bloggers, I'm left pondering the Jobs legacy.

Like many of those thousands of bloggers, much of what has happened in my career (at least the good parts) might not have happened if it weren't for Apple and the philosophy of the Macintosh and subsequent products. Note that I said the philosophy. It wasn't really about the machines per se. Although they are a fitting tribute to Jobs' creativity and approach to life, that isn't where his real legacy lies.

In the mid-90s, the San Diego biotech world underwent a dramatic shift. What had been a predominantly Macintosh world was lured into the Novell/Lotus/Microsoft realm. While short-term hardware costs were reduced, to this day I'm not convinced that the loss in productivity that accompanied this change was ever offset by the short-term cost savings. Such is life in the modern business era. Shortsightedness is the mantra, even if it's not intentionally so. I, unfortunately, had to go along for the ride. At least part of the ride. I kept a Mac for a number of years, since the core systems of the department I had created were running on an Apple server. The tool I used, 4D, provided what was, I believe, the first truly, and easily, cross-platform development environment. You could host the solution on a PC or a Mac, and serve it to either platform. 4D, of course, was significantly influenced by Apple back in the day – even hoping to become the database standard for the platform. It was that influence, I think, that allowed 4D to be such a powerful tool in the hands of entrepreneurs. With little budget, and no full-time developers, you could put in place extremely powerful solutions.

So that brings me back to the philosophy thing. Apple has long been about aesthetics. Steve, in his commencement address at Stanford, told the story of how his "dropping in" on a calligraphy class ultimately led to the beautiful typography we have on personal computers and other devices today. As anyone who lived in the command-line world of DOS can attest, the switch to a pleasing interface, especially for the non-techie folks, had a massive impact on how people worked. For the first time it felt as if the computer was a tool meant to empower the user – not a task master under whose control we slaved. That is where the real legacy lies: the mindset that computers must truly benefit us in order to be valuable. That means they must be intuitive; in the words of Steve Jobs, they should "just work." In the words of a book on interface design I'm reading, "don't make me think." My job is primarily not about computers per se. In the early years of my career, my job technically had nothing to do with computers. My brain power needed to be focused on solving business challenges. Every second spent trying to figure out how to make the &@!*& computer work was a waste of time.

The same is true of most business people. To the degree that developers of software and hardware follow this philosophy, their customers benefit, and they will succeed. To the degree that they don't… This has been true of Apple, as well as others. In the field I've predominantly been involved with for the past 10 years or so, that of LIMS, I've seen many shifts in the top product. The previous front runner didn't go away; they just failed to follow this basic philosophy and fell to the back of the pack. Ultimately they end up acquired by someone else. It will happen again, and again, until somebody finally gets it.

The legacy that Steve leaves us, and the challenge as well, is to continue to make computer technology a thing that benefits mankind. It serves no purpose on its own, and that is something Steve, I think, would want us to remember.

Stuck in a Rut

Forbes has a series of interesting articles (part 1 here, part 2 here, and more to come) on the potential value of Enterprise Architecture (EA), written by Dan Woods of their CIO Central group.

He begins the series by citing Len Fehskens of the Open Group, who describes three levels of maturity that IT organizations can progress through:

Level 1: Getting technology to work at all
Level 2: Getting technology to do the right thing for the business
Level 3: Adapting technology to do the right thing efficiently

Fehskens argues that most companies are trapped at level 1, basically spending their time putting out fires. Woods argues that the “Raw Technology Persona” dominates at this level – where technology is implemented for its own sake, as opposed to for the benefit of the business. This topic, by itself, warrants extensive exploration, but I’ll leave that for another time.

More interesting is that the development of an architecture can provide the means for an IT organization to escape from level 1. This is because a properly defined architecture provides a roadmap that describes both how IT is creating value for the organization and how new technologies and opportunities can, or cannot, continue on that path. If a technology (or, presumably, any system, not just a specific technology) doesn't fit the architecture, the decision can be made not to pursue it. Of course, it is also imperative that everyone understand that EA is a process, and an expensive, difficult one at that.
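
As a purely illustrative sketch of that kind of gatekeeping (the approved platforms, required capabilities, and proposal fields below are my own assumptions, not anything from the Forbes articles), a fit check can be as simple as:

```python
# Hypothetical sketch of an architecture "fit check": screening a proposed
# technology against the standards the architecture defines before committing
# resources to it. Platform names and proposal fields are invented examples.

APPROVED_PLATFORMS = {"java", "oracle", "linux"}
REQUIRED_CAPABILITIES = {"audit_trail", "single_sign_on"}

def fits_architecture(proposal: dict) -> bool:
    """Return True if the proposed system stays on the architectural roadmap."""
    on_platform = proposal["platform"] in APPROVED_PLATFORMS
    has_capabilities = REQUIRED_CAPABILITIES.issubset(proposal["capabilities"])
    return on_platform and has_capabilities

# A proposal on an unapproved platform gets flagged, and the decision can be
# made not to pursue it.
proposal = {"platform": "proprietary_4gl", "capabilities": {"audit_trail"}}
print(fits_architecture(proposal))  # False
```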

This last bit in the first article really caught my eye. The development of a proper strategy should be difficult and time consuming, and nobody should get locked into an unchanging strategy. The goal, then, should primarily be the strategy itself. Make the development of a long-term, progressively elaborated strategy the goal, and everything else is more likely to fall in line. I was recently speaking to a friend who indicated their company has agreed to a five-year plan to harmonize 18 of their sites. I was amazed, because most companies don't really work in five-year horizons. Frequently they work in next-quarter horizons. If they stick to their plan, I'll be even more surprised.

The second article goes on to describe how the architecture leads you up the ladder, because you begin to understand what the business needs, and how technology supports that. Over time, as the architecture and organization mature, the vision becomes even more cohesive, where the interplay between technology and the business is viewed across portfolios and the entire company.

Woods also points out that there is a risk that, owing to the complexity of the methodologies (TOGAF occupies 750 pages), organizations can become absorbed in the process, and make the entire exercise about it instead of the desired result. Once again, the technologists can find themselves implementing a technology (or in this case a methodology) for its own sake.

Perhaps, then, the first step down this road is not convincing senior management to fund the process fully, nor to implement an architecture program. Perhaps the proper first step is for all of us technologists to agree that the success of the business is what is most important.

What Corporate IT Can Learn from Apple

There have been a couple of articles in the past week about Apple that hold some interesting lessons for those of us doing IT in a corporate setting. I'll take the last first, a story in the Wall Street Journal about Apple's successful retail stores.

I'm hardly an expert in retail sales, but one point in this story provides an important insight we can all use: "According to several employees and training manuals, sales associates are taught an unusual sales philosophy: not to sell, but rather to help customers solve problems. 'Your job is to understand all of your customers' needs—some of which they may not even realize they have,' one training manual says." They go on to quote one employee who said, "You were never trying to close a sale. It was about finding solutions for a customer and finding their pain points."

Frequently, even in corporate IT, there is a tendency to want to close a sale. I’ve got a tool we’re deploying, or a skill you can use. More frequently the sale is for a project – usually several. For anyone in the Business Analysis space, and I use this term somewhat broadly, a solid project portfolio is demonstrative of your value. Therefore, the point is always to have a good project list in your pocket that can be presented to senior management. This mindset runs all the way to the top. Frequently, the CIO demonstrates value to the CEO and board by having a robust portfolio of programs that IT is working on. What would happen if portfolio size was no longer much of a metric?

The challenge here is, of course, how do you measure IT effectiveness? For Apple, it's straightforward – you look at the sales numbers per square foot. In a corporate IT environment, you would need to come up with robust measurements of customer satisfaction, and then probably some good stories to share with management exemplifying this alternate approach. This is hard work, and a different way to approach IT management.
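
Just to make the idea concrete, and purely as an invented example (the survey fields, the 1-5 scale, and the service names are assumptions, not a proposed standard), a satisfaction-based measure could start out as simply as:

```python
# Illustrative sketch only: an IT effectiveness measure built from customer
# satisfaction rather than portfolio size. The survey fields, scale, and
# service names are assumptions for the example.

surveys = [
    {"service": "LIMS support", "satisfaction": 4, "problem_solved": True},
    {"service": "LIMS support", "satisfaction": 2, "problem_solved": False},
    {"service": "EDMS rollout", "satisfaction": 5, "problem_solved": True},
]

def effectiveness_by_service(responses):
    """Average satisfaction and problem-resolution rate per IT service."""
    totals = {}
    for r in responses:
        s = totals.setdefault(r["service"], {"n": 0, "sat": 0, "solved": 0})
        s["n"] += 1
        s["sat"] += r["satisfaction"]
        s["solved"] += int(r["problem_solved"])
    return {
        svc: {
            "avg_satisfaction": round(v["sat"] / v["n"], 2),
            "resolution_rate": round(v["solved"] / v["n"], 2),
        }
        for svc, v in totals.items()
    }

print(effectiveness_by_service(surveys))
```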

The other lessons come from an article at TechCrunch on the announcement of Apple's iCloud offering. The focus of this article is the now famous Steve Jobs line, "it just works." The author proposes that users are generally not interested in how something works, nor are they particularly interested in having to learn a complex series of button clicks and selections, along with deciphering technical jargon, in order to make an application or system work. Some are, that is for sure, but most aren't.

One of the fundamental challenges with many systems deployed in the life sciences industry, especially the niche systems, is that they don't "just work." Interface design tends to be an afterthought, at best. Usually, the core users of the system get used to how to make it work, but many more casual users never do. I've seen it with EDMS, with LIMS, with financial apps, with… just about everything.

If those of us responsible for implementing solutions made it a key point to ensure that the solution “just worked” at the end of the day, I wonder how much easier all of our lives would be.

The User/Programmer Barrier

I do some web site management on the side, mostly for my church, whose site I recently inherited. I was asking one of the ministry leaders for content for an extremely outdated portion of the site. Her response was one I’ve heard countless times before, both as an analyst and as a programmer, but I seldom hear discussed. She was hesitant to tell me what she was after because she doesn’t do a lot with computers.

Of course, when we are working on building a solution for an organization, whether it be a large enterprise-class system or a small website, many of our customers will not be extremely computer literate. Some day that will change to some degree, but the reality is that it will never change completely. In order to bridge the gap between end user and programmer, the analyst role grew up, either as an IT function or within the business. Frequently, on many projects, you end up with an analyst in IT and someone who is functionally an analyst in the business (often known as the super user). Their jobs are to take what they understand to be the requirements of the business and translate them into something that programmers/developers can deal with. Along the way, however, end users tend to disappear from the process. The assumption is that their super user will be able to do the job.

I've stated before that super users (and, for that matter, IT business analysts) usually lack the necessary skills to perform all of the tasks assigned to them in the process of implementing an IT solution. That is still very much the case, but even a highly skilled super user or business analyst needs to keep in mind that a key part of their job must be engaging the key stakeholders in the process. The key stakeholders have a vision, or at the very least, a notion of what's not working. However, frequently their lack of computer savvy leads them to believe that they have little of value to add. I suspect they have been told too many times in the past that what they'd like is simply not possible, so they've grown to think that there is no point in asking. The analysts and super users, however, must come to the realization that they can't take no for an answer.

At the same time, you can't simply force stakeholders to tell you what they want. Often it helps to prime the pump a little bit. In that earlier blog, I mentioned this book, which is focused on capability envisioning, not merely requirements. The authors prefer a process where the IT folks (analysts or super users) come to the table with a knowledge of what sorts of technical possibilities exist and engage the user in a discussion about what they would like to see from a business perspective (either opportunities to do more or problems that need to be fixed). In the example that triggered this post, I started down that road by suggesting some possibilities of things we could do with the page, and asking questions like "what do you want to communicate?" and "what are the most important elements of the program?" Once that process started, the feedback started. Interestingly, that may be one of the most important skills an analyst can acquire.

Are CIOs Really IT Smart?

I have to start by expressing my continuing pleasure at the reporting of the Wall Street Journal. We've only been subscribing since the beginning of the school year, but since that time there have been a number of great articles on a variety of subjects. Recently I blogged on one report covering advances in healthcare delivery. This week, they published a detailed review of IT, its impact on businesses, and the challenges that corporate America faces in understanding and leveraging IT.

The motivation for creating the section was apparently a survey where only 26% of business leaders reported being smart about IT. The editors felt that this was too low. While I agree that this number is much lower than it should be, I am not entirely certain that it is accurate. I think the number is likely higher than reality, because many business leaders have too high an opinion of their IT knowledge. Frankly, I think that 26 percent would only be low for CIOs, and then only by about 10-20 percentage points. Of the many CIOs I’ve met over the years, most fall into two categories. Either they are old network or programming guys who have little appreciation for increasing business value, or they are graduates of some MBA program or another, remember a couple of lectures on IT, and figure that this is the best route to the coveted CEO job. They usually climb the ladder by skipping any time in the “trenches”, learning about how the world really works.

As evidence supporting my rather insulting assertion, we need look no further than SONY and their Playstation Network debacle (six days along and going strong). I can't identify a CIO of SONY Playstation, but since Drew Martin is CIO of SONY Electronics, which appears to be a parent of SONY Playstation, I imagine he is the one to shoulder the blame. Now, we are talking about an organization entirely focused on IT products. Not only did they clearly fail to secure their network, they also did not have any form of backup in place. To make matters worse, and this goes to the crux of my argument, they have utterly failed to leverage a key element of 21st century IT to help mitigate damage amongst their customers. They have almost completely failed to utilize social media to communicate. Any CIO who claims to be IT smart would realize that social media can be your savior in these sorts of situations.

Of course, if one looks at Martin's resume, you can't be surprised that he isn't that "smart". He doesn't appear to have any practical experience whatsoever. He took his Cornell engineering degree to the Irish basketball league, and then immediately to Accenture as a consultant. I've never ceased to be amazed at how consulting companies will hire people with no experience to consult. After four years as a JIT consultant, he was then qualified to be a senior-level manager at Pepsi, in the business process improvement area. Perhaps he learned much while there, but it's interesting that he doesn't appear to have any real experience implementing IT solutions. Directors tend not to be in the details (presumably because they have spent sufficient time gaining experience in the details), and I don't think he is an exception. That he would then create a culture that has little understanding of how to leverage IT to enhance the business is less than surprising. This is all quite sad, because I would hope that someone with real experience in business process reengineering and systems implementation would be the ideal CIO. I am just not sure that he has real experience.

Of course, I didn't intend for this to be an anti-SONY diatribe. I didn't expect to focus this much energy on one CIO. On the other hand, I didn't expect Martin to fit my model of the poorly prepared CIO quite so well. In the future, I do plan on returning to the Wall Street Journal and exploring the details of some of the articles, which should inspire a great deal of soul searching among IT professionals in corporate America.

Update: You should watch this video in light of the Playstation disaster.

Integration Leads to Improvement

The Wall Street Journal recently ran a report on medical care and the sorts of innovations that are being proposed, and in many cases, tried, in the interest of improving care and reducing costs. I recommend most of the articles, as they are very interesting for a variety of reasons. In particular, though, I would like to focus on an article about electronic medical records.

Although it is beyond the scope of this entry, my initial reaction to the article was surprise that the government was the innovator. However, the more I thought about it, the more I realized the surprise was unwarranted. If I think about programs like DARPA (which most famously gave us the Internet), or NASA (a host of technologies, with the most basic, but most useful, being Velcro), then I realize that the government is the source of some of the best technology improvements we've experienced over time. Those innovations may have been very expensive (and probably all were), but they were also revolutionary.

In this case, we know nothing of the cost, but we clearly see the outcome. A system that used to require thousands of folders and countless pieces of paper became one that is entirely electronic. I would be curious to know the cost savings in paper processing alone (labor, paper, ink, pens, etc.). However, the focus of the article was on the improvements in care that came from this system. These improvements did not result from mere paperwork reduction, but rather from integration and accessibility.

We see that any doctor, anywhere within the VA system, can access a patient's records to determine his medical history. Not only can he see what diagnoses and tests have been performed, he can review the actual output of things like x-rays and MRIs. Access to this information not only prevents the ordering of duplicate tests, but allows the practitioner to more thoroughly evaluate the patient – even before the patient arrives for care. This accessibility also supports metrics generation, which can, in turn, be used to monitor doctor performance as well as treatment-vs.-outcome sorts of intelligence gathering.

The other key aspect of the system is its integration with other systems, such as drug interaction databases and home health monitoring. These allow alerts to be provided to the practitioner, not only to prevent mistakes, but also to allow for intervention where it might otherwise not occur. In the example cited in the article, we learn that a patient has an inappropriate glucose level. This problem remains flagged in the system until it is corrected.
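
As a toy illustration of that "flagged until corrected" behavior (the glucose thresholds and field names below are my own assumptions, not the VA's actual rules or implementation), the logic amounts to something like this:

```python
# Toy sketch of a persistent clinical alert: the flag is raised when a reading
# is out of range and stays active until a later reading clears it.
# Thresholds and field names are assumptions, not the VA's actual rules.

GLUCOSE_RANGE_MG_DL = (70, 180)  # illustrative limits only

class PatientRecord:
    def __init__(self):
        self.alerts = set()

    def record_glucose(self, value_mg_dl: float) -> None:
        low, high = GLUCOSE_RANGE_MG_DL
        if value_mg_dl < low or value_mg_dl > high:
            self.alerts.add("glucose_out_of_range")      # raise the flag
        else:
            self.alerts.discard("glucose_out_of_range")  # corrected, flag clears

record = PatientRecord()
record.record_glucose(240)
print(record.alerts)  # {'glucose_out_of_range'} - visible to any practitioner
record.record_glucose(110)
print(record.alerts)  # set() - the problem no longer appears
```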

What I would really like to know is this: what was the original program charter for this record system? I suspect that the goal (especially back in the '70s) was largely the reduction of paper. The integration and global accessibility were probably not primary goals of the system. However, I would guess that the majority of the benefit of the system, even on a cost-only basis, derives from these two items. This might well be an example of what I discussed earlier, where the real requirements for the system – the ones that would result in the most value – were not the primary drivers of the system. Unfortunately, there is no way to tell from the article, but I do wonder.

Quality, Validation, and SDLCs

This is a discussion of software development methodologies: which is really the GMP, quality approach – Waterfall or Agile? Many QA people will tell you that you need to follow some form of Waterfall method. Write up requirements, then a functional specification, then a design specification, and only then can you begin coding. Only this method, they say, will ensure that you create a quality piece of software.

Now, the FDA has shifted its view on the subject, to some degree. This is most notable in some of its guidance documents on software validation, where it requires that a company select an SDLC, without requiring a specific one. However, if you take the entirety of the text, you can still feel a call for the more traditional Waterfall approach. Perhaps this, as well as the general unwillingness of people in quality to change, is why the Waterfall bias persists.

What I would like to do, though, is get back to the fundamentals – back to exploring what quality is and what validation is meant to accomplish – in order to answer the question posed at the beginning of this entry. This is all very simple; it's just forgotten most of the time.

Quality is simply the fitness for use of a product. If it meets your needs, in terms of function, price, longevity, etc., then it is a quality product. Validation is simply a way of establishing that a product is a quality product. We validate processes to establish that they produce products of acceptable quality. We validate software to establish that it meets the needs of the end user. It is interesting to note that there is an understanding that fitness for use does not necessarily mean conformance with requirements. Just because something meets documented requirements does not mean that it actually is fit for use. I imagine the infamous Tacoma Narrows bridge met its requirements; the problem was simply that someone forgot a key requirement.

If, then, we need to ensure that software is fit for use, yet we can't rely on software requirements as the arbiter of what fit means, where does that leave us? Traditional Waterfall methodology says that we must develop requirements, then build. While many accept that you can revisit requirements during the build phase, in practice this does not happen very often. What if we missed something? What if there was a subtle need that we didn't detect during a typical requirements gathering exercise? Usually, this problem is not detected until the product has been released. Then, it is too late. The end users can request changes, even complain, but then they are often accused of changing requirements, and the changes have to wait until some subsequent project.

This brings me to the subject of Agile programming. I suspect most people view this as a means to shortcut documentation and get the program out faster. While it is true that Agile may result in less meaningless documentation, and will likely produce software faster, these are results, not drivers of the process. The key driver of Agile is good software. This is enabled, in part, by much closer integration with the business during the process. It is dependent on extensive prototyping as a means of confirming, for both the customer and the developer, the actual needs the system is to fulfill. In other words, it is focused on developing a solid understanding of what will make the system fit for use. It does this by placing a focus on business understanding, first and foremost.

So, if Agile produces software that better fits the needs of the business (vs. the stated requirements), that makes the software quality software, and it makes both the process used to develop it and the software itself eminently validatable.

LIMS is Dead! Long live LIMS!

During a discussion around the future of LIMS over at LinkedIn, one poster suggested that the term “WebLIMS” be stricken from our vocabulary. I tend to agree, for the reasons specified. Perhaps we should start a list like the one that comes out every January, listing terms that should be stricken from the language for misuse and overuse. The first one on my list would be the term LIMS, itself.

LIMS would qualify on the misuse grounds. The reality of every LIMS that I've seen (and I've seen all of the major ones) is that they are not at all focused on managing laboratory information. Rather, they are generally sample logging and test result recording systems. To argue that this represents "laboratory information" is like arguing that an engine is the entire car. The engine may be central and critical, but it is not what most users of the car interact with directly.

When a laboratory embarks on implementing a LIMS, there are very much competing interests on the part of the stakeholders. Unfortunately, this is often overlooked. It may even be that some stakeholders misunderstand their own interests. For instance, people who submit samples to an analytical lab think that by supporting the analysts' request for increased efficiency they are supporting their own desire for improvement in their business processes. In other words, they substitute the analytical group's requirements for their own. However, simply improving turnaround won't generally fulfill all the needs of the submitting groups. Does faster turnaround result in easier submission? Easier status updates? Easier data mining? Getting results faster is certainly a requirement, but it is not the only one, nor necessarily the most important one, to key sets of stakeholders.

If we look at management, a similar problem occurs. For a service organization like an analytical lab, happy customers are the requirement, as is the collection of metrics that facilitate a true quality management program (you just knew I would throw that in there).

If we look at all of these additional requirements, we should realize a couple of things. One is that laboratory information is much broader than most LIMS really take into account, and the second is that we should rethink the process by which we implement LIMS. In other words, in order for LIMS to truly be LIMS, we need to do things, and think about things, very differently. Simply slapping on the latest interface technology, or installing yet another new LIMS, will no longer fix the inefficiencies and challenges our analytical laboratories face. Many years ago, W.E. Deming made the point that the latest new gadget will not fix your quality challenges. That notion is as true today as it was then.

If LIMS is to truly be LIMS, it needs to be as concerned with metrics as with test results, and with metadata as much as logging procedures. These cannot be secondary concerns, relegated to mere afterthoughts in the implementation process. Only once LIMS really, and effectively, addresses all laboratory information will LIMS implementations begin to succeed.
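
To make that a little more concrete, here is a deliberately simplified sketch of what treating metrics and metadata as first-class laboratory information might look like at the data-model level; the field names are illustrative only, not drawn from any particular product.

```python
# Simplified sketch: a LIMS record that carries metadata and service metrics
# alongside results, rather than treating them as afterthoughts.
# Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Sample:
    sample_id: str
    submitted_by: str            # metadata: who is the customer?
    project: str                 # metadata: which business process does this serve?
    submitted_at: datetime
    results: dict = field(default_factory=dict)
    completed_at: Optional[datetime] = None

    def turnaround_days(self) -> Optional[float]:
        """A basic service metric, captured per sample rather than reconstructed later."""
        if self.completed_at is None:
            return None
        return (self.completed_at - self.submitted_at).total_seconds() / 86400

s = Sample("S-0001", "Stability Group", "Product X", datetime(2011, 5, 2))
s.results["assay"] = 99.2
s.completed_at = datetime(2011, 5, 6)
print(s.turnaround_days())  # 4.0
```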

Buy or Build?

Having just finished phase I of my eResume website, I was given cause to ponder a debate that you will hear around IT systems in most industries, but in particular in the pharmaceutical business: do you buy a solution, or build it yourself? My website, as it turns out (and yes, it is very basic), consists of some built elements and some third-party components.

It turns out that this was the best solution all around. While my JavaScript isn't too shabby, it was much easier to locate a library that provided the functionality I needed. At the same time, I built my own style sheets so that things would be positioned the way I liked, and styled to my tastes. I could have downloaded a very complex style sheet, but then I'd have had to figure out what combination of styles would give me what I wanted, and suddenly the project would get to be a bit much. Similarly, my webhosting company recommends a PHP-based form management tool for contact forms and the like. However, there was simply too much functionality that I would have had to spend a couple of hours figuring out, vs. simply downloading one of the flavors of the FormMail Perl script from the web. In the pharmaceutical systems sphere, you run into many of the same challenges. My deepest experience is with LIMS selection and development, so I'd like to reflect on these questions with regard to LIMS.
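
For what it's worth, the core of what such a form-to-email script does is quite small. The sketch below is not the FormMail script itself; the field names, addresses, and mail server are placeholders I've invented for illustration.

```python
# Sketch of what a form-to-email handler does at its core; this is not the
# FormMail script itself. Field names, addresses, and the mail server are
# placeholders invented for the example.

import smtplib
from email.message import EmailMessage

def send_contact_form(form: dict) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Website contact from {form.get('name', 'unknown')}"
    msg["From"] = "webform@example.org"      # placeholder sender
    msg["To"] = "webmaster@example.org"      # placeholder recipient
    msg.set_content(form.get("message", ""))

    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

send_contact_form({"name": "Visitor", "message": "Please update the ministry page."})
```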

The preference, in my experience, has been to opt for the purchased solution. The greater the out-of-the-box match with our operations, the better. Minimal configuration, and no customization. This is based on a sentiment you will often hear, and that I've often repeated over the last few years: that in the pharmaceutical industry our job is the manufacture of drugs, not IT systems. I hope to blog on this more in the future, but it occurs to me that we are also in the business of IT system development. The modern pharmaceutical company can barely operate without robust IT systems. So it is, in fact, a part of our business. We therefore need to be less hesitant to explore a build solution.

Early in my career, working for a small startup, I had limited staff, an even more limited budget (not to mention the limited off-the-shelf tools at that time), and a big job to get done. The natural solution was to engineer and build my own solutions. Those solutions lasted for well over a decade (with a major overhaul of all of them at least once). Because I ran the operations the tools were designed to support, there wasn't the typical requirements headache that goes with system design and build processes. Over time, a member of my team and I attended a course on tech writing so we could develop top-flight owner's manuals, and we subjected the systems to a very rigorous validation process, which helped us tighten up the systems even further. However, even in that environment, the time came to consider switching to more off-the-shelf solutions. The rationale was that these would, in theory, be easier to manage, as we wouldn't be responsible for coding, bug tracking, and fixes. Also, as the supported departments were expanding, and we weren't, our capacity to build new functionality was greatly limited. Of course, we still had limited budgets, even for system acquisition, so we tended to acquire purely off-the-shelf products with very minimal configuration options. Because the solutions were completely off the shelf, they didn't fit our operations very well. I've never seen an off-the-shelf pharma system that did. One system was so unpopular that the "receiving" department (as in, receiving punishment) refused for a couple of years to give up the old system. On one occasion, I was able to identify a system with a significant amount of out-of-the-box functionality, with limitless customization options, and a good price tag. Unfortunately, the fact that we would be the first adopter in our industry made the selection unpalatable to certain key individuals, and the system was never acquired. The alternative we were forced to implement was a narrowly focused solution that was never very popular with its users. When it came time to expand to additional groups, it didn't fit.

In later years, I had the opportunity to be involved in the selection and implementation of systems that ran from customizable to configurable, all with less than optimal success. Certainly, some of the difficulties were with the requirements process. In particular, the lack of willingness to execute business process evaluation, and re-engineer where necessary, is always troublesome. However, there were other, more challenging issues. The highly configurable system had a sound data model and a pretty robust set of business logic. However, the interface was very outdated, and there was literally nothing we could do about it. The interface difficulties rendered the system very unpopular with significant portions of the user community. All of the vast configuration options had to do with business logic and data models, and virtually nothing to do with the interface. The highly customizable system had an extremely complex development environment, and some companies, in order to make things work properly, hired vast teams of developers. Of course, backing out those changes during a system upgrade was nearly impossible.

What was interesting about the system I mentioned earlier, the one with nearly limitless customization options, was that such customization would have been possible with smaller teams of developers, and would have been much easier to accomplish. I think therein lies the sweet spot. Although most pharmaceutical companies have similar systems, they are not identical, and some customization (especially around interface design) is likely warranted. Further, the most ideal situation is a system with a robust API. This would enable the development team to create a layered (wrappered may be a better description) system, where a modification to the underlying system may not result in much, if any, impact to the custom piece. The only remaining danger is for the development and maintenance teams to think that, once delivered, further customization will never take place, so no allowance is made to support that activity. You would think that this is self-evidently an erroneous view, but it is not uncommon. Even predominantly off-the-shelf configurable systems suffer from this belief system.
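
To illustrate what I mean by a layered approach, here is a bare-bones sketch; the vendor class and method names are invented stand-ins, not any real LIMS API.

```python
# Bare-bones sketch of the layered ("wrappered") approach: custom code talks
# only to a thin adapter, so a vendor upgrade that changes the underlying call
# touches only the adapter. VendorLimsClient and its method are invented stand-ins.

class VendorLimsClient:
    """Stand-in for a vendor's API; a real system would call the vendor SDK here."""
    def create_sample_record(self, payload: dict) -> str:
        return "S-0001"   # pretend the vendor returns a new sample ID

class LimsAdapter:
    """All custom applications call this layer; only it knows the vendor API."""
    def __init__(self, client: VendorLimsClient):
        self._client = client

    def log_sample(self, submitted_by: str, project: str) -> str:
        payload = {"owner": submitted_by, "project": project}
        return self._client.create_sample_record(payload)

# If an upgrade renames create_sample_record(), only LimsAdapter changes;
# the applications built on log_sample() are untouched.
adapter = LimsAdapter(VendorLimsClient())
print(adapter.log_sample("QC Lab", "Stability Study 12"))
```

The value is not the code itself but the boundary it draws: the maintenance team has one, small place to absorb vendor changes, which is exactly the allowance for future customization that so often goes unmade.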