Companies invest considerable time, effort, and money when selecting and implementing a major mission-critical computer system. Successfully completing the process can be complicated and frustrating; as a result, it doesn't take much for an implementation to fail. The problem is that the business organization that undertook the implementation is stuck - stuck with the lost time and money, and the inconvenience of not having the system it purchased. The company is not only left without the new system; it is back to using the systems it originally found so inadequate that it committed the time and money for a replacement.
Today, organizations typically implement systems written by established software vendors who have many implementations of their software in similar business or governmental organizations. Implementing a prewritten software system brings with it certain problems and issues; organizations that custom build their own systems have their own set of implementation problems and issues. The approach described in this white paper for turning around a failed system implementation applies equally to recovering a failed prewritten system or a failed custom-written system.
The first step is to determine whether the new system implementation effort is salvageable and what it would take to recover the already-sunk investment and achieve a successful implementation. In my career I have been fortunate to accomplish this for several large business and governmental organizations. This white paper explains the process I have found successful for turning a failed system implementation into a system that is effectively used by the organization to meet its operational objectives.
In this paper I will refer to three specific projects in which I managed the turnaround of failed systems to illustrate particular points:
In talking with system implementers and reviewing literature regarding the implementation of a new system, I frequently find one topic missing - a discussion of the various stakeholders' expectations for the success of the implementation. At the time the decision was made to move to a new computer system, each stakeholder had expectations regarding what the system would do for the company and for their particular business function. It is against those expectations that each stakeholder judges whether the system implementation was, or was not, successful. It is critical that a system implementer have a clear articulation of each stakeholder's expectations before beginning the implementation process.
One pattern I invariably encounter when reviewing project success is how many users are unhappy with processes the new system performs differently (not better or worse, just differently) than the prior system did. Being different may make users believe the processes are wrong.
A mistake almost all inexperienced implementers make when selecting or custom developing a new system is defining the new system's specifications solely around the problems users were having with the old system. Most ignore the features and functions of the old system that are working well, which should also be included as specifications for the new system. Evaluators forget that the new system may perform certain tasks differently than the current system now performs them adequately, and this can cause unexpected problems. Then, when the new system is implemented, users complain that they expected the new system to do those tasks the same way the old system did and, to them, the difference looks like a problem. Users complain that the system "does not work" when it may be working correctly and the real problem is poor training, erroneous expectations, or inadequate operating procedures. Now that the new system is operational, implementers have to address these issues in addition to the other issues that arise.
Another obstacle to establishing and understanding stakeholder expectations is the use of what I call "desk drawer" systems - systems created by users because the old system does not properly perform certain functions. A "desk drawer" system is one in which users track specific data or a process off-line from the main system, without any direct link to the main system and, usually, without the knowledge of system management. In the past these were often off-line paper systems; now they are off-line spreadsheets. Such a system may have evolved because the old system could not stay current, or because data was not processed in a way that reflected the organization's business processes, so users created the "desk drawer" system to process transactions or track critical data well enough to make meaningful decisions. The problem is that implementers often do not know about, or do not consider, these systems when establishing the expectations and specifications for the new system. When the new system is selected and implemented, the "desk drawer" systems are not converted, causing major problems in the integrity or timeliness of the new system - and the conclusion that the new system "doesn't work".
A common mistake implementers make when defining system specifications is to ask only what data the users create, store, use, or report from the main system. The better approach is for the implementer to ask how the users make business decisions, not how they enter or process data. This allows the implementer to work with the users to effectively define the needed business operations and establish user expectations.
For expectations to be properly established, the implementer also needs to identify all stakeholders. Implementers often consider the organization's management (CEO, CFO, etc.) and research how they believe the organization needs to function. However, there are many more stakeholders in the organization. In fact, the key stakeholders are the lowest-level managers and staff who are going to use the system and make it work for them. Below are some examples of stakeholders in turning around failed system implementations for different organizations:
As you can see, every department is impacted by the implementation of a mission-critical computer system. Each department - and, depending upon the roles of the internal team, the different functions within each department - may have different requirements for a system and different expectations of what will make the system successful for the department's information processing needs. Each of these must be considered in planning for the implementation and use of the system.
It is not easy to get people to describe what, from their perspective, will have to happen for a system to be successful. A typical discussion may go:
This is where my experience has paid off in getting the individuals being interviewed to clearly articulate what they mean by "the system works". On occasion I have had different stakeholders define implementation success in terms of competing expectations; on that basis, at least one of the parties is going to be unhappy with the implementation no matter the outcome. This happened during my busing system implementation for the school district. In that case I had to bring both stakeholders into one room, explain the situation, and have them discuss the issue until they reached a mutually agreeable expectation that satisfied both.
At the conclusion of the expectation definition process, I document all of the expectations for all of the stakeholders. I then facilitate a meeting in which I present these as a set of collective expectations for the group and open the meeting for discussion. At this point, each stakeholder understands their expectations and those of the other stakeholders. This avoids many later disputes.
After the stakeholders have accepted the collective set of expectations, the implementation team works with the same stakeholders to convert the expectations into effective success metrics. (As a note, I have often had to use the definition of objective success metrics as the articulation of user expectations.) For the success metrics to be meaningful and effective they have to be: relevant, in that they truly reflect the success of the project in meeting user expectations; measurable, in that they can be quantified; accurate, in that they correctly demonstrate that the project is moving toward success; timely, in that they are a reasonable measure at the time the measure is taken; objective, in that they are not subject to subjective interpretation; controllable, in that the project team is able to affect their results; prioritized, in that certain metrics mean more than others; and non-contradictory, in that two or more success metrics cannot contradict each other.
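The attribute checklist above can be sketched as a simple data record that a team might use to vet each proposed metric before accepting it. This is purely an illustration; the class, field names, and example values are my own, not part of any established methodology.

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    """One candidate success metric and the qualitative tests it must pass."""
    name: str
    target: float              # quantified goal, making the metric measurable
    unit: str                  # e.g. "minutes before bell", "cases/hour"
    relevant: bool = False     # truly reflects a user expectation
    objective: bool = False    # not open to subjective interpretation
    controllable: bool = False # the project team can affect the outcome
    priority: int = 99         # lower number = higher priority among metrics

    def is_acceptable(self) -> bool:
        # A metric is usable only if every qualitative test passes.
        return self.relevant and self.objective and self.controllable

# Hypothetical example loosely modeled on the school-bus project:
route_time = SuccessMetric(
    name="All routes completed before bell time",
    target=15.0, unit="minutes before bell",
    relevant=True, objective=True, controllable=True, priority=1)
print(route_time.is_acceptable())  # True
```

A vague measure like "the system works" would fail this vetting: it has no target or unit, and it is not objective, so `is_acceptable()` returns False until the team restates it in quantifiable terms.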
In most cases creating meaningful success metrics is easier said than done. In both the school district bus system implementation and the managed care system implementation, we found it difficult to convert subjective measures ("it works") into objective ones. In the bus system implementation, it was difficult to determine what comprised an effective school bus route, although there were some easy metrics such as the times by which the routes had to be completed. In the managed care implementation, we had to determine times for having data entered into the system, some of which were beyond the control of the team; this required considerable work to find objective ways to determine whether the project was really successful. For the beer distributor project, the initial success metrics were easy - we had to process a certain number of cases of beer at the end of the conveyor system to the palletizer with a certain case loss rate (breakage). However, we found that the ability to meet the case rate was impacted by issues outside our control, such as keeping the automated warehouse filled with beer cases that could be extracted and building the conveyor structure so that it did not break cases.
Going through this process before the implementation is restarted is critical for thinking through how the system will work, what results to expect, and where those results are affected by issues outside the control of the project. Often, preliminary system design assumptions from the original implementation effort are challenged, forcing issues in the project to be reconsidered. While this takes time, it is very important for all stakeholders to understand how the system will be used and where issues may be found.
What system implementers often fail to consider is that these issues are likely to arise anyway - usually after the system has been implemented and the stakeholders are experiencing the impact on their operations. By then it is often too late to do anything about them, leaving unhappy stakeholders and inefficient systems.
It is also common, in going through this process, to find that stakeholders have to change their expectations and success criteria. However, doing this work now saves having to resolve bigger issues later and avoids recriminations after the project is completed.
I have found that one key to planning the completion of an initially failed computer system implementation is to set the proper starting point. This means determining how much, if any, of the original system implementation can be accepted as completed, and then developing the plan for completing the implementation from that point. Do not minimize the importance of setting the starting point.
Some assumptions may have to be made, but you need a set starting point for the project going forward. That starting point needs universal acceptance from ALL stakeholders as "working" according to the user expectations and meeting the success criteria. If not, the new implementation project is likely to relive some of the mistakes of the past. This may require conducting an acceptance test of the software to validate the functionality of the accepted part of the system, so that all stakeholders agree the system works as expected and provides a valid pickup point for the rest of the implementation.
Effective implementation planning requires a strong understanding of the goal you are trying to achieve. To do that, you need a means of knowing whether your team is making real progress toward completing the implementation. The best way to prepare an implementation plan is to have confidence in the meaningfulness of the success metrics and to build the plan around the tasks needed to achieve those metrics. If tasks are being performed to satisfy metrics that turn out to be irrelevant to the project, go back, reconsider those metrics, and revise them accordingly.
Implementation planning is, in itself, a separate topic for a paper and not the subject of this white paper. Suffice it to say that the project plan needs to include the following attributes:
I strongly encourage my implementation teams to establish sub-success metrics, subordinate to the primary success metrics, for each subtask of the implementation. It is critical that each sub-success metric support the success of its higher-level metric; it is counterproductive for a sub-success metric to be achieved without advancing the overall goal of the project. Again, this requires considerable thought to make sure the metrics support each other up the ladder to overall project success.
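The roll-up relationship described above - a primary metric is achieved only when all of its supporting sub-metrics are achieved - can be sketched as a small tree structure. The class and the example metrics are illustrative assumptions, not artifacts from any of the projects mentioned.

```python
class Metric:
    """A success metric that may be supported by subordinate sub-metrics."""
    def __init__(self, name, achieved=False, sub_metrics=None):
        self.name = name
        self._achieved = achieved
        self.sub_metrics = sub_metrics or []

    def achieved(self):
        # A leaf metric reports its own status; a parent metric is achieved
        # only when every one of its sub-metrics is achieved (roll-up).
        if not self.sub_metrics:
            return self._achieved
        return all(m.achieved() for m in self.sub_metrics)

# Hypothetical hierarchy loosely modeled on the beer distributor project:
project = Metric("Process target case rate at the palletizer", sub_metrics=[
    Metric("Automated warehouse kept stocked for extraction", achieved=True),
    Metric("Conveyor case-loss (breakage) rate under threshold", achieved=False),
])
print(project.achieved())  # False - one sub-metric is still unmet
```

The roll-up makes the counterproductive case visible: a sub-metric can read "achieved" while the parent remains unmet, which signals that the sub-metrics do not yet fully support the higher-level goal.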
It can safely be assumed that the implementation plan will become out-of-date as soon as the implementation begins and events start occurring. Although keeping the plan current is a worthy goal, it is a well-recognized truth that tracking every changing issue in the plan is a monumental task and often is not done.
The key to successful implementation management is to manage to the success metrics. As long as the implementation plan was created with the goal of meeting the project success metrics, major implementation plan changes only have to be made if there are changes in the success metrics. And, by managing the incremental subtasks to the sub-success metrics, the implementation is effectively tracked to its successful conclusion.
During the conduct of the project, events are likely to occur that have the potential to lead the project away from its intended objectives. This requires the implementation team to assess the impact of each event on the project. First, the team needs to assess whether the event will significantly change the expected results of the project. If so, the team needs to determine what the new results may look like. If the impact of the event leads to a different result than originally planned, the team has a decision to make.
A different result is not necessarily a bad thing. First, the team must quantify and articulate the change in the expected result as best it can. Then the team must determine whether the new expected result would be preferred to the original one. If it is, the event was beneficial, and the project plan should be changed to accept the event and the new result. If the new result is not desired, the implementation team needs to define actions to reverse the event so that the original result is achieved.
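The event-assessment decision flow described above reduces to a short branching rule. The function below is my own framing of that flow, with the three action labels chosen for illustration:

```python
def assess_event(changes_result: bool, new_result_preferred: bool) -> str:
    """Decide what to do about a project event, per the decision flow above."""
    if not changes_result:
        return "continue"      # event does not alter the expected results
    if new_result_preferred:
        return "revise plan"   # accept the event and the new result
    return "reverse event"     # act to restore the originally planned result

print(assess_event(False, False))  # continue
print(assess_event(True, True))    # revise plan
print(assess_event(True, False))   # reverse event
```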
Experience on several projects has shown that the approach described in this paper successfully turns previously failed system implementations into successful ones. The process, with some modifications, also works well when undertaking a major project from scratch.
James B. Wener, BSME, MBA is a Business Systems Consultant with over 45 years of experience in successfully managing small through very large projects and implementing a large number of computer systems for healthcare, manufacturing, and distribution organizations. A Management Systems consultant since 1991, Mr. Wener has a significant record of accomplishment. His Information Technology (IT) projects include the implementation and management of a wide variety of application software systems.
©Copyright - All Rights Reserved
DO NOT REPRODUCE WITHOUT WRITTEN PERMISSION BY AUTHOR.