A BETTER WAY TO PLAN YOUR NEXT IT INNOVATION

Large-scale IT projects are highly uncertain and vulnerable to the law of unintended consequences. But the risks can be contained and the opportunities still realized, as long as uncertainty is respected and taken into account. As this author writes, there are practical, proven approaches that allow you to accept uncertainty while maintaining good project discipline.

To study innovation is, to a large extent, to study missteps, failures, false assumptions and outright flops. While learning from failure gets great press in theory, relatively few organizations are good at learning effectively from their failures in practice. Even worse, it’s often possible to see a flop unfolding long before funds are irrevocably committed and people’s hopes and dreams come crashing down with it. Encouragingly, the problems that lead to big flops are not usually the result of a lack of talent, poor management, bad planning or other common excuses for things going wrong. The primary problem is that in the face of highly uncertain conditions, we fund, plan and manage projects as though we had all the facts.

In this article, I will argue that a more sensible and much lower-risk way to plan uncertain, new IT initiatives is to take a discovery-driven approach. Discovery-driven plans assume that you need to learn what the right answers are. A discovery-driven approach to projects encourages learning, redirection and flexibility throughout the course of a project. It limits risk because at any given time you know exactly what your exposure is, giving you the opportunity to stop or to do something differently. I’ll apply what we have learned about projects that fail in general to the specific case of large projects with a significant information technology (IT) component (and today, what large project doesn’t have an IT component?).

As a former IT director, I’ve always been amazed at the number of projects that are begun with the potential to transform a business but so often go dreadfully wrong. While exact numbers are hard to come by, reputable sources report failure rates – even for very high-profile projects – of 50 percent or greater. The numbers can be staggering. Avis is reported to have lost $55 million on a failed ERP system. The U.S. Justice Department’s failed “Virtual Case File” for the FBI cost taxpayers $104 million and left the agency no better off than it was before. Both Denver International Airport and Heathrow’s Terminal 5 suffered traumatic IT problems when they first opened, requiring even more significant investment to put things right. Heathrow’s Terminal 5 baggage-handling system went dreadfully wrong, even after an investment of over $344 million in IT systems. Even technology juggernaut Google hasn’t succeeded in many of its attempts to grow beyond its core search business.

The backdrop: Investing under uncertainty

It’s unavoidable. Any time you innovate – which you can think of as getting into new businesses, developing new processes, entering new markets or otherwise doing something that is new and unfamiliar to your firm – you’re going to have to cope with uncertainty. One way to think about this is to consider that when you are doing something new, you are entering a world in which you will have to make far more assumptions than there are facts available. And unfortunately, people and organizations aren’t great at operating on the basis of assumptions. In some corporate cultures, admitting that you haven’t got the information or don’t know the answers is almost seen as a fatal managerial weakness. In today’s perilous times, it strains credulity to think that most managers could honestly claim to predict what the future holds.

Nonetheless, when one examines the practices companies use to make substantial investments in projects of any kind, the illusion of certainty persists. Consider, for instance, the time-honored practice of applying net present value (NPV) tools to new projects. The idea behind a net present value calculation is sensible: lay out projections for costs and revenues over the lifetime of a project, then discount these back to the present at a discount rate that reflects the cost of the capital used in the project. So far, so good. Yet any leader who has ever had to justify an uncertain project knows that the accuracy of an NPV analysis is only as good as the assumptions that go into judgments about future cash flows. The bigger, riskier and longer-term a project is, the more likely the NPV number is to depart from reality. Moreover, projects with highly uncertain payoffs – for instance, when one is making IT investments to improve organizational capabilities – are apt to look massively unattractive, because the source of future value is so unclear.
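To see how fragile the number is, consider a minimal sketch of the calculation in Python. The cash flows and discount rates below are hypothetical, invented purely for illustration; the point is how much the answer swings when the assumptions move.

    # A minimal NPV sketch. All figures are hypothetical, invented only to
    # show how sensitive the result is to the assumptions behind it.

    def npv(rate, cash_flows):
        """Discount yearly cash flows back to the present.

        cash_flows[0] is the up-front investment (negative); cash_flows[t]
        is the projected net cash flow at the end of year t.
        """
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # A hypothetical five-year project: $2M up front, growing returns.
    flows = [-2_000_000, 300_000, 600_000, 800_000, 900_000, 900_000]

    print(f"{npv(0.10, flows):,.0f}")  # about +543,000 at a 10% cost of capital
    print(f"{npv(0.15, flows):,.0f}")  # about +203,000 at 15%

    # Trim each projected inflow by 25% and the 'attractive' project turns
    # negative: the NPV is only as good as the cash-flow assumptions.
    trimmed = [flows[0]] + [cf * 0.75 for cf in flows[1:]]
    print(f"{npv(0.10, trimmed):,.0f}")  # about -93,000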

Similarly, the correctness of a plan or the skill of a manager is often couched in terms of being “right.” In other words, if what you planned to accomplish happens, you’re a good manager. If something happens that was unexpected, or an activity takes longer, or you learn that the initial design needs to be reworked, that’s considered undesirable. It isn’t surprising, therefore, that in many organizations, “meeting plan” becomes a primary goal – even if new information reveals the assumptions in the plan to be wrong. Couple this with the tendency to fund projects all at once and you can find that you have spent a lot of money and a lot of people’s time before anyone has gone back to check whether the project is still on track.

What’s needed, I’ll propose, are two things: a mindset that makes it acceptable to flag assumptions in the plan and to recognize when new information proves some of those assumptions wrong, and a funding process that encourages re-thinking and re-direction.

Begin by specifying what success really looks like

Develop a short, powerful summary of your business strategy – what it is and how you plan to compete – and make it available to everyone involved with the system design. It should capture the strategy in a nutshell. This will naturally begin to suggest key ratios, metrics and important data flows, and help make sure that any work done is relevant to the outcomes that matter to the business. At a minimum, the summary should answer three questions:

  • What is the architecture of revenue and cost flows in the business?
  • What are the critical ratios that you believe drive business success?
  • What will allow this business to beat the competition?

With these questions understood, next spell out what a successful outcome for a project would be. As obvious as it may seem, a good many systems projects are begun without a clear answer to two simple questions, “Who gets to say what success is?” and “What constitutes a successful outcome?” Before you spend any money at all, make sure that you are clear on these two points.

With a discovery-driven approach, the definition of success determines everything else, because you start with what success should look like and work backward to the minimum number of activities that would drive that success. For instance, a major insurance company determined that an overhaul of its claims operation would accomplish two things: it would improve the efficiency of the people processing claims, and it would reduce fraudulent claims by automating steps that were previously done manually. These two outcomes were clearly understood by key decision-makers, which also gave them a great way to avoid ‘scope creep’ – the tendency for IT systems to grow as more users try to get their particular requirements embedded in the plan.

Given that business strategies evolve over time, you may wish to plan projects with iterative releases, so that people don’t feel they have to get their pet features into the first release; think of it like trains leaving a station rather than one big bang. At any given time, however, your team should be clear on what is “in” as a priority and what is “out,” as the sketch below illustrates.
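A minimal sketch of such a release plan, with feature names invented from the claims example:

    # A hypothetical release plan, 'trains leaving the station' style.
    # Feature names are invented for illustration.
    release_plan = {
        "release_1": ["basic claim intake", "claim status lookup"],
        "release_2": ["fraud-screening rules", "third-party referrals"],
        "release_3": ["multi-risk claims", "supervisor dashboards"],
        "out_for_now": ["mobile app", "customer self-service portal"],
    }

    def is_in(feature, release):
        """A pet feature that misses this train simply catches the next one."""
        return feature in release_plan.get(release, [])

    assert is_in("fraud-screening rules", "release_2")
    assert not is_in("mobile app", "release_1")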

Plan to implement iteratively

The classic way in which systems were designed is what some have called the “waterfall” method. With this approach, the system is scoped out in terms of desired outcomes, and users or analysts prepare specifications. These are then given to the programming team, who create the code, which is then tested by a testing team and put into production. You can see that this is a relatively linear process. But it creates all kinds of problems, beginning with the fact that users and analysts often don’t really know what a system should do until they get the chance to play with it. Also problematic is the fact that coders work from specifications that may not lead to the desired result, and the project can be way down the road to development before people realize it should have been done differently.

A far better approach is to build the system in iterations. Mock up the functionality to get a basic process flow, and then build it out little by little, working intimately with future users so that when the system is finally delivered, it’s exactly what was wanted. Thus, in the case of the insurance system, the first set of analyses looked at sources of inefficiency in the current handling of claims and built beta mockups for users to play with. The important point here is that the beta sites required only the coding of the user interface and some made-up information to get great feedback from users as to whether the new design would in fact make them more productive. The discovery-driven principle at work here is to test as many assumptions as possible for as little investment as possible. Note that this idea is often resisted by technical people, who believe that you need to build the whole system before you can test whether it will deliver. This is not so.
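As a sketch of how little needs to be built, imagine the claims beta as nothing more than an interface over made-up records. The claim data and prompts below are invented; everything behind the screen is a stub.

    # Sketch of a throwaway beta: the screen flow is real, everything
    # behind it is stubbed. Claim records are made up for illustration.

    FAKE_CLAIMS = [
        {"id": 101, "type": "auto", "status": "pre-approved"},
        {"id": 102, "type": "property", "status": "needs third-party review"},
        {"id": 103, "type": "multi-risk", "status": "needs more information"},
    ]

    def next_claim():
        """Stub: the real system would query the claims database here."""
        return FAKE_CLAIMS.pop(0) if FAKE_CLAIMS else None

    def review_screen(claim):
        """The only 'real' code: the interface users actually react to."""
        print(f"Claim #{claim['id']} ({claim['type']}): {claim['status']}")
        return input("Approve, refer, or request information? [a/r/i] ")

    while (claim := next_claim()) is not None:
        review_screen(claim)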

Document key assumptions

As your team plans the system’s next iteration, they will of necessity be making assumptions. The critical discipline here is to write the assumptions down, so that they can be tested explicitly. This goes very nicely with an iterative approach to system development, because each iteration allows you to update the assumptions in the plan. In the case of the insurance system, the planning team made a considerable number of assumptions about how claims processors would work, which turned out to be flat-out wrong. They assumed, for instance, that a common sequence of steps would apply to all claims to be processed. When agents played with the beta system, however, they discovered huge variation in the sequence and types of steps. In some cases, claims were basically pre-approved and could be immediately forwarded, while in others, third parties had to weigh in on their legitimacy; in still others, the claim required information on several different types of insured risks to determine the right outcome.
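One lightweight way to impose this discipline, sketched below with entries invented from the insurance example, is a simple assumption register that each iteration must revisit:

    # A minimal assumption register. Entries are invented, echoing the
    # insurance example above; each iteration updates the statuses.

    assumptions = [
        {"id": 1,
         "statement": "A common sequence of steps applies to all claims",
         "test": "Watch agents work through the beta mock-ups",
         "status": "DISPROVED",  # huge variation in sequence and steps
         "action": "Redesign the flow around claim types"},
        {"id": 2,
         "statement": "Automating manual checks will reduce fraudulent claims",
         "test": "Re-run a sample of historical claims through the rules",
         "status": "UNTESTED",
         "action": None},
    ]

    def open_risks(register):
        """Assumptions the plan still rests on but nobody has tested yet."""
        return [a for a in register if a["status"] == "UNTESTED"]

    for a in open_risks(assumptions):
        print(f"Test before the next iteration: {a['statement']}")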

While doing this work, you may find that organizational processes need to be redesigned to accomplish the project goals. This discovery is often not made early enough in a systems project to redirect the development effort to include more effective processes. It is much better to fix the processes first and automate them afterwards.

Develop key checkpoints and release funds as they are reached

A major difference between a discovery-driven approach and a conventional approach is that with the former, you ask for and receive only enough funding to move the project to the next relevant checkpoint. You’re better off getting just enough funds to reach the next major checkpoint, at which time a mini post-mortem can reveal problems before a lot of expense has been incurred. At that point, the project can be redirected or stopped. In many large organizations, unfortunately, it is so painful to get funding approval that the temptation to try to get all the funding at once is irresistible. Projects can become very large and very expensive before anyone has a good hard look at them.

The solution is to make sure that the funding and other needed resources are allocated, but not allowed to flow until successive checkpoints have been reached. While this does increase the review burden, it dramatically decreases the program’s risk. It also allows you to think through, ahead of time, which assumptions will be tested at which checkpoints. Ideally, move as much learning forward in time as you can while postponing as much fixed-cost investment as possible. As you can see in Figure 1, which gives some hypothetical checkpoints for the insurance system, a project manager knows at any given time how much will need to be invested to move to the next checkpoint. Note that at any checkpoint, the project could be stopped or redirected. You may not know how much the total system will cost, but it should be relatively easy to figure out how much it will take to get to the next checkpoint.

Figure 1: A hypothetical sequence of checkpoints

Number | Checkpoint | Cost to achieve the checkpoint
1 | Assessment kickoff | $5,000 to investigate feasibility
2 | Estimate benefit of more efficient claims handling and better fraud detection (process analysis) | $20,000 consulting project
3 | Specification development for initial prototype system – efficient claims processing | $40,000 systems analysis
4 | Specification development for initial prototype system – fraud reduction | $50,000 systems analysis
5 | User iteration with interface mock-ups | $100,000 users and coders
6 | Define code to go underneath the mock-ups | $100,000 systems analysis
7 | User iteration with interface and underlying functionality | $200,000 users and coders
8 | Data clean-up and conversion | $300,000 users, coders and consulting firm
9 | User training and development of documentation | $200,000
10 | Help desk training and certification | $50,000
11 | User testing | $50,000
12 | System cutover | $100,000

As you can see from this example, the way the checkpoints are structured essentially pushes large fixed costs back until checkpoint #8, by which time the conditions for the project’s success should be well established.
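The funding logic itself is simple enough to sketch in a few lines, using the hypothetical figures from Figure 1: money is committed up front but released only gate by gate, and the running exposure is always known.

    # Staged funding, sketched with the hypothetical figures from Figure 1.
    CHECKPOINTS = [
        ("Assessment kickoff", 5_000),
        ("Benefit estimate / process analysis", 20_000),
        ("Spec: efficient claims processing prototype", 40_000),
        ("Spec: fraud-reduction prototype", 50_000),
        ("User iteration with interface mock-ups", 100_000),
        # ...and so on through checkpoint 12, system cutover.
    ]

    released = 0
    for name, cost in CHECKPOINTS:
        print(f"Next checkpoint: {name} (${cost:,})")
        answer = input("Assumptions holding? Release the funds? [y/n] ")
        if answer.lower() != "y":
            print(f"Stop or redirect. Exposure so far: ${released:,}")
            break
        released += cost
    else:
        print(f"All listed checkpoints funded: ${released:,}")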

Budget more than you thought you needed for cleanup

A final thought on large-scale IT projects: one of the most expensive and intractable parts of the process occurs when people suddenly realize that the data they are working with have to be moved from the old system to the new one, and that it can’t be done easily. I estimate that a good 40 percent of an IT project’s budget can go into just making sure the information in the system is clean, accurate and representative of what it is supposed to describe. The first thing to do is to give some thought to the state of your data before you throw the switch on a new system. Even better, build successive clean-ups into the ongoing project – don’t leave it until the very end and get surprised. What some people do to minimize problems is to move the data to an interim ‘holding pen’, straighten it up there, and only then move it to the new system.
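A sketch of that holding-pen pattern, with an invented field layout and a single illustrative cleaning rule (real migrations involve many of these):

    # The 'holding pen' pattern in miniature. Field names and the cleaning
    # rule are invented for illustration.

    def extract(old_system_rows):
        """Copy raw records into the staging area untouched."""
        return [dict(row) for row in old_system_rows]

    def clean(staged):
        """Straighten the data up while it sits in the holding pen."""
        good, rejects = [], []
        for row in staged:
            row["name"] = row.get("name", "").strip().title()
            if row["name"] and row.get("policy_id"):
                good.append(row)
            else:
                rejects.append(row)  # fix by hand before cutover
        return good, rejects

    raw = [{"name": "  jane doe ", "policy_id": "P-001"},
           {"name": "", "policy_id": "P-002"}]
    validated, needs_work = clean(extract(raw))
    # Only the validated rows are loaded into the new system; the rest
    # stay in the pen until someone repairs them.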

Cause for optimism

While large-scale IT projects will continue to be highly uncertain and vulnerable to the law of unintended consequences, we have learned a considerable amount about how to reduce the risks while still seizing the opportunities that such projects represent. The key lies in recognizing that you need to plan with a different mindset under highly uncertain conditions. The good news is that there are practical, proven approaches that allow you to accept uncertainty while maintaining good project discipline.
