WHETHER TO BET, RESERVE OPTIONS OR INSURE: MAKING CERTAIN CHOICES IN AN UNCERTAIN WORLD

Uncertainty is a business perennial, and making decisions in uncertain times is a staple on the manager’s agenda; it comes with the territory. But decision-making can be made easier, particularly when a manager has a framework for analyzing the uncertainty and determining how he or she should go forward. This author describes a dynamic, highly useful framework that managers can use.

The year was 1959, and the little-known Haloid Company was in a position familiar to many innovators. It had approached IBM with a proposal to invest in an equally little-known new technology for “dry copying.” IBM took a pass, and Haloid, despite the vote of no confidence from the pre-eminent high-technology company of the time, raised capital from other sources. Its first product, the 914 copier, went to market in 1960. Within a decade, Haloid’s bet had revolutionized the way documents were reproduced all over the industrialized world, with the possible exception of the Soviet Union, where it was deemed illegal. From these modest origins, Haloid earned a rare honor for an innovative product: its product’s name became both noun and verb, “Xerox” and “to Xerox.”

As is common for companies, and even entire industries, attempting to ascend the growth curve, there was uncharted territory ahead and hard decisions to be made. Haloid/Xerox made a bold bet in response to the situation. IBM, too, placed a bet by choosing not to play. Deciding how, and how much, of their scarce resources to commit presents a strategic dilemma for even the most experienced managers as markets, technologies and industry structures change, forcing an uncomfortable juxtaposition of certainty and uncertainty. What makes the challenge even greater is that a bold decision up-front can determine the outcome in your favour, as was the case with Haloid/Xerox. Often, recognizing the advantages of a bold up-front decision leads managers to make a pre-emptive commitment rather than take a second approach: creating an option to react after waiting to see how the uncertainty will be resolved.

There are, however, situations where an options-creation strategy is appealing. Xerox had an opportunity to put this approach into practice. Its management was sufficiently farsighted to anticipate that the reproduction of information could be overtaken by technologies beyond dry copying. Xerox’s strategy was not to precommit to any particular next-generation opportunities but to reserve options to participate in the future by staying close to the cutting edge of computing and information technologies. For this purpose, it established Xerox PARC. The investments in PARC were, by no means, a corporate commitment to commercializing specific technologies; they were relatively small, focused investments in R&D, which would provide managers with visibility over the horizon and an option to play in the next round.

The options that Xerox had invested in were, however, (famously) cashed in not by Xerox but by others. From the graphical user interface to prototypes of the personal computer and the Ethernet networking protocol, Xerox’s options laid the foundations for new generations of an “information industry.” This offers a segue into a third approach for dealing with uncertainty, one distinct from both the first approach of making pre-emptive bets and the second approach of reserving the option to commit later. This third approach involves envisioning alternative scenarios for the market and ensuring that, no matter which of the plausible scenarios occurs, the current choice provides insurance against it. This may require additional resources, but that is the insurance “premium.”

One of the outcomes of the new generations enabled by the discoveries at Xerox PARC was that paper documents were increasingly being reproduced not by Xerox’s core product, the photocopier, but by the laser printer. Once again, Xerox had to make a choice without full knowledge of what the future might bring. It had missed opportunities to commercialize the next-generation options from PARC. Could it now afford to repeat the mistake? Could it afford to cannibalize its core product by shifting its focus to laser printers? The company needed a way to assure its minimum market position in either scenario: whether the copier held its ground or the laser printer displaced it in terms of total “page-share.”

In this instance, Xerox bought insurance: an up-front investment that evened out the impact in either scenario. It launched a new product line of multi-function devices that print and copy. The devices were an insurance policy: join the trend towards printers without killing the copier, by binding one to the other. The commitment to a new class of hybrid machines was the insurance premium.

In sum, over its remarkable history, Xerox has had several confrontations with the strategist’s perennial companion: uncertainty. It ran the gamut of strategic alternatives in dealing with it: placing the bold pre-emptive bet, reserving options to commit later, and buying insurance. The three strategies deployed by Xerox over the course of its history span the range of alternatives available to managers considering the path forward in the face of uncertainty.

This raises an important question: How to choose among the different forms of commitment? If a “strategic situation” is a market environment with many uncertainties, a manager needs a framework to diagnose the particular situation and determine which of the three types of commitment applies. I shall offer such a framework in this article; it will help us diagnose the situation and then direct us to the appropriate type of commitment.

Diagnosing strategic situations

Strategic situations can be complex and quite diverse, but they can be characterized by a handful of intuitive criteria that bear directly on how much, and how soon, a strategist must commit scarce resources. Six criteria make up the framework: the motivation to disrupt the status quo; the potential for proprietary pioneering benefits; the need to create a new network rather than use a pre-existing one; the signalling value of the commitment; the barriers to creating limited tests; and the leverage the strategist enjoys across the adopter network. For simplicity, any strategic situation can be “scored” along each criterion on a scale of 1 (Low) to 5 (High).

Once the scoring is done along each criterion, an assessment of the strategic situation emerges. It can then be mapped to the alternative forms of commitment. Let us now consider each of the three alternatives – bets, options and insurance — in turn, and how performance on the various criteria would lead us to each one.
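To make the scoring mechanics concrete, here is a minimal sketch in Python. It is purely illustrative: the criterion labels paraphrase the six criteria of the framework, and the particular 1-to-5 values shown are hypothetical readings, not figures from the article.

```python
# Illustrative sketch of a strategic-situation scorecard.
# Each criterion is scored on a scale of 1 (Low) to 5 (High).

CRITERIA = [
    "motivation_to_disrupt_status_quo",
    "potential_for_proprietary_pioneering_benefits",
    "need_to_create_a_new_network",
    "signalling_value",
    "barriers_to_creating_limited_tests",
    "leverage_across_the_adopter_network",
]

def validate_scorecard(scores: dict) -> dict:
    """Check that every criterion has been scored, and scored between 1 and 5."""
    for criterion in CRITERIA:
        value = scores.get(criterion)
        if not isinstance(value, int) or not 1 <= value <= 5:
            raise ValueError(f"'{criterion}' needs an integer score from 1 (Low) to 5 (High)")
    return scores

# A hypothetical scorecard, scored high on virtually every criterion.
example_scorecard = validate_scorecard({
    "motivation_to_disrupt_status_quo": 5,
    "potential_for_proprietary_pioneering_benefits": 4,
    "need_to_create_a_new_network": 5,
    "signalling_value": 5,
    "barriers_to_creating_limited_tests": 5,
    "leverage_across_the_adopter_network": 5,
})
```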

When is placing a bet appropriate?

A bet is a decisive choice. It generally involves a high up-front investment and an irreversible commitment. There is little room for making adjustments for contingencies. This means that you have to be quite sure of the benefits of making a pre-emptive choice and must have strong expectations about being able to steer the choices of others to follow in the desired direction.

I would argue that the conditions for making such commitments are more favorable when the strategic situation rates “High” across most, if not all, of the criteria in the framework above. The advantages of a pre-emptive commitment outweigh the disadvantages. To see how this might occur, consider the example below.

A case study of a bold bet was the launch of an Internet site and business that was potentially highly disruptive to a particular industry’s status quo. It was not created by an upstart entrant — as was the case in most industries during the frenzied 90s — but by the largest incumbents seeking to act pre-emptively by reinventing their legacy ways of doing business. This was the case of Covisint, an electronic exchange for automotive OEMs and their suppliers.

The automobile supply chain has a pyramid-like architecture, with the automaker at the top. Each tier of suppliers sells to the tier just above it, with little visibility into, or access to, the tier beyond. As a result, demand-side information takes a while to trickle down. Equally important, the pricing between businesses includes multiple layers of mark-ups. This creates inefficiency in the allocation of resources, limited communication among the tiers, and inefficient cost structures for the automakers.

There had been a growing awareness that the Internet could act as an excellent medium to address many of these issues by replacing the somewhat inefficient status quo with non-hierarchical electronic markets. Such markets would migrate all the diverse supplier transactions to a unified online platform. Pricing and allocation of contracts would be done through auctions to ensure unrestricted competition, thus helping simulate a highly transparent and “efficient” market.

Covisint was a creation of the Big Three automakers: GM, Ford and DaimlerChrysler. It was intended to be a gigantic auction and exchange platform for businesses in the automobile industry’s value system. The move was a big bet – an irreversible commitment — in many ways. First, its co-creators were historically bitter industry rivals. The highly publicized launch of Covisint meant that these industry competitors were taking on several key groups, whose support was critical to their own viability: the Federal Trade Commission, the labor unions, and the supplier community. Second, by declaring a commitment to switch to a more efficient market for their supplies, the automakers were creating expectations of lower prices among consumers, thereby fundamentally altering the market dynamics. Third, the structure of Covisint itself represented a deep commitment: it was not set up as a small experiment. The founders had committed money, personnel and prestige to help establish a stand-alone entity and a role model for other industries as well as other automakers.

Covisint would be costly to dismantle; a Pandora’s box had been opened. The fact that several dominant automakers had collaborated to create it meant that they were tying their hands by foregoing the flexibility of independent, unilateral action.

What led to such a bold commitment? Why did the automakers not wait to respond to a third-party entrant establishing such a site, the predominant model by which Internet businesses encroached on traditional business models? Consider the criteria in the diagnostic framework just presented. The situation in the auto industry rated “High” on practically all counts, providing a rationale for placing a big bet:

Motivation to disrupt status quo: The motivation to change was high. The supply chain’s hierarchy meant inflexibility in the automakers’ cost structures, and the many layers of mark-up in the supply tiers mattered: raw materials and parts account for approximately 45 percent of a car’s costs. The technology promised by Internet connectivity appeared, on paper, to cut at least 10 percent of these costs, or roughly 4.5 percent of a vehicle’s total cost, in itself a strong motivation for an automaker to break from the status quo.

Potential for proprietary pioneering benefits: The score on this criterion was expected to be medium to high. Covisint came before several other major industry consortium exchanges. It was recognized at the outset that there would be a learning curve in establishing a marketplace such as this with no precedent. Because of its early start, it would have an opportunity to make mistakes, discover the execution challenges, and find ways to integrate the learning into its development process.

Need to create a new network vs. using a pre-existing network: This, too, scored high. The status quo in the industry consisted of an entrenched set of legacy practices. The system also needed scale to generate liquidity and bring down per-unit transaction costs, so a sufficiently large network of buyers and suppliers was needed up-front. There would have to be large up-front investments in learning to use the new system, in agreements and connectivity among the various suppliers, in sharing data among traditional competitors, and in hiring skilled personnel well versed in Covisint’s software. Once these investments were made, and other suppliers and procurers were similarly invested, the economics of the new system could be very compelling.

Before Covisint, there was no way to connect the suppliers and the automakers in a non-hierarchical way. Participants would be inclined to join the network only if they believed that such a network was inevitable and that it would be in their interests not to be left out. A collaborative commitment from three major automakers was a natural way to establish such a network.

Signalling value: There was also high potential for signalling. Covisint would play a major role in mobilizing wide adoption of the system. The creators of the exchange had sent a credible signal of their commitment, particularly by setting it up as a joint venture among industry competitors. This approach also made it costly to back out, which in turn sent a strong message to the others in the industry about their determination to conduct business primarily on this platform once all the start-up issues had been dealt with.

Barriers to creating limited tests: Covisint scored high on this criterion. A market relies on the volume of transactions and participants for its viability and efficiency. For Covisint, a limited low-risk pilot would capture few of the benefits – and mimic few of the characteristics — of an appropriately scaled, “liquid” market. Thus, a certain threshold level of commitment would be necessary.

Leverage across the adopter network: The auto industry’s strategic situation would score high on this count as well. Recall the pyramid structure of the supply chain. The automakers sit on top of the pyramid; their direct line of influence is to the Tier 1 suppliers, which in turn influence the Tier 2 suppliers, and so on. Each tier would act as a direct channel of influence on the next. The high-profile nature of the announcement and the fact that it would be the joint procurement platform for the three biggest automakers in the U.S. helped guarantee that suppliers would show interest.

When is reserving options appropriate?

In situations that score low on the criteria, it is prudent to defer the decision until the dynamic forces in the market have evolved further and more information is available. Reserving the option to act in the future requires that you put some thought into the “triggers”: events that would suggest a change is needed. The responses to those triggers might include scaling up or scaling down the investment, exiting the market, or creating another option to defer the decision even further.

Consider the example of Microsoft’s approach to entering new applications-software markets. It generally follows the lead of others, often co-opting the bets of smaller players. Intuitively, this would appear odd, since smaller players have less ability to withstand the risks of a bold bet, and Microsoft ought to have the market predominance and assets to take on that role. Instead, it typically waits for a new product to be introduced and pass a threshold at which one of two outcomes becomes likely: either there is a high-potential new business opportunity for Microsoft, or there is a threat to an existing profitable Microsoft product. At this point, it enters the fray and, as history has shown, it enters in a big way. Low scores on our framework for the strategic situations created by each generation of applications software may explain why this choice makes sense.

This approach has been honed over generations of new applications at Microsoft, beginning with its co-optation of the “killer app” that helped bring the PC into wider circulation in the early 1980s: the spreadsheet, pioneered by VisiCalc and eventually displaced by Lotus’ 1-2-3. Microsoft co-opted 1-2-3 and surpassed it with Excel. These were the beginnings of a commitment strategy that persisted even as Microsoft grew into the predominant player in the industry.

Consider our diagnostic criteria in the context of a more recent development: Web services. Once again, a new paradigm was on the horizon, a world where software is no longer purchased and installed on a PC but is, instead, accessed over the Internet as a service, much like cable TV. The pioneering players in the field were Hewlett-Packard, Oracle, IBM, and Sun Microsystems. Predictably, several start-ups, among them BowStreet and WebMethods, were also on the leading edge of this movement. Microsoft was not a pioneer, but it had contingency plans in place and was monitoring developments.

Several early triggers suggested the increased likelihood that Web services would be widely enabled and supported. A once obscure technology, Extensible Markup Language (XML), was emerging as a standard that could hold multiple software tools together. Influential technology evangelists were bandying about stories of compelling applications. In addition, simpler versions of Web services provided by major portals such as Yahoo and Lycos’ e-mail and personal calendar accounts offered some early learning experience.

Finally, Microsoft made its entry in June 2000. It announced its Next Generation Windows Services (NGWS) and a new strategy for enabling Windows for a Web-based environment, dubbed Microsoft.Net. Within little more than a year, it had raced ahead of the others to become one of the principal leaders in the definition of industry standards. Once again, Microsoft had come from behind to co-opt an initiative.

Consider how Microsoft’s strategic situation scored on the various criteria. This analysis certainly helps rationalize its deferred commitment approach:

Motivation to disrupt status quo: The score on this criterion would be low. Microsoft is a highly successful incumbent in the “traditional” paradigm where software is purchased on a one-time basis and installed on the PC for its stand-alone use by consumers. New versions of the software can be purchased and installed in a similar manner. Microsoft incurs a significant amount of risk in actively promoting a migration away from this status quo, which is very profitable in the short-term.

That said, Microsoft cannot be assured that the current paradigm will retain its value. A variety of players are constantly experimenting with alternatives to the traditional model. For this reason, even though it may not be advantageous for Microsoft to actively disrupt the status quo, there is a strong incentive for it to scan the horizon for emerging alternatives and to develop its own position in case one of them shows signs of taking hold.

Potential for proprietary pioneering benefits: The score on this criterion is also close to low. With virtually every software innovation, there is a “canary” that bears the initial risk of development and has a strong motivation to pioneer a change away from the status quo. The canary could be an established competitor, such as Sun Microsystems, motivated by a desire to bypass Microsoft’s dominance, or a start-up with a new application or breakthrough approach. In all of its successful attempts at co-optation, Microsoft has demonstrated that a pioneer’s benefits can be captured by a player with the capabilities to adapt the technology once it has been proven to work and then scale up rapidly.

Need to create a new network vs. using an existing network: The score on this criterion is low as well. Software is costly to develop initially, but once a market-ready version is created, the ability to distribute it and establish a scalable business depends on access to a network of relationships and partnerships in marketing, sales and product development. If you do not have these networks, the incremental costs of scaling up are very high. If you do have them, the incremental costs can be quite low, since software products can be replicated with little additional expenditure. Microsoft has one of the most powerful networks of relationships of any player in the industry.

Signalling value: There is low signal value in Microsoft making early investments in emerging software innovations. With its predominant position, there is very little new information value to making a large investment in promoting an alternative. Even though any significant investment on Microsoft’s part would be a widely read message and would be taken very seriously, the default expectation of most market players would be that Microsoft’s intent would be the preservation of status quo. Pioneering the development of an alternative that displaces the status quo in a very visible way would only serve to confuse the rest of the market.

Barriers to creating limited tests: The score on this criterion is low. The nature of software development is such that testing is not only feasible; it is standard practice. Early versions are released to a limited group of users, and market reactions can be evaluated even within such a limited context. In addition, the availability of earlier generations of products – possibly created by others – provides an alternate channel for testing without having to make a commitment of significant scale.

Leverage across the adopter network: While it may appear surprising, Microsoft’s score on this criterion is also fairly low, despite its formidable market presence. The programming community that acts as the leading edge of developing and bringing software innovations to market has been among the most dedicated in championing alternatives to Microsoft products. This was the case with, say, the development of Java and Linux. Likewise, the early adopters of new Web services are likely to be those looking for alternatives to the traditional PC-centric software purchase model.

When is buying insurance appropriate?

We have so far focused on the polar ends of commitment choices under uncertainty: move now, or reserve the option to move later. High scores along the criteria in the framework rationalize moving now; low scores suggest reserving the option to move later. Indeed, low scores on just the first three criteria are enough to suggest reserving an option. There is, however, a middle ground, which arises when the situation scores “High” on the first three criteria but not on the rest. In such instances, there may be a case for moving early with a relatively sizable and inflexible commitment while finding ways to reduce the downside risks of the different scenarios for market evolution. In such circumstances, the strategist must buy insurance.
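As a rough sketch, and only as a sketch, the decision rule described above can be expressed in Python. The numeric thresholds standing in for “High” and “Low” (4 or above, and 2 or below, on the 1-to-5 scale), and the handling of mixed readings, are my own simplifying assumptions, not prescriptions from the framework.

```python
# Illustrative mapping from a six-criterion scorecard to a form of commitment.
# "High" is proxied here as a score of 4 or 5, "Low" as a score of 1 or 2;
# these thresholds are assumptions made for the sketch.

FIRST_THREE = [
    "motivation_to_disrupt_status_quo",
    "potential_for_proprietary_pioneering_benefits",
    "need_to_create_a_new_network",
]
REMAINING = [
    "signalling_value",
    "barriers_to_creating_limited_tests",
    "leverage_across_the_adopter_network",
]

def recommend_commitment(scores: dict) -> str:
    high = lambda criterion: scores[criterion] >= 4
    low = lambda criterion: scores[criterion] <= 2
    all_criteria = FIRST_THREE + REMAINING

    if sum(high(c) for c in all_criteria) >= len(all_criteria) - 1:
        return "bet"        # "High" across most, if not all, criteria: place the pre-emptive bet
    if all(low(c) for c in FIRST_THREE):
        return "option"     # low on the first three: defer and reserve an option
    if all(high(c) for c in FIRST_THREE):
        return "insurance"  # high on the first three but not on the rest
    return "option"         # mixed readings: the article does not prescribe a rule,
                            # so this sketch defaults to preserving flexibility

# Example with hypothetical scores: high on the first three criteria,
# middling to low on the rest, which this sketch maps to "insurance".
print(recommend_commitment({
    "motivation_to_disrupt_status_quo": 5,
    "potential_for_proprietary_pioneering_benefits": 4,
    "need_to_create_a_new_network": 5,
    "signalling_value": 3,
    "barriers_to_creating_limited_tests": 2,
    "leverage_across_the_adopter_network": 1,
}))
```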

What considerations prevent the automatic purchase of an insurance policy? The main deterrent is that it is an expensive, up-front investment. With any form of insurance, you must pay a “premium”; that is, you need to over-invest relative to the investments that would have proven sufficient in the absence of uncertainty. Applying our criteria suggests why, in some situations, players nevertheless engage in such over-investment.

Consider the example of Sony and its investments, considered excessive by many analysts, in responding to the emerging trend towards networked entertainment. While its name has become synonymous with entertainment, Sony has found its core business environment changing. The delivery of entertainment has increasingly been intersecting with developments in networking technologies, and this has begun to shift the way consumers choose to be entertained, as well as the ways in which content is produced, stored and delivered to them.

In 1995, during this period of heightened anticipation, Sony’s new leader, Nobuyuki Idei, declared his intentions: “We are bringing entertainment into the network era.” Mr. Idei’s vision was founded on a flagship product, the forthcoming generation of Sony’s popular PlayStation game console. It would be, in Mr. Idei’s words, “a challenge to Intel and Microsoft.” It would have standalone playing capability, could double as a DVD player, and could be used as an Internet access device. Yet even though Sony was fully committed to making the collision happen, challenging the likes of Intel and Microsoft with a specialized device presented plenty of risks. Entertainment and networking might come together in other ways.

Sony’s approach amounted to a strategy of buying insurance. Its campaign effectively amounted to studying the various points of entry of network-borne entertainment into the home and then establishing a significant position at each entry point.

The TV set was one such entry point into the home, and a natural place for Sony to start. It was already a leading manufacturer of TV sets; it now needed to expand its role and position itself at the link between the TV and the network. Sony launched a broad-based play: a 5 percent investment in the largest set-top box maker, General Instrument; an 11 percent stake in the leading digital satellite system, DirecTV; and an agreement to manufacture and market Internet terminal devices with WebTV, the leading player attempting to transform the TV into the primary Internet access appliance in the home. It also signed agreements with Spyglass for browser software and invested in developing an operating system, Aperios, which could be used in multiple settings, ranging from set-top boxes to game consoles.

Elsewhere in the home, Intel and Microsoft’s favourite device, the PC, still eclipsed the TV as the consumer’s primary point of access to interactive, network-based content. Sony made a major investment in a new line of ultra-thin laptop PCs with a distinctive design that was bound to attract notice if the PC were someday to morph into an entertainment appliance. The Vaio group that produced the PCs also invested in many audio-visual technologies, such as digital video recording, in preparation for a world of networked entertainment.

Having dealt with the obvious suspects, Sony was still not satisfied that it had covered all the bases. Where else could there be an entry point? It was becoming clear that mobile devices were going to become a significant medium for receiving entertainment from a network. Sony moved into net-ready wireless handsets as well, and over the years signed deals with a disparate group of competing players, such as Ericsson and Nokia. It also invested in other mobile devices, such as network-enabled digital music players and cameras.

You might think that all of this coverage would have provided Sony some assurance that, no matter how the network brought entertainment into the home, it would be right there at the entry point. But what about the interworking of all of these different modes? What if the true value lay in the interconnection: the “home network”?

Sony’s response to this remaining piece of uncertainty was to invest in the home-networking domain as well, co-founding a home networking platform, HAVi, with several other electronics heavyweights. But wait. We are not yet done. All of this coverage resided in a single layer of the networked-entertainment value system: the point, or device, of access to the network. Surely it was possible that the real leverage resided in other parts of the system. In keeping with the insurance mindset, Sony made bets in several of these other parts as well. These included pressplay, an online music exchange sponsored by its music subsidiary in collaboration with Vivendi Universal; a consumer-entertainment Web site created in collaboration with Yahoo!; an agreement to develop a broadband network and browser jointly with AOL Time Warner; and Movielink, a digital movie library created jointly with several Hollywood studios for transmission-on-demand over a network.

All in all, in its virtually end-to-end coverage of the various points of connectivity between entertainment devices and a network, Sony was clearly over-invested. Unlike Microsoft’s wait-and-learn approach, Sony’s portfolio took the form of relatively irreversible bets. Why? Scoring its strategic situation on our criteria traces the logic for why Sony’s position was, indeed, different from Microsoft’s: it did not have enough confidence in its anticipation of a particular endgame simply to place one bet, yet it could not afford to defer its bets either. Its strategy, therefore, was to seek insurance and protect itself from the emergence of networked entertainment on practically all fronts.

Motivation to disrupt the status quo: Sony’s rating on this criterion would be high. With the oncoming advance of broadband applications, it was quite apparent to Sony’s more forward-looking leaders that, if the company did not take steps of its own, the collision of entertainment and networking would take place without Sony having a seat at the table. There were multiple enabling technologies developing and several powerful competitors, such as Microsoft and AOL Time Warner, closing in on the space. The status quo for Sony was also becoming unattractive for its own structural reasons. Sony’s strength was in hardware electronics where margins were shrinking. It was becoming clear that value would migrate away from Sony’s core business.

Potential for proprietary pioneering benefits: The situation would rate high on this criterion. Unfortunately for Sony, the “canaries” in this particular innovation would have been none other than Microsoft or AOL Time Warner. By the time Sony had learned from the data, it would have been too late to mobilize its forces. These canaries, after all, being birds of a very different feather, would have had the clout, reinforced by their incumbent positions in networked markets, to make it difficult for a relative newcomer, even one with Sony’s reputation and resources.

Further, networked applications, particularly in a home context, can have a high switching cost. This confers additional benefits on a pioneer.

Need to establish a new network vs. using an existing one: The score on this criterion would also be high. As observed earlier, Sony’s prime competitors were players such as Microsoft and AOL. They would be the potential beneficiaries of their own existing networks and relationships. Sony would have to establish a network of its own or form an alliance with an incumbent player.

Signalling value: Sony’s situation would rate a medium on this front. It was important for Sony to send a message to the incumbents in the relevant information industries that it was ready to play in their territory and leverage its existing connections with consumers. Even if this meant a significant over-investment in alternative paths of entry, the value of the signal was high.

Barriers to creating limited tests: Sony’s situation would score medium to low on this criterion. On the one hand, consumers’ evolving entertainment behaviour can be, and is, tested through pilots, experiments and even simulated experiences. On the other hand, network effects and cultural shifts made it likely that such tests would not be truly predictive of market reaction in a scaled-up, realistic setting. The “softer,” quirkier aspects of consumer adoption behaviour with regard to entertainment are somewhat more challenging to simulate.

Leverage across the adopter network: Sony’s situation would score low on this criterion. Although it had a recognized consumer products brand name, Sony had limited leverage on the decision-making system in a networked environment.

The mixed scorecard for Sony suggests that it could not afford to fall behind, and yet it was not in a position to eliminate the substantial uncertainties. To play the game, it had to take out insurance.

Many find uncertainty unsettling. Uncertainty in an increasingly networked world is especially unsettling, since causes and their consequences are connected in nonlinear ways. Given that uncertainty cannot be eliminated, it is preferable to acknowledge its presence and develop asymmetric advantages by making superior strategic commitments.

I have offered here a framework for formulating a “commitment policy” and for deciding when it is best to make a pre-emptive bet, reserve an option, or buy insurance. When it comes to making certain choices in an uncertain world, one size does not fit all.