Some managers believe they can comprehend a complex situation by reducing their analysis to a few simple rules. But simplification can be dangerous and costly. Instead, as these authors describe, managers can address complex situations with appropriate, and sometimes counterintuitive, practices — and a hefty dose of humility.
Albert Einstein once said, “Make everything as simple as possible, but no simpler.” Unfortunately, executives forced to make decisions in highly complex contexts can easily fall victim to excessive simplification. A common definition of complexity tells us that it is “the quality of being intricate and compounded.” Complex systems work in ways that often cannot be anticipated in advance. The human brain, however, has severe limits on its processing capacity. In a now-famous study, the psychologist George A. Miller concluded that the best our brains can do is to process a rather limited number of elements at one time, what he called “the magical number seven, plus or minus two.” As we try to comprehend more than about seven interconnections or elements at once, the ones we were thinking about beforehand are pushed out of short-term memory. If that weren’t enough, we are also thrown off by interruptions: being interrupted (by an incoming phone call or email, say) can disrupt cognitive processing for up to 25 minutes. Juxtapose this cognitively limited, interruption-laden world of the executive with the need to confront and correctly interpret a complex situation, and it becomes clear that the task will be challenging. In this article, we reflect on what we know about how complexity can bedevil executives, and how some of its worst consequences might be remedied.
How we often go wrong when faced with complexity
The Sensemaking Paradox
By now, most executives are familiar with a popular short video clip. It shows a group of people – some in white t-shirts and some in black – passing basketballs to one another. The audience is instructed to count how many passes the players in white (or black) shirts make. In the midst of the play, a person dressed in a gorilla suit walks through the group of players, stops to pound his chest, and walks off again. Intent upon counting passes, many observers fail to see the gorilla at all! The lesson, of course, is that when we are focused on one thing, we tend not to take in information about other things. This is emblematic of what some academics call the “Sensemaking Paradox.” The reality is that to understand any situation at all, we have to make sense of it. We do this by imposing filters on the noisy signals of messy reality. Filters serve the purpose of telling us what is important and what we should pay attention to.
Sensemaking is vitally important – without it, we would have no framework for viewing and understanding the world. Sensemaking is also, however, a potential trap. As we filter information through the lens of our existing experiences, key interrelationships and new pieces of data may be missed, leaving us with a poor interpretation of reality. For instance, engineers at fabled technology powerhouse Sony had for decades focused on technologies that gave the firm a competitive edge in the manufacture of compact discs, mini-discs and, today, Blu-ray discs. The dominance of disc-based technology, however, led many in the firm to reject new technologies that subsequently became important, such as content tied to a hard drive (the technology used by Apple in its iPod) or content traveling to users via networks (as in ‘on demand’ movie and music services).
Unintended consequences
One area in which sensemaking consistently lets us down has to do with unintended consequences. Like the observers of the basketball players in the video, we are so focused on the few things we’re trying to keep track of that we miss other, equally important phenomena until it’s too late. Unintended consequences tend to emerge in complex situations because interdependent actions and reactions arise beyond our immediate field of view. Many are negative. The desire to raise the living standards of parliamentarians in the United Kingdom without a politically dangerous pay increase led to the idea of allowing them to be reimbursed for living expenses. The reimbursement program evolved to the point at which politicians were claiming for such manifestly non-public purposes as moat repairs, dog food and wide-screen television sets, setting off a public uproar when the records were released. Unintended consequences can also yield unexpected positive results. Swine flu scares, which shut down public schools in the United States for days or weeks, proved to be a boon to the adoption of e-learning technologies; as one observer put it, “Just because the campus is closed doesn’t mean learning has to come to a halt.”
A particularly virulent form of unintended consequences occurs when decisions that are individually rational lead to collective negative outcomes, or even outright disasters. For example, the global financial meltdown of 2008 can be traced to incentives for individual action that increased leverage and risk exposure within a system that was not structured to buffer or contain those risks once they were aggregated. Many businesses have experienced disappointments when they failed to see the collective consequences of individual actions. For instance, when the market for Winchester hard drives appeared to be attractive, over 130 firms entered it, each looking for “just” five percent of the market (a collective claim on more than six times the total demand). As of this writing, the same phenomenon is unfolding in the emerging market for LCD TVs, with six or seven leading firms combating incursions by dozens of new entrants, many from low-labor-cost markets.
Rare events
A second challenge to our sensemaking skills occurs when events are rare. Without a recurring pattern to observe, people find it hard to truly grasp the implications of a rare event and to mount an appropriate response. Natural and man-made disasters that disrupt systems fall into this category. Hundred-year storms (such as Katrina) and truly violent volcanic eruptions, such as that of Iceland’s Eyjafjallajokull, are, thankfully, not common. The dilemma is that complex systems affected by rare events often respond in unexpected ways. Because volcanic eruptions are relatively rare and typically much milder than the one in Iceland, air-traffic systems had been able to cope with them, in effect, by not coping with them: they practiced simple avoidance, directing flights around the volcanic ash. When the main air-traffic lanes between Europe and the United States were affected, however, the system had only one response: a complete shutdown. The challenge now is to adjust the practices of the airline industry to a new reality, namely that the likelihood of an aircraft encountering high-altitude ash has increased, a reality for which aircraft were not designed.
Quantification
Of course, some of us supplement whatever heuristics and lessons we have learned from our own experiences with statistical or modeling capabilities. These have had formidable success in the worlds of design and engineering, but as decision aids they often create as many problems as they solve. Those of us trained in rational decision-making models are familiar with the Law of Large Numbers, which is at the heart of modern organizations’ reliance on statistical analysis. A close relative of the Law of Large Numbers is the Central Limit Theorem. It tells us that when enough independent observations are combined, their averages tend toward a “normal” distribution, the familiar bell-shaped curve beloved of college statistics professors. Get enough observations, and individual idiosyncrasies wash out, yielding a predictable curve for the population as a whole. This insight leads many to build systems whose premise is that normal distributions will yield predictable results.
The problem, of course, is that many phenomena do not conform to the Central Limit Theorem. People constantly forget that the percentages in the bell curve apply to the whole population and are therefore not good predictors of individual outcomes. In many cases, there are not enough observations for the bell curve to take shape. In others, distributions are not bell-shaped at all, but take some other form, often with far fatter tails. And finally, in many situations our interest lies in the outliers, not in the central tendencies. Why are we intrigued by stories about Bill Gates or the founders of Google? Precisely because they are so far from the average of a normal distribution of success. Why did the models underlying financial meltdowns, from the Long-Term Capital Management fiasco to today’s financial crisis, not prevent disaster? Because reality behaved in ways that lay beyond the parameters of the financial models used to understand it.
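To see why this matters, consider a minimal simulation sketch in Python (all numbers and distributions here are illustrative assumptions, not drawn from any real market data). It compares a world that really does follow the bell curve with a heavy-tailed world that looks similar on average but produces extreme outcomes far more often than a normal model predicts.

```python
# A minimal sketch: why bell-curve assumptions understate extreme events
# when reality is heavy-tailed. All figures are illustrative.
import random
import statistics

random.seed(42)
N = 200_000

def student_t(df: int) -> float:
    """Draw one heavy-tailed observation (Student-t) built from normals."""
    z = random.gauss(0.0, 1.0)
    chi_sq = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / (chi_sq / df) ** 0.5

normal_world = [random.gauss(0.0, 1.0) for _ in range(N)]   # the model's world
heavy_world = [student_t(df=3) for _ in range(N)]           # a heavy-tailed "reality"

def tail_share(xs, threshold):
    """Fraction of observations more extreme than +/- threshold."""
    return sum(abs(x) > threshold for x in xs) / len(xs)

for name, xs in [("bell-curve model", normal_world),
                 ("heavy-tailed reality", heavy_world)]:
    print(f"{name:>22}: mean = {statistics.mean(xs):+.2f}   "
          f"P(|x| > 4) = {tail_share(xs, 4.0):.5f}")
```

On these assumptions the two worlds have nearly identical averages, yet “four-sigma” surprises turn up hundreds of times more often in the heavy-tailed one; that is exactly the kind of event a bell-curve model will dismiss as nearly impossible.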
Irreversibilities and network effects
A further characteristic of complex systems that often befuddles observers is that they are prone to irreversibilities and network effects. Some things that happen early in the evolution of a system have imprinting effects that can never be reversed – a typical setting for this is when there are major network externalities that are hard to predict. The ubiquitous QWERTY keyboard is a striking example. Think about it – a keyboard layout deliberately designed to slow down rapid typing not only became the standard in typewriters, but in word processors, computers and now even the tiny QWERTY arrangements on hand-held devices such as BlackBerrys. Economist Paul A. David uncovered three features of the production system that helped ‘lock in’ the QWERTY standard during its spread in the 1890s: technical interrelatedness, economies of scale, and quasi-irreversibility. Whereas the scale advantages would not have been so hard to predict – at least once a standardized production system and sufficient demand were available – the other two are network externalities whose interrelationships and resulting causal effects cannot be readily predicted in complex systems. In a classic ‘competence trap,’ once sufficient numbers of typists had become accustomed to the QWERTY layout, no employer wanted to undertake the bother and expense of retraining them, no matter what the proposed efficiency advantages might be.
The challenges to our sensemaking capabilities when confronting complex situations are, in short, formidable. Unintended consequences, rare events, the limits of mathematical modeling, and unexpected, irreversible effects are almost guaranteed to challenge the skills and intuition of even the most accomplished strategist. We’ll turn next to practices that can be used to mitigate these problems.
Facing down complexity
Simple decision rules, structures and relationships, as we have suggested, are not likely to be effective when the task at hand involves making decisions in the context of complex systems. Ironically, many of our most embedded management practices – such as designing for optimization and efficiency – only exacerbate the risk of things going wrong at a systems level. Somewhat counter-intuitively, the most robust complex systems are often not designed for optimization. They may in fact embed seemingly sub-optimal features, such as redundant operations, multiple paths and substitute components. To recognize the value of these structures, it is worth considering two questions. The first is whether an investment allows the risks and decisions involved in a complex situation to be spread out over time. The second is whether interdependence between components can be better managed by the way they are connected, which you can think of as ordering complexity in space.
Buffers of time and space
Building in time delays yields both more time to respond and a better chance that additional information will be available by the time a commitment has to be made. Many organizations have learned that investing resources to gain time is worthwhile. In emergency medicine, for example, the practice of triage, in which patients are prioritized according to the urgency of their needs, consumes medical resources that could otherwise have been applied to immediate treatment, but it spreads the period for action in a way that benefits the system as a whole. While it means that less critically injured patients may have to wait to be cared for, the performance of the system as a whole – measured by providing the most care to the greatest number of injured people – is improved.
The interdependency, and hence the vulnerability, of a complex system can also be reduced by re-thinking how different system elements are linked together. When the amount of interdependence is reduced, so is the vulnerability of the system to a failure or problem in any one part. Redundant but non-interdependent parts of the system can then substitute for one another in times of need. Steel manufacturer Nucor, for example, operates with a divisional structure in which many operations are duplicated across divisions. While this redundancy may seem inefficient, it allows personnel from one part of the company to pitch in and help when problems affect another division. A similar logic applies to systems that are designed to be modular. When desired functionality is achieved by linking modules together rather than by designing a single, comprehensive solution, a failure in one module does not necessarily mean the system fails, and the design retains greater flexibility should conditions change.
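The same principle can be sketched in software terms. The fragment below is a hypothetical illustration (the supplier functions and the simulated failure are invented for the example): instead of one tightly coupled, “optimized” path, a task can be handled by any of several loosely coupled, substitutable modules, so a local failure does not cascade into a system-wide one.

```python
# A hypothetical sketch of redundancy and loose coupling: independent,
# substitutable modules mean one failure does not take down the whole system.
from typing import Callable, List

def primary_supplier(order: str) -> str:
    raise RuntimeError("primary line is down")      # simulated local failure

def backup_supplier(order: str) -> str:
    return f"order '{order}' fulfilled by backup"   # redundant capacity

def sister_division(order: str) -> str:
    return f"order '{order}' fulfilled by sister division"

def fulfill(order: str, modules: List[Callable[[str], str]]) -> str:
    """Try each substitutable module in turn instead of relying on a single,
    tightly coupled, 'optimized' path."""
    failures = []
    for module in modules:
        try:
            return module(order)
        except Exception as exc:
            failures.append(f"{module.__name__}: {exc}")
    raise RuntimeError("system-wide failure: " + "; ".join(failures))

print(fulfill("steel coil #17", [primary_supplier, backup_supplier, sister_division]))
# -> order 'steel coil #17' fulfilled by backup
```

The redundancy looks wasteful on paper, but the system as a whole keeps working when any single component does not.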
Addressing oversimplification in sensemaking
The challenge of effectively navigating the paradox of sensemaking (in which to understand a system means simplifying how you comprehend it) is to achieve the gains of simplification without its dangers. One widely used approach is to make simplifying assumptions explicit as assumptions, and to give organizational members the right to challenge them. Without a conscious effort to label assumptions as such, an organization can get a long way down the wrong track before anyone notices the need for a course correction. This is the insight behind a technique called “Discovery Driven Planning,” which seeks to help executives make decisions even in the face of uncertain outcomes. The technique requires that assumptions be tested frequently, at key checkpoints, as new information becomes available. Because complex systems often behave in a non-linear fashion, stopping frequently to re-assess is more valuable than planning to achieve targets specified in advance, before much information is known.
At software maker SAP, for instance, a major business risk has emerged in the form of a threat to its core Enterprise Resource Planning (ERP) software from so-called Software as a Service, or SaaS. In 2007, SAP executives announced an ambitious plan to launch a software-as-a-service offering targeted at small and medium-sized enterprises. Called “Business ByDesign,” the software was to create new growth from a customer segment that SAP had traditionally not served. In 2008, the company stated as one of its strategic goals for 2010 “offering a comprehensive solution portfolio for small businesses and midsize enterprises, including SAP Business One, SAP Business ByDesign, and SAP Business All-in-One.” Unfortunately for SAP, the assumptions underlying the launch of Business ByDesign did not hold, in particular the assumptions about cost and overheads. A post-hoc analysis suggested that SAP executives ‘made sense’ of the new business essentially by importing the assumptions prevalent in their core business, for instance, that offers needed to be heavily engineered and comprehensive. The result is that Business ByDesign is now being completely reworked, with entirely different assumptions, after a rather public and embarrassing failure to meet its heavily promoted goals.
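What an explicit assumption register can look like is easy to sketch. The example below is hypothetical (the assumptions, figures and tolerances are invented), but it captures the core discipline described above: write each assumption down and re-test it against observed reality at every checkpoint, rather than waiting for the plan to fail.

```python
# A hypothetical sketch of a Discovery-Driven-Planning-style assumption
# register: every key assumption is explicit and re-tested at checkpoints.
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    assumed: float      # value the plan depends on
    tolerance: float    # how far reality may drift before we re-plan

    def check(self, observed: float) -> str:
        drift = abs(observed - self.assumed) / self.assumed
        status = "OK" if drift <= self.tolerance else "RE-PLAN"
        return f"[{status}] {self.name}: assumed {self.assumed}, observed {observed}"

# Illustrative assumptions for a new offering (all numbers are made up).
register = [
    Assumption("cost to serve one customer ($)", assumed=120.0, tolerance=0.15),
    Assumption("conversion rate of trials (%)", assumed=8.0, tolerance=0.25),
    Assumption("months to first revenue", assumed=6.0, tolerance=0.30),
]

# Checkpoint: compare the plan to what has actually been learned so far.
observed = {"cost to serve one customer ($)": 310.0,
            "conversion rate of trials (%)": 7.1,
            "months to first revenue": 9.0}

for a in register:
    print(a.check(observed[a.name]))
# Any RE-PLAN line is a signal to revisit the plan now, not at year end.
```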
Even documenting and testing assumptions, of course, will not improve sensemaking without a diversity of inputs into the decision process. As a general rule, the more complex the situation to be understood, the more there is to be gained both by increasing the amount of communication the organization uses to make sense of it and by increasing the diversity of perspectives brought to bear on it. As an executive, it is practically impossible to over-communicate – indeed, recent studies of failures in strategy implementation have consistently found that failure to communicate with those implementing the strategy is a significant source of execution problems. Further, as psychologist Irving Janis famously documented in his groundbreaking study of “groupthink,” high-quality decisions almost always result from the clash of different perspectives, which explore more angles of a complex situation than would be possible from a single point of view.
Prevention versus resilience in response to unintended consequences and rare events
Political scientist Aaron Wildavsky had a hugely important insight into systems at risk. Such systems, he proposed, can be designed to prevent potential dangers or alternatively to cope with whatever outcomes might occur. Prevention is targeted at keeping negative unintended consequences from ever occurring. Resilient organizations, in contrast, are designed so that when and if unintended negative consequences arise, the organization can mount an effective response. The quandary is that in organizational design, processes that emphasize prevention often dominate, thus creating under-investment in the adaptive processes and skills that might create resilience.
Wildavsky concluded that the strategies decision makers develop in anticipation of a threatening event are heavily biased against risk and failure. Indeed, they may go so far as to ban all activities that could create potential harm. The dilemma is that in such a situation, positive discoveries and outcomes can also be hindered. To see how this works in business, one has only to look at the behavior of incumbents in an industry in the face of potential threats to that industry. Rather than exploring what the threat might look like and devoting resources to understanding it proactively (a pro-resilience response), executives often throw up the barricades and try to prevent the threat from gaining momentum. Eldridge Reeves Johnson, the CEO of Victrola manufacturer Victor Talking Machine Company, famously told a new employee, “Young man, if you want to succeed in this company don’t even mention the word radio.” Victor was, of course, swallowed up by the Radio Corporation of America some years later, unable to mount an effective, resilient response to the threat of the new technology. Today, the responses of the music, newspaper, television, radio, book and retail industries to the unprecedented changes digitization has wrought in their markets often take on the flavor of trying to prevent the outcome rather than investing to adapt to it. The real challenge for an executive is that focusing on preventing outcomes leaves you at risk of being unable to adapt when the inevitable occurs.
Anticipation and prevention can be effective when the sources of risk and failure are predictable. When they are not, investment in resilience is essential. The dilemma is that in many organizations, entrenched processes militate against investing in resilience. A pervasive anti-failure bias leads companies to resist investing in uncertain initiatives, while long-term success can lead to what researcher Danny Miller has called the “Icarus Paradox,” the widespread phenomenon of companies bringing about their own sudden demise after long periods of success, typically because they were unable to respond to unanticipated threats. The consequence is that essential variety gets sucked out of organizations: the employees interested in working on novel and creative approaches (by definition, those best able to help build resilience in the face of unintended consequences) are the most likely to leave. They are also the most likely to perceive looming threats on the horizon, and often the least likely to be heard.
The importance of shared values to action in the face of the unexpected
Ironically, in the face of complex and unexpected events, many executives resort to micromanaging. They assume that imposing direct supervision and rules on workers will make performance more reliable. In highly predictable settings, such Taylorism can work extremely well. When the situation becomes unpredictable, however, more rules and tighter supervision can actually sow confusion among the people entrusted with executing the strategy, because the rules can’t possibly keep up with unfolding reality. Consider, for example, a common experience in a retail environment – customers seeking to return a product. In many stores, returns are governed by a virtual thicket of rules, placing store employees in the awkward position of trying to judge which rule applies to the customer’s request. The stress of making such judgments was recently identified as a major source of dissatisfaction for sales people in retail operations. Contrast this with the policy in place at retailer Nordstrom. The company’s corporate policy is famously captured in a single phrase, as described by a former employee:
Use good judgment in all situations. There will be no additional rules.
Lest one think that this gives employees a license to do whatever they want, nothing could be further from the truth. Nordstrom rigorously selects, trains, and mentors its people, inculcating in them a clearly shared set of values that can be used to make decisions. When the environment is unpredictable and complex, shared values provide directional guidance while facilitating a creative, adaptive response by employees. To illustrate how this works in the case of customer returns, Nordstrom’s customer service became famous when an employee at an Alaska store graciously accepted the return of a set of tires – a product the store had never sold. The story, now a retail legend, illustrates how shared values regarding customer service can align the goals of the organization with the unpredictable day-to-day responses its employees need to be able to summon.
The well-known sociologist Charles Perrow terms such shared values “premise controls”: although you cannot possibly develop rules to cover every conceivable contingency in a complex situation, you can create a set of widely shared and well-understood premises that will inform the actions of individuals when they encounter the unexpected. Shared values can also facilitate organizing to limit spatial interdependence. As we were told in an interview with the Chief Information Officer of a very fast-paced, low-margin distribution operation:
A company with margins like ours, you would think we should be organized efficiently. We’re not. We’re distributed. The Specialty group has their own IT organization and their own President. We have regions that are autonomous. If you look at it from the outside in, you would say “that isn’t a cost-effective way to organize.” … Our CEO is a strong believer that centralized control from a single entity is a bad way to run an organization.
What makes it work is that I run organizations and build organizations based on shared organizational values driving a purposeful culture. We are very aligned with the notion that a strong culture and a strong set of values allow you to need less control. Great leadership is about giving up formal control and giving up the power.
Counterfactuals and triangulation to combat the risks of quantitative models
As we noted, the dilemma of using quantitative modeling to understand complex situations is that things may occur beyond the boundaries anticipated in the models. Two decision-making devices that can provide insight into the limits of the models you use are counterfactuals and triangulation.
Counterfactuals can point to important clues in dealing with complexity – especially when trying to figure out the interaction of elements in the environment, or the chain of events that leads to a particular unexpected or rare outcome. Sociologist Andrew Abbott, for example, has posed a series of counterfactual questions – of the form “what would have happened if…?” – to probe how chains of events might have unfolded differently.
Social scientists use counterfactuals to improve their arguments about what did happen. They could be equally useful in business settings, especially when trying to build models to grasp phenomena such as network externalities, irreversibility, and path dependence. Consider the question: “Would the iPod be a market leader if Apple had not created iTunes?” Or: “Would the iPad launch have been successful if Apple had not had thousands of applications already available for use on the new platform?” The point of analyzing counterfactuals is to better understand the interaction of elements in complex environments. As in the QWERTY example discussed above, the relationship between some elements will not be obvious – and this is where counterfactuals can be especially useful.
Triangulation is another simple technique that researchers sometimes rely on, and it can help organizations manage complexity. Rather than modeling data from just one source, you triangulate – get information from different sources and see whether it leads you to the same conclusion. Triangulation is not simply a matter of gathering more data but of gathering different types of data; it implies looking at an object from two or more perspectives. Organizational theorist William Starbuck emphasizes that triangulation is especially useful when the perspectives are based on different levels of analysis and use data aggregated at different levels – for example, data gathered from individual employees talking to individual customers, alongside aggregated sales figures for individual stores or for a whole region gathered over a three-month period.
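As a simple illustration, the sketch below (with invented data) triangulates one question – whether customers are satisfied – from two levels of analysis: individual comments collected by employees, and aggregated store revenue. When the two sources agree, confidence rises; when they disagree, the disagreement is itself the finding.

```python
# A hypothetical sketch of triangulation: the same question examined with
# two different kinds of data, at two different levels of analysis.

# Source 1: individual-level data -- comments employees heard from customers.
comments = ["loved the service", "checkout was slow", "great selection",
            "could not find staff", "checkout was slow again"]
negative = sum("slow" in c or "could not" in c for c in comments)
individual_signal = "concern" if negative / len(comments) > 0.4 else "healthy"

# Source 2: aggregated data -- quarterly revenue per store, in $ thousands.
store_revenue = {"Q1": [410, 380, 455], "Q2": [430, 395, 470]}
q1, q2 = (sum(v) for v in store_revenue.values())
aggregate_signal = "healthy" if q2 >= q1 else "concern"

if individual_signal == aggregate_signal:
    print(f"Both sources agree: {individual_signal}")
else:
    print(f"Sources disagree (individual: {individual_signal}, "
          f"aggregate: {aggregate_signal}) -- investigate before concluding")
```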
Invest in ‘real options’ to address irreversibility
Real options are directed at perhaps the most important long-term investment in a manager’s arsenal: learning. An option is a relatively small investment in a business that creates the right — but not the obligation — to make a further investment later on. The goal is to contain risk by limiting your downside, while maximizing the value you can capture on the upside. When you have options tied to a portfolio of small investments over time, you also have the option of keeping “your moves modest and low risk until you have reduced the most significant uncertainties you face.”
An important component of real options reasoning has to do with the effective management of failure by containing costs and risk, while not focusing too much on failure rates. The idea is not to avoid mistakes, but to make them cheaply and early in the game.
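A minimal numerical sketch of this logic follows, with purely hypothetical figures; for simplicity it also assumes the small first-stage probe fully resolves whether the opportunity is real. Committing everything up front exposes the full investment to failure, whereas staging the same investment and retaining the right to walk away caps the downside at the cost of the probe.

```python
# A hypothetical expected-value sketch of real options logic: stage the
# investment and keep the right (not the obligation) to continue.
P_SUCCESS = 0.3          # chance the opportunity turns out to be real
PAYOFF = 100.0           # value captured if it succeeds ($M)
FULL_COST = 40.0         # cost of committing everything up front ($M)
PROBE_COST = 5.0         # cost of a small first-stage probe ($M)
SCALE_COST = 35.0        # cost of scaling up after a positive probe ($M)

# Strategy A: commit the full investment immediately.
ev_commit = P_SUCCESS * (PAYOFF - FULL_COST) + (1 - P_SUCCESS) * (-FULL_COST)

# Strategy B: buy the option -- probe first, scale up only if the probe
# succeeds, abandon (losing only the probe) if it does not.
ev_option = (P_SUCCESS * (PAYOFF - PROBE_COST - SCALE_COST)
             + (1 - P_SUCCESS) * (-PROBE_COST))

print(f"Expected value, commit up front : {ev_commit:6.1f}  (worst case -{FULL_COST})")
print(f"Expected value, staged option   : {ev_option:6.1f}  (worst case -{PROBE_COST})")
# The upside in the success case is the same, but the option caps the downside
# at the cost of the probe -- mistakes are made cheaply and early.
```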
Notice how this logic applies to our earlier point about developing resilience versus anticipation. In fact, you could say that real options thinking is an anticipatory measure that also provides your organization with coping skills, increasing resilience through the effective management of investments that can go wrong. McGrath & MacMillan suggest the following guidelines for the effective application of real options thinking:
- Make sure all investments have high upside potential—if you do succeed, the success will be worthwhile.
- Make sure the investment required to determine the potential is relatively small.
- Make sure that you can stop making further investments.
- Invest in a portfolio of ideas.
- Stage and sequence funding so that you review the investment regularly across time.
This logic also underlies the pursuit of “small wins” – modest, self-contained initiatives whose outcomes can be observed quickly. When people initiate small-scale projects there is less play between cause and effect; local regularities can be created, observed, and trusted; and feedback is immediate and can be used to revise theories. Events cohere and can be observed in their entirety when their scale is reduced.
In effect, small wins are useful in minimizing the misattribution of causal effects, a very common error committed by people trying to make sense of complex environments. Small wins also require less coordination and fewer resources to execute, and they are therefore more resilient to changes in the environment. Because organizations do not have to commit significant resources to accomplish small wins, irreversibility becomes less of a problem, freeing subparts of the organization for recombination and reconfiguration.
Tempting though it is to wrap up our reflection on decision-making for complex situations with a few simple rules, we will valiantly resist. Instead, we leave you with the thought that even complex situations can be addressed with the appropriate, sometimes counterintuitive, practices, and a hefty dose of humility.