
 

In business, as in war, strategic surprises can prove devastating. But operating at a permanently high level of alert carries its own potentially damaging costs. Savvy managers can occupy a safe and effective middle ground.

History records many instances of deadly strategic surprises. Hitler’s invasion of the Soviet Union in 1941. Japan’s attack on Pearl Harbor that same year. The September 11, 2001, terrorist attacks on New York and Washington.

Fatal strategic surprises can occur in business as well, when customers or clients switch suddenly from cooperative to predatory behavior.

Like governments, businesses try to defend themselves through deterrence, and by constantly gathering and interpreting information and intelligence. But guarding against strategic business surprises is difficult, because they are relatively rare. In any given relationship they are likely to happen only once. What’s more, in order to estimate the risk of a strategic surprise one must engage in the tricky task of focusing on behavior that is likely to signal opportunistic intent.

When companies encounter behavior that deviates sharply from established patterns of interaction, they have to decide whether to terminate the relationship or to regard the unexpected behavior as random or accidental. Because they must act quickly, companies sometimes react mistakenly. They might terminate a relationship without sufficient cause – a mistake that eliminates the risk of strategic surprise but may also end a valuable relationship. Or they might underestimate the risk of opportunism and thus make the firm more vulnerable to strategic surprise. We define those mistakes as Type I and Type II errors, respectively.

Also, most companies do not rely solely on past behavior when making inferences about the threat of strategic surprise. They also tend to rely on behavioral norms, which constrain opportunism and shape generalized expectations about how business partners are likely to behave. But data on how strongly partners actually adhere to norms is also scarce, so such inference is likely to amplify errors in judging the trustworthiness of partners.

Given these constraints, how can companies guard against strategic surprises while avoiding hasty judgments and maintaining valuable business relationships? The answer lies in using tools and theories from economics, psychology, and sociology.

 

Judgment and Strategic Surprises: A Model

Strategic surprises occur when an actor switches from behavior that reinforces cooperation and friendliness to one that expresses an aggressive or non-cooperative intent. The surprise results from speed, which allows little time for warning or defensive measures, and from the contrast between assumptions held before the action and the intent revealed by the action. The greater the contrast, the greater the surprise.

Companies try to insulate themselves from such surprises by gathering information and processing it. When they do, executives routinely classify information either as a signal (a legitimate warning) or as noise (irrelevant information). In acting wrongly on these judgments, they may commit errors.

Gathering more information can help reduce – but not eliminate – judgmental errors. And at the time of decision, it is impossible to reduce the probability of one error without increasing the probability of the other. To expand on this point, we turn to a model rooted in Signal Detection Theory, described in Figure 1.

The horizontal axis depicts the severity of the signal – say, the number of delays in fulfilling an order – and the potential cost those delays involve. Here the input is primarily information. When the severity reaches the value Xc, defensive action is warranted. The vertical axis depicts the degree of surprise, which varies with the speed of the action and the contrast between previously held assumptions and revealed action. Here the input is primarily interpretation. Events above Yc are defined as strategic surprises, and those below it are regarded as “noise.”

As seen in the grid, there are four possible outcomes: (1) True alarms are cases in which precautions are taken that are justified by subsequent events; (2) False alarms are cases in which precautions are not justified by subsequent events; (3) Strategic surprises are cases in which no alarm was followed by an event that shows precautions should have been taken; and finally (4) True noise covers cases in which there are no alarms and no events to suggest that precautions should have been taken.

Looking at Figure 1, it is clear that one can err either by responding to a false alarm (a Type I error) or by failing to interpret the signal properly and falling victim to a strategic surprise (a Type II error).
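The four-outcome grid can be expressed as a small decision rule. The sketch below is a hypothetical illustration of Figure 1; the threshold values X_C and Y_C, and the event values in the comments, are arbitrary assumptions, not figures from the article.

```python
# Hypothetical illustration of the four-outcome grid in Figure 1; the
# threshold values X_C and Y_C are arbitrary assumptions.

X_C = 3.0  # severity threshold: at or above this, defensive action is warranted
Y_C = 0.5  # surprise threshold: at or above this, an event is a strategic surprise

def classify(severity: float, surprise: float) -> str:
    """Place an event in one of the four cells of the signal-detection grid."""
    alarmed = severity >= X_C     # was an alarm raised?
    surprising = surprise >= Y_C  # did events prove precautions were needed?
    if alarmed and surprising:
        return "true alarm"           # precautions justified by events
    if alarmed:
        return "false alarm"          # Type I error: unneeded precautions
    if surprising:
        return "strategic surprise"   # Type II error: no alarm, but one was needed
    return "true noise"               # no alarm, none needed
```

Lowering X_C shifts events from the bottom cells into the alarm cells: fewer strategic surprises, but more false alarms.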

The challenge for executives and decision-makers, then, is to simultaneously reduce the frequency and cost of both false alarms and strategic surprises. The historical example of the 1941 Japanese attack on Pearl Harbor illustrates how difficult this can be.

 

A Date That Will Live in Infamy

In Pearl Harbor: Warning and Decision, Roberta Wohlstetter concludes: “Never before have we had so complete an intelligence picture of the enemy.” Before the attack, Japanese codes had been broken, and information from British intelligence, diplomats, and journalists was highly accurate. In spite of the constant flow of information pointing to an imminent Japanese attack, American forces were caught entirely by surprise. The surprise is consistent with our model in two respects. Information indicating a Japanese attack led to a series of false alarms, which ultimately undermined vigilance. And assumptions about the low likelihood of a surprise attack led American intelligence to discount information pointing to an attack.

On three separate occasions – in June 1940, in July 1941, and in October 1941 – information about Japanese intentions led to the declaration of a state of alert in Pearl Harbor. The third alert, on October 16, 1941, directed American commanders to deploy forces in readiness to repel such an attack. When an attack did not materialize, these alerts were seen as costly and disruptive false alarms. By the time the third alert was issued, there was a strong tendency to discount warnings and relax vigilance.

Pearl Harbor was also a failure of interpretation in the face of powerful evidence. The possibility of a Japanese attack on Pearl Harbor had been part of American strategic thinking since the 1930s. In April and July 1941, two separate reports forecast the Japanese attack in some detail. Nonetheless, the American military discounted this potential outcome because they believed the risks involved (for the Japanese) to be too great.

It is tempting to argue that if American intelligence had been truly excellent, it would have surmised the date of the attack and possibly even the target. But there is always a gap between the information available and the information needed for completely accurate prediction. And closing that gap requires not just extraordinary foresight but also a major commitment of resources.

Indeed, reducing the risk of strategic surprise, either by taking precautionary defensive measures or by investing in comprehensive intelligence systems, can be very costly. The Soviet Union discovered that guarding against a surprise nuclear attack imposed crippling and potentially fatal costs. This holds true to an even greater extent for firms that have limited resources at their disposal.

 

Balancing Dependence Against the Risk of Surprise

The limits of reducing the risks of strategic surprises are especially evident in businesses governed by non-contractual relationships. In many industries, including automobiles, textiles, publishing, and movies, such relationships are displacing contracts as the main conduit for transactions between buyers and suppliers.

As a stream of orders produces mutual understanding and expectations, the relationships deepen. But these understandings and expectations in turn have important implications for decision-making. Firms are more likely to invest in specialized machinery and production processes if they can rely on future orders from certain customers, for example.

An excessive reliance on a single partner can open companies up to strategic surprises. In the 1990s, TCI Manufacturing Ltd., a Canadian firm, was the sole supplier of computer cases and power supply systems to Power Computing Corp., which produced Macintosh clones. The relationship was governed by a standard purchasing agreement with a thirty-day termination clause. In September 1997 Apple purchased Power's core assets and its license to manufacture clones for $100 million. In short order, Power severed its relationship with TCI, forcing the supplier to shutter most of its operations. The prospect of such an outcome haunts all firms in contracting-subcontracting relationships.

In the 1980s, researcher E.H. Lorenz studied how 10 industrial machine-producing firms located around Lyons, France, dealt with such issues of dependence in non-contractual relationships. The firms developed profitable relationships whereby the manufacturers outsourced the production of key components, while suppliers invested in new technology. But this arrangement carried certain risks. The producer ran the risk of late delivery or poor quality parts, and the supplier ran the risk that after it made the costly investment the producer would fail to place sufficient orders.

The obvious way of reducing such risks is to stipulate them contractually. But drafting contracts is expensive and may foster risk aversion. An excessive preoccupation with the possibility of opportunistic behavior inevitably leads to a proliferation of hypothetical contingencies under which opportunistic behavior could be advantageous. And that produces a heightened sense of risk and a defensive posture.

Instead, Lorenz found that the parties dealt with risks informally, often by making commitments that are intended to build confidence. For example, producers agreed to buy at least 10% of the subcontractor’s output but no more than 15%. This meant specialized investment by subcontractors would be worthwhile, but that no subcontractor would be excessively dependent on a single customer. The subcontractor agreed to invest in new technologies, to be price competitive relative to other suppliers, and to deliver quality components on time. In return, the producers informally guaranteed that they would not instantly drop the subcontractors if competitors were to offer better terms.

The expectations and promises were not always spelled out in contractual language, which preserved flexibility for all parties. But this constructive ambiguity also opens the way for strategic surprises. Confronted with a request for postponement of delivery, firms will be unsure how to interpret the move. Does the lapse represent a clear signal that the subcontractor is behaving with opportunistic intent? And how late does a late delivery have to be before it deserves close attention?

Answering these questions becomes particularly difficult when an event falls into that ambiguous area in which it is regarded as outside acceptable norms, but not sufficiently serious to require action. And this is the moment when firms become most vulnerable to strategic surprises.

 

False Alarms and Strategic Surprises

During the Cold War, both the U.S. and the U.S.S.R. built immense early warning systems to guard against surprise, full-scale nuclear attacks. Initially, the greater amount of information that these systems collected reduced vulnerability to surprise attacks. But as the systems became more sophisticated, each side faced a new dilemma. Various innocent activities could be interpreted by an overly sensitive system as the initial stages of an all-out attack. So to reduce the risk of launching a mistaken preemptive attack, the superpowers built graduated alert systems that decreased the sensitivity of the system to incoming information.

Let us assume that businesses develop similar early warning systems, with three states of alert. The first state, green, represents business as usual. The second state, yellow, is characterized by increased vigilance by certain managers directly involved in sales or purchasing. The third state, red, denotes a high level of attention on the part of managers, including meetings to decide on what actions are called for. These alerts are similar to the areas marked in Figure 1, where green equals true noise, red equals true alarm, and yellow marks both false alarms and strategic surprise. The yellow alert stage is marked by ambiguity, which must be resolved by a judgmental decision.
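The three-state alert scheme can be sketched as a simple mapping from signal severity to alert level. The threshold values in the example below are hypothetical; in practice each firm would calibrate them to its own relationships.

```python
# A sketch of the three-state alert system described above; the threshold
# values used in the example call are hypothetical.

def alert_state(severity: float, yellow: float, red: float) -> str:
    """Map signal severity onto the green/yellow/red alert states."""
    if severity >= red:
        return "red"     # high-level attention; managers meet to decide on action
    if severity >= yellow:
        return "yellow"  # increased vigilance; ambiguity resolved by judgment
    return "green"       # business as usual

# Example: a moderate delay under the assumed thresholds triggers a yellow alert.
state = alert_state(severity=2.5, yellow=2.0, red=4.0)  # -> "yellow"
```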

So, should a subcontractor adopt a green, yellow, or red alert status if an established customer reduces orders? The answer depends on where you draw the lines between the three alert states. Let us assume that a subcontractor emulates a Cold War superpower. She sets the yellow and red alerts at relatively low levels during initial dealings with an unfamiliar client. If the client’s behavior remains below the yellow alert level, she will gradually increase trust and raise the threshold. If a client violates the threshold, trust may erode. The supplier becomes more vigilant, and spends more time interpreting cues that previously would have been ignored.

However, the supplier may conclude that it is the alert that is unreliable, rather than the client, and therefore decide to raise the threshold level. After all, lowering the alert threshold raises costs to managers and organizations. A yellow alert forces key managers to rearrange schedules and priorities, while a red alert may force organizations into costly preemptive actions. And if managers believe that the violation of the yellow alert threshold is accidental and does not reflect their partner’s trustworthiness, they will adjust the threshold upward. This dynamic, illustrated in Figure 2, leads us to the following conclusion:

Proposition. In non-contractual relationships, the estimation of the probability of a strategic surprise is highly sensitive to false alarms. This sensitivity arises from the necessity of inferring the probability of strategic surprise from a small sample of available data on false alarms. In addition, when recent false alarms are associated with high costs, there is a tendency to raise the alarm threshold, thereby increasing the probability of strategic surprise.
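A toy simulation can illustrate the proposition’s second claim: raising the alarm threshold after costly false alarms increases the probability of a strategic surprise. The severity distribution and threshold values below are illustrative assumptions, not estimates from the article.

```python
import random

# Toy simulation of the proposition: after costly false alarms, firms raise
# the alarm threshold, which increases the probability of a strategic surprise.
# The severity distribution and threshold values are illustrative assumptions.

def surprise_probability(threshold: float, trials: int = 100_000,
                         seed: int = 42) -> float:
    """Fraction of genuinely opportunistic moves whose observed severity
    falls below the alarm threshold, i.e., undetected strategic surprises."""
    rng = random.Random(seed)
    misses = sum(1 for _ in range(trials)
                 if rng.gauss(4.0, 1.0) < threshold)  # assumed severity distribution
    return misses / trials

p_before = surprise_probability(threshold=3.0)  # initial, relatively low threshold
p_after = surprise_probability(threshold=4.0)   # threshold raised after costly false alarms
# p_after exceeds p_before: the higher threshold filters out more genuine warnings.
```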

 

Norms and Strategic Surprises

Competition and cooperation between companies would not be possible without a variety of explicit norms and sanctions that are rooted in legal and regulatory structures. Many norms, however, are also informal and tacit. Industries, after all, are social systems in which repeated interactions give rise to informal norms covering areas such as avoidance of price rivalry and mutual forbearance from entry into each other’s markets.

In dependency-producing interactions, repeated exchange can change the basic character of the relationship from a casual arm’s-length arrangement to a partnership. Lorenz noticed the development of “moral contracts” – a series of undocumented understandings – in some of the cases he observed. And for the most part, the effectiveness of such “moral contracts” depends on relatively ambiguous norms.

Because norms that govern cooperative relationships are structurally more complex, their violations are susceptible to multiple interpretations. A delivery that is late by a day, or a batch that has several more defective parts than usual, can be explained away. A problem arises when we enter the aforementioned yellow zone, where delays and defect rates may give cause for concern. It is at this point that managers often begin to suspect that their partners are only paying lip service to the agreement and may not be truly bound by norms.

Indeed, it is not safe simply to assume that a partner will adhere to norms without also evaluating the extent to which he embraces them. But evaluating the degree to which partners embrace norms as opposed to simply following them is difficult. Decision makers tend to rely on the tendency of norms governing one kind of behavior to be related to norms governing other types of behaviors. For example, it is generally believed that a subcontractor who is willing to make last minute changes to an order without extra charge is likely to embrace norms that constrain taking advantage in other situations. Norms can therefore be regarded as social and cognitive constructs that link different populations of events.

In any industry where norms are in place, norm espousal usually precedes norm adherence. Talking about norms makes it easier to behave according to them. In our terms, norms can be conceived as average behavior resulting from long-term interactions, which can at times provide a better guide for behavior than judgment based on a few recent events. This moral grounding is reinforced by the social nature of the interaction. Lorenz found that the friendly language used by the companies he studied conveyed to subcontractors that “when in doubt they should act as if their actions were guided by the norms of friendship.”

Acting “as if” norms are in force is easier if there is sufficient history to suggest that norm espousal is strongly correlated with norm adherence. And this line of reasoning leads us to two important conclusions regarding norms and strategic surprises. The first is depicted graphically in Figure 3. It holds that in industries where norms have developed over a short period of time, firms are more likely to regard behavior that is contrary to norms and expectations as a valid indicator of opportunistic intentions. That, in turn, makes them more prone to lower the alarm threshold than firms in industries in which norms have developed over a long period of time. In other words, when companies move the alarm criterion to the left, to Xc2, they increase the probability of false alarms but decrease the probability of strategic surprises.

The opposite problem exists in industries with a longer history of norm espousal and norm conformity. Since there is more evidence to support the belief that norms are in force, there is a greater tendency to conclude that norm following is a strong index of norm embracing. Accordingly, firms place more weight on norms when interpreting their partners’ intentions and less weight on recent evidence of potentially opportunistic behavior. This tendency leads firms to downplay the importance of recent evidence and false alarms. If, in addition, the cost of recent false alarms has been high, firms may raise the level of the alarm trigger. Doing so reduces the probability of false alarms, but also makes firms more vulnerable to strategic surprises.
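The criterion-shift tradeoff described in this section follows directly from the signal-detection setup, if one assumes (as a standard simplification, not something the article specifies) that benign and opportunistic behavior produce normally distributed signal severities. All parameter values below are illustrative.

```python
import math

# Standard signal-detection sketch of the criterion-shift tradeoff.
# Benign ("noise") and opportunistic ("signal") behavior are assumed to
# produce normally distributed severities; all parameters are illustrative.

MU_NOISE, MU_SIGNAL, SIGMA = 1.0, 3.0, 1.0

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def false_alarm_prob(criterion: float) -> float:
    """P(benign behavior exceeds the criterion) -- a Type I error."""
    return 1.0 - normal_cdf(criterion, MU_NOISE, SIGMA)

def surprise_prob(criterion: float) -> float:
    """P(opportunistic behavior stays below the criterion) -- a Type II error."""
    return normal_cdf(criterion, MU_SIGNAL, SIGMA)

# Moving the criterion left (toward Xc2) raises false alarms but lowers
# strategic surprises; moving it right does the reverse. No placement of
# the criterion reduces both error probabilities at once.
```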

 

Conclusions

While it is not possible to eliminate strategic surprises, there are measures firms can take to reduce their frequency. From an economic perspective, firms should diversify their dependence on buyers and suppliers so as not to reach an extreme level of dependence. Furthermore, companies should include enforcement costs in assessing the probability of opportunistic behavior by partners. Many relationships are built on legal agreements that can deter predatory opportunistic behavior if they are enforceable, and especially if they are enforceable at relatively low cost.

From a sociological perspective, companies should not rely too much on history in predicting their partners’ intentions. Even if historical evidence is systematically collected, it is not a guarantee against potential changes in partners’ intentions. Firms should scrutinize their environments continuously in an attempt to detect changes that may affect their partners’ goals and intentions. Finally, they should develop a sound process of interpreting the information that is gathered from several different sources.

The global marketplace, like the world itself, is fraught with opportunity and danger. And even the most cleverly designed strategies can leave companies and countries vulnerable to strategic surprise. The challenge for government policymakers and corporate decision-makers is not, then, to eliminate risk – that may prove too difficult – but to manage it.

Joseph Lampel is currently professor of strategic management at the City University Business School in London, UK, and was an assistant professor of management at NYU Stern from 1989 to 1996. Zur Shapira is research professor of management and organizational behavior at NYU Stern. This is an abridged form of an article that appeared in Organization Science, Vol. 12, No. 5, September-October 2001.