“There are two times in a man’s life when he should not speculate,” wrote Mark Twain: “when he can’t afford it, and when he can.”
Predicting how someone will fare in a leadership role always involves some degree of speculation; there is no such thing as a perfect batting average. Nevertheless, batting averages can be high or low, and the difference is critical.
Some think the difference in predicting executive performance comes down to human judgment; others say it depends more on process. Regardless, deciding who will lead – next month, next year, or five years from now – is a critical decision. Ask any board chair, CEO or CHRO what keeps them awake at night, and chances are you will hear: “Making sure we have the right talent. Over the next five years, we are going to be facing serious leadership gaps.”
Predicting performance: The four tools
The choices available in assessment tools and processes number in the thousands. While each differs in some way, they ultimately fall into one of four categories:
- Interviews: Holding discussions with a candidate, either one-on-one or in a group;
- Surveys: Speaking with references about a candidate, or gathering feedback from colleagues;
- Psychometrics: Having the candidate complete tests and exercises, from which patterns or abilities might be inferred;
- Simulations: Observing the candidate as he or she participates in scenarios meant to replicate a work situation.
To our knowledge, these are the only tools that an organization has to help predict executive performance.
Is there one best way to assess executive talent?
Because our livelihoods depend on predicting leadership success, my partners and I have a huge stake in answering this question. We decided to survey the chief human resources officers (CHROs) of a cross-section of prominent Canadian organizations to find out what works for them. We wanted to know if they had favourite ways of assessing executive talent, and if they had any advice they could share with others.
The organizations included private sector companies, autonomous government agencies, a commercial Crown corporation, a financial cooperative, a large hospital, and a regulated utility.1
Our discussions revolved around two types of assessment:
- Talent Management – Deciding which employees have the potential to become senior executives;
- Leadership Selection – Deciding whom to choose when a critical position must be filled.
Collectively, the CHROs had well over 200 years of experience. The insights they offered were based on their beliefs about what is truly helpful in assessing executive talent.
What our survey found
As we conducted the survey, we realized that we would not succeed if our goal were to find a single best-practice approach upon which CHROs could agree.
We expected that some of the 12 organizations would have their own subtle differences in conducting assessments. We were surprised, however, to find that no two were the same. All 12 were different, and in many cases very much so.
One organization, for example, uses only group interviews, coupled with reference reports for external candidates or 360-degree surveys for internal ones. Another uses one-on-one interviews, a card-sorting exercise, and a simulation exercise conducted by a consulting firm. A third uses as many as 15 interviews, each conducted individually by employees who surround the position, coupled with a report from an industrial psychologist. Each of the 12 organizations had developed its own unique approach to executive assessment.
We also observed that most of the CHROs had become comfortable with what initially seemed like a paradox: All could recite the weaknesses of assessment tools in general, yet each felt that his or her specific method helped make accurate predictions. All had approaches that differed from the others, yet each had found what they felt worked for them.
Finally, despite the differences in approach, the CHROs’ comments contained common points of wisdom. These points provided us with valuable guidelines that should be helpful to almost any organization, regardless of type or size. While the CHROs might differ about specific assessment methods, they could generally agree on the underlying principles.
This article summarizes several of the guidelines we found as we conducted our survey.
Principles of predicting executive performance
Principle 1: When predicting how an individual will perform, it is important to remember that all four tools – interviews, surveys, psychometrics, and simulations – only provide inferences. They do not necessarily provide facts.
Interviewers, for example, are prone to believe that the opinions formed during an interview are real. Yet with the distortions built into interviewing, they cannot always be so. The interview itself is not a real work situation and candidates are fully conscious of being under the spotlight, which can alter how they respond. Even internal candidates are subject to biases. They win from having superior knowledge of the organization, but lose because their warts are known.
Reference surveys and 360-degree surveys also suffer from numerous built-in biases. The respondent may well be a friend of the candidate, may not want to prevent the candidate from accessing a good opportunity, or may be fearful of repercussions if he or she gives a negative report. In other words, the surveys have a huge potential to distort from the outset.
Psychometric assessments can entirely miss factors that turn out to be critical to performance. In one instance, for example, after a rigorous psychometric process, a large company with a people-oriented culture hired a C-level executive who was incredibly insensitive. The psychological assessment had missed this trait. In another case, in spite of receiving dire warnings from an experienced psychologist, a venture capital firm proceeded to hire a particular candidate. After two years, the firm concluded: “This is one of the best people we have ever hired. The psychologist’s view was wrong.”
Simulations have limitations as well. Do the behaviours of an executive who is being observed as she completes an in-basket exercise really replicate how she will perform on the job?
“No one tool is perfect,” concluded one CHRO. “All have their flaws.” This observation leads to the second principle.
Principle 2: When assessing people for critical positions, avoid relying on a single tool. Combining two, three, or even all four tools is better than relying on one alone.
At first glance, this principle looks to be self-evident. Who wouldn’t agree that two tools are better than one? Deriving the greatest value, however, depends on how the principle is put into practice. To quote one CHRO: “All of the tools are broken. Some are useful, but all are broken, so it is dangerous to become wed to any one of them. Using more than one helps offset the shortcomings of another.”
To the extent possible, the information produced by one tool should be used as a way of checking the information produced by another. Many studies indicate that initial opinions influence how we perceive subsequent information. If one person conducts an interview, and later interprets a test result and conducts reference calls, how unbiased is he likely to be while he views the test result and hears the referees respond?
Ideally therefore, reference checks should be conducted by someone other than the interviewer. If a psychologist is to be used, he or she should be independent. To the degree that the reports produced by the interviewer, the reference checker and the psychologist are independent of each other, the conclusions reached should be less biased, more objective and ultimately, more accurate.
Principle 3: Extra effort equals better results. The value received from an assessment depends directly on the time invested.
On the surface, this principle looks so fundamentally simple that its usefulness is easily overlooked. Most people would agree that two interviews, for example, are likely to be better than one.
When it comes to applying the principle, however, interviewing is precisely where a lot of organizations stop. As a case in point, consider reference surveys. In spite of their considerable limitations, we have found that reference surveys produce much better information when the extra effort is made to meet referees face-to-face.
The same can be said of 360-degree surveys. Those conducted in person provide substantially more insight than those conducted through an online tool. “Online or paper surveys are filled out as quickly as possible,” said one CHRO. “There is no opportunity to push the participant to think more deeply; no incentive for the participant to provide truly insightful feedback.” Recently we attended a board meeting where the results from a face-to-face survey were presented. Just 12 months previously, a very similar survey had been completed . . . online. The difference between the two caused the directors to voice their collective view: “The insights from conducting the survey face-to-face were head and shoulders above last year’s.”
Principle 3 applies to practically every area of assessment: A battery of psychometrics administered by a psychologist is likely to produce more accurate inferences than a single tool administered by the employer; similarly, conducting several simulation exercises is better than conducting one.
All of the foregoing means extra work and extra expense. There may be no other way, however, for an assessment to produce optimal predictive value.
Principle 4: Effective leadership depends on specific qualities, such as emotional intelligence. It also depends on specific competencies, such as knowing what to do in the context of a particular organization. Don’t just measure one or the other of these. Measure both.
There have long been two schools of thought regarding executive potential.
The first school might be summarized as universal leadership: “A good leader is a good leader regardless of the position.” This view holds that executive capacity depends on the core attributes of the individual: things like cultural adaptability, personal values, and capacity to learn.
The second school might be summarized as situational leadership: “The CEO we need for the future is different from the CEO we needed in the past.” This view holds that leadership competence depends on the skills and knowledge needed to succeed in a specific position, in a specific organization, with specific goals to be achieved. As the leadership authority Chester Barnard observed some 60 years ago, leadership legitimacy also means knowing what you are talking about. Our research shows, for example, that a newly appointed CEO is almost three times more likely to depart within the first two years if he or she has not had prior experience in the same sector.
Which school of thought is correct? Actually, there is no need to make a choice. We can decide that both are important, and use both in making a decision. We can assess the individual’s attributes and their competencies. We can determine whether they have a fundamental desire to do the job, and whether they have an Achilles heel that will undermine their performance. While information is never perfect, we can do a lot to close the gap so our decision is based less on speculation and more on knowledge.
Principle 5: Any approach or tool, used often enough, will gradually become more useful. It matters less whether you use a common test, a specific interviewing approach, or a deck of cards to be sorted by the candidate. You and your reactions to the candidate are the actual measuring stick. The particular tool you use is simply a medium that, with experience, will lead your intuition to be right more often.
Several years ago we listened as a seasoned psychologist summarized his views regarding an executive. This psychologist had conducted several thousand assessments. He had been around long enough to learn how his predictions turned out. As he spoke, we noted that he based his conclusions as much on the way the candidate interacted while doing the tests as on the test results. Was the candidate defensive or open? Motivated or anxious? Focused or scattered? The psychologist factored these observations into his conclusions.
An experienced interviewer will do the same. The answer to an interview question may be important, but so is the way the person answers it. To quote one of the CHROs: “Most value for me comes from a light tool. The rest is instinct.” This observation goes a long way toward explaining why 12 CHROs can each have an approach that differs greatly from the others, even though each can be effective. To draw upon a point made by Malcolm Gladwell in his book, Outliers, each CHRO has put in more than 10,000 hours observing executive behaviour. Each has lived with the consequences of predictions over and over again, until a massive internal database of “what works and does not work” has been developed. The CHRO’s instinct, not the specific tool, is the actual measuring stick.
In closing, we would like to leave you with two more quotes from the CHROs:
- “When predicting executive performance, a healthy balance is needed between art and science. The two need to be blended. If you are missing either the art or the science, find someone who can bring the other side.”
- “After we determine that a candidate has the competencies we need, I ask myself one more question: Do I like the person? Life is too short to work with people I don’t like.”
1 The participating organizations included: CN Railway, Export Development Canada, The Bank of Canada, Alterna Savings / Alterna Bank, Hydro Ottawa, The National Research Council, CMA Holdings, MDS Nordion, The Royal Canadian Mounted Police, The Ottawa Hospital, The House of Commons, and NAV CANADA.