How to measure the value of knowledge work is perhaps the central dilemma for managers in today’s information economy. The main reason is that knowledge-based business processes are not well defined, so it is not obvious what should be measured or how it should be measured. The difficulty is compounded by the human factor: knowledge employees have significant discretion in how they approach and do their work. Moreover, processes sometimes do not work as intended, and measures that seemed appropriate may not capture all the useful information about either the outcome or how the outcome is achieved.

This article discusses three factors that will help managers design useful measures for knowledge work organizations. They are: the requirements for performance measures, a narrative framework that clarifies and helps communicate the measures, and qualitative and quantitative measures.

These three factors can enhance the effectiveness of an existing performance measurement scheme or provide the basis for building a new one.

For example, consider the Balanced Scorecard. Addressing requirements tailors the Balanced Scorecard measures to the particular decision makers and their decision styles. Building a narrative framework adds flexibility beyond the typical linear learning and growth, internal-business-process, customer and financial framework of the Balanced Scorecard. The inclusion of qualitative measures addresses the intangible factors that are often crucial for business success.

The examples in this article are drawn from my experiences at the John A. Volpe National Transportation Systems Center, U.S. Department of Transportation. The examples come from aviation research and product development programs conducted with the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA). These programs involve steps similar to those a business takes to create a new product: determining product characteristics to meet a market need (i.e., enhancing air traffic control operations); understanding customers (i.e., air traffic controllers, pilots, air service operators and aircraft manufacturers); producing prototypes; and testing and refining the prototypes to create usable products. Thus, while the examples are from government aviation programs, the ideas apply to any business organization engaged in knowledge work.


Addressing the requirements issue helps ensure that the performance measures will serve a useful purpose. The ultimate uses of performance measures usually fall under decision-making or communication, and while the decisions that the measures will support are usually obvious, the consequences of the decisions and all the elements that influence a successful outcome are often difficult to articulate. Consequently, the connections between the measures, the decisions and the results are not always adequately addressed.

We first need to understand the purpose of the measures, which usually means knowing whether some objectives are being met. We might call these the outcome measure(s). We also probably want in-process measures, in order to assess activities leading up to the outcome and judge how to improve it.

Once we know the objectives, we can then identify the decision makers and stakeholders, the decisions they will make, the information they need to make those decisions, and the measures that can help them.


The following general steps have proved useful in defining the requirements for measures. The steps identify the:


  1. Organizational objectives.
  2. Decision makers and stakeholders.
  3. Decisions that decision makers and stakeholders make.
  4. Information that decision makers and stakeholders need from the measures.

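As an illustration only, the four steps above could be captured in a simple data structure so that each measure traces back to an objective, its decision makers and their information needs. The names and example values here are hypothetical sketches, not drawn from the Volpe Center work:

```python
from dataclasses import dataclass


@dataclass
class MeasureRequirement:
    """Links one organizational objective to the decision makers,
    decisions, and information needs a measure must serve."""
    objective: str
    decision_makers: list
    decisions: list
    information_needed: list


# Hypothetical requirement for a product-development program
req = MeasureRequirement(
    objective="Deliver a usable prototype to customers",
    decision_makers=["program manager", "project leads"],
    decisions=["continue, redirect, or stop each project"],
    information_needed=["prototype maturity", "customer feedback"],
)
```

Recording requirements this way makes it easy to check, for any proposed measure, which decision it supports and who will use it.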

Structured techniques are useful to develop requirements. The structured techniques presented below help decision makers avoid the tendency to make routine or snap judgments when asked about their information needs.

Interviews or workshops. These sessions urge managers to discuss decisions they make and what decision-making information would be useful by asking “what if” questions. The Volpe Center facilitated a workshop with NASA Advanced Air Transportation Technology (AATT) program managers to define the overall strategic goals for the AATT program. The AATT program is developing computer-based tools to support air traffic control. The definitions of these goals and corresponding high-level measure categories were refined until a consensus was reached. Later, individual project staff members helped develop specific measures that expressed how their projects had an impact on top-level goals.

Structured program flows. Tracking the flow from possible actions to direct impacts to outcomes is another way of identifying the information desired from performance measures. In-process measures can be formulated for various actions and causal factors, as well as end-process measures for results.

While different structured techniques can be used, the important point is to probe deeply enough to understand how decisions are made and to uncover what information will aid those decisions.


Placing measures in a narrative framework helps show how project activities, measured with in-process measures, perform and contribute to the outcome, measured with end-process measures. The influences of external factors and intangibles can also be shown.

Flow framework. One example of a measure narrative is a flow framework, which shows how activities contribute to outcomes. An example of a flow narrative framework is the Balanced Scorecard’s representation of a learning and growth perspective that leads to an internal-business-process perspective, then to a customer perspective, and finally to a financial perspective.

Hierarchical framework. The hierarchical framework shows how a variety of specific activities build to results, and hence to enterprise outcomes. A hierarchical framework is useful for demonstrating how a variety of parallel project activities form a cohesive program to support overall goals.
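A hierarchical framework can be thought of as a rollup: project-level measures aggregate into results, and results into an enterprise outcome. The following is a minimal sketch with hypothetical project names and normalized scores, shown only to make the rollup idea concrete:

```python
# Hypothetical hierarchy: projects roll up to results,
# results roll up to an enterprise outcome (scores on a 0-1 scale).
program = {
    "enterprise_outcome": {
        "result_A": {"project_1": 0.8, "project_2": 0.6},
        "result_B": {"project_3": 0.9},
    }
}


def rollup(node):
    """Average the scores of a node's children at each level."""
    if isinstance(node, dict):
        children = [rollup(child) for child in node.values()]
        return sum(children) / len(children)
    return node  # leaf: a project-level score


score = rollup(program)  # averages 0.7 and 0.9 to give 0.8
```

In practice the aggregation rule would be chosen per situation (weighted averages, minimums for critical items, and so on); the point is that the hierarchy makes the contribution of each project to the overall goal explicit.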

The best framework for any case depends, of course, on the particular situation, the decision makers and how they can best interpret the information presented by the measures. Information for deciding on a narrative framework can be gathered during the requirements phase.


Besides gathering requirements for measures and formulating a narrative framework for presenting the measures, we also have to develop the measures themselves. With the requirements and the framework already defined, it is then easier to develop the measures.

A useful approach for developing measures is to integrate the requirements with an assessment of the actions to be measured. This assessment enables us to understand the actions and their results sufficiently to identify what might be measured. Potential measures can then be generated to capture the information desired by the requirements and checked against the activities to determine if the measures are realistic and calculable. Alternatively, possible measures can be generated to capture the outputs and outcomes of the actions and checked against how well they provide useful information to decision makers and stakeholders. Iterating between these two views produces a set of useful measures.

Most books and articles that deal with measures insist that measures have a quantitative value. Certainly, quantitative measures are preferable. But what if that is not possible? Insisting on only quantitative measures will often mean that important issues will be skirted. It is possible to assess a qualitative item by assigning it a rating, such as high, medium or low, or from 1 to 10. Ratings can be assigned by soliciting expert opinion within the company or by conducting a survey.

The Volpe Center developed measures for the Federal Railroad Administration (FRA) to use in selecting research projects for its safety research and development program. The selection criteria included both quantitative and qualitative measures for comparing the projects. Qualitative measures indicate how a project supports the leverage the FRA has to affect safety (i.e., rulemaking, enforcement, and best practices and standards) and the likelihood of the project’s success. Values were assigned to the qualitative measures using high, medium and low ratings based on expert opinion.
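One simple way to fold such ratings into a numeric comparison across projects is to map each rating to a number and blend it with the quantitative score. The scale, weights and example values below are hypothetical illustrations, not the FRA’s actual scheme:

```python
# Map qualitative ratings to numbers so they can be combined
# with quantitative measures when comparing projects.
RATING_SCALE = {"high": 3, "medium": 2, "low": 1}


def project_score(qualitative, quantitative, weight_qual=0.5):
    """Blend the average of qualitative ratings (normalized to 0-1)
    with a quantitative score already on a 0-1 scale."""
    qual = sum(RATING_SCALE[r] for r in qualitative) / (3 * len(qualitative))
    return weight_qual * qual + (1 - weight_qual) * quantitative


# Hypothetical project: high leverage, medium likelihood of success,
# quantitative score already normalized to 0.6
score = project_score(["high", "medium"], 0.6)
```

The weighting between qualitative and quantitative components is itself a judgment call that the requirements work, described earlier, should inform.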

As another example, consider what could be measured to assess the effectiveness of a company’s cross-functional teams. Quantitative measures might include the number of cross-functional teams, the number of employees involved on teams, and the number of actions generated by those teams. These measures have some use, but they do not reveal whether the company is getting value from its teams. To do so, the company might rate the degree to which managers feel the teams are contributing. Team members might also be surveyed and asked how they rate the teams’ concrete contributions.


Addressing decision makers’ and stakeholders’ requirements for measures supports their decisions and decision styles. Presenting measures in a narrative framework shows the context and causal relations of what is being measured. Using both quantitative and qualitative measures enables all important items to be measured. Addressing these three items can enable decision makers to integrate the intangible factors related to intellectual capital, organizational culture and work styles that are so important to success in knowledge work organizations.