Focus on the essentials with key performance indicators. Developing KPIs is not easy: you have to determine suitable indicators or adapt the existing ones, and how to approach that is far less obvious. As soon as you delve into the problem, more and more questions arise.

How do I find suitable KPIs? 

How do I know if existing indicators are still useful? When does it make sense to work with key performance indicators, and when is it useless or even counterproductive? And what degrees of freedom do I have when developing a performance measurement system such as a Balanced Scorecard?

These are not easy questions, and there is not always an unambiguous answer. In this article for ManagementSite, we discuss several common problems and how to tackle them based on some real cases from our practice.

Placing your own accents within a larger whole

A subsidiary of an American multinational has developed a series of KPIs in recent years. Management somewhat euphemistically called this their ‘Balanced Scorecard’. 

But in fact, it was nothing more than a patchwork of numbers that had once proved useful: more than 80 indicators, kept in all kinds of spreadsheets on six different PCs. There was no real ‘system’ for dealing with those indicators systematically. Some had even fallen into disuse and were no longer properly tracked.

The usefulness of others was increasingly questioned. On the other hand, no indicators were available for certain strategically important activities and developments.

The management team, therefore, decided to build a new system from scratch. The starting point was the question: what is essential to know whether you are doing well as a company? We initially focused on the indicators that each management team member should be aware of and which should also be regularly discussed in the management committee.

First, we critically examined all quantitative information that had to be passed on to the American parent company. Every month this concerned dozens of figures, mainly of a financial nature.

Two of those figures could be used directly as KPIs: the headcount and the total turnover per quarter.

From the other figures, we distilled five additional key performance indicators that captured their essence.

We also developed our ‘own’ mission, vision, and strategy for the subsidiary. Of course, this local strategic framework was consistent with that of the parent company, but it was much more specific and had its own emphasis. And above all: it lent itself well to attaching a number of specific indicators, for example about the subsidiary’s own innovation capacity and its contribution to the group’s European turnover.

This exercise yielded sixteen indicators, mainly related to activities and results that should evolve in the coming years.

Two questions to all services and departments about KPIs

The third source of indicators resulted from the bottom-up approach. We asked the same two questions to all services and departments – production, sales, administration: 

(1) “On what basis will you be able to say within a few years that the effectiveness and efficiency of your service/department have improved?”; and 

(2) “How can you prove that you have met set goals and externally imposed requirements?” Based on the answers to these questions, the number of key indicators was determined for each service and department; it varied between two and eight. These indicators also formed the core of each group’s own dashboard. Some of them (for sales and production) were also included in the executive committee’s set of indicators.

End result: a dashboard of 28 indicators for the executive committee. Of these, 11 are discussed monthly, 13 per quarter, and 4 once a year. The four executive committee members now have a joint overview of all important developments in the company. 

Common insight has grown, and the management team reaches joint decisions more quickly. In addition to the common indicators, each board member also has his or her ‘own’ set: the indicators of the services and departments that report to him or her.

This has resulted in a coherent system in which all critical aspects of business operations are covered with a reasonable measurement effort.
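The layout described above can be sketched as a small data structure: a shared dashboard grouped by review cadence, plus each member's 'own' set built from the departments reporting to them. The indicator names, reporting lines, and counts per cadence below are purely illustrative, not taken from the case.

```python
# Hypothetical sketch of the dashboard structure described in the case.
# All names are invented for illustration; the case had 11 monthly,
# 13 quarterly, and 4 yearly indicators.
shared_dashboard = {
    "monthly":   ["turnover", "order_intake", "headcount"],
    "quarterly": ["innovation_rate", "eu_turnover_share"],
    "yearly":    ["employee_satisfaction"],
}

# Each board member's 'own' set consists of the indicators of the
# services and departments that report to him or her.
reporting_lines = {
    "coo": ["production", "logistics"],
    "cco": ["sales"],
}
department_kpis = {
    "production": ["scrap_rate", "on_time_delivery"],
    "logistics":  ["stock_turns"],
    "sales":      ["win_rate", "pipeline_value"],
}

def own_set(member):
    """Collect a member's own indicators from their reporting departments."""
    return [kpi for dept in reporting_lines[member]
                for kpi in department_kpis[dept]]

print(own_set("coo"))  # ['scrap_rate', 'on_time_delivery', 'stock_turns']
```

The point of the sketch is the separation of concerns: the shared set is discussed jointly at the cadence it belongs to, while each member's own set is derived mechanically from the reporting lines rather than assembled ad hoc.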

Building a barrier against the abundance of information

Some time ago, there was some grumbling within the board of directors of a large university of applied sciences about the large number of figures the governing body was confronted with in just about every meeting. At meetings, agenda items were lined up with full pages of tables on student numbers, details of income and expenditure, lists of projects, and so on. Once a year came the annual report, a tome of 150 pages chock-full of data.

In fact, the board members were inundated with information that they could not always properly interpret. Therefore, the board of directors asked the management to compile a dashboard that would contain a maximum of 20 indicators, focusing on the strategic challenges of the university college. The administration asked a project group to work on this.

In the working group, we quickly established that our assignment was anything but obvious. A limitation to 20 indicators meant that choices had to be made about what was a priority and what was not. It implied that not all important aspects of a particular theme could be covered with indicators. Take the theme of ‘internationalization’. 

For that theme, six indicators had already been in use in the university college for a long time, but there was only room for two in the dashboard. Which two to choose? That was not an easy discussion, but in the end, we settled it.

The working group also discovered another problem: in the different services and departments, there were already a lot of figures and indicators in use, but often with slightly different definitions and calculation methods. 

That is why those figures could not simply be compared with each other. So there was not only a need for focus at the highest level, but also for more coherence and mutual comparability of data at the lower levels in the company.

The exercise eventually took a whole year. In the end, we managed to narrow the set down to 23 indicators, slightly more than the number set by the board of directors. But the qualitative objective was achieved: a clear set of indicators that were highly relevant to the management level of the university college.

We were guided by a number of principles in the development of the dashboard with KPIs.

What to consider when choosing Key Performance Indicators?

  • First and foremost: make a clear distinction between indicators for the board of directors and the other (additional) indicators for internal use. In practice, this led to several mutually consistent sets of indicators in the university college. Many indicators at the board of directors level, for example, are an aggregation of the indicators used at the level of the training courses. The calculations for both types of indicators are now done in exactly the same way (and even by the same person).
  • A second important principle was to establish the regularity with which the indicators should be discussed. After all, not all indicators need to be discussed at every meeting of the board of directors, and some can only be calculated once or twice a year. In our exercise, of the 23 indicators, 15 are calculated and discussed only once a year, and most of the others only two or three times. By spreading the annual discussion of the key performance indicators over several meetings, only 5 to 8 indicators are covered at each meeting. This leads to meaningful discussions and also ensures that all indicators are thoroughly discussed at least once a year.
  • The third principle: always compare each indicator value with previous values and present this time series graphically. This facilitates a discussion of the trend: Are we on track? Is the trend going in the right direction (and if not, why not)? Does it look like we will meet our medium-term goal? Applying this principle required that past values be (re)calculated retroactively for all new indicators. That was not an easy job; for some indicators it was also not possible due to a lack of data. But the effort was worth it and was greatly appreciated by the board of directors.
  • The fourth principle was that the dashboards of the services and departments should always contain the relevant KPIs from the board of directors’ set (usually after ‘disaggregation’: if the indicator at board level contains a total value for the entire university college, then the departmental indicator is the departmental total). Dashboards for services and departments can also contain specific indicators of their own; this is even encouraged. By applying this principle, all related indicators are calculated identically. This has resulted in more transparency and even less discussion about the redistribution of resources between the programs.

The fact that many KPIs from the board of directors’ dashboard are already available ‘disaggregated’ provides an additional advantage in practice. Suppose questions arise about certain evolutions, e.g., the increase in absenteeism. In that case, it is often sufficient to look at the lower levels’ indicators to determine whether this is a general problem or whether it is mainly situated in certain services.
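The disaggregation idea above can be sketched in a few lines: the board-level indicator is a total over all departments, computed from the same underlying figures, so drilling down into, say, absenteeism requires no extra calculation. All figures below are invented for illustration.

```python
# Hypothetical sketch of the 'disaggregation' principle from the case.
# Department names and figures are invented for illustration.
absenteeism = {
    "teaching":       {"days_absent": 120, "days_scheduled": 4000},
    "administration": {"days_absent": 260, "days_scheduled": 2600},
    "facilities":     {"days_absent": 90,  "days_scheduled": 1800},
}

def rate(figures):
    """Absenteeism rate in percent, calculated identically at every level."""
    return 100 * figures["days_absent"] / figures["days_scheduled"]

# Board-level KPI: the institution-wide total over all departments.
total = {
    "days_absent":    sum(d["days_absent"] for d in absenteeism.values()),
    "days_scheduled": sum(d["days_scheduled"] for d in absenteeism.values()),
}
print(f"institution: {rate(total):.1f}%")

# Drill-down: the same indicator per department, identically calculated,
# shows at a glance whether a rise is general or concentrated.
for dept, figures in absenteeism.items():
    print(f"{dept}: {rate(figures):.1f}%")
```

Because `rate` is the single definition used at both levels, a board member asking "where does the increase come from?" can be answered directly from the departmental rows without any reconciliation work.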

The system has been in operation for about one year now. Not only the board of directors reaps the benefits. The development of the dashboard has also led to more focus on strategic issues and more transparency in the operation of the university of applied sciences.

Avoid long lists of KPIs

We conclude this contribution with a common problem: indicators that relate to short-term realizations. There are many examples: the start-up or completion of (mutually different) activities, the production of (each time different) outputs, the launch of (different) initiatives, and so on. Businesses or services in which these types of activities are central tend, when they have to develop KPIs, to determine one or more key performance indicators for each of those activities.

This leads to endless lists of indicators, which are often both trivial and virtually worthless once the activity has been completed. There are organizations, especially in the public sector, where hundreds of such indicators have been introduced “because we simply need to have KPIs for our activities.” It is then (rightly!) complained that there are far too many indicators and that they provide little insight.

If you are confronted with dozens or hundreds of activities and need to develop an insightful set of KPIs for them, don’t fall into the trap of creating an indicator for each of those activities. The people operationally engaged in those activities do not need an indicator that is trivial to them.

Those directly responsible already know when an activity has been completed. And those a little further from the operational level do not care about long lists of binary indicators.

Then what can you do? In such situations, the most meaningful approach is to work with ‘meta-indicators’: group related activities and then indicate how many of them meet a certain common criterion (timely, flawless, according to specifications, etc.). Suppose you have to deliver nine different reports within the year.

You could create two KPIs for each report separately: one that indicates whether the report is ready on time and a second that indicates whether any further corrections were needed. That approach creates no fewer than eighteen indicators.

A better alternative appears to be two meta-indicators, which then lead to indicator values such as ‘7 of the 9 reports were ready on time’ and ‘8 of the 9 reports did not require any subsequent corrections’.
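The meta-indicator construction is simple enough to sketch: count, within a group of related activities, how many satisfy a common criterion. The report data below is invented for illustration, chosen so that it reproduces the example values from the text.

```python
# Hypothetical sketch of the two meta-indicators for the nine reports.
# The on_time / corrected flags are invented for illustration.
reports = [
    {"name": f"report_{i}", "on_time": on_time, "corrected": corrected}
    for i, (on_time, corrected) in enumerate([
        (True, False), (True, False), (True, True),
        (True, False), (False, False), (True, False),
        (True, False), (False, False), (True, False),
    ], start=1)
]

def meta(items, predicate):
    """Return a 'k of n' meta-indicator for a common criterion."""
    return f"{sum(predicate(x) for x in items)} of {len(items)}"

print(meta(reports, lambda r: r["on_time"]))        # 7 of 9 ready on time
print(meta(reports, lambda r: not r["corrected"]))  # 8 of 9 without corrections
```

The same `meta` helper works for any group of activities and any criterion (timely, flawless, according to specifications), which is exactly what keeps the indicator list short as the set of underlying activities changes from year to year.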

Meta-indicators, therefore, provide a general picture of a situation. They have the added advantage that they also last for several years (“Last year we reached our target for 10 out of 12 campaigns, but this year only for 7 out of 11”). If negative trends are observed, it is possible to zoom in on the groups and activities that are causing the weaker overall results.

Conclusion: key performance indicators

Targeted implementation of indicators in your business can deliver real added value: better insight, greater internal cohesion, and faster and better decisions. But when used inappropriately, they cause annoyance, bureaucracy, and even perverse and evasive behavior.

So work in a focused way and always keep the essentials in view: what information do we need to make good decisions, or to be sufficiently sure that we are evolving in the desired direction?