Objectives and Key Results (OKR) software, when correctly deployed, is an immensely powerful tool for guiding an organization towards its critical successes. However, the way the OKR software industry has evolved means companies are frequently forced to choose a solution with an innate bias.
These biases lean towards either task management or employee assessment, and each carries a set of risks and challenges that limit the efficacy of the overall OKR implementation.
When we engage with customers and prospects, one question gets asked more often than any other: will OKRs improve the performance of underperforming teams or underperforming employees?
Most of our customers raise it while testing the waters with OKRs. “There are great performers and there are mediocre performers. I would like OKRs to help mediocre performers become better performers.” This is the problem statement we wanted to focus on in this article: “How can I boost my team’s performance using OKRs and make my average-performing team members better performers?” While we think that OKRs are the best possible way to improve a team’s or an employee’s performance, how you go about doing that really matters.
The Business Process Equation
From a 30,000-foot view, a business has several core processes, each of which consumes inputs — human resources, materials — and uses tools to generate a set of outputs. These outputs, in turn, lead to meaningful outcomes for the business.
The primary reason this question comes up is that most of us are biased towards outputs or inputs:
- Do I have the right people?
- Are they doing the right tasks?
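The equation described above can be made concrete with a small sketch. This is a minimal, illustrative data model (the class and field names are our own, not from any OKR product), showing how a process chains inputs and tools into outputs and, ultimately, outcomes:

```python
from dataclasses import dataclass


# Illustrative sketch of the business process equation:
# inputs -> (process + tools) -> outputs -> outcomes
@dataclass
class BusinessProcess:
    name: str
    inputs: list[str]    # e.g. people, materials
    tools: list[str]     # systems the process relies on
    outputs: list[str]   # what the process directly produces
    outcomes: list[str]  # the business results those outputs drive


credit_control = BusinessProcess(
    name="credit control",
    inputs=["credit controllers", "invoice data"],
    tools=["ERP system", "dashboard"],
    outputs=["follow-up calls", "payment reminders"],
    outcomes=["improved cash flow", "reduced problem debt"],
)

# An OKR can target any of these levels; the argument in this
# article is to anchor key results on the rightmost term.
print(credit_control.outcomes)
```

The point of separating the fields is that a dashboard built only around `outputs` can glow green while the `outcomes` list goes unmeasured, which is exactly the failure mode the next section explores.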
All successful organizations realize, at some point in their evolution, that measuring the journey is a critical component of ensuring timely and cost-effective arrival at their intended destination.
This necessity to effectively measure “pathway progress” has created a marketplace for a multitude of vendors, each purporting to have the best measuring tools of the day. A subset of these tools is the specialized area of OKR software. Inevitably, not all measuring tools are created with equal degrees of usefulness. Some are fabricated from innately elasticated material, with selective data sets that may send confusing signals to those who rely not just on the veracity of the metrics being reported but also on their wider organizational relevance.
This is especially true when a measuring tool has been hastily selected without due consideration of the environment within which it has to operate. Decision making on flawed or biased data sets holds the potential for catastrophic outcomes.
What happens when you are focused on Output?
To explore the latent ineffectiveness of this approach, let’s look at the following hypothetical.
A finance director is monitoring the activities of his credit control team via a dashboard. The activities of each individual credit controller are broken down into measurable tasks. Each task, in theory, triggers another task that should lead to effective credit control management and help improve the organization’s struggling cash flow. The cash position of the organization is a major focus of the executive team.
They have a few hundred medium-sized clients with varying degrees of late-payment tendencies. The dashboard shows that all tasks, and indeed all follow-ups, are being conducted according to the requisite KPIs. The dashboard shows an overwhelming array of green lights and ticked checkboxes. And yet the organization still struggles with significant cash flow problems.
The root cause, in this hypothetical example, is that the organization has one very large client whose relationship is managed outside the purview of the credit control team. This one client accounts for 50% of the organization’s problem debt. The dashboard reports green lights because it has been deployed to focus on certain tasks and does not have the ability to cater for the impact of this one large, late-paying client.
While this example might seem extreme and easily mitigated by effective management communication and cross-functional collaboration, often it is not as simple as that to fix. The more complex an organization gets, and the more elaborate its silos and departmental structures, the more dependent it becomes on systems and reporting to inform the executive team of its progress towards its mission.
In our credit control example here, the finance team can happily report that it is meeting or even exceeding its performance standards, and all tasks are being conducted according to plan. And yet the company is still struggling.
The focus on output missed the point that one critical event and relationship were not being captured within the system.
What happens when you focus on Employees?
To properly understand the dangers of misaligned OKR solutions, let’s look at another example of a poorly implemented OKR program.
Our cash-strapped client from the previous example has shifted their focus to the people responsible for keeping credit risk under control. The organization took the view that a task focus had not delivered the right data framework, and they are now hoping for a more effective approach.
This time, the approach is more biased towards employee development. There is a heavy emphasis on skills alignment and trying to ensure that goals are cascaded down from the executive leadership team to individual team members.
This sounds great in theory. The problem is that the very same issues exist with this approach as with the previous task focus; they are just masquerading under a different set of metrics. Developing an employee’s skill set sounds like an inarguably good idea, and yet it does not take too much of a leap of imagination to construct a set of circumstances where it could be extremely counterproductive in the absence of a more holistic data set.
Let’s assume that this company has an ambitious expansion plan. Its path to success is deemed to be largely dependent upon the performance of its incumbent sales team, a field sales team inherited from a recent merger.
The data suggests that this team is collectively underperforming in a certain part of the sales cycle. The HR function acts upon this data and recommends an aggressive and expensive learning and development program to upskill the team.
Nothing about this so far should seem particularly contentious, and yet, just like in our credit control example, the metrics fail to identify that the key opportunity for rapid improvement would not be found in training. In this particular example, the incumbent sales function consisted of expensive field agents, and the company would have been better served by replacing them with high-volume, low-cost telesales specialists.
In both of these examples, we have used deliberately facile scenarios, and one would hope that any organization would be adroit at identifying and solving them. And yet, due to the very nature of how complex organizations can become, solvable issues frequently remain hidden from the view of those who could solve them.
These are examples of how data can be unrelentingly misleading when the framework has been constructed from the wrong grade of timber.
The larger and more complex an organization gets, the greater the need for this framework to be robustly constructed using tools that don’t have an innate bias to either task management or employee assessments.
Where in this equation do you set your OKRs?
In this equation, you can set OKRs at every level — outcomes, outputs, inputs — and even at the level of an individual input component. But the key to a really good OKR is to stay as far to the right of this equation as you can: focus on the outcomes.
Instead of saying “I want to improve the performance of my underperforming employees” and setting goals to improve them, try to set OKRs that focus on the outcomes they produce. For example, suppose I operate an NOC (network operations center) and I feel that the group is not working out well. Instead of focusing on how many tickets each person closes, which is a good indication of their output, set key results that focus on uptime, reducing repeated tickets, and so on.
Outcomes are clear. There is no ambiguity: either you achieved them or you didn’t. And good outcomes are most certainly influenced by good outputs, which in turn are the result of good inputs. But sometimes it might be harder to set your key results based on outcomes. In those cases, focus on outputs rather than inputs.
For example, instead of asking a call center agent to work 8 hours, ask them to make 100 calls a day. Assuming you know that, on average, you get 5 leads out of every 100 calls, this output target maps directly to the outcome you want.
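That reasoning can be made concrete with a quick calculation. The 100-call target and the 5-leads-per-100-calls rate are the article’s illustrative figures, not real benchmarks:

```python
# Output-based key result: 100 calls per agent per day.
# Assumed conversion: 5 leads per 100 calls (illustrative figure).
calls_per_day = 100
leads_per_100_calls = 5

# Projected outcome per agent per day implied by the output target.
expected_leads = calls_per_day * leads_per_100_calls / 100

# Working backwards from an outcome-based key result: if the team
# target is 20 leads/day across 4 agents, the implied output target
# per agent is still 100 calls.
team_lead_target = 20
agents = 4
calls_needed_per_agent = (team_lead_target / agents) * (100 / leads_per_100_calls)

print(expected_leads, calls_needed_per_agent)
```

The two directions are equivalent only when the conversion rate is known and stable; when it isn’t, the outcome-based key result (leads) stays meaningful while the output-based one (calls) can drift away from the result you actually want.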
Finally, if there is no other choice, get your key results focused on inputs.
Business Process Interactions
But we know that processes don’t exist in isolation. There are many interconnections between processes: one process’s output or outcome is the input to another process. For example, the outcome of the “lead gen” process is “qualified leads,” and “qualified leads” are the input to the “sales process.” You will have to visualize this interconnection, understand the different outcomes, and set key results for the appropriate people based on that.
For example, look at the sales process below.
You can see that there are three different groups involved in this process — Marketing, Sales, and Account Management. Marketing runs campaigns that result in MQLs, or marketing qualified leads. Those are then passed on to the next group in line, to qualify and generate SQLs, or sales qualified leads, and move them through the process as appropriate.
So, as you can see, one group’s or function’s outcome is typically an input to another function. These interplays, and the KPIs attached to them, have to be thoroughly thought through and understood while defining your OKRs.
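The chaining of one function’s outcome into the next function’s input can be sketched as a simple pipeline. The stage names follow the article; the conversion rates are made-up numbers purely for illustration:

```python
# Each stage consumes the previous stage's outcome as its input:
# Marketing -> MQLs -> Sales -> SQLs -> Account Management


def marketing(campaign_responses: int) -> int:
    """Outcome: marketing qualified leads (MQLs)."""
    return int(campaign_responses * 0.10)  # assumed 10% qualify


def sales(mqls: int) -> int:
    """Input is Marketing's outcome; outcome: SQLs."""
    return int(mqls * 0.40)  # assumed 40% become sales qualified


def account_management(sqls: int) -> int:
    """Input is Sales' outcome; outcome: closed accounts."""
    return int(sqls * 0.25)  # assumed 25% close


responses = 1000
mqls = marketing(responses)
sqls = sales(mqls)
closed = account_management(sqls)
print(mqls, sqls, closed)
```

Seen this way, setting Sales’ key results on SQLs (its outcome) rather than on activity metrics keeps each group accountable for the value it hands to the next stage, which is the interplay the paragraph above describes.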
The Google Problem
Another key aspect of “target setting” is to consider your current culture. Google is one of the most well-known proponents of the merits of OKR software, proudly utilizing it as a methodology to drive success. However, using Google as a case study for OKR best practice carries any number of challenges for those organizations seeking to emulate their success.
Consider this statement from Don Dodge in 2010 about the way Google sets and tracks its goals:
“Achieving 65% of the impossible is better than 100% of the ordinary”
Organizations have taken this as a mantra for goal setting within OKR frameworks. It’s another version of the “shoot for the stars and you might reach the moon” philosophy. And on the surface, it seems both plausible and laudable in its aspirational approach. And yet, unchecked, this Google “goal emulation” philosophy may do more harm than good within certain organizations.
Think about an organization that may have evolved with well moderated, conservative, and regularly achievable goals. A company like this might bleed to death with the wounds of disillusionment if it suddenly switches to a set of objectives that are predetermined to be aspirational and unattainable.
This is a significant risk for organizations deploying OKR software where a misguided vendor might encourage a goal-setting philosophy that is inconsistent with the company’s culture and pedigree.
A workforce that has become used to receiving praise for 100% goal attainment may not feel that “65% of the impossible” is anything like success.
The GLUT effect and the Anna Karenina Principle
What links these four companies: Google, LinkedIn, Uber, and Twitter?
Well, aside from offering up a convenient if unfortunate acronym, they are all famously passionate advocates of OKR software. In fact, the most cursory of investigations into OKR case studies will likely return these four companies as beacons of best practice.
Is this a problem? Potentially, yes. And in part it is because of the Anna Karenina Principle. To those familiar with Russian literature, the opening line of Tolstoy’s Anna Karenina may already be ringing in your ears. As a reminder, it generally translates to:
“All happy families are alike; each unhappy family is unhappy in its own way”
Authors Lutz Bornmann and Werner Marx posited that the analogy could be extended into the field of scientific research, essentially stating that one can often learn more from failures than from successes. The very same premise can be adopted when looking at just what makes successful companies successful.
The GLUT companies certainly adopted OKR software in extremely efficacious ways, but they had many additional, and arguably unrepeatable, factors perpetuating their success.
Copying the OKR approaches of these high profile success stories is highly unlikely to yield comparable successes without a more holistic approach to goal setting and OKR vendor selection and implementation. It requires and deserves a far more consultative approach.
A good implementation delivers accurate measurements into the hands of decision makers with as much objectivity as possible ingrained in the data. It should empower those entrusted with organizational success to increase the likelihood of a stated destination being reached within a commercially acceptable set of parameters.
And yet, software vendors have infiltrated the OKR space with one of two biases likely to skew the focus of the very clients to which the software is intended to serve.
Cultural approaches focused on outputs or inputs (which translate to task management or employee performance) will disproportionately treat these areas as panaceas of improvement while potentially missing opportunities to remedy or enhance other areas of organizational functionality that their software may be less adept at capturing and measuring.
So, the idea is to stay on the “right side” of this business process equation. You will build a very positive, outcome-oriented culture that focuses on business results instead of worrying about making people work harder or smarter. Show your employees where you want to go, help them understand the bigger picture and what waits for them when they get there, and you will see everyone giving their best effort to get there.