Setting up the product performance scoreboard is one of the most critical tasks of the product guy. But perfecting the granularity level of the information is hard.
Think of it this way. Dow Jones and NASDAQ tell you if the stock market is about to fall off a cliff, but using them for stock picking is risky. ‘42’ as the meaning of life has value, but only comedy value. So let’s bake in more parameters, right?
Be careful what you wish for.
During my years at Nokia I saw product performance charts that convinced me that the idea of the Balanced Scorecard, combined with Microsoft Excel’s infinite capacity to add rows and columns, can be a lethal combination for human nerves.
Instead of searching for the perfect index or basket of metrics, the product guy should recognize that the scoreboard serves at least four different purposes – focus of attention, feedback loop, sense of urgency and rewards – each with its own drivers.
Focus of attention
Based on highly proprietary focus group research, I believe there’s a high correlation between program and project managers and people who love air traffic controller movies.
Those dashboards – those PowerPoints or Excels with graphs and numbers covering every conceivable dimension of the execution – tell them whether the gap between their dream and reality is closing or not. The emotional kicks those people get from staying ‘on top of things’ are hard for normally wired people to understand, as evidenced by the lousy IMDb scores for the two great air traffic controller movies – Ground Control (only 5.6) and Pushing Tin (only 5.9).
Now, facing such complexity, the natural reaction is to try to dumb the metrics down, or to craft a super-index. Doing so would be about as useful as removing the altitude parameter from the tool set of the air traffic controllers. Or combining all approaching American Airlines flights under one AA code. Surely, Kiefer Sutherland or John Cusack would have no problem, but any lesser talent would be in trouble.
The highly quantitative approach, with its many relative numbers, can also make you lose sight of the big picture. For example, I worked with feature phones, which usually ship in huge volumes. In the dashboard of a product family shipping 50 million units a year, the yearly product return rate moving from 4% to 5% may look like a blip. But in absolute terms that means 500,000 more visits per year to the customer care center – the equivalent of 2.5 days of average total passenger flow through Heathrow Airport.
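The back-of-the-envelope math behind that example is worth making explicit. A quick sketch, using only the illustrative figures from the text (not real Nokia data):

```python
# Relative vs. absolute: a 1-point blip on the dashboard
# translates into a large absolute number of care-center visits.
units_shipped = 50_000_000        # yearly shipments of the product family
old_rate, new_rate = 0.04, 0.05   # return rate moving from 4% to 5%

extra_returns = int(units_shipped * (new_rate - old_rate))
print(extra_returns)  # 500000 extra visits per year
```

The lesson is simply that a percentage-only dashboard hides the scale; showing the absolute delta next to the rate keeps the big picture in view.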
The dashboards serve best when they are used for understanding where to focus the attention, or what questions to ask. And such data should be kept in the hands of people who know how to interpret them, or at least always supplemented with an expert interpretation.
I saw first hand at Nokia that not much good happens when, for example, sales guys in need of product schedule data tap into the intranet wiki of the product development team, pull some data points and start making their own extrapolations from the bug curves. Even the highly multitalented Bruce Willis in Die Hard 2 didn’t get involved in air traffic controlling, but let Fred Thompson figure out how to “stack, pack and rack” all the planes safely home.
Feedback loop
Qualitative feedback, i.e. written comments – whether via the scoreboard mechanisms or from the friends of the Board members – is another potential source of information overload. Again, it would be a mistake to try to bottleneck and reduce the volume of such valuable input. The more the better.
But in consumer-facing businesses, the amount can be overwhelming, beyond even Kiefer’s or John’s processing capability.
For qualitative feedback, the product guy should focus on building the most efficient “routers” that somehow split the feedback and route it to the right experts. If there’s a comment about “still camera noise levels not being quite right”, the router needs to be intelligent enough to send the feedback to the camera guys based on the word ‘camera’ (instead of sending to the audio guys based on the word ‘noise’).
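As a minimal sketch of such a “router”: the team names and routing rules below are hypothetical, but they show the key design point from the example above – ordered, more-specific rules must win over generic keywords, so “camera … noise” reaches the camera guys rather than the audio guys.

```python
# Hypothetical keyword router for free-form product feedback.
# Rules are checked in order: specific topics (camera, battery)
# are listed before generic ones (noise), so the first match wins.
ROUTES = [
    ("camera", "camera-team"),
    ("battery", "power-team"),
    ("noise", "audio-team"),    # only fires if nothing more specific matched
]

def route(comment: str) -> str:
    text = comment.lower()
    for keyword, team in ROUTES:
        if keyword in text:
            return team
    return "triage-inbox"       # no rule matched: a human decides

print(route("Still camera noise levels are not quite right"))  # camera-team
```

A real router would of course need smarter text classification than keyword matching, but even this toy version illustrates why rule ordering, not just keyword presence, determines where feedback lands.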
Surprisingly often, the free-form comments contain anecdotal weak signals about a problem (or opportunity) that only shows up on the dashboard when the crisis (or the unplanned success) is already upon us. Routing data to the right person fast enough can prevent bad from turning to ugly, or make good great.
But right routing doesn’t guarantee right action. Which brings us, again, to human behavior.
Sense of urgency
During my Nokia years, the product quality “lessons learnt” studies often revealed that somebody somewhere had detected the problem and the information was even routed to the right place. But nothing happened.
Hindsight is of course dangerous. These days only product managers and the referees of FIFA-governed football games seem not to have the luxury of video replay. A lot of product information comes in all the time and things can be missed.
Through experience, people get better at information triage. If they have the right attitude. If they don’t… well… there is no easy solution to ignorance. But the product guy can influence the behaviors that let ignorance blossom.
First, stop measuring against yourself. “We met the plan, hence we succeeded” is a train of thought that doesn’t account for how often the plan sucks, or how massaged the measurement was from the get-go. Sales guys have a worldwide reputation as experts in low-balling and sandbagging, but only because the product development community is lousy at its own PR. Further, in getting things done, following the plan isn’t the goal. That’s not to say plans aren’t important. They are, but they also partially exist as change management tools, so that you know you are changing for the better and can execute the change in a synchronized manner.
Second, find ways around the ‘burden of proof’ problem of separating an anomaly from an epidemic. Do you have to prove that rappers generally don’t make long-lasting box office hit actors? Or do I have to prove that Mark Wahlberg wasn’t a fluke? Or is it inevitable that we end up arguing whether he was much of a rapper to start with? A tough one. But this level of intellectual, quality dialogue about the product feedback is needed.
Rewards
It is impossible to start a discussion about performance and rewards without ending up at links to Dan Pink’s talks and articles on why carrots and sticks don’t work, and how autonomy, mastery and purpose are the new religion. So let me save you the effort and link to some of the most read and viewed material – here, here and here.
Don’t get me wrong. I am definitely not saying “So what?”. I very much believe Mr. Pink is onto something.
I can’t say I have a lot of evidence for my claim. The compensation schemes at Nokia – as often in massive companies – were set very top-down, so there was not a big sandbox to experiment in. But I still saw enough. I became convinced that the more clever and complex (meaning a Ph.D. in Definitions is required) or the more bold and binary (“If X happens, you get Y (as in yelling) and Z (as in zilch)”) the incentivization scheme is, the more likely it is to fail miserably, stifle creativity and create an attention-diverting headache to manage, stealing focus from what really matters – the product, the consumer and the team.
At the same time, the idea of not linking performance and rewards at all is a bit utopian. Equality is a good starting point because, whether people admit it or not, there’s a socialist streak in every compassionate human being. It just may not reach the level of accepting the pay level of the most undeserving member of the team. It may only reach the level of protecting that nice guy from being booted, which in realistic terms would be the way to free up salary budget for others. And every knowledge worker surely is socialist enough to agree that not all the profit from the great wisdom unleashed should go to the capitalist class.
So a hard problem to solve….
Hence, the scoreboard design must take into account that its data will most likely end up being used for rewards purposes. Hopefully not through some arbitrary, difficult-to-predict formula, but rather as a data point – a piece of evidence – people can use to describe what constitutes ‘fair’.
The simple scoreboard that sucks the least
I wrote earlier how the combination of three metrics – net sales, gross margin and Net Promoter Score (NPS) – is the closest thing I’ve found to a working scoreboard.
Those metrics are far from perfect and can be executed as wrongly as any other scoreboard (ref: my post about the NPS stimulus problem), but I believe it is doable to build around them a holistic system that works and is cost efficient.
Think of NPS as the overall thermometer. If it is red, it is proper to declare a higher DEFCON level even if every other metric is green, because something is not quite right. The program manager’s dashboard, combined with the written NPS comments from the feedback used in the Dr. House way, is then the way to figure out what the root of the unhappiness is and how to improve.
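That triage rule can be sketched in a few lines. The threshold (treating NPS below zero, i.e. detractors outnumbering promoters, as “red”) and the level names are made up for illustration, not taken from any real scoreboard:

```python
# Sketch of the "NPS as thermometer" rule: a red NPS escalates
# regardless of the other metrics; otherwise the other metrics
# decide whether to dig into the dashboard and written comments.
def alert_level(nps: float, net_sales_ok: bool, gross_margin_ok: bool) -> str:
    if nps < 0:
        # Red thermometer: escalate even if everything else is green.
        return "escalate"
    if not (net_sales_ok and gross_margin_ok):
        return "investigate"   # dashboard + NPS comments, Dr. House style
    return "steady"

print(alert_level(nps=-12, net_sales_ok=True, gross_margin_ok=True))  # escalate
```

The design choice worth noting: NPS acts as an override, not as one more weighted input, which is exactly what keeps it from being averaged away by green metrics.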
In closing, in perpetuity
Unlike in sports, the buzzer of the product scoreboard should never sound full time. It’s a perpetual scoreboard that also gets better – more accurate, more understandable and less latent – all the time.
That’s because the product guy plays a game that never ends. As the late management thinker, and the only official guru of the Product Guy series, Peter F. Drucker said: “The purpose of the company is to acquire customers. And keep them.”
What happened in the previous episodes of the Product Guy series:
Stay tuned for the next episode: Product guy – find your inner hipster