The world is in the midst of a statistical revolution. Everything from professional sports to advertising to education to technology performance to climate can be measured and quantified with more precision than ever before. Statisticians and big data analysts are in demand and finally being listened to.
Yet the world still runs on heuristics and simple stats – numbers that don’t reflect what they purport to show; figures that can be faked for profit and political gain; facts that are simple, intuitive, and completely misleading. You know them in sports with stats like batting average, wins, passing yards; in education with test scores; in technology with prices and analyst rankings. And as we’re finding out in collaboration with the leaders of Bhutan, in the wealth of nations with GDP.
RampRate’s purpose in life can be described in many ways – removing corruption and waste in technology; driving social impact; aligning incentives with performance. But one key ingredient through all of this is getting decisions to be made on the right set of numbers rather than the old standbys that can mislead – or be actively used to mislead.
What Can Go Wrong with Numbers
Broadly speaking, there are three primary data faults:
- No measurement – going by gut feel and heuristics. Examples abound in everything from sales made based on golfing outings to educators resisting any measurement of performance outside of peer opinion.
- Wrong measurement – the data doesn’t say what you think it does. Old-school sports stats are the most glaring examples, with numbers like batting averages and wins only loosely correlated with actual player performance, and the Moneyball era of the last two decades fixing century-old misconceptions in a highly visible way.
- Juking the stats – as The Wire put it, when you make robberies into larcenies, majors become colonels and mayors become governors. For every stat that is used to allocate resources or power, there is usually someone with an incentive to fake that stat for personal benefit.
Measurement Failures in Corporate IT Sourcing
In our world of IT sourcing, we deal with all three on a regular basis:
- The first error is most common in high-growth environments with no internal controls – where the first available or top-branded solution is bought as a substitute for market research because there’s no time or budget for it. But it’s also omnipresent in risk assessment, which oscillates between ignorance and hyper-avoidance.
- The second error is most common in stabilizing environments where CFOs get control of technology spend and demand that the wrong metrics be optimized – like cost per server or per employee rather than cost for the job that machine or staff member actually gets done. Focusing on cost itself rather than value is a core failure – as is ignoring the social impact and reputational risk of associating with providers that pollute the environment, mistreat their employees, or mislead their investors.
- The last one – juking the stats – obviously shows up in unscrupulous sales tactics. But it is also the natural reaction of otherwise conscientious IT staff when faced with unreasonable numbers requests and evaluation standards – when they see management measuring the wrong thing, they fake the results so that they can keep doing what they think is right.
Our job is to bring in the right numbers, organize them around the things that matter, and verify that they are correct – creating a whole new decision scorecard that synthesizes individual dimensions of value into an overall picture of supplier and solution fit to a problem. And that means not just focusing on the known dimensions, but being the first to add social responsibility to the mix – something we’ve included in our scorecards for the last decade.
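To make the synthesis concrete, here is a minimal sketch of how a multi-dimensional scorecard can roll individual dimensions of value up into a single fit number. The dimension names, weights, and vendor scores below are invented for illustration – they are not RampRate’s actual model:

```python
# Hypothetical supplier scorecard. Dimensions, weights, and scores are
# illustrative only, not RampRate's actual methodology.

def supplier_fit(scores, weights):
    """Weighted average of per-dimension scores (each 0-100)."""
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight

weights = {
    "cost": 0.25,
    "performance": 0.25,
    "risk": 0.20,
    "cultural_fit": 0.15,
    "social_responsibility": 0.15,  # the often-ignored dimension
}

vendor_a = {"cost": 90, "performance": 70, "risk": 60,
            "cultural_fit": 80, "social_responsibility": 40}
vendor_b = {"cost": 75, "performance": 75, "risk": 80,
            "cultural_fit": 70, "social_responsibility": 85}

print(supplier_fit(vendor_a, weights))  # the cheapest vendor...
print(supplier_fit(vendor_b, weights))  # ...is not the best overall fit
```

The point of the synthetic score is exactly the one in the text: a vendor that wins on a single intuitive metric (here, cost) can still lose once risk and social responsibility are weighed in.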
Verifying claims and tracing responsibility is the most difficult and labor-intensive part of that process. That need for verification (along with the commitment to social impact) is one reason we are so deeply invested in blockchain ventures. The ability to arrive at a shared truth even when everyone in the environment has an incentive to cheat holds value in our increasingly post-truth world far beyond the rollercoaster price of cryptocurrencies.
Measurement Failures in Social Well-Being
If the problem of “what is a good technology” is difficult, the question of “what makes a good society” can seem downright intractable. The very definition of “good” has as many answers as there are people. Economists long ago made pure utilitarianism less popular by demonstrating that utility is not easily comparable across individuals; ethicists and their pesky trolley problems have sown doubt about the wisdom of sacrificing the well-being of the few to help the many. So for many, the question stops here, mired somewhere between philosophical debates about social choice theory and ¯\_(ツ)_/¯ – a data failure of the first kind.
As with any complex problem, there is also a contingent that recommends the solution that is simple, straightforward, and wrong – namely that economic success as measured by GDP (or GNP) is the lowest common denominator, making countries with high per-capita economic output and growth (Norway, the US, Switzerland, the UAE, and the like) de facto successes and the ones without these material resources failures. This has been known for a long time – even 50 years ago, Robert Kennedy gave a famous speech detailing how GDP measures everything except that which makes life worthwhile. This is an error of the second type.
But just as boutique consultancy RampRate tackled the near-intractable problem of balancing the needs of finance, technology, risk, time, cost, and cultural fit – and translated it into a 99%+ success rate in building strong, long-lasting technology partnerships – a small country, Bhutan, dared to take on both the abdication of responsibility and the overly simplistic stats, and tried to approach the problem in a better way. Its Gross National Happiness Index, first envisioned in 1972 by King Jigme Singye Wangchuck, balances nine domains (similar to RampRate’s eight dimensions of IT value) and rolls up 33 individual indicators into a synthetic metric that weighs the complex tradeoffs of governance, community, social impact, and economic progress.
As fellow designers of complex metrics models, we could quibble with some aspects of the GNH index. The data collection is based on a relatively small survey sample, with all the attendant issues of ensuring proper representation, especially of disadvantaged minorities. The measurements are deliberately weighted equally, although we’ve found that adjusting priorities based on each measure’s contribution to overall performance is a key success factor – fitting the metric to the environment it operates in rather than recommending one-size-fits-all views. Incentives to fake the data may be under-examined and, again, limit the approach’s extensibility (one can only imagine the results of a happiness survey run in North Korea). And after 46 years, some Bhutanese are growing impatient with the progress the approach has produced, as international, rather than internal, surveys of happiness put the country far down the list (97th as of the 2018 World Happiness Report).
But the core idea is correct – that one should not be daunted by the complexity of the problem or distracted by simplistic and misleading solutions. There is a right way to measure well-being beyond GDP, and we’re excited to be part of discovering that way with our knowledge of solving a simpler, but strikingly similar problem in the buying of technology.