What Are You Measuring?

In the urgency to do human-centred research work, it’s easy to jump right into action, throwing money and time at setup, recruitment, staging and then the actual interviews, workshops, onsites, surveys or analytics that form the backbone of qualitative and quantitative research practices.

It’s an understandable impulse.

Measuring something, measuring anything, makes it feel like we’re doing genuine product, service and experience management. It makes everything feel official.

This impulse to just get going is even more powerful in an age of agents and automated ‘artificial intelligence’ solutions that can seem to produce research work quickly. ‘Just fill out a form, click a button and we’ll get started,’ they exhort you. Then, days, or even hours later, you’ll have ink-still-wet research outputs in your hand.

So progressive. So exciting!

But as the trope goes, with great power (i.e. capability) comes great responsibility.

Take a moment to think.

That very admirable impulse to just get started has to be tempered with an important but often forgotten question: ‘What are you measuring?’

It sounds so simple on the surface. But asking this question early in a piece of research unfolds a series of ever-deeper questions that get at the heart of the work you’re about to do.

Measurement is fundamental. What we don’t measure, we ignore. The right measurement can help you make better product, service, experience and organisational decisions. The wrong measurements can mislead, at best causing misinvestment and badly made products, services or experiences. At worst, the wrong measurements create disasters: experiences that do the wrong thing, or do it wrongly.

Interestingly, the first question, ‘What are you measuring?’, leads to more questions. How are you measuring? When? With whom? And, importantly, why?

For example: are you going to issue the previously mentioned surveys, or talk to people in person? There are benefits and costs to each approach. In these sessions, are you going to gather qualitative (descriptive) or quantitative (numerical) information? What bias is inherent to your chosen method of measurement, and can it be controlled? Will your results be repeatable and consistent across different sessions of measurement?

Each question leads to a few more, but with each consideration, the plan gets better.

Crucially, how valid is the measurement we’re planning to make? Validity is the technical term for how well a measurement matches the real world. Put another way: if you’re only measuring a small sample of the people, things or events in your experience, is the measurement representative of the larger population? And have you gathered enough of the context to make sense of your measurement?
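To make the sampling question concrete, here’s a minimal sketch, with entirely hypothetical numbers, of how much uncertainty a sample carries: a normal-approximation 95% confidence interval around a measured proportion, using only Python’s standard library.

```python
# A hedged sketch: how wide is the uncertainty when you measure only a sample?
# Hypothetical numbers: 120 of 200 surveyed users completed the task.
from math import sqrt

n = 200          # sample size
successes = 120  # users who completed the task
p = successes / n

# Normal-approximation 95% confidence interval for the true proportion
margin = 1.96 * sqrt(p * (1 - p) / n)
print(f"estimate: {p:.2f} ± {margin:.2f}")
```

The wider the interval, the less your sample tells you about the larger population; quadrupling the sample size only halves the margin.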

Are the measurements being taken from just one holistic population, or from two populations undergoing different experiences, settings, or contexts? With the second option, we’re beginning to enter the territory of a fully experimental measurement.

Suppose you split a population of users into two groups. The first uses their existing method of completing a task (call them the control group) and the second uses a new tool you’re offering to do the task better (call them an experimental group). With measures taken from each group, you’ll be in a better position to demonstrate whether the new experience works better than existing methods of solving the task.

This example takes us into the land of statistical inference: mathematical reasoning about what measurements can tell us. Statistics opens up a new set of questions, including: were the variations we saw in the measurements likely due to chance? And if we’re confident our measurements weren’t due to chance, how big a difference, or ‘effect size’, did we really observe?
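As a rough illustration, assuming made-up task-completion times for the control and experimental groups, the two questions above (chance and effect size) can be sketched with a two-sample t statistic and Cohen’s d, using only Python’s standard library.

```python
# A hedged sketch with hypothetical task-completion times (in seconds).
# Lower is better: the experimental group uses the new tool.
from statistics import mean, stdev

control = [48, 52, 50, 47, 53, 49, 51, 50]       # existing method
experimental = [42, 44, 43, 45, 41, 44, 42, 43]  # new tool

def pooled_sd(a, b):
    """Pooled sample standard deviation of two groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return pooled_var ** 0.5

def cohens_d(a, b):
    """Effect size: standardised difference between the group means."""
    return (mean(a) - mean(b)) / pooled_sd(a, b)

def t_statistic(a, b):
    """Student's two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (pooled_sd(a, b) * (1 / na + 1 / nb) ** 0.5)

d = cohens_d(control, experimental)
t = t_statistic(control, experimental)
print(f"effect size (Cohen's d): {d:.2f}")
print(f"t statistic: {t:.2f}")
```

A large t statistic suggests the difference is unlikely to be chance alone (a p-value would pin that down), while Cohen’s d tells you whether the difference is big enough to matter in practice.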

Of course, in the background of all of these questions about measurement is a much deeper question, ‘What aren’t we measuring and why?’

It sounds like a lot of work, these questions floating around in some sort of research soup. But it doesn’t have to be. Expert researchers have good organisational skills: they know to ask the right questions at the right time, and they have good answers based on the domain, the type of problem and other considerations.

It’s worth noting that none of these ideas offers a perfect recipe, task-list or plan for measurement. Instead, they’re a reminder that even a simple question like ‘What are you measuring?’ can prompt a series of useful questions that will make for better research and, therefore, better design choices.

As first said by poet A. E. Housman and then tweaked by novelist Andrew Lang, ‘Some individuals use statistics as a drunk man uses lamp-posts—for support rather than for illumination.’

We could say the same for measurement.

How do you avoid being the drunk with the lamp-post?

It’s easy.

Start asking questions about what you’re measuring and then find a team that will provide excellent answers to your questions.

Next: Finding The Secret Stories Of How Things Work