I think every UX researcher’s heard the quote.

“If I had asked my customers what they wanted, they would have said a faster horse.”

Or maybe it’s this one:

“It’s not the consumer’s job to figure out what they want.”

The first one’s from Henry Ford; the second, Steve Jobs. You usually hear them right after you’ve been passionately making the case for actually talking to some users before diving headlong into a development project.

People who recite these popular quotations think they’re being clever. Actually, they’re showing that they don’t know much about what research is or what it can offer.

They might be surprised to hear that most researchers wouldn’t disagree with Ford or Jobs.

That’s because, when you are doing research, you aren’t trying to find out what users or customers want. You’re trying to learn about the circumstances they’re in and the problems they’re facing. You’re trying to learn what’s important to them, and what’s not. You want to know what they care about, and what they think could be better about their current situation. It’s all about understanding context.

Steve Jobs wasn’t trying to create something his customers wanted. What he wanted to understand was what his customers wanted to become. He wanted to know what they valued. How did they want to feel while they were using a computer, or a phone? How did they want others to perceive them?

And Ford didn’t need people to tell him they wanted a car. But it was probably pretty clear to him that people wanted to get from point A to point B in an expeditious fashion.

In many cases, someone you’re interviewing can’t even articulate these things themselves. Being a good researcher means interpreting and reading between the lines. It requires the ability to tease out what’s left unsaid—to trace symptoms to their root causes.

This is why research needs to be a continuous process. You can’t interview five, a dozen, even a hundred people, slap together some personas, and call it a day. Everything is a hypothesis: a theory to be tested. That’s just part of the journey.

Some things are best tested in live environments. Users aren’t very good at speculating about their own behaviour, for any number of reasons. How they assume they will react to situations, and how they tell us they will react, can be very different from what they will actually do.

One of my favourite anecdotes—I wish I could remember the source—is about a designer who had frequent requests from users to add an advanced search to his website. It seemed like a slam dunk: actual, real-life users were telling him loud and clear that it was a feature they wanted.

But instead of hunkering down and devoting hours to building the feature, he ran an experiment. He added a link labelled “Advanced Search” in a prominent position next to the site’s search bar. It led to a page that told users the feature was under consideration and invited them to describe what they might need from such a tool.

Not a single user accessed the page. The designer’s clever experiment helped avoid a costly distraction.
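
As an aside, a fake-door test like that one takes very little machinery to run. Here is a minimal sketch in TypeScript, assuming a hypothetical advanced-search-link element and a hypothetical /api/fake-door logging endpoint; neither detail comes from the original anecdote.

```ts
// Minimal fake-door sketch (the element id and endpoint are hypothetical).
// The "Advanced Search" link points at an "under consideration" page; we log
// each click so we can see whether anyone actually reaches for the feature.
const link = document.getElementById("advanced-search-link");

link?.addEventListener("click", () => {
  // sendBeacon queues the request even though the browser is about to
  // navigate away, which a plain fetch might not survive.
  navigator.sendBeacon(
    "/api/fake-door",
    JSON.stringify({ feature: "advanced-search", clickedAt: Date.now() })
  );
});
```

The point of the design is that nothing real gets built first: the only thing being measured is whether anyone clicks at all.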

A good researcher has a comprehensive toolkit of methods like this one, chosen according to what they need to learn. Depending on the situation, they might go and talk to some users, or launch a survey. They might use a card sort to understand how people are inclined to categorize different kinds of content, or give users a prototype to interact with.

It’s important work. It not only builds understanding of and empathy for the user; it also helps mitigate the risk of spending time and money on something that seems like a good idea but turns out to be a dud in practice.

But no matter what, it always has to start with a question. And that question is almost never “Can you just tell us what we should build?”