Create a realistic UXR case study by avoiding this approach

Sometimes less is more.

Lawton Pybus
Bootcamp

--

One dead giveaway makes UX research case studies look fake.

I call it the “kitchen sink” approach. Rather than focusing on one particularly appropriate method for a case study, candidates throw everything but the kitchen sink at it. The case study showcases every tool they’ve learned about, but the end result is awkward, and it’s unclear whether the candidate has deep knowledge of any of it.

A realistic case study, on the other hand, has three key ingredients:

  • Starting with the broader context around the project.
  • Sharing how decisions were made about the approach.
  • Talking about lessons learned and future work.

Scope, resources, and timeline factor into every decision.

Given a brief, a researcher could take a project in a hundred different directions.

Early discussions with stakeholders define the scope, or the focus, of the research project. In a briefing or kick-off call, researchers and stakeholders discuss the motivation for the project and the problems that need attention.

A good researcher then translates that information into a set of prioritized research questions. That means addressing the most important questions, but leaving other interesting questions off the table for future studies. Agreeing on the scope with the stakeholder frames what this project will—and won’t—accomplish.

In other words, what you’ll tackle is governed, in part, by what’s in the realm of the possible.

Limited resources are a challenge for every team. We could do a lot more with an infinite number of researchers on the team and an unlimited budget for participant recruiting. You will have access to some tools but not others — and there may be policies restricting the full use of what you do have. A good research plan should therefore reflect your individual bandwidth and the organizational support you have.

And usually, stakeholders won’t be able to wait forever for their answer. A key question for every kick-off: when do you need these results? Product teams have their own roadmaps and deadlines. Your project will need to be finished well before they ship, in time for decision-making conversations on the product or feature.

Effective, realistic projects are compromises between these constraints and the ideal methodology.

Context informs the methodology, participants, and stimuli you choose.

Hidden in your stakeholders’ problems are the keys to the methodology they’ll need.

You’ll usually have to decide between a qualitative or quantitative approach, although most studies will benefit from a mix of the two. Qualitative studies are the best fit when stakeholders need to understand a process deeply, how and why users approach it, and the difficulties they have. Later on, it’s important to size up these problems and insights in a quantitative study.

Probing deeper on your stakeholders’ goals will give you an idea of how many participants you’ll need, although you’ll need to further define the audience.

Few products have such broad appeal that their user base reflects the general population. Even when you’re working on one of those, refer back to the goals of the study and consider which groups would best address them.

Now you can defend your choice of who and how many people you need to reach.

Having defined the audience, think through what you’re going to show participants. Is there a live product, or is it still in prototype form? Perhaps things are in an early-enough stage that it doesn’t make sense to have a prototype. Instead, you’ll need to understand how users currently accomplish this goal, and what else they need.

How you answer these questions will guide you toward a proper method, whether that’s a traditional usability test, a think-aloud study, an interview, or a survey.

Talk about what could have been done differently, and what’s next.

The perfect study hasn’t been done yet.

What might have been the ideal method in a vacuum is likely impractical given the circumstances. Once you wrap things up, you may wonder whether another study design would have better served the project.

It’s humble, realistic, and expected to talk through these things with fellow researchers. It’s also how we get better at what we do. Your portfolio case studies may have been personal learning projects. That’s okay: learning by doing is often the most effective way.

The important thing is being able to describe what you’d do differently.

Perhaps you should have soft-launched the survey to make sure you were capturing the right answer options. Or perhaps you would have changed the entire approach if you’d had access to a specific tool. By candidly acknowledging the limitations of your work, you show that you’ve learned important lessons.

Finally, talk about what you’d like to tackle next.

With every set of answered questions, new ones emerge. If you were to continue working with these stakeholders, what would that next study look like? Share enough detail to show you’ve thought this through.

Summary

Avoid the “kitchen sink” approach. An effective case study doesn’t just present a brief and a dozen learning activities with no clear connection. Instead:

  • Explain the context. What was the scope of this research?
  • Defend your approach. Which method did you choose and why? What other decisions did you make?
  • Be honest about limitations and lessons learned. Some things you’ll see only in hindsight.

Rather than merely showing that you understand what each method entails, demonstrate that you know when and why to use it. That’s what makes a case study persuasive.

Follow me on Medium and LinkedIn or subscribe to my newsletter for more.

--

UX research consultant, Principal at Drill Bit Labs, human factors PhD. I share monthly UXR insights at https://www.quarterinchhole.com