8 maxims to help you think straight about UX research

Wit and wisdom for UX Researchers to improve and communicate their work

Lawton Pybus
7 min read · Dec 31, 2023
[Image: vaporwave-style collage of a statue of the goddess Athena, a stone formation against the sea, and a compass overlaid on the sky. Created with Midjourney.]

There are many ways to learn a lesson — by formal instruction, from a book, by watching someone’s example, or by making a mistake, to name a few — but one underappreciated tool is the proverb. A well-worded turn of phrase can capture the essence of a challenging topic and stick in your mind for decades.

UX research is a complex multidisciplinary domain. Holding all the necessary considerations in your mind can feel daunting. A handful of carefully chosen maxims and aphorisms can help you clarify your task to yourself, your team, and your stakeholders.

Let’s look at eight such adages that I’ve found useful in my own practice.

“The first principle is that you must not fool yourself — and you are the easiest person to fool.”

Most afternoons, I like to have a break with a bit of dark chocolate. It’s delicious, and touts a few health benefits — but is weight loss one of them? Many of us would certainly like to think so.

A study suggesting just that became a viral sensation back in 2015, getting coverage from dozens of news outlets worldwide. Unfortunately, the study was an intentional hoax designed to show how credulous popular science reporting often is. The author had used a false name and a fake institution, and paid to have the paper published in a predatory journal. Despite these warning signs, few journalists even bothered to contact the researcher with questions.

Since Kahneman and Tversky’s groundbreaking work on cognitive biases, there has been greater understanding that we humans are not creatures of pure reason. Instead, we are emotional, self-interested, and apply a biased and motivated lens to much of the information we receive and process.

Though researchers are often especially aware of such biases, we’re not immune to them as we carry out our work. Proceed with a healthy dose of mistrust in yourself.

“If you torture the data long enough, it will confess.”

The story of that bogus chocolate study gets worse. It was also designed to produce a fraudulent outcome.

The author set out to achieve this in a few ways. He measured 18 outcome variables. This made a false positive — a result caused by random chance — much more likely: the standard significance threshold for academic publishing is just 1 in 20 (i.e., p < .05), so across 18 independent tests the odds of at least one chance "finding" climb to roughly 60%. He then handed the data to a statistician with instructions to massage it as needed.
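It's worth seeing where that multiple-comparisons risk comes from. A minimal sketch, using only the 18-test, p < .05 setup described above (everything else is illustrative):

```python
import random

random.seed(1)

ALPHA = 0.05     # conventional significance threshold (1 in 20)
N_TESTS = 18     # outcome variables measured in the hoax study
N_SIMS = 10_000  # simulated "studies" in which no real effect exists

# Analytically: P(at least one false positive) = 1 - (1 - alpha)^n
analytic = 1 - (1 - ALPHA) ** N_TESTS
print(f"Analytic chance of >=1 false positive: {analytic:.0%}")

# Simulate: each test "fires" by chance alone with probability alpha.
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_SIMS)
)
print(f"Simulated chance: {hits / N_SIMS:.0%}")
```

Run enough null tests and something will come up "significant" — which is exactly what the hoaxers counted on.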

There are many ways we can mistreat data. We can run dozens of analyses and cross our fingers until one sticks. We can work backwards from the results to research questions that seem prescient. We can ignore the places where responses might be missing in a meaningful way and use sophisticated ways of filling in the gaps, perhaps under the banner of artificial intelligence. Intentional or not, these compromise both the findings and our credibility.

Ultimately, data isn’t an infallible guide to the truth. It’s something that we must treat with care and respect.

“Correlation does not imply causation.”

In 1999, researchers from the University of Pennsylvania Medical Center published an article in the prestigious peer-reviewed journal Nature, suggesting that young children who sleep with a light on at night are more likely to develop vision problems later in life.

Parents everywhere whose toddlers required night lights were put on alert. But it was later found that the original study had not accounted for the parents' vision: nearsighted parents were both more likely to have nearsighted children and more likely to use nighttime lighting, which ultimately explained the association.

Our minds are prone to find patterns anywhere, even where they don’t exist. So it can be counterintuitive to learn that finding two related variables doesn’t mean that one causes the other. It’s also possible that something else causes them both, or it could be a simple coincidence.

We often need to dig a bit deeper to understand why we see the relationship. It’s important to leave our minds open to possibilities before jumping to any conclusions.
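A confounder like the one in the night-light study is easy to demonstrate in simulation. The probabilities below are hypothetical, chosen only to illustrate the mechanism, not taken from the 1999 study:

```python
import random

random.seed(42)

# Hypothetical rates for illustration: a parent's myopia drives BOTH
# variables, while the night light has no causal effect at all.
N = 10_000
children = []
for _ in range(N):
    parent_myopic = random.random() < 0.3
    night_light = random.random() < (0.6 if parent_myopic else 0.2)
    child_myopic = random.random() < (0.5 if parent_myopic else 0.1)
    children.append((night_light, child_myopic))

def rate(pairs, with_light):
    """Myopia rate among children with/without a night light."""
    group = [myopic for light, myopic in pairs if light == with_light]
    return sum(group) / len(group)

print(f"Myopia rate with night light:    {rate(children, True):.0%}")
print(f"Myopia rate without night light: {rate(children, False):.0%}")
```

The two groups show a strong difference even though, by construction, night lights cause nothing. Controlling for the parent's vision would make the apparent effect vanish.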

“Absence of evidence is not evidence of absence.”

Many of us spent the early days of the pandemic glued to news feeds, updating our mental models of this strange new disease and our risk mitigation strategy in real-time as new information came in. Since the virus had no track record and there had been little time for formal scientific inquiry, there was little that we could say conclusively that we “knew.”

This led to cautiously hedged headlines. In the spring of 2020, we lacked reliable evidence that many propositions were true, even ones we now believe to be. The evidence we had was at best plausible, based on anecdote or conjecture, and it was sensible for scientifically minded readers to withhold some judgment until higher-quality data arrived.

But simply lacking evidence isn’t the same as evidence that the proposition is untrue. For researchers, it’s often an invitation to investigate further.

“The plural of anecdote is not data.”

“But my sister didn’t have any problems using it!” “I sent it to all my friends and they thought it was great.”

As researchers, we do our best to conduct high-quality research and communicate the findings as effectively as possible, but ultimately what our stakeholders do with those findings is up to them. When our findings contradict stakeholders' expectations, they will sometimes lean on personal experiences or those of friends and family members.

In situations like these, it's our job to explain why we give more weight to data from a well-designed study than to stories about people who may or may not represent our user base, and who may or may not have used the product in a realistic context. One such anecdote, or even several together, shouldn't overshadow or replace proper user research.

“Be approximately right rather than exactly wrong.”

[Image: a storm path projection from the NWS and NOAA, showing a hurricane in the Caribbean moving northeast along the US seaboard. Caption: Hurricane Joaquin's cone of uncertainty on October 1, 2015.]

If hurricanes are a part of life in your area, you may be familiar with visualizations projecting the storm’s path.

Starting from the storm’s present location, these figures plot a line over the next several days, bounded on both sides by a “cone of uncertainty” that expands as time goes on. Although such visualizations are imperfect and can be misinterpreted, they help the intended audience to think as scientists are trained to do — that is, probabilistically.

We work under conditions of uncertainty, often trying to extract a useful insight about an entire user base from just a small sample of it. Beyond a certain point, further reducing that uncertainty is impractical for cost and timing reasons, and even then the result can be inaccurate.

Instead, we seek to communicate our findings in a probabilistic way, sharing with our audiences the bounds of our certainty and where the likely outcomes might lie.
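One concrete way to report bounds rather than a single point estimate is a confidence interval. The sketch below uses the Wilson score interval, a common choice for small-sample proportions (the interval method and the 8-of-10 usability result are my illustrative assumptions, not from the article):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical usability test: 8 of 10 participants completed the task.
lo, hi = wilson_interval(8, 10)
print(f"Observed success rate: 80%, 95% CI: {lo:.0%} to {hi:.0%}")
```

Reporting "80% success, though with 10 participants the plausible range runs from roughly 49% to 94%" communicates both the finding and the cone of uncertainty around it.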

“All models are wrong, but some are useful.”

The world isn’t flat — but it isn’t round either. In fact, the Earth is an oblate spheroid: squashed at the poles and swollen at the equator. It’s not a perfect one at that, having lots of pits and bumps called valleys and mountain ranges.

When Copernicus placed the Sun at the center of the solar system, one reason that people resisted his model was that the old one was working well enough. With the Earth at the center and celestial bodies moving around it in circles within circles (or epicycles), astronomers of the age could fairly accurately project the motion of Mars and Jupiter.

It was useful. Nevertheless, it was wrong.

The process of science depends on the gradual improvement of inaccurate models. When we use models in our research (like those for technology adoption), or create them for our stakeholders’ use (like personas), our goal is usefulness, not perfect accuracy.

Did it help us design a valid study, better understand our users, or produce a better-informed decision? Then it did its job, even if it turns out to be wrong.

“Research is formalized curiosity. It is poking and prying with a purpose.”

Both halves of this formulation explain something profound about the work that we do.

Researchers are curious people, but our profession takes that natural proclivity and places a formal discipline and practice around it. We do so not for curiosity’s sake, but for the benefit of both our users and our organizations.

Sometimes research is uncomfortable, both for the researcher and for the participants. In fact, it’s often more uncomfortable for participants than we realize. Part of our job is to ensure that the poking and prying that we do truly benefits the larger user base, and that it’s properly balanced with the incentives and benefits we offer those who participate in our studies.

In summary

Across the eight, a few themes emerge:

  • We love data — but data can mislead, especially if abused. UX Researchers should take care to avoid both bias in study design and shoddy analysis practices.
  • Before jumping to conclusions, explore alternative hypotheses. Think of different ways to explain an observed relationship (or lack thereof) in your user research data.
  • Even the best findings are probabilistic and provisional. We can’t give stakeholders absolute certainty; instead, we share the context and bounds around our uncertainty.

But this synthesis doesn’t do justice to the punchy memorability of the proverbial phrases.

Committing a few timeless quotes to memory can help UX Researchers adopt the right mindset as they approach study design, analysis, and communicating findings. And they can help to communicate that approach to stakeholders and partners who lack our research background, or to juniors still developing the fundamentals.

Which have you used in your practice?

A version of this article first appeared in The ¼″ Hole, a newsletter about UX Research. Thanks to Jeff Scott for reviewing a draft of this article.

In order of appearance, these quotes have been attributed to: Richard Feynman, Ronald Coase, various authors including Karl Pearson, William Wright (and popularized by Carl Sagan), Kenneth Kernaghan and P. K. Kuruvilla, John Tukey, George Box, and Zora Neale Hurston.


Lawton Pybus

UX research consultant, Principal at Drill Bit Labs, human factors PhD. I share monthly UXR insights at https://www.quarterinchhole.com