#design #research - 5 min read

Supercharge your user research with affinity sorting

There are three opportunities I see in many teams and organisations:

  • Cross-functional team members are not as close to users as they could be
  • The outcome of user research is not as actionable as it could be
  • User research does not have the profile and impact that it could in the wider org

There’s one technique I’ve seen that can address all three. It was introduced to me as “affinity sorting”. It’s simple to apply, with a minimal barrier to entry. And if you’re the one who usually leads user research at your organisation, the added bonus is that it can really lighten your load.

Affinity sorting, in four steps:

Step one - wrangle some observers

Convince two to four others to observe a user research session and take notes. Any fewer and you won’t get enough output from the session. Any more and you could end up with too much output, making it hard to converge later on.

Ideally, these people will have a stake in the subject of your research. It’ll be a problem they have an interest in, or a prototype they might have a hand in building, supporting, marketing, or selling.

If scheduling or time constraints mean people can’t observe live, you could always run this from a recording in a separate playback session.

To bring some structure to the observation and note-taking, make sure you prime the observers beforehand. Spell out exactly what it is you want them to do. You’re probably looking for them to capture things like:

  • pain points
  • the candidate's comments and questions
  • obstacles
  • mistakes
  • misconceptions
  • their own thoughts, questions, concerns, or ideas

If there are things that you definitely don’t want your observers to call out, then tell them. If there’s a specific sentence style or format you find works best, provide some examples.

Ideally, lay out your interview guide on a wall or a Miro board, or if it’s a prototype or a product you’re stepping through, have the screen flow up there. Ask the observers to write their notes on stickies and place them against the relevant screens or questions. Use different colours for positive, negative, and neutral notes.
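If you’re working on a digital board, it can help to agree up front what each sticky should capture. Here’s a minimal sketch of one way to structure a note, purely as an illustration; the field names and example values are hypothetical rather than from any particular tool:

```python
from dataclasses import dataclass

# One observer sticky. Every field here is illustrative: the point is that
# each note is anchored to a screen or question and carries a sentiment,
# which maps onto the sticky colour on the board.
@dataclass
class Sticky:
    observer: str   # who wrote it
    screen: str     # the screen or interview question it relates to
    sentiment: str  # "positive" | "negative" | "neutral"
    text: str       # the observation itself

note = Sticky(
    observer="Priya",
    screen="Export Data",
    sentiment="negative",
    text="Scrolled past the export button twice before finding it",
)
```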

Step two - run the user research session

There’s no need to do anything out of the ordinary here - just run the session as you usually would. I’d always encourage you to record it if you can, and this will obviously be essential if you’re looking to run a separate playback session as the basis for your affinity sorting.

The only real additional consideration here is to ensure that the presence of your observers doesn’t affect your candidates. Don’t have them in the same room. If you’re doing the research in person, try to broadcast to a big screen in another room where the observers can watch. If you’re remote, have the others dial in and sit in the background. Explain to your candidate that there may be other observers, but that they’re just there to make notes.

Step three - affinity sort

Once the session is over and you've thanked your candidate and wished them farewell, it's time to put your observers to work. And the sooner you do this after the research session the better, as it'll be fresh in the observers' minds.

There’s no need to talk through every single sticky - just ask the group to silently cluster stickies that appear to be related. Five minutes should do it.

Once they’ve run out of steam, try to come up with a label for each cluster. This could be a single word, a theme, or a one-sentence description. You could do this solo, or you could put it to the group again.

One little nod to AI (sorry, not sorry): I’ve tried using AI tools to affinity sort and draw insights from user research notes and open-ended survey responses, but they always fall short. They drop clangers. I don’t trust them.

Nothing matches a bunch of humans working through this. But even if it did, I think that’d be a sad day as you’d lose so many of the benefits of this process. Developing empathy. Bonding as a team. Ruminating over the details that you might otherwise miss.

Step four - debrief and write up

By this point, you’ll have all of your findings neatly clustered and labelled, with the volume and colours of the stickies a clear indication of relative severity. Transfer these onto a simple table and talk through it as you go, giving the group a chance to add colour as you unpick each one.

This is a great way of turning qual into quant. Numbers tend to speak to those in the session with a more numerical brain, and they’re incredibly useful for boiling learnings down into punchy, actionable insights when playing back to more senior stakeholders and decision-makers.

Here’s what a typical write-up might look like.

| Insight | Prevalence | Recommendations |
| --- | --- | --- |
| Difficulty locating the "Export Data" feature | 90% | Relocate the "Export Data" button to a more prominent position and add a quick tutorial |
| Confusion when setting up user permissions | 70% | Add step-by-step guidance and examples within the user permissions setup page |
| Inability to easily find the "Create Report" function | 40% | Introduce a more intuitive layout for the report creation tool, with labelled sections |
| Issues understanding the data visualisation options | 40% | Provide in-line descriptions and a preview for each data visualisation type |
| Frustration with the search functionality in the document library | 20% | Enhance search algorithms to return more relevant results and add filters |
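
In case it’s not obvious where the prevalence figures come from: each one is simply the share of candidates whose session produced at least one sticky in that cluster. Here’s a minimal sketch of that arithmetic, assuming the stickies have been exported as simple records; the field names and data below are hypothetical:

```python
from collections import defaultdict

# Hypothetical export: one record per sticky, tagged with the insight
# cluster it was sorted into and the session (candidate) it came from.
notes = [
    {"insight": "Difficulty locating the 'Export Data' feature", "session": "P1"},
    {"insight": "Difficulty locating the 'Export Data' feature", "session": "P2"},
    {"insight": "Confusion when setting up user permissions", "session": "P1"},
    # ...one entry per sticky, across every session so far
]

TOTAL_SESSIONS = 10  # however many candidates you've run

# Collect the distinct sessions in which each insight appeared, so five
# stickies from one candidate still only count as one session.
sessions_hit = defaultdict(set)
for note in notes:
    sessions_hit[note["insight"]].add(note["session"])

for insight, sessions in sorted(sessions_hit.items(), key=lambda kv: -len(kv[1])):
    prevalence = 100 * len(sessions) / TOTAL_SESSIONS
    print(f"{prevalence:3.0f}%  {insight}")
```

Deduplicating by session is the detail that matters: a candidate who hits the same wall five times still counts once towards prevalence.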

In summary

Not only is this a great way to bring others in your organisation closer to the users of your products, it also puts them to work for you and minimises the risk of bias by bringing a mix of perspectives to the table.

Being asked to take notes and perform the actual affinity sort makes your observers feel like much more than mere observers. Noting findings. Synthesising outputs. It gives them that little bit more skin in the game. And they’re primed for getting stuck into ideation, having witnessed the problems firsthand.

Plus, it sets you up perfectly for the next round of research. You’ve got a weighted table of findings that you can use as a yardstick, adding to it with each future round to give you a neat and tidy framework for ranking your findings and recommendations.