Good user research is a foundation of good design, and thus of a good product. It’s important to do it and equally important to do it right, but for some reason it tends to be one of the biggest struggles for designers. It’s time-consuming, meticulous, involves lots of planning, and worst of all, doesn’t involve colours and shapes! Everyone knows user research is important, but really, is it any wonder that many designers would rather get on with things like…designing stuff?
Well, good user research doesn’t have to be all those negative-sounding things. More accurately, it does, but it doesn’t have to be as bad as they sound. What’s more, even a little research really can help improve the user experience of your product. Read on to see how we do user research at the University of Bath.
What do we mean by user research?
OK, let’s be clear about this from the start. What most people think of when they use the term ‘user research’ is user testing – getting a sample of your user base and interviewing, surveying, or otherwise observing them to learn things about how they use your products. The ‘research’ bit of that term can refer to other research-based activities that usually come under the umbrella of ‘UX research’. These activities are more about getting to know your users before you’ve got a product for them to use. Things like creating user personas, empathy maps, competitor audits etc. Framed in terms of the design thinking process, UX research happens at the start, whereas user testing happens later on when you have something you can test.
For the purpose of this post, we’ll be talking about user testing rather than UX research.
User research at a university
In many ways, conducting user research at a university is no different than it would be at any other organisation or company. However, there are certain peculiarities about universities that mean user research works a little differently. For one thing, working at an institution that itself conducts rigorous, world-changing scientific research does tend to put the comparatively simple research we do into perspective – it does sometimes make you feel like a toddler playing with a toy digger while around you the grown-ups are operating real ones.
Among the more positive aspects is the wealth of users that you have ready access to. This one is sure to make some seethe with jealousy – you see, in most companies that make digital products, their users are anonymous, far-removed, and difficult to access. On the Digital team, we work on a variety of products – some internal, some external – but arguably the main one is the University website itself. The main users of the website are prospective students looking to come and study at Bath. These former prospective students are now actual students here on campus in their thousands. If we need to carry out user testing, we can simply walk out on to campus and ask them!
So you must do user testing all the time, right?
You would think that with this incredible well of users to draw from, we’d be conducting testing all the time. In reality, we’ve tended to have periods where we do lots of testing, making use of events like open days, and then periods with very little testing. There are lots of reasons for this, including changes in the Digital team, prioritising other kinds of work, and so on.
The pandemic didn't help either, cutting off our ability to carry out in-person testing and generally up-ending everybody's workflows.
It's only in the last year or so that we’ve been able to shift focus back to research and feature development that benefits from it. Thankfully, we’re now making a concerted effort to get back to doing more user research on a more regular basis.
Our approach to testing
Preparation is key
We start out by thinking carefully about our research questions and really try to understand what it is we’re trying to learn from doing user testing. This is easier said than done, but if you can devote some time to doing this at the start, it will help you to ask the right questions of your participants, as well as inform the type of testing you carry out, and generally save time that you might otherwise waste by conducting research that doesn’t tell you anything useful.
Regardless of the type of questions, we always try to write them in a way that’s open-ended and designed to give participants room to tell us what they really think. If you ask questions which are too direct, you’re more likely to either lead the participant to a particular answer or get a short answer which doesn’t tell you much beyond a simple ‘yes’ or ‘no’ – basically a data point. There are of course times when this is appropriate, but in general we’re looking for a deeper understanding of how our products are used. In simple terms, short direct questions are useful if you want quantitative data, whereas open-ended questions are more useful if you want to understand what your participants really think or feel about your products.
A hybrid method
When it comes to the type of testing we select, it depends on what we’re trying to find out. If we know a particular page or user flow isn’t working well but we don’t know why, we’ll do a usability study. We’ll do an A-B test if we want to know user preferences between a couple of variations. Recently, when introducing new features that could have radically different solutions, we’ve found ourselves doing a hybrid of A-B testing and guerrilla testing. We take the basic methodology of A-B testing – presenting users with a smaller number of design variants and asking questions to find out their preferences – and streamline it so that each session only takes around 10 minutes. We do this in a venue with a lot of foot traffic, somewhere you’re likely to have access to a lot of participants. This is where we borrow the guerrilla testing approach, where we try to get as many people to participate in a session as we can in a given time period. This may diverge a little from the usual way of doing either of these research methods, but we’ve found that by using it, we tend to get the benefits of both methods - the focus of A-B testing, with the insights and larger sample size of guerrilla testing – with few downsides.
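Once those quick sessions are done, the analysis can be as simple as tallying which variant each participant preferred and comparing the shares. As a rough illustration (the function name and the session numbers below are made up for this sketch, not figures from our actual testing), something like this is all it takes:

```python
from collections import Counter

def summarise_preferences(responses):
    """Tally which design variant each participant preferred.

    `responses` is a list of variant labels, one label per short
    session, e.g. ["A", "B", "A", ...]. Returns a dict mapping each
    variant to its share of responses, rounded to two decimal places.
    """
    counts = Counter(responses)
    total = len(responses)
    return {variant: round(count / total, 2) for variant, count in counts.items()}

# Hypothetical example: 18 ten-minute sessions gathered in one afternoon,
# 11 participants preferring variant A and 7 preferring variant B
sessions = ["A"] * 11 + ["B"] * 7
print(summarise_preferences(sessions))  # {'A': 0.61, 'B': 0.39}
```

With small guerrilla-style samples like this, the shares are a prompt for discussion rather than statistical proof – the open-ended follow-up answers usually tell us far more about *why* a variant was preferred.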
Of course, testing features related to the main website lends itself to guerrilla testing – if we use locations on our campus, full of students, chances are they’ve all interacted with our website before. This isn’t always the case with all our products. For example, when working on Typecase – our bespoke CMS platform – we need to use a more organised approach as its users are staff at the University who have busy schedules and can’t just drop what they’re doing at a moment’s notice. To complicate matters further, many of them work remotely for at least some of their time. We’ve had success carrying out tests over video call using platforms like Microsoft Teams or Zoom but they do come with drawbacks that are worth considering.
You need to make sure you’re organised before doing any kind of user testing, but it’s even more important when doing it remotely because fixing anything that goes wrong is that much more difficult when you're not in the room. Don’t underestimate that extra technological barrier between the tester and the participant. Some of the things you should consider:
- make sure you’ve reviewed your script, questions, and tasks, ideally with a colleague
- arrange a convenient time to conduct the session and leave yourself plenty of time before and after, so you can prepare and tidy up any notes you made during the session
- don’t try to cram multiple sessions back-to-back
- check before the session that your participant has what they need to make the session successful:
  - do they have a reliable internet connection?
  - is their device suitable for the kind of test you’re conducting?
  - are they somewhere free from distractions?
All of this probably sounds like obvious stuff, but we’ve been caught out by almost all of these during sessions that we’ve carried out.
During the sessions, give participants time to familiarise themselves with the product or feature that you’re testing and allow them space to express what they’re thinking. It’s the same kind of etiquette that we all had to become used to with remote meetings during the pandemic – not talking over each other, waiting for your turn rather than jumping in – and the same applies here. We’ve found that there does need to be a little more hand-holding than would ideally be necessary if you were conducting the testing in person. After all, you don’t want to lead participants to an answer; you want them to arrive at one themselves. However, the extra barriers introduced by remote testing seem to make a bit more guidance necessary. If you explain this clearly at the start of the session, your participants will likely be very understanding and more than happy to accommodate any difficulties.
We also find recording these sessions very useful (so long as you get prior consent from the participant, of course). It’s easy to get caught up in the moment during sessions and, while notes you made during the session will make sense at the time, there are bound to be some that make you scratch your head when you refer back to them a few days later to write up your findings. Better to refer back to a recording that accurately captured what happened. This goes for in-person testing too, but recording a remote session actually has some added benefits. Many platforms like Microsoft Teams and Zoom can automatically create a transcript of meetings, which makes note-taking much easier. You also get to re-watch the video. This may not sound like a big deal, but it gives you a second viewing of how the participants reacted during the sessions, reminding you of things like their facial expressions and body language that can reveal how they felt about your product or feature in ways an audio recording may not.
Practice makes perfect
To wrap up, we’ve really enjoyed getting back into the swing of conducting regular user research and testing, but we’ve very much been learning (or re-learning) as we go. There’s still room for improvement but hopefully, with time and practice, we’ll get it down to a fine art.