As with all businesses, we want to be the best we can be. For us, that means building excellent, performance-driven apps for our customers, and usability testing is a vital element of our delivery.

When you’re working on every detail of creating a product it’s natural to become incredibly familiar with it. This can lead to assumptions based on expectations, making user testing an essential part of any build: it gives an impartial view of the product and shows where improvements can be made.

As designers and app developers, we know that our expectations are often based on a level of digital literacy and awareness that many end users won’t have, so it’s important that we build a user-friendly product fit for purpose to ensure its success.

Many people have concerns over the time and resources needed to carry out user testing, but this needn’t be the case. While lab-based testing is always the most effective, there are some simple ways to carry out a session which are more budget and time friendly, giving valuable insights into a product.

1. List all your assumptions: focus on details, but observe the whole flow.

Once you have defined the details you are trying to validate from your user testing session (e.g. would they press that button? Could they find that menu item? Is there anything hidden?), write the tasks for the participants in a broad way so that they won’t be guided to the correct answer. You need them to give you an accurate reflection of usability for testing to be valuable.

In practice:

One of our clients needed a registration screen which would allow them to collect information about their customers. The design we created required the user to scroll in order to complete their registration. We wanted to be sure users would know to scroll.

Even though we were only testing this small area, we gave users an objective to complete the whole registration so that we could observe their reactions across the whole flow.

While designing the product, we assumed the illustration progressing vertically to the bottom of the screen would drive users to scroll down. That wasn’t the case: a good example of our familiarity with the build leading us to make an assumption about users that the test group didn’t bear out.

Not only was it clear that scrolling wasn’t intuitive; observing the testers randomly tapping, swiping and even blanking the screen gave us an in-depth view of user behaviour, and enabled us to make vital changes to the product to improve its usability.

2. Prepare two prototypes

If you are lucky enough to work with people who enjoy giving a lot of feedback around designs, you’ll know that sometimes it’s difficult to please everyone and also to define whether an idea is going to fit in well or not.

When testing, it’s worth designing two different versions of a prototype and observing whether a different detail in the design creates an improvement.

In practice:

One of Yorkshire’s finest drink manufacturers came to us for an app that would enhance their customer experience. They have solid brand guidelines and we wanted the app to reflect these, communicating the same values and feelings. To ensure this, we trialled two designs to test user behaviour and reactions to the look and feel.

Version A had a bottom tab navigation with three items; Version B had four tab items, with different icons and a different visual style. The overall feedback was that the layout looked more balanced in version B, and the information was easier to digest and flowed better. Small details can often make a big difference.

Key tip: If you can develop two prototypes for testing, try to avoid asking the same users to test both. Their behaviour on the second prototype will be influenced by the familiarity they developed using version A, skewing your results.
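The key tip above describes a between-subjects design: each participant sees only one prototype. One way to keep the split fair is to assign participants to the two groups at random rather than by hand. A minimal sketch (the participant names and group labels here are purely illustrative):

```python
import random

def assign_groups(participants, seed=42):
    """Randomly split participants into two (near-)equal groups so each
    person tests only one prototype (a between-subjects design).
    A fixed seed keeps the assignment reproducible across sessions."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return {"prototype_A": pool[:half], "prototype_B": pool[half:]}

groups = assign_groups(["Ana", "Ben", "Caz", "Dev", "Eli", "Fay"])
print(groups)
```

Randomising the split also guards against accidentally putting, say, all your most tech-savvy testers on one version.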

3. Give context

For your session to be valuable, it is vital to give the users plenty of context. In the ideal scenario, you would test the product at the actual moment users would reach for it if it were functioning software.

Sometimes, however, this is not possible and therefore you have to make sure that people who are testing the prototypes know what they are doing and why, to give you an accurate test result.

In practice:

Before running a user testing session with a selected target audience, we normally do a trial in the office. We’ve learnt that this trial should be treated just as a normal test session would be, with a similar level of detail.

On one occasion, to deliver a quick result, we called on some of our colleagues to do a quick trial test. However, because of the lack of context, it was really difficult for them to understand the reason behind the task and this ended up impacting their usage. We didn’t give them enough information about how they would come to use the app, why they were doing it, and what the goal was, which frustrated them and skewed the results.

We then defined a scenario for the participants and printed it out (one for each prototype), giving them information about the experience and their persona, which meant they had a reason and context for why they were using the app, making their tests more accurate and valuable. Invest in explaining things well to all test groups if you want accurate feedback.

4. Plan, plan, plan

Normally, there isn’t a huge amount of time allocated to these sessions, so it’s really important to have everything planned in advance to maximise your time.

In practice:

  • First, design the prototype and create an interactive version of it (we normally use InVision, Figma or Proto.io). Keep in mind that InVision flashes the hotspot when the user taps the wrong place, which can influence the results.
  • Next, define the scenarios and try to be as empathetic as possible in giving information about the persona. We want the users to feel like they have something in common with the persona of the end-user they are imitating.
  • Define the tasks, then have someone (ideally a colleague, who has never seen the prototype before) go through them and double check whether you can find answers to the assumptions you listed out at the very beginning. Add or remove information from the tasks accordingly.
  • Create two Google Forms (one for each prototype) with a mixture of closed and open questions. We normally divide the form into sections: usability, interface and overall feedback. In the overall feedback section, leave space for people to openly express their opinion by using open questions only.
  • Have someone go through the final version and note roughly how long it takes. It is always easier to keep people engaged when they have a rough idea of how long they will be busy.


5. Aim to reach saturation

One of the evergreen questions regarding user testing is “how many people do you need?”, and it’s a very valid one.

The Nielsen Norman Group indicates five users as the optimal number of people to interview in a user testing group: with five testers, you are likely to have found nearly all of the usability problems you’d find even with a much larger group.

This is also called reaching saturation.

In our experience, saturation has sometimes come after more than five users, and at other times after only three. Our suggestion: aim to reach saturation. If you need a starting point, ask for at least five people per prototype, but carry on until your data repeats and you can observe a pattern in user behaviour.
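The five-user guideline rests on a simple model from the Nielsen Norman Group’s research: if each tester independently uncovers a fixed proportion of the usability problems (roughly 31% on average in their studies), the share found by n testers is 1 − (1 − 0.31)^n. A rough sketch of that model, with the 0.31 rate as an assumption you could adjust for your own product:

```python
def problems_found(n_testers, discovery_rate=0.31):
    """Expected share of usability problems uncovered by n testers,
    assuming each tester independently finds `discovery_rate` of them
    (0.31 is the average rate reported in Nielsen Norman Group research)."""
    return 1 - (1 - discovery_rate) ** n_testers

for n in (1, 3, 5, 10):
    print(f"{n:2d} testers -> {problems_found(n):.0%} of problems found")
```

With a 31% discovery rate, five testers surface roughly 85% of the problems, and the curve flattens quickly after that, which is why adding many more testers to a single round yields diminishing returns.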

Ensuring effective user testing is part of your build process will enable you to address common issues and develop a fantastic, successful product which represents your business and makes your customers happy.