When Askable sat down with the Head of User Experience at Jumbo Interactive, we weren’t expecting such great tips along with awesome feedback! Find out why Greg and his team have made user testing a go-to in their toolkit.

Jumbo Interactive is an established brand of 20 years and has been improving both its online presence and its customer-facing app. Greg Le Sueur (Head of User Experience) coordinates the tests for both projects to ensure Jumbo delivers what its market needs. Let’s get into it!


How often are you running user tests? Why this frequency and what triggers a test?

Our last round of user testing was approximately one month ago, and we generally run tests at least once every three months. Depending on the goal we’re trying to reach, the outcomes determine any follow-up tests.

Our tests are triggered by two main factors. The first is when we’re looking to roll out a new feature or design; the second is more of a maintenance-style trigger. Using our data on visitor flows, we look for significant drop-offs or other indicators of issues. If the data signals an issue, we run tests to resolve it.


What differences do you experience between running a focus group vs. a 1 on 1 test? Which do you find is more valuable?

Personally, I hate focus groups; the crowd-influence factor is too large. The loudest voice tends to sway the other users, so we only run 1 on 1s now. Individual opinions are always more valuable. There are also statistics showing that beyond about five tests, the number of unique findings drops off significantly.


How do you run the 1 on 1 tests?

When we run tests in person, we use an online prototyping tool such as InVision to run through our mockups. We tried remote testing in the past, but found the remote testing tools out there weren’t as good as testing in person.


What is the largest saving (money or time) that you have witnessed from user testing?

One of the biggest examples that comes to mind is when we decided to redevelop our app. We completely redesigned it based on user feedback, which lifted the rating from 2 stars to 5, and our revenue at checkout increased as well.

We ran tests pre- and post-development at a cost of approximately $5K per test, which in turn redirected $250K of investment we had earmarked elsewhere. It was a significant pivot in funding allocation, and we’re very happy we invested in the testing!


What result have you found most surprising? How did this differ to the initial assumption?

As a rule, we don’t make assumptions; we have been doing this too long to make that mistake 😉 Something I have found surprising, however, is how apathetic people are towards new features. This may be because they only associate our service with one core feature, but we expected the new ones to get more of a reaction rather than a completely apathetic response. I do wonder if the reaction would be different on a live feature rather than in the beta, but that can only be confirmed after go-live.

Earlier in a user experience career there are usually big surprises. These days it’s more often the smaller items: things I think will be picked up that aren’t, and vice versa.


Speaking of those early in their careers, what advice would you share?

Be very careful to orchestrate the test in a way that gets you a genuine result. Don’t lead the test towards the results you want; the goal is an honest review. Have someone else run the test if you’re too close to it.

Read up on ‘Jobs to be Done’ for interview styles, and read “Competing Against Luck” by Clayton Christensen: as humans we have different jobs, and we hire solutions to do those jobs. Good tests aren’t about an if/then scenario; they’re about identifying a behaviour and then finding out why someone did or didn’t repeat that behaviour in a given scenario.


To wrap up, would you recommend decision makers test early / frequently / both?

Yes, both early and frequently. Early, because it defines the problem and confirms you’re solving the right one. Frequently, because this further refines the solution.

User testing gives you better odds of getting it ‘right’ when it goes to market, moving you from a 50/50 chance to being 99% sure it will hit the mark.


Thank you very much for your time Greg and letting us Askable our questions!

You can follow Greg on Twitter @GregLeSueur