Halfbrick Producer Jason Maundrell Chats with Askable

September 13, 2017

With coffees in hand on a sunny Wednesday morning, Askable had a great conversation with Jason Maundrell from Halfbrick. Jason currently wears multiple hats, including Producer, Scrum Master and (what we think is most important) UX Designer.

Hands-on with testing, Jason is responsible for recruitment, screening, interviewing, consolidating feedback, sharing insights with the team, and ultimately helping the team implement change from the test scenarios.

Read on to find out more about how these Australian video game developers continue to impress gamers, with the help of gamers!

How often are you running user tests? Why this frequency and what triggers a test?

Weekly! Currently we have 5 people per week, but we’re looking to increase it to 10. The aim is to build confidence in our choices; with 5 participants the results sometimes split, so we’re hoping 10 will give stronger validation. Game development triggers the tests. For example, this could be creating new characters or challenges, or finding out what motivates people in other games to see if there is any crossover.

Interestingly, tests were running years ago… then they stopped, and then Alex (Design Coach) and I got them going again, stemming from the heart of design. We test on each other first and then on participants.

What differences do you experience between running a focus group vs. a 1 on 1 test? Which do you find is more valuable?

We only do 1 on 1. I’m not a huge fan of focus groups because there is a chance that someone will influence the whole group and muddy its opinion. Usually the tests are 30–60 minutes, and our participants are 25–35 years old (females tend to be slightly older), but it depends on the game.

How do you run the 1 on 1 tests?

Always face to face with the 5, but we’re looking to move to a 60/40 split towards remote to speed up the process. Also coming up are remote tests with the Chinese market; we have 5 booked this week.

What is the largest saving (money or time) that you have witnessed from user testing?

Money is really hard to measure, as there are so many things that happen. The one thing that can really be judged, and the best indicator, is the star rating within the app store. This needs to be at 4.5 or higher.

One million people use the games each day, so bad feedback comes back very quickly.

What result have you found most surprising? How did this differ to the initial assumption?

Just because people say they want something doesn’t mean they will play it. This makes it really hard to know the right thing to build. Usually I will have a build in mind, print out a mock-up in different versions, and ask for preference and interpretation. The most popular version gets implemented and then tested, so it’s really like two stages of smaller tests.

Would you recommend decision makers test early / frequently / both?

As soon as possible! You can test so early that you can fake it. You can fake a game until you can see if people like it. Then you can test both the game and features like multiplayer without the investment.

How do you help decision makers see the value in User Testing?

It comes down to individual people in the company who really want it. You need to prove it to the company and override the people who think they know better (so to speak). I don’t know if there is an easy way to sell it… We just started testing all the games, got the data, and presented the feedback back to the company. Now we have weekly tests running with full support.

What advice would you share for those early in their careers?

There are a few things. First: don’t ask leading questions – that one’s important. Then, test your tests before you run them on customers. So many times you think you’ve got a good set of questions, and then you realise they won’t give you the data you need, maybe because of how they’re written.
Split test almost everything, even recruiting! Lastly, I would say give testers the option to want nothing. If there are only 3 things to choose from, 1 will be chosen out of necessity, but if there is an option for none at all, they may not want any of them, and that’s data worth knowing.

Something I also do is get the team involved: bring them into the test so they can see what’s happening, and then they can run one per week as well.

Thank you very much for your time, Jason, and for letting us Askable our questions!

You can follow Halfbrick on Twitter @Halfbrick
