A new approach
Okay, so you’ve got a project you’re working on and it’s time to start recruiting people for research or UX testing. How do you make sure you get the right people? With a growing Australian database of over 20,000 participants, plus access to external channels (like Facebook and LinkedIn) with over 15 million users, how do you make sure you reach exactly the people you need?
At Askable we’ve helped designers and researchers place over 2,000 participants for their projects. From business owners to online shoppers to international students, we’ve seen a huge variety of project requirements. Here are the basics to get you started writing screening questions that will ensure you get exactly the kind of participants you need, so you come away with the most actionable insights. I’ll provide as many real-world scenarios and examples as possible along the way.
Define your audience
The first step is to define very clearly who your audience is. This can be as broad or as niche as you like, but you must have a clear definition, because it will set the tone for all of your questions. If you’re looking for a mix of people, that’s fine, but depending on how granular you want to be, it can be better to split them into different groups and create a separate booking for each sub-group. We’ll go into more detail on that later.
If you’re working with a team, it’s especially important to communicate clearly and have your audience definition written down somewhere. I recommend writing it on cards and sticking them up on a wall where your project team works. We’ve had many cases where a designer on the product team has created a booking, set up screening questions and published the opportunity, only for the person actually running the interviews to email us later saying, “These are the wrong questions; we’re getting the wrong people!”
Part of clearly defining an audience is quantifying the requirements as much as possible. Don’t just write down “high frequency”; write down exactly what “high frequency” means. Is that once a week? Once a month? Three times a day? By quantifying as much as possible, you minimise the risk of a participant interpreting your definition of “high frequency” differently. It’s not bulletproof (a participant might have forgotten how many times they shopped online in the last 3 months), but it’s much better than using unquantified phrases.
Here are some examples of bad and good audience definitions.

Bad:
- Young female (How young? What’s ‘young’?)
- Shops online frequently (How often is ‘frequently’?)
- Mix of devices used for shopping (What mix? Tablet? Phone? Laptop? Desktop?)
- Shops at Myer, David Jones, Showpo, The Iconic (Do they have to shop at ALL of these stores, or ANY, or SOME?)

Good:
- 18–23 years old
- Shops online at least once a week
- Even mix of people who mostly (more than 75% of the time) shop online using their mobile phone, and people who mostly shop online using their desktop or laptop
- Shops with ANY of these retailers: Myer, David Jones, Showpo, The Iconic
- Owns a smartphone
To set yourself up for success and avoid disappointment, you must get this step right, because all of your screening questions will be based on your requirements. Unclear requirements lead to unclear screening questions and, ultimately, unusable insights. I’d recommend investing most of your time into this step and working with the team to make sure you nail it.
Of course, you can always relax your requirements later if you find you’re not getting enough eligible applicants (e.g. in the above example, you might let someone who’s 24 years old through if they fulfil all the other criteria).
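One way to force yourself to quantify is to write the audience definition down as structured, checkable criteria before writing any questions. Here’s a rough sketch of the “good” definition above in that form. The field names and the helper function are purely illustrative (not an Askable format); vague words like “young” or “frequently” simply can’t survive being written this way.

```python
# Illustrative only: the audience definition expressed as explicit criteria.
audience = {
    "age_range": (18, 23),                  # "young", quantified
    "min_online_shops_per_week": 1,         # "frequently", quantified
    # Group-level quota: an even mobile/desktop split is checked across the
    # whole pool of booked participants, not per applicant.
    "device_mix": {"mobile": 0.5, "desktop": 0.5},
    "retailers_any_of": ["Myer", "David Jones", "Showpo", "The Iconic"],
    "owns_smartphone": True,
}

def is_eligible(applicant, audience):
    """Per-applicant checks (the group-level device quota is excluded)."""
    lo, hi = audience["age_range"]
    return (lo <= applicant["age"] <= hi
            and applicant["online_shops_per_week"] >= audience["min_online_shops_per_week"]
            and any(r in applicant["retailers"] for r in audience["retailers_any_of"])
            and applicant["owns_smartphone"])

print(is_eligible(
    {"age": 21, "online_shops_per_week": 2,
     "retailers": ["The Iconic"], "owns_smartphone": True},
    audience,
))  # True
```

If a requirement can’t be expressed this precisely, that’s a sign it isn’t quantified enough yet.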
The Red Herring
Now that you have a clearly defined audience with quantifiable criteria, it’s time to start creating your screener questions. The number one mistake we see people making is not using red herrings.
Here’s a classic example. Say you’re looking for people who own a Holden. You might simply ask:
Do you own a Holden? Yes / No
While that might work well enough, a better way to frame this question would be:
What kind of vehicle do you own?
– Holden
– Other brand
– I don’t own a vehicle
While we do our best to screen people based on the quality of their responses and behaviours, there will always be some participants who try to answer based on what they think you’re looking for rather than the truth. Framing questions this way dramatically reduces the chance of getting liars who just want the incentive money. The more questions you set up with red herrings, the lower the chance someone can guess their way through your screener.
This is hands down the best way to ensure the quality of your participants.
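A back-of-envelope calculation shows why this compounds. Assuming a guesser picks uniformly at random and each question has exactly one qualifying answer (a simplification, but illustrative), the chance of passing the whole screener is the product of the per-question chances:

```python
# Probability that a pure guesser passes a screener, assuming uniform
# random guesses and one qualifying answer per question.
def pass_probability(options_per_question):
    p = 1.0
    for n in options_per_question:
        p *= 1 / n
    return p

# A single Yes/No question: a guesser passes half the time.
print(pass_probability([2]))        # 0.5
# Three questions with 6 plausible options each (one qualifying):
print(pass_probability([6, 6, 6]))  # ~0.0046, under half a percent
```

Going from one Yes/No question to three multi-option questions with red herrings takes a guesser’s odds from a coin flip to roughly 1 in 200.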
Cover all bases
When you’re creating your screener questions, it’s easy to get caught up in the mentality of “what kind of people do I want to interview?”. Sometimes it’s just as important to think about what kind of people you don’t want. Think about which traits, behaviours or characteristics would actually ruin your data. I also recommend going over each of your screening questions at the end and considering them through this lens:
Is it possible for someone to answer this question truthfully, and be 100% the wrong participant?
You’ll be surprised how often a participant can answer every single question correctly, yet because you forgot to cover one additional scenario or case, they end up completely ineligible for your research.
Often this comes down to your wording. You need to be explicit in spelling out exactly what you want (and don’t want). Let’s take a look at a real example.
Customer A wants to interview people who’ve made a car insurance claim in the last 3 months. Okay, seems simple enough. So this is the question they created:
When was the last time you made a car insurance claim?
– Within the last month
– Within the last 3 months
– Within the last 6 months
– Within the last year
– More than a year ago
– I’ve never made a car insurance claim
Hmmm. At first glance that looks pretty good. But after the first session we got a phone call from the customer: “The first participant was the wrong type of person! We couldn’t use their feedback at all!”
The participant had indeed made a car insurance claim in the last 3 months. But it was on behalf of a business for a company vehicle. They were testing the claims process for personal car claims, made by the direct owner of the car.
So whilst the requirements, the question and the participant were all technically correct, the customer didn’t consider that scenario. A better question would have been: “When was the last time you made a personal car insurance claim?”.
We encourage you to err on the side of being too explicit, rather than not being explicit enough for exactly this reason.
Keep it short
As a bit of a counter-point to the previous two steps, we recommend keeping your screener under 10 questions if possible. Whilst a monster 50-question screener might let you drill down to the number of blue T-shirts someone owns, you’re actually creating a ton more work for yourself later, when it comes time to pick the applicants you actually want to interview. Not to mention you’ll probably have a ton of people drop off during the application process because they passed out trying to answer all your questions. Keeping your screener short and explicit sounds like a bit of an oxymoron, but another way of saying it is: aim for just a few very clear, explicit questions.
If you need additional data points, ask yourself if that’s something that truly impacts the eligibility of a participant, or is it just a ‘nice to know’. If it’s the latter, consider asking the participants those questions during the interview itself and collect it as part of your dataset, rather than cramming everything up front and using it as part of your screening process.
Another way that we’ve seen people trap themselves when it comes to over-filtering is what we like to call “the impossible last person” effect. Let’s take a look at an example:
The customer is looking to interview 4 people. Here are the requirements (these are from an actual booking):
– 2 male and 2 female
– at least 1 person who has teenage children (13+) living at home
– at least 1 person who’s in a couple with no kids
– at least 1 person who has young children (0–7) living at home
– 2 native English speakers and 2 non-native English speakers
– 2 people with a salary of over $100,000
– 2 people with a salary of under $99,000
– at least 1 person who is part-time employed
– at least 1 person who is self-employed
If you look at each requirement individually, they seem realistic enough. At the outset of recruitment, it’s likely that you’ll get registrations from people who fulfil at least some of the criteria. But what happens once you start filling up positions?
Let’s take a look at the eligible applicants.

Applicant 1:
– Has teenage children living at home
– Non-native English speaker
– Salary under $99,000
– Full-time employed

Okay, cool, she fits nicely. So we’ll slot her in for an interview.

Applicant 2:
– Has young children living at home
– Native English speaker
– Salary under $99,000

Great, she’s eligible too and still covers the requirements. Let’s book her in.

Applicant 3:
– Has teenage children living at home
– Native English speaker
– Salary over $100,000
– Full-time employed

Looks good, let’s lock him in.
But wait. All of a sudden, of the hundreds and hundreds of applications you’ve received, no one is eligible for the fourth and final slot.
That’s because as you filled each slot, the requirements became narrower and narrower. Each slot you fill removes possible combinations for the final slot, which makes finding that last person an order of magnitude more difficult. In our example, to fill the last spot while still adhering to the original criteria, we’d need:
A non-native English-speaking male, in a couple with no kids, with a salary of over $100,000, who works part time.
This is what we call the “impossible last person”. It happens when you have so many specific criteria that as you fill your slots, crossing off potential combinations of your criteria as you go, you can trap yourself in a position where the final combination is actually impossible (or highly improbable), even though each requirement, read one by one, seems fair enough. You’re not asking for one-eyed, three-fingered vegan astronauts, after all.
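You can see the narrowing happen by brute force. The sketch below is not how Askable matches participants, just an illustration: it encodes the quotas from the example booking, the three applicants already booked (Applicant 2’s employment wasn’t listed above, so we assume self-employed to keep the example consistent), and enumerates every possible profile for the fourth slot.

```python
from itertools import product

# Quotas from the example booking: exact splits and "at least 1" minimums.
quotas = [
    ("gender", "male", 2), ("gender", "female", 2),
    ("english", "native", 2), ("english", "non-native", 2),
    ("salary", "over_100k", 2), ("salary", "under_99k", 2),
]
at_least = [
    ("kids", "teenage"), ("kids", "none"), ("kids", "young"),
    ("employment", "part_time"), ("employment", "self_employed"),
]

# The three applicants already booked. Employment for Applicant 2 is an
# assumption (self-employed) made for this illustration.
booked = [
    {"gender": "female", "english": "non-native", "salary": "under_99k",
     "kids": "teenage", "employment": "full_time"},
    {"gender": "female", "english": "native", "salary": "under_99k",
     "kids": "young", "employment": "self_employed"},
    {"gender": "male", "english": "native", "salary": "over_100k",
     "kids": "teenage", "employment": "full_time"},
]

def satisfies(group):
    """Does a full group of 4 meet every quota?"""
    for attr, val, n in quotas:
        if sum(p[attr] == val for p in group) != n:
            return False
    return all(any(p[attr] == val for p in group) for attr, val in at_least)

# Enumerate all 72 possible profiles for the fourth slot.
axes = {
    "gender": ["male", "female"],
    "english": ["native", "non-native"],
    "salary": ["over_100k", "under_99k"],
    "kids": ["teenage", "young", "none"],
    "employment": ["full_time", "part_time", "self_employed"],
}
valid = [dict(zip(axes, combo)) for combo in product(*axes.values())
         if satisfies(booked + [dict(zip(axes, combo))])]
print(valid)
```

Of the 72 possible profiles, exactly one survives: the non-native English-speaking male, in a couple with no kids, earning over $100,000, working part time. Three reasonable bookings reduced the final slot to a single needle in the haystack.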
The easiest way to avoid getting caught in this situation is to remove some requirements from the up-front eligibility screener, and either collect that data as additional info during the interview, or collect it as a non-screening question (you can do this by accepting all answers).
If you can nail these three main steps, you’ll be well on your way to creating better screener questions and setting yourself up for success when it comes to the quality of your data and insights.
Don’t have an Askable account yet? Create your free account today and start recruiting!