Category Archives: PhD

Prolific: Recruiting participants for online Psychology studies

I’ve just run a research study in which I paid to recruit participants for the first time. I used the online Prolific recruitment website and it went really well. So I thought I’d share my experience.

Using Prolific for recruiting participants to my PhD studies

Prolific is a website for recruiting human participants to research studies. People (participants) sign up to do studies, get notified when studies for which they are eligible are available, and get small amounts of money in return if they participate in a study.

According to Prolific’s About page, the company was set up in the last 2-3 years by a former PhD student, Katia (and her friend Phelim), who had struggled to recruit human participants for her research. So the focus is very much on recruitment for research studies, with a personal appreciation of what’s involved in such activities. As I write this, the participant pool includes about 37,400 people, and they have recently been upgrading their website, so the company seems to be going well.

How does it work?

Prolific does not host the studies themselves. You can set up your actual study using all kinds of software; for example, a survey on Qualtrics or Survey Monkey, or an experiment on Gorilla. Prolific just helps you recruit participants to it.

You can register for Prolific with a researcher account or a participant account (or both). As a researcher, you can set up studies and add credit to your account to run them. As a participant, you provide some personal information so that Prolific can offer you only studies for which you are eligible. If you participate in a study, you get paid for your time and you can receive that money yourself or donate it to charity.

How do you use Prolific as a researcher?

When you’ve set up your online study outside of Prolific, you create a new study in Prolific and give it details of your study, including the URL to the study itself:

I set up my study in Gorilla. Gorilla integrates nicely with Prolific: it generates a URL that collects participants’ Prolific IDs automatically (so the participant doesn’t have to enter them manually). As a researcher, you can also enter the Prolific ‘completion URL’ into your Gorilla experiment so that participants don’t need to manually enter a completion code into Prolific before they can receive payment. This mostly worked seamlessly for me, though I messed up the setup at one point by accidentally limiting my Gorilla study to one fewer participant than on Prolific. Don’t do that: Prolific sends the last participant to Gorilla, which rejects them, so Prolific sends another participant, and so on…
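The ID hand-off can be pictured as a simple round-trip through URL query parameters: Prolific appends the participant’s ID to your study URL, and the study software reads it back out on arrival. This is only an illustrative sketch; the parameter name (`PROLIFIC_PID`) and URLs here are assumptions for illustration, not the exact Prolific or Gorilla API.

```python
# Sketch of the participant-ID hand-off between a recruitment site and
# study software. Parameter and URL names are illustrative assumptions.
from urllib.parse import parse_qs, urlencode, urlparse

def build_study_url(base_url, prolific_pid):
    """What the recruitment site does: append the participant's ID to
    the study URL so it can be recorded without manual entry."""
    return f"{base_url}?{urlencode({'PROLIFIC_PID': prolific_pid})}"

def extract_pid(study_url):
    """What the study software does on arrival: read the ID back out
    of the query string."""
    return parse_qs(urlparse(study_url).query)["PROLIFIC_PID"][0]

url = build_study_url("https://example-study.test/start", "5a1b2c3d")
print(extract_pid(url))  # 5a1b2c3d
```

The ‘completion URL’ works the same way in reverse: the study software redirects the finished participant back to the recruitment site with a code in the URL, so neither ID nor code has to be typed by hand.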

As you can see in the screenshot, Prolific calculates the cost of your study up-front. The cost depends on the number of participants you want to recruit and how much you want to pay them, plus Prolific’s commission. Prolific enforces a minimum payment to participants equivalent to £5 per hour (I paid participants £1.25 for a 15-minute study; some took longer than 15 minutes, most took much less, but all received the same amount).
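The arithmetic behind that up-front figure is simple enough to sketch. The commission rate below is a made-up placeholder, not Prolific’s actual figure; the £5/hour minimum and my £1.25/15-minute reward are from the text above.

```python
# Sketch of the up-front cost calculation. The commission rate is an
# illustrative assumption, not Prolific's real figure.

def study_cost(n_participants, reward_per_participant, commission_rate):
    """Total charge = participant rewards plus the platform's commission."""
    rewards = n_participants * reward_per_participant
    return rewards * (1 + commission_rate)

def hourly_rate(reward, estimated_minutes):
    """Effective hourly rate implied by a fixed reward."""
    return reward * 60 / estimated_minutes

# My study: £1.25 for an estimated 15 minutes is exactly the £5/hour minimum.
print(hourly_rate(1.25, 15))  # 5.0

# 21 participants at £1.25 each, with an assumed 30% commission.
print(study_cost(21, 1.25, 0.30))
```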

You then provide a textual description of your study so that potential participants can decide whether they’re interested in taking part. After that, you can optionally select to ‘prescreen’ participants according to their basic demographics or other more specific features of their lives:

I selected that participants should be UK residents which, Prolific helpfully informed me, restricted me to about 18,000 people in the participant pool. Aside from basic demographic details, most of the prescreener questions are optional for participants to complete (though not completing them restricts how many studies they’re eligible to participate in). The fewer prescreeners you include in your study, the more potential participants are eligible to do your study.

Prolific are quite firm that you must not include screening questions in your actual study (e.g. asking participants if they are a certain age and ending the study if they are not). Instead, you must use the prescreeners so that ineligible participants don’t even get offered your study. This is because it gets really annoying, as a participant, to be offered a study and then start it, only to then be told you’re not eligible.

Finally, you have to confirm that you’ve tested your study, among various other things. I also specified that Prolific should not display my study inside an iframe. When, as a participant, you start a study, Prolific displays a panel above the study containing your Prolific details. For studies where the participant has to manually enter their Prolific ID to track their participation, that’s maybe useful. For my study, though, Gorilla handled all that automatically and an extra panel on the page just used up screen space unnecessarily.

You can now publish your study, as long as you’ve credited your account with enough money to cover the calculated cost (you can request a refund for any credit you don’t spend). At this point Prolific displays your study to eligible participants and emails subsets of eligible participants to notify them that there’s a new study they can take part in. It’s quite good fun watching the live dashboard update as participants start your study:

Prolific keeps recruiting until it reaches your target recruitment number (21 in the screenshot above). You then have 21 days to ‘approve’ participants so that they get paid. Prolific has a few criteria you can legitimately use to approve or reject participants. I included some ‘attention questions’ in my study and only participants who got a certain number correct were paid (in practice, all of them were fine). I also did some other checks but ultimately accepted all the complete sets of data.
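The approval check I describe above boils down to a simple filter over the submissions. This is just an illustrative sketch of my own procedure, with made-up field names, not anything Prolific provides.

```python
# Illustrative sketch of approving participants by attention-check score.
# Field names ("id", "attention_correct") are made up for this example.

def approve(submissions, min_correct):
    """Return the IDs of participants who answered at least
    min_correct attention questions correctly."""
    return [s["id"] for s in submissions if s["attention_correct"] >= min_correct]

submissions = [
    {"id": "p1", "attention_correct": 3},
    {"id": "p2", "attention_correct": 1},
]
print(approve(submissions, min_correct=2))  # ['p1']
```

In practice, as I say above, everyone passed, so the filter approved all complete submissions.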

One participant, for some reason, was not presented with all the questions but otherwise completed the study. This appeared to have been a weird technical blip in the study itself so, even though I couldn’t use their data, I approved the participant because it wasn’t their fault. I also, separately, gave a bonus payment of 25p to one participant who had tried to take part but had been bounced out of my study because of my mistake in setting it up (see above) and contacted me to let me know.

I ended up running the study in Prolific three times. The first time, the study was whitelisted to just my own participant ID (I’m registered with a participant account as well as a researcher account) so that I could test it; this is the recommended way to check that Prolific integrates properly with your study software. The second time, I collected data from 20 participants and then checked that everything was going okay. I then approved their payments, but that automatically ‘completed’ the study, so I couldn’t just add 21 more participants to the recruitment target. Instead, I had to duplicate the study in Prolific (which retained all the details needed to integrate with Gorilla) and screen out anyone who had taken part previously. This worked fine but was a bit unnecessary and annoying. The workaround is to ‘pause’ your study before approving payments, increase the recruitment target, and then unpause the study to continue running it.

All in all, it’s all pretty easy to use, though it’s worth reading relevant parts of the Prolific documentation to understand how it works and what it can do for you, especially with prescreening and with integrating Prolific with your online study software. I was a bit slow setting up my first study but in future it will be quicker.

Isn’t the sample biased by recruiting through a website like Prolific?

All samples are biased unless they’re completely random and, even then, randomly-selected participants will drop out (or just refuse to take part) so you get some bias of self-selection. This happens in all research that involves human participants. What’s important is that you try to get as representative and suitable a sample as possible for the population that you are studying.

My research is on people’s perceptions of household energy. Because experiences of household energy vary according to the country you live in (e.g. in the US, aircon is far more prevalent than in the UK), I decided to design my studies for people with experience of living in UK households. A large proportion of Prolific’s participant pool is UK-based, which suits my studies well.

The demographics of Prolific’s participant pool are biased towards Caucasian participants (though about representative of the UK) and towards younger and middle-aged people. I think it can be assumed that it is also implicitly biased towards people who are willing, comfortable, and able to use websites to participate. Interestingly, despite the large proportion of students in Prolific’s participant pool, the majority of participants in my study were not students (which was perfect for my studies). If you’re interested, Prolific have a collection of links to resources about online versus lab-based studies.

For my study, Prolific was great. I’m getting close to the end of my PhD and I just need to run some small, exploratory, online studies quickly on people who live in UK households. To check the findings of these studies against other sub-demographics (e.g. older people in the UK, greater ethnic diversity of people in the UK, people in the UK who are less comfortable with using websites and computers, people not in the UK), I would need to complement this method of recruitment with another in future studies.

How is Prolific different from Amazon’s Mechanical Turk?

I did look into using Amazon’s Mechanical Turk (MTurk) after a friend’s positive experience of using it for her own PhD study. MTurk is a similar kind of service but for any type of work that can be done online (not just research studies), though researchers have taken to using it quite a lot. The problem for me was that MTurk’s participant pool is mostly in the US and India and I needed to recruit UK residents. Prolific describe that and other differences between them and MTurk. They also provide a link to an independent study that found Prolific was generally better than MTurk for research studies (at least along criteria that I cared about).

Isn’t there a danger of recruiting only professional study participants?

Prolific claims to avoid this problem (which has been observed on MTurk) by notifying different subsets of eligible participants so that it isn’t just the fastest people in the participant pool who get to participate in all the studies all the time.

Any problems?

I was initially uncertain about how well my study would go because I’d also registered as a participant to get a feel for the experience from that perspective (I recommend doing this) and experienced a couple of problems. These were partly technical problems caused by the site upgrades. I’d also wondered how reliably participants got offered the studies because, as a participant, I’d had to complete many, many prescreener questions before being offered a very small number of studies (though I think this was maybe because of Prolific’s policy of not encouraging the same few participants to do all the studies).

Ultimately, my study was fine and I recruited my initial 20 participants in about 18 minutes, which was amazing! And they seemed to be fairly representative of the participant pool with a greater range of ages than I’d expected.

The other main problem was that I discovered that, as a researcher, I could download a lot more information about my participants than I’d expected or was ethically cleared to obtain. As both a researcher and a participant, this made me uncomfortable. However, I emailed the team and they quickly investigated and addressed the problem, prioritising it so that it was fixed within a few days. Researchers now have access only to a limited set of non-identifying data about participants, plus the participants’ responses to any prescreeners that were selected for the study.

The Prolific support team has been brilliant. You can contact them by email or there’s an in-site messaging system; if there’s no one available, they’ll email you later. They’ve responded helpfully to every contact I’ve made and they regularly update their help/FAQ system.

Is it worth using Prolific?

I will definitely use Prolific again for another study in the next few weeks so I, obviously, encourage you to sign up as a participant. 🙂 Based on my overall positive experience as a researcher, I recommend it to other researchers and students as an option to consider for their own studies. If you want to give it a go, it’d be great if you could use my Prolific referral link which gives me credit towards future studies I run.

Promoting research ideas with social media: A nice example

So you’re a researcher and you want to get your cool new idea out there. You want other researchers to adopt it and promote it further for you. What do you do? (Hint: if you’re as cool as your idea, you probably mention The Web, Facebook (or Google+, if you prefer), and Twitter at this point, even if you secretly wonder what they are and what the point of them is.)

In the past…

Traditionally, you would probably publish papers about your idea in peer-reviewed academic journals so that people interested in that area would read about it and think “that’s a cool idea; I must adopt that approach too”. Similarly, you might present about it at conferences where your audience of like-minded people would listen and think “that’s a cool idea; I must adopt that approach too”. If you had teaching responsibilities, you likely also taught your students about your new approach, explaining the weaknesses of the old approach and why this new approach is better so that when they come to doing their own research projects they think “that’s a cool idea; I must adopt that approach too”.

Except (I’m guessing here) it probably doesn’t always work like that. Especially if your cool new research idea is a statistical method. Especially if your new statistical method requires its users to sit down with a calculator and manually work through an equation instead of just opening a data file and pressing some buttons in SPSS, the statistics package popular with psychologists, marketing people, and others.

I work in usability and user experience in my non-student life. But it doesn’t take a usability expert to work out that if your audience is made up of people who most likely have just GCSE-level (high school) Maths (like me) and often (I’ve noticed) The Fear of all things mathematical, you’re not going to get far in convincing them to use your new statistical method, even if it’s what they really need to use and they would actually quite like to use it. I don’t really have The Fear myself but I do glaze over when presented with less-than-simple equations and strange clusters of weird characters because I just don’t know how to read them.

The unfortunate upshot is that your cool new statistical approach just doesn’t really get off the ground, no one else writes about using it (so you don’t get the all-important citations in other people’s publications), and it just slides quietly away into the ether.

In the 21st C…

If you are as cool as your cool new research idea, you might also embrace the wonders of the world of social media and online communications. Obviously, publishing in peer-reviewed journals, presenting at conferences, and teaching your students are all good and necessary things to do. But they’re probably not enough in some cases–and I’d guess that statistical methods is probably one of those cases.

I don’t know whether Hayes & Preacher (or Preacher & Hayes) went through that exact thought process when thinking about how to promote their cool new statistical methods to psychologists and other social scientists, but it seems that usability was one of their aims (for example, Andrew Hayes suggests that people have tended to stick with the older methods, rather than adopt the newer and better methods, because the old ones are “simple and widely understood”; Hayes, 2009, p 411).

(Screenshot: list of discussion topics in the Facebook group)
So Hayes & Preacher have done two things:

  • Written macros to extend SPSS
    Users can use the macros to (fairly) easily run the tests in SPSS, an environment they’re already familiar with. Macros are a bit fiddly to work with so, for one of their tests, they’ve even written a custom dialog that you can install in SPSS. It adds a new entry to the Analyze menu so that you can open a standard-looking dialog box, select the appropriate variable names, and run the test. All this is available for free download from their website.
  • Created a Facebook group to answer questions
    You can start a new topic (thread) to ask a question or describe a problem, or you can browse the existing 1636 (and rapidly rising) topics (at least, I’ve been able to before but today it seems the back/forward links have gone walkabout). You can also use Google to search for specific topics. Both Preacher and Hayes typically respond to questions and problems within a day. When I was having some technical problems, they asked for my data file and ran the test on their own machines to check whether it was just my installation of SPSS that was the problem (it was).

Benefits for users

As a student trying to understand the statistical procedures by reading and re-reading their journal papers multiple times, it was invaluable to be able to ask the authors themselves (via Facebook no less) to clarify specific details as they applied to my particular experimental design. Browsing the 1000+ topics of discussion was also very educational as I came across answers to questions that I hadn’t even thought to ask yet.

Benefits for them

The benefits for them are surely great too. Obviously they have to spend time writing, testing, and supporting their macros etc, and they also have to spend time responding to help requests on Facebook. In return, though, they vastly improve the ease of using their statistical procedures, while also giving you (the user) a warm and fuzzy feeling about the procedures (the power of positive affect) and a sense that there are many other people out there trying to use the procedures too (the power of social norms), all in all making you (I would guess) more likely to keep trying and to talk about the procedures to others. Those are the intangible and difficult-to-measure benefits of a good user experience.

In addition, they’re getting loads and loads of feedback from their users on where their procedures or explanations are difficult to understand, or where users commonly have problems, so that when they write a book on it, they’ve got valuable material to respond to and include, which should make the book incredibly useful to users. We’ll see if that’s true when their book, and accompanying new macro, come out next year. And there’s another thing: while they’ve got you in a discussion on Facebook, it’s practical (but also good promotion) for them to refer you to one or other of their papers, or to mention the book coming out next year. And there’s a list of upcoming events at which they’ll be conducting workshops on these statistical procedures. It all helps to boost citations.

Everyone wins

I think it’s brilliant. Not just because they helped me by answering a question within a day and diagnosing the problems I was having running their macros. But because they’re tapping into resources that are free and much of their target audience already use. And by doing this, they’re making their cool ideas as accessible as possible, which can only really be a good thing for everyone concerned.


Hayes, A. (2009). Beyond Baron and Kenny: Statistical Mediation Analysis in the New Millennium. Communication Monographs, 76(4), 408-420. doi:10.1080/03637750903310360


I work for IBM, who own SPSS.