Category Archives: Technology

Prolific: Recruiting participants for online Psychology studies

I’ve just run a research study in which I paid to recruit participants for the first time. I used the online Prolific recruitment website and it went really well. So I thought I’d share my experience.

Using Prolific for recruiting participants to my PhD studies

Prolific is a website for recruiting human participants to research studies. People (participants) sign up to do studies, get notified when studies for which they are eligible are available, and get small amounts of money in return if they participate in a study.

According to Prolific’s About page, the company was set up in the last 2-3 years by a former PhD student, Katia (and her friend Phelim), who had struggled to recruit human participants for her research. So the focus is very much on recruitment for research studies, with a personal appreciation of what’s involved. As I write this, the participant pool includes about 37,400 people, and the company has recently been upgrading its website, so it seems to be going well.

How does it work?

Prolific does not host the studies themselves. You can set up your actual study using all kinds of software; for example, a survey on Qualtrics or Survey Monkey, or an experiment on Gorilla. Prolific just helps you recruit participants to it.

You can register for Prolific with a researcher account or a participant account (or both). As a researcher, you can set up studies and add credit to your account to run them. As a participant, you provide some personal information so that Prolific can offer you only studies for which you are eligible. If you participate in a study, you get paid for your time and you can receive that money yourself or donate it to charity.

How do you use Prolific as a researcher?

When you’ve set up your online study outside of Prolific, you create a new study in Prolific and fill in its details, including the URL of the study itself:

I set up my study in Gorilla. Gorilla integrates nicely with Prolific: it generates a URL that collects participants’ Prolific IDs automatically (so participants don’t have to enter them manually). As a researcher, you can also enter Prolific’s ‘completion URL’ into your Gorilla experiment so that participants don’t need to manually enter a completion code into Prolific before they can receive payment. This mostly worked seamlessly for me, though I messed up the setup at one point by accidentally limiting my Gorilla study to one fewer participant than on Prolific. Don’t do that: Prolific sends the last participant to Gorilla, which rejects them, so Prolific sends another participant, and so on…
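If your study software doesn’t integrate automatically the way Gorilla does, the plumbing is essentially just a query parameter and a redirect. Here’s a minimal sketch in Python of the three pieces involved; the host, placeholder syntax, and completion code are illustrative assumptions, so check Prolific’s own documentation for the current formats:

```python
from urllib.parse import urlparse, parse_qs

# 1. The study URL you register with Prolific. Prolific substitutes each
#    participant's ID for the placeholder (host and placeholder syntax are
#    assumptions for illustration).
study_url = ("https://example-study-host.com/my-study"
             "?PROLIFIC_PID={{%PROLIFIC_PID%}}")

# 2. On the study side, pull the participant's ID out of the incoming
#    request URL so it can be attached to their data.
incoming = "https://example-study-host.com/my-study?PROLIFIC_PID=5a1b2c3d"
prolific_pid = parse_qs(urlparse(incoming).query)["PROLIFIC_PID"][0]

# 3. When the participant finishes, send them to Prolific's completion URL
#    ('ABC123' is a hypothetical study-specific completion code) so they
#    don't have to type the code in manually.
completion_url = "https://app.prolific.co/submissions/complete?cc=ABC123"

print(prolific_pid, completion_url)
```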

As you can see in the screenshot, Prolific calculates the cost of your study up front. The cost depends on how many participants you want to recruit and how much you want to pay them, plus Prolific’s commission. Prolific enforces a minimum payment to participants equivalent to £5 per hour (I paid participants £1.25 for a 15-minute study; some took longer than 15 minutes and most took much less, but all received the same amount).
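The arithmetic is simple enough to sanity-check yourself. Here’s a rough sketch (the commission rate below is a made-up figure for illustration, not Prolific’s actual rate):

```python
# Rough cost arithmetic for a Prolific study. The commission rate is a
# made-up figure for illustration, not Prolific's actual rate.
participants = 21
reward = 1.25            # £ per participant for a ~15-minute study (>= £5/hour)
commission_rate = 0.30   # hypothetical commission fraction

payments = participants * reward
total = payments * (1 + commission_rate)
print(f"Participant payments: £{payments:.2f}; total with commission: £{total:.2f}")
```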

You then provide a textual description of your study so that potential participants can decide whether they’re interested in taking part. After that, you can optionally choose to ‘prescreen’ participants according to their basic demographics or other, more specific, features of their lives:

I specified that participants should be UK residents, which, Prolific helpfully informed me, restricted me to about 18,000 people in the participant pool. Aside from basic demographic details, most of the prescreener questions are optional for participants to complete (though not completing them restricts how many studies they’re eligible for). The fewer prescreeners you include in your study, the more potential participants are eligible to do it.

Prolific are quite firm that you must not include screening questions in your actual study (e.g. asking participants if they are a certain age and ending the study if they are not). Instead, you must use the prescreeners so that ineligible participants aren’t offered your study at all. This is because it’s really annoying, as a participant, to be offered a study and start it, only to be told you’re not eligible.

Finally, you have to confirm that you’ve tested your study, among various other things. I also specified that my study should not be shown in an iframe: when, as a participant, you start a study in an iframe, Prolific displays a panel above the study containing your Prolific details. For studies where participants have to enter their Prolific ID manually so their participation can be tracked, that panel is perhaps useful. For my study, though, Gorilla handled all that automatically, and the extra panel would just have used up screen space unnecessarily.

You can now publish your study, as long as you’ve credited your account with enough money to cover the calculated cost (you can request a refund for any credit you don’t spend). At this point, Prolific displays your study to eligible participants and emails subsets of them to say there’s a new study they can take part in. It’s quite good fun watching the live dashboard update as participants start your study:

Prolific keeps recruiting until it reaches your target recruitment number (21 in the screenshot above). You then have 21 days to ‘approve’ participants so that they get paid. Prolific has a few criteria you can legitimately use to approve or reject participants. I included some ‘attention questions’ in my study and only participants who got a certain number correct were paid (in practice, all of them were fine). I also did some other checks but ultimately accepted all the complete sets of data.
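If you export your data, that kind of screening is easy to script. Here’s a minimal sketch in Python, assuming a hypothetical CSV export with made-up column names and answers:

```python
import pandas as pd

# Screen exported study data on attention checks. The file name, column
# names, and correct answers are all hypothetical, for illustration only.
data = pd.read_csv("study_export.csv")

attention_answers = {"attn_1": "blue", "attn_2": "agree", "attn_3": "7"}
threshold = 2  # minimum number of correct attention checks to approve

# Count each participant's correct attention checks across the columns.
correct = sum((data[col] == answer) for col, answer in attention_answers.items())
data["approve"] = correct >= threshold

# Prolific IDs of participants to review or reject.
print(data.loc[~data["approve"], "participant_id"].tolist())
```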

One participant, for some reason, was not presented with all the questions but otherwise completed the study. This appeared to be a weird technical blip in the study itself, so, although I couldn’t use their data, I approved the participant anyway because the problem wasn’t their fault. Separately, I gave a bonus payment of 25p to one participant who had tried to take part but had been bounced out of my study because of my mistake in setting it up (see above); they contacted me to let me know.

I ended up running the study in Prolific three times. The first run was whitelisted to just my own participant ID (I’m registered with a participant account as well as a researcher account) so that I could test it; this is the recommended way to check that Prolific integrates properly with your study software. On the second run, I collected data from 20 participants and then checked that everything was going okay. I then approved their payments, but that automatically ‘completed’ the study, so I couldn’t just add 21 more participants to the recruitment target. Instead, I had to create the study again by duplicating it in Prolific (which retained all the details needed to integrate with Gorilla) and then screening out anyone who had taken part previously. This worked fine but was a bit unnecessary and annoying. The workaround is to ‘pause’ your study before approving anyone, add the additional participants, then unpause the study to continue running it with the new recruitment target.

All in all, it’s pretty easy to use, though it’s worth reading the relevant parts of the Prolific documentation to understand how it works and what it can do for you, especially around prescreening and integrating Prolific with your online study software. I was a bit slow setting up my first study, but future ones will be quicker.

Isn’t the sample biased by recruiting through a website like Prolific?

All samples are biased unless they’re completely random and, even then, randomly selected participants will drop out (or simply refuse to take part), so you get some self-selection bias. This happens in all research that involves human participants. What’s important is that you try to get as representative and suitable a sample as possible for the population you are studying.

My research is on people’s perceptions of household energy. Because experiences of household energy vary according to the country you live in (e.g. in the US, aircon is far more prevalent than in the UK), I decided to design my studies for people with experience of living in UK households. A large proportion of Prolific’s participant pool is UK-based, which suits my studies well.

The demographics of Prolific’s participant pool are biased towards Caucasian participants (though roughly representative of the UK) and towards younger and middle-aged people. It’s safe to assume it is also implicitly biased towards people who are willing, comfortable, and able to use websites to participate. Interestingly, despite the large proportion of students in Prolific’s participant pool, the majority of participants in my study were not students (which was perfect for my studies). If you’re interested, Prolific have a collection of links to resources about online versus lab-based studies.

For my study, Prolific was great. I’m getting close to the end of my PhD and I just need to run some small, exploratory, online studies quickly on people who live in UK households. To check the findings with other sub-demographics (e.g. older people in the UK, a greater ethnic diversity of people in the UK, people in the UK who are less comfortable using websites and computers, people outside the UK), I would need to complement this recruitment method with another in future studies.

How is Prolific different from Amazon’s Mechanical Turk?

I did look into using Amazon’s Mechanical Turk (MTurk) after a friend’s positive experience of using it for her own PhD study. MTurk is a similar kind of service but for any type of work that can be done online (not just research studies), though researchers have taken to using it quite a lot. The problem for me was that MTurk’s participant pool is mostly in the US and India, and I needed to recruit UK residents. Prolific describe this and other differences between themselves and MTurk. They also link to an independent study that found Prolific generally better than MTurk for research studies (at least on the criteria I cared about).

Isn’t there a danger of recruiting only professional study participants?

Prolific claims to avoid this problem (which has been observed on MTurk) by notifying different subsets of eligible participants so that it isn’t just the fastest people in the participant pool who get to participate in all the studies all the time.

Any problems?

I was initially uncertain about how well my study would go because I’d also registered as a participant to get a feel for the experience from that perspective (I recommend doing this) and had run into a couple of problems, partly technical ones caused by the site upgrades. I’d also wondered how reliably participants get offered studies because, as a participant, I’d had to complete many, many prescreener questions before being offered a very small number of studies (though I think this was because of Prolific’s policy of not encouraging the same few participants to do all the studies).

Ultimately, my study was fine and I recruited my initial 20 participants in about 18 minutes, which was amazing! And they seemed to be fairly representative of the participant pool with a greater range of ages than I’d expected.

The other main problem was discovering that, as a researcher, I could download a lot more information about my participants than I’d expected or was ethically cleared to obtain. As both a researcher and a participant, this made me uncomfortable. However, I emailed the team and they quickly investigated and addressed the problem, prioritising a fix within a few days. Researchers now have access only to a limited set of non-identifying data about participants, plus the participants’ responses to any prescreeners selected for the study.

The Prolific support team has been brilliant. You can contact them by email or there’s an in-site messaging system; if there’s no one available, they’ll email you later. They’ve responded helpfully to every contact I’ve made and they regularly update their help/FAQ system.

Is it worth using Prolific?

I will definitely use Prolific again for another study in the next few weeks so I, obviously, encourage you to sign up as a participant. 🙂 Based on my overall positive experience as a researcher, I recommend it to other researchers and students as an option to consider for their own studies. If you want to give it a go, it’d be great if you could use my Prolific referral link which gives me credit towards future studies I run.

Monki Gras 2015

Monki Gras happened again! Though, in its Monki Gras 2015 incarnation, it acquired a heavy-metal umlaut and a ‘slashed zero’ in its typeface, an allusion to its Nordic nature: Mönki Gras 2Ø15

What is Monki Gras?

Well…

And Ricardo makes a good point, explaining why I, and others, just keep going back:

There’s a single track of talks, so you’re saved the effort of deciding what to see and can just focus on listening. The speakers entertain as well as inform, which I really like.

While it is a tech conference, there’s little code because it’s about making technology happen rather than the details of the technology itself. So there are talks on developer culture, design, and data, as well as slightly more off-the-wall things to keep our brains oiled.

In James’ very own distinctive words:

Why go all Nordic this year?

All the speakers this year were Scandinavian in some way. It was probably the most rigorously applied conference theme I’ve ever seen (conferences usually come up with a ‘theme’ for marketing purposes that is largely forgotten by the time of the conference itself).

James talks a bit more about this on the Monkigras blog. A surprising amount of tech we know and love comes out of the relatively sparsely populated Scandinavian countries. For example:

And, apparently, Finland leads the EU in enterprise cloud computing:

Are the Nordics really that different from anywhere else?

Well, this graph seems to say they are, if only for their taste in music:

Which suggests there is at least something different about Nordic cultures from the rest of Europe, let alone the world.

So several of the speakers delved into why they thought this led to success in technology innovation and development. For example, there’s the attitude to recognising when you’re failing and giving up so that you can be successful by doing it another way:

A Swedish concept, lagom, which means ‘just the right amount’, was credited with the popularity of the cloud in the Nordics. And, indeed, with pretty much anything else we could think of throughout the rest of the event.

Similarly, you could argue that lagom is why Docker is popular among developers:

One fascinating talk, by a Swedish speaker based in Silicon Valley, was about the difference between startups in the Nordics and Silicon Valley. For example, the inescapable differences between their welfare systems were credited as being responsible for different priorities regarding making money. (Hopefully, videos of the talks will be put online and I’ll add a link to it.)

Obviously, all this talk about culture can, and did, drift into stereotyping. I did get slightly weary of the repeated comparisons between cultures, though they were interesting and often humorous.

Developer culture

One of the things I’m most interested in is hearing what other companies have learnt about developer culture and community. For example:

There’s more about this talk on Techworld. And Spotify have blogged some funky videos about the developer culture they aspire to (part 1 and part 2), which are well worth watching if you work in software development.

Something that I’m working on at IBM is increasing the openness of our development teams so, again, I’m always interested in new ways to do this. This is something that Sweden (yes, the country!) has adopted to a surprising extent:

Innovation and inefficiencies

One important message that came across at Monki Gras 2015 was that you have to allow time for innovation to happen. It’s when things seem inefficient and time is not allocated to a specific activity that innovation often occurs.

A nice example of this is the BrewPi project. At Monki Gras 2013, Elco Jacobs talked about his open source project of brewing beer and using a Raspberry Pi to monitor it:

I bumped into him this year, and what had been a side project now occupies him full-time as a small business selling the technology to brewers around the world. A pause in his education, when he had nothing better to do, had enabled him to get on with BrewPi and, after graduation, turn it into a business.

Data journalism

A lot is said about open data and how we should be able to access tax-funded data about things that affect our lives. The Guardian is taking a lead with data journalism, and Helena Bengtsson gave a talk about how knowing how to navigate large data sets to find meaning was vital to unearthing stories in the Wikileaks data.

She started out in data journalism in Sweden where, in one case, she acquired and mapped large data sets that revealed water pollution problems around the country, which triggered several stories.

It’s not just having the data that matters but the interpretation of the data. That’s what data journalism gives us over just ‘big data’:

Also, I found out a fascinating fact:

Anyway, that’s about as much as I can cram in. We also found out random things about Scandinavian knitwear and the fact that Sweden has its own official typeface, Sweden Sans. And we ate lots of Nordic foods, drank Nordic beer and (some of us) drank Akvavit. And, most importantly, we talked to each other lots.

The thing I really value about Monki Gras (on top of the great talks, food, drink, and fun atmosphere) is the small size of the event and all the interesting people to talk to. That’s why I keep going back.

P.S. A good write-up of the talks

Aurasma and the Universal 100 app

I bought a copy of Mamma Mia! The Movie on Blu-ray. To celebrate its 100th anniversary, Universal is releasing some of its films with ‘augmented reality’ (AR).

Which sounds cool.

As you can see in this photo, the front cover of the cardboard sleeve (why is a cardboard sleeve necessary?!) contains a sticker claiming “I COME ALIVE THROUGH YOUR SMARTPHONE”:

 

[Image: the Blu-ray cover]

Which sounds exciting.

Imagine my joy when, whilst watching the movie, I could point my phone at the TV and get contextual information about the film (commentary, background information about the beautiful locations, song lyrics, etc).

Ah, no. That’s what happened in my head.

In reality, I followed the instructions on the back of the box. I downloaded the Universal 100 app, pointed my phone at the front cover of the Blu-ray, and got this:

[Image: the Universal 100 app in action]

 

Basically, it’s a sort-of movie trailer for Mamma Mia!, played in response to recognising the image on the front of the box. The video is displayed inside an AR-style 3D surround.

Kinda fun but disappointing and not really what I was expecting from the hype on the box that claimed the Universal 100 app would “reveal the magical 3D movie experience”.

The best use case I can think of for it is that people can point their phone at the box in a real-world shop to watch the ‘aura’ (the video) before buying it. Not that anyone buys DVDs/Blu-rays in shops any more.

Anyway, I Googled “aurasma”, the technology inside the app, and found a TED talk that explains what Aurasma is. I’m a bit more impressed now.

It’s probably what one of the Sony apps on my Xperia phone is based on. That app does cool things like recognising the label on a wine bottle, giving you information about the wine, and telling you the nearest place to buy it. Oh, and adding dinosaurs or fish when you point your phone at your friends.