Welcome to the blog! Here, we track the progress of the project and share behind-the-scenes thoughts on implementing the research, collecting the data, and analyzing and interpreting the results as we go.
Research does not take place in isolation – time to talk to some experts.
Now that the first analyses are done, it's a good time to get feedback on how we think about and present the project before beginning the write-up process. To do so, we presented a first look at the data at two conferences. To hear the opinions of experts in process-tracing tools like eye-tracking, we presented at the European Group of Process Tracing Studies (EGPROC) Meeting in Vienna. To hear the opinions of experts in behavioral economics, the project made an appearance at the meeting of the Society for Experimental Economics Research (GfeW) in Erfurt. Taking home the comments and ideas that came up on these occasions, we're now heading into the next stage: finalizing the analyses and writing our papers.
In this project, we collected a lot (!) of data. This blog post is about managing the wealth of data we have.
When you run a study across 17 countries with 100 participants each, that makes 1700 data files to handle. Count the data from participants who didn't complete the study, and there are even more files to sort through. Within each file, we have data on more than 40 decisions in social dilemmas that participants made. And for each decision, we collected data on the individual gazes that participants directed at the screen – and you can imagine that making a decision requires more than just a brief glance. At the lowest level of analysis, therefore, we have a lot of observations. A lot.
This wealth of data means we need good strategies for handling it. In part, this has to do with securing appropriate processing power for conducting our statistical analysis. In part, this means having to distill the data into manageable chunks (for instance by summarizing gazes into fixations). In part, this refers to simply being able to keep an overview of the data (for instance by running meta-analyses on country-level responses).
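Since "summarizing gazes into fixations" is doing a lot of work in that last paragraph, here is a minimal sketch of what such a step can look like: a dispersion-based fixation filter in the spirit of the classic I-DT algorithm. The sample format, the function names and both thresholds are illustrative assumptions, not the actual values from our pipeline.

```typescript
// Dispersion-based fixation detection (I-DT-style sketch): consecutive gaze
// samples that stay within a small spatial window for a minimum duration are
// merged into a single fixation. Sample format and thresholds are assumptions.

interface GazeSample { t: number; x: number; y: number } // time in ms, screen px
interface Fixation { start: number; end: number; x: number; y: number }

function dispersion(window: GazeSample[]): number {
  const xs = window.map(s => s.x);
  const ys = window.map(s => s.y);
  return (Math.max(...xs) - Math.min(...xs)) + (Math.max(...ys) - Math.min(...ys));
}

function summarize(window: GazeSample[]): Fixation {
  const mean = (vals: number[]) => vals.reduce((a, b) => a + b, 0) / vals.length;
  return {
    start: window[0].t,
    end: window[window.length - 1].t,
    x: mean(window.map(s => s.x)),
    y: mean(window.map(s => s.y)),
  };
}

function detectFixations(
  samples: GazeSample[],
  maxDispersion = 80, // px; webcam gaze estimates are noisier than lab devices
  minDuration = 100,  // ms
): Fixation[] {
  const fixations: Fixation[] = [];
  let window: GazeSample[] = [];
  for (const sample of samples) {
    if (window.length > 0 && dispersion([...window, sample]) > maxDispersion) {
      // Adding this sample would spread the window too far: close the current
      // window if it lasted long enough to count as a fixation.
      if (window[window.length - 1].t - window[0].t >= minDuration) {
        fixations.push(summarize(window));
      }
      window = [];
    }
    window.push(sample);
  }
  if (window.length > 0 && window[window.length - 1].t - window[0].t >= minDuration) {
    fixations.push(summarize(window)); // flush the final window
  }
  return fixations;
}
```

Run per decision, a filter like this turns thousands of raw gaze samples into a handful of fixations per trial, which is what makes the higher-level summaries and country-level meta-analyses tractable in the first place.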
At the same time, this wealth of data makes it especially important that we preregistered our work. Deciding how the data will be handled before collecting them helps organize this process and makes our decisions traceable and comprehensible.
This is just the shortest of short updates: The data is in!
In the past weeks, we have been collecting data for our two web-cam based eye-tracking studies. The plan was to collect data from 17 countries with 100 participants each for study 1, and from 4 countries with 120 participants each for study 2 – all in all, more than 2000 responses. It took about a month to collect these data, with various obstacles along the way: everything from server outages to data not being saved correctly to having to calculate and implement individual bonus payments for our participants. Our plans for collecting the data also had to change because one participant pool was unresponsive. But after managing these challenges, we are now ready to dive into the analyses!
Some things about how we would implement this project were clear as day from the beginning. We were sure we would be eliciting incentivized decisions in intergroup contexts. It was clear we wanted to measure eye-gaze as an indicator of cognitive processes going on during the formation of decisions. But other questions were more difficult to answer and took a lot of contemplation. This post is about these complicated decisions.
The biggest difficulty in this project was introduced by our plan to use web-cam based eye-tracking. This new technology allows us to estimate participants' gaze location while they take part in empirical studies on their own computers – without requiring them to come to our lab. The benefit is obvious: as long as they have a web-cam, an internet connection and a well-lit room, everyone can participate. This makes it possible to collect data internationally, which was precisely our plan. Although the team has ample experience with the eye-tracking devices we use in the lab, web-cam based eye-tracking came with all the challenges you would expect from a novel method. To program the experiment, we had to learn a programming language we'd never used before. We had to learn a lot about web hosting to run the study online. We needed to find out how detailed the data from web-cam based eye-tracking would be and adapt our experiment accordingly – and more. Clearing these hurdles was time-consuming, but with the support of colleagues and lots of reading in programming discussion forums, we got it done. In the end, we were left with a working paradigm that can now be adapted to different research questions using web-cam based eye-tracking.
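For readers curious what this looks like in practice, here is a minimal sketch of the collection side of web-cam based eye-tracking in the browser. We are deliberately not claiming this is our actual stack: WebGazer.js is one widely used open-source library, named here only to illustrate the general pattern of subscribing to a stream of gaze estimates, and the gazeLog buffer is a made-up placeholder.

```typescript
// Minimal sketch of browser-based gaze collection, assuming WebGazer.js is
// loaded via a <script> tag (hence the ambient declaration below). This is an
// illustration of the general pattern, not necessarily the tool we used.
interface WebGazer {
  setGazeListener(
    listener: (data: { x: number; y: number } | null, elapsedMs: number) => void
  ): WebGazer;
  begin(): void;
}
declare const webgazer: WebGazer;

const gazeLog: { t: number; x: number; y: number }[] = [];

webgazer
  .setGazeListener((data, elapsedMs) => {
    if (data === null) return; // no face or eyes detected in this video frame
    gazeLog.push({ t: elapsedMs, x: data.x, y: data.y }); // screen coords in px
  })
  .begin(); // requests webcam permission and starts streaming predictions
```

Everything downstream – calibration checks, quality filtering, summarizing gazes into fixations – builds on this simple stream of timestamped (x, y) estimates.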
A second challenge was deciding on the experimental paradigm. The plan was to study more or less generous decisions between groups. But which groups should we select? Some comparisons seemed more relevant than others, particularly within the scope of the project: We wanted to learn about how people handle crises, and crises such as climate change, global health and migration play out on a global scale. The most relevant group comparisons for studying generosity therefore seemed to be between national groups. We decided to dedicate one study to asking participants from four different nations to make decisions affecting others either from their own nation or from one of the other three nations. But the rich group identification of people thinking of themselves and others as belonging to different nations may bring with it a very particular mental setting. We also wanted to examine more general mechanisms behind decisions to be generous to people from one's own or other groups. So we decided to dedicate a second study to experimentally induced groups: Participants were allocated either to Team Green or to Team Blue, and made decisions about sharing resources with their own or the other team.
The third challenge for us was deciding which countries to select for data collection. We based our decision on prior literature on generosity in different groups and aimed to get data from countries that had been sampled before. This would allow us to estimate whether our behavioral results were similar to what previous research has shown. Moreover, accessibility played a large role. We knew we needed data from about 100 participants per country for the statistical analyses we had planned. So for each country from which we wanted to collect data, we needed to make sure that the number of people who would be willing and able to take part in our study was sufficiently large. We got information about available participant pools from different online panels, and after lots of comparing and juggling the numbers, settled on 17 countries.
Certainly, there were more decisions to be made and trade-offs to consider. But these three were the toughest nuts to crack for us – so far!
In this project, we investigate how people decide to be prosocial: to help others, to be generous and to support each other. Sometimes, we all could use a helping hand. Often, getting a boost from others is what makes all the difference. In this blog post, we reflect on the help we got in the project so far.
Often, it's not just a lone-wolf genius researcher who implements a project. Often, it's a whole team, whose members bring their diverse skill sets to the table and put their heads together to get a project off the ground. This project is no different. Our core team is small, but brings together very different skill sets and an overlapping research interest. It has been a momentous experience to help each other out and to try to bring our – often diverging – perspectives together.
In addition to helping each other, we have relied on the support of two wonderful research assistants, who took over tasks such as drafting our website (hi!) and searching the literature to get an overview of prior work. Colleagues have supported us by sharing their code and research materials, giving advice and feedback on abstract and specific issues alike, and offering their expert opinions. The administrations at our institutions have been a tremendous support in keeping the project well organized. Folks sharing their programming issues and solutions online have helped us solve several coding problems. And of course, the funding institution made this project possible in the first place. This is surely not an exhaustive list of all the support we have had in putting our project into practice. But it serves as an example of how much research itself is built on cooperation.