Transparency in Experimental Political Science Research
With the increase in experimental studies in political science, there are concerns about research transparency, particularly around reporting results from studies that contradict or do not find evidence for proposed theories (commonly called “null results”). One of these concerns is p-hacking: the practice of running many statistical analyses until one turns out to support a theory. A publication bias towards statistically significant results (results that provide strong empirical evidence for a theory) has long encouraged p-hacking of data.
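To see why p-hacking inflates false positives, consider a toy simulation (my own illustration, not data from any study): if a researcher runs 20 independent tests on pure noise, the chance that at least one comes back “significant” at p < 0.05 is roughly 64%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_obs = 20, 100

# Run 20 tests on pure noise: there is no true effect anywhere.
p_values = []
for _ in range(n_tests):
    treatment = rng.normal(size=n_obs)  # simulated "treatment" group
    control = rng.normal(size=n_obs)    # simulated "control" group, same distribution
    p_values.append(stats.ttest_ind(treatment, control).pvalue)

# A p-hacker reports only the test that "worked".
print(f"Smallest of {n_tests} p-values: {min(p_values):.3f}")
# Analytically: P(at least one p < .05 across 20 null tests) = 1 - 0.95**20 ≈ 0.64
```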
To prevent p-hacking and encourage the publication of null results, political scientists have turned to pre-registering their experiments, be they online survey experiments or large-scale experiments conducted in the field. Many platforms, such as the Open Science Framework (OSF) and Evidence in Governance and Politics (EGAP), are used to pre-register experiments and make research data available. An additional advantage of pre-registering analyses and data is that other researchers can attempt to replicate the results of studies, furthering the goal of research transparency.
For researchers, pre-registering experiments can be helpful in thinking about the research question and theory, the observable implications and hypotheses that arise from the theory, and the ways in which those hypotheses can be tested. As a political scientist who does experimental research, I have found the process of pre-registration helpful for designing surveys and choosing appropriate methodologies to test my research questions. So, how do we pre-register a study, and why might that be useful? In this blog post, I first show how to pre-register a study on OSF and provide resources for filing a pre-registration. I then demonstrate research transparency in practice by distinguishing the analyses that I pre-registered in a recently completed study on misinformation from the exploratory analyses that I did not.
Research Question: Peer-to-Peer Correction of Misinformation
My co-author and I were interested in how to incentivize peer-to-peer correction of misinformation. Our research question was motivated by two facts:
- There is a growing distrust of media and government, particularly when it comes to technology.
- Though many interventions have been introduced to counter misinformation, these interventions are expensive and difficult to scale.
The most sustainable and scalable intervention, then, would be for users to correct one another when they encounter misinformation online.
We proposed the use of social norm nudges – suggesting that misinformation correction was both acceptable and the responsibility of social media users – to encourage peer-to-peer correction of misinformation. We used a source of political misinformation on climate change and a source of non-political misinformation on microwaving a penny to get a “mini-penny”. We pre-registered all our hypotheses, the variables we were interested in, and the proposed analyses on OSF prior to collecting and analyzing our data.
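Pre-registered analysis plans also typically fix the sample size in advance with a power analysis, so the number of respondents cannot be adjusted after peeking at the results. As a minimal sketch (the effect size here is illustrative, not the one from our study), statsmodels can compute the required sample size per group:

```python
from statsmodels.stats.power import TTestIndPower

# Respondents needed per group to detect a small standardized effect
# (Cohen's d = 0.2) with 80% power at the 5% significance level.
n_per_group = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Required respondents per group: {n_per_group:.0f}")  # ~394
```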
Pre-Registering Studies on OSF
To start the process of pre-registration, researchers can create a free OSF account and start a new project from their dashboard using the “Create new project” button shown in Figure 1.
Figure 1: Dashboard for OSF
I have created a new project called ‘D-Lab Blog Post’ to demonstrate how to create a new registration. Once a project is created, OSF takes us to the project home page shown in Figure 2 below. The home page allows the researcher to navigate across different tabs – to add contributors to the project, add files associated with the project, and, most importantly, create new registrations. To create a new registration, we click on the ‘Registrations’ tab highlighted in Figure 3.
Figure 2: Home page for a new OSF project
To start a new registration, click on the ‘New Registration’ button (Figure 3), which opens a window with the different types of registrations one can create (Figure 4). To help with choosing the right type, OSF provides a guide on the different registration types available on the platform. For this project, I choose the OSF Preregistration template.
Figure 3: OSF page to create a new registration
Figure 4: Pop-up window to choose registration type
Once a pre-registration has been created, the researcher fills out information related to their research, including the hypotheses, the research design, the sampling design for recruiting respondents, the variables that will be created and measured in the experiment, and the plan for analyzing the data (Figure 5). OSF provides a detailed guide on creating registrations that is helpful for researchers doing so for the first time.
Figure 5: New registration page on OSF
Pre-registering the Misinformation Study
My co-author and I pre-registered our study on peer-to-peer correction of misinformation, detailing the hypotheses we were interested in testing, the design of our experiment (the treatment and control groups), how we would choose respondents for our survey, and how we would analyze the data we collected through Qualtrics. One of the simplest tests in our study compared the average level of correction among respondents who received a social norm nudge (either the acceptability of correction or the responsibility to correct) with that among respondents who received no nudge. We pre-registered how we would conduct this comparison, including the relevant statistical tests and the hypotheses to which they corresponded.
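A minimal sketch of such a pre-registered comparison, assuming a hypothetical data frame in which `condition` records the assigned group and `correction` measures a respondent’s willingness to correct (these names are illustrative, not our actual Qualtrics variables):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # illustrative file name

# Pool the two nudge conditions and compare against the control group.
nudged = df.loc[df["condition"].isin(["acceptability", "responsibility"]), "correction"]
control = df.loc[df["condition"] == "control", "correction"]

# Two-sample t-test for a difference in mean correction.
result = stats.ttest_ind(nudged, control)
print(f"Difference in means: {nudged.mean() - control.mean():.3f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```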
Once we had the data, we conducted the pre-registered analysis and found that social norm nudges – whether about the acceptability of correction or the responsibility to correct – appeared to have no effect on the correction of misinformation. In one case, they decreased it (Figure 6). Because we had pre-registered our experiment and this analysis, we report these results even though they provide no support for our theory and, in one case, run counter to it.
Figure 6: Main results from the misinformation study
We conducted other pre-registered analyses as well, such as examining what influences people to correct misinformation when they see it. Based on existing research, we hypothesized that:
- Those who perceive a higher degree of harm from the spread of the misinformation will be more likely to correct it.
- Those who perceive a higher degree of futility from the correction of misinformation will be less likely to correct it.
- Those who believe they have expertise in the topic the misinformation is about will be more likely to correct it.
- Those who believe they will experience higher social sanctioning for correcting misinformation will be less likely to correct it.
We found support for all of these hypotheses, regardless of whether the misinformation was political or non-political (Figure 7).
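As a hedged sketch of how hypotheses like these might be tested with the regressions standard in political science (the variable names are hypothetical; our paper specifies the actual models), a logistic regression of correction on the four perceptions would look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # illustrative file name

# Hypothetical columns: 'corrected' is a 0/1 indicator of whether the
# respondent corrected the misinformation; the regressors are survey
# measures of the four perceptions listed above.
model = smf.logit(
    "corrected ~ perceived_harm + perceived_futility + expertise + social_sanctioning",
    data=df,
).fit()
print(model.summary())
# The hypotheses predict positive coefficients on perceived_harm and expertise,
# and negative coefficients on perceived_futility and social_sanctioning.
```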
Figure 7: Results for when people correct and don’t correct misinformation
Exploratory Analysis of Misinformation Data
Once we had our data, we presented our results to different audiences, who suggested additional ways to analyze them. Moreover, once we started digging in, we found interesting trends in the data ourselves! However, since we did not pre-register these analyses, our forthcoming paper includes them only in the appendix under exploratory analysis. Flagging certain analyses as exploratory because they were not pre-registered is transparent: it allows readers to interpret those results with appropriate caution.
Even though we did not pre-register some of our analyses, labeling them as “exploratory” gave us the opportunity to analyze our data with different methodologies – such as generalized random forests (a machine learning algorithm) alongside the regression analyses that are standard in political science research. The machine learning approach led us to discover that the treatment effects of social norm nudges may differ across certain subgroups of people. Variables for respondent age, gender, left-leaning political ideology, number of children, and employment status turned out to be important for what political scientists call “heterogeneous treatment effects.” This means, for instance, that women may respond differently to the social norm nudges than men. Though we did not investigate these heterogeneous treatment effects further in our own analysis, this exploratory finding from a generalized random forest provides an avenue for future researchers to explore in their surveys.
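For readers curious what such an exploratory analysis might look like in code, here is a rough sketch using the grf-style CausalForest estimator from the econml Python package (the R package grf is the reference implementation of generalized random forests; all variable and file names here are hypothetical, not our actual data):

```python
import numpy as np
import pandas as pd
from econml.grf import CausalForest

df = pd.read_csv("survey_responses.csv")  # illustrative file name

covariates = ["age", "gender", "left_ideology", "num_children", "employed"]
X = df[covariates].to_numpy()
T = (df["condition"] != "control").astype(int).to_numpy()  # any nudge vs. none
y = df["correction"].to_numpy()

# Fit a causal forest and estimate each respondent's conditional
# average treatment effect (CATE).
forest = CausalForest(n_estimators=500, random_state=0)
forest.fit(X, T, y)
cate = forest.predict(X).ravel()

# Exploratory check for heterogeneity: which covariates move with the
# estimated treatment effects?
for name, col in zip(covariates, X.T):
    print(f"{name}: correlation with CATE = {np.corrcoef(col, cate)[0, 1]:.2f}")
```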
Pre-registration of experimental analyses has slowly become the norm among political scientists, and top journals now publish replication materials along with papers to further encourage transparency in the discipline. Pre-registration can be an immensely helpful tool in the early stages of research, allowing researchers to think critically about their research questions and designs. It holds them accountable for conducting their research honestly and encourages the discipline at large to move away from publishing only statistically significant results, thereby expanding what we can learn from experimental research.