With Eva Anduiza and Carol Galais
Experiments have become a common resource in political scientists' toolkit. At the same time, studies of political attitudes increasingly rely on online surveys. The affordability of online surveys has promoted the use of experiments, as well as the development of online panel surveys that track changes in political attitudes. Most online surveys rely on the sampling frames of commercial firms, which are, in turn, based on a pool of respondents (panelists) recruited by these firms. While most commercial firms maintain a large pool of respondents, it is fair to assume that most of their panelists take several surveys each year. This can be problematic inasmuch as experiments conducted in one survey (or a previous wave of a panel survey) might affect the attitudes measured in another survey (or a subsequent wave of a panel survey). In this paper we assess this threat using a 12-wave online panel survey fielded in Spain. The survey includes 18 experiments on a diverse set of topics, as well as multiple measures of political attitudes. This paper analyzes whether experiments conducted in a previous wave affect the attitudes measured in a subsequent wave. Moreover, we assess whether the type of experiment, the experiment's topic, the timing between waves, and the quality of debriefs moderate these effects. The results of this paper will be of interest not only to those using online panel surveys, but also to all those who rely on commercial firms to field online surveys.