The Generative Counterfactual Framework: Harnessing AI to Explore Alternative Realities in Public Opinion Research

Abstract

This paper introduces the Generative Counterfactual Framework (GCF), a methodology for conducting counterfactual analysis in public opinion research with Large Language Models (LLMs). Building on Rubin's potential outcomes framework and recent advances in synthetic data validation, we present two studies that demonstrate the validity and utility of this approach for exploring alternative realities. First, we validate the GCF against a natural experiment surrounding Donald Trump's felony conviction, comparing synthetic opinion shifts with real-world survey data. The framework correctly predicts both the magnitude and direction of Republican opinion change regarding felon eligibility for the presidency, with open-weight models proving especially accurate in capturing these shifts. Second, we apply the framework to assess public opinion responses to hypothetical environmental policies in Maine, demonstrating its potential for anticipating public reactions to policy changes before implementation. Results reveal nuanced patterns of institutional approval and political attitudes across different policy interventions, highlighting the GCF's ability to capture complex opinion dynamics in counterfactual scenarios. Together, these studies establish the GCF as a promising tool for systematically exploring how public opinion might evolve under alternative realities, giving researchers and policymakers new means to anticipate and understand opinion shifts across diverse contexts. While further validation is needed, our findings suggest that properly constructed synthetic populations can reliably simulate complex opinion dynamics, opening new possibilities for counterfactual analysis in social science research.
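As a schematic illustration (not drawn from the paper itself), the validation logic of the first study can be sketched as checking whether a synthetic, LLM-generated opinion shift agrees with an observed survey shift in both direction and magnitude. All function names, tolerances, and numbers below are hypothetical placeholders, not the authors' actual procedure or data.

```python
# Hypothetical sketch of the Study 1 validation check: does the shift in a
# synthetic (LLM-simulated) panel match the shift observed in a real survey?
# Every value here is illustrative, not taken from the paper.

def opinion_shift(pre: float, post: float) -> float:
    """Percentage-point change in agreement, pre- vs. post-event."""
    return post - pre

def shifts_agree(synthetic: float, observed: float, tol: float = 5.0) -> bool:
    """True if both shifts point the same way and differ by at most `tol` points."""
    same_direction = (synthetic >= 0) == (observed >= 0)
    return same_direction and abs(synthetic - observed) <= tol

# Illustrative inputs: share (in percent) of respondents agreeing that a
# convicted felon should be eligible for the presidency, before and after
# the conviction, in a synthetic panel vs. a real survey sample.
synthetic_shift = opinion_shift(pre=17.0, post=58.0)  # +41 points
observed_shift = opinion_shift(pre=23.0, post=58.0)   # +35 points

print(shifts_agree(synthetic_shift, observed_shift, tol=10.0))  # → True
```

A study along these lines would, of course, also need uncertainty estimates and pre-registered tolerances; this fragment only captures the direction-and-magnitude comparison described in the abstract.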

Keywords: Counterfactual Analysis, Public Opinion, Large Language Models, Synthetic Populations, Causal Inference.
